How can you automatically run and remove a container in docker compose - docker

I'm looking to forward my ssh-agent into a container and found this:
https://github.com/nardeas/ssh-agent
The steps are the following:
0. Build
Navigate to the project directory and launch the following command to build the image:
docker build -t docker-ssh-agent:latest -f Dockerfile .
1. Run a long-lived container
docker run -d --name=ssh-agent docker-ssh-agent:latest
2. Add your ssh keys
Run a temporary container with volume mounted from host that includes your SSH keys. SSH key id_rsa will be added to ssh-agent (you can replace id_rsa with your key name):
docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/.ssh -it docker-ssh-agent:latest ssh-add /root/.ssh/id_rsa
The ssh-agent container is now ready to use.
3. Add ssh-agent socket to other container:
If you're using docker-compose this is how you forward the socket to a container:
volumes_from:
  - ssh-agent
environment:
  - SSH_AUTH_SOCK=/.ssh-agent/socket
In my compose file, I add step 1 like so:
services:
  ssh_agent:
    image: nardeas/ssh-agent
However, I don't know the equivalent compose-file syntax for step 2:
docker run --rm --volumes-from=ssh-agent -v ~/.ssh:/.ssh -it docker-ssh-agent:latest ssh-add /root/.ssh/id_rsa

You can do it as below -
docker-compose -f my-docker-compose.yml run --rm ssh_agent bash -c "ssh-add /root/.ssh/id_rsa"
Ref - https://docs.docker.com/compose/reference/run/

The docker-compose.yml file will be:
services:
  ssh_agent:
    image: docker-ssh-agent:latest
    command: ssh-add /root/.ssh/id_rsa
    volumes_from:
      - ssh-agent
    environment:
      - SSH_AUTH_SOCK=/.ssh-agent/socket
    volumes:
      - ~/.ssh:/.ssh
Then run the docker-compose command as below:
docker-compose -f docker-compose.yml run --rm ssh_agent
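For reference, steps 1 and 3 can live in one compose file: a long-lived agent service plus an application service that consumes the socket. This is only a sketch: the app service name and image are placeholders, and volumes_from requires compose file format v1/v2 (it was removed in v3).

```yaml
version: '2'
services:
  ssh-agent:
    image: docker-ssh-agent:latest
    container_name: ssh-agent

  app:
    image: your-app-image   # placeholder for the service that needs SSH access
    volumes_from:
      - ssh-agent
    environment:
      - SSH_AUTH_SOCK=/.ssh-agent/socket
```

Keys still have to be loaded once with the one-off docker-compose run --rm command.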

Related

how to get volume data using ssh connection

I'm trying to pull Docker volume data from a remote server. Right now the volume data is on my local machine, but I want to make it available to everyone, so I copied the volume contents to the server. How do I reference the path of the volume data on that server in the compose file? Something like:
docker-compose.yml
version: '3.2'
services:
  jira:
    container_name: jira-software_8.3
    image: atlassian/jira-software:8.3
    volumes:
      # How to get volume data using ssh connection
      - <user_name>@<server_ip>:<server_path>:<container_path>
    ports:
      - '8080:8080'
    environment:
      - 'JIRA_DATABASE_URL=postgresql://jira@postgresql/jira_db'
      - 'JIRA_DB_PASSWORD=docker'
volumes:
  jira_data:
    external: false
You cannot mount remote files and folders directly; Docker resolves bind mounts in the local context only, so a line like
- <user_name>@<server_ip>:<server_path>:<container_path>
will break your docker-compose file. The workaround is to copy the data at run time and then mount the local directory into the container:
scp -r -i yourkey.pem centos@host.example.com:/home/centos/backup ./app/ && docker run --rm -it -v $PWD/app:/app alpine ash -c "ls /app"
Update the docker-compose file accordingly:
volumes:
  - ./app/:/app/
then run the docker-compose up command like:
scp -r -i yourkey.pem centos@host.example.com:/home/centos/backup ./app/ && docker-compose up
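With that workaround in place, the jira service from the question might look like the sketch below. The container path /var/atlassian/application-data/jira is an assumption (it is where the atlassian/jira-software image keeps its data by default, but verify against the image docs):

```yaml
version: '3.2'
services:
  jira:
    container_name: jira-software_8.3
    image: atlassian/jira-software:8.3
    volumes:
      # backup copied into ./app via scp before docker-compose up
      - ./app/:/var/atlassian/application-data/jira
    ports:
      - '8080:8080'
    environment:
      - 'JIRA_DATABASE_URL=postgresql://jira@postgresql/jira_db'
      - 'JIRA_DB_PASSWORD=docker'
```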

Run docker-compose without installation

I'm trying to run docker-compose without installing it, using the docker/compose image (with docker run). So I tried this:
docker run -ti --rm -v $PWD:/XXX docker/compose:1.24.1 up -d
The problem is that I don't know the directory name inside the docker/compose container (XXX here) to use when mounting my current folder as a volume.
Any ideas...?
Thanks !
You can bind mount your local docker-compose.yaml to any place inside the container; just remember to tell docker-compose where it is with -f, like this:
docker run -ti --rm -v /var/run/docker.sock:/var/run/docker.sock -v ${PWD}:/code docker/compose:1.24.1 -f /code/docker-compose.yaml up -d
Also, don't forget to bind mount the host machine's docker.sock into the container, as above.
That XXX folder can be anything inside the container: in the -v option of docker run, the syntax is -v [host-directory]:[container-directory].
If you are trying to run a docker-compose up inside the container, then follow these steps:
Create a directory on the host: mkdir /root/test
Create a docker-compose.yaml file with the following contents:
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/code
  redis:
    image: redis
Run the docker run command to run docker-compose inside a container:
docker run -itd -v /var/run/docker.sock:/var/run/docker.sock -v /root/test/:/var/tmp/ docker/compose:1.24.1 -f /var/tmp/docker-compose.yaml up -d
NOTE: Here the /var/tmp directory inside the container contains the docker-compose.yaml file, so the -f option specifies the complete path of the yaml file.
Hope this helps.
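To avoid typing the long docker run line and the -f flag every time, one common trick is a small shell wrapper that mounts the current directory at the same path inside the container and makes it the working directory, so relative paths resolve as usual. A sketch (the image tag is the one used above and may need updating):

```shell
# Wrapper so "docker-compose ..." transparently runs the docker/compose image.
# Mounting $PWD at the same path inside the container and setting -w there
# lets docker-compose find docker-compose.yaml without an explicit -f flag.
docker-compose() {
  docker run --rm -ti \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v "$PWD:$PWD" \
    -w "$PWD" \
    docker/compose:1.24.1 "$@"
}
```

After sourcing this in your shell, docker-compose up -d works from any project directory.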

Executing logstash command outside of docker container

I want to execute a Logstash command to start importing to Elasticsearch without entering the ELK Docker container.
This doesn't work:
docker exec -it docker_elk_1 opt/logstash/bin/logstash -f /home/configs/logstash-logs.config
Although it shows
Successfully started Logstash API endpoint {:port=>9600}
it just exits right after.
However, this works if I enter the container first:
docker exec -it docker_elk_1 /bin/bash
Then:
opt/logstash/bin/logstash -f /home/configs/logstash-logs.config
Thanks
docker-compose.yml
elk:
  image: sebp/elk
  volumes:
    - ${PWD}:/home/configs
  ports:
    - "5601:5601"
    - "9200:9200"
    - "5044:5044"
I'm not sure I fully understand, but you can try:
docker exec -it docker_elk_1 /bin/bash -c 'opt/logstash/bin/logstash -f /home/configs/logstash-logs.config'
or, with the service name from the compose file:
docker-compose exec elk /bin/bash -c 'opt/logstash/bin/logstash -f /home/configs/logstash-logs.config'

Launching docker command from docker-compose v.3 file

I'm learning Docker and taking my first steps.
I need to 'refresh' the postgres image from a compose file so the db init scripts run again, as YOSIFKIT does through the shell here (https://github.com/docker-library/postgres/issues/193).
here is my Docker file:
FROM postgres:9.6.7
COPY docker-postgresql-9.6.7/prova.sql /docker-entrypoint-initdb.d/
and here is my compose file:
version: '3'
services:
  postgresql_rdbms:
    restart: always
    image: postgres-prova
    build:
      context: ../
      dockerfile: docker-postgresql-9.6.7/Dockerfile
    command: bash -c "docker run -it --rm postgres-prova ls -ln /docker-entrypoint-initdb.d && docker run -it --rm postgres-prova && postgres"
    environment:
      PG_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - /srv/docker/postgresql:/var/lib/postgresql
HOW can I put a command in a compose file that does "docker run -it --rm imageToReload"?
From what I've seen, "command:" in a compose file runs inside the container, but I want to operate ON the container, at a higher level (i.e. manage the container from the compose file, after the container is created).
Thank you very much
From what I understand, you want docker-compose to remove the container after every run so that the build runs each time and a fresh prova.sql file is copied into the image each time the service is brought up. The --force-recreate flag is probably what you need.
The command directive within the yaml file provides the command that is run inside the container, so it cannot manage the container itself.
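Concretely, that means dropping the command: line that tries to call docker run, and recreating the service from the CLI instead (docker-compose up -d --build --force-recreate postgresql_rdbms). A sketch of the reduced compose file follows; note that POSTGRES_PASSWORD is the variable the official postgres image actually reads (PG_PASSWORD in the question is likely a typo), and that scripts in /docker-entrypoint-initdb.d only run when the data directory is empty, so the mounted volume may need to be cleared for a true re-initialization:

```yaml
version: '3'
services:
  postgresql_rdbms:
    restart: always
    image: postgres-prova
    build:
      context: ../
      dockerfile: docker-postgresql-9.6.7/Dockerfile
    environment:
      POSTGRES_PASSWORD: postgres
    ports:
      - "5432:5432"
    volumes:
      - /srv/docker/postgresql:/var/lib/postgresql
```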

Execute command in linked docker container

Is there any way possible to exec a command from inside one docker container in a linked docker container?
I don't want to exec the command from the host.
As long as you have access to something like the Docker socket within your container, you can run any command inside any docker container; it doesn't matter whether or not it is linked. For example:
# run a container and link it to `other`
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
  --link other:other myimage bash -l
bash$ docker exec -it other echo hello
This works even if the link was not specified.
With docker-compose:
version: '2.1'
services:
  site:
    image: ubuntu
    container_name: test-site
    command: sleep 999999
  dkr:
    image: docker
    privileged: true
    working_dir: "/dkr"
    volumes:
      - ".:/dkr"
      - "/var/run/docker.sock:/var/run/docker.sock"
    command: docker ps -a
Then try:
docker-compose up -d site
docker-compose up dkr
result:
Attaching to tmp_dkr_1
dkr_1 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dkr_1 | 25e382142b2e docker "docker-entrypoint..." Less than a second ago Up Less than a second tmp_dkr_1
Example Project
https://github.com/reduardo7/docker-container-access
As "Abdullah Jibaly" said, you can do that, but there are some security issues you have to consider. There are also Docker SDKs you can use; for Python applications there is the Docker SDK for Python.
