This is my docker-compose file:
version: '2'

# based off compose-sample-2, only we build nginx.conf into image
# uses sample site from https://startbootstrap.com/template-overviews/agency/

services:
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    ports:
      - '80:80'
  web:
    image: httpd
    volumes:
      - ./html:/usr/local/apache2/htdocs/
Now, can I SSH into any of the services that get created when I run docker-compose up?
The standard mechanism is not to SSH into containers, but to connect to a container using docker exec. Given a container ID like 3cdb7385c127, you can connect (the "ssh into it" equivalent) with:
docker exec -it 3cdb7385c127 sh
or, for a full login shell, if you have bash available in the container:
docker exec -it 3cdb7385c127 bash -l
You can still SSH into a container if you want to, but you would need an SSH server installed, configured, and running inside it, and you would need access to the container's IP from outside, or to redirect the container's port 22 onto some port on the host.
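For illustration, a minimal sketch of that approach, assuming a hypothetical image my-ssh-image that already has an SSH server installed and running:

# publish the container's port 22 on host port 2222
docker run -d -p 2222:22 my-ssh-image
# then connect from the host
ssh -p 2222 root@localhost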
The easiest way is to run the docker-compose exec command:
docker-compose exec web /bin/bash
With the latest version of Docker, since "Docker Compose is now in the Docker CLI", you can do:
docker compose exec web /bin/bash
If you do want to use the pure docker command, you can do:
web=`docker container ls |grep web |awk '{print \$1}'`
docker container exec -it $web /bin/bash
or in a single line:
docker container exec -it `docker container ls |grep web |awk '{print \$1}'` /bin/bash
in which we find the container ID first, then run the familiar docker container exec -it command.
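A variant of the same idea: if docker-compose is available, it can resolve the service name to a container ID for you, which avoids the grep:

docker container exec -it $(docker-compose ps -q web) /bin/bash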
As mentioned in the question, the service containers have already been created by running docker-compose up. Therefore I don't think docker-compose run is the appropriate answer, since it starts a new container (it did in my test).
Do docker ps to get the names and container IDs of your running containers.
Then do docker exec -it <docker_id> /bin/bash.
This will give you a bash prompt inside the container.
If you specify container_name in your docker-compose.yaml file, you can use that name to log in with the docker exec command.
Example:
django:
  container_name: django
  more_stuff: ...
Then, to log in to the running container:
docker exec -it django /bin/bash
This works better for me as I don't need to check the current running ID and it's easy for me to remember.
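For reference, a self-contained sketch of such a service definition (the image and command here are just placeholders for illustration):

services:
  django:
    container_name: django
    image: python:3.11
    command: sleep infinity

With this, docker exec -it django /bin/bash works regardless of the name prefix docker-compose would otherwise generate.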
While using the docker command also works, the easiest way to do this without finding out the container's ID is the docker-compose run subcommand, for example:
service="web"
docker-compose run $service /bin/bash
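Note that docker-compose run starts a new container for the service rather than attaching to one started by docker-compose up; adding --rm removes that container again when the shell exits:

docker-compose run --rm $service /bin/bash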
Related
I have a docker compose file which works fine with these commands:
docker-compose up
docker-compose exec web /bin/bash
The issue I have is that when I start the container with --service-ports, I can no longer connect to it with docker-compose:
docker-compose run --service-ports web
docker-compose exec web /bin/bash
ERROR: No container found for web_1
I also tried:
docker-compose run --service-ports --name container_web --rm web
docker-compose exec web /bin/bash
ERROR: No container found for web_1
# this works
docker exec -it container_web /bin/bash
root@b09618ad2840:/code/app#
Any help will be very much appreciated.
Thanks
According to the documentation, docker-compose run is used to "Run a one-off command on a service".
For this reason you cannot split your actions into run + exec.
This will work:
docker-compose run --service-ports web /bin/bash
You should use docker-compose up to start your services instead of docker-compose run.
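So the flow becomes: start the services in the background, then attach a shell to the running container:

docker-compose up -d
docker-compose exec web /bin/bash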
How can I do something like:
docker exec -it 06a0076fb4c0 install-smt
but using the name of the container instead:
docker exec -it container/container install-smt
I am running a build on a CI server, so I cannot manually input the container ID.
How can I achieve this?
Yes, you can do this by naming the container with --name. Note that your command with container/container is likely referencing an image name and not the container.
➜ ~ docker run --name my_nginx -p 80:80 -d nginx
d122acc37d5bc2a5e03bdb836ca7b9c69670de79063db995bfd6f66b9addfcac
➜ ~ docker exec my_nginx hostname
d122acc37d5b
Although it won't save any typing, you can do something like this if you want to use the image name instead of giving the container a name:
docker run debian
docker exec -it `docker ps -q --filter ancestor=debian` bash
This will only work if you're only running one instance of the debian image.
It does help if you're constantly amending the image while working on a new Dockerfile and want to repeatedly run the same command in each new container to check that your changes worked as expected.
I was able to fix this by setting a container name in the docker-compose file, and running docker exec -it with the name from the file.
@Héctor (thanks)
These steps worked for me:
This will start a container named mytapir running in the background:
docker run -d --name mytapir -it wsmoses/tapir-built:latest bash
Once docker ps shows the container is running:
docker exec -it mytapir /bin/bash
will spawn a shell into the existing container named mytapir.
You can stop the container as usual with docker stop mytapir, and start it again via docker start mytapir if it is not running (check via docker ps -a).
I don't want to install Postgres locally, but since I have it in my Docker container, I'd like to be able to run its commands and utilities, like pg_dump myschema > schema.sql.
How can I run commands inside running containers?
docker exec -it <container> <cmd>
e.g.
docker exec -it your-container /bin/bash
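For the pg_dump example from the question, a sketch with placeholder names (a container called my-postgres and a database called myschema); the redirect runs on the host, so schema.sql lands outside the container, and -t is dropped so the TTY does not mangle the output:

docker exec my-postgres pg_dump -U postgres myschema > schema.sql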
There are different options:

1. You can copy files into the container using the docker cp command. Copy the required files in, then go inside the container and run the command (see the sketch after this list).
2. Make some modifications to the Dockerfile used for image creation (it is actually really simple to create a Dockerfile). With the EXPOSE instruction you can expose a port; after that, docker run --publish (i.e. the -p option) publishes the container's port(s) to the host. Then you can access Postgres from outside and run scripts externally over that connection.

For the first option you need to go inside the container: first list the running containers with docker ps, then use docker exec -it container_name /bin/bash.
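A sketch of the docker cp route, again with placeholder names:

# copy a script from the host into the container
docker cp ./dump.sh my-postgres:/tmp/dump.sh
# then run it inside the container
docker exec -it my-postgres bash /tmp/dump.sh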
I am trying to write to /etc/hosts within a Docker container when I perform the run command, but when I shell into the container and check the hosts file, nothing has been written.
What is the correct command to do this?
I am running my image with the following command:
docker run -it -p 3000:3000 <imageName> bash echo 192.168.56.101 mypath.dev >> /etc/hosts
Use the "add-host" parameter when running the container:
docker run -it --add-host db-static:86.75.30.9 ubuntu cat /etc/hosts
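If the container is started via docker-compose, the equivalent is the extra_hosts key in the service definition (the service name here is illustrative):

services:
  web:
    extra_hosts:
      - "db-static:86.75.30.9"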
Is there any way possible to exec a command from inside one Docker container in the linked Docker container?
I don't want to exec command from the host.
As long as you have access to something like the Docker socket within your container, you can run any command inside any Docker container, no matter whether or not it is linked. For example:
# run a container and link it to `other`
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock \
    --link other:other myimage bash -l

bash$ docker exec -it other echo hello
This works even if the link was not specified.
With docker-compose:
version: '2.1'

services:
  site:
    image: ubuntu
    container_name: test-site
    command: sleep 999999

  dkr:
    image: docker
    privileged: true
    working_dir: "/dkr"
    volumes:
      - ".:/dkr"
      - "/var/run/docker.sock:/var/run/docker.sock"
    command: docker ps -a
Then try:
docker-compose up -d site
docker-compose up dkr
result:
Attaching to tmp_dkr_1
dkr_1 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dkr_1 | 25e382142b2e docker "docker-entrypoint..." Less than a second ago Up Less than a second tmp_dkr_1
Example Project
https://github.com/reduardo7/docker-container-access
As "Abdullah Jibaly" said you can do that but there is some security issues you have to consider, also there is sdk docker to use, and for python applications can use Docker SDK for Python