I have a docker compose file which works fine with these commands:
docker-compose up
docker-compose exec web /bin/bash
The issue is that when I start the container with --service-ports, I can no longer connect to it with docker-compose exec.
docker-compose run --service-ports web
docker-compose exec web /bin/bash
ERROR: No container found for web_1
I also tried:
docker-compose run --service-ports --name container_web --rm web
docker-compose exec web /bin/bash
ERROR: No container found for web_1
# this works
docker exec -it container_web /bin/bash
root@b09618ad2840:/code/app#
Any help will be very much appreciated.
Thanks
According to the documentation, docker-compose run is used to "Run a one-off command on a service".
For this reason you cannot split your actions into run + exec: run creates a one-off container that docker-compose exec does not know about, which is why it reports No container found for web_1.
This will work:
docker-compose run --service-ports web /bin/bash
You should use docker-compose up to start your services instead of docker-compose run.
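If you still want a shell inside the long-running service, a minimal sketch of that workflow (using the same web service as above) is:
docker-compose up -d
# up publishes the ports defined in the compose file, so --service-ports is not needed here
docker-compose exec web /bin/bash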
Related
I use the Docker deployment of Airflow from their official docs page. When docker-compose changed to docker compose, it broke the airflow cli commands. It hangs on airflow-cli-1 starting when I try to run it from the shell script.
Here is Airflow's original docker-compose.yaml:
https://airflow.apache.org/docs/apache-airflow/stable/docker-compose.yaml
Here is the original command:
exec docker-compose run --rm airflow-cli
Here is the broken command with new docker compose:
exec docker compose run --rm airflow-cli
I can't figure out why the new docker compose command hangs on the airflow-cli service starting.
I am trying to write a simple script that runs docker-compose and, once the container is running,
enters the container and starts GlassFish with ./asadmin start-domain domain1.
Script for running docker-compose and the GlassFish container:
#!/bin/sh
docker-compose up -d
docker exec -it glassfishapp bash -c 'cd glassfish5/bin && ./asadmin start-domain domain1'
When I run docker-compose and enter the GlassFish container manually with docker exec -it <id> bash,
I can access my application deployed in the GlassFish image,
but when I use the script I can't access it.
docker-compose up -d will fork into the background per the -d option, which means your script will immediately try to connect to a container which is almost certainly not started yet.
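If you want to keep the script approach, a sketch of one fix (assuming the container is named glassfishapp, as in the script above) is to wait until the container reports a running state before exec'ing into it:
#!/bin/sh
docker-compose up -d
# wait until the glassfishapp container is actually running before exec'ing into it
until [ "$(docker inspect -f '{{.State.Running}}' glassfishapp 2>/dev/null)" = "true" ]; do
  sleep 1
done
docker exec -it glassfishapp bash -c 'cd glassfish5/bin && ./asadmin start-domain domain1'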
This is my docker compose file
version: '2'
# based off compose-sample-2, only we build nginx.conf into image
# uses sample site from https://startbootstrap.com/template-overviews/agency/
services:
  proxy:
    build:
      context: .
      dockerfile: nginx.Dockerfile
    ports:
      - '80:80'
  web:
    image: httpd
    volumes:
      - ./html:/usr/local/apache2/htdocs/
Now, how can I ssh into any of the services that get created when I run docker-compose up?
The standard mechanism is not to ssh into containers, but to connect to a container using docker exec. Given a container ID like 3cdb7385c127, you can connect (aka ssh into it) with:
docker exec -it 3cdb7385c127 sh
or, for a full login shell if you have bash available in the container:
docker exec -it 3cdb7385c127 bash -l
You can still ssh into a container if you want to, but you would need an ssh server installed, configured, and running inside it, and you would need access to the container's IP from outside or to redirect the container's port 22 onto some port on the host.
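A hedged sketch of that compose-level port redirect (the image name here is an assumption; it stands for any image that already runs an ssh server):
services:
  web:
    image: my-image-with-sshd   # assumed image with sshd installed, configured and running
    ports:
      - '2222:22'               # redirect the container's :22 to port 2222 on the host
You could then connect with ssh -p 2222 <user>@<docker-host>.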
The easiest way is to run the docker-compose exec command:
docker-compose exec web /bin/bash
With the latest version of Docker, since "Docker Compose is now in the Docker CLI", you can do:
docker compose exec web /bin/bash
If you do want to use the pure docker command, you can do:
web=`docker container ls | grep web | awk '{print $1}'`
docker container exec -it $web /bin/bash
or in a single line:
docker container exec -it `docker container ls | grep web | awk '{print $1}'` /bin/bash
in which we find the container id first, then run the familiar docker container exec -it command.
As mentioned in the question, the service containers have already been created by running docker-compose up. Therefore I don't think docker-compose run is an appropriate answer, since it starts a new container in my test.
Run docker ps to get the name and Docker ID of your container.
Run docker exec -it <docker_id> /bin/bash
This will give you a bash prompt inside the container.
If you specify container_name in your docker-compose.yaml file, you can use that to log in with the docker exec command.
Example:
django:
  container_name: django
  more_stuff: ...
Then, to log in to the running container:
docker exec -it django /bin/bash
This works better for me, as I don't need to look up the ID of the currently running container and it's easy to remember.
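For reference, a minimal self-contained sketch of this approach (the image and command are assumptions, just to have something runnable; they are not from the original compose file):
services:
  django:
    container_name: django
    image: python:3.11        # assumed image
    command: sleep infinity   # keeps the container alive so it can be exec'd into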
While using the docker command also works, the easiest way to do this without having to find out the container ID is to use the docker-compose run subcommand, for example:
service="web"
docker-compose run $service /bin/bash
How can I do something like:
docker exec -it 06a0076fb4c0 install-smt
But use the name of the container instead
docker exec -it container/container install-smt
I am running a build on a CI server, so I cannot manually input the container ID.
How can I achieve this?
Yes, you can do this by naming the container with --name. Note that your command with container/container is likely referencing an image name and not the container.
➜ ~ docker run --name my_nginx -p 80:80 -d nginx
d122acc37d5bc2a5e03bdb836ca7b9c69670de79063db995bfd6f66b9addfcac
➜ ~ docker exec my_nginx hostname
d122acc37d5b
Although it won't save any typing, you can do something like this if you want to use the image name instead of giving the container a name:
docker run -dit debian  # -dit keeps the container running so it can be exec'd into
docker exec -it `docker ps -q --filter ancestor=debian` bash
This will only work if you're only running one instance of the debian image.
It does help if you're constantly amending the image when working on a new Dockerfile, and wanting to repeatedly run the same command in each new container to check your changes worked as expected.
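A sketch of that rebuild-and-retest loop (the image tag myimage is just an assumed example):
docker build -t myimage .
docker run -dit myimage
docker exec -it `docker ps -q --filter ancestor=myimage` bash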
I was able to fix this by setting a container name in the docker-compose file and running docker exec -it with the name from the file.
@Héctor (thanks)
These steps worked for me:
This will start a container named mytapir with a bash process keeping it running in the background:
docker run -d --name mytapir -it wsmoses/tapir-built:latest bash
Once docker ps shows the container is running:
docker exec -it mytapir /bin/bash
will spawn a shell into the existing container named mytapir.
You can stop the container as usual with docker stop mytapir,
and start it again via docker start mytapir if it is not running
(check via docker ps -a)
I have a docker container, for instance my_container.
I want to run a long-living script in my container without killing it when I leave the shell.
I would like to do something like this:
docker exec -ti my_container /bin/bash
And then
screen -S myScreen
Then execute my script in screen and exit the terminal.
Unfortunately, I cannot execute screen in the docker terminal.
This may help you:
docker exec -i -t c2ab7ae71ab8 sh -c "exec >/dev/tty 2>/dev/tty </dev/tty && /usr/bin/screen -r nmsrv -s /bin/bash"
The only way I can think of is to run your container with your script at the start:
docker run -d --name my_container nginx /etc/init.d/myscript
If you have to run the script directly in an already running container, you can do that with exec:
docker exec my_container /path/to/some_script.sh
or if you want to run it through PHP:
docker exec my_container php /path/to/some_script.php
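If the script needs to keep running after you leave the shell (as in the question), docker exec also has a detached mode; a sketch with the same hypothetical script path:
docker exec -d my_container /path/to/some_script.sh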
That said, you typically don't want to run scripts in already running containers, but rather run a new container from the same image as the running one. You can do that with a standard docker run:
docker run -a stdout --rm some_repo/some_image:some_tag php /path/to/some_script.php