Dynamically get a running container name created by docker-compose

When I run my docker-compose, it creates a web container and postgres container.
I want to manually trigger my Django tests to run, via something like
docker-compose run web python manage.py test
The problem with this is that it creates a new container (requiring new migrations to be applied, housekeeping work, etc.).
The option I'm leaning towards is doing something like
docker exec -i -t <containerid> python manage.py test
This introduces a new issue which is that I must run docker ps first to grab the container name. The whole point of this is to automatically run the tests for each build so it has to be automated, manually running docker ps is not a solution.
So is there a way to dynamically grab the container id or is there a better way to do this? This would not be an issue if you could assign container names in docker-compose

While an accepted answer was provided, the answer itself is not really related to the title of this question:
Dynamically get a running container name created by docker-compose
To dynamically retrieve the name of a container run by docker-compose, you can execute the following command (cut strips the leading slash that docker inspect prints):
docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-
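For example, the retrieved name (or the raw ID) can be fed straight into docker exec; a minimal sketch, assuming the service is named web as in the question:
docker exec -it "$(docker inspect -f '{{.Name}}' $(docker-compose ps -q web) | cut -c2-)" python manage.py test
Or, since docker exec also accepts the container ID, you can skip the name lookup entirely:
docker exec -it $(docker-compose ps -q web) python manage.py test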

Just use docker-compose exec. It will execute in the already-running container instead of starting a new one.
docker-compose exec web python manage.py test

You can assign a name to a container using container_name option on docker-compose.yml file.
container_name: my_container_name
Then you can easily run commands in that container using:
docker exec my_container_name python manage.py test
For more information about docker-compose options, visit the official documentation.
https://docs.docker.com/compose/compose-file/

Use docker-compose ps -q to find the ID of the container and run the command in that:
docker exec -it $(docker-compose ps -q) sh
PS: This will NOT work if you have multiple containers in the docker-compose file.
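If the file defines several services, the same idea still works when scoped to one service; a sketch assuming the service is named web:
docker exec -it $(docker-compose ps -q web) sh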

Related

docker compose exec changing my parameter value [duplicate]

I was trying to open a second terminal in a container with docker-compose.
First run the container with
docker-compose run my-centos bash
And when I try to open a second terminal
docker-compose exec my-centos bash
I get the message
ERROR: No container found for my_centos_1
If I look up the name of the running container, I get:
CONTAINER ID   IMAGE     COMMAND   CREATED         STATUS         PORTS   NAMES
34a95b44f0a2   centos6   "bash"    9 minutes ago   Up 9 minutes           docker_my-centos_run_1
Why does docker-compose exec look for docker_my_centos_1 and not docker_my-centos_run_1?
docker-compose is meant to run multi-container applications and is supposed to be used with docker-compose up. When you use docker-compose run, you create a one-off container (note the run suffix in its name) that's not really meant for normal use.
Since docker-compose is just a wrapper around docker, you can still access this special container via the normal docker command:
docker exec docker_my-centos_run_1 bash
Otherwise, I'd suggest starting your container with docker-compose up. That way you can run the second bash in the way that you specified:
docker-compose exec my-centos bash
Note: I don't know if you can attach a TTY directly with docker-compose up, so you might need to run an extra docker-compose exec my-centos bash to get two TTYs.
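To illustrate the difference with the names from this question (project directory docker, service my-centos):
docker-compose up -d my-centos                 # creates docker_my-centos_1
docker-compose exec my-centos bash             # targets docker_my-centos_1
docker-compose run my-centos bash              # creates docker_my-centos_run_1 instead
docker exec -it docker_my-centos_run_1 bash    # reach the one-off container directly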
You need to run docker-compose up -d first, then try your commands.
I think you may be having trouble with the project name, because you're not telling Docker which project you're referring to.
I was facing this problem and it can be fixed by adding the project name:
docker-compose -p myprojectname ps
or
docker-compose -p myprojectname exec php_service composer install
This post explain it: https://codereviewvideos.com/blog/how-i-fixed-docker-compose-exec-error-no-container-found-for/
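An equivalent approach, if you'd rather not repeat the -p flag on every call, is setting the COMPOSE_PROJECT_NAME environment variable (myprojectname and php_service are the example names from above):
export COMPOSE_PROJECT_NAME=myprojectname
docker-compose ps
docker-compose exec php_service composer install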
I solved this issue by first running
sudo systemctl restart docker
If you are using docker-compose, run this command:
docker-compose run web python manage.py makemigrations

Plone on Docker always starts from scratch

I'm trying to develop a Plone project with Docker. I have used the official image of Plone 5.2.0; the image builds and runs perfectly with:
$ docker build -t plone-5.2.0-official-img .
$ docker run -p 8080:8080 -it plone-5.2.0-official-cntr
But Plone restarts each time I run the container, asking me to create the project from scratch.
Could anybody help me with this?
Thanks in advance.
You can also use a volume for data like:
$ docker run -p 8080:8080 -it -v plone-data:/data plone-5.2.0-official-cntr
The next time you run a new container, it will reuse the previous data.
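A minimal sketch of the round trip, reusing the plone-data volume name from the command above:
docker run -p 8080:8080 -it -v plone-data:/data plone-5.2.0-official-cntr
# ... create the Plone site, then stop the container ...
docker run -p 8080:8080 -it -v plone-data:/data plone-5.2.0-official-cntr
# the second container mounts the same volume, so the site is still there
docker volume inspect plone-data   # shows where the data lives on the host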
If this helps:
Volumes are the Docker way to persist data; you can read up on them in the Docker documentation.
When running the container, just add a -v option and specify where to store your data:
$ docker run -p <host_port>:<container_port> -it -v <host_path>:<container_path> <image>
This is expected behavior, because docker run starts a new container, which doesn't have the state from your previous container.
You can use docker start CONTAINER, which restarts the existing container with the state from its previous run:
https://docs.docker.com/engine/reference/commandline/start/
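A short sketch of that flow (the container name or ID is whatever docker ps -a reports):
docker ps -a                     # find the name or ID of the stopped container
docker start -ai <container>     # -a attaches to its output, -i keeps stdin open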
A more common approach is to use docker-compose.yml and docker-compose up -d, which will, in most cases, reuse previous state.
https://docs.docker.com/compose/gettingstarted/

Running Command "docker-compose run web ..." creates a new container in Kitematic container list

Each time I run the command "docker-compose run web ..." it results in another container being added to Kitematic, so that I have a list of containers like "image-name_web_run_1" and so on with each run of docker-compose.
Is this expected behaviour? If so, how do I work around it?
It is expected. You can use docker-compose run --rm to have the container removed when it exits.
You can clean up these old containers using docker-compose down or docker-compose rm -a.
Do you mean new container? That is the expected behavior for docker-compose run, see: https://docs.docker.com/compose/reference/run/
Did you mean to use docker-compose exec?
e.g., docker-compose exec web /bin/sh
See: https://docs.docker.com/compose/reference/exec/
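Side by side, assuming a service named web as in the question:
docker-compose run --rm web python manage.py test   # one-off container, removed on exit
docker-compose exec web python manage.py test       # runs inside the already-running container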

How to start a stopped Docker container with a different command?

I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
Steps above just start a stopped container with the same filesystem state. That is great for a quick investigation; but environment variables, network configuration, attached volumes and other stuff is not inherited. You should specify all these arguments explicitly.
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
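As a follow-up sketch of re-specifying that state explicitly (the env file and volume below are hypothetical placeholders for whatever the original container was started with):
docker commit "$CONTAINER_ID" user/test_image
# --env-file and -v values are hypothetical; substitute the env vars and volumes
# the original container actually used
docker run -ti --entrypoint=sh --env-file ./container.env -v /host/data:/data user/test_image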
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it, you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then, at the top of this file, add these lines to entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
    echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
    cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -it container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
I took @Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash <service_name>
(for convenience, put your env vars and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the arguments
It seems Docker can't change the entrypoint after a container has been created, but you can set a custom entrypoint and change its code before the next restart.
For example you run a container like this:
docker run --name c --entrypoint /boot -v "$PWD/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need to restart c with a different command, just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My Problem:
I started a container with docker run <IMAGE_NAME>
and then added some files to this container.
Then I closed the container and tried to start it again with the same command as above,
but when I checked, the new files were missing.
When I ran docker ps -a I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was being created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the ID of your stopped container
docker container start <CONTAINER_ID> to start the existing container
Then you can continue from where you left off, e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it, as sketched below.
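The same steps as one sequence (IDs and the image tag are placeholders):
docker ps -a                                    # find the stopped container's ID
docker container start <CONTAINER_ID>           # restart the existing container
docker exec -it <CONTAINER_ID> /bin/bash        # continue where you left off
docker commit <CONTAINER_ID> myimage:snapshot   # optional: bake the changes into a new image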
I have found a simple command
docker start -a [container_name]
This will do the trick
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a MariaDB container that was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve the problem was:
copy out the docker-entrypoint.sh from the container to the local file system (docker cp)
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case)
copy the edited file back into the docker container, overwriting the existing entrypoint script.
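As a sketch of that round trip (the container name is a placeholder; /usr/local/bin/docker-entrypoint.sh is where the official MariaDB image keeps its entrypoint, but verify the path for your image):
docker cp mariadb_container:/usr/local/bin/docker-entrypoint.sh .
# edit locally: add --innodb-force-recovery=1 to the mysqld invocation
docker cp docker-entrypoint.sh mariadb_container:/usr/local/bin/docker-entrypoint.sh
docker start mariadb_container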
To me Docker always leaves the impression that it was created for a hobby system, it works well for that.
If something fails or doesn't work, don't expect to have a professional solution.
That said: Docker not only does NOT support such basic administrative tasks, it tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You can now see the changed file from your container inside a hex-named folder's diff directory
cd <hex-named-folder>/diff
Create an entrypoint.sh there (make sure to back up an existing one if it's there):
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
(finish the cat with Ctrl+C)
chmod +x entrypoint.sh
docker stop <container>
docker start <container>
You now have your container running an endless loop instead of its original entrypoint, so you can exec bash into it, or do whatever you need.
When finished, stop the container and remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
Lots of discussion surrounding this so I thought I would add one more which I did not immediately see listed above:
If the full path to the entrypoint for the container is known (or discoverable via inspection), it can be copied in and out of the stopped container using docker cp. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When finished, another docker cp of the original entrypoint back into the container and a re-restart should do the trick.
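A sketch of that round trip, with hypothetical names and paths:
docker cp broken_container:/entrypoint.sh ./entrypoint.orig.sh    # save the original
cp entrypoint.orig.sh entrypoint.debug.sh
# edit entrypoint.debug.sh to exec a shell or a long sleep instead of the real command
docker cp entrypoint.debug.sh broken_container:/entrypoint.sh
docker start broken_container
docker exec -it broken_container bash                             # fix whatever broke
docker cp entrypoint.orig.sh broken_container:/entrypoint.sh      # restore the original
docker restart broken_container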
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)

Is it possible to start a shell session in a running container (without ssh)

I was naively expecting this command to run a bash shell in a running container:
docker run "id of running container" /bin/bash
it looks like it's not possible, I get the error :
2013/07/27 20:00:24 Internal server error: 404 trying to fetch remote history for 27d757283842
So, if I want to run a bash shell in a running container (e.g. for diagnostic purposes),
do I have to run an SSH server in it and log in via ssh?
With docker 1.3, there is a new command docker exec. This allows you to enter a running container:
docker exec -it "id of running container" bash
EDIT: Now you can use docker exec -it "id of running container" bash (doc)
Previously, the answer to this question was:
If you really must and you are in a debug environment, you can do this: sudo lxc-attach -n <ID>
Note that the id needs to be the full one (docker ps -notrunc).
However, I strongly recommend against this.
notice: -notrunc is deprecated, it will be replaced by --no-trunc soon.
Just do
docker attach container_name
As mentioned in the comments, to detach from the container without stopping it, press Ctrl+p then Ctrl+q.
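Unlike docker exec, attach connects you to the container's main process (PID 1) rather than starting a new one, which is why detaching carefully matters:
docker attach container_name   # you are now on the container's PID 1
# Ctrl+p then Ctrl+q detaches and leaves the container running;
# typing exit would stop the container if PID 1 is a shell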
Since things are a-changing, at the moment the recommended way of accessing a running container is using nsenter.
You can find more information on this github repository. But in general you can use nsenter like this:
PID=$(docker inspect --format {{.State.Pid}} <container_name_or_ID>)
nsenter --target $PID --mount --uts --ipc --net --pid
or you can use the wrapper docker-enter:
docker-enter <container_name_or_ID>
A nice explanation on the topic can be found on Jérôme Petazzoni's blog entry:
Why you don't need to run sshd in your docker containers
First thing: you cannot run
docker run "existing container" command
because this command expects an image, not a container, and it would in any case result in a new container being spawned (so not the one you wanted to look at).
I agree with the fact that with docker we should push ourselves to think in a different way (so you should find ways so that you don't need to log onto the container), but I still find it useful and this is how I work around it.
I run my commands through supervisor in daemon mode.
Then I execute what I call docker_loop.sh
The content is pretty much this:
#!/bin/bash
/usr/bin/supervisord
/usr/bin/supervisorctl
while ( true )
do
    echo "Detach with Ctrl-p Ctrl-q. Dropping to shell"
    sleep 1
    /bin/bash
done
What it does is that it allows you to "attach" to the container and be presented with the supervisorctl interface to stop/start/restart and check logs.
If that should not suffice, you can Ctrl+D and you will drop into a shell that will allow you to have a peek around as if it was a normal system.
PLEASE DO ALSO TAKE INTO ACCOUNT that this system is not as secure as having the container without a shell, so take all the necessary steps to secure your container.
Keep an eye on this pull request: https://github.com/docker/docker/pull/7409
Which implements the forthcoming docker exec <container_id> <command> utility. When this is available it should be possible to e.g. start and stop the ssh service inside a running container.
There is also nsinit to do this: "nsinit provides a handy way to access a shell inside a running container's namespace", but it looks difficult to get running.
https://gist.github.com/ubergarm/ed42ebbea293350c30a6
You can use
docker exec -it <container_name> bash
Here's my solution
In the Dockerfile:
# ...
RUN mkdir -p /opt
ADD initd.sh /opt/
RUN chmod +x /opt/initd.sh
ENTRYPOINT ["/opt/initd.sh"]
In the initd.sh file
#!/bin/bash
...
/etc/init.d/gearman-job-server start
/etc/init.d/supervisor start
#very important!!!
/bin/bash
After the image is built, you have two options, exec or attach:
Use exec (preferred) and run:
docker run --name $CONTAINER_NAME -dt $IMAGE_NAME
then
docker exec -it $CONTAINER_NAME /bin/bash
and use CTRL + D to detach
Use attach and run:
docker run --name $CONTAINER_NAME -dit $IMAGE_NAME
then
docker attach $CONTAINER_NAME
and use CTRL + P and CTRL + Q to detach
Note: The difference between options is in parameter -i
There is actually a way to have a shell in the container.
Assume your /root/run.sh launches the process, process manager (supervisor), or whatever.
Create /root/runme.sh with some gnu-screen tricks:
# Spawn a screen with two tabs
screen -AdmS 'main' /root/run.sh
screen -S 'main' -X screen bash -l
screen -r 'main'
Now, you have your daemons in tab 0, and an interactive shell in tab 1. docker attach at any time to see what's happening inside the container.
Another advice is to create a "development bundle" image on top of the production image with all the necessary tools, including this screen trick.
There are two ways.
With attach
$ sudo docker attach 665b4a1e17b6 #by ID
With exec
$ sudo docker exec -i -t 665b4a1e17b6 bash #by ID
If the goal is to check on the application's logs, this post shows starting up tomcat and tailing the log as part of CMD. The tomcat log is available on the host using 'docker logs containerid'.
http://blog.trifork.com/2013/08/15/using-docker-to-efficiently-create-multiple-tomcat-instances/
It's useful to assign a name when running a container; then you don't need to refer to the container ID.
docker run --name container_name yourimage
docker exec -it container_name bash
First, get the container ID of the desired container with
docker ps
you will get something like this:
CONTAINER ID   IMAGE                COMMAND           CREATED          STATUS          PORTS                    NAMES
3ac548b6b315   frontend_react-web   "npm run start"   48 seconds ago   Up 47 seconds   0.0.0.0:3000->3000/tcp   frontend_react-web_1
Now copy this container ID and run the following command:
docker exec -it <container_id> sh
For example: docker exec -it 3ac548b6b315 sh
Maybe you were misled, like myself, into thinking in terms of VMs when developing containers. My advice: try not to.
Containers are just like any other process. Indeed you might want to "attach" to them for debugging purposes (think of /proc/<pid>/env or strace -p <pid>) but that's a very special case.
Normally you just "run" the process, so if you want to modify the configuration or read the logs, just create a new container and make sure you write the logs outside of it by sharing directories, writing to stdout (so docker logs works) or something like that.
For debugging purposes you might want to start a shell, then your code, then press CTRL-p + CTRL-q to leave the shell intact. This way you can reattach using:
docker attach <container_id>
If you want to debug the container because it's doing something you haven't expect it to do, try to debug it: https://serverfault.com/questions/596994/how-can-i-debug-a-docker-container-initialization
No. This is not possible. Use something like supervisord to get an ssh server if that's needed. Although, I definitely question the need.
