docker: Only execute command when container is running

I want to create a backup of a mounted volume in my Docker container.
This is the command in my Dockerfile:
RUN tar -cvpzf test.tar -C /test/ .
But the problem is that it can only be executed after my volume is mounted (because the volume will be mounted to /test/).
So this command needs to run after the container starts, not while the image is being built. How can I do this?
Thanks

Once your container is running, assuming the container has tar, you can do exactly what you want with:
docker exec [options] nameofcontainer tar -cvpzf test.tar -C /test/ .
You can get the names of running containers using docker ps. For options, you may want to use -ti so that you can see the output.
You could also build the image with a custom ENTRYPOINT or CMD that both starts the container's primary process and runs your backup script, along with any other tasks that need to happen at startup.
The official mysql container does something like this, with the docker-entrypoint.sh script.
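A minimal sketch of that pattern, assuming a wrapper script named entrypoint.sh (the backup path and the main command are placeholders):
#!/bin/sh
# Back up the mounted volume first, then hand off to the container's
# main process so it runs as PID 1.
tar -cvpzf /test.tar -C /test/ .
exec "$@"
With the corresponding Dockerfile lines (make sure the script is executable):
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["your-main-command"]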

Related

How to copy files from one docker service to another, inside of docker bash

I am trying to copy a file from one docker-compose service to another while in the service's bash environment, but I cannot seem to figure out how to do it.
Can anybody provide me with an idea?
Here is the command I am attempting to run:
docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/
The error is simply:
bash: docker: command not found
There's no way to do that by default. There are a few things you could do to enable that behavior.
The easiest solution is just to run docker cp on the host (docker cp from the first container to the host, then docker cp from the host to the second container).
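For example, assuming the backup lives at /db_backups/latest.sqlc inside a container named db_1 (both names are placeholders):
docker cp db_1:/db_backups/latest.sqlc /tmp/latest.sqlc
docker cp /tmp/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/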
If it all has to be done inside the container, the next easiest solution is probably to use a shared volume:
docker run -v shared:/shared --name containerA ...
docker run -v shared:/shared --name containerB ...
Then in containerA you can cp ../db_backups/latest.sqlc /shared, and in containerB you can cp /shared/latest.sqlc /var/lib/pgadmin/storage/mine.
This is a nice solution because it doesn't require installing anything inside the container.
Alternately, you could:
Install the docker CLI inside each container, and mount the Docker socket into them; see the sketch after this list. This would let you run your docker cp command, but it gives anything inside the container complete control of your host (because access to docker == root access).
Run sshd in the target container, set up the necessary keys, and then use scp to copy things from the first container to the second container.
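A minimal sketch of the socket-mount approach, using the official docker CLI image (treat the image tag as an assumption to verify):
# WARNING: mounting the Docker socket effectively gives this container root on the host.
docker run -it -v /var/run/docker.sock:/var/run/docker.sock docker:cli sh
# Inside this shell, docker cp now works against any container on the host.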

How to 'docker exec' a container built from scratch?

I am trying to docker exec a container that is built from scratch (say, a NATS container). Seems pretty straightforward, but since it is built from scratch, I am unable to access /bin/bash, /bin/sh, or literally any such command.
I get the error: oci runtime error (command not found, file not found, etc. depending upon the command that I enter).
I tried some commands like:
docker exec -it <container name> /bin/bash
docker exec -it <container name> /bin/sh
docker exec -it <container name> ls
My question is, how do I docker exec a container that is built from scratch and consists only of binaries? By doing a docker exec, I wish to find out whether the files have been successfully copied from my host to the container (I have a COPY in the Dockerfile).
If your scratch container is running, you can copy a shell (and any other needed utilities) into its filesystem and then exec it. The shell must be a static binary; Busybox is a great choice here because it can double as so many other binaries.
Full example:
# Assumes scratch container is last launched one, else replace with container ID of
# scratch image, e.g. from `docker ps`, for example:
# scratch_container_id=401b31621b36
scratch_container_id=$(docker ps -ql)
docker run -d busybox:latest sleep 100
busybox_container_id=$(docker ps -ql)
docker cp "$busybox_container_id":/bin/busybox .
# The busybox binary will become whatever you name it (or the first arg you pass to it), for more info run:
# docker run busybox:latest /bin/busybox
# The `busybox --install` command copies the binary with different names into a directory.
docker cp ./busybox "$scratch_container_id":/busybox
docker exec -it "$scratch_container_id" /busybox sh -c '
export PATH="/busybin:$PATH"
/busybox mkdir /busybin
/busybox --install /busybin
sh'
For Kubernetes, Ephemeral Containers provide equivalent functionality.
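For example, kubectl debug can attach a busybox-based ephemeral container to a running pod (the pod and container names here are placeholders):
kubectl debug -it mypod --image=busybox:latest --target=mycontainer -- sh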
References:
distroless java docker image error
https://github.com/GoogleContainerTools/distroless/issues/168#issuecomment-371077961
There are several options.
You can do docker container cp ${CONTAINER}:/path/to/file/on/container /path/to/temp/dir/on/host. This will copy the files to your host where you can inspect things using host tools.
You can add an appropriate VOLUME to your Dockerfile. Then you can docker container inspect ${CONTAINER}, which will show the volume name where the files should be. You can then inspect those files from another container (based off an image with all the tools you need); see the sketch after this list.
You can at runtime bind the container to a volume or host directory at the appropriate place.
You can add those binaries that you feel you need to the image. If you need /bin/ls or /bin/sh, then you can add them.
You can bind mount the necessary binaries to the container - so the container has them for verification purposes but the image is not bloated by them.
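For the volume-inspection route, a sketch with placeholder container and volume names:
# Print each volume name and where it is mounted inside the container
docker container inspect -f '{{range .Mounts}}{{.Name}} {{.Destination}}{{println}}{{end}}' mycontainer
# Browse that volume from a throwaway container that does have tools
docker run --rm -it -v myvolume:/inspect busybox ls -lah /inspect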
You can only use docker exec to run commands that actually exist in a container. If those commands don't exist, you can't run them. As you've noted, the scratch base image contains nothing – no shells, no libraries, no system files, nothing.
If all you're trying to check is if a Dockerfile COPY command actually copied the files you said it would, I'd generally assume the tooling works and just reference the copied files in my application.
Since it sounds like you control the Dockerfile, one workaround could be to change the base image to something lightweight but non-empty, like FROM busybox. That would give you a minimal set of tools that you could work with without blowing up the image size too much.
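A minimal sketch of that change (the binary name is a placeholder):
FROM busybox
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
# busybox provides a shell, so docker exec -it <container> sh and ls now work.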
I am trying to do the same file check for my needs. I ended up using docker cp to copy the file out of the container. In my case I am using the nats container, but the same works for any other container running a scratch-based image:
sudo docker cp nats_nats_1:/nats-server.conf ./nats-server.conf
You can just grab the container identifier and throw it into a variable. For example, let's say the (truncated) output of docker ps -a lists your running container:
CONTAINER ID IMAGE
111111111111 neo4j-migrator
To further the example, you can docker exec -t using the variable you created. For example:
CONTAINER_ID=`docker ps -aqf "ancestor=neo4j-migrator"`
docker exec -it $CONTAINER_ID \
sh -c "/usr/bin/neo4j-migrations \
--password $NEO4J_PASSWORD \
--username $NEO4J_USERNAME \
--address $NEO4J_URI \
migrate"

Check the File in Exited Container

I have an issue invoking the script to start the container. I think I'd better first find a way to tell if the script is actually located in the right place. But neither docker exec nor docker attach seems to allow me to get into an exited container.
I also tried docker run -it --volumes-from [exited_container_id] ubuntu. I thought I might be able to see the file system in ubuntu, but I cannot find the mount point. Is there any way for me to log into an exited container and see the files that I ADDed?
You can check whether the script is located in the right place by adding a RUN ls -lah / line to your Dockerfile and building the image:
FROM frolvlad/alpine-oraclejdk8:slim
ADD build/libs/zuul*.jar /app.jar
ADD src/main/script/startup.sh /startup.sh
RUN ls -lah /
EXPOSE 8080 8999
ENTRYPOINT ["/startup.sh"]
Then just build the Dockerfile
docker build -t myapp .
You should see the result of that ls in the build output.
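As an aside, docker cp also works on exited containers, so you can pull the file straight out of the stopped container and inspect it on the host (the container ID is a placeholder):
docker cp <exited_container_id>:/startup.sh ./startup.sh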

Automatically run command inside docker container after starting up + volume mount

I have created my own simple image from the following Dockerfile:
FROM python:2.7.11
RUN mkdir -p /extra/later/ \
&& mkdir /yyy
Now I'm able to perform the following steps:
docker run -d -v xxx:/yyy myimage:latest
So now my volume is mounted inside the container. I can get a shell and run commands on that mounted volume inside my container:
docker exec -it container_id bash
bash# tar -cvpzf /mybackup.tar -C /yyy/ .
Is there a way to automate these steps in the Dockerfile, or by adding the commands to the docker run command?
The commands executed in the Dockerfile build the image, and the volume is attached to a running container, so you will not be able to run your commands inside of the Dockerfile itself and affect the volume.
Instead, you should create a startup script that is the command run by your container (via CMD or ENTRYPOINT in your Dockerfile). Place logic inside that startup script to detect that it needs to initialize the volume, and it will run when the container is launched. If you run the script with CMD, you will be able to override it with any command you pass to docker run, which may or may not be a good thing depending on your situation.
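A minimal sketch of such a startup script using a marker file to run the initialization only once (the file names are placeholders):
#!/bin/sh
# Back up / initialize the volume only on first start, then hand off
# to the container's main command.
if [ ! -f /yyy/.initialized ]; then
    tar -cvpzf /mybackup.tar -C /yyy/ .
    touch /yyy/.initialized
fi
exec "$@"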
Alternatively, try using the CMD option in the Dockerfile to run the tar command:
CMD tar -cvpzf /mybackup.tar -C /yyy/ .
or
CMD ["tar", "-cvpzf", "/mybackup.tar", "-C", "/yyy/", "."]

How to start a stopped Docker container with a different command?

I would like to start a stopped Docker container with a different command, as the default command crashes - meaning I can't start the container and then use 'docker exec'.
Basically I would like to start a shell so I can inspect the contents of the container.
Luckily I created the container with the -it option!
Find your stopped container id
docker ps -a
Commit the stopped container:
This command saves modified container state into a new image named user/test_image:
docker commit $CONTAINER_ID user/test_image
Start/run with a different entry point:
docker run -ti --entrypoint=sh user/test_image
Entrypoint argument description:
https://docs.docker.com/engine/reference/run/#/entrypoint-default-command-to-execute-at-runtime
Note:
Steps above just start a stopped container with the same filesystem state. That is great for a quick investigation, but environment variables, network configuration, attached volumes and other settings are not inherited. You should specify all these arguments explicitly.
Steps to start a stopped container have been borrowed from here: (last comment) https://github.com/docker/docker/issues/18078
Edit this file (corresponding to your stopped container):
vi /var/lib/docker/containers/923...4f6/config.json
Change the "Path" parameter to point at your new command, e.g. /bin/bash. You may also set the "Args" parameter to pass arguments to the command.
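For example, to get a shell instead of the crashing command, the relevant fields might look like this (a sketch; verify the exact field names in your version of the file):
"Path": "/bin/bash",
"Args": []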
Restart the docker service (note this will stop all running containers unless you first enable live-restore):
service docker restart
List your containers and make sure the command has changed:
docker ps -a
Start the container and attach to it, you should now be in your shell!
docker start -ai mad_brattain
Worked on Fedora 22 using Docker 1.7.1.
NOTE: If your shell is not interactive (e.g. you did not create the original container with -it option), you can instead change the command to "/bin/sleep 600" or "/bin/tail -f /dev/null" to give you enough time to do "docker exec -it CONTID /bin/bash" as another way of getting a shell.
NOTE2: Newer versions of docker have config.v2.json, where you will need to change either Entrypoint or Cmd (thanks user60561).
Add a check to the top of your Entrypoint script
Docker really needs to implement this as a new feature, but here's another workaround option for situations in which you have an Entrypoint that terminates after success or failure, which can make it difficult to debug.
If you don't already have an Entrypoint script, create one that runs whatever command(s) you need for your container. Then, at the top of this file, add these lines to entrypoint.sh:
# Run once, hold otherwise
if [ -f "already_ran" ]; then
    echo "Already ran the Entrypoint once. Holding indefinitely for debugging."
    cat
fi
touch already_ran
# Do your main things down here
To ensure that cat holds the connection, you may need to provide a TTY. I'm running the container with my Entrypoint script like so:
docker run -t --entrypoint entrypoint.sh image_name
This will cause the script to run once, creating a file that indicates it has already run (in the container's virtual filesystem). You can then restart the container to perform debugging:
docker start container_name
When you restart the container, the already_ran file will be found, causing the Entrypoint script to stall with cat (which just waits forever for input that will never come, but keeps the container alive). You can then execute a debugging bash session:
docker exec -i container_name bash
While the container is running, you can also remove already_ran and manually execute the entrypoint.sh script to rerun it, if you need to debug that way.
I took @Dmitriusan's answer and made it into an alias:
alias docker-run-prev-container='prev_container_id="$(docker ps -aq | head -n1)" && docker commit "$prev_container_id" "prev_container/$prev_container_id" && docker run -it --entrypoint=bash "prev_container/$prev_container_id"'
Add this into your ~/.bashrc aliases file, and you'll have a nifty new docker-run-prev-container alias which'll drop you into a shell in the previous container.
Helpful for debugging failed docker builds.
This is not exactly what you're asking for, but you can use docker export on a stopped container if all you want is to inspect the files.
mkdir $TARGET_DIR
docker export $CONTAINER_ID | tar -x -C $TARGET_DIR
docker-compose run --entrypoint /bin/bash service_name
(for convenience, put your environment variables and volume mounts in the docker-compose.yml)
or use docker run and manually specify all the arguments
It seems Docker can't change the entrypoint after a container has started, but you can set a custom entrypoint and change the entrypoint's code the next time you restart the container.
For example you run a container like this:
docker run --name c --entrypoint "/boot" -v "$PWD/boot":/boot $image
Here is the boot entry point:
#!/bin/bash
command_a
When you need to restart c with a different command, just change the boot script:
#!/bin/bash
command_b
And restart:
docker restart c
My Problem:
I started a container with docker run <IMAGE_NAME>.
Then I added some files to this container.
Then I closed the container and tried to start it again with the same command as above.
But when I checked the new files, they were missing.
When I ran docker ps -a I could see two containers.
That means every time I ran the docker run <IMAGE_NAME> command, a new container was being created.
Solution:
To work on the same container you created in the first place, follow these steps:
docker ps -a to get the ID of your container
docker container start <CONTAINER_ID> to start the existing container
Then you can continue from where you left off, e.g. docker exec -it <CONTAINER_ID> /bin/bash
You can then decide to create a new image out of it
I have found a simple command:
docker start -a [container_name]
This will do the trick.
Or
docker start [container_name]
then
docker exec -it [container_name] bash
I had a setup where the MariaDB container was continuously crashing on startup because of corrupted InnoDB tables.
What I did to solve my problem was:
copy out the docker-entrypoint.sh from the container to the local file system (docker cp)
edit it to include the needed command line parameter (--innodb-force-recovery=1 in my case)
copy the edited file back into the docker container, overwriting the existing entrypoint script.
To me Docker always leaves the impression that it was created for a hobby system, it works well for that.
If something fails or doesn't work, don't expect to have a professional solution.
That said: not only does Docker NOT support such basic administrative tasks, it actively tries to prevent them.
Solution:
cd /var/lib/docker/overlay2/
find | grep somechangedfile
# You can now see the changed file from your container inside a hex-named folder's diff directory
cd hexcoded-folder/diff
Create an entrypoint.sh (make sure to backup an existing one if it's there)
cat > entrypoint.sh
#!/bin/bash
while ((1)); do sleep 1; done;
Ctrl+C
chmod +x entrypoint.sh
docker stop <container>
docker start <container>
You now have your docker container running an endless loop instead of its original entrypoint; you can exec bash into it, or do whatever you need.
When finished, stop the container and remove/rename your custom entrypoint.
It seems like most of the time people are running into this while modifying a config file, which is what I did. I was trying to bypass CORS for a PHP/Apache server with a Vue SPA as my entry point. Anyway, if you know the file you horked, a simple solution that worked for me was:
Copy the file you horked out of the image:
docker cp bt-php:/etc/apache2/apache2.conf .
Fix it locally
Copy it back in
docker cp apache2.conf bt-php:/etc/apache2/apache2.conf
Start your container back up
*Bonus points - Since this file is being modified, add it to your Compose or Build scripts so that when you do get it right it will be baked into the image!
Lots of discussion surrounding this, so I thought I would add one more approach which I did not immediately see listed above:
If the full path to the entrypoint for the container is known (or discoverable via inspection) it can be copied in and out of the stopped container using 'docker cp'. This means you can copy the original out of the container, edit a copy of it to start a bash shell (or a long sleep timer) instead of whatever it was doing, and then restart the container. The running container can now be further edited with the bash shell to correct any problems. When finished editing another docker cp of the original entrypoint back into the container and a re-restart should do the trick.
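A sketch of that flow, with placeholder names for the container and its entrypoint:
# Save the original entrypoint, then swap in one that just idles
docker cp mycontainer:/entrypoint.sh ./entrypoint.sh.orig
printf '#!/bin/sh\nwhile true; do sleep 1; done\n' > entrypoint.sh
chmod +x entrypoint.sh
docker cp entrypoint.sh mycontainer:/entrypoint.sh
docker start mycontainer
docker exec -it mycontainer sh   # fix whatever is broken
# Restore the original entrypoint and restart
docker stop mycontainer
docker cp ./entrypoint.sh.orig mycontainer:/entrypoint.sh
docker start mycontainer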
I have used this once to correct a 'quick fix' that I butterfingered and was no longer able to run the container with the normal entrypoint until it was corrected.
I also agree there should be a better way to do this via docker: Maybe an option to 'docker restart' that allows an alternate entrypoint? Hey, maybe that already works with '--entrypoint'? Not sure, didn't try it, left as exercise for reader, let me know if it works. :)
