Run commands in Docker during run process

I want to be able to run a docker run ... command for my custom Ubuntu image where docker will run two commands as if they were typed as soon as the container begins running. I have my container bind-mounted to a local folder, and I have custom code within the mounted folder; I want running the container to also run cd Project and ./a.out within the container, but I am not sure how to do that in one long command.
I have tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu cd Project && ./a.out but I get an OCI runtime create failed error.
I have also tried docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="Project" myubuntu -c 'cd Project && ./a.out' but get the same error.
Ultimately, it would be nice to have my mounted directory, cd Project, ./a.out, and an exit command in my Dockerfile, so that the container opens, runs the compiled code within a.out, and then exits with a simple docker run myubuntu command. But I know that mounting within the Dockerfile would require the image to be rebuilt every time that local folder changes. So that leaves me with being able to open the container, run my two commands, and exit the container with one docker run command line.

I think you want to start a shell that runs your two commands:
docker run --mount ... myubuntu /bin/bash -c 'cd somewhere && do something'
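Applied to the command in the question, a sketch might look like the following. (The OCI runtime create failed error appears because cd is a shell built-in, not an executable the runtime can start on its own; note also that the bind-mount target must be an absolute path, so /Project is used here as an assumption.)
docker run --mount type=bind,source="/home/ec2-user/environment/Project",target="/Project" myubuntu /bin/bash -c 'cd /Project && ./a.out'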

Related

Batch file not executing a command

I am trying to create a .bat file that will back up the database inside the container. After the first command (the one used to enter the container), the next one is ignored.
docker exec -it CONTAINER_NAME /bin/bash
cd /var/opt/mssql/data
Any ideas why? If I manually type cd /var/opt/mssql/data in the opened shell, it works.
When running docker exec -it CONTAINER_NAME /bin/bash, you are opening a bash-shell INSIDE the docker container.
The next command, i.e. cd /var/opt/mssql/data, is only executed once the previous command, docker exec -it CONTAINER_NAME /bin/bash, has exited successfully, which means the shell inside the docker container has been closed/exited.
This means that cd /var/opt/mssql/data is then executed on the local machine, not inside the docker container.
To run a command inside the docker container, use the following command:
docker exec -it CONTAINER_NAME /bin/bash -c "<command>"
Although it may be better to create a script inside the container during the build process, or to mount a script into the docker container while starting it, and then simply call that script with the above-mentioned command.
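For the .bat file itself, a minimal sketch along those lines (the tar command is only a placeholder for the actual backup command, and the -it flags are unnecessary in a non-interactive script):
@echo off
REM run the whole backup inside the container in one shell invocation
docker exec CONTAINER_NAME /bin/bash -c "cd /var/opt/mssql/data && tar -czf /tmp/mssql-backup.tar.gz ."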

How to navigate to different folder in prebuilt Docker container?

I'm using a prebuilt container from Dockerhub. When I run the container it acts like it's in a folder called workspace, since my run command sudo docker run -it shubhamgoel/birds:bigbang bash returns root@eg2e775g0a1b:/workspace#
I don't know how to navigate to the correct folder. I need to run this container in a folder /home/s/ucmr.
If I do
sudo docker run -it shubhamgoel/birds:bigbang bash -c "cd:/home/s/ucmr"
I get
bash: cd:/home/s/ucmr: No such file or directory
How do I navigate to the correct folder with this prebuilt container? Thank you.
Edit: I've tried
sudo docker run -v /kitty:/dog --name kittycat -it shubhamgoel/birds:bigbang
and when I search for 'dog' on my disk there's no such folder. Also when I type in mkdir frog and search for 'frog' on my disk there's no such folder...
docker run -it shubhamgoel/birds:bigbang bash -c "cd:/home/s/ucmr" is wrong for two reasons. The first has already been covered by the other answer (wrong syntax for the cd command). The other is that using the -it docker option with a non-interactive bash is kind of meaningless. The -c bash option just means "execute whatever is between the double quotes and return to the caller", and this last part makes the interactivity vanish.
A first naive solution, but a working one, could be creating another shell like this:
docker run -it shubhamgoel/birds:bigbang bash -c "cd /home/s/ucmr && bash"
However, docker is far smarter and more flexible and lets you override some Dockerfile directives, for instance WORKDIR:
docker run -it -w="/home/s/ucmr" shubhamgoel/birds:bigbang bash
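If the working directory should stick permanently, a small derived image can bake the WORKDIR in (a sketch; the birds-ucmr tag is illustrative):
FROM shubhamgoel/birds:bigbang
WORKDIR /home/s/ucmr
Then build and run it as usual:
docker build -t birds-ucmr .
docker run -it birds-ucmr bash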

File created in interactive session within container disappears after exiting container (container running in background)

Objective
Essentially, what I am trying to accomplish is to install a bunch of software and, in the end, store the commands I ran in a Dockerfile. I was planning on recording the installation process by running the "script" utility to record the commands entered on the command line. I would like to know why it doesn't work, but if there is a better way of doing it, I am all ears!
The issue
I'm sure there is a simple answer to this, but I can't seem to figure it out. When I create a dummy file within my docker container, it disappears when I exit the container, even though the container is running in the background.
Attempt
This is my Dockerfile
##Filename = Dockerfile
FROM centos:7
WORKDIR /dummy_folder
CMD ["echo", "hello world"]
I build the image and run it in the background.
docker build -t my_test_image:v1.0 .
docker run -d e9e949b5d85a tail -f /dev/null
Now I can see my container running in the background
docker ps
CONTAINER ID   IMAGE          COMMAND               CREATED          STATUS          PORTS   NAMES
6f4da7a1b74d   e9e949b5d85a   "tail -f /dev/null"   14 minutes ago   Up 14 minutes           trusting_poincare
If I create an interactive session, dump a file in /dummy_folder, and exit, then when I create a new interactive session, /dummy_folder is empty again.
docker run -it e9e949b5d85a /bin/bash
echo "dummy" > dummy
exit
docker run -it e9e949b5d85a /bin/bash
ls -alh /dummy_folder
P.S. tail -f /dev/null is just a trick I use to keep the container running in the background, as just running it with the -d flag apparently doesn't work for centos containers.
I am running Docker version 19.03.8, build afacb8b
Thanks
Sabri
You're creating a new container every time you run docker run .... What I think you're trying to do is run a shell in the same container you started with docker run -d e9e949b5d85a tail -f /dev/null. If so, the Docker command you're looking for is exec.
Start an interactive session using the container ID (not the image ID) and do your stuff
docker exec -it 6f4da7a1b74d /bin/bash
$ echo "dummy" > dummy
$ exit
And then check the contents again with exec
docker exec 6f4da7a1b74d ls -alh /dummy_folder
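Putting the whole flow together with a named container (the name keepalive is illustrative), it could look like this:
docker run -d --name keepalive e9e949b5d85a tail -f /dev/null
docker exec -it keepalive /bin/bash -c 'echo "dummy" > /dummy_folder/dummy'
docker exec keepalive ls -alh /dummy_folder
The file persists for as long as that one container exists, because both exec commands attach to it instead of creating new containers.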

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits at completion. But what is the correct usage of containers? Should I use restart, start, or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or their usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts inside it.
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and set the command you want to run as the default command. The Dockerfile could look like this:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d <container_name> /bin/bash -c "<cmd-to-run>"
for each command.
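A sketch of that multi-command pattern end to end (the image tag and script path are illustrative):
Dockerfile:
FROM opencv:latest
CMD ["tail", "-f", "/dev/null"]
Build, start, and run commands against the one long-lived container:
docker build -t myopencv:latest .
docker run -d --name processmyimage -v /host/folder:/container/folder myopencv:latest
docker exec processmyimage /bin/bash /extract-embeddings.sh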

Automatically run command inside docker container after starting up + volume mount

I have created my own simple image from the following Dockerfile:
FROM python:2.7.11
RUN mkdir -p /extra/later/ \
 && mkdir /yyy
Now I'm able to perform the following steps:
docker run -d -v xxx:/yyy myimage:latest
So now my volume is mounted inside the container. I can then attach to the container and run commands on that mounted volume:
docker exec -it container_id bash
bash# tar -cvpzf /mybackup.tar -C /yyy/ .
Is there a way to automate these steps in the Dockerfile or by describing the commands in the docker run command?
The commands executed in the Dockerfile build the image, and the volume is attached to a running container, so you will not be able to run your commands inside of the Dockerfile itself and affect the volume.
Instead, you should create a startup script that is the command run by your container (via CMD or ENTRYPOINT in your Dockerfile). Place the logic inside of your startup script to detect that it needs to initialize the volume, and it will run when the container is launched. If you run the script with CMD you will be able to override running that script with any command you pass to docker run which may or may not be a good thing depending on your situation.
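A minimal sketch of that startup-script pattern (the file name entrypoint.sh is a choice made here, not something the image provides):
Dockerfile:
FROM python:2.7.11
RUN mkdir -p /extra/later/ && mkdir /yyy
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
entrypoint.sh:
#!/bin/sh
# runs once at container start; archive whatever was mounted at /yyy
tar -cvpzf /mybackup.tar -C /yyy/ .
Because the script runs via CMD, any command passed to docker run still overrides it.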
Try using the CMD option in the Dockerfile to run the tar command
CMD tar -cvpzf /mybackup.tar -C /yyy/ .
or
CMD ["tar", "-cvpzf", "/mybackup.tar", "-C", "/yyy/", "."]
