docker CMD /sbin/init with insmod

Could you let me know how to run insmod and /sbin/init in one CMD?
I wrote a Dockerfile with one of the below CMDs:
CMD [ "insmod", "a_module.ko", "&&", "/sbin/init" ]
CMD insmod a_module.ko && /sbin/init
Then I run
docker run -p 8080:80 -p 4433:443 -it --rm -d --cap-add=ALL --privileged <image id>
It emitted a container ID, but docker ps didn't show anything.
Only the second CMD could install a_module, but the process was not running.
Do you have any idea how to run these two from one CMD?
EDIT
I would like to set up an experimental environment in this container.
/sbin/init is needed to run systemctl in Docker.
insmod is not permitted from RUN even though it runs as root. Currently it is done manually in the container, but I would like to automate it.
That's why insmod and /sbin/init are needed from CMD.

To run more than one command with CMD, you would use:
CMD ["/bin/bash", "-c", "command1 && command2"]
Why you would want to run full operating system services in the container by starting /sbin/init is another question; it is not normal practice. Containers normally run one application only. Depending on how the container is set up when run, you may not even be able to run all the normal operating system services or commands.

Related

Run a repository in Docker

I am super new to Docker. I have a repository (https://github.com/hect1995/UBIMET_Challenge.git) that I developed on a Mac and want to test in an Ubuntu environment using Docker.
I have created a Dockerfile as:
FROM ubuntu:18.04
# Update aptitude with new repo
RUN apt-get update \
&& apt-get install -y git
RUN git clone https://github.com/hect1995/UBIMET_Challenge.git
WORKDIR /UBIMET_Challenge
RUN mkdir build
WORKDIR build
RUN cmake ..
RUN make
Now, following some examples I am running:
docker run --publish 8000:8080 --detach --name trial
But I do not see any terminal output from the container to tell what is going on. How can I run this container and, while inside it, check what else I need to add?
TLDR
add '-it' and remove '--detach'
or add ENTRYPOINT in Dockerfile and use docker exec -it to access your container
Longer explanation:
With this command
docker run --publish 8000:8080 --detach --name trial image_name
you tell docker to run the image image_name as a container named trial, publish container port 8080 on host port 8000, and detach (run in the background).
Your Dockerfile does not define which command should be executed (CMD, ENTRYPOINT); however, your image extends the 'ubuntu:18.04' image, so docker will run the command defined in that image. It's bash.
By default your container is in non-interactive mode, so bash has nothing to do and simply exits. Check this with the docker ps -a command.
You have also specified --detach, which tells docker to run the container in the background.
To avoid this situation, remove --detach and add -it (interactive, allocate a pseudo-TTY). Now you can execute commands in your container.
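For example, the same run command with those changes:
docker run --publish 8000:8080 -it --name trial image_name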
Next step
A better idea is to set ENTRYPOINT to your application, or just keep the container alive with a 'sleep infinity' command.
Try (sleep forever, or run /opt/my_app):
docker run --publish 8000:8080 --detach --name trial image_name sleep infinity
or
docker run --publish 8000:8080 --detach --name trial image_name /opt/my_app
You can also define ENTRYPOINT in your Dockerfile
ENTRYPOINT ["sleep", "infinity"]
or
ENTRYPOINT ["/opt/my_app"]
then use
docker exec -it trial bash #to run bash on container
docker exec trial cat /opt/app_logs #to see logs
docker logs trial # to see console output of your app
You want to provide an ENTRYPOINT or CMD layer in your Dockerfile, I believe.
Right now, it configures itself nicely when you build it - but I'm not seeing any component that points to an executable for the container to do something with.
You're probably not seeing any output because the container 'doesn't do anything' currently.
Check out this breakdown of CMD: Difference between RUN and CMD in a Dockerfile
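For instance, a minimal sketch of the missing layer (the binary name is a hypothetical placeholder; use whatever your make step actually produces in the build directory):
# hypothetical executable name produced by 'make'
CMD ["./UBIMET_Challenge"]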

re-running a script in a docker container

I have created a docker image that includes some python code and a shell script that can execute it. It is going to process a bunch of images from the host system.
This command should create a new container and run it.
sudo docker run -v /host/folder:/container/folder opencv:latest bash /extract-embeddings.sh
At the end, the container exits. If I type the same command, another container is created and exits at completion. But what is the correct usage of containers? Should I use restart, start, or run (and then clean up exited containers afterwards)? It just seems unnecessary to create a new container each time.
I basically just want a docker image containing some code and 3-4 different commands I can execute whenever needed.
And the docker start command doesn't seem to accept "bash /extract-embeddings.sh" as parameters; instead it thinks bash and extract-embeddings.sh are containers. So maybe I am misunderstanding the lifecycle of containers or the usage.
edit:
Got it to work with:
docker run -t -d --name opencv -v /host/folder:/container/folder opencv:latest
docker exec -it opencv bash /extract-embeddings.sh
You can write a Dockerfile to create your docker image and keep the scripts in it:
Dockerfile:
FROM opencv:latest
COPY ./your-script /some_folder
Create image:
docker build -t my_image .
Run your container:
docker run -d --name my_container my_image
Run the script inside the container:
docker exec -it <container_id_or_name> bash /some_folder/your-script
Build your own docker image that starts from opencv:latest and give the command you run as the entrypoint. The Dockerfile could look like:
FROM opencv:latest
CMD ["/bin/bash", "/extract-embeddings.sh"]
Use docker create to create a named container.
sudo docker create --name=processmyimage -v /host/folder:/container/folder myopencv:latest
Then use docker start each time you want to run it.
sudo docker start processmyimage
This works well if there is only one command you want to run. If there is more than one command, I would take the approach of building an image that runs an unrelated command forever (like tail -f /dev/null). Then you can use
sudo docker exec -d <container-name> /bin/bash <cmd-to-run>
for each command.
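A sketch of that approach (the container name keepalive is a hypothetical placeholder):
# keep a throwaway process in the foreground so the container stays up
sudo docker run -d --name keepalive -v /host/folder:/container/folder myopencv:latest tail -f /dev/null
# then run each command as needed
sudo docker exec -d keepalive /bin/bash /extract-embeddings.sh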

Docker container is not running even if -d

I'm French and new here (so I don't know yet how Stack Overflow and its community work); I'm going to try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile (it is a DNS container).
In the Dockerfile, this container has to start script.sh when it starts.
But after using that:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it is DNS.)
I can see my DNS running at the end of my script.sh, but when I do
docker ps -a
the container is not running.
I'm a novice with Docker; I started learning it 2 days ago.
I tried to add (one by one of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use:
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bash
My Dockerfile:
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh]
CMD ["/bin/bash"]
My script.sh file:
service bind9 stop
*(it copies and replaces the conf file for bind9)*
service bind9 restart
I hope there are not too many mistakes and that I managed to make myself understood.
I expect the DNS container to stay running so I can use it with docker exec.
But right now, after docker run, the container starts and stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me [ok] bind9 running or something like that. But afterwards the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (I don't know exactly):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]

docker ubuntu container exec bash issue

I pull & run an image like
docker run -d --name=lemp \
-v /Users/bappa/Desktop/server/www:/var/www/ \
-p 8080:80 \
stenote/docker-lemp:16.04
& then go to bash like
docker exec -it lemp bash
which is absolutely fine. But when I do the same thing with the ubuntu:16.04 image, I get the response below.
Where is the problem? Why does the container exit? Thanks.
The different behavior is caused by their Dockerfile CMD or ENTRYPOINT.
Once the main process (CMD or ENTRYPOINT) finishes, a docker container stops.
If you look at docker-lemp Dockerfile:
ENTRYPOINT ["/entrypoint.sh"]
Comparing to Ubuntu Dockerfile:
CMD ["bash"]
docker-lemp runs entrypoint.sh, which starts further processes that remain in the foreground, while Ubuntu runs bash, which exits immediately because no interactive terminal is attached.
If you want to keep Ubuntu in the background, a simple trick would be:
docker container run -d ubuntu:16.04 tail -f /dev/null
This replaces the default CMD bash with tail -f /dev/null so the container does not exit.
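You can then open a shell in the kept-alive container, for example (the name u16 is a hypothetical placeholder):
docker container run -d --name u16 ubuntu:16.04 tail -f /dev/null
docker exec -it u16 bash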

Keeping alive Docker containers with supervisord

I end my Debian Dockerfile with these lines:
EXPOSE 80 22
COPY etc/supervisor/conf.d /etc/supervisor/conf.d
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
In /etc/supervisor/conf.d/start.conf file:
[program:ssh]
command=/usr/sbin/service ssh restart
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:systemctl]
command=/bin/systemctl daemon-reload
[program:systemctl]
command=/bin/systemctl start php7-fpm.service
If I try to run this Docker image with the following command:
$ docker run -d -p 8080:80 -p 8081:22 lanti/debian
It immediately stops running. If I try to run it in the foreground:
$ docker run -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian
It's the same: immediate exit. If I run it with a bash CMD:
$ docker run --rm -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian bash
It stays active in the console, but the commands predefined by supervisor do not run, so I need to run $ service supervisor restart inside the container, otherwise nginx and SSH won't be started.
How can I start a docker container with multiple commands running at startup? In the past I used ExecStartPost lines in a systemd unit file under the host OS, but because of that the systemd file became complex, so I am trying to move the pre-start commands into the container, to run automatically at any type of startup.
This docker container will have nginx, php, ssh, phpmyadmin and mysql in the future. I don't want multiple containers.
Thank You for your help!
Let's preface this by saying that running the kitchen sink in a docker container is not a best practice. Docker is not a virtual machine.
That said, a few problems.
Just like the processes that supervisor controls, supervisor itself should NOT daemonize. Add -n.
I'm not entirely sure why you expect, need, or want to have systemd and supervisor running. Most docker containers do not have a functioning init system. Why not just use supervisor for everything? Unless docker has significantly changed in the last couple of versions, systemd inside the container will not work like you think it should.
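A sketch of the corrected setup (the php-fpm path and binary name are assumptions; check your distribution's package; -n keeps supervisord itself in the foreground):
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
and in /etc/supervisor/conf.d/start.conf, foreground-only programs instead of the systemctl entries:
[program:ssh]
; -D keeps sshd from daemonizing
command=/usr/sbin/sshd -D
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:php-fpm]
; assumed Debian path/name for PHP 7.0 FPM; -F = don't daemonize
command=/usr/sbin/php-fpm7.0 -F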
