docker interactive mode exits after ENTRYPOINT

Given the following Dockerfile:
FROM python:2.7-wheezy
RUN apt-get update && apt-get upgrade --assume-yes
RUN apt-get install mysql-server --assume-yes
ENTRYPOINT service mysql start
When I run the container, it exits immediately after starting the mysql server:
Jamess-iMac:docker-python-test jameslin$ docker run -i -t 9618f71f65e4 /bin/bash
[ ok ] Starting MySQL database server: mysqld ..
[info] Checking for tables which need an upgrade, are corrupt or were
not closed cleanly..
Jamess-iMac:docker-python-test jameslin$
How do I make it automatically start up mysql but stay in interactive mode?

ubuntu 12.04 into docker “service mysql start”
If you read the above thread you can see that Docker doesn't have runlevels, so mysql doesn't know when it should start.
You can run two containers and create a network between them: one for mysql and another for your Python app. Here is how you create a network between two containers:
docker network create <network_name>
Start each container attached to the new network using --net=<network_name>:
docker run -d --net=anetwork --name=mysql -e MYSQL_USER=ROOT -e MYSQL_ALLOW_EMPTY_PASSWORD=yes mysql
docker run --net=anetwork --name=pythonapp -it python:2.7-wheezy bash
I think you are confused between ENTRYPOINT and CMD. The key point to understand is that an ENTRYPOINT will always be run when the image is started, even if a command is supplied to the docker run invocation. If you try to supply a command, it will add that as an argument to the ENTRYPOINT, replacing the default defined in the CMD instruction. You can only override the ENTRYPOINT if you explicitly pass in an --entrypoint flag to the docker run command.
This means that running the image with a /bin/bash command will not give you a shell; rather, it will supply /bin/bash as an argument to service mysql start.
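To see this interaction concretely, here is a minimal sketch (the base image and echo arguments are made up purely for illustration):

```dockerfile
FROM debian
# Exec-form ENTRYPOINT: always runs; the command from `docker run` (or CMD)
# is appended to it as arguments.
ENTRYPOINT ["echo", "entrypoint got:"]
# CMD supplies the default argument when none is given to `docker run`.
CMD ["default-arg"]
```

Running this image plain prints entrypoint got: default-arg; running it with /bin/bash appended prints entrypoint got: /bin/bash rather than starting a shell, which is exactly the behavior described above. Only docker run --entrypoint /bin/bash <image> gets you a shell.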
Network between container
ubuntu 12.04 into docker “service mysql start”
Difference Between CMD and ENTRYPOINT in Dockerfile

Your ENTRYPOINT / CMD directive needs to be a long-running command. service mysql start is not one: on Ubuntu it launches the service in the background and then the command itself exits.
To make this simple: if you're just trying to run a mysql container, you can run docker run mysql. If you absolutely need to build a one-off mysql image, you can piggy-back off the way the default mysql container starts here: the MySQL Dockerfile's CMD ["mysqld"], which should be similar to the actual start command you see in /etc/init.d/mysql.
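As a sketch of the single-container route, the original Dockerfile could end with the server itself as the long-running command. Whether mysqld or mysqld_safe is the right foreground binary depends on the Debian wheezy mysql-server packaging, so treat this as an assumption to verify:

```dockerfile
FROM python:2.7-wheezy
RUN apt-get update && apt-get install -y mysql-server
# Run the server in the foreground as PID 1 instead of the init script,
# so the container keeps running after startup.
CMD ["mysqld_safe"]
```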

Related

docker run --env, --net and --volume options in docker-compose for displaying image

I'm trying to replicate a docker run command and its options within a docker-compose file:
My Dockerfile is:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev python3-opencv
RUN apt-get install -y libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-module
WORKDIR /
RUN mkdir /imgs
COPY app.py ./
CMD ["/bin/bash"]
And I use the following command to run the container so that it can display images from shared volume properly:
docker build -t docker_test:v1 .
docker run -it --net=host --env=DISPLAY --volume=$HOME/.Xauthority:/root/.Xauthority docker_test:v1
In order to replicate the previous command, I tried the docker-compose file below:
version: "3.7"
services:
  docker_test:
    container_name: docker_test
    build: .
    environment:
      - DISPLAY=:1
    volumes:
      - $HOME/.Xauthority:/root/.Xauthority
      - $HOME/docker_test/imgs:/imgs
    network_mode: "host"
However, after building the image and running the app script from inside the container (using the copy of the script baked into the image, not the one from the shared volume):
docker-compose up
docker run -ti docker_test_docker_test
python3 app.py
The following error arises:
Unable to init server: Could not connect: Connection refused
(OpenCV Image Reading:9): Gtk-WARNING **: 09:15:24.889: cannot open display:
In addition, the volumes do not seem to be shared.
docker run never looks at a docker-compose.yml file; every option you need to run the container must be specified directly in the docker run command. Conversely, Compose is much better at running long-running processes than at running interactive shells (and you want the container to run the program directly, in much the same way you don't typically start a Python REPL and invoke main() from there).
With your setup, first you're launching a container via Compose. This will promptly exit (because the main container command is an interactive bash shell and it has no stdin). Then, you're launching a second container with default options and manually running your script there. Since there's no docker run -e DISPLAY option, it doesn't see that environment variable.
The first thing to change here, then, is to make the image's CMD start the application:
...
COPY app.py .
CMD ./app.py
Then running docker-compose up (or docker run your-image) will start the application without further intervention from you. You probably need a couple of other settings to successfully connect to the host display (propagating $DISPLAY unmodified, mounting the host's X socket into the container); see Can you run GUI applications in a Linux Docker container?.
(If you're trying to access the host display and use the host network, consider whether an isolation system like Docker is actually the right tool; it would be much simpler to directly run ./app.py in a Python virtual environment.)
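For reference, the Compose-side settings that usually matter for reaching a host X display look like this sketch (assuming an X11 host; /tmp/.X11-unix is the conventional socket path and may differ on your system):

```yaml
services:
  docker_test:
    build: .
    network_mode: "host"
    environment:
      - DISPLAY                               # propagate the host's $DISPLAY unmodified
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix         # the host's X socket
      - $HOME/.Xauthority:/root/.Xauthority   # X authentication cookie
```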

Docker container is not running even if -d

I'm French and new here (so I don't yet know how Stack Overflow and its community work); I'll try to adapt.
So, my first problem is the following:
I run a Docker container from an image I created with a Dockerfile. (It is a DNS container.)
Per the Dockerfile, this container has to start script.sh when it starts.
But after using:
docker run -d -ti -p 53:53 alex/dns
(I use -p 53:53 because it's DNS.)
I can see my DNS running at the end of my script.sh, but when I do docker ps -a, the container is not running.
I'm a novice with Docker; I started learning it 2 days ago.
I tried to add (one by one of course):
CMD ["bash"]
CMD ["/bin/bash"]
to run bash and make sure the container does not power off.
I tried to add -d to the docker run command.
I tried to use :
docker commit ti alex/dns
and
docker exec -ti alex/dns /bin/bash
My dockerfile file :
FROM debian
...
RUN apt-get install bind9
...
ADD script.sh /usr/bin/script.sh
...
ENTRYPOINT ["/bin/bash", "script.sh"]
CMD ["/bin/bash"]
My file script.sh :
service bind9 stop
*It copies and replaces the conf file for bind9*
service bind9 restart
I hope there are not too many mistakes and that I've managed to make myself understood.
I expect the DNS container to stay running so I can use it with docker exec.
But now, after docker run, the container starts and stops just after my script finishes. Yes, the DNS server is running; before closing, the container tells me [ok] Bind9 running, or something like that. But afterwards the container stops.
I suspect the problem you're facing is that your container will terminate once service bind9 restart completes.
You need to have a foreground process running to keep the container running.
I'm unfamiliar with bind9 but I recommend you explore ways to run bind9 in the foreground in your container.
Your command to run the container is correct:
docker run -d -ti -p 53:53 alex/dns
You may need to:
RUN apt-get update && apt-get -y install bind9
You will likely need something like (don't know):
ENTRYPOINT ["/bind9"]
Googled it ;-)
https://manpages.debian.org/jessie/bind9/named.8.en.html
After you've configured it, you can run it as a foreground process:
ENTRYPOINT ["named","-g"]
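Putting that together, a minimal Dockerfile sketch might be (the COPY line assumes you have a prepared named.conf; adjust paths and config for your setup):

```dockerfile
FROM debian
RUN apt-get update && apt-get -y install bind9
# Assumption: named.conf is your prepared bind9 configuration
COPY named.conf /etc/bind/named.conf
EXPOSE 53/tcp 53/udp
# -g keeps named in the foreground, so the container stays alive
ENTRYPOINT ["/usr/sbin/named", "-g"]
```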

Puppet container won't start automatically

So I have created a puppet container for a certificate authority. It works, but does not start correctly. Here is my Dockerfile:
FROM centos:6
RUN yum update -y
RUN rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y ruby puppet-server
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/var/lib/puppet/ssl/"]
EXPOSE 9140
#NOTHING BELOW THIS COMMENT RUNS
RUN puppet master --verbose
CMD service puppetmaster start
CMD chkconfig puppetmaster on
CMD []
I can then start the container with the following run command (note that I named the image ca-puppet):
docker run -d -p 9140:9140 -it --name ca-puppet \
-v /puppet-host/ssl:/var/lib/puppet/ssl \
ca-puppet bash
The issue is that I need to docker exec into the container and run the following commands to get it started and create the ca certificates in its ssl directory:
puppet master --verbose
service puppetmaster start
chkconfig puppetmaster on
I have a feeling I should be using some other Docker file commands to run the last 3 commands. What should that be?
There can only be one CMD instruction in a Dockerfile. If you list
more than one CMD then only the last CMD will take effect.
also
If the user specifies arguments to docker run then they will override
the default specified in CMD.
However, using the default process manager (e.g., SysV, systemd, etc) in Docker for most mainstream distros can cause problems (without making a lot of modifications). You generally don't need it, however -- particularly if you're only running one application (as is often considered best practice). In a Docker container, you generally want your primary application to be the first process (PID 1).
You can do this by not daemonizing puppet and start it as the default container command via something like:
CMD puppet master --verbose --no-daemonize
And use the Docker host to manage it (via restart policy, etc.).
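Concretely, the tail of the Dockerfile could be reduced to a single foreground CMD, something like this sketch (the last two lines replace the original RUN/CMD/chkconfig block; untested):

```dockerfile
FROM centos:6
RUN yum update -y
RUN rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
RUN yum install -y ruby puppet-server
ADD puppet.conf /etc/puppet/puppet.conf
VOLUME ["/var/lib/puppet/ssl/"]
EXPOSE 9140
# Keep puppet in the foreground as PID 1; no service/chkconfig inside the container
CMD ["puppet", "master", "--verbose", "--no-daemonize"]
```

A restart policy on the host then takes over chkconfig's job, e.g. docker run -d --restart unless-stopped -p 9140:9140 ... ca-puppet.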

Must I provide a command when running a docker container?

I'd like to install mysql server on a centos:6.6 container.
However, when I run docker run --name myDB -e MYSQL_ROOT_PASSWORD=my-secret-pw -d centos:6.6, I got docker: Error response from daemon: No command specified. error.
Checking the document from docker run --help, I found that the COMMAND seems to be an optional argument when executing docker run. This is because [COMMAND] is placed inside a pair of square brackets.
$ docker run --help
Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
Run a command in a new container
I also found that the official repository of mysql doesn't specify a command when starting a MySQL container:
Starting a MySQL instance is simple:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:tag
Why should I provide a command when running a centos:6.6 container, but not so when running a mysql container?
I'm guessing that maybe centos:6.6 is specially-configured so that the user must provide a command when running it.
If you use centos:6.6, you do need to provide a command when you issue the docker run command.
The reason the official repository of mysql does not specify a command is that it has a CMD instruction in its Dockerfile: CMD ["mysqld"]. Check its Dockerfile here.
The CMD in a Dockerfile is the default command used when the container is run without a command.
You can read here to better understand what you can use in a docker file.
In your case, you can:
Start your centos 6.6 container.
Take the official mysql Dockerfile as a reference and issue similar commands (changing apt-get to yum, or sudo yum if you don't use the default root user).
Once you can successfully start mysql, put all your commands in your own Dockerfile, making sure the first line is FROM centos:6.6.
Build your image.
Run a container with your image; then you don't need to provide a command to docker run.
You can share your Dockerfile on Docker Hub, so that other people can use it.
Good luck.
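Following those steps, a minimal Dockerfile sketch might look like this (package and binary names are assumptions for CentOS 6, and the data-directory initialization in particular may need adjusting):

```dockerfile
FROM centos:6.6
RUN yum install -y mysql-server && yum clean all
# Assumption: initializing the data directory at build time is acceptable for a demo image
RUN /usr/bin/mysql_install_db --user=mysql
EXPOSE 3306
# Default command, so `docker run` needs no explicit COMMAND
CMD ["mysqld_safe"]
```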

Keeping alive Docker containers with supervisord

I end my Debian Dockerfile with these lines:
EXPOSE 80 22
COPY etc/supervisor/conf.d /etc/supervisor/conf.d
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
In /etc/supervisor/conf.d/start.conf file:
[program:ssh]
command=/usr/sbin/service ssh restart
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:systemctl]
command=/bin/systemctl daemon-reload
[program:systemctl]
command=/bin/systemctl start php7-fpm.service
If I try to run this Docker image with the following command:
$ docker run -d -p 8080:80 -p 8081:22 lanti/debian
It immediately stops running. If I try to run it in the foreground:
$ docker run -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian
It's the same: immediate exit. If I run it with a bash CMD:
$ docker run --rm -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian bash
It stays active in the console, but the commands predefined for supervisor do not run, so I need to run $ service supervisor restart inside the container; otherwise Nginx and SSH won't be started.
How can I start a docker container with multiple commands run at startup? In the past I used ExecStartPost lines in a systemd file under the host OS, but because of that the systemd file became complex, so I'm trying to move the pre-start commands into the container, to run automatically on any type of startup.
This docker container will have nginx, php, ssh, phpmyadmin and mysql in the future. I don't want multiple containers.
Thank you for your help!
Let's preface this by saying that running the kitchen sink in a docker container is not a best practice. Docker is not a virtual machine.
That said, a few problems.
Just like the processes that supervisor controls, supervisor itself should NOT daemonize. Add -n.
I'm not entirely sure why you expect, need, or want to have both systemd and supervisor running. Most docker containers do not have a functioning init system, and you generally don't need one; why not just use supervisor for everything? Unless docker has significantly changed in the last couple of versions, systemd inside the container will not work the way you think it should.
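As a sketch, a supervisor-only start.conf along those lines could look like this (binary names and paths are assumptions that vary by distribution; note also that supervisor section names must be unique, so the two [program:systemctl] sections in the original cannot coexist):

```ini
[supervisord]
nodaemon=true               ; same effect as -n: supervisor stays in the foreground

[program:sshd]
command=/usr/sbin/sshd -D   ; run the daemon itself, not "service ssh restart"

[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'

[program:php-fpm]
command=/usr/sbin/php-fpm7.0 -F   ; -F keeps php-fpm in the foreground
```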

Resources