Container does not start if I map an existing in-container dir - docker

I am trying to create a container with a directory mapped to a host directory:
docker run -d --name web1 -p 8080:80 -v /vagrant/web/config:/etc/nginx/conf.d nginx
But for some reason, if I specify a directory that already exists in the container, the container is created (I get an ID back) but it does not start.
If I run docker ps I do not see the container, but if I run docker ps -a I see:
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS                      PORTS   NAMES
6b8bde856fdc   nginx   "nginx -g 'daemon off"   10 minutes ago   Exited (1) 10 minutes ago           web1
If I do the same but, instead of the existing /etc/nginx/conf.d dir, specify something else, it works.
There are no related records in the log /var/log/upstart/docker.log.
Additional info
Docker version 1.11.2, build b9f10c9
Ubuntu Trusty

What is the content of /vagrant/web/config on the host?
The problem may be an error in one of your nginx configuration files that prevents nginx from starting. And if nginx doesn't start, the container exits.
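A quick way to see the actual error without starting a new container is to read the logs of the exited one (web1 is the container name from the question):
$ docker logs web1
The nginx startup error, if any, will be printed there.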

Run another container with the same settings but with bash as the entrypoint. Then manually start the command from the original container and look for the error:
$ docker run -it --entrypoint=/bin/bash --name web2 -p 8080:80 -v /vagrant/web/config:/etc/nginx/conf.d nginx
root@7553b294969f:/# nginx -g 'daemon off;'
# example: with a badly formatted default.conf, nginx exits with an [emerg] error like this
2016/07/04 08:14:15 [emerg] 10#10: unexpected end of parameter, expecting ";" in command line
nginx: [emerg] unexpected end of parameter, expecting ";" in command line
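You can also validate the mounted configuration in a throwaway container; a one-off check using the same bind mount as in the question (nginx -t tests the configuration and exits):
$ docker run --rm -v /vagrant/web/config:/etc/nginx/conf.d nginx nginx -t
If a file under conf.d is broken, this prints the same kind of [emerg] message, including the offending file and line.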

Related

Docker: how to map host directory?

I am trying to reproduce the steps to create an Ubuntu-based image with nginx, described here:
https://www.howtoforge.com/tutorial/how-to-create-docker-images-with-dockerfile/
My host machine is Windows.
The image builds successfully. I then created a d:\webroot folder on the host with an index.html file inside, and tried to run:
docker run -v /d/webroot:/var/www/html -p 80:80 --name doom nginx_image
standard_init_linux.go:211: exec user process caused "no such file or directory"
What may be the reason, and how do I fix it?
The issue is with the start.sh script, which was created on Windows. Excerpt below:
#!/bin/sh
/usr/bin/supervisord -n -c /etc/supervisor/supervisord.conf
You need to change the line endings from CRLF to LF in start.sh.
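One way to convert the file, assuming a Unix-like shell such as Git Bash or WSL is available on the Windows host:
sed -i 's/\r$//' start.sh   # strip the trailing CR from each line
# or, if dos2unix is installed:
dos2unix start.sh
Most editors (for example VS Code or Notepad++) can also switch a file's line endings directly. Rebuild the image afterwards so the fixed script is copied in.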
And then run:
docker run -v /d/webroot:/var/www/html -p 80:80 --name doom nginx_image

Command with 2 paths when running a docker container

Hey, I'm very new at this, so please bear with me.
I'm trying to run a Docker container I exported. The container was originally running with the command /sbin/tini -- /usr/local/bin/jenkins.sh.
I've tried using this:
sudo docker run -p 8080:8080 --name=test testcontainer --entrypoint=/sbin/tini -- /usr/local/bin/jenkins.sh
However I get errors:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"--entrypoint=/sbin/tini\": stat --entrypoint=/sbin/tini: no such file or directory": unknown.
I've also tried a combination of them with a space between, like so:
sudo docker run -p 8080:8080 --name=test testcontainer --entrypoint=/sbin/tini /usr/local/bin/jenkins.sh
How would I go about running that command?
--entrypoint goes before the image name:
sudo docker run -p 8080:8080 --name=test --entrypoint=/sbin/tini testcontainer /usr/local/bin/jenkins.sh
The extra arguments follow the image name and become the command (the -- separator isn't needed).
Or, if bash is the default entrypoint, you can pass the whole thing as a command:
sudo docker run -p 8080:8080 --name=test testcontainer bash -c "/sbin/tini -- /usr/local/bin/jenkins.sh"
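If you're unsure which entrypoint and command the exported image already defines, docker inspect can show both (testcontainer is the image name from the question):
$ docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' testcontainer
If the entrypoint is already /sbin/tini with the Jenkins script as the command, a plain docker run -p 8080:8080 --name=test testcontainer may be all that's needed.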

Apache website not updating after restarting docker

I have created an apache2 Docker container:
docker run -dit --name tecmint-web -p 8080:80 -v /home/user/website/:/usr/local/apache2/htdocs/ httpd:2.4
This works. But if I stop this container and start it again with docker restart <container-id>, it does not pick up new HTML files from my /home/user/website/.
How can I re-sync these two directories after restarting Docker?
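One way to confirm whether the bind mount is still attached after the restart (a diagnostic sketch; tecmint-web is the container from the question):
$ docker inspect -f '{{ json .Mounts }}' tecmint-web
docker restart reuses the same container, so the mount itself should persist; if the files still look stale, caching (in the browser or in Apache) is a more likely culprit than the mount.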

Keeping alive Docker containers with supervisord

I end my Debian Dockerfile with these lines:
EXPOSE 80 22
COPY etc/supervisor/conf.d /etc/supervisor/conf.d
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/supervisord.conf"]
In the /etc/supervisor/conf.d/start.conf file:
[program:ssh]
command=/usr/sbin/service ssh restart
[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'
[program:systemctl]
command=/bin/systemctl daemon-reload
[program:systemctl]
command=/bin/systemctl start php7-fpm.service
If I try to run this Docker image with the following command:
$ docker run -d -p 8080:80 -p 8081:22 lanti/debian
It immediately stops running. If I try to run it in the foreground:
$ docker run -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian
It's the same: immediate exit. If I run it with bash as the CMD:
$ docker run --rm -it -v /home/core/share:/root/share -p 8080:80 -p 8081:22 lanti/debian bash
It stays active in the console, but the commands predefined for supervisor do not run, so I need to run $ service supervisor restart inside the container; otherwise nginx and SSH won't start.
How can I start a Docker container with multiple commands running at startup? In the past I used ExecStartPost lines in a systemd unit file on the host OS, but because of that the systemd file became complex, so I am trying to move the pre-start commands into the container, to run automatically on any type of startup.
This docker container will have nginx, php, ssh, phpmyadmin and mysql in the future. I don't want multiple containers.
Thank you for your help!
Let's preface this by saying that running the kitchen sink in a Docker container is not a best practice. Docker is not a virtual machine.
That said, a few problems.
Just like the processes that supervisor controls, supervisor itself should NOT daemonize. Add -n to its command line.
I'm not entirely sure why you expect, need, or want to have systemd and supervisor running together. Most Docker containers do not have a functioning init system. Why not just use supervisor for everything? Unless Docker has significantly changed in the last couple of versions, systemd inside the container will not work the way you think it should. A corrected setup is sketched below.
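A minimal sketch of a corrected setup, dropping systemd entirely and keeping supervisor in the foreground (the php-fpm binary path below is a placeholder; it varies by Debian release and PHP version):
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]
And in /etc/supervisor/conf.d/start.conf:
[program:sshd]
command=/usr/sbin/sshd -D

[program:nginx]
command=/usr/sbin/nginx -g 'daemon off;'

[program:php-fpm]
command=/usr/sbin/php-fpm7.0 -F
Each program stays in the foreground (-D for sshd, daemon off; for nginx, -F for php-fpm) so supervisor can monitor and restart it. Note there is only one [program:...] section per name; the duplicate [program:systemctl] sections from the original config are invalid.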

Docker CMD to start Haproxy in Dockerfile

Here is my CMD command in my Dockerfile for haproxy:
CMD ["/etc/init.d/haproxy"]
Now when I run the image the following happens:
...
Successfully built 2eb6549e0a22
root@server:/# docker run -d -p 80:80 -p 81:81 -p 443:443 -p 1988:1988 --name haproxy -h haproxy user/haproxy
09b510c4df712414d8855d3e0fb27b7e35d5c5c2f0f9b07f7f29c8efdb93e852
root@server:/# docker ps -a
CONTAINER ID   IMAGE          COMMAND                 CREATED         STATUS                     PORTS   NAMES
09b510c4df71   user/haproxy   "/etc/init.d/haproxy"   5 seconds ago   Exited (2) 4 seconds ago           haproxy
As you can see it exits straight away. How do I keep it running?
The init script starts haproxy as a background daemon and then exits, so the container's main process finishes and the container stops. Does your user/haproxy image inherit from the official haproxy image on Docker Hub? If so, the haproxy command should be in the PATH, and its -d flag keeps it in the foreground.
I usually specify the config file too (with -f), so my CMD looks like:
haproxy -d -f /usr/local/etc/haproxy/haproxy.cfg
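In a Dockerfile that could look like the following (a sketch in exec form, so haproxy runs as PID 1 and receives signals directly; the config path matches the official image's default):
CMD ["haproxy", "-d", "-f", "/usr/local/etc/haproxy/haproxy.cfg"]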
