I have the following Dockerfile that is supposed to start an nginx server:
# Set the base image to Ubuntu
FROM ubuntu
# File Author / Maintainer
MAINTAINER myname "myemail#gemail.com"
# Install Nginx
# Add application repository URL to the default sources
# RUN echo "deb http://archive.ubuntu.com/ubuntu/ raring main universe" >> /etc/apt/sources.list
# Update the repository
RUN apt-get update
# Install necessary tools
RUN apt-get install -y nano wget dialog net-tools
# Download and Install Nginx
RUN apt-get install -y nginx
# Remove the default Nginx configuration file
RUN rm -v /etc/nginx/nginx.conf
# Copy a configuration file from the current directory
ADD nginx.conf /etc/nginx/
# Append "daemon off;" to the configuration file
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
# Expose ports
EXPOSE 80
# Set the default command to execute when creating a new container
CMD service nginx start
When I try to start the image that I created using docker build, I get the following error:
my-MacBook-Pro:nginx-docker me$ docker run -it myname:nginx-latest
2016-07-15T12:57:00.525640076+02:00 container create 0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451 (image=myname:nginx-latest, name=sad_knuth)
2016-07-15T12:57:00.542025710+02:00 container attach 0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451 (image=myname:nginx-latest, name=sad_knuth)
2016-07-15T12:57:00.573835719+02:00 network connect 5e0b48e45380f1548ab524cfa7bfe94ae952ce740df76fda9e35561a258e31ef (container=0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451, name=bridge, type=bridge)
2016-07-15T12:57:00.654772110+02:00 container start 0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451 (image=myname:nginx-latest, name=sad_knuth)
2016-07-15T12:57:00.656973139+02:00 container resize 0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451 (height=42, image=myname:nginx-latest, name=sad_knuth, width=181)
* Starting nginx nginx [fail]
2016-07-15T12:57:00.715136152+02:00 container die 0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451 (exitCode=1, image=myname:nginx-latest, name=sad_knuth)
2016-07-15T12:57:00.757891577+02:00 network disconnect 5e0b48e45380f1548ab524cfa7bfe94ae952ce740df76fda9e35561a258e31ef (container=0220107a0060cf61ff7dcb601870086b4ae7a0b7cacd914d63719e9a3d0d9451, name=bridge, type=bridge)
my-MacBook-Pro:nginx-docker me$
EDIT:
I even tried the following as per the recommendations:
myname-MacBook-Pro:nginx-docker me$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0220107a0060 myname:nginx-latest "/bin/sh -c 'service " 8 minutes ago Exited (1) 8 minutes ago sad_knuth
7252213512f0 myname:nginx-latest "/bin/sh -c 'service " 23 minutes ago Exited (1) 23 minutes ago clever_jennings
53a5e6978978 myname:nginx-latest "/bin/sh -c 'service " 25 minutes ago Exited (1) 25 minutes ago gigantic_dubinsky
10fc9e4b749f myname:nginx-latest "/bin/sh -c 'service " 27 minutes ago Exited (1) 27 minutes ago distracted_almeida
64161feb2b2a 0d73419e8da9 "nginx -g 'daemon off" 3 hours ago Exited (1) 3 hours ago pensive_mcnulty
5c9eab4c7998 hello-world "/hello" 4 hours ago Exited (0) 4 hours ago sharp_kare
myname-MacBook-Pro:nginx-docker me$ docker container 0220107a0060
docker: 'container' is not a docker command.
See 'docker --help'.
myname-MacBook-Pro:nginx-docker me$ docker logs 0220107a0060
* Starting nginx nginx [fail]
CMD service nginx start means the main process exits immediately after launching nginx, so the container stops.
You would need
CMD /usr/sbin/nginx -g "daemon off;"
to make sure nginx is not launched in daemon mode.
Other alternatives are here, but I prefer this one as it limits the number of processes to just nginx.
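The same fix in exec form avoids the implicit /bin/sh -c wrapper, so nginx itself runs as the container's main process (this is also the CMD the official nginx image uses):

```dockerfile
# Exec form: nginx is PID 1 and stays in the foreground
CMD ["nginx", "-g", "daemon off;"]
```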
Could you help me?
I'm trying to run a container from a Dockerfile, but it shows this warning and my container does not start.
compose.parallel.parallel_execute_iter: Finished processing: <Container: remote-host>
Starting remote-host ... done
compose.parallel.feed_queue: Pending: set()
compose.parallel.parallel_execute_iter: Finished processing: <Service: remote_host>
compose.parallel.feed_queue: Pending: set()
Attaching to jenkins, remote-host
compose.cli.verbose_proxy.proxy_callable: docker logs <- ('f2e305942e57ce1fe90c2ca94d3d9bbc004155a136594157e41b7a916d1ca7de', stdout=True, stderr=True, stream=True, follow=True)
remote-host | Unable to load host key: /etc/ssh/ssh_host_rsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ecdsa_key
remote-host | Unable to load host key: /etc/ssh/ssh_host_ed25519_key
remote-host | sshd: no hostkeys available -- exiting.
compose.cli.verbose_proxy.proxy_callable: docker events <- (filters={'label': ['com.docker.compose.project=jenkins', 'com.docker.compose.oneoff=False']}, decode=True)
My Dockerfile is this:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 400 /home/remote_user/.ssh/authorized_keys
CMD /usr/sbin/sshd -D
Start with an empty dir and put the following in that dir as a file called Dockerfile:
FROM centos
RUN yum -y install openssh-server
RUN yum install -y passwd
RUN useradd remote_user
RUN echo "1234" | passwd remote_user --stdin
RUN mkdir /home/remote_user/.ssh
RUN chmod 700 /home/remote_user/.ssh
COPY remote_user.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user
RUN chmod 400 /home/remote_user/.ssh/authorized_keys
# generate the missing host keys, otherwise sshd exits with "no hostkeys available"
RUN ssh-keygen -A
CMD /usr/sbin/sshd -D
# CMD ["/bin/bash"]
# ... save this file as Dockerfile then in same dir issue following
#
# docker build --tag stens_centos . # creates image stens_centos
#
# docker run -d stens_centos sleep infinity # launches container and just sleeps only purpose here is to keep container running
#
# docker ps # show running containers
#
#
# ... find CONTAINER ID from above and put into something like this
#
# docker exec -ti $( docker ps | grep stens_centos | cut -d' ' -f1 ) bash # login to running container
#
Then, in that same dir, put your ssh key files as per:
eve#milan ~/Dropbox/Documents/code/docker/centos $ ls -la
total 28
drwxrwxr-x 2 eve eve 4096 Nov 2 15:20 .
drwx------ 77 eve eve 12288 Nov 2 15:14 ..
-rw-rw-r-- 1 eve eve 875 Nov 2 15:20 Dockerfile
-rwx------ 1 eve eve 3243 Nov 2 15:18 remote_user
-rwx------ 1 eve eve 743 Nov 2 15:18 remote_user.pub
Then cat out the Dockerfile and copy and paste the commands listed at its bottom ... for me all of them just worked OK.
After I copied and pasted those commands, the container got built and executed:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0a06ebd2752a stens_centos "sleep infinity" 7 minutes ago Up 7 minutes pedantic_brahmagupta
Keep in mind you must define at the bottom of your Dockerfile a CMD (or similar) that is exactly what you want executed as the container runs. Typically that is a server, which by definition runs forever. Alternatively, the CMD can be something that runs and then finishes, like a batch job, in which case the container exits when that job finishes. With this in mind, I suggest you confirm whether sshd -D will hold as a server or will terminate immediately upon launch of the container.
I've just replied to this GitHub issue, but here's what I experienced and how I fixed it
I just had this issue with my Jekyll blog site, which I normally bring up using docker-compose with a mapped volume so it rebuilds when I create a new post. It was hanging, so I ran docker-compose up with the --verbose switch and saw the same compose.parallel.feed_queue: Pending: set().
I tried it on my MacBook and it was working fine.
I didn't have any experimental features turned on, but I did need to go (on Windows) into Settings -> Resources -> File Sharing and add the folder I was mapping in my docker-compose file (the root of my blog site).
I re-ran docker-compose and it's now up and running.
This Dockerfile hangs after the download has completed:
FROM ubuntu:18.04
MAINTAINER Dean Schulze
ENV JS_CE_VERSION 7.1.0
ENV JS_CE_HOME /opt/jasperreports-server-cp-${JS_CE_VERSION}
ENV PATH $PATH:${JS_CE_HOME}
RUN apt-get update && apt-get install -y wget \
&& wget --progress=bar:force:noscroll -O TIB_js-jrs-cp_${JS_CE_VERSION}_linux_x86_64.run https://sourceforge.net/projects/jasperserver/files/JasperServer/JasperReports%20Server%20Community%20Edition%20${JS_CE_VERSION}/TIB_js-jrs-cp_${JS_CE_VERSION}_linux_x86_64.run \
&& chmod a+x TIB_js-jrs-cp_${JS_CE_VERSION}_linux_x86_64.run \
&& /TIB_js-jrs-cp_${JS_CE_VERSION}_linux_x86_64.run --mode unattended --jasperLicenseAccepted yes --postgres_password Postgres1 --prefix ${JS_CE_HOME} \
&& rm TIB_js-jrs-cp_${JS_CE_VERSION}_linux_x86_64.run \
&& rm -rf ${JS_CE_HOME}/apache-ant ${JS_CE_HOME}/buildomatic \
${JS_CE_HOME}/docs ${JS_CE_HOME}/samples ${JS_CE_HOME}/scripts \
&& apt-get clean
EXPOSE 8080
CMD ctlscript.sh start && tail -f /dev/null
The last few lines of output are
Resolving superb-dca2.dl.sourceforge.net (superb-dca2.dl.sourceforge.net)... 209.61.193.20
Connecting to superb-dca2.dl.sourceforge.net (superb-dca2.dl.sourceforge.net)|209.61.193.20|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 343517555 (328M) [application/x-makeself]
Saving to: 'TIB_js-jrs-cp_7.1.0_linux_x86_64.run'
TIB_js-jrs-cp_7.1.0 100%[===================>] 327.60M 1.60MB/s in 3m 52s
2018-07-28 03:15:28 (1.41 MB/s) - 'TIB_js-jrs-cp_7.1.0_linux_x86_64.run' saved [343517555/343517555]
How do I diagnose a docker build that hangs like this?
Edit
Here's the requested output:
$ docker image history d5d47e51eafc
IMAGE CREATED CREATED BY SIZE COMMENT
d5d47e51eafc 23 minutes ago /bin/sh -c #(nop) ENV PATH=/usr/local/sbin:… 0B
831a3a551fa7 23 minutes ago /bin/sh -c #(nop) ENV JS_CE_HOME=/opt/jaspe… 0B
e8361426e492 23 minutes ago /bin/sh -c #(nop) ENV JS_CE_VERSION=6.3.0 0B
7af364f52b1b 23 minutes ago /bin/sh -c #(nop) MAINTAINER JS Minet 0B
735f80812f90 30 hours ago /bin/sh -c #(nop) CMD ["/bin/bash"] 0B
<missing> 30 hours ago /bin/sh -c mkdir -p /run/systemd && echo 'do… 7B
<missing> 30 hours ago /bin/sh -c sed -i 's/^#\s*\(deb.*universe\)$… 2.76kB
<missing> 30 hours ago /bin/sh -c rm -rf /var/lib/apt/lists/* 0B
<missing> 30 hours ago /bin/sh -c set -xe && echo '#!/bin/sh' > /… 745B
<missing> 30 hours ago /bin/sh -c #(nop) ADD file:4bb62bb0587406855… 83.5MB
It looks like the lines are in reverse (newest-first) order, and the last one executed is the ENV PATH. The next line to run would be the RUN line, which has multiple commands separated by &&.
This is a modification of a Dockerfile on DockerHub. I changed ubuntu versions and the version of the app being installed. That shouldn't have broken anything.
Should I file a bug report?
First list the "layers" of your finished or incomplete image. Each layer typically corresponds to an instruction in your Dockerfile.
Identify the image ID using
docker image ls
then list the layers inside the image using
docker image history <ID>
You will see something like this:
IMAGE CREATED CREATED BY SIZE COMMENT
6c32fe3da793 2 days ago /bin/sh -c #(nop) COPY file:c25ef1dcc737cb59… 635B
4c1309db9b9c 2 days ago /bin/sh -c #(nop) COPY dir:30506cf0fc0cdb096… 8.64kB
5f5ae40b5fd5 3 weeks ago /bin/sh -c apk update && apk add --no-cache … 164MB
989d78741d0b 3 weeks ago /bin/sh -c #(nop) ENV DOCKER_VERSION=18.03.… 0B
6160911711fc 3 weeks ago /bin/sh -c #(nop) CMD ["python3"] 0B
... etc
Then create a container from any layer inside your image and get a shell in it. From there you can manually run the next instruction from your Dockerfile, the one that causes the problem.
Eg:
docker run -it --rm 4c1309db9b9c sh
At some point in the last couple of years, BuildKit became the default Docker build backend. As a performance optimization, BuildKit does not write out intermediate layers as images, so the layer-by-layer technique above no longer applies directly. If you need to debug a hanging docker build under BuildKit, comment out the line the build hangs at along with all subsequent lines; the resulting Dockerfile builds to completion with docker build . You can then launch the image, open a shell in the container, run the command that was causing the hang, and inspect the state of the container to debug the situation. Alternatively, setting DOCKER_BUILDKIT=0 for the build falls back to the classic builder, which does keep intermediate layers as images.
I got this to work by rolling back to ubuntu 16.04. There must be some change in 18.04 that causes this Dockerfile to fail, or maybe the ubuntu 18.04 image omitted something.
I made changes to the Fabric code and would like to test them by running docker-compose. I run make peer; make docker and see the following.
mkdir -p build/image/peer/payload
cp build/docker/bin/peer build/sampleconfig.tar.bz2 build/image/peer/payload
mkdir -p build/image/orderer/payload
cp build/docker/bin/orderer build/sampleconfig.tar.bz2 build/image/orderer/payload
mkdir -p build/image/testenv/payload
cp build/docker/bin/orderer build/docker/bin/peer build/sampleconfig.tar.bz2 images/testenv/install-softhsm2.sh build/image/testenv/payload
mkdir -p build/image/tools/payload
cp build/docker/bin/cryptogen build/docker/bin/configtxgen build/docker/bin/configtxlator build/docker/bin/peer build/sampleconfig.tar.bz2 build/image/tools/payload
When I run docker images, I still see the same images as before:
hyperledger/fabric-orderer latest 391b202306fa 3 weeks ago 180MB
hyperledger/fabric-orderer x86_64-1.1.0 391b202306fa 3 weeks ago 180MB
hyperledger/fabric-peer latest e0f3bdb4506f 3 weeks ago 187MB
hyperledger/fabric-peer x86_64-1.1.0 e0f3bdb4506f 3 weeks ago 187MB
How do I go from there to generate a new Docker image? What am I missing?
I am using Ubuntu 16.04. Thanks for your time.
cd build/image/peer/
docker build -t hyperledger/fabric-peer:x86_64-1.1.x .
Change the image name in the docker-compose.yaml file and you are good to go.
I need to get some containers to a dead state, as I want to check whether a script of mine is working. Any advice is welcome. Thank you.
You've asked for a dead container.
TL;DR: This is how to create a dead container
Don't do this at home:
ID=$(docker run --name dead-experiment -d -t alpine sh)
docker kill dead-experiment
test "$ID" != "" && chattr +i -R /var/lib/docker/containers/$ID
docker rm -f dead-experiment
And voila, docker cannot delete the container root directory, so the container falls into status=dead:
docker ps -a -f status=dead
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
616c2e79b75a alpine "sh" 6 minutes ago Dead dead-experiment
Explanation
I've inspected the source code of docker and saw this state transition:
container.SetDead()
// (...)
if err := system.EnsureRemoveAll(container.Root); err != nil {
return errors.Wrapf(err, "unable to remove filesystem for %s", container.ID)
}
// (...)
container.SetRemoved()
So, if docker cannot remove the container root directory, the container remains dead and never reaches the Removed state. That is why I changed the file attributes so that not even root can remove the files (chattr +i).
PS: to revert the directory permissions do this: chattr -i -R /var/lib/docker/containers/$ID
For Docker 1.12+, the HEALTHCHECK instruction can help you.
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. This can detect cases such as a web server that is stuck in an infinite loop and unable to handle new connections, even though the server process is still running.
For example, here is a Dockerfile defining a simple webapp:
FROM nginx:1.13.1
RUN apt-get update \
&& apt-get install -y curl \
&& rm -rf /var/lib/apt/lists/*
HEALTHCHECK --interval=15s --timeout=3s \
CMD curl -fs http://localhost:80/ || exit 1
This checks every 15 seconds that the web server is able to serve the site's main page within three seconds.
The command’s exit status indicates the health status of the container. The possible values are:
0: success - the container is healthy and ready for use
1: unhealthy - the container is not working correctly
2: reserved - do not use this exit code
Then use docker build command to build an image:
$ docker build -t myapp:v1 .
And run a container using this image:
$ docker run -d --name healthcheck-demo -p 80:80 myapp:v1
Check the status of the container:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b812c8d6f43a myapp:v1 "nginx -g 'daemon ..." 3 seconds ago Up 2 seconds (health: starting) 0.0.0.0:80->80/tcp healthcheck-demo
At the beginning, the status of container is (health: starting); after a while, it changes to be healthy:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2bb640a6036 myapp:v1 "nginx -g 'daemon ..." 2 minutes ago Up 5 minutes (healthy) 0.0.0.0:80->80/tcp healthcheck-demo
It takes --retries consecutive failures of the health check (3 by default) for the container to be considered unhealthy.
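For example, a variant of the health check above that tolerates transient failures by requiring five consecutive failures before the container is marked unhealthy (the interval and timeout values are kept from the earlier example):

```dockerfile
# --retries defaults to 3; raise it so a single slow response
# does not flip the container to unhealthy
HEALTHCHECK --interval=15s --timeout=3s --retries=5 \
    CMD curl -fs http://localhost:80/ || exit 1
```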
You can use your own script to replace the command curl -fs http://localhost:80/ || exit 1. What's more, stdout and stderr of your script can be fetched from docker inspect command:
$ docker inspect --format '{{json .State.Health}}' healthcheck-demo |python -m json.tool
{
"FailingStreak": 0,
"Log": [
{
"End": "2017-06-09T19:39:58.379906565+08:00",
"ExitCode": 0,
"Output": " % Total % Received % Xferd Average Speed Time Time Time Current\n Dload Upload Total Spent Left Speed\n\r 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\r100 612 100 612 0 0 97297 0 --:--:-- --:--:-- --:--:-- 99k\n<!DOCTYPE html>\n<html>\n<head>\n<title>Welcome to nginx!</title>\n<style>\n body {\n width: 35em;\n margin: 0 auto;\n font-family: Tahoma, Verdana, Arial, sans-serif;\n }\n</style>\n</head>\n<body>\n<h1>Welcome to nginx!</h1>\n<p>If you see this page, the nginx web server is successfully installed and\nworking. Further configuration is required.</p>\n\n<p>For online documentation and support please refer to\nnginx.org.<br/>\nCommercial support is available at\nnginx.com.</p>\n\n<p><em>Thank you for using nginx.</em></p>\n</body>\n</html>\n",
"Start": "2017-06-09T19:39:58.229550952+08:00"
}
],
"Status": "healthy"
}
Hope this helps!
I have a problem with a container that runs a cron job. The job executes curator to remove some elasticsearch indices. I have read many similar posts on stackoverflow but I still don't get it. The job seems to call the curator but the indices are not removed. The same command works if I run it manually.
This is my Dockerfile
FROM ubuntu:xenial
RUN apt-get update && apt-get install python-pip rsyslog -y
RUN groupadd -r curator && useradd -r -g curator curator
RUN pip install elasticsearch-curator
RUN apt-get install cron
COPY delete_indices_cron /etc/cron.d/delete_indices_cron
COPY ./delete_indices.sh /opt/delete_indices.sh
COPY ./configs /opt/config
RUN ["crontab", "/etc/cron.d/delete_indices_cron"]
RUN chmod 644 /etc/cron.d/delete_indices_cron
RUN chmod 744 /opt/delete_indices.sh
RUN touch /var/log/cron.log
CMD ["rsyslogd"]
ENTRYPOINT ["cron","-f","&&", "tail","-f","/var/log/cron.log"]
I run the image afterward with
docker run -d --link elasticsearch:elasticsearch --name curator mycurator4
and the docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
eea96a48aa3a mycurator4 "cron -f && tail -f /" 15 minutes ago Up 15 minutes curator
e584c9b090c8 vagrant-registry.vm:5000/sslserver "python /sslServer/ss" 2 weeks ago Up 2 weeks 0.0.0.0:12121->12121/tcp sslserver
20eee9943664 kibana:4 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:5601->5601/tcp kibana
8c462586982e logstash:2 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:5044->5044/tcp, 0.0.0.0:12201->12201/tcp, 0.0.0.0:12201->12201/udp logstash
c971fa3e357b elasticsearch:2 "/docker-entrypoint.s" 2 weeks ago Up 2 weeks 0.0.0.0:9200->9200/tcp, 9300/tcp elasticsearch
4af9a78a4b1f jenkins "/bin/tini -- /usr/lo" 2 weeks ago Up 2 weeks 0.0.0.0:8080->8080/tcp, 50000/tcp
jenkins
UPDATE: the problem was that curator could not be found as a command in the environment. When I changed it to the relative path, the problem was solved. Also, based on some suggestions, I removed the .sh from /opt/delete_indices.sh because ansible "does not like this"!
IMHO, this is a square peg, round hole situation.
Instead, I would add only the curator contents and necessary files into the image, and use the host system's cron to run the container. This would ensure you have the right env vars set and avoid other misc problems you may have with cron inside a container.
To answer your question, this would be what command you are running from within the container:
cron -f && tail -f /var/log/cron.log rsyslogd
The first issue is the &&, which doesn't behave the way you want it to: cron exits, which causes docker to exit when cron is complete, so tail -f is never called. At least, that's what I found when I ran the && locally as a test. Secondly, if you want to look at the output, you'd run docker logs curator.
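A minimal sketch of one possible fix, assuming cron alone is enough as the main process (rsyslogd and the tail are dropped; with cron in the foreground its output goes straight to docker logs):

```dockerfile
# Exec form takes no shell, so there is no "&&" to be passed to cron
# as a literal argument; cron -f stays in the foreground as PID 1.
ENTRYPOINT ["cron", "-f"]
```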