I spent the weekend poring over the Docker docs and playing around with the toy applications and example projects. I'm now trying to write a super-simple web service of my own and run it from inside a container. In the container, I want my app (a Spring Boot app under the hood) -- called bootup -- to have the following directory structure:
/opt/
bootup/
bin/
bootup.jar ==> the app
logs/
bootup.log ==> log file; GETS CREATED BY THE APP AT STARTUP
config/
application.yml ==> app config file
logback.groovy ==> log config file
It's very important to note that when I run my app locally on my host machine - outside of Docker - everything works perfectly fine, including the creation of log files in my host's /opt/bootup/logs directory. The app endpoints serve up the correct content, etc. All is well and dandy.
So I created the following Dockerfile:
FROM openjdk:8
RUN mkdir /opt/bootup
RUN mkdir /opt/bootup/logs
RUN mkdir /opt/bootup/config
RUN mkdir /opt/bootup/bin
ADD build/libs/bootup.jar /opt/bootup/bin
ADD application.yml /opt/bootup/config
ADD logback.groovy /opt/bootup/config
WORKDIR /opt/bootup/bin
EXPOSE 9200
ENTRYPOINT java -Dspring.config=/opt/bootup/config -jar bootup.jar
I then build my image via:
docker build -t bootup .
I then run my container:
docker run -it -p 9200:9200 -d --name bootup bootup
I run docker ps:
CONTAINER ID IMAGE COMMAND ...
3f1492790397 bootup "/bin/sh -c 'java ..."
So far, so good!
My app should then be serving a simple web page at localhost:9200, so I open my browser to http://localhost:9200 and I get nothing.
When I use docker exec -it 3f1492790397 bash to "ssh" into my container, everything looks fine, except that the /opt/bootup/logs directory, which should have a bootup.log file in it -- created at startup -- is instead empty.
I tried using docker attach 3f1492790397 and then hitting http://localhost:9200 in my browser, to see if that would generate some standard output (my app logs both to /opt/bootup/logs/bootup.log and to the console), but that doesn't yield any output.
So I think what's happening is that my app (for some reason) doesn't have permission to create its own log file when the container starts up, which puts the app in a weird state or even prevents it from starting up altogether.
So I ask:
Is there a way to see what user my app is starting up as?; or
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
Thanks in advance!
I don't know why your app isn't working, but I can answer your questions:
Is there a way to see what user my app is starting up as?; or
A: Docker containers run as root unless otherwise specified.
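If you want to check for yourself, you can ask the running container directly; a quick sketch, assuming the container is named bootup as in your docker run command:
docker exec -it bootup id
# prints uid=0(root) gid=0(root) ... if nothing else was configured
# to run as a different user instead, add a USER instruction to the Dockerfile, e.g.:
# USER nobody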
Is there a way to tail standard output while the container is starting? Attaching after startup doesn't help me because I think by the time I run the docker attach command the app has already choked
A: Docker containers send stdout/stderr to the Docker logs by default. There are two ways to see these: one is to run the container with the -it flags instead of -d to get an interactive session that shows the stdout from your container. The other is to use the docker logs <container_name> command on a running or stopped container.
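For example, reusing the container name from your docker run command (bootup):
docker logs bootup            # everything the app has written to stdout/stderr so far
docker logs -f bootup         # follow the output live, like tail -f
docker logs --tail 100 bootup # only the last 100 lines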
docker attach 3f1492790397
This doesn't do what you are hoping for. What you want is docker exec (probably docker exec -it bootup bash), which will give you a shell in the scope of the container which will let you check for your log files or try and hit the app using curl from inside the container.
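A minimal session might look like this (assuming curl is present in the image; install it first if it isn't):
docker exec -it bootup bash
# now inside the container:
curl -v http://localhost:9200/
ls -l /opt/bootup/logs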
Why do I get no output?
Hard to say without the info from the earlier commands. Is your app listening on 0.0.0.0 or only on localhost (your laptop browser will look like an external machine to the container)? Does your app require a supervisor process that isn't running? Does it require some other JAR files that are on the CLASSPATH on your laptop but not in the container? Are you running Docker using Docker Machine (in which case the published ports live on the VM's IP address, not on localhost)?
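If the bind address turns out to be the culprit, a Spring Boot app can be told explicitly to listen on all interfaces via its config; a minimal sketch for the application.yml in the question (server.address and server.port are standard Spring Boot properties, the values are assumptions for this app):
server:
  address: 0.0.0.0
  port: 9200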
I'm facing a problem with my docker.
I have my own image of SonarQube 3.6.2 which includes a couple of custom rules.
I tried to put it in a container, but whenever SonarQube is started as the container's command, the container keeps on restarting again and again.
I tried every single idea that I had: ENTRYPOINT (both forms: ENTRYPOINT ["/sonarQube362/bin/linux-x86-64/sonar.sh", "start"] and ENTRYPOINT /sonarQube362/bin/linux-x86-64/sonar.sh start), CMD (both forms), and a separate run.sh with these command lines inside:
#!/bin/bash
set -e
#nohup /sonarQube362/bin/linux-x86-64/sonar.sh start
exec /sonarQube362/bin/linux-x86-64/sonar.sh start
I always end up with the "Restarting" status on my container, and the logs simply show that Sonar is restarted, again, and again, and again...
If my Dockerfile ends with CMD top for example, then I can docker exec -ti container bash into it and run any of the commands above successfully.
Do you guys have any idea why, when set as a CMD or an ENTRYPOINT, SonarQube/Docker keeps restarting in a loop?
Cheers,
OK. I just found the solution.
I updated the sonar.sh script to change the COMMAND_LINE. It used to daemonize the wrapper; I changed that so it does NOT daemonize the wrapper. That way Docker can keep track of it...
For the sake of clarity, here is the line:
Before:
#COMMAND_LINE="$CMDNICE \"$WRAPPER_CMD\" \"$WRAPPER_CONF\" wrapper.syslog.ident=$APP_NAME wrapper.pidfile=\"$PIDFILE\" wrapper.daemonize=TRUE $ANCHORPROP $IGNOREPROP $LOCKPROP"
After:
COMMAND_LINE="$CMDNICE \"$WRAPPER_CMD\" \"$WRAPPER_CONF\" wrapper.syslog.ident=$APP_NAME wrapper.pidfile=\"$PIDFILE\" wrapper.daemonize=FALSE $ANCHORPROP $IGNOREPROP $LOCKPROP"
Of course, you can do this using awk or sed while building the Docker image, but that's another topic...
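A rough sketch of that sed step in the Dockerfile, assuming sonar.sh lives at the path used above:
RUN sed -i 's/wrapper.daemonize=TRUE/wrapper.daemonize=FALSE/' /sonarQube362/bin/linux-x86-64/sonar.sh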
Hope this helps,
Cheers,
Olivier
I am trying to run a simple docker container with my web application installed (Not using docker file).
During the testing I would always run a container using -t -i option and then start the tomcat service inside it by running a shell script.
Now that I am moving to production, I don't want to use the -t -i options any more; I just need my Tomcat service to start and be the only primary service.
I tried pointing the entrypoint to the startup script for starting Tomcat, but the container terminates after that script finishes.
How do I run a container, start a service and keep that service as the single primary service of the container?
Note: I read some posts about supervisor, but I'm not sure if I would need to start building my image from scratch if I go that route. I would prefer not to do that.
Any suggestions?
If you have a Dockerfile that uses an entrypoint pattern, it will look something like this:
(Dockerfile)
FROM ubuntu
...Some configuration steps...
ADD start.sh /start.sh
ENTRYPOINT ["/start.sh"]
All you need to do is make sure your start.sh script 'hangs' in some way. Some people like to tail the syslogs, but tailing any file that exists will work.
(start.sh)
#!/bin/bash
service Your_Service_Or_Whatever start
tail -f /var/log/dmesg
A shorter version:
FROM ubuntu
...Some configuration steps...
ENTRYPOINT ["/bin/sh", "-c", "while true; do sleep 1; done"]
tested with Docker version 1.12.1, build 23cf638
Use docker --version to find out your version
Docker containers by default run according to the configuration in the image's Dockerfile. If you usually run a container with the -i flag, you leave STDIN open, giving you access to the container's entrypoint (or a bash shell, if that is the command). To achieve what you want, you can run the container in a detached state, passing your command into docker run directly.
docker run -d myapp /opt/catalina/bin/catalina.sh run
This will run the myapp container in a detached state and run the command passed after the image name (catalina.sh run keeps Tomcat in the foreground, whereas startup.sh daemonizes it and exits, which would end the container). If the command results in a long-lived foreground service, the container will stay active as long as the service is.
This is explained in detail in the docs.
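For a Tomcat container specifically, a fuller invocation might look something like this (image name, container name, and port are assumptions based on the question):
docker run -d -p 8080:8080 --name myapp-prod myapp /opt/catalina/bin/catalina.sh run
# check that it stays up and watch its output:
docker ps
docker logs -f myapp-prod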
Where does Docker store its logs, on the host machine as well as in the Docker container? I know you can use docker logs, but I want to know the physical location. Here is an example to illustrate. I have a Java application which generates standard output logs, and I am using the following script to run it in a Docker container:
#!/bin/bash
nohup java -jar /opt/pubsub/publish.jar &
java -jar /opt/pubsub/subscribe.jar
I am unable to find the nohup output file in the container; however, I can see its content using docker logs. So where is my nohup output?
Secondly, where are the logs generated by Docker itself?
They are stored in /var/lib/docker/containers
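With the default json-file logging driver, each container writes a JSON log file under that directory; you can ask Docker for the exact path (the container name is a placeholder):
docker inspect --format '{{.LogPath}}' mycontainer
# typically /var/lib/docker/containers/<container-id>/<container-id>-json.log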
You can try docker-ci, which offers a solution for logging. It will watch the logs and send them to the client.
I hope this will help you.
http://docker-ci.org/documentation#remote-logging
I've seen a bunch of tutorials that seem to do the same thing I'm trying to do, but for some reason my Docker containers exit. Basically, I'm setting up a web server and a few daemons inside a Docker container. I do the final parts of this through a bash script called run-all.sh that I run through CMD in my Dockerfile. run-all.sh looks like this:
service supervisor start
service nginx start
And I start it inside of my Dockerfile as follows:
CMD ["sh", "/root/credentialize_and_run.sh"]
I can see that the services all start up correctly when I run things manually (i.e. getting on to the image with -i -t /bin/bash), and everything looks like it runs correctly when I run the image, but it exits once it finishes starting up my processes. I'd like the processes to run indefinitely, and as far as I understand, the container has to keep running for this to happen. Nevertheless, when I run docker ps -a, I see:
➜ docker_test docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c7706edc4189 some_name/some_repo:blah "sh /root/run-all.sh 8 minutes ago Exited (0) 8 minutes ago grave_jones
What gives? Why is it exiting? I know I could just put a while loop at the end of my bash script to keep it up, but what's the right way to keep it from exiting?
If you are using a Dockerfile, try:
ENTRYPOINT ["tail", "-f", "/dev/null"]
(Obviously this is for dev purposes only; you shouldn't need to keep a container alive unless it's running a process, e.g. nginx...)
I just had the same problem and I found out that if you are running your container with the -t and -d flag, it keeps running.
docker run -td <image>
Here is what the flags do (according to docker run --help):
-d, --detach=false Run container in background and print container ID
-t, --tty=false Allocate a pseudo-TTY
The most important one is the -t flag. -d just lets you run the container in the background.
This is not really how you should design your Docker containers.
When designing a Docker container, you're supposed to build it such that there is only one process running (i.e. you should have one container for Nginx, and one for supervisord or the app it's running); additionally, that process should run in the foreground.
The container will "exit" when the process itself exits (in your case, that process is your bash script).
However, if you really need (or want) to run multiple services in your Docker container, consider starting from "Docker Base Image", which uses runit as a pseudo-init process (runit will stay online while Nginx and Supervisor run) and will stay in the foreground while your other processes do their thing.
They have substantial docs, so you should be able to achieve what you're trying to do reasonably easily.
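As a rough sketch of the runit convention that base image uses (the service name and run script here are assumptions; see their docs for the details), you add one executable run script per service and runit keeps it running in the foreground:
(Dockerfile)
FROM phusion/baseimage
RUN mkdir -p /etc/service/nginx
ADD nginx-run.sh /etc/service/nginx/run
RUN chmod +x /etc/service/nginx/run
CMD ["/sbin/my_init"]
(nginx-run.sh)
#!/bin/sh
exec nginx -g "daemon off;"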
You can run plain cat without any arguments, as mentioned by @Sa'ad, to simply keep the container alive [actually doing nothing but waiting for user input] (Jenkins' Docker plugin does the same thing).
The reason it exits is because the shell script is run first as PID 1, and when that completes, PID 1 is gone, and Docker only keeps the container running while PID 1 is alive.
You can use supervisor to do everything, if run with the "-n" flag it's told not to daemonize, so it will stay as the first process:
CMD ["/usr/bin/supervisord", "-n"]
And your supervisord.conf:
[supervisord]
nodaemon=true
[program:startup]
priority=1
command=/root/credentialize_and_run.sh
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=false
startsecs=0
[program:nginx]
priority=10
command=nginx -g "daemon off;"
stdout_logfile=/var/log/supervisor/nginx.log
stderr_logfile=/var/log/supervisor/nginx.log
autorestart=true
Then you can have as many other processes as you want and supervisor will handle the restarting of them if needed.
That way you could use supervisord in cases where you might need nginx and php5-fpm and it doesn't make much sense to have them apart.
Motivation:
There is nothing wrong with running multiple processes inside a docker container. If one likes to use docker as a lightweight VM - so be it. Others like to split their applications into microservices. Me thinks: a LAMP stack in one container? Just great.
The answer:
Stick with a good base image like the phusion base image. There may be others. Please comment.
And this is yet just another plea for supervisor, because the phusion base image provides supervisor besides some other things like cron and locale setup - stuff you like to have set up when running such a lightweight VM. For what it's worth, it also provides ssh connections into the container.
The phusion image itself will just start and keep running if you issue this basic docker run statement:
moin@stretchDEV:~$ docker run -d phusion/baseimage
521e8a12f6ff844fb142d0e2587ed33cdc82b70aa64cce07ed6c0226d857b367
moin@stretchDEV:~$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS
521e8a12f6ff phusion/baseimage "/sbin/my_init" 12 seconds ago Up 11 seconds
Or dead simple:
If a base image is not for you... For the quick CMD to keep it running I would suggest something like this for bash:
CMD exec /bin/bash -c "trap : TERM INT; sleep infinity & wait"
Or this for busybox:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do sleep 1000; done) & wait"
This is nice, because it will exit immediately on a docker stop.
Just plain sleep or cat will take a few seconds before the container is forcefully killed by docker.
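That delay is Docker's stop grace period: docker stop sends SIGTERM first and only sends SIGKILL after a timeout, 10 seconds by default, which you can adjust:
docker stop mycontainer       # SIGTERM, then SIGKILL after 10 seconds if still running
docker stop -t 30 mycontainer # wait up to 30 seconds before the SIGKILL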
Updates
As response to Charles Desbiens concerning running multiple processes in one container:
This is an opinion, and the docs are pointing in this direction. A quote: "It’s ok to have multiple processes, but to get the most benefit out of Docker, avoid one container being responsible for multiple aspects of your overall application." For sure it is obviously much more powerful to divide your complex service into multiple containers. But there are situations where it can be beneficial to go the one-container route. Especially for appliances. The GitLab Docker image is my favourite example of a multi-process container. It makes deployment of this complex system easy. There is no room for misconfiguration. GitLab retains all control over their appliance. Win-win.
Make sure that you add daemon off; to your nginx.conf or run it with CMD ["nginx", "-g", "daemon off;"] as per the official nginx image.
Then use the following to run both supervisor as service and nginx as foreground process that will prevent the container from exiting
service supervisor start && nginx
In some cases you will need to have more than one process in your container, so forcing the container to have exactly one process won't work and can create more problems in deployment.
So you need to understand the trade-offs and make your decision accordingly.
Since Docker Engine v1.25 there has been an option called --init.
Docker Compose supports it (as init: true) as of compose file format version 3.7.
So my current CMD when running a container that should run forever:
CMD ["sleep", "infinity"]
and then run it using:
docker build -t app .
docker run --rm --init app
cf.:
rm docs and init docs
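If you are on Compose, a minimal sketch of the equivalent (compose file format 3.7 or later; the service and image names are placeholders):
version: "3.7"
services:
  app:
    image: app
    init: true
    command: ["sleep", "infinity"]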
Capture the PID of the nginx process in a variable (for example $NGINX_PID) and at the end of the entrypoint file do
wait $NGINX_PID
That way, your container will run as long as nginx is alive; when nginx stops, the container stops as well.
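A minimal sketch of such an entrypoint script (the paths and the extra background service are placeholders):
#!/bin/bash
# start any other service in the background
/opt/myapp/run.sh &
# start nginx in the background and remember its PID
nginx -g "daemon off;" &
NGINX_PID=$!
# keep the container alive exactly as long as nginx is
wait $NGINX_PID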
Along with having something along the lines of ENTRYPOINT ["tail", "-f", "/dev/null"] in your Dockerfile, you should also run the docker container with the -td option. This is particularly useful when the container runs on a remote machine. Think of it as if you had ssh'ed into a remote machine that has the image and started the container there. In this case, when you exit the ssh session, the container will get killed unless it was started with the -td option. A sample command for running your image would be: docker run -td <any other additional options> <image name>
This holds true for Docker version 20.10.2.
There are some cases during development when there is no service yet but you want to simulate it and keep the container alive.
It is very easy to write a bash placeholder that simulates a running service:
while true; do
sleep 100
done
You replace this with something more serious as development progresses.
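Wired into an image, that placeholder might look like this (the file name is a placeholder):
(Dockerfile)
COPY placeholder.sh /placeholder.sh
CMD ["/bin/bash", "/placeholder.sh"]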
How about using the supervise form of service if available?
service YOUR_SERVICE supervise
Once supervise is successfully running, it will not exit unless it is
killed or specifically asked to exit.
Saves having to create a supervisord.conf