I often have to recreate my container using the docker-compose up command. The problem is that every time after I recreate the container, I have to open a terminal inside it and run a command like sudo service xx start to start the app. Is there a way to include that command in my docker-compose file, so that I can avoid this extra step?
I tried adding the following line to my docker-compose file, but it does not work: command: sudo service..
Any help is appreciated. Thank you.
Docker needs a process to keep running, otherwise the container will exit. Therefore a sudo service xx start, which starts the process in the background, won't work.
One possible solution is to append another command such as tail or bash:
command: sh -c "service xx start && tail -f /dev/null"
Edited to add a concrete example with cron.
The Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
# Create the log file to run tail on
RUN touch /var/log/cron.log
# set the CMD and ENTRYPOINT in docker-compose
Build the image
docker build -t my-test .
Add entrypoint and command in docker-compose.yml:
version: "3.4"
services:
  service:
    image: my-test
    entrypoint: /bin/bash
    command: -c "service cron start && tail -f /var/log/cron.log"
By adding the tail command, the container does not exit.
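To check that the container stays up, a quick test could look like this (assuming the compose file above is in the current directory):
docker-compose up -d service
docker-compose exec service ps aux   # cron and tail should appear in the list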
I'm pretty new to Docker, and especially to docker-compose, and I'm running into an issue I can't seem to fix.
I have a docker-compose.yml file that looks like
version: '3.7'
services:
  backup:
    build:
      context: .
      dockerfile: Dockerfile
    command: sh -c "while :;do sleep 5; done"
    tty: true
    stdin_open: true
    volumes:
      - ./data:/app/data
and I have a file called start.sh that looks as simple as
python3 -u ./upload_to_s3.py > log/upload_to_s3.f9beb4d9.out 2>&1 &
When I run docker-compose exec backup /bin/sh, I can get into the container, run ./start.sh, and it will start my processes, which I can verify through a simple ps aux. However, when I run
docker-compose exec backup sh start.sh
it doesn't seem to run at all.
I try to verify by getting back into the container and running ps aux, and, in fact, the python script is not running.
What's going on? Why can't I seem to run my start.sh file using docker-compose?
EDIT: I've also tried to run this using docker-compose run --rm --detach --entrypoint="sh" backup -c "/app/start.sh" and I get the exact same issue.
The script you show starts a background process. But if that's run in the context of a docker exec debugging shell, as soon as the docker exec command completes, any background processes that are still running will get terminated.
I might run this in a temporary container instead of a docker exec session. The important thing is to run this as a foreground process instead of launching a background job. For example:
docker-compose run backup \
./upload_to_s3.py
docker-compose run will inherit many of the settings from the backup container, like its image: and volumes: mounts, but you get to specify the command: to run at the command line. This also saves you the trouble of keeping a meaningless container alive so that you can docker exec into it later; just run a new container for these one-off tasks.
(Note, the specific invocation I've shown here assumes that the Python script is marked executable, with chmod +x if required; that it begins with a "shebang" line like #!/usr/bin/env python3; and that the image sets an environment variable ENV PYTHONUNBUFFERED=1.)
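If it helps, a minimal Dockerfile sketch satisfying those assumptions might look like this (the python:3.9 base image and file layout are my assumptions, not taken from the question):
FROM python:3.9
# PYTHONUNBUFFERED=1 makes print() output appear immediately in docker logs
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# upload_to_s3.py should start with a shebang line: #!/usr/bin/env python3
COPY upload_to_s3.py ./
RUN chmod +x ./upload_to_s3.py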
I am trying to write a docker-compose file that references a Dockerfile in the same directory. The purpose of this docker-compose file is to run the command htop. When I build my Dockerfile image, it runs htop perfectly fine, and I can pass arguments to an entrypoint. However, whenever I run docker-compose up, it starts the htop instances but then exits immediately. Is there any way I can open two terminals or two containers, with each container running an htop instance?
Dockerfile:
FROM alpine:latest
MAINTAINER anon
RUN apk --no-cache add \
htop
ENTRYPOINT ["htop"]
docker-compose.yml
version: '3'
services:
  htop_one:
    build: .
    environment:
      TERM: "linux"
  htop_two:
    build: .
    environment:
      TERM: "linux"
Any help would be greatly appreciated!
The immediate problem is a terminal incompatibility: you are running this from a terminal that is unknown to the software in the docker image.
The second problem, the containers exiting immediately, can be fixed by using a proper init such as tini:
Dockerfile:
FROM alpine:latest
MAINTAINER anon
RUN apk --no-cache add \
    htop \
    tini
ENTRYPOINT ["/sbin/tini", "--"]
docker-compose.yaml:
version: '3'
services:
  htop_one:
    build: .
    tty: true   # allocate a TTY so the program can draw its UI
    environment:
      TERM: "linux"
    command: ["top"]
  htop_two:
    build: .
    tty: true
    environment:
      TERM: "linux"
    command: ["top"]
To run the two services in parallel, as they each need a controlling terminal, you would run, from two different terminals:
docker-compose up htop_one
and
docker-compose up htop_two
respectively.
Note this is creating two containers from the same image. Each docker-compose service is, of course, run in a separate container.
If you'd like to run commands in the same container, you could start a service like
docker-compose up myservice
and run commands in it:
docker exec -it <container_name> htop
from different terminals, as many times as you'd like.
Note also that you can determine the container name via docker container ls, and you can also set the container name in the docker-compose file.
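For example, a minimal compose snippet that pins the name (my-htop-service is a made-up name):
services:
  htop_one:
    build: .
    container_name: my-htop-service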
On the issue of your htop command exiting, thus causing your docker container to exit: this is normal behavior for docker containers. htop is most likely exiting because it can't figure out the terminal when run inside a docker container, as @petre mentioned. When you run your docker image, be sure to use the -it options for an interactive session with a TTY.
docker run -it MYIMAGE htop
To change the docker auto-exit behavior, do something like this in your Dockerfile:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do MYCOMMAND; sleep 1000; done) & wait"
This runs your MYCOMMAND command over and over again, but allows the container to be stopped when you want. You can run docker exec -it MYCONTAINER sh when you want to do other things in that same container.
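Unpacked into a commented entrypoint script, that one-liner is doing roughly this (MYCOMMAND is still a placeholder for your real command):
#!/bin/sh
# Install a no-op handler so TERM/INT interrupt `wait` below
trap : TERM INT
# Re-run the command forever, in the background
(while true; do MYCOMMAND; sleep 1000; done) &
# Block until a signal arrives; `docker stop` then ends the container
# promptly instead of waiting for the kill timeout
wait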
Also, if you happen to be running docker on Windows, prefix the docker command with winpty, like winpty docker ..., so it can get the terminal right.
I'm trying to run sendmailconfig after my PHP-FPM (7.1-fpm) docker container has started, but I'm having a hard time doing so without getting in the way of the FPM part of the container.
FROM php:7.1-fpm
RUN apt-get update && apt-get install
CMD "/usr/local/bin/config.sh" && /bin/bash
I've tried making a script that purely executes yes | sendmailconfig, but it seems to stop the image's default script from running, which causes PHP-FPM to never actually run.
The reason I want this done in the image is that I have to run the sendmailconfig command every time I restart the container, which is impractical when managing multiple docker stacks.
Set your entrypoint to run a file you've copied in; that file should have something like the following in it:
#!/bin/bash
/usr/local/bin/config.sh

# If this isn't the correct command for you to start php-fpm, look up the correct one for your image
sudo service php7.1-fpm start

# Execute the CMD passed in from the Dockerfile
sudo -H bash -c "$@;"
# You'll probably be ok with just `bash -c "$@;"` if you don't have sudo installed
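The Dockerfile wiring for that could look roughly like this sketch (the entrypoint.sh name and the sendmail/sudo package list are my assumptions; php-fpm is the default CMD of the php:7.1-fpm image):
FROM php:7.1-fpm
# Assumed packages; adjust to whatever provides sendmailconfig for you
RUN apt-get update && apt-get install -y sendmail sudo
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
ENTRYPOINT ["/usr/local/bin/entrypoint.sh"]
# The CMD is what the entrypoint script receives as "$@"
CMD ["php-fpm"]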
I'm new to docker.
I'm setting up an nginx server to serve static files inside a docker container, and I'd like nginx to start automatically on every startup of the container.
I've tried changing ENTRYPOINT, CMD, and crontab when building the Docker image. But these settings to run nginx on every startup work only the first time I "run" a container. When I "stop" the container and "start" it again, nginx does not start automatically inside the container.
I'm looking for a way to start nginx on every startup of a container and my first question is "is it possible to do this?"
My second question is about the container lifecycle. Given that there are not many discussions on this subject (all discussions are about automatically running a script or something else at the moment of "run"), I wonder if it is more efficient to "run" and "kill" a container each time rather than just "stopping" and "starting" it.
Here are the lines of code I tried for the Docker image (with crontab), which was my first try.
RUN apt-get install -y cron
COPY run_server /etc/cron.d/run_server
RUN chmod 0644 /etc/cron.d/run_server
RUN crontab /etc/cron.d/run_server
RUN touch /var/log/cron.log
CMD cron && tail -f /var/log/cron.log
run_server is a simple crontab config file which includes:
@reboot service nginx start
Since this was not the solution I was looking for (it worked only when I "ran" a container, not when I "stopped" and "started" one), I tried with supervisor, too.
RUN apt-get -y install supervisor && \
mkdir -p /var/log/supervisor && \
mkdir -p /etc/supervisor/conf.d
ADD supervisor.conf /etc/supervisor.conf
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
supervisor.conf contains:
[supervisord]
nodaemon=true
[program:run_server]
command=/usr/bin/python3.6 /home/server.py
autostart=true
directory=/home
redirect_stderr=true
But neither of them worked the way I wanted.
My Dockerfile and container (CentOS 8 and nginx), on Linux Mint 19.3, Docker version 19.03.4:
# howto: Dockerfile
# CentOS 8 and nginx
# docker build -t centose .
# docker run -it -p 80:80 centose
# curl localhost
FROM centos:latest
# MAINTAINER їван
RUN yum -y install nginx
EXPOSE 80
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
#WORKDIR /usr/sbin/
I would try to update my Dockerfile and enable the nginx service so it will be started during the next reboot.
Here are a couple of different ways:
RUN systemctl enable nginx
RUN service nginx start
Another way would be to add a bootstrap script that starts the service:
#!/bin/bash
sudo service nginx start
tail -f /var/log/nginx/error.log
Make sure the bootstrap.sh is executable, i.e. sudo chmod +x bootstrap.sh.
Then update your Dockerfile:
COPY bootstrap.sh /bin/.
CMD ["bootstrap.sh"]
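Then build and run as usual; with the image name from the comments in the Dockerfile above, that would be:
docker build -t centose .
docker run -it -p 80:80 centose
curl localhost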
You can do that using a Docker restart policy.
You can simply set it when creating the container, or update it on containers that have already been created.
Examples: docker run -t -d --restart unless-stopped nginx
To update an existing container: docker update --restart unless-stopped {container ID}
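Since this thread is about docker-compose, note that the equivalent in a compose file is the restart key; a minimal sketch (the service and image names here are illustrative):
version: '3'
services:
  web:
    image: nginx
    restart: unless-stopped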
I’m trying to use docker-compose to bring up a container. As an ENTRYPOINT, that container has a simple bash script that I wrote. Now, when I try to bring up the container in any of the below ways:
docker-compose up
docker-compose up foo
it doesn't complete; i.e., trying to attach (docker exec -i -t $1 /bin/bash) to the running container fails with:
Error response from daemon: Container xyz is restarting, wait until the container is running.
I tried playing around with putting commands in the background. That didn’t work.
my-script.sh
cmd1 &
cmd2 &&
...
cmdn &
I also tried i) with and without entrypoint: /bin/sh /usr/local/bin/my-script.sh and ii) with and without the tty: true option. No dice.
docker-compose.yml
version: '2'
services:
  foo:
    build:
      context: .
      dockerfile: Dockerfile.foo
    ...
    tty: true
    entrypoint: /bin/sh /usr/local/bin/my-script.sh
I also tried just a manual docker build / run cycle, and (without launching /bin/sh in the ENTRYPOINT) the run just exits.
$ docker build ... .
$ docker run -it ...
... shell echoes commands here, then exits
$
I'm sure it's something simple. What's the solution here?
Your entrypoint in your docker-compose.yml only needs to be
entrypoint: /usr/local/bin/my-script.sh
Just add #!/bin/sh to the top of the script to specify the shell you want to use.
You also need to add exec "$@" to the bottom of your entrypoint script, or else it will exit immediately, which will terminate the container.
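Put together, a minimal sketch of my-script.sh under those rules (cmd1/cmd2 stand in for the real background commands from the question):
#!/bin/sh
# Launch the background helpers
cmd1 &
cmd2 &
# Replace this shell with the container's CMD so something keeps
# running in the foreground; when it exits, the container exits
exec "$@"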
First of all, you need to run something that never exits to keep your container alive in the background, for example tail -f application.log, so that the container keeps running even after you exit its bash session.
You do not need to do cmd1 & cmd2 && ... cmdn &; just place a single command such as touch 1.txt && tail -f 1.txt as the last step in your my-script.sh. It will keep your container running.
One more thing you need to change: use docker run -it -d. The -d flag starts the container in background (detached) mode. If you want to go inside your container, run docker exec -it container_name/id bash, debug the issue, and exit. The container will keep running until you stop it with docker stop container_id/name.
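Concretely, that workflow looks like this (the container and image names are illustrative):
docker run -it -d --name mycontainer myimage   # -d runs it in the background
docker exec -it mycontainer bash               # debug inside, then exit
docker stop mycontainer                        # stops the container for good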
Hope this helps.
Thank you!