I'm pretty new to Docker and especially to docker-compose and I'm running into an issue I can't seem to fix.
I have a docker-compose.yml file that looks like
version: '3.7'
services:
  backup:
    build:
      context: .
      dockerfile: Dockerfile
    command: sh -c "while :;do sleep 5; done"
    tty: true
    stdin_open: true
    volumes:
      - ./data:/app/data
and I have a file called start.sh that is as simple as
python3 -u ./upload_to_s3.py > log/upload_to_s3.f9beb4d9.out 2>&1 &
When I run docker-compose exec backup /bin/sh, I can get a shell in the container and run ./start.sh, and it starts my processes, which I can verify with a simple ps aux. However, when I run
docker-compose exec backup sh start.sh
it doesn't seem to run at all.
I try to verify by getting back into the container and running ps aux and, in fact, the Python script is not running.
What's going on? Why can't I seem to run my start.sh file using docker-compose?
EDIT: I've also tried to run this using docker-compose run --rm --detach --entrypoint="sh" backup -c "/app/start.sh" and I get the exact same issue.
The script you show starts a background process. But if that's run in the context of a docker exec debugging shell, as soon as the docker exec command completes, any background processes that are still running will get terminated.
I might run this in a temporary container instead of a docker exec session. The important thing is to run this as a foreground process instead of launching a background job. For example:
docker-compose run backup \
./upload_to_s3.py
docker-compose run will inherit many of the settings from the backup service, like its image: and volumes: mounts, but you get to specify the command to run at the command line. This also saves you the trouble of keeping a meaningless container alive so that you can docker exec into it later; just run a new container for these one-off tasks.
(Note, the specific invocation I've shown here assumes that the Python script is marked executable, with chmod +x if required; that it begins with a "shebang" line like #!/usr/bin/env python3; and that the image sets an environment variable ENV PYTHONUNBUFFERED=1.)
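A sketch of what that can look like on the image side (the base image and file layout here are assumptions, not the asker's actual Dockerfile):
# a minimal sketch, assuming a slim Python base image
FROM python:3.9-slim
# replaces the -u flag from the original command
ENV PYTHONUNBUFFERED=1
WORKDIR /app
# the script itself should start with a shebang line: #!/usr/bin/env python3
COPY upload_to_s3.py ./
# only needed if the file isn't already executable in the build context
RUN chmod +x upload_to_s3.py
CMD ["./upload_to_s3.py"]
Run this way, the script's output shows up on your terminal instead of in a file inside the container.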
Related
I'm trying to replicate a docker run command with options in a docker-compose file:
My Dockerfile is:
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update -y
RUN apt-get install -y python3-pip python3-dev python3-opencv
RUN apt-get install -y libcanberra-gtk-module libcanberra-gtk0 libcanberra-gtk3-module
WORKDIR /
RUN mkdir /imgs
COPY app.py ./
CMD ["/bin/bash"]
And I use the following commands to build and run the container so that it can display images from the shared volume properly:
docker build -t docker_test:v1 .
docker run -it --net=host --env=DISPLAY --volume=$HOME/.Xauthority:/root/.Xauthority docker_test:v1
In order to replicate the previous command, I tried the docker-compose file below:
version: "3.7"
services: docker_test:
container_name: docker_test
build: .
environment:
- DISPLAY=:1
volumes:
- $HOME/.Xauthority:/root/.Xauthority
- $HOME/docker_test/imgs:/imgs
network_mode: "host"
However, after building the image and running the app script from inside the container (app.py is copied into the image, not read from the shared volume):
docker-compose up
docker run -ti docker_test_docker_test
python3 app.py
The following error arises:
Unable to init server: Could not connect: Connection refused
(OpenCV Image Reading:9): Gtk-WARNING **: 09:15:24.889: cannot open display:
In addition, the volumes do not seem to be shared.
docker run never looks at a docker-compose.yml file; every option you need to run the container needs to be specified directly in the docker run command. Conversely, Compose is much better at running long-running processes than at running interactive shells (and you want the container to run the program directly, in much the same way you don't typically start a python REPL and invoke main() from there).
With your setup, first you're launching a container via Compose. This will promptly exit (because the main container command is an interactive bash shell and it has no stdin). Then, you're launching a second container with default options and manually running your script there. Since there's no docker run -e DISPLAY option, it doesn't see that environment variable.
The first thing to change here, then, is to make the image's CMD start the application:
...
COPY app.py .
CMD ./app.py
Then running docker-compose up (or docker run your-image) will start the application without further intervention from you. You probably need a couple of other settings to successfully connect to the host display (propagating $DISPLAY unmodified, mounting the host's X socket into the container); see Can you run GUI applications in a Linux Docker container?.
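A minimal sketch of those settings in Compose (mounting the host's X socket at /tmp/.X11-unix and passing DISPLAY through unmodified are assumptions about a typical Linux desktop setup):
version: "3.7"
services:
  docker_test:
    build: .
    network_mode: "host"
    environment:
      # pass the host's $DISPLAY through unchanged instead of hard-coding :1
      - DISPLAY
    volumes:
      # host X socket and Xauthority, plus the image directory from the question
      - /tmp/.X11-unix:/tmp/.X11-unix
      - $HOME/.Xauthority:/root/.Xauthority
      - $HOME/docker_test/imgs:/imgs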
(If you're trying to access the host display and use the host network, consider whether an isolation system like Docker is actually the right tool; it would be much simpler to directly run ./app.py in a Python virtual environment.)
While in the process of painstakingly sculpting a Dockerfile and docker-compose.yml, what is THE RIGHT WAY to run a root shell in work-in-progress containers (without actually starting their services!) in order to debug issues? I need to be able to run the shell as root, because only root has full access to files containing the information that I need to examine.
I can modify Dockerfile and docker-compose.yml in order to achieve this goal; as I wrote above, I am in the process of sculpting those anyway.
The problem however is that about the only way I can think of is putting USER root in Dockerfile or user: root in docker-compose.yml, but those SimplyHaveNoEffect™ in the docker-compose run <service> bash scenario. whoami in the shell thus started says neo4j instead of root, no matter what I try.
I might add sudo to the image, which doesn't have sudo, but this should be considered a last resort. Also, using docker directly instead of docker-compose is less than preferable.
All of the commands that launch shells in containers (including, for example, docker-compose run) have a --user option, so you can specify an arbitrary user for your debugging shell.
docker-compose run -u root <service> bash
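If the service is already up and you just want a root shell in its running container, docker-compose exec accepts the same flag:
docker-compose exec -u root <service> bash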
If you're in the process of debugging your image build, note that each build step produces an image, and you can run a debugging shell on that image. (For example, examine the step before a RUN step to see what the filesystem looks like before it executes, or after to see its results.)
$ docker build .
...
Step 7/9 : RUN ...
---> Using cache
---> 55c91a5dca05
...
$ docker run --rm -it -u root 55c91a5dca05 bash
In both of these cases the command (bash) overrides the CMD in the Dockerfile. If you have an ENTRYPOINT wrapper script, that will still run, but the standard exec "$@" at the end of it will launch your debugging shell. If you've put your default command to run as ENTRYPOINT, change it to CMD to better support this use case (and also the wrapper entrypoint pattern, should you need it).
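A sketch of that split in a Dockerfile (entrypoint.sh and my-app are placeholder names, not anything from your image):
# wrapper that does any setup and must end with: exec "$@"
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
# default command; easily replaced with bash for a debugging shell
CMD ["./my-app"]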
If you really can't change the Dockerfile, you can override the ENTRYPOINT too, but it's a little awkward.
docker run --rm -it -u root --entrypoint ls myimage -al /app
You can also use it this way:
version: '3'
services:
  jenkins:
    user: root
    image: jenkins/jenkins:lts
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - /jenkins:/var/jenkins_home
You can refer to How to configure docker-compose.yml to up a container as root.
I am trying to write a docker-compose file that references a Dockerfile in the same directory. The purpose of this docker-compose file is to run the command htop. When I build my Dockerfile image, it runs htop perfectly fine and I can pass arguments to the entrypoint. However, whenever I try to run docker-compose up, it starts the htop instances but then exits immediately. Is there any way I can open two terminals or two containers, with each container running an htop instance?
Dockerfile:
FROM alpine:latest
MAINTAINER anon
RUN apk --no-cache add \
    htop
ENTRYPOINT ["htop"]
docker-compose.yml
version: '3'
services:
  htop_one:
    build: .
    environment:
      TERM: "linux"
  htop_two:
    build: .
    environment:
      TERM: "linux"
Any help would be greatly appreciated!
The immediate problem is a terminal incompatibility. You run this from a terminal that is unknown to the software in the docker image.
The second problem, of the containers exiting immediately, could be fixed by using a proper init like tini:
Dockerfile:
FROM alpine:latest
MAINTAINER anon
RUN apk --no-cache add \
    htop \
    tini
ENTRYPOINT ["/sbin/tini", "--"]
docker-compose.yaml:
version: '3'
services:
htop_one:
build: .
environment:
TERM: "linux"
command: ["top"]
htop_two:
build: .
environment:
TERM: "linux"
command: ["top"]
To run the two services in parallel, as they each need a controlling terminal, you would run, from two different terminals:
docker-compose up htop_one
and
docker-compose up htop_two
respectively.
Note this is creating two containers from the same image. Each docker-compose service is, of course, run in a separate container.
If you'd like to run commands in the same container, you could start a service like
docker-compose up myservice
and run commands in it:
docker exec -it <container_name> htop
from different terminals, as many times as you'd like.
Note also that you can determine container_name via docker container ls, and you can also set the container name from the docker-compose file.
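For example, pinning the name in the compose file (a sketch based on the htop_one service above):
services:
  htop_one:
    build: .
    container_name: htop_one
    environment:
      TERM: "linux"
Then docker exec -it htop_one htop works without having to look up the generated name first.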
On the issue of your htop command exiting, and thus causing your docker container to exit: this is normal behavior for docker containers. htop is most likely exiting because it can't figure out the terminal when run in a docker image, as @petre mentioned. When you run your docker image, be sure to use the -i option (together with -t) for an interactive session.
docker run -it MYIMAGE htop
To change the docker auto-exit behavior, do something like this in your Dockerfile:
CMD exec /bin/sh -c "trap : TERM INT; (while true; do MYCOMMAND; sleep 1000; done) & wait"
This runs your MYCOMMAND command over and over again, but allows the container to be stopped when you want. You can run a docker exec -it MYCONTAINER sh when you want to do other things in that same container.
Also, if you happen to be running docker on Windows, prefix the docker command with winpty, like winpty docker ..., so it can get the terminal correct.
I often have to recreate my container using the "docker-compose up" command. The problem I have is that every time after I recreate the container, I have to go to a terminal within the container and run a command like "sudo service xx start" to start the app. Is there a provision for me to include that sudo command within my docker-compose file, so that I can avoid this extra step?
I tried adding the following line within docker-compose, but it does not work: "command: sudo service.."
Any help is appreciated. Thank you.
Docker needs a process to keep running, otherwise the container will exit. Therefore a sudo service xx start, which starts the process in the background, won't work.
One possible solution is to append another command such as tail or bash:
command: service xx start && tail -f /dev/null
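Note that Compose does not run a string command through a shell on its own, so for the && to take effect you can wrap the whole thing in a shell yourself (a sketch; the concrete example below does the same by setting bash as the entrypoint):
command: sh -c "service xx start && tail -f /dev/null"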
Edited to add a concrete example with cron.
The Dockerfile:
FROM ubuntu:latest
RUN apt-get update && apt-get -y install cron
# Create the log file to be able to run tail
RUN touch /var/log/cron.log
# set the CMD and ENTRYPOINT in docker-compose
Build the image
docker build -t my-test .
Add entrypoint and command in docker-compose.yml:
version: "3.4"
services:
service:
image: my-test
entrypoint: /bin/bash
command: -c "service cron start && tail -f /var/log/cron.log"
By adding the tail command the container does not exit.
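To try it out (a quick sketch; the service name service comes from the compose file above):
docker-compose up -d
docker-compose exec service service cron status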
I’m trying to use docker-compose to bring up a container. As an ENTRYPOINT, that container has a simple bash script that I wrote. Now, when I try to bring up the container in any of the below ways:
docker-compose up
docker-compose up foo
it doesn't complete. That is, trying to attach (docker exec -i -t $1 /bin/bash) to the running container fails with:
Error response from daemon: Container xyz is restarting, wait until the container is running.
I tried playing around with putting commands in the background. That didn’t work.
my-script.sh
cmd1 &
cmd2 &&
...
cmdn &
I also tried i) with and without entrypoint: /bin/sh /usr/local/bin/my-script.sh and ii) with and without the tty: true option. No dice.
docker-compose.yml
version: '2'
services:
  foo:
    build:
      context: .
      dockerfile: Dockerfile.foo
    ...
    tty: true
    entrypoint: /bin/sh /usr/local/bin/my-script.sh
I also tried just a manual docker build / run cycle, and (without launching /bin/sh in the ENTRYPOINT) the run just exits.
$ docker build ... .
$ docker run -it ...
... shell echoes commands here, then exits
$
I'm sure it's something simple. What's the solution here?
Your entrypoint in your docker-compose.yml only needs to be
entrypoint: /usr/local/bin/my-script.sh
Just add #! /bin/sh to the top of the script to specify the shell you want to use.
You also need to add exec "$@" to the bottom of your entrypoint script, or else it will exit immediately, which will terminate the container.
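Putting those two points together, a minimal my-script.sh sketch (the commented-out commands are placeholders for whatever your script actually does):
#!/bin/sh
# start any background helpers or do one-time setup here, e.g.:
# cmd1 &
# cmd2 &
# then hand control to whatever command was passed to the container,
# so there is a long-lived foreground process
exec "$@"
With entrypoint: /usr/local/bin/my-script.sh in docker-compose.yml, you'll also want a command: there so that "$@" has something to exec.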
First of all, you need something that keeps running to hold your container up in the background, for example tail -f application.log or anything similar, so that even if you exit the container's shell it keeps running in the background.
You do not need to do cmd1 & cmd2 && ... cmdn &; just place one command like touch 1.txt && tail -f 1.txt as the last step in your my-script.sh. It will keep your container running.
One more thing you need to change is to use docker run -it -d; the -d flag starts the container in background (detached) mode. If you want to go inside your container, run docker exec -it container_name/id bash, debug the issue, and exit. The container will keep running until you stop it with docker stop container_id/name.
Hope this helps.
Thank you!