I'm running a docker-compose setup, and when I want to update files in my image I build a new Docker image. The problem is that the file I'm editing lives in a persistent volume: the new image contains the change, but because I'm not deleting the docker-compose volumes, the new container still mounts the old volume and therefore sees the old file.
Running docker-compose down -v is not an option, because I want to keep the other files in the volume (logs etc.).
I want to know if it is possible to do this without too many hacks, since I'm looking to automate it.
Example docker-compose.yml
version: '3.3'
services:
  myService:
    image: myImage
    container_name: myContainer
    volumes:
      - data_volume:/var/data
volumes:
  data_volume:
NOTE: The process of making a change in my case:
docker-compose down
docker build -t myImage:t1 .
docker-compose up -d
You could start a container, mount the volume, and execute a command to delete individual files. Something like:
docker run -d --rm -v data_volume:/var/data myImage rm /var/data/[file to delete]
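If you want to fold this into an automated update flow, a minimal sketch could look like the following (settings.conf is only a placeholder for the file you actually edit):

#!/bin/bash
set -e
docker-compose down
docker build -t myImage:t1 .
# remove the stale copy from the named volume before the new container starts;
# note that compose usually prefixes the volume name with the project name
# (check with `docker volume ls`), so adjust data_volume accordingly
docker run --rm -v data_volume:/var/data myImage:t1 rm -f /var/data/settings.conf
docker-compose up -d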
I am running an image in a Docker container locally with the following command:
docker pull locustio/locust
and my docker-compose.yml looks as below, which I start with docker-compose up:
version: '3'
services:
locust-service:
image: locustio/locust
ports:
- "8089:8089"
volumes:
- ./:/mnt/locust
command: -f /mnt/locust/locustfile.py -H http://master:8089
My volume contains locustfile.py, which holds all the code to test my system. Now I need to push and deploy this image to another, private repository along with the volume, i.e. the file locustfile.py.
How can I do that with docker-compose push? Or is there another way to copy the volume? docker-compose push for the above compose file doesn't seem to work.
Volumes are generally intended to hold data, not application code. You should build your code into a derived Docker image, which then can be pushed.
You can write what you show here into a basic Dockerfile:
FROM locustio/locust
COPY locustfile.py /mnt/locust
# CMD must be a JSON array if it's passing additional options to an ENTRYPOINT
CMD ["-f", "/mnt/locust/locustfile.py", "-H", "http://master:8089"]
Then your docker-compose.yml file only needs to build and run the image, without duplicating any of these options:
version: '3.8'
services:
locust-service:
build: .
image: my-docker-hub-name/locust
ports:
- "8089:8089"
Then docker-compose build && docker-compose push would build and push the image. On the target host you'd need to copy this docker-compose.yml file but remove the build: line.
Glancing at the Locust documentation, this is similar to what it suggests under "Use docker image as a base image". You may also find it more flexible to set options via environment variables rather than command-line arguments, which lets you split options between the Dockerfile and the docker-compose.yml runtime configuration.
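For example, a variant of the Dockerfile above that uses environment variables could look roughly like this (a sketch; Locust reads options from LOCUST_*-prefixed variables, so verify the exact names against the Locust version you run):

FROM locustio/locust
COPY locustfile.py /mnt/locust
# no CMD arguments needed; Locust picks these settings up from the environment
ENV LOCUST_LOCUSTFILE=/mnt/locust/locustfile.py
ENV LOCUST_HOST=http://master:8089

The docker-compose.yml could then override LOCUST_HOST per environment under environment: without rebuilding the image.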
Only Docker images can be pushed.
Volumes are created when you run the image, i.e. when a container is created from it, as explained in the official documentation: https://docs.docker.com/storage/volumes/
Here is the example from the official documentation:
docker run -d \
--name=nginxtest \
-v nginx-vol:/usr/share/nginx/html \
nginx:latest
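For completeness, a roughly equivalent docker-compose.yml for that same example would be (a sketch; nginx-vol is the same named volume as in the docker run command):

version: '3'
services:
  nginxtest:
    image: nginx:latest
    volumes:
      - nginx-vol:/usr/share/nginx/html
volumes:
  nginx-vol: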
I would like to run an Oracle Docker container using docker-compose. In my docker-compose.yml file I mount the volume as:
volumes:
  - /host/folder:/opt/oracle/scripts/setup
The /host/folder actually has multiple subdirectories containing setup scripts, which I want to be executed when I run docker-compose up. Will runScripts.sh inside the container consider the subdirectories too?
No. docker-compose does not consider your subdirectories for that.
You can run a single bash script of your own in which you execute the specific scripts you need.
Your docker-compose.yml will look like following:
version: "3"
services:
setup:
image: ubuntu:latest
volumes:
- ./startup-script.sh:/root/startup-script.sh
- /host/folder:/opt/oracle/scripts/setup
entrypoint: "/root/startup-script.sh"
stdin_open: true
tty: true
And startup-script.sh will look like following:
#!/bin/bash
# run the individual setup scripts (adjust the paths to your layout)
bash /directory1/script.sh
bash /directory2/script.sh
bash /directory3/script.sh
bash /directory4/script.sh
# keep an interactive shell so the container stays up
/bin/bash
exec "$@"
So when the container starts, startup-script.sh is executed, which in turn runs all of your other required scripts.
Note: If your container is not based on an Ubuntu image and supports sh instead of bash, replace /bin/bash with /bin/sh in both docker-compose.yml and startup-script.sh.
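If you would rather not list every directory by hand, a minimal alternative sketch for startup-script.sh could discover the scripts itself (it assumes the scripts end in .sh and uses the mount path from the question):

#!/bin/bash
# run every *.sh under the mounted setup folder, including subdirectories
find /opt/oracle/scripts/setup -type f -name '*.sh' | sort | while read -r script; do
    echo "Running $script"
    bash "$script"
done
# keep an interactive shell afterwards, as in the script above
exec /bin/bash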
I have a docker-compose setup with just one image. This is the docker-compose.yml definition:
services:
myNodeApp:
image: "1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest"
container_name: 'myNodeApp'
volumes:
- data:/root/data
But I want to move it to plain docker run, since I am using just one container. I am executing a docker run command like the following:
docker run 1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest --name myNodeApp -v "data:/root/data"
But all I get is the message 1.12.4. However, executing docker-compose up starts the application and shows the log output.
What is the difference? What is the equivalent of docker-compose up with docker? What am I doing differently?
I think you are looking for this? In docker run, all options have to come before the image name; anything after the image is passed to the container as its command instead, which is why your original command did not behave as expected.
docker run -it --name myNodeApp -v "data:/root/data" \
  1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest
Or maybe this command would help you, because it builds a local image associated with the configuration in your docker-compose.yml:
docker-compose build
docker images
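Note that docker-compose up without -d also attaches to the container's logs. With plain docker you would typically run detached and then follow the logs yourself, roughly like this:

docker run -d --name myNodeApp -v "data:/root/data" \
  1234567890.dkr.ecr.us-west-1.amazonaws.com/myNodeApp:latest
docker logs -f myNodeApp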
I want to start a service with docker-compose and keep the container running so I can get its IP address via docker inspect. However, the container always exits right after starting up.
I tried adding command: ["sleep", "60"] and other things to the docker-compose.yml, but whenever I add the command: line I can't run docker-compose up; I get the message "Cannot start container ..... System error: invalid character 'k' looking for beginning of value"
I also tried adding "CMD sleep 60" and whatnot to the Dockerfile itself but these commands do not seem to be executed.
Is there an easy way to keep the container alive or to fix one of my problems?
EDIT:
Here is the Compose file I want to run:
version: '2'
services:
my-test:
image: ubuntu
command: bash -c "while true; do echo hello; sleep 2; done"
It works fine if I start this with docker-compose under OS X, but if I try the same under Ubuntu 16.04 it gives me the above error message.
If I try the approach with the Dockerfile, the Dockerfile looks like this:
FROM ubuntu:latest
CMD ["sleep", "60"]
This does not seem to do anything either.
EDIT 2:
I have to correct myself: it turned out to be the same problem with both the Dockerfile and the docker-compose.yml.
Whenever I add CMD ... to the Dockerfile OR command ... to the compose file, I get the above error about the invalid character. If I remove both, it works flawlessly.
To keep a container running when you start it with docker-compose, use the following command
command: tail -F anything
In the above command the last part, anything, should be included literally; the assumption is that such a file does not exist in the container. With the -F option (capital F, not to be confused with -f, which terminates immediately if the file is not found), tail will wait forever for the file anything to appear. A forever-waiting process is exactly what we need.
So your docker-compose.yml becomes
version: '2'
services:
my-test:
image: ubuntu
command: tail -F anything
and you can run a shell to get into the container using the following command
docker exec -i -t composename_my-test_1 bash
where composename is the name that docker-compose prepends to your containers.
You can use the tty configuration option.
version: '3'
services:
app:
image: node:8
tty: true # <-- This option
Note: If your image is built from a Dockerfile that sets CMD, this option alone won't work; however, you can use the entrypoint option in the compose file, which clears the CMD from the Dockerfile.
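For example, a sketch of such an override (per the Compose documentation, setting entrypoint in the compose file also clears any CMD from the image):

version: '3'
services:
  app:
    image: node:8
    entrypoint: ["tail", "-f", "/dev/null"]   # replaces the image's ENTRYPOINT and clears its CMD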
Based on a comment by @aanand on GitHub from Aug 26, 2015, one can use tail -f /dev/null in docker-compose to keep a container running.
docker-compose.yml example
version: '3'
services:
some-app:
command: tail -f /dev/null
Why this command?
The main reason for choosing this option was that it received a lot of thumbs up on GitHub, but the highest-voted suggestion is not necessarily the best one. A second, pragmatic reason was that the issue had to be solved as soon as possible because of deadlines.
Create a file called docker-compose.yml
Add the following to the file
version: "3"
services:
ubuntu:
image: ubuntu:latest
tty: true
Staying in the same directory, run docker-compose up -d from the terminal
Run docker ps to get the container id or name
You can run docker inspect $container_id
You can enter the container and get a bash shell running docker-compose exec ubuntu /bin/bash or docker-compose exec ubuntu /bin/sh
When done, make sure you are outside the container and run docker-compose down
Here's a small bash script (my-docker-shell.sh) that creates the docker-compose file, runs the container, logs you into it, and finally cleans up the container and the compose file when you log out.
#!/bin/bash
cat << 'EOF' > ./docker-compose.yml
---
version: "3"
services:
ubuntu:
image: ubuntu:latest
command: /bin/bash
# tty: true
...
EOF
printf "Now entering the container...\n"
docker-compose run ubuntu bash
docker-compose down
rm -v ./docker-compose.yml
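To use it, make the script executable and run it; when you exit the shell, the container is stopped and the generated compose file is removed:

chmod +x my-docker-shell.sh
./my-docker-shell.sh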
In the Dockerfile you can use:
CMD sleep infinity
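A minimal Dockerfile sketch using that instruction:

FROM ubuntu:latest
# keep the container's main process alive indefinitely
CMD ["sleep", "infinity"]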
Some people here mention overriding the entrypoint so that the command can also take effect, but nobody gives an example. Here is mine:
docker-compose.yml:
version: '3'
services:
etfwebapp:
# For messed up volumes and `sudo docker cp`:
command: "-f /dev/null"
entrypoint: /usr/bin/tail
tty: true
# ...
I am not sure whether tty is still needed at this point, or whether setting both is redundant; in my case it did no harm and worked perfectly. Without the entrypoint override it didn't work for me, because then command had no effect. So I guess for this solution tty is optional.
To understand which command is executed at start-up, simply read the entrypoint followed by the command (concatenated with a space): /usr/bin/tail -f /dev/null.
I'm late to the party, but you can simply use: stdin_open: true
version: '2'
services:
my-test:
image: ubuntu
stdin_open: true
A blocking command is all you need.
I struggled with this problem for half a day. There are many answers here, but none of them is clear enough, and nobody explains why.
In short, there are two methods, though arguably only one: keep a blocking process running as the container's main process.
The first one uses command:
version: '3'
services:
some-app:
command: ["some block command"]
Put some blocking command there, such as sleep infinity, tail -f /dev/null, watch anything, while true ...
Here I recommend sleep infinity.
The second is to enable tty: true and then run a shell, such as /bin/bash, as the command.
services:
ubuntu:
image: ubuntu:latest
tty: true
command: "/bin/bash"
Since a tty is allocated, bash keeps waiting for input instead of exiting; you can put other blocking commands before it if you want.
Be careful: you must execute the shell command at the end, like
command: /bin/bash -c "/root/.init-service && /bin/bash"
As you can see, all you need is blocking command.
Just a quick note
I tested a single image based on Go, and here is what I get when I call docker-compose down:
version: "3.1"
...
command: tail -f /dev/null # stopping container takes about 10 sec.
tty: true # stopping container takes about 2 sec.
My system info:
Ubuntu 18.04.4 LTS (64-bit)
Docker version 19.03.6, build 369ce74a3c
docker-compose version 1.26.0, build d4451659
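A likely explanation for the roughly 10 seconds is that tail, running as PID 1, has no SIGTERM handler, so Docker waits out its default 10-second stop timeout and then kills it. If the slow stop bothers you, one option (a sketch, not part of the original setup) is to let compose start an init process that forwards signals; this needs compose file format 3.7 or later:

version: "3.7"
services:
  some-app:
    command: tail -f /dev/null
    init: true    # an init runs as PID 1 and forwards SIGTERM to tail, so the stop is quick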
As the commenter stated, we'd have to see the Dockerfile in question to give you a complete answer, but this is a very common mistake. I can almost guarantee that the command you're trying to run starts a background process. It might be the command you'd run outside Docker, but it's the wrong thing to do in a Dockerfile. For instance, if what you're running is normally defined as a system service, you might use something like systemctl start. That starts the process in the background, which will not work; you have to run the process in the foreground so that it blocks for the whole life of the container.
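As a sketch of the difference (nginx is only an illustration here, not necessarily what you are running):

# Wrong: nginx daemonizes by default, the main process exits, and so does the container
FROM nginx
CMD ["nginx"]

# Right: run the server in the foreground so it stays the container's main process
FROM nginx
CMD ["nginx", "-g", "daemon off;"]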
Okay, I found my mistake. In the Dockerfile for the image used by the compose file I specified ubuntu:latest as the base image, but I had previously built my own image called ubuntu, and that image did not work. So I wasn't using the original ubuntu image at all, but a broken image of my own that happened to be called ubuntu as well.