How to ignore some containers when I run `docker-compose rm` - docker

I have four containers: node, redis, mysql, and data. When I run docker-compose rm, it removes all of my containers, including the data container. My MySQL data lives in the data container and I don't want to remove it.
Why must I remove the containers at all?
Sometimes I have to change some configuration files for node and mysql and rebuild, so I have to remove the containers and start again.
I have searched Google over and over and found nothing.

As things stand, you need to keep your data containers outside of Docker Compose for this reason. A data container shouldn't be running anyway, so this makes sense.
So, to create your data-container do something like:
docker run --name data mysql echo "App Data Container"
The echo command will complete and the container will exit immediately, but as long as you don't docker rm the container you will still be able to use it in --volumes-from commands, so you can do the following in Compose:
db:
  image: mysql
  volumes_from:
    - data
And just remove any code in docker-compose.yml to start up the data container.
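With that in place, the rebuild cycle only touches the Compose-managed services; the externally created "data" container is never listed in docker-compose.yml, so docker-compose rm leaves it alone. A rough sketch of the workflow:
# recreate only the services defined in docker-compose.yml;
# the external "data" container is untouched
docker-compose rm -f
docker-compose build
docker-compose up -d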

An alternative to docker-compose, written in Go (https://github.com/michaelsauter/crane), lets you create container groups -- including overriding the default group so that you can ignore your data containers when rebuilding your app.
Given you have a "crane.yaml" with the following containers and groups:
containers:
  my-app:
    ...
  my-data1:
    ...
  my-data2:
    ...
groups:
  default:
    - "my-app"
  data:
    - "my-data1"
    - "my-data2"
You can build your data containers once:
# create your data-only containers (safe to run several times)
crane provision data # needed when building from a Dockerfile
crane create data
# build/start your app
crane lift -r # similar to docker-compose build && docker-compose up
# force re-creation of your data-only containers...
crane create --recreate data
PS! Unlike docker-compose, even if building from Dockerfile, you MUST specify an "image" -- when not pulling, this is the name docker will give the image locally! Also note that the container names are global, and not prefixed by the folder name the way they are in docker-compose.
Note that there is at least one major pitfall with crane: it simply ignores misplaced or misspelled fields! This makes it harder to debug than docker-compose YAML.

@AdrianMouat Now I can specify a *.yml file when starting all the containers with the new 1.2rc version of docker-compose (https://github.com/docker/compose/releases), like this:
file: data.yml
data:
  image: ubuntu
  volumes:
    - "/var/lib/mysql"
Thanks for your very useful answer.
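For reference, a separate file like that can be selected with the -f/--file flag; a rough sketch, assuming the snippet above is saved as data.yml:
# create/start only the data container from its own Compose file
docker-compose -f data.yml up -d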

Related

Docker editing entrypoint of existing container

I have a docker container built from the debian:latest image.
I need to execute a bash script that will start several services.
My host machine is Windows 10 and I'm using Docker Desktop. I found configuration files in the docker-desktop-data wsl2 drive under data\docker\containers\<container_name>.
There are 2 config files there:
config.v2.json and hostconfig.json
I edited the first of them and replaced
"Entrypoint":null with "Entrypoint":["/bin/bash", "/opt/startup.sh"]
I did this while the container was down, but when I restarted it the script was not executed. When I opened the config.v2.json file again, the Entrypoint was set to null again.
I need to run this script at every container start.
An additional strange thing is that this container doesn't have any volume appearing in Docker Desktop. I could check out this container and start another one, but I need to preserve the current state of this container (installed packages, files, DB content). How can I change the entrypoint or run the script in some other way?
Is there any way to export the container to an image along with its configuration? I need to expose several ports and run the startup script. Is there any way to make every new container created from the exported image expose the same ports and run the same startup script?
Docker's typical workflow involves containers that only run a single process, and are intrinsically temporary. You'd almost never create a container, manually set it up, and try to persist it; instead, you'd write a script called a Dockerfile that describes how to create a reusable image, and then launch some number of containers from that.
It's almost always preferable to launch multiple single-process containers than to try to run multiple processes in a single container. You can use a tool like Docker Compose to describe the multiple containers and record the various options you'd need to start them:
# docker-compose.yml
# Describe the file version. Required with the stable Python implementation
# of Compose. Most recent stable version of the file format.
version: '3.8'
# Persistent storage managed by Docker; will not be accessible on the host.
volumes:
  dbdata:
# Actual containers.
services:
  # The database.
  db:
    # Use a stock Docker Hub image.
    image: postgres:15
    # Persist its data.
    volumes:
      - dbdata:/var/lib/postgresql/data
    # Describe how to set up the initial database.
    environment:
      POSTGRES_PASSWORD: passw0rd
    # Make the container accessible from outside Docker (optional).
    ports:
      # first port: any available host port;
      # second port: must be the standard PostgreSQL port 5432
      - '5432:5432'
  # Reverse proxy / static asset server
  nginx:
    image: nginx:1.23
    # Get static assets from the host system.
    volumes:
      - ./static:/usr/share/nginx/html
    # Make the container externally accessible.
    ports:
      - '8000:80'
You can check this file into source control with your application. Also consider adding a third service with a build: key that builds an image containing the actual application code; that service probably will not need volumes:.
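A hypothetical third service along those lines, nested under the existing services: block, might look like the following sketch (the ./app build context, published port, and environment variable names are assumptions, not part of the original file):
  # The application itself, built from a Dockerfile in ./app.
  app:
    build: ./app
    environment:
      # reach the database by its Compose service name
      PGHOST: db
      PGPASSWORD: passw0rd
    ports:
      - '3000:3000'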
docker-compose up -d will start this stack of containers (without -d, in the foreground). If you make a change to the docker-compose.yml file, re-running the same command will delete and recreate containers as required. Note that you are never running an unmodified debian image, nor are you manually running commands inside a container; the docker-compose.yml file completely describes the containers, their startup sequences (if not already built into the images), and any required runtime options.
Also see Networking in Compose for some details about how to make connections between containers: localhost from within a container will call out to that same container and not one of the other containers or the host system.
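In other words, a quick sketch of the difference, assuming the psql client is available inside the calling container:
# wrong: "localhost" resolves to the container you are already in
psql -h localhost -U postgres
# right: use the Compose service name as the hostname
psql -h db -U postgres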

Docker compose command is failing with conflict

I am bringing up my project dependencies using docker-compose. So far this used to work
docker-compose up -d --no-recreate;
However today I tried running the project again after couple of weeks and I was greeted with error message
Creating my-postgres ... error
ERROR: for my-postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: for postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: Encountered errors while bringing up the project.
My docker-compose.yml file is
postgres:
  container_name: my-postgres
  image: postgres:latest
  ports:
    - "15432:5432"
Docker version is
Docker version 19.03.1, build 74b1e89
Docker compose version is
docker-compose version 1.24.1, build 4667896b
Intended behavior of this call is to:
make the container if it does not exist
start the container if it exists
just chill and do nothing if the container is already started
Docker Compose normally assigns a container name based on its current project name and the name of the services: block. Specifying container_name: explicitly overrides this; but, it means you can’t launch multiple copies of the same Compose file with different project names (from different directories) because the container name you’ve explicitly chosen won’t be used.
You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.
(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)
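A minimal sketch of the same service with that line removed; Compose will then generate a name like <project>_postgres_1 on its own:
postgres:
  image: postgres:latest
  ports:
    - "15432:5432"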
In my case I moved the project to another directory.
When I tried to run docker-compose up, it failed because of some conflicts.
I resolved them with the command docker system prune.
It's caused by being in a different directory than when you last ran docker-compose up. One option is to change back to the original directory. Or if you've configured it as a systemd service you can use systemctl.
Well...the error message seems pretty straightforward to me...
The container name "/my-postgres" is already in use by container
If you just want to restart where you left, you should use docker-compose start.
Otherwise, just clean up your workspace before running it :
docker-compose down
docker-compose up -d
Remove the --no-recreate flag from your docker-compose command and execute the command again:
$ docker-compose up -d
--no-recreate is used to prevent accidental updates.
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers. To prevent Compose from picking up changes, use the --no-recreate flag.
Official Docker docs: Link
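As a rough comparison of the two invocations:
docker-compose up -d                 # recreates containers whose config or image changed
docker-compose up -d --no-recreate   # never recreates existing containers, even if changed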
I had a similar issue.
docker-compose down --remove-orphans
That worked for me.

How to avoid the "Docker cannot link to a non running container" error when the external-linked container is actually running using docker-compose

What we want to do:
We want to use docker-compose to link one already running container (A) to another container (B) by container name. We use "external-link" as both containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the error although a container with that name is running.
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
output of "docker ps":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
RUNNING_ID container_a "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 5432/tcp container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What distinguishes this question from the other "how to fix" questions:
we can't use "sudo service docker restart" (which works) as this is a production environment
We don't want to fix this every time manually but find the reason so that we can
understand what we are doing wrong
understand how to avoid this
Assumptions:
It seems like two instances of the container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID)
This might happen because we
rebuilt the container via docker-compose build and
changed the forwarded external port of the container (80801:8080)
Comment
Do not use docker-compose down as suggested in the comments; this removes volumes!
Docker links are deprecated so unless you need some functionality they provide or are on an extremely old version of docker, I'd recommend switching to docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
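For completeness, a sketch of the corresponding Compose file on the container_b side (service details elided):
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_b:
    # ...
    networks:
      - app_net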
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure that docker-compose up would know where to find your data. A named volume is preferred. More than likely, your data was not being stored in a volume, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.

Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                          custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes`
                        section of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file

How to link multiple Docker containers and encapsulate the result?

I have a Node.js web-application that connects to a Neo4j database. I would like to encapsulate these in a single Docker image (using also a Neo4j Docker container), but I'm a docker novice and can't seem to figure this out. What's the recommended way to do it in the latest Docker versions?
My intuition would be to run the Neo4j container nested inside the app container. But from what I've read, I think the supported / recommended approach is to link the containers together. What I need is pretty well illustrated in this image. But the article where the image comes from isn't clear to me. Anyway, it's using the soon-to-be-deprecated legacy container linking, while networking is recommended these days. A tutorial or explanation would be much appreciated.
Also, how does docker-compose fit into all this?
Running a container within another container would imply running a Docker engine within a Docker container. This is referred to as dind, for Docker-in-Docker, and I would strongly advise against it. You can search for 'dind' online and discover why in most cases it is a bad idea, but as it is not the main object of your question I won't expand on this subject any further.
Running both a node.js process and a neo4j process in the same container
While most people will tell you to refrain from running more than one process within a Docker container, nothing prevents you from doing so. If you want to follow this path, take a look at the Using Supervisor with Docker article from the Docker documentation website, or at the Phusion baseimage Docker image.
Just be aware that this way of doing things will make your Docker image more and more difficult to maintain over time.
Linking containers
As you found out, keeping Docker images as simple as you can (i.e: running one and only one app within a Docker container) will make your life easier on the long term.
Linking containers together is trivial when both containers run on the same Docker engine. It is just a matter of:
having your neo4j container expose the port its service listens on
running your node.js container with the --link <neo4j container name>:<alias> option
within the node.js application configuration, set the neo4j host to the <alias> hostname; Docker will take care of forwarding that connection to the IP it assigned to the neo4j container (see the sketch after this list)
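A minimal sketch of that legacy-link setup (container and image names are just placeholders):
docker run -d --name myneo4j neo4j
docker run -d --name mynodejs --link myneo4j:db <your nodejs image>
# inside the node.js app, use "db" as the neo4j hostname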
When you want to run those two containers on different hosts, things get more difficult.
With Docker Compose, you have to use the links: key to define your links
The new Docker network feature
You also discovered that linking containers won't be supported in the future and that the new way of making multiple Docker containers communicate is to create a virtual network and attach those 2 containers to that network.
Here's how to proceed:
docker network create mynet
docker run --detach --name myneo4j --net mynet neo4j
docker run --detach --name mynodejs --net mynet <your nodejs image>
Your node application configuration should then use myneo4j as the host to connect to.
To tell Docker Compose to use the new network feature, you would have to use the --x-networking option. Also you would not use the links: key.
Using the new networking feature also means that you won't be able to define any alias for the db. As a result you have to use the container name. Beware that unless you use the container_name: key in your docker-compose.yml file, Compose will create container names based on the directory which contains your docker-compose.yml file, the service name as found in the yml file and a number.
For instance, the following docker-compose.yml file, if within a directory named "foo" would create two containers named foo_web_1 and foo_db_1:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
when started with docker-compose --x-networking up, the web app configuration should then use foo_db_1 as the db hostname.
While if you use container_name:
web:
  build: .
  ports:
    - "8000:8000"
db:
  image: postgres
  container_name: mydb
when started with docker-compose --x-networking up, the web app configuration should then use mydb as the db hostname.
Example of using Docker Compose to run a web app using nodeJS and neo4j
In this example, I will show how to dockerize the example app from github project aseemk/node-neo4j-template which uses nodejs and neo4j.
I assume you already have Docker 1.9.0+ and Docker Compose 1.5+ installed.
This project will use 2 docker containers, one to run the neo4j database and one to run the nodeJS web app.
Dockerizing the web app
We need to build a Docker image from which Docker compose will run a container. For that, we will write a Dockerfile.
Create a file named Dockerfile (mind the capital D) with the following content:
FROM node
RUN git clone https://github.com/aseemk/node-neo4j-template.git
WORKDIR /node-neo4j-template
RUN npm install
# ugly 20s sleep to wait for neo4j to initialize
CMD sleep 20s && node app.js
This Dockerfile describes the steps the Docker engine will have to follow to build a docker image for our web app. This docker image will:
be based on the official node docker image
clone the nodeJS example project from Github
change the working directory to the directory containing the git clone
run the npm install command to download and install the nodeJS app dependencies
instruct docker which command to use when running a container of that image
A quick review of the nodeJS code reveals that the author allows us to configure the URL to use to connect to the neo4j database using the NEO4J_URL environment variable.
Dockerizing the neo4j database
Well people took care of that for us already. We will use the official Docker image for neo4j which can be found on the Docker Hub.
A quick review of the readme tells us to use the NEO4J_AUTH environment variable to change the neo4j password. And setting this variable to none will disable the authentication all together.
Setting up Docker Compose
In the same directory as the one containing our Dockerfile, create a docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
This Compose configuration file describes 2 services: db and web.
The db service will produce a container named my-neo4j-db from the official neo4j docker image and will start that container setting up the NEO4J_AUTH environment variable to none.
The web service will produce a container named at Docker Compose's discretion, using a docker image built from the Dockerfile found in the current directory (build: .). It will start that container setting the environment variable NEO4J_URL to http://my-neo4j-db:7474 (note how we use the name of the neo4j container, my-neo4j-db, here). Furthermore, Docker Compose will instruct the Docker engine to expose the web container's port 3000 on docker host port 80.
Firing it up
Make sure you are in the directory that contains the docker-compose.yml file and type: docker-compose --x-networking up.
Docker compose will read the docker-compose.yml file, figure out it has to first build a docker image for the web service, then create and start both containers and finally will provide you with the logs from both containers.
Once the log shows web_1 | Express server listening at: http://localhost:3000/, everything is cooked and you can point your web browser to http://<ip of the docker host>/.
To stop the application, hit Ctrl+C.
If you want to start the app in the background, use docker-compose --x-networking up -d instead. Then in order to display the logs, run docker-compose logs.
To stop the service: docker-compose stop
To delete the containers: docker-compose rm
Making neo4j storage persistent
The official neo4j docker image readme says the container persists its data on a volume at /data. We then need to instruct Docker Compose to mount that volume to a directory on the docker host.
Change the docker-compose.yml file with the following content:
db:
  container_name: my-neo4j-db
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db:7474
  ports:
    - 80:3000
With that config file, when you run docker-compose --x-networking up, Docker Compose will create a neo4j-data directory and mount it into the container at /data.
Starting a 2nd instance of the application
Create a new directory and copy over the Dockerfile and docker-compose.yml files.
We then need to edit the docker-compose.yml file to avoid name conflict for the neo4j container and the port conflict on the docker host.
Change its content to:
db:
  container_name: my-neo4j-db2
  image: neo4j
  environment:
    NEO4J_AUTH: none
  volumes:
    - ./neo4j-data:/data
web:
  build: .
  environment:
    NEO4J_URL: http://my-neo4j-db2:7474
  ports:
    - 81:3000
Now it is ready for the docker-compose --x-networking up command. Note that you must be in the directory with that new docker-compose.yml file to start the 2nd instance up.

docker postgres with initial data is not persisted over commits

I created a Rails app in a docker environment and it links to a postgres instance. I edited the postgres container to add initial data (by running rake db:setup from the rails app). Now I committed the postgres container, but the resulting image doesn't seem to remember my data when I create a new container from it.
Isn't it possible to save data in a commit and then reuse it afterwards?
I used the postgres image: https://registry.hub.docker.com/_/postgres/
The problem is that the postgres Dockerfile declares "/var/lib/postgresql/data" as a volume. This is a just a normal directory that lives outside of the Union File System used by images. Volumes live until no containers link to them and they are explicitly deleted.
You have a few choices:
Use the --volumes-from command to share data with new containers. This will only work if there is only one running postgres image at a time, but it is the best solution.
Write your own Dockerfile which creates the data before declaring the volume. This data will then be copied into the volume when the container is created.
Write an entrypoint or cmd script which populates the database at run time (a sketch follows at the end of this answer).
All of these suggestions require you to use Volumes to manage the data once the container is running. Alternatively, you could write your own Dockerfile and simply not declare a volume. You could then use docker commit to create a new image after adding data. This will probably work in the short term, but is definitely not how you should work with containers - it isn't repeatable and you will eventually run out of layers in the Union File System.
Have a look at the official Docker docs on managing data in containers for more info.
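As one concrete way to populate the database at run time (the third option above), reasonably recent builds of the official postgres image execute any *.sql or *.sh files placed in /docker-entrypoint-initdb.d when the data directory is first initialized; a minimal sketch, where seed.sql is a hypothetical dump of your initial data:
FROM postgres:9.2.10
# executed on first start, after the /var/lib/postgresql/data volume is created
COPY seed.sql /docker-entrypoint-initdb.d/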
Create a new Dockerfile and change PGDATA:
FROM postgres:9.2.10
RUN mkdir -p /var/lib/postgresql-static/data
ENV PGDATA /var/lib/postgresql-static/data
You should be all set with the following command. The most important part is the PGDATA location, which should be anything but the default.
docker run -e PGDATA=/var/lib/postgresql/pgdata -e POSTGRES_PASSWORD=YourPa$$W0rd -d postgres
It is not possible to save data during a commit, since the data resides on a mount that is specific to that container and will be removed once you run docker rm <container ID>. But you can use data volumes to share and reuse data between containers; the changes made are written directly to the volume.
You can use docker run -v /host/path:/container/path to mount the volume into the new container.
Please refer to: https://docs.docker.com/userguide/dockervolumes/
For keeping permanent data such as databases, you should define these data volumes as external, so they will not be removed or created automatically every time you run docker-compose up or down, or redeploy your stack to the swarm.
...
volumes:
  db-data:
    external: true
...
then you should create this volume:
docker volume create db-data
and use it as the data volume for your database:
...
db:
  image: postgres:latest
  volumes:
    - db-data:/var/lib/postgresql/data
  ports:
    - 5432:5432
...
In production, there are many factors to consider when using docker to keep permanent data safe, especially in swarm mode or in a Kubernetes cluster.
