How can I link an image-created volume with a docker-compose specified named volume? - docker

I have been trying to use docker-compose to spin up a postgres container, with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!) - one container dies or is killed, another takes its place without losing previously persisted data.
As I understand it, "named volumes" are supposed to replace "Data Volume Containers".
However, so far either one of two things happen:
The postgres container fails to start up, with error message "ERROR: Container command not found or does not exist."
I achieve persistence for only that specific container. If it is stopped and removed and another container started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. Which would be fine, if I could just get THAT volume aliased or linked or something with the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd
docker-compose version 1.6.0, build d99cad6

Ok, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up and my data is in the state where it was left with the down command).
In general, a few things:
Don't use the PGDATA environment option with the official postgres image
If using Spring Boot (like I was) with Docker Compose (as I was) and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile. An illustration of this follows below.
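To illustrate point 2, a minimal sketch (the profile name docker here is hypothetical):
environment:
  # wrong: the quotes are part of the YAML scalar, so Spring looks for a
  # profile literally named "docker", quotes included
  - SPRING_PROFILES_ACTIVE="docker"
environment:
  # right: pass the bare profile name
  - SPRING_PROFILES_ACTIVE=docker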
I had some subtle and strange things incorrectly configured initially, but I suspect the killer was point 2 above - it caused my app, when running in a container, to use the in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly - until container shutdown. And, when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).

You need to tell Compose that it should manage creation of the Volume, otherwise it assumes it should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external
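As a quick sanity check, a sketch assuming the myappdb volume name from the question (note that docker-compose down leaves named volumes in place unless you pass -v):
$ docker-compose up -d
$ docker-compose down      # containers removed, named volume survives
$ docker volume ls         # shows <project>_myappdb
$ docker-compose up -d     # data is still there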

Related

Is it necessary to rebuild a container to change ports, or is stop/start enough?

I have a Compose file with four services. I need to open one of them to the outside by setting ports.
After changing the .yml file, do I need to 'rebuild the container' (docker-compose down/up) or is stop/start enough (docker-compose stop/start)?
Specifically, what I need to make accessible to the outside is a Postgres server. This is my current postgres service definition in the .yml:
mydb:
  image: postgres:9.4
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I think I just need to change it to:
mydb:
  image: postgres:9.4
  ports:
    - "5432:5432"
  environment:
    - POSTGRES_PASSWORD=myPassword
  volumes:
    - db-data:/var/lib/postgresql/data
I'm worried about losing data on the 'db-data' volume, or the connection to the other services, if I use down/up.
Also, there are 3 other services specified in the .yml file. If it is necessary to REBUILD (without losing data in db-data, of course), I don't want to touch these other containers. In this case, what would the steps be?
First, rebuild 'mydb' container with 'docker run' (Could you provide me the right command, please?)
Modify the .yml as stated before, just adding the ports
Perform a simple docker-compose stop/start
Could you help me, please?
If you're only changing settings like ports:, it is enough to run docker-compose up -d again. Compose will figure out which things are different from the existing containers, and destroy and recreate only those specific containers.
If you're changing a Dockerfile or your application code you may specifically need to docker-compose build your application or use docker-compose up -d --build. But you don't specifically need to rebuild the images if you're only changing runtime settings like ports:.
docker-compose down tears down your entire container stack. You don't need it for routine rebuilds or container updates; it's useful when you intentionally want to shut the whole stack down and free up host ports, memory, and other resources.
docker-compose stop leaves the containers in an unusual state of existing but without a running process. You almost never need this. docker-compose start restarts containers in this unusual state, and you also almost never need it.
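So in this case the workflow is just (a sketch, assuming the mydb service name from the question; the db-data named volume survives the recreate):
$ # edit docker-compose.yml to add the ports: mapping, then:
$ docker-compose up -d
$ # only mydb gets recreated; the other three services are left untouched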
You have to rebuild it.
For that reason the best practice is to map all the mount points and resources externally, so you can recreate the container (with changed parameters) without any loss of data.
In your scenario I see that you put all the data in an external docker volume, so I think you could recreate it with changed ports in a safe way.

Docker compose command is failing with conflict

I am bringing up my project dependencies using docker-compose. So far this used to work
docker-compose up -d --no-recreate;
However, today I tried running the project again after a couple of weeks and I was greeted with this error message:
Creating my-postgres ... error
ERROR: for my-postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: for postgres Cannot create container for service postgres: b'Conflict. The container name "/my-postgres" is already in use by container "dbd06bb1d99eda6f075ea688df16e8b355e559e1759f084dee8f3cddfc535b0b". You have to remove (or rename) that container to be able to reuse that name.'
ERROR: Encountered errors while bringing up the project.
My docker-compose.yml file is
postgres:
  container_name: my-postgres
  image: postgres:latest
  ports:
    - "15432:5432"
Docker version is
Docker version 19.03.1, build 74b1e89
Docker compose version is
docker-compose version 1.24.1, build 4667896b
The intended behavior of this call is to:
make the container if it does not exist
start the container if it exists
just chill and do nothing if the container is already started
Docker Compose normally assigns a container name based on its current project name and the name of the services: block. Specifying container_name: explicitly overrides this, but it means you can't launch multiple copies of the same Compose file under different project names (from different directories), because both copies would try to claim the single container name you've explicitly chosen.
You almost never care what the container name is explicitly. It only really matters if you’re trying to use plain docker commands to manipulate Compose-managed containers; it has no impact on inter-service communication. Just delete the container_name: line.
(For similar reasons you can almost always delete hostname: and links: sections if you have them with no practical impact on your overall system.)
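A minimal sketch of the same service without container_name (the project name myproj below is hypothetical):
postgres:
  image: postgres:latest
  ports:
    - "15432:5432"
With this, Compose derives the name itself - e.g. a project named myproj yields a container called myproj_postgres_1 - so two checkouts in different directories no longer collide.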
In my case I had moved the project to another directory.
When I tried to run docker-compose up, it failed because of some conflicts.
I resolved them with docker system prune.
It's caused by being in a different directory than when you last ran docker-compose up. One option is to change back to the original directory. Or if you've configured it as a systemd service you can use systemctl.
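If you want Compose to keep managing the existing containers from the new directory, you can also pin the project name explicitly (a sketch; myproject stands in for whatever project name the original directory produced):
$ docker-compose -p myproject up -d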
Well...the error message seems pretty straightforward to me...
The container name "/my-postgres" is already in use by container
If you just want to restart where you left off, you should use docker-compose start.
Otherwise, just clean up your workspace before running it:
docker-compose down
docker-compose up -d
Remove the --no-recreate flag from your docker-compose command and execute the command again:
$ docker-compose up -d
--no-recreate is used to prevent accidental updates.
If there are existing containers for a service, and the service’s configuration or image was changed after the container’s creation, docker-compose up picks up the changes by stopping and recreating the containers. To prevent Compose from picking up changes, use the --no-recreate flag.
(From the official Docker docs.)
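In short, a sketch of the two behaviors:
$ docker-compose up -d                 # recreates containers whose config or image changed
$ docker-compose up -d --no-recreate   # reuses existing containers even if the config changed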
I had a similar issue. Running
$ docker-compose down --remove-orphans
worked for me.

Docker container names

I'm using Docker on a Rails project. I found only one way to reliably link services to each other, i.e. specifying container_name in docker-compose.yml:
version: '3'
services:
  db:
    container_name: sociaball_db
    ...
  web:
    container_name: sociaball_web
    ...
  sphinx:
    container_name: sociaball_sphinx
    ...
So now I can write something like this in database.yml and stop worrying about, say, the database container randomly changing its name from db to db_1:
common: &common
  ...
  host: sociaball_db
However, this way I can only run three containers at the same time. Whenever I try to run docker-compose up while some containers aren't down, it raises an error.
ERROR: for sociaball_db Cannot create container for service db: Conflict. The container name "/sociaball_db" is already in use by container "ee787c06db7b2a0205e3c1e552b6a5496545a78fe12d942fb792b27f3c38769c". You have to remove (or rename) that container to be able to reuse that name.
It is very inconvenient. It often forces me to explicitly delete all the containers just to make sure they have no opportunity to break. Is there a way around that?
When running several containers from one Compose file, there is a default network that all containers are attached to (if not specified differently).
There is no need to reference a container by its container name or hostname, as docker-compose automatically sets up DNS-based service discovery where each docker-compose service can be resolved by its service name (the key used one level below services:).
So your service called web can reach your database using the name db. No need to specify a container name for this use case. For more details please see the docker docs on networking that also demonstrates a rails app accessing a database.
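So the database.yml from the question can simply use the service name instead of a fixed container name (a sketch; the elided parts of the question's config are kept as-is):
common: &common
  ...
  host: db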

Docker | Mounting volumes causes error

I am trying to set up a Docker swarm which connects 3 of my servers together. The swarm is set up, and going to the URL I get the same result, which is perfect and just what I need.
However, for the setup I am working on now, I am running a global nginx service on every server in order to allow load balancing.
Sitting on each server are multiple config files which I need in order to map the domain to the correct folder, and this is the part I am stuck on / that is not working for me.
I have a really simple docker-compose.yml as I have shrunk it in order to debug the issue, it consists of the following...
version: '3'
services:
  nginx:
    image: nginx:latest
    volumes:
      - /var/www/nginx/config/:/etc/nginx/conf.d/:ro
    deploy:
      mode: global
The volume is coming back with the error "invalid mount config for type "bind": bind source path does not exist". When I remove the volume line it works perfectly, but I 100% need this line.
On the server I can navigate to /var/www/nginx/config/ just fine, and my config files exist within. Likewise, if I run docker exec -it <container> /bin/bash and navigate to /etc/nginx/conf.d, I can get there perfectly fine, which is why I'm posting here.
I've looked at other posts and done what other people said fixed it, such as:
Adding quotes to the volume
Removing the slash at the end of the path
Restarting the server
Restarting Docker
But nothing seems to work.
The potential issue could be that not all the nodes in your swarm cluster have the directory (/var/www/nginx/config/) created. Since in swarm mode the service can be placed on any of the available nodes (unless you put in a constraint), you might be seeing this error.
Make sure that this directory is created on all 3 nodes.
Additionally, you can also have a look at defining configs, as sketched below.
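A sketch of that configs alternative (requires compose file format 3.3+ and docker stack deploy; default.conf is a hypothetical local file):
version: '3.3'
services:
  nginx:
    image: nginx:latest
    configs:
      - source: nginx_conf
        target: /etc/nginx/conf.d/default.conf
    deploy:
      mode: global
configs:
  nginx_conf:
    file: ./nginx/default.conf
The config is stored in the swarm itself, so it reaches every node without you having to create directories on each one.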

How to share a value between all docker containers spun up by the same "docker-compose up" call?

Context
We are migrating an older application to docker and as a first step we're working against some constraints. The database can not yet be put in a container, and moreover, it is shared between all developers in our team. So this question is to find a fix for a temporary problem.
To not clash with other developers using the same database, there is a system in place where each developer machine starts the application with a value that is unique to his machine. Each container should use this same value.
Question
We are using docker-compose to start the containers. Is there a way to provide a (environment) variable to it that gets propagated to all containers?
How I'm trying to do it:
My docker-compose.yml looks kind of like this:
my_service:
  image: my_service:latest
  command: ./my_service.sh
  extends:
    file: base.yml
    service: base
  environment:
    - batch.id=${BATCH_ID}
then I thought running BATCH_ID=$somevalue docker-compose up my_service would fill in the ${BATCH_ID}, but it doesn't seem to work that way.
Is there another way? A better way?
Optional: Ideally everything should be contained so that a developer can just call docker-compose up my_service, with Compose itself calculating a value to pass to all the containers. But from what I see online, I think this is not possible.
You are correct. Alternatively you can just specify the env var name:
my_service:
  environment:
    - BATCH_ID
The variable BATCH_ID is then taken from the environment of the current docker-compose execution scope and passed to the container under the same name.
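Usage would then look like this (a sketch; some-unique-value is a placeholder):
$ export BATCH_ID=some-unique-value
$ docker-compose up my_service
Alternatively, Compose reads variables from a .env file next to docker-compose.yml, so each developer can put BATCH_ID=some-unique-value there once and just run docker-compose up my_service.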
I don't know what I changed, but suddenly it works as described.
BATCH_ID is the name of the environment variable on the host.
batch.id will be the name of the environment variable inside the container.
my_service:
  environment:
    - batch.id=${BATCH_ID}
