Docker container names - ruby-on-rails

I'm using Docker on a Rails project. The only way I've found to reliably link services to each other is specifying container_name in docker-compose.yml:
version: '3'
services:
  db:
    container_name: sociaball_db
    ...
  web:
    container_name: sociaball_web
    ...
  sphinx:
    container_name: sociaball_sphinx
    ...
So now I can write something like this in database.yml and stop worrying about, say, the database container randomly changing its name from db to db_1:
common: &common
  ...
  host: sociaball_db
However, now I can only run these three containers at a time. Whenever I run docker-compose up while some of the old containers haven't been removed, it raises an error:
ERROR: for sociaball_db Cannot create container for service db: Conflict. The container name "/sociaball_db" is already in use by container "ee787c06db7b2a0205e3c1e552b6a5496545a78fe12d942fb792b27f3c38769c". You have to remove (or rename) that container to be able to reuse that name.
This is very inconvenient: it often forces me to explicitly delete all the containers just to make sure nothing breaks. Is there a way around that?

When running several containers from one compose file, there is a default network that all containers are attached to (unless specified otherwise).
There is no need to reference a container by its container name or hostname, as docker-compose automatically sets up DNS-based service discovery in which each docker-compose service can be resolved by its service name (the key one level below services:).
So your service called web can reach your database using the name db. No need to specify a container name for this use case. For more details, please see the Docker docs on networking, which also demonstrate a Rails app accessing a database.
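For example, the compose file from the question could drop container_name entirely (a sketch; only the service names come from the question, the rest is illustrative):

version: '3'
services:
  db:
    image: postgres  # illustrative
    ...
  web:
    ...

and database.yml can then simply point at the service name:

common: &common
  ...
  host: db  # resolved by Compose's built-in DNS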

Related

Docker compose/swarm 3: docker file path , build, container name, links, migration

I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
it does not understand the build path, links, or container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up it works, but with docker stack deploy it does not.
How do I define the path to the Dockerfile?
docker stack deploy works only with images, not with builds.
This means you have to push your images (created by the build process) to an image registry; docker stack deploy will then download the images and run them.
Here is an example of how it was done for a PHP application.
Pay attention to parts 1, 3, and 4.
The articles are about PHP, but the approach applies to any other language.
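A sketch of that workflow for one of the services above (the registry host and tag are illustrative; the build path and stack name come from the question):

# build the image and tag it with a registry all swarm nodes can reach
docker build -t registry.example.com/vertx-node:1.0 ./vertx-node
docker push registry.example.com/vertx-node:1.0
# point the image: key in docker-compose.yml at the pushed tag, then:
docker stack deploy --compose-file=docker-compose.yml vertx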
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers as with "docker run", and it is assumed that more often than not you will be doing this in a distributed environment.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The links option has been deprecated for quite some time in favor of the service discovery introduced alongside the "docker network" feature. You no longer need to specify links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and they can then discover each other by container name or "network alias".
docker-compose puts all your containers on the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
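For example, the vertx-node service above could reach the database with a connection string like this (illustrative; the port and database name come from the compose file in the question):

jdbc:postgresql://postgres-node:5432/socnet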
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for service discovery purposes, add a network alias, or use the service-name alias that you get automatically with docker-compose or docker stack deploy.
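A network alias can be declared in the compose file like this (a sketch; the alias db is illustrative, the service and network names come from the question):

services:
  postgres-node:
    networks:
      vertx-network:
        aliases:
          - db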
build path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
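On a single-node swarm, that might look like this (a sketch using the stack name from the question, and assuming each service keeps both its image: and build: keys so the built images get the expected tags):

docker-compose build
docker stack deploy --compose-file=docker-compose.yml vertx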

How to avoid the "Docker cannot link to a non running container" error when the external-linked container is actually running using docker-compose

What we want to do:
We want to use docker-compose to link one already running container (A) to another container (B) by container name. We use external_links, as the two containers are started from different docker-compose.yml files.
Problem:
Container B fails to start with the following error, although a container with that name is running:
ERROR: for container_b Cannot start service container_b: Cannot link to a non running container: /PREVIOUSLY_LINKED_ID_container_a_1 AS /container_b_1/container_a_1
output of "docker ps":
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
RUNNING_ID container_a "/docker-entrypoint.s" 15 minutes ago Up 15 minutes 5432/tcp container_a_1
Sample code:
docker-compose.yml of Container B:
container_b:
  external_links:
    - container_a_1
What distinguishes this question from the other "how to fix" questions:
- We can't use sudo service docker restart (which works), as this is a production environment.
- We don't want to fix this manually every time, but find the reason, so that we can
  - understand what we are doing wrong, and
  - understand how to avoid this.
Assumptions:
It seems like two instances of container_a exist (RUNNING_ID and PREVIOUSLY_LINKED_ID). This might have happened because we
- rebuilt the container via docker-compose build, and
- changed the forwarded external port of the container (80801:8080).
Comment
Do not use docker-compose down as suggested in the comments; it removes volumes!
Docker links are deprecated so unless you need some functionality they provide or are on an extremely old version of docker, I'd recommend switching to docker networks.
Since the containers you want to connect appear to be started in separate compose files, you would create that network externally:
docker network create app_net
Then in your docker-compose.yml files, you connect your containers to that network:
version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_a:
    # ...
    networks:
      - app_net
Then in your container_b, you would connect to container_a as "container_a", not "container_a_1".
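The container_b side might then look like this (a sketch; only the names come from the question):

version: '3'
networks:
  app_net:
    external:
      name: app_net
services:
  container_b:
    # ...
    networks:
      - app_net
    # external_links is no longer needed; "container_a" resolves over app_net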
As an aside, docker-compose down is not documented to remove volumes unless you pass the -v flag. Perhaps you are using anonymous volumes, in which case I'm not sure how docker-compose up would know where to find your data; a named volume is preferred. More than likely, your data was not being stored in a volume at all, which is dangerous and removes your ability to update your containers:
$ docker-compose down --help
By default, the only things removed are:
- Containers for services defined in the Compose file
- Networks defined in the `networks` section of the Compose file
- The default network, if one is used
Networks and volumes defined as `external` are never removed.

Usage: down [options]

Options:
    --rmi type          Remove images. Type must be one of:
                          'all': Remove all images used by any service.
                          'local': Remove only images that don't have a
                          custom tag set by the `image` field.
    -v, --volumes       Remove named volumes declared in the `volumes` section
                        of the Compose file and anonymous volumes
                        attached to containers.
    --remove-orphans    Remove containers for services not defined in the
                        Compose file
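If the data currently lives in an anonymous volume, a named volume can be declared like this (a sketch; the volume name pgdata and the postgres image and data path are illustrative):

version: '3'
services:
  container_a:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata: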

How to share a value between all docker containers spun up by the same "docker-compose up" call?

Context
We are migrating an older application to Docker, and as a first step we're working against some constraints. The database cannot yet be put in a container, and moreover, it is shared between all developers on our team. So this question is to find a fix for a temporary problem.
To avoid clashing with other developers using the same database, there is a system in place where each developer machine starts the application with a value unique to that machine. Each container should use this same value.
Question
We are using docker-compose to start the containers. Is there a way to provide a (environment) variable to it that gets propagated to all containers?
How I'm trying to do it:
My docker-compose.yml looks kind of like this:
my_service:
  image: my_service:latest
  command: ./my_service.sh
  extends:
    file: base.yml
    service: base
  environment:
    - batch.id=${BATCH_ID}
Then I thought running BATCH_ID=$somevalue docker-compose up my_service would fill in the ${BATCH_ID}, but it doesn't seem to work that way.
Is there another way? A better way?
Optional: Ideally everything should be contained so that a developer can just call docker-compose up my_service leading to compose itself calculating a value to pass to all the containers. But from what I see online, I think this is not possible.
You are correct. Alternatively you can just specify the env var name:
my_service:
  environment:
    - BATCH_ID
The variable BATCH_ID is taken from the environment of the current docker-compose execution and passed to the container under the same name.
I don't know what I changed, but suddenly it works as described.
BATCH_ID is the name of the environment variable on the host.
batch.id will be the name of the environment variable inside the container.
my_service:
  environment:
    - batch.id=${BATCH_ID}
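Then start the service with the value set on the host (the value 42 is illustrative):

BATCH_ID=42 docker-compose up my_service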

How can I link an image created volume with a docker-compose specified named volume?

I have been trying to use docker-compose to spin up a postgres container with a single, persisted named volume.
The goal is to have different postgres containers share the same persisted data (not concurrently!): one container dies or is killed, and another takes its place without losing previously persisted data.
As I understand it, "named volumes" are supposed to replace "Data Volume Containers".
However, so far one of two things happens:
- The postgres container fails to start up, with the error message "ERROR: Container command not found or does not exist."
- I achieve persistence only for that specific container. If it is stopped and removed and another container is started, we start with a blank slate.
So, as far as I understand, the postgres image does create its own volume, which is of course bound to that specific container. That would be fine if I could just get THAT volume aliased or linked or somehow tied to the named volume.
Current incarnation of docker-compose.yml:
version: '2'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - myappdb:/var/lib/postgresql/data/
    environment:
      - POSTGRES_PASSWORD=mysecretpasswordPleaseChangeME
volumes:
  myappdb:
    driver: local
Am I doing something stupidly wrong, or attempting something that is simply not supported?
Docker version 1.10.3, build 20f81dd; docker-compose version 1.6.0, build d99cad6
OK, after a lot of trial and error, things are now working as they should (meaning I am able to run docker-compose down and then docker-compose up, and my data is in the state it was left in by the down command).
In general, a few things:
1. Don't use the PGDATA environment option with the official postgres image.
2. If using Spring Boot (like I was) with docker-compose (as I was), and passing environment options to a service linked to your database container, do not wrap a profile name in double quotes. It is passed to Spring as-is, resulting in a non-existent profile being used as the active profile.
I had some subtle and strange things configured incorrectly at first, but I suspect the killer was point 2 above: it caused my app, when running in a container, to use the in-memory H2 database instead of the linked container database. So everything functioned (almost) perfectly, until container shutdown. And when running from the IDE against the container DB (with ports exposed to the host), everything worked perfectly (including persistence), since the active profile parameter was correctly set in the IDE launcher (NO quotes!).
Live and learn I guess (but I do feel a LOT of egg on my face).
You need to tell Compose that it should manage creation of the volume; otherwise it assumes the volume should already exist on the host.
volumes:
  myappdb:
    external: false
Docs: https://docs.docker.com/compose/compose-file/#external

Host names are not set in docker compose

I created a simple compose config to try Postgres BDR replication.
I expect the containers to get the service names I defined as their hostnames, and I expect one container to be able to resolve and reach another by that hostname. I expect this because of:
https://docs.docker.com/compose/networking/
My config:
version: '2'
services:
  bdr1:
    image: bdr
    volumes:
      - /var/lib/postgresql/data1:/var/lib/postgresql/data
    ports:
      - "5001:5432"
  bdr2:
    image: bdr
    volumes:
      - /var/lib/postgresql/data2:/var/lib/postgresql/data
    ports:
      - "5002:5432"
But in reality both containers get rubbish hostnames and are not reachable by container names:
Creating network "bdr_default" with the default driver
Creating bdr_bdr1_1
Creating bdr_bdr2_1
Attaching to bdr_bdr1_1, bdr_bdr2_1
bdr1_1 | Hostname: 938e0585fee2
bdr2_1 | Hostname: 7153165f4d5b
Is it a bug, or I did something wrong?
I use Ubuntu 14.04.4 LTS, Docker version 1.10.1, build 9e83765, docker-compose version 1.6.0, build d99cad6
docker-compose gives you the option of scaling services up or down, meaning you can launch multiple instances of the same service. That is at least one reason why the hostnames are not simply the service names. You will notice that if you scale bdr1 to 2 instances, you will have bdr_bdr1_1 and bdr_bdr1_2 containers.
You can work around this inside the containers started by docker-compose in at least two ways (see the sketch after this list):
1. If one service refers to another, you can use the links section, for example making bdr1 link to bdr2. In that case, from inside bdr1, you can reach bdr2 by name. I have not tried what happens when you scale up bdr2 in this case.
2. You can force a container's internal hostname to the name you want by using the hostname section. For example, if you add hostname: bdr1 to bdr1, you can internally connect to bdr1, which is itself.
You can possibly achieve a similar result with the networks section, but I have not used it myself, so I don't know for sure.
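A sketch combining the first two workarounds (only the service names and image come from the question; volumes and ports are omitted for brevity):

version: '2'
services:
  bdr1:
    image: bdr
    hostname: bdr1   # workaround 2: fixes the hostname inside the container
    links:
      - bdr2         # workaround 1: makes bdr2 resolvable from inside bdr1
  bdr2:
    image: bdr
    hostname: bdr2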
The hostname inside the container should be the short container id, so this is correct (note there was a bug in Compose 1.6.0 with the short container id, so you should use at least version 1.6.2). Also, /etc/hosts is no longer used; there is now an embedded DNS server that handles resolving names to container IP addresses.
The container is discoverable by other containers with 3 names: the container name, the container short id, and the service name.
However, the other container may not be available immediately when the first one starts. You can use depends_on to set the order.
If you are testing the discovery, try using ping, and make sure to retry, because the name may not resolve immediately.
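For example (the container name bdr_bdr2_1 comes from the compose output above, and this assumes ping is available in the image):

# retry if the first attempt fails; the embedded DNS may need a moment
docker exec bdr_bdr2_1 ping -c 1 bdr1

And the startup order can be set with depends_on, as a fragment of the version: '2' file from the question:

services:
  bdr2:
    depends_on:
      - bdr1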
