Microservices With Docker Compose: Same Container, Multiple Projects

Along with a few others, I am having issues using docker-compose the way I want to with a microservices architecture.
Summary:
I have X microservice projects (let's call these project A, project B and project C). Each microservice depends on the same containers (let's call these dependency D and dependency E).
The Problem:
Ideally, project A, B and C would ALL have both dependencies (D & E) in their docker-compose.yml files; however, this becomes an issue as docker compose sees these as duplicate containers when in reality, I would like to reuse them. Here is an error message that is commonly seen:
ERROR: for A Cannot create container for service A: b'Conflict. The
container name "/A" is already in use by container "sha". You have to
remove (or rename) that container to be able to reuse that name.'
From what I have seen, people are recommending that you define the container in one project and reference it using networks and external links. Although this works, it introduces a dependency on a different docker-compose yml file (the file that defines the dependency!).
Another approach that I've read argues for isolating the containers in their own docker compose files and then referencing multiple files when you want to build, as in the sketch below. Again, although this works, it's certainly not as stunningly convenient as docker typically is to work with. If I am unable to work out a solution, I will go with this approach.
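A minimal sketch of that multi-file invocation, assuming the shared dependencies live in a hypothetical docker-compose.deps.yml next to each project's own file:
# merge the shared-dependency definitions with project A's own file
docker-compose -f docker-compose.deps.yml -f docker-compose.projectA.yml up -d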
Have other people in the non-monorepo world (specifically with microservices) had any success with a different approach?
I've been asked to clarify with some examples:
Here is what 2 different compose yml files look like for project A and project B:
Project A:
version: '2'
services:
  dependencyD:
    image: dependencyD:latest
    container_name: dependencyD
  dependencyE:
    image: dependencyE:latest
    container_name: dependencyE
  projectA:
    image: projectA:latest
    container_name: projectA
    depends_on:
      - dependencyD
      - dependencyE
Project B:
version: '2'
services:
  dependencyD:
    image: dependencyD:latest
    container_name: dependencyD
  dependencyE:
    image: dependencyE:latest
    container_name: dependencyE
  projectB:
    image: projectB:latest
    container_name: projectB
    depends_on:
      - dependencyD
      - dependencyE

There is a feature called external links. From the docs:
Link to containers started outside this docker-compose.yml or even outside of Compose, especially for containers that provide shared or common services.
Having multiple docker-compose.yml files is also common to organize containers into meaningful groups. Maybe your scenario can use multiple YAML files and the external links.
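As a rough sketch of that combination (the network name projecta_default is an assumption based on Compose's default project naming), project B's file could join project A's network and reference the already-running dependency containers:
version: '2'
services:
  projectB:
    image: projectB:latest
    container_name: projectB
    external_links:
      - dependencyD
      - dependencyE
    networks:
      - projecta_default
networks:
  projecta_default:
    external: true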

Related

Why add top-level keys in docker compose?

I find top-level keys like volumes and networks in many docker compose yml files, for example here and in this repository:
networks:
  network:
volumes:
  db:
Only the keys are declared, with no values. I notice that both of these keywords have already been used inside services, so I wonder why they are added again at the top level, and whether they should already have appeared in the predefined services.
GPT answered that:
Adding top-level keys to Docker Compose allows you to define multiple services that can be run together in an application. This can be useful when creating complex applications with multiple components that are connected to each other. It also provides an easy way to configure and scale your application.
I don't think its first sentence is correct since without those keys I can also define multiple services that can cooperate in an application.
Could anyone please verify that? Thanks.
These things can be configured. One example is adding configuration to a specific network under the top-level networks: key. Say you have a set of containers, but you also need to interact with containers in another Compose setup. You could define that other network with a specific name:
networks:
  other:
    external: true
    name: other_default
services:
  one:
    networks: [default, other]
  two:
    networks: [default, other]
This saves us from repeating the configuration every time a network or volume appears, since they can appear in multiple services.
In principle it would be possible for Compose to scan the list of services and create everything with default settings if required. Requiring the top-level lists of volumes: and networks: simplifies the implementation a little bit. It is also a little bit of protection against typos; Docker doesn't give you a lot of that, but if you write:
services:
  one:
    volumes:
      - exchange:/data
  two:
    volumes:
      - echxange:/data
volumes:
  exchange:
Compose will notice that the misspelled echxange doesn't exist and complain.
Top-level elements such as the networks and volumes top-level elements work just like the more commonly used services top-level element: they define the networks and volumes that can then be used by the services defined under services.

Docker: Multiple Compositions

I've seen many examples of Docker Compose and that makes perfect sense to me, but they all bundle their frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. Not going to get into implementation details but because containers in the same composition exist on the same network I can refer to the DB in the source code using the alias in the file. That is one environment with 2 containers.
On my frontend and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate with each other?
I feel like I'm probably missing the mark on how the Docker architecture works at large but to my knowledge there is a default network and Docker will execute the composition and run it on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There are a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
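As an illustration only (the paths and backend address here are assumptions, not something from the question), the reverse-proxy portion of that Apache config might look roughly like:
# requires mod_proxy and mod_proxy_http to be loaded
DocumentRoot "/usr/local/apache2/htdocs"
ProxyPass        "/api/" "http://backend:3000/"
ProxyPassReverse "/api/" "http://backend:3000/"
The browser only ever sees same-origin /api/... URLs, while Apache resolves backend via Docker's internal DNS.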
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like this, with the backend project's docker-compose.yml first and the frontend's second:
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed, but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
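Assuming the backend file lives in a directory named backend (so that its default network is created as backend_default), the startup order might look like:
# start the backend project first so backend_default exists
cd backend && docker-compose up -d
# then start the frontend project, which attaches to that network
cd ../frontend && docker-compose up -d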

Docker swarm having some shared volume

I will try to describe my desired functionality:
I'm running docker swarm via docker-compose.
In the docker-compose file, I have services; for simplicity let's call them A, B and C.
Assume service C contains shared code modules that need to be accessible to services A and B.
My questions are:
1. Must each service that needs access to the shared volume mount the C service into its own local folder (using the volumes section as below), or can it be accessed without mounting/copying to a path in the local container?
2. In docker swarm, it can be that 2 instances of services A and B will reside on computer X, while service C resides on computer Y. Is it true that because the services are all maintained under the same docker swarm stack, they will communicate without problem with service C? If not, which definitions are needed to achieve that?
My structure is something like this:
version: "3.4"
services:
A:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
B:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
C:
image: repository.com/C:1.0.0
volumes:
- C:/shared_code
volumes:
C:
If what you’re sharing is code, you should build it into the actual Docker images, and not try to use a volume for this.
You’re going to encounter two big problems. One is getting a volume correctly shared in a multi-host installation. The second is a longer-term issue: what are you going to do if the shared code changes? You can’t just redeploy the C module with the shared code, because the volume that holds the code already exists; you need to separately update the code in the volume, restart the dependent services, and hope they both work. Actually baking the code into the images makes it possible to test the complete setup before you try to deploy it.
Sharing code is an anti-pattern in a distributed model like Swarm. Like David says, you'll need that code in the image builds, even if there's duplicate data. There are lots of ways to have images built on top of others to limit the duplicate data.
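One hedged way to picture "images built on top of others": publish the shared C modules as a base image and build A and B from it (all names here are hypothetical):
# Dockerfile for a shared base image holding the C modules
FROM alpine:3.19
COPY shared_code/ /usr/src/C/

# Dockerfile for service A, built from that base so the shared code ships inside A's image
FROM shared-base:1.0.0
COPY service_a/ /usr/src/app/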
If you still need to share data between containers in swarm on a file system, you'll need to look at some shared storage like AWS EFS (multi-node read/write) plus REX-Ray to get your data to the right containers.
Also, depends_on doesn't work in swarm. Your apps in a distributed system need to handle the lack of connection to other services in a predictable way. Maybe they just exit (and swarm will re-create them) or go into a retry loop in code, etc. depends_on is meant for the local docker-compose CLI in development, where you want to spin up an app and its dependencies by doing something like docker-compose up api.
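If you take the "just exit and let swarm re-create it" route, a hedged sketch of the relevant v3 stanza (the image name is an assumption) would be:
services:
  A:
    image: repository.com/A:1.0.0
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s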

Nexus3 docker different repo for different env, such as dev, prod

Sir,
May I ask a question: if I want to set up multiple separate repos for different environments, such as dev and prod, with a different repo to avoid unstable images being used in the prod version, does that mean I have to use a different port for each repo?
Such as:
Dev Hosted:8083
Dev Group:8082
PRD Hosted:8183
PRD Group:8182
If so, and we would like to create many more, does that mean we have to use many ports?
Source workflows are usually different from company to company, but generally I recommend a single repo per service and a multiple-branch approach, so you can easily merge features to the master (eg: prod) branch from your feature branches, which might be dedicated per environment.
Regarding static configuration, I recommend creating generic, non-environment-specific container images that pick up all environment-specific configuration from environment variables at startup and runtime.
On the port mapping: within the container you should always use the same ports (eg: build your image with 82 and 83), and only change those when you expose them to the host in the composition.
When you build your docker images, you can use tags to mark which one is the dev or prod revision of those images, so you can target them more easily with imagename:tag.
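For example (the build context and image name are assumptions), the two revisions might simply be built as:
docker build -t webapp:dev .
docker build -t webapp:prod .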
With this you can specify multiple docker compositions per environment, by creating the following files:
docker-compose.dev.yml:
version: '3'
services:
  web:
    image: "webapp:dev"
    ports:
      - "8082:82"
      - "8083:83"
    environment:
      - DEBUG=true
      - ENVIRONMENT_NAME=dev
docker-compose.prod.yml:
version: '3'
services:
  web:
    image: "webapp:prod"
    ports:
      - "8182:82"
      - "8183:83"
    environment:
      - DEBUG=false
      - ENVIRONMENT_NAME=prod
With this configuration you can create your service compositions based on the same or similar images by running docker-compose:
# To start the DEV service composition
docker-compose -f ./docker-compose.dev.yml up
# To start the PROD service composition
docker-compose -f ./docker-compose.prod.yml up
See more info about these:
docker-compose reference: https://docs.docker.com/compose/reference/overview/
github-flow branching strategy: https://guides.github.com/introduction/flow/

Docker compose/swarm 3: Dockerfile path, build, container name, links, migration

I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
It does not understand build path, links, container names...
My file is located here:
https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up, it works, but with stack deploy it does not.
How do I define the path for the Dockerfile?
docker stack deploy works only on images, not on builds.
This means that you will have to push your images (created with the build process) to an image registry; later, docker stack deploy will download the images and execute them.
Here you have an example of how it was done for a PHP application.
You have to pay attention to parts 1, 3 and 4.
The articles are about PHP, but they can easily be applied to any other language.
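Concretely, that build-push-deploy sequence might look roughly like this, assuming the image: names in the compose file include a registry prefix that all swarm nodes can reach:
# build the images defined in the compose file and push them to the registry
docker-compose build
docker-compose push
# deploy the stack; each node pulls the images from the registry
docker stack deploy --compose-file docker-compose.yml vertx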
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers like with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery feature introduced alongside the "docker network" feature. You no longer need to specify specific links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by the container name or "network alias".
docker-compose will put all your containers into the same network by default, and it sets up the compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
Build Path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
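On such a single-node swarm, that could be as simple as (the stack name is taken from the question):
docker-compose build
docker stack deploy --compose-file docker-compose.yml vertx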
