My Django application uses Celery to process tasks on a regular basis. Sadly this results in having 3 containers (App, Celery Worker, Celery Beat), each with its very own startup shell script instead of a Docker entrypoint script.
So my idea was to have a single entrypoint script which is able to process the labels I set in my docker-compose.yml. Based on the labels, the container should start as an App, Celery Beat or Celery Worker instance.
I have never done such an implementation before, but I'm asking myself if this is even possible, as I saw something similar in the Traefik load balancer project, see e.g.:
loadbalancer:
  image: traefik:1.7
  command: --docker
  ports:
    - 80:80
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  networks:
    - frontend
    - backend
  labels:
    - "traefik.frontend.passHostHeader=false"
    - "traefik.docker.network=frontend"
  ...
I didn't find any good material on the web about this, about how to implement such a scenario, or whether it's even possible the way I imagine it. Has somebody done it like that before, or should I rather stay with 3 separate shell scripts, one for each service?
You can access the labels from within the container, but it does not seem to be as straightforward as other options and I do not recommend it. See this StackOverflow question.
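For completeness, reading a label from inside the container would look roughly like this, assuming the image ships the docker CLI, the Docker socket is mounted into the container, and a hypothetical label name com.example.service-type. That extra machinery is exactly what makes this approach awkward:

# inside the container: $(hostname) is the container's short ID by default
SERVICE_TYPE=$(docker inspect \
  --format '{{ index .Config.Labels "com.example.service-type" }}' \
  "$(hostname)")
echo "Starting as: $SERVICE_TYPE"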
If your use cases (== entrypoints) are more different than alike, it is probably easier to use three entrypoints or three commands.
If your use cases are more similar, then it is easier and clearer to simply use environment variables.
Another nice alternative that I like to use is to create one entrypoint shell script that accepts arguments - so you have one entrypoint, and the arguments are provided using the command definition.
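A minimal sketch of that pattern, assuming a Django/Celery setup like the one described (the project name myproject and the gunicorn command are placeholders):

#!/bin/sh
# entrypoint.sh - the first argument comes from the compose `command:` definition
set -e
case "$1" in
  worker) exec celery -A myproject worker --loglevel=info ;;
  beat)   exec celery -A myproject beat --loglevel=info ;;
  app)    exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 ;;
  *)      exec "$@" ;;
esac

Each service in the compose file then shares the same image and entrypoint and only sets e.g. command: worker.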
Labels are designed to be used by the docker engine and other applications that work at the host or docker-orchestrator level, and not at the container level.
I am not sure how the traefik project is using that implementation. If they use it, it should be totally possible.
However, I would recommend using environment variables instead of Docker labels. Environment variables are the recommended way to pass configuration parameters to a cloud-native app. Labels are more related to service metadata, so you can identify and filter specific services. In your scenario you can have something like this:
version: "3"
services:
celery-worker:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=celery-worker
celery-beat:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=celery-beat
app:
image: generic-dev-image:latest
environment:
- SERVICE_TYPE=app
Then you can use the SERVICE_TYPE environment variable in your docker entrypoint to launch the specific service.
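A sketch of such an entrypoint, with the concrete commands only as placeholders for whatever starts your app, worker and beat processes:

#!/bin/sh
# docker-entrypoint.sh - choose the process to run based on $SERVICE_TYPE
set -e
case "$SERVICE_TYPE" in
  celery-worker) exec celery -A myproject worker --loglevel=info ;;
  celery-beat)   exec celery -A myproject beat --loglevel=info ;;
  app)           exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000 ;;
  *)             echo "Unknown SERVICE_TYPE: $SERVICE_TYPE" >&2; exit 1 ;;
esac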
However (again), there is nothing wrong with having 3 different Docker images. In fact, that's the idea of containers (and microservices). You encapsulate the processes in images and instantiate them in containers. Each one of them will have different purposes and lifecycles. For development purposes, there is nothing wrong with your implementation. But in production, I would recommend separating the services into different images. Otherwise, you have big images where each service only uses a third of the functionality, and you tightly couple the lifecycles of the services.
I've seen many examples of Docker Compose and it makes perfect sense to me, but they all bundle their frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to be able to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume to my source code. I'm not going to get into implementation details, but because containers in the same composition exist on the same network, I can refer to the DB in the source code using its alias from the file. That is one environment with 2 containers.
On my frontend and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate with each other?
I feel like I'm probably missing the mark on how the Docker architecture works at large but to my knowledge there is a default network and Docker will execute the composition and run it on the default network unless otherwise specified or if it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There's a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
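For illustration, the Apache side of that could be a fragment along these lines (the backend hostname and port are assumptions matching the skeleton below, and mod_proxy plus mod_proxy_http need to be enabled):

# forward browser requests for /api/ to the backend container
ProxyPreserveHost On
ProxyPass        /api/ http://backend:3000/api/
ProxyPassReverse /api/ http://backend:3000/api/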
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
# docker-compose.yml for the backend project
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed, though it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
# docker-compose.yml for the frontend project
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
My pet project includes two servers. Each has its own docker-compose environment, with nginx at the front end, and different images/services behind.
These servers are on the same domain, say at main.example.com and subdomain.example.com.
The goal is to run both instances on my dev machine, to ease debugging, testing, etc.
I understand that the same port (namely 80/443 here) can be assigned only once within a host, so my two servers (or more!) cannot easily be present at the same time.
But is there a way of getting close to this? One constraint I have is that I'd like to work with real domain names as much as possible, avoiding hard-coded IPs, etc. One exercise I have in mind is implementing a DNS server on the main.example.com server for the subdomain ones, so sticking to a setup as close to reality as possible is important.
I have thought of running Docker in Docker, to wrap all this into another layer of networking and simulate going through a WAN, but apparently this is quite a can of worms, and I am not even sure this would give me what I want at the networking level.
Is there a solution to this? Or am I way better off using at least two physical hosts (meaning not being easily able to work on the move etc...)?
Thanks!
Let me start with a piece of advice: don't use real production hostnames in your dev environment. All environment-dependent configuration should be provided to your application by environment variables. Your application should therefore access the host defined in e.g. $OTHER_HOST, no matter if it's an IP address or a hostname.
Back to your question. You can easily use Docker compose to spin up your dev environment with two different containers. A simple example can look like this:
version: '3'
services:
  microservice:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=database
      - DB_PORT=5432
      - DB_NAME=demo
      - DB_USER=demo
      - DB_PASS=demo
  database:
    image: "postgres:alpine"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=demo
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=demo
This example builds the image for the "microservice" service from the local Dockerfile and sets environment variables for the database connection. A second image for the database just gets pulled from Docker Hub and configured by environment variables.
I will try to describe my desired functionality:
I'm running docker swarm over docker-compose
In the docker-compose file I have services; for simplicity let's call them A, B and C.
Assume service C includes shared code modules that need to be accessible to services A and B.
My questions are:
1. Must each service that needs access to the shared volume mount the C service's volume to its own local folder (using the volumes section as below), or can it be accessed without mounting/copying to a path in the local container?
2. In Docker Swarm, it can be that 2 instances of services A and B reside on computer X, while service C resides on computer Y.
Is it true that, because the services are all maintained under the same Docker Swarm stack, they will communicate with service C without problems?
If not, which definitions are needed to achieve this?
My structure is something like that:
version: "3.4"
services:
A:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
B:
build: .
volumes:
- C:/usr/src/C
depends_on:
- C
C:
image: repository.com/C:1.0.0
volumes:
- C:/shared_code
volumes:
C:
If what you’re sharing is code, you should build it into the actual Docker images, and not try to use a volume for this.
You’re going to encounter two big problems. One is getting a volume correctly shared in a multi-host installation. The second is a longer-term issue: what are you going to do if the shared code changes? You can’t just redeploy the C module with the shared code, because the volume that holds the code already exists; you need to separately update the code in the volume, restart the dependent services, and hope they both work. Actually baking the code into the images makes it possible to test the complete setup before you try to deploy it.
Sharing code is an anti-pattern in a distributed model like Swarm. Like David says, you'll need that code in the image builds, even if there's duplicate data. There are lots of ways to have images built on top of others to limit the duplicate data.
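As a rough sketch of that idea, the shared modules could live in a small base image that A and B build FROM (all image names and paths here are made up):

# Dockerfile.shared-base - only carries the common code
FROM alpine:3.18
COPY shared_code/ /usr/src/C/

# Dockerfile for service A (service B would look the same)
FROM repository.com/shared-base:1.0.0
COPY . /usr/src/app/
# ...install dependencies and set the CMD for service A as usual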
If you still need to share data between containers in swarm on a file system, you'll need to look at some shared storage like AWS EFS (multi-node read/write) plus REX-Ray to get your data to the right containers.
Also, depends_on doesn't work in swarm. Your apps in a distributed system need to handle the lack of connection to other services in a predictable way. Maybe they just exit (and swarm will re-create them) or go into a retry loop in code, etc. depends_on is meant for the local docker-compose CLI in development, where you want to spin up an app and its dependencies by doing something like docker-compose up api.
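A crude retry loop at container start could look like this, assuming nc is available in the image and that service C listens on port 8000 (both are placeholders):

#!/bin/sh
# wait-for-c.sh - block until service C accepts connections, then start the real process
until nc -z C 8000; do
  echo "waiting for C..."
  sleep 2
done
exec "$@"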
I really did not know how to word the title.
In my system I have two instances:
a Prod server
a Dev server
The Dev server is used mostly for testing. In each case I have two versions of AMQP, each with a different hostname.
To avoid duplication, or wasting time rewriting the same code in multiple projects, I wanted to use the env file that Docker Compose supports, though everywhere I read, no one discusses this case. That case being: which env file a stack uses depends on where it is deployed, and that env file lives on the swarm itself rather than in the individual projects.
Hopefully I didn't miss anything when explaining this. In summary: two swarms, each with its own env file that the containers deployed to it can use. Also, if I need to reword anything, I will do so.
You can have multiple .env files and assign them to services in docker-compose.yml like this:
web:
  env_file:
    - web-variables.env
nginx:
  env_file:
    - nginx-variables.env
and if you want to change them for the development environment you could override the docker-compose.yml with a docker-compose.development.yml file and then start it with
docker-compose -f docker-compose.yml -f docker-compose.development.yml up -d
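For instance, a docker-compose.development.yml could be as small as this, layering development-specific env files on top of the base definition (the file names are just examples):

version: "3"
services:
  web:
    env_file:
      - web-variables.development.env
  nginx:
    env_file:
      - nginx-variables.development.env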
I have been searching Google for a solution to the problem below for longer than I care to admit.
I have a docker-compose.yml file, which allows me to fire up an ecosystem of 2 containers on my local machine. Which is awesome. But I need to be able to deploy to Google Container Engine (GCP). To do so, I am using Kubernetes; deploying to a single node only.
In order to keep the deployment process simple, I am using Kompose, which allows me to deploy my containers on Google Container Engine using my original docker-compose.yml. Which is also very cool. The issue is that, by default, Kompose will deploy each Docker service (I have 2) in separate pods; one container per pod. But I really want all containers/services to be in the same pod.
I know there are ways to deploy multiple containers in a single pod, but I am unsure if I can use Kompose to accomplish this task.
Here is my docker-compose.yml:
version: "2"
services:
server:
image: ${IMAGE_NAME}
ports:
- "3000"
command: node server.js
labels:
kompose.service.type: loadbalancer
ui:
image: ${IMAGE_NAME}
ports:
- "3001"
command: npm run ui
labels:
kompose.service.type: loadbalancer
depends_on:
- server
Thanks in advance.
The thing is, docker-compose doesn't launch them like this either. They are completely separate. It means, for example, that you can have two containers listening on port 80, because they are independent. If you try to pack them into the same pod you will get a port conflict and end up with a mess. The scenario you want to achieve would have to be handled at the Dockerfile level to make any sense (although fat [supervisor-based] containers can be considered an antipattern in many cases), in turn making your compose file obsolete...
IMO you should embrace how things are, because it does not make sense to map a docker-compose-defined stack to a single pod.