I've seen many examples of Docker Compose, and they make perfect sense to me, but they all bundle the frontend and backend as separate containers in the same composition. In my use case I've developed a backend (in Django) and a frontend (in React) for a particular application. However, I want to allow my backend API to be consumed by other client applications down the road, and thus I'd like to isolate them from one another.
Essentially, I envision it looking something like this. I would have a docker-compose file for my backend, which would consist of a PostgreSQL container and a webserver (Apache) container with a volume for my source code. I'm not going to get into implementation details, but because containers in the same composition exist on the same network, I can refer to the DB in the source code using its alias from the file. That is one environment with two containers.
On my frontend and any other future client applications that consume the backend, I would have a webserver (Apache) container to serve the compiled static build of the React source. That of course exists in its own environment, so my question is: how do I converge the two such that I can refer to the backend alias in my base URL (axios, fetch, etc.)? How do you ship both "environments" to a registry and then deploy from that registry such that they can continue to communicate?
I feel like I'm probably missing the mark on how the Docker architecture works at large, but to my knowledge there is a default network, and Docker will run the composition on the default network unless otherwise specified or it's already in use. However, two separate compositions are two separate networks, no? I'd very much appreciate a lesson on the semantics, and thank you in advance.
There are a couple of ways to get multiple Compose files to connect together. The easiest is just to declare that one project's default network is the other's:
networks:
  default:
    external:
      name: other_default
(docker network ls will tell you the actual name once you've started the other Compose project.) This is also suggested in the Docker Networking in Compose documentation.
An important architectural point is that your browser application will never be able to use the Docker hostnames. Your fetch() call runs in the browser, not in Docker, and so it needs to reach a published port. The best way to set this up is to have the Apache server that's serving the built UI code also run a reverse proxy, so that you can use a same-server relative URL /api/... to reach the backend. The Apache ProxyPass directive would be able to use the Docker-internal hostnames.
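As a rough sketch of that Apache idea (the port 8000 and the module list are assumptions about your backend image, not something from your setup), the frontend container's httpd.conf could proxy /api/ to the backend container by its Compose service name:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so

# "backend" is the Compose service name on the shared network;
# 8000 stands in for whatever port your Django/WSGI server listens on.
ProxyPass        /api/ http://backend:8000/api/
ProxyPassReverse /api/ http://backend:8000/api/

With this, the browser only ever talks to the frontend's published port, and the Docker-internal hostname stays an implementation detail of the deployment.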
You also mention "volume with your source code". This is not a Docker best practice. It's frequently used to make Docker simulate a local development environment, but it's not how you want to deploy or run your code in production. The Docker image should be self-contained, and your docker-compose.yml generally shouldn't need volumes: or a command:.
A skeleton layout for what you're proposing could look like:
version: '3'
services:
  db:
    image: postgres:12
    volumes:
      - pgdata:/var/lib/postgresql/data
  backend:
    image: my/backend
    environment:
      PGHOST: db
    # No ports: (not directly exposed) (but it could be)
    # No volumes: or command: (use what's in the image)
volumes:
  pgdata:
and a separate docker-compose.yml for the frontend:
version: '3'
services:
  frontend:
    image: my/frontend
    environment:
      BACKEND_URL: http://backend:3000
    ports:
      - 8080:80
networks:
  default:
    external:
      name: backend_default
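To run this, the start order matters, because the frontend project joins a network the backend project creates. A minimal sketch, assuming the two files live in directories named backend and frontend (the directory name is what produces the backend_default network name):

cd backend && docker-compose up -d     # creates the backend_default network
cd ../frontend && docker-compose up -d # joins backend_default as an external network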
Related
I want to run a web app and a DB using Docker. Is there any way to connect two Docker containers (the web app container on one machine and the DB container on another machine) using a docker-compose file, without Docker Swarm mode? I mean two separate servers.
This is my MongoDB docker-compose file:
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
Here is my demowebapp docker-compose file:
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using only one docker-compose.yml with Compose file version 2, there is no way to deploy two services on two different machines. That's what version 3, a stack.yml, and Swarm mode are for.
You can, however, deploy to two different machines using two version 2 docker-compose.yml files, but you will have to connect them using hostnames/IPs rather than the service names from the Compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have, connecting to the first server's IP address, will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You might also look at a service-discovery system (I have had good luck with HashiCorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
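For example, once a DNS record exists for the MongoDB host, only the environment section of the web app's Compose file needs to change; mongo.internal.example.com here is a hypothetical name standing in for whatever record you create:

    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      # hypothetical DNS name replacing the hard-coded 35.168.21.133
      - MONGO_URL=mongodb://mongo.internal.example.com:27017/demodb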
My pet project includes two servers. Each has its own docker-compose environment, with nginx at the front end, and different images/services behind.
These servers are on the same domain, say at main.example.com and subdomain.example.com.
The goal is to run both instances on my dev machine, to ease debugging, testing, etc.
I understand that the same port (namely 80/443 here) can be assigned only once within a host, so my two servers (or more!) cannot easily be present at the same time.
But is there a way of getting close to this? One constraint I have is that I'd like to work with real domain names as much as possible, avoiding hard-coded IPs, etc. One exercise I have in mind is implementing a DNS server on the main.example.com server for the subdomain ones, so sticking to a setup as close to reality as possible is important.
I have thought of running Docker in Docker, to wrap all this into another layer of networking and simulate going through a WAN, but apparently this is quite a can of worms, and I am not even sure it would give me what I want at the networking level.
Is there a solution to this? Or am I way better off using at least two physical hosts (meaning not easily being able to work on the move, etc.)?
Thanks!
Let me start with a piece of advice: don't use real production hostnames in your dev environment. All environment-dependent configuration should be provided to your application via environment variables. Your application should therefore access the host defined in, e.g., $OTHER_HOST, no matter whether it's an IP address or a hostname.
Back to your question. You can easily use Docker Compose to spin up your dev environment with two different containers. A simple example could look like this:
version: '3'
services:
  microservice:
    build: .
    ports:
      - "8080:8080"
    environment:
      - DB_HOST=database
      - DB_PORT=5432
      - DB_NAME=demo
      - DB_USER=demo
      - DB_PASS=demo
  database:
    image: "postgres:alpine"
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_DB=demo
      - POSTGRES_USER=demo
      - POSTGRES_PASSWORD=demo
This example builds the "microservice" image from the local Dockerfile and sets environment variables for the database connection. A second image for the database just gets pulled from Docker Hub and configured by environment variables.
I have to deploy a third-party web application with a front server acting as a load balancer and multiple instances of the actual app server behind it. For reasons outside my control, I must pass a unique ID to each app server instance as an environment variable. My docker-compose.yml, simplified, is as follows:
version: '3'
services:
  lb:
    image: lb_image
  app:
    image: app_image
    depends_on:
      - lb
    links:
      - "server:lb"
    environment:
      - LB_HOST=server
Now, I'd like to run:
docker-compose up -d --scale app=3
but passing a different environment variable value to each instance. I've heard of templating; it would be nice to have something like:
environment:
  - CONTAINER_ID={{.Node.Id}}
in my docker-compose.yml. Is it possible? (Every solution I've heard of so far involves writing an external script, which in my opinion totally discards the benefits of using Compose.)
I have a project with a docker-compose file and want to migrate to V3, but when I deploy with
docker stack deploy --compose-file=docker-compose.yml vertx
It does not understand build path, links, container names...
My file is located here: https://github.com/armdev/vertx-spring/blob/master/docker-compose.yml
version: '3'
services:
  eureka-node:
    image: eureka-node
    build: ./eureka-node
    container_name: eureka-node
    ports:
      - '8761:8761'
    networks:
      - vertx-network
  postgres-node:
    image: postgres-node
    build: ./postgres-node
    container_name: postgres-node
    ports:
      - '5432:5432'
    networks:
      - vertx-network
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: socnet
      POSTGRES_DB: socnet
  vertx-node:
    image: vertx-node
    build: ./vertx-node
    container_name: vertx-node
    links:
      - postgres-node
      - eureka-node
    ports:
      - '8585:8585'
    networks:
      - vertx-network
networks:
  vertx-network:
    driver: overlay
When I run docker-compose up it works, but with docker stack deploy it does not.
How do I define the build path for the Dockerfile?
docker stack deploy works only on images, not on builds.
This means that you will have to push your images (created with the build process) to an image registry; later, docker stack deploy will download the images and run them.
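A minimal sketch of that workflow, assuming a registry at registry.example.com (substitute Docker Hub or whatever registry your nodes can reach):

# Build the images from the compose file's build: sections
docker-compose build

# Push requires the image: names to point at the registry,
# e.g. image: registry.example.com/vertx-node
docker-compose push

# Deploy the stack; each node pulls the images from the registry
docker stack deploy --compose-file=docker-compose.yml vertx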
Here is an example of how it was done for a PHP application.
You have to pay attention to parts 1, 3 and 4.
The articles are about PHP, but they can easily be applied to any other language.
The swarm mode "docker service" interface has a few fundamental differences in how it manages containers. You are no longer directly running containers like with "docker run", and it is assumed that you will be doing this in a distributed environment more often than not.
I'll break down the answer by these specific things you listed.
It does not understand build path, links, container names...
Links
The link option has been deprecated for quite some time in favor of the network service discovery feature introduced alongside the "docker network" feature. You no longer need to specify specific links to/from containers. Instead, you simply need to ensure that all containers are on the same network, and then they can discover each other by container name or "network alias".
docker-compose will put all your containers onto the same network by default, and it sets up the Compose service name as an alias. That means if you have a service called 'postgres-node', you can reach it via DNS by the name 'postgres-node'.
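With your file, for instance, the vertx-node service could reach PostgreSQL with an ordinary connection string and no links: at all; the variable name here is made up, since the exact configuration key depends on your framework:

  vertx-node:
    image: vertx-node
    environment:
      # hypothetical variable name; the host part is simply the service name
      DB_URL: jdbc:postgresql://postgres-node:5432/socnet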
Container Names
The "docker service" interface allows you to declare a desired state. "I want x number of identical services". Since the interface must support x number of instances of a service, it doesn't allow you to choose the specific container name. Instead, you get to choose the service name. In the case of 'docker stack deploy', the service name defined under the services key in your docker-compose.yml file will be used, but it will also prepend the stack name to the service name.
In most cases, I would argue that overriding the container name in a docker-compose.yml file is unnecessary, even when using regular containers via docker-compose up.
If you need a different name for network service discovery purposes, add a different alias or use the service name alias that you get when using docker-compose or docker stack deploy.
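If you do want an extra name, a sketch of a network alias could look like this (old-name is a made-up example):

  postgres-node:
    image: postgres-node
    networks:
      vertx-network:
        aliases:
          - old-name   # extra DNS name on this network, in addition to postgres-node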
build path
Because swarm mode was built to be a distributed system, building an image in place locally isn't something that "docker stack deploy" was meant to do. Instead, you should build and push your image to a registry that all nodes in your cluster can access.
In the case where you are using a single node swarm "cluster", you should be able to use the docker-compose build option to get the images built locally, and then use docker stack deploy.
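On a single-node "cluster" that sequence can stay entirely local, roughly:

# Build the images defined by the build: sections (names come from image:)
docker-compose build

# Deploy using the locally built images; build:, links: and
# container_name: in the file are ignored by stack deploy
docker stack deploy --compose-file=docker-compose.yml vertx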
I have been searching Google for a solution to the problem below for longer than I care to admit.
I have a docker-compose.yml file, which allows me to fire up an ecosystem of 2 containers on my local machine. Which is awesome. But I need to be able to deploy to Google Container Engine (GCP). To do so, I am using Kubernetes; deploying to a single node only.
In order to keep the deployment process simple, I am using Kompose, which allows me to deploy my containers on Google Container Engine using my original docker-compose.yml. Which is also very cool. The issue is that, by default, Kompose will deploy each Docker service (I have 2) in separate pods; one container per pod. But I really want all containers/services to be in the same pod.
I know there are ways to deploy multiple containers in a single pod, but I am unsure if I can use Kompose to accomplish this task.
Here is my docker-compose.yml:
version: "2"
services:
server:
image: ${IMAGE_NAME}
ports:
- "3000"
command: node server.js
labels:
kompose.service.type: loadbalancer
ui:
image: ${IMAGE_NAME}
ports:
- "3001"
command: npm run ui
labels:
kompose.service.type: loadbalancer
depends_on:
- server
Thanks in advance.
The thing is, docker-compose doesn't launch them like that either. They are completely separate. That means, for example, that you can have two containers listening on port 80, because they are independent. If you try to pack them into the same pod you will get a port conflict and end up with a mess. To make any sense, the scenario you want would have to be achieved at the Dockerfile level (although fat [supervisor-based] containers can be considered an antipattern in many cases), which in turn would make your Compose file obsolete...
IMO you should embrace how things are, because it does not make sense to map a docker-compose-defined stack to a single pod.