Docker prod and dev environment difference - docker

Microservices have one db per service. In dev we use docker-compose to bring up the whole environment (web, php, mysql), but what is a good way to do this in production? When load increases, we have to create several copies of the application on different servers, but they should all use the same database.
What is the best way to do this?
==========================
1 app and 1 db in dev (using docker-compose), 10 apps and 1 db in prod. Since the db contains all the data, it must be shared between the copies of the application.
I know about Kubernetes and Docker Swarm, but I am asking about the general approach of separating the db from the application in prod while keeping them together in dev.

In a Compose v3 file, you can specify a deploy: section with a replicas: value, which will replicate your services according to the properties and parameters you set. Each replica can talk to the single replica of your db by name: the overlay network they are attached to routes requests to the database via DNS. For example:
version: "3.3"
services:
api1:
image: api1:latest
deploy:
replicas: 3
api2:
image: api2:latest
deploy:
replicas: 6
redis:
image: redis
deploy:
replicas: 1
... etc.
Now each service simply connects to a host called 'redis' (the service name) on the default port; that name is resolved by DNS on the network they share (which I haven't shown). Note that it's not necessary to 'link' the services to the db -- a lot of examples do that, but it's a deprecated practice. Also, you don't have to expose any ports: because the network traffic is internal, you don't require ingress from a client external to the network.
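For illustration, a minimal sketch of how one of the replicated services could be pointed at the db by service name, assuming the api1 image reads its Redis host and port from environment variables (REDIS_HOST and REDIS_PORT here are hypothetical names, not part of the original example):
  api1:
    image: api1:latest
    deploy:
      replicas: 3
    environment:
      REDIS_HOST: redis    # the service name above, resolved by DNS on the overlay network
      REDIS_PORT: "6379"   # default Redis port; no ports: mapping is needed for internal traffic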

How can I make all containers work together on the domain localhost

I have 8 frontend apps and 12 backend servers. The frontends are Vue.js or AngularJS; the backends are ASP.NET Core 3.1, plus SQL Server, Redis and other services.
All services have similar Docker container configs, except for logging, ports and so on. They all run in the same named network, mynetwork:
abcservice:
  image: ${DOCKER_REGISTRY-}abcservice
  container_name: abcServer
  hostname: abcservice
  build:
    context: .
    dockerfile: abcService/Dockerfile
  networks:
    - mynetwork
But I have to use http://host.docker.internal:{portnumber} so that all containers can work together. How can I make all apps work together on http://localhost:{portnumber}?
Let's say I have a simple ASP.NET Core app: if it is started WITHOUT Docker, it can access SQL Server (running in Docker) and Redis (running in Docker) at http://localhost:port, but once I start it with Docker, I have to access the app via http://host.docker.internal:port, otherwise it cannot reach SQL and Redis. Because inside a container, localhost means the container itself, I need some configuration that lets a container reach other containers with localhost and the specified ports.
Appreciated.
Option 1: Environment variables
You can publish ports for all services and use environment variables in a .env file to switch between hostnames. The .env file works out of the box with Docker Compose, see the docs.
Using ports:
ports:
  - 6379:6379
Sample .env file:
REDIS_HOST=redis
REDIS_PORT=6379
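A sketch of how a service could consume those variables from the compose file, assuming the application reads REDIS_HOST and REDIS_PORT (adjust to whatever settings your apps actually expect):
abcservice:
  image: ${DOCKER_REGISTRY-}abcservice
  networks:
    - mynetwork
  environment:
    REDIS_HOST: ${REDIS_HOST}   # 'redis' inside the Compose network, 'localhost' when running outside Docker
    REDIS_PORT: ${REDIS_PORT}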
Option 2: using network_mode: host
Another option is to use the host's network for each service by setting network_mode. That runs the service with the host's network settings instead of in an isolated container network:
network_mode: host
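For example, a minimal sketch (note that with host networking any ports: mapping is ignored, and this mode has traditionally only been fully supported on Linux hosts):
abcservice:
  image: ${DOCKER_REGISTRY-}abcservice
  network_mode: host   # shares the host's network stack, so other services are reachable on localhost:{port}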

Docker Swarm ping by hostname incremental host.<id>

I have a service that requires that it can connect to the other instances of itself to establish a quorum.
The service has an environment variable like:
initialDiscoverMembers=db.1:5000,db.2:5000,db.3:5000
They can never find each other. I've tried logging into other containers and pinging other services by <service>.<id>, like ping redis.1, and it doesn't work.
Is there a way in Docker (Swarm) to get the incremental hostname working for connections as well? I looked at endpoint_mode: dnsrr, but that doesn't seem to be what I want.
I think I may have to just create three separate instances of the service and name them different things, but that seems so cumbersome.
You cannot refer to each container independently using the incremental host.<id>, since DNS resolution in Swarm is done on a per-service basis; what you can do is add a hostname alias to each container based on its Swarm slot.
For example, right now you're using a db service, so you could add:
version: '3.7'
services:
  db:
    image: postgres
    deploy:
      replicas: 3
    hostname: "db-{{.Task.Slot}}"
    ports:
      - 5000:5432
In this case, since all the tasks of the service are on the same network, you can address them as db-1, db-2 and db-3.
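The discovery variable from the question could then point at those per-slot hostnames (a sketch; keep whatever port the service actually listens on inside the container, since container-to-container traffic uses the internal port rather than the published one):
environment:
  initialDiscoverMembers: "db-1:5000,db-2:5000,db-3:5000"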

Connecting dockerized apps' networks to make API calls

I have a bit of a problem with connecting the dots.
I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.
Projects:
Project1 = using project1_appnet (bridge driver)
Project2 = using project2_appnet (bridge driver)
Project3 = using project3_appnet (bridge driver)
On my local machine, I have these 3 projects in 3 separate folders. Each project has its own app, db and cache services.
This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, only with different images and volume paths.)
version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Question:
How can I make them able to talk to one another via API calls? (Both on my local machine for development and in the prod environment.)
In production the setup will be a bit different: they will be on different machines but still in the same VPC, or even communicating over the public network. What is the setting for that?
Note:
I have been looking at links, but apparently they are deprecated in v3 and not really recommended.
I tried curl from the project1 container to the project2 container:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused
If your final setup will be that each service will be running on a physically different system, there aren't really any choices. One system can't directly access the Docker network on another system; the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will be different in different environments, I'd suggest making that value a configured environment variable.
environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
Once you've settled on that, you can mostly use the same setup for a single-host deployment. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:
environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'
(If you use Linux on the desktop you will have to know some IP address for the host; not localhost because that means "this container", and not the docker0 interface address because that will be on a specific network, but something like the host's eth0 address.)
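On Linux with a reasonably recent Docker Engine (20.10 or later), one workaround is to map host.docker.internal yourself via extra_hosts and the special host-gateway value; a minimal sketch:
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"   # lets this container reach the host by that name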
Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.
version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2   # the key declared under networks: above, pointing at the external app2_appnet
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
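A sketch of that variant, assuming the other project's Compose project name is app2 (so its default network is app2_default):
networks:
  app2_default:
    external: true
services:
  app:
    networks:
      - default
      - app2_default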
You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.
If you run this docker-compose locally, app and db are on the same network (appnet): from the host you can reach the database on localhost:${DB_PORT}, while from inside the app container you should use the service name db and the container port 3306.
In production, if app and db are on different machines, app would probably need to talk to the database using an IP address or domain name.
Considering that you are using different machines for the different Docker deployments, you could put them behind a regular webserver (Apache2, Nginx) and then route the traffic for each domain to $APP_PORT using a simple vhost. I prefer to do that instead of directly exposing the container to the network. This way you would also be able to host multiple applications on the same machine (if you like). So I suggest you not try to connect Docker networks, but "regular" ones.
I was playing around with inspect and cURL. I think I found the solution.
Locally:
On my local machine, I inspected the container and looked at NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1.
Then I got the exposed port, which is 8050.
Then I did a curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container, or from the host: docker exec -it project1_app_1 curl 172.25.0.1:8050/login
Vice versa, I did curl 172.25.0.1:80 for app2 -> app1, or: docker exec -it project2_app_1 curl 172.25.0.1:80
The only issue is that the Gateway value changes when we restart via docker-compose up -d.
For production, likewise:
I am not that pro with networking and stuff. My estimate for production would be:
Do curl app2-domain.com, which is pointed at the app by the webserver, as they are on their own machines (even with a load balancer).

Docker stack deploy using overlay network - inconsistent behavior

I am deploying 2 containers (application and SQL) to the same network using a docker-compose.yml file (Swarm stack deploy).
Most of the time, the application has no problem talking to the SQL container via its hostname as the data source in the connection string.
However, there are times when it simply can't find it. In order to debug this, I have verified that the overlay network is indeed created on each node, and when inspecting the network on each node, I see that the container does belong to this network.
Moreover, when I run a docker exec command to enter the application container and try to ping the SQL container, the hostname resolves to the correct IP, but there is still no response.
This is extremely frustrating, as it only occurs from time to time.
Any suggestions on how to debug the issue?
version: '3.2'
services:
  sqlserver:
    image: xxxx:5000/sql_image
    hostname: sqlserver
    deploy:
      endpoint_mode: dnsrr
    networks:
      devnetwork:
        aliases:
          - sqlserver
  test:
    image: xxxx:5000/test
    deploy:
      endpoint_mode: dnsrr
      restart_policy:
        condition: none
      resources:
        reservations:
          memory: 2048M
    networks:
      - devnetwork
networks:
  devnetwork:
    driver: overlay
Service discovery and DNS problems under load are known bugs in swarm mode. We have hit this problem many times. You can find the open issues here and here.
If you run a network-heavy application, consider separating your worker and manager nodes. That helps the managers perform service discovery reliably.
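One way to express that in a stack file is a placement constraint that keeps application tasks on worker nodes, a sketch using the question's test service:
  test:
    image: xxxx:5000/test
    deploy:
      placement:
        constraints:
          - node.role == worker   # keep application tasks off the manager nodes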
You can also change the service discovery component and use something like Consul or ZooKeeper as part of your stack implementation.
I would also consider using a service mesh for communication between services; Consul can do this for you. You can gain a lot of benefits from this design pattern, for example security and encrypted data communication.

How to put docker container for database on a different host in production?

Let's say we have a simple web app stack, something like the one described in the docker-compose docs. Its docker-compose.yml looks like this:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
This is great for development on a laptop. In production, though, it would be useful to require the db container to be on its own host. The tutorials I'm able to find use docker-swarm to scale out the web container, but pay no attention to the fact that the instance of db and one instance of web run on the same machine.
Is it possible to require a specific container to be on its own machine (or, even better, on a specific machine) using Docker? If so, how? If not, what is the Docker way to deal with databases in multi-container apps?
In my opinion, databases sit on the edge of the container world. They're useful for development and testing, but production databases are often not very ephemeral or portable things by nature. Flocker certainly helps, as do scalable types of databases like Cassandra, but databases can have very specific requirements that might be better treated as a service that sits behind your containerised app (RDS, Cloud SQL etc.).
In any case you will need a container orchestration tool.
You can apply manual scheduling constraints for Compose + Swarm to dictate the docker host a container can run on. For your database, you might have:
environment:
  - "constraint:storage==ssd"
Otherwise you can set up a more static Docker environment with Ansible, Chef or Puppet.
Use another orchestration tool that supports Docker: Kubernetes, Mesos, Nomad.
Use a container service: Amazon ECS, Docker Cloud/Tutum.
