AWS ECS run instance for each microservice - docker

I suppose it's a stupid question, but I have no idea where to find the answer. I've checked so many resources, but I still don't get it.
I have a docker-compose.yml file. Is it possible to use an AWS ECS cluster to run a separate instance (t2.micro, for example) for each service (eurekaserver, configserver, zuulserver, database)? I have only seen examples with one big instance.
version: '2'
services:
  eurekaserver:
    image: maxb/tracker-eurekasvr:tracker-eurekasvr
    ports:
      - "8761:8761"
  configserver:
    image: maxb/tracker-confsvr:tracker-confsvr
    ports:
      - "8888:8888"
    environment:
      EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
      EUREKASERVER_PORT: "8761"
      ENCRYPT_KEY: "IMSYMMETRIC"
  zuulserver:
    image: maxb/tracker-zuulsvr:tracker-zuulsvr
    ports:
      - "5555:5555"
    environment:
      PROFILE: "default"
      SERVER_PORT: "5555"
      CONFIGSERVER_URI: "http://configserver:8888"
      EUREKASERVER_URI: "http://eurekaserver:8761/eureka/"
      DATABASESERVER_PORT: "27017"
      EUREKASERVER_PORT: "8761"
      CONFIGSERVER_PORT: "8888"
  database:
    image: mongo
    container_name: tracker-mongo
    volumes:
      - $HOME/tracker-data:/data/db
      - $HOME/tracker-datacd:/data/bkp
    restart: always
AWS ECS has Task Definitions, but I'm not sure whether they can help here.

I am assuming you want to run these services 24x7, not on demand. With container orchestration it is possible. One way of doing it with Rancher is as follows:
Create five micro instances: four for the services and one for Rancher, and put all five in one VPC. Install Rancher on the fifth instance and add the other four hosts to Rancher, so that all four show up in Rancher's infrastructure view.
Now label each of the four hosts in Rancher uniquely, for example: 'zuulserver', 'database', 'configserver', 'eurekaserver'.
Then edit your docker-compose file to add those Rancher host labels to each of your services:
io.rancher.scheduler.affinity:host_label: key1=value1
wordpress:
  labels:
    # Make wordpress a global service
    io.rancher.scheduler.global: 'true'
    # Make wordpress only run containers on hosts with a key1=value1 label
    io.rancher.scheduler.affinity:host_label: key1=value1
    # Make wordpress only run on hosts that do not have a key2=value2 label
    io.rancher.scheduler.affinity:host_label_ne: key2=value2
  image: wordpress
  links:
    - db:mysql
  stdin_open: true
In Rancher, create a stack from your docker-compose file and start the stack.
Rancher will deploy each service to the corresponding host according to the host affinity labels.
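Applied to the services from the question, the additions might look like this (the service label key is an assumption; use whatever key=value pairs you actually put on your hosts):
eurekaserver:
  labels:
    # only run on the host labeled service=eurekaserver
    io.rancher.scheduler.affinity:host_label: service=eurekaserver
configserver:
  labels:
    io.rancher.scheduler.affinity:host_label: service=configserver
zuulserver:
  labels:
    io.rancher.scheduler.affinity:host_label: service=zuulserver
database:
  labels:
    io.rancher.scheduler.affinity:host_label: service=database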
https://rancher.com/docs/rancher/v1.1/en/cattle/scheduling
https://rancher.com/docs/rancher/v1.2/en/hosts/

Related

Docker and Traefik different directories

I have two different directories, each containing docker containers for different purposes, both spun up with docker compose.
Dir A has the Traefik config and container (and other containers) as well as environment variables, whereas Dir B is a bunch of containers.
I now want to include Traefik labels on the Dir B containers, but when I run compose in Dir B, I'm facing:
WARN[0000] The "DOMAIN_NAME" variable is not set. Defaulting to a blank string.
service "[service name]" refers to undefined network traefik_proxy: invalid compose project
I'm guessing this is because services in Dir B can't see traefik_proxy, since it's part of a different stack, and the same goes for the DOMAIN_NAME variable.
How can I have Dir B 'reach across' to Dir A? Is it even possible with my current config?
If you want to have multiple compose projects share a single Traefik frontend, that's certainly possible, but you need to place Traefik on a shared network. For this model, I would suggest starting with a docker-compose.yaml that only deploys Traefik, e.g.:
version: "3"
services:
  traefik:
    image: docker.io/traefik:latest
    command:
      - --api.insecure=true
      - --providers.docker
      - --accesslog=true
      - --accesslog.filepath=/dev/stderr
      - --providers.docker.exposedByDefault=false
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.2:8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      # attach Traefik to the shared network so it can reach the backends
      - services
networks:
  services:
    external: true
Start by creating the shared network:
docker network create services
And then starting the Traefik project:
pushd traefik; docker-compose up -d; popd
Now for every project you want to make available via Traefik, put your services on the services network. For example, let's say we have this in app1/docker-compose.yaml:
version: "3"
services:
app1:
image: docker.io/containous/whoami
networks:
- services
labels:
- "traefik.enable=true"
- "traefik.http.routers.app1.rule=PathPrefix(`/app1`)"
networks:
services:
external: true
Then I can run:
pushd app1; docker-compose up -d; popd
And now my app1 service is available at http://localhost/app1/.
We can add as many services as we want like this; the only requirement is that the containers are attached to the services network.
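For instance, a hypothetical second project app2/docker-compose.yaml would follow the same pattern (the app2 name and path rule are assumptions):
version: "3"
services:
  app2:
    image: docker.io/containous/whoami
    networks:
      - services
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app2.rule=PathPrefix(`/app2`)"
networks:
  services:
    external: true
After pushd app2; docker-compose up -d; popd, it would be served at http://localhost/app2/.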

How to clone a docker stack on the same server

I want to practice using docker-compose. I have a tournament happening over the weekend and I want to set up 10 copies of the same web app on ONE server with urls like:
http://team1.example.com
http://team2.example.com
etc...
http://team10.example.com
There will be 10 teams in the tournament, and they will all go to their respective url http://team<your team number>.example.com via web browser, save some information to a database, and maybe even modify the code on the actual server.
So I built a simple nodejs app that writes data to a mongo database. To start, I set up two websites, http://team1.example.com and http://team2.example.com, with this docker-compose file:
version: '3'
services:
  api1:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database1
    ports:
      - 80:3000
    networks:
      - net1
  db1:
    image: mongo:4.0.3
    container_name: database1
    networks:
      - net1
  api2:
    image: dockerjohn/tournament:latest
    environment:
      - DB=database2
    ports:
      - 81:3000
    networks:
      - net2
  db2:
    image: mongo:4.0.3
    container_name: database2
    networks:
      - net2
networks:
  net1:
  net2:
Then I installed the Apache web server to reverse proxy team 1 to port 80 and team 2 to port 81. This all works fine.
To set up the remaining teams 3 to 10, I would have to duplicate the entries in my docker-compose yml file and duplicate the virtual host entries in Apache.
My question: Is there a docker command that will let me clone each docker stack (team1, team2, etc.) more easily, without all this data entry? Do I need Kubernetes to do this?
Kubernetes would make this much easier to set up. It can take care of the reverse proxy setup too if you install the nginx ingress controller.
You could create a single Kubernetes manifest containing:
a mongodb deployment, service, persistent volume claim
a nodejs deployment, service
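A skeleton of the nodejs part of such a manifest might look like this (names, image, and ports are assumptions based on the question; the mongodb Deployment, Service, and PersistentVolumeClaim would follow the same pattern):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tournament
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tournament
  template:
    metadata:
      labels:
        app: tournament
    spec:
      containers:
        - name: api
          image: dockerjohn/tournament:latest
          ports:
            - containerPort: 3000
          env:
            - name: DB
              value: mongo   # assumed name of the mongodb service
---
apiVersion: v1
kind: Service
metadata:
  name: tournament
spec:
  selector:
    app: tournament
  ports:
    - port: 80
      targetPort: 3000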
You can then apply this 10 times, each time using a different namespace (create each namespace first, e.g. kubectl create namespace team01):
kubectl apply -n team01 -f manifest.yaml
kubectl apply -n team02 -f manifest.yaml
kubectl apply -n team03 -f manifest.yaml
...
Of course, you would need 10 different ingress rules because you want 10 different domains, but that would be the only thing you need to copy-paste.
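Each of those rules might look something like this sketch (host, namespace, and service name assumed, using the nginx ingress controller):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tournament
  namespace: team01
spec:
  ingressClassName: nginx
  rules:
    - host: team1.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: tournament
                port:
                  number: 80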
I figured it out. Docker has features called swarm and stack that handle this. First, I simplified my docker-compose.yml file to just this:
version: '3'
services:
  api:
    image: dockerjohn/tournament:latest
    environment:
      - DB=$DB
    ports:
      - $WEB_PORT:3000
    networks:
      - mynet
  db:
    image: mongo:4.0.3
    networks:
      - mynet
networks:
  mynet:
Then I ran these commands from the same folder as my docker-compose file:
docker swarm init
DB=team1_db WEB_PORT=81 docker stack deploy -c docker-compose.yml team1
DB=team2_db WEB_PORT=82 docker stack deploy -c docker-compose.yml team2
DB=team3_db WEB_PORT=83 docker stack deploy -c docker-compose.yml team3
DB=team4_db WEB_PORT=84 docker stack deploy -c docker-compose.yml team4
DB=team5_db WEB_PORT=85 docker stack deploy -c docker-compose.yml team5
etc...
You have to structure the DB env variable as <stack name given at the end of the docker stack deploy command>_<service name in the docker-compose yml file>; for stack team1 and service db, that's team1_db.
Now I just need to find a way to simplify my Apache setup so I don't have to duplicate so many vhost entries. I've heard there's a docker image called Traefik that can handle this reverse proxying. Maybe I'll try it out and update my answer afterwards.

Docker swarm containers connection issues

I am trying to use docker swarm to create a simple nodejs service that sits behind HAProxy and connects to MySQL. So I created the docker compose file below, and I have several issues:
The backend service can't connect to the database using localhost or 127.0.0.1; I only managed to connect using the private IP (10.0.1.4) of the database container.
The backend tries to connect to the database too soon, even though it depends on it.
The application can't be reached from outside.
version: '3'
services:
  db:
    image: test_db:01
    ports:
      - 3306
    networks:
      - db
  test:
    image: test-back:01
    ports:
      - 3000
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=localhost
      - NODE_ENV=development
    deploy:
      replicas: 1
      update_config:
        parallelism: 1
        delay: 5s
      restart_policy:
        condition: on-failure
        max_attempts: 3
        window: 60s
    networks:
      - web
      - db
    depends_on:
      - db
    extra_hosts:
      - db:10.0.1.4
  proxy:
    image: dockercloud/haproxy
    depends_on:
      - test
    environment:
      - BALANCE=leastconn
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 80:80
    networks:
      - web
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
    driver: overlay
  db:
    driver: bridge
I am running the following:
docker stack deploy --compose-file=docker-compose.yml prod
All the services are running.
curl http://localhost/api/test <-- Not working
But, as I mentioned above, those issues remain.
Docker version 18.03.1-ce, build 9ee9f40
docker-compose version 1.18.0, build 8dd22a9
What am I missing?
The backend service can't connect to the database using localhost or 127.0.0.1; I only managed to connect using the private IP (10.0.1.4) of the database container.
Don't use IP addresses for connections; use the DNS name instead.
You must change the connection to DATABASE_HOST=db, because db is the service name you've defined.
localhost is wrong, because the database runs in a different container than your test service.
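In the compose file, that's a one-line change to the test service:
test:
  environment:
    - DATABASE_HOST=db  # service name, resolved by Docker's built-in DNS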
The backend tries to connect to the database too soon, even though it depends on it.
depends_on does not work as you expected. Please read https://docs.docker.com/compose/compose-file/#depends_on and the info box "There are several things to be aware of when using depends_on:"
TL;DR: depends_on option is ignored when deploying a stack in swarm mode with a version 3 Compose file.
The application can't be reached from outside.
Where is your HAProxy configuration that forwards requests for /api/test to http://test:3000?
Regarding DATABASE_HOST=localhost: localhost means "my own container". You need to use the name of the service where the db is hosted. localhost is a special DNS name that always points to the application host; when using containers, that is the container itself. In cloud development, you need to forget about using localhost (it will point to the container) or IPs (they can change every time you run the container, and you will lose load balancing), and simply use service names.
As for readiness: Docker has no way of knowing whether the application you started in a container is ready. You need to make the service tolerant of database unavailability and code/script some polling or fault-tolerance mechanism.
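A minimal sketch of such a polling wrapper as a shell entrypoint, assuming nc (netcat) is available in the image (server.js is a stand-in for your real startup command):
#!/bin/sh
# wait-for-db.sh -- poll until the db service accepts TCP connections, then start the app
until nc -z db 3306; do
  echo "waiting for db:3306..."
  sleep 2
done
exec node server.js  # stand-in for the actual app start command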
Markus is correct, so follow his advice.
Here is a compose/stack file that should work, assuming your app listens on port 3000 in the container and db is set up with the proper password, database, etc. (you usually set these things as environment vars in compose, based on the image's Docker Hub readme).
Your app should be designed to crash/restart/wait if it can't find the DB. That's the nature of distributed computing: anything "remote" (another container, host, etc.) can't be assumed to always be available. If your app just crashes, that's fine and a normal process for Docker, which will re-create the Swarm service task each time.
If you could attempt to make this with public Docker Hub images, I can try to test for you.
Note that in Swarm it's likely easier to use Traefik for the proxy (see the Traefik on Swarm Mode Guide), which will auto-update and route incoming requests to the correct container based on the hostname you set in the labels... But you should first test just the app and db; once you know that works, try adding the proxy layer.
Also, in Swarm all your networks should be overlay; you don't need to specify a driver, as overlay is the default in stacks.
Below is a sample using Traefik with your settings above. I didn't give the test service a specific Traefik hostname, so it should accept all traffic coming in on port 80 and forward it to port 3000 on the test service.
version: '3'
services:
  db:
    image: test_db:01
    networks:
      - db
  test:
    image: test-back:01
    environment:
      - SERVICE_PORTS=3000
      - DATABASE_HOST=db
      - NODE_ENV=development
    networks:
      - web
      - db
    deploy:
      labels:
        - traefik.port=3000
        - traefik.docker.network=web
  proxy:
    image: traefik
    networks:
      - web
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "80:80"
      - "8080:8080" # traefik dashboard
    command:
      - --docker
      - --docker.swarmMode
      - --docker.domain=traefik
      - --docker.watch
      - --api
    deploy:
      placement:
        constraints: [node.role == manager]
networks:
  web:
  db:
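You would then deploy and test it the same way as before:
docker stack deploy --compose-file=docker-compose.yml prod
curl http://localhost/api/test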

Docker Swarm with docker compose version 3 hostname underscore issue

I have created a docker swarm and am trying to use an overlay network for communication between the two services deployed on that swarm.
The docker-compose file of the first service looks like:
version: '3'
services:
  web:
    container_name: "eureka"
    image: eureka
    environment:
      EUREKA_HOST: eureka
    ports:
      - 8070:8070
    networks:
      - net_swarm
networks:
  net_swarm:
    external:
      name: net_swarm
The second:
version: '3'
services:
  web:
    image: zuul-service
    environment:
      EUREKA_HOST: eureka_web
    ports:
      - 8069:8069
    networks:
      - net_swarm
networks:
  net_swarm:
    external:
      name: net_swarm
I did a docker stack deploy --compose-file docker-compose.yml eureka to create the first service, which spun up with the service name eureka_web. As seen above, that name is referenced in the compose file of the second service as EUREKA_HOST; however, since eureka_web contains an underscore, the host isn't getting resolved when running the second file (primarily because of the underscore).
Can I somehow override the underscore in the service name, or is there any other workaround?
Don't set the container name; then your service name will act as the hostname.
Also, a hostname with underscores should not in itself cause any problem; try to find the actual root cause.
Edit:
Your service name, and therefore the hostname, is web. I can't say anything about this line without looking at the Dockerfile:
environment:
  EUREKA_HOST: eureka

rationale behind docker compose "links" order

I have a Redis - Elasticsearch - Logstash - Kibana stack in docker, which I am orchestrating using docker compose.
Redis will receive the logs from a remote location and forward them to Logstash, followed by the customary Elasticsearch and Kibana.
In the docker-compose.yml, I am confused about the order of "links":
elasticsearch links to no one, while logstash links to both redis and elasticsearch:
elasticsearch:
redis:
logstash:
  links:
    - elasticsearch
    - redis
kibana:
  links:
    - elasticsearch
Is this order correct? What is the rationale behind choosing the "link" direction?
Why don't we say that elasticsearch is linked to logstash?
Instead of using the legacy container-linking method, you could use Docker user-defined networks. Basically, you define a network for your services and then indicate in the docker-compose file that you want the containers to run on that network. If your containers all run on the same network, they can access each other via their container names (DNS records are added automatically).
1) Create a user-defined network:
docker network create pocnet
2) Update the docker-compose file
You want to add your containers to the network you just created. Your docker-compose file would look something along these lines:
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    container_name: elasticsearch
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  redis:
    image: redis
    container_name: redis
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  logstash:
    image: logstash
    container_name: logstash
    ports:
      - "{your:ports}"
    networks:
      - pocnet
  kibana:
    image: kibana
    container_name: kibana
    ports:
      - "5601:5601"
    networks:
      - pocnet
networks:
  pocnet:
    external: true
3) Start the services:
docker-compose up
Note: you might want to open a new shell window to run step 4.
4) Test
Go into the Kibana container and see if you can ping the elasticsearch container:
your_Machine:/ docker exec -it kibana bash
kibana@123456:/# ping elasticsearch
First of all, links in Docker are unidirectional.
More info on links:
there are legacy links, and links in user-defined networks.
The legacy link provided 4 major functionalities on the default bridge network:
name resolution
name alias for the linked container using --link=CONTAINER-NAME:ALIAS
secured container connectivity (in isolation via --icc=false)
environment variable injection
Comparing the above 4 functionalities with non-default, user-defined networks: without any additional config, a docker network provides
automatic name resolution using DNS
an automatically secured, isolated environment for the containers in a network
the ability to dynamically attach to and detach from multiple networks
support for the --link option to provide a name alias for the linked container (both styles are sketched below)
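For illustration, a minimal CLI sketch of both styles (container and network names are made up):
# legacy link on the default bridge network
docker run -d --name mydb mongo
docker run -d --name web --link mydb:db nginx        # "db" resolves inside web

# user-defined network: automatic DNS, no --link needed
docker network create appnet
docker run -d --name mydb --network appnet mongo
docker run -d --name web --network appnet nginx      # "mydb" resolves inside web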
In your case, automatic DNS on a user-defined network will help you. First create a new network:
docker network create ELK -d bridge
With this approach you don't need to link containers on the same user-defined network; you just have to put your ELK stack + redis containers in the ELK network and remove the link directives from the compose file.
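Something along these lines, assuming the image names from the first answer (ports omitted for brevity):
version: '2'
services:
  elasticsearch:
    image: elasticsearch
    networks:
      - ELK
  redis:
    image: redis
    networks:
      - ELK
  logstash:
    image: logstash   # no links needed; "elasticsearch" and "redis" resolve via DNS
    networks:
      - ELK
  kibana:
    image: kibana
    networks:
      - ELK
networks:
  ELK:
    external: true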
Your order looks fine to me. If you have any problem regarding the order, or with waiting for services to come up in dependent containers, you can use something like the following:
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
entrypoint: ./wait-for-it.sh db:5432
db:
image: postgres
This will make the web container wait until it can connect to the db.
You can get the wait-for-it script from here.