I am trying to deploy Docker containers into ECS using Docker Compose. Below is my docker-compose file:
version: '3.8'
x-aws-vpc: "vpc-0fef56fb4ec32ad70"
services:
  osticket:
    container_name: osticket-web
    image: osticket/osticket
    environment:
      MYSQL_HOST: db
      MYSQL_PASSWORD: secret
    depends_on:
      - db
    ports:
      - 80:80
  db:
    container_name: osticket-db
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: osticket
      MYSQL_USER: osticket
      MYSQL_PASSWORD: secret
My VPC is private with 6 subnets (2 public, 4 private), 2 NAT gateways (both in public subnets), and one internet gateway. I assumed these were the minimal requirements needed to use the x-aws-vpc flag in Docker Compose, and that the rest of the resources would be created automatically.
When I run docker compose up, I get the error below:
A load balancer cannot be attached to multiple subnets in the same
Availability Zone (Service: AmazonElasticLoadBalancing; Status Code:
400; Error Code: InvalidConfigurationRequest; Request ID:
d2142a38-55c6-44ef-a405-e34d99d9fa07; Proxy: null)
PS: If I run the same compose file with the default VPC, it works fine, so I'm not sure what else I am missing.
The default VPC only has public subnets, and it doesn't have any NAT gateways. Those are the minimal requirements needed to use the x-aws-vpc flag, not what you set up in your custom VPC.
The error indicates that Docker is trying to attach the load balancer to both your public and private subnets, and it fails because some of those subnets are in the same Availability Zone. To use the custom VPC you have created, you need to read up on how to customize the CloudFormation template that Docker Compose generates. Run docker compose convert, find the name of the load balancer resource in the generated CloudFormation template, and then add an x-aws-cloudformation section to the compose file that specifies exactly which subnets the load balancer should be attached to.
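A minimal sketch of such an overlay follows. The resource name LoadBalancer is an assumption here (take the real name from your own docker compose convert output), and the subnet IDs are placeholders for one public subnet per Availability Zone:

x-aws-cloudformation:
  Resources:
    LoadBalancer:            # resource name taken from the `docker compose convert` output
      Properties:
        Subnets:
          - subnet-aaaa1111  # placeholder: public subnet in AZ a
          - subnet-bbbb2222  # placeholder: public subnet in AZ b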
Related
I'm trying to deploy a 2-tier architecture to AWS ECS using Docker Compose.
From everything that I have read and found, it seems that I can use an x-aws-cloudformation overlay to pin specific subnets in my Docker Compose deployment. So, this is what I have:
version: '3.8'
x-aws-vpc: "vpc-0f64c8ba9cb5bb10f"
services:
  osticket:
    container_name: osticket-web
    image: osticket/osticket
    environment:
      MYSQL_HOST: db
      MYSQL_PASSWORD: secret
    depends_on:
      - db
    ports:
      - 80:80
  db:
    container_name: osticket-db
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: osticket
      MYSQL_USER: osticket
      MYSQL_PASSWORD: secret
    expose:
      - "3306"
x-aws-cloudformation:
  Resources:
    OsticketService:
      Properties:
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets:
              - subnet-093223fe760e52016 # public subnet-1
              - subnet-08120f88feb55e3f1 # public subnet-2
    DbService:
      Properties:
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets:
              - subnet-0c68a298227d9c2e8 # private subnet-1
              - subnet-042cae15125ba9b1b # private subnet-2
As you can see in my compose file, I have 2 services, osticket and db. In the docker compose convert CloudFormation output, they show up as OsticketService and DbService. The major problem is that each service in the generated CloudFormation template ends up with all of the VPC's subnets instead of only the ones I provided in the compose file. So, when I try to deploy it, I get the following error:
A load balancer cannot be attached to multiple subnets in the same Availability Zone (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: InvalidConfigurationRequest; Request ID: 41880961-a4d5-4c15-9315-603acdef26f5; Proxy: null)
I'm not sure where I need to make the changes to get this working. Please let me know if you need to see the CloudFormation template and I will upload it.
Thank you.
How do I get Docker automatic service discovery working in an AWS ECS EC2-based cluster service?
I have this corresponding docker-compose.yml (which I'm mapping over to an ECS-compatible task-definition.json file):
version: "3"
services:
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
environment:
- discovery.type=single-node
mongo:
image: mongo:4.0.12
redis:
image: redis:5.0.3
api:
image: api
build: .
command: api.py
depends_on:
- elasticsearch
- mongo
- redis
ports:
- 5000:5000
If I launch this with docker-compose up, docker-compose will create a new private bridge network. Within this private network, "automatic service discovery" is on and service names resolve to service IP addresses. So, for example, the api service can find mongo without knowing its IP by doing a DNS lookup for "mongo". The network is also isolated from other, unrelated containers. You can do this manually via docker too, like this:
docker network create api-net
docker run -d --name elasticsearch --net api-net docker.elastic.co/elasticsearch/elasticsearch:6.2.2
docker run -d --name mongo --net api-net mongo:4.0.12
...
But I can't figure out how to achieve the same thing with an AWS ECS multi-container service defined in a task-definition.json file. If I define multiple containers with "bridge" networking, they are all launched into the default bridge network and automatic service discovery does not work. I could manually log into the ECS EC2 container instance and set up a private network, but that is obviously not a workable solution.
You need to use:
"links": ["name:internalName", ...]
See more here under Network Settings.
Please also see this note:
Important
Containers that are collocated on a single container instance may be
able to communicate with each other without requiring links or host
port mappings. Network isolation is achieved on the container instance
using security groups and VPC settings.
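As a rough, hypothetical fragment of such a task-definition.json (the family name, memory values, and the bridge network mode are assumptions added to make it self-contained; JSON cannot carry comments, so the placeholders are only flagged here):

{
  "family": "api-stack",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "name": "mongo",
      "image": "mongo:4.0.12",
      "memory": 512,
      "essential": true
    },
    {
      "name": "api",
      "image": "api",
      "memory": 512,
      "essential": true,
      "links": ["mongo:mongo"],
      "portMappings": [
        { "containerPort": 5000, "hostPort": 5000 }
      ]
    }
  ]
}

With the link in place, the api container can resolve the hostname mongo, much as it does under docker-compose.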
I'm new to Docker and Kubernetes. I have a docker-compose.yml as:
version: '2'
services:
  db:
    build:
      context: ./db
      dockerfile: Dockerfile
    ports:
      - "3306:3306"
    networks:
      docker-network:
        ipv4_address: 172.18.0.2
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: db
      MYSQL_USER: user
      MYSQL_PASSWORD: mypassword
  createlinuxuser:
    build:
      context: ./createlinuxuser
      dockerfile: Dockerfile
    networks:
      docker-network:
        ipv4_address: 172.18.0.3
    depends_on:
      - db
    tty: true
    restart: always
networks:
  docker-network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.18.0.0/16
          gateway: 172.18.0.1
I want to deploy this multi-container Docker app, with a bridge network between the containers, on Kubernetes. I also want both containers to have IPs on Kubernetes so that they can talk to each other. Is this possible? What is the best practice for doing this?
Another feature that you might want to take a look at is deploying to Kubernetes using the docker CLI itself, as explained in docker stack deploy:
Client and daemon API must both be at least 1.25 to use this command.
The deploy command supports compose file version 3.0 and above, so you might need to update the docker-compose.yml to v3:
docker stack deploy --compose-file /path/to/docker-compose.yml mystack
Other options can be specified, such as --namespace and --kubeconfig.
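For example (the namespace and kubeconfig path here are placeholders):

docker stack deploy --compose-file docker-compose.yml --namespace my-namespace --kubeconfig ~/.kube/config mystack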
You can also check out Helm, which is a package manager for Kubernetes, where you write a Helm chart in order to deploy to a Kubernetes cluster.
Just as David Maze said, there might be some issues with what you want to achieve.
You definitely won't be able to just use this file straight away. You can use Kompose and its kompose convert command to go from Docker Compose to Kubernetes:
Kompose supports conversion of V1, V2, and V3 Docker Compose files
into Kubernetes and OpenShift objects.
But as mentioned, it will probably require some adjustments.
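A typical conversion run, assuming kompose and kubectl are installed and the compose file is in the current directory:

# convert the compose file into Kubernetes manifests under ./k8s
kompose convert -f docker-compose.yml -o k8s/
# apply the generated manifests to the cluster
kubectl apply -f k8s/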
As for your networking: do you want multiple containers in a single Pod, or do you want to deploy them all as separate Pods? In either case, read the documentation or some articles about Kubernetes networking.
In terms of Docker constructs, a Pod is modelled as a group of Docker
containers that share a network namespace. Containers within a Pod all
have the same IP address and port space assigned through the network
namespace assigned to the Pod, and can find each other via localhost
since they reside in the same namespace.
source
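If you choose the single-Pod route, a minimal sketch could look like the following; the images here are stand-ins for your custom ./db and ./createlinuxuser builds:

apiVersion: v1
kind: Pod
metadata:
  name: db-with-user-setup
spec:
  containers:
    - name: db
      image: mysql:5.7              # stand-in for the image built from ./db
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: password
      ports:
        - containerPort: 3306
    - name: createlinuxuser
      image: busybox                # stand-in for the image built from ./createlinuxuser
      command: ["sh", "-c", "sleep 3600"]
      # shares the Pod's network namespace, so it reaches the db container at localhost:3306

Note that fixed per-container IPs like 172.18.0.2 don't carry over: the Pod gets one IP, and containers inside it talk over localhost.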
Unable to connect to containers running on separate Docker hosts
I've got 2 Docker Tomcat containers running on 2 different Ubuntu VMs. System-A has a web service running and System-B has a DB. I haven't been able to figure out how to connect the application running on System-A to the DB running on System-B. When I run the database on System-A, the application (which is also running on System-A) can connect to it. I'm using docker-compose to set up the network, which works fine when both containers are running on the same VM. I've exec'd into the application container on System-A and looked at its /etc/hosts file, and I think what's missing is the IP address of System-B.
services:
  db:
    image: mydb
    hostname: mydbName
    ports:
      - "8012:8012"
    networks:
      data:
        aliases:
          - mydbName
  api:
    image: myApi
    hostname: myApiName
    ports:
      - "8810:8810"
    networks:
      data:
networks:
  data:
You would configure this exactly the same way you would if Docker weren't involved: point the Tomcat instance at the DNS name or IP address of the other server. You also need to make sure the service is published outside of Docker space using a ports: directive.
On server-a.example.com you could run this docker-compose.yml file:
version: '3'
services:
  api:
    image: myApi
    ports:
      - "8810:8810"
    environment:
      DATABASE_URL: "http://server-b.example.com:8012"
And on server-b.example.com:
version: '3'
services:
  db:
    image: mydb
    ports:
      - "8012:8012"
In principle it would be possible to set up an overlay network connecting the two hosts, but this is a significantly more complicated setup.
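For reference, that route usually means Docker Swarm's overlay driver; a rough sketch (hostnames and the token are placeholders):

# on server-a.example.com: make this host a swarm manager
docker swarm init
# on server-b.example.com: join using the token printed by `docker swarm init`
docker swarm join --token <token> server-a.example.com:2377
# back on the manager: create an attachable overlay network spanning both hosts
docker network create --driver overlay --attachable shared-net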
(You definitely don't want to use docker exec to modify /etc/hosts in a container: you'll have to repeat this step every time you delete and recreate the container, and manually maintaining hosts files is tedious and error-prone, particularly if you're moving containers between hosts. Consul could work as a service-discovery system that provides a DNS service.)
I'm very new to Docker, and the most trouble I've had with it was:
Trying to make my own image with a basic LAMP stack (phpMyAdmin included)
Using already-created LAMP images (this one works for me, but I'm having trouble with phpMyAdmin and privileges)
So I was asking myself whether it would be possible to run multiple containers that connect to each other (I saw there's the possibility to create a network in Docker, but I don't know its limitations) and run them as a single service like LAMP. In other words:
Apache2 + PHP -> connected with container nº2 || Host connected through port forwarding.
MySQL + PhpMyAdmin -> connected with container nº1
I'm still very confused by all the stuff you can or can't do with Docker.
You need docker-compose - it allows you to run many containers with one command: docker-compose up.
You need to configure them, i.e. something like this: https://github.com/pnglabz/docker-compose-lamp.
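The shape of the idea, as a minimal hypothetical sketch (image tags, paths, and the password are placeholders, not the values from that repository):

version: '3'
services:
  web:
    image: php:7.4-apache          # Apache2 + PHP in one container
    ports:
      - "80:80"
    volumes:
      - ./src:/var/www/html
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: password
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_HOST: db                 # reaches MySQL by its service name
    ports:
      - "8080:80"

All three services join the same default network that docker-compose creates, so they resolve each other by service name; only the mapped ports are exposed to the host.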
You can achieve this by putting the containers in the same network.
On the host (containing the database):
# docker-compose.yml
services:
  mediawiki_db:
    image: mariadb
    networks:
      - mediawiki
    ports:
      - 3306:3306
networks:
  mediawiki:
    driver: bridge
On the client (container with phpmyadmin):
# docker-compose.yml
services:
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    networks:
      - db_mediawiki
networks:
  db_mediawiki:
    external: true
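You can verify that the external network exists and that both containers joined it (network name as above):

docker network inspect db_mediawiki

The "Containers" section of the output lists each attached container with its IP address.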
On the phpMyAdmin login page, under Server, fill in the IP of the db container.
Also make sure you have granted external access privileges to your MySQL user:
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'
  IDENTIFIED BY 'password'
  WITH GRANT OPTION;
FLUSH PRIVILEGES;