I want to attach a custom security group to the load balancer (to accept traffic from CloudFront) for my ECS deployment from Docker Compose.
Below is my docker-compose.yml file. I need to attach security group sg-0828b05baf4899773 to the load balancer that gets created by CloudFormation. Alternatively, I would be open to a way to use CloudFront as part of docker-compose, where the CloudFront distribution is created as part of the CloudFormation stack.
services:
  application:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/org/pos:latest
    platform: linux/amd64
    env_file: .env.${ENV}
    build:
      context: "."
    ports:
      - 80:80
    restart: always
    networks:
      - mail
      - app_network
  postfix:
    image: 000.dkr.ecr.us-east-1.amazonaws.com/org/postfix:latest
    platform: linux/amd64
    build:
      context: "postfix"
    container_name: postfix
    networks:
      - mail
      - app_network
    hostname: postfix
    restart: always
networks:
  app_network:
    name: tcetra_network
  mail:
    name: postfix-mail
x-aws-cloudformation:
  Resources:
    ApplicationTCP80Listener:
      Properties:
        Certificates:
          - CertificateArn: "arn:aws:acm:us-east-1:088048903606:certificate/d7a330ce-77d6-4753-bcf0-913f8ac0cee3"
        Protocol: HTTPS
        Port: 443
    ApplicationTCP80TargetGroup:
      Properties:
        HealthCheckPath: /health.html
        Matcher:
          HttpCode: 200-499
The Docker Compose ECS integration generates a CloudFormation template from your docker-compose.yml file. Per the official documentation, you can apply CloudFormation overlays in the Compose file via x-aws-cloudformation.
You are already doing this in your Compose file to specify a custom SSL certificate and a custom health check.
To customize other aspects of the generated CloudFormation template, first run docker compose convert to generate the CloudFormation stack file from your Compose file. You can then inspect that file to find the load balancer resource and its security groups, and include the needed changes for that resource in your docker-compose.yml.
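For example, a minimal sketch of such an overlay, assuming the converted template names the load balancer resource LoadBalancer (check the docker compose convert output for the actual logical ID in your stack):

x-aws-cloudformation:
  Resources:
    LoadBalancer:
      Properties:
        SecurityGroups:
          # Hypothetical override: the custom SG that accepts CloudFront traffic
          - sg-0828b05baf4899773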
I'm trying to deploy a two-tier architecture into AWS ECS using Docker Compose.
From everything that I have read and found, it seems that I can use an x-aws-cloudformation overlay to pin specific subnets in my Docker Compose deployment. So, this is what I have:
version: '3.8'
x-aws-vpc: "vpc-0f64c8ba9cb5bb10f"
services:
  osticket:
    container_name: osticket-web
    image: osticket/osticket
    environment:
      MYSQL_HOST: db
      MYSQL_PASSWORD: secret
    depends_on:
      - db
    ports:
      - 80:80
  db:
    container_name: osticket-db
    image: mariadb
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: osticket
      MYSQL_USER: osticket
      MYSQL_PASSWORD: secret
    expose:
      - "3306"
x-aws-cloudformation:
  Resources:
    OsticketService:
      Properties:
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets:
              - subnet-093223fe760e52016 # public subnet-1
              - subnet-08120f88feb55e3f1 # public subnet-2
    DbService:
      Properties:
        NetworkConfiguration:
          AwsvpcConfiguration:
            Subnets:
              - subnet-0c68a298227d9c2e8 # private subnet-1
              - subnet-042cae15125ba9b1b # private subnet-2
As you can see in my Compose file, I have two services, osticket and db. In the docker compose convert CloudFormation output, they show up as OsticketService and DbService. The major problem is that each service in the CloudFormation template lists all of the VPC's subnets instead of the ones I provided in the Compose file. So, when I try to deploy it, I get the following error:
A load balancer cannot be attached to multiple subnets in the same Availability Zone (Service: AmazonElasticLoadBalancing; Status Code: 400; Error Code: InvalidConfigurationRequest; Request ID: 41880961-a4d5-4c15-9315-603acdef26f5; Proxy: null)
I'm not sure where I need to make the changes to get this working. Please let me know if you need to see the CloudFormation template and I will upload it.
Thank you.
I am new to Docker and have a problem I hope you can help with.
I have defined multiple services (HTTPSERV, IMED, etc.) in my docker-compose file. Each service contains Python code and has a Dockerfile to run it. The Dockerfile also copies the required files into a host path defined in docker-compose. HTTPSERV and IMED must share a text file and expose it to an external user sending a GET request to HTTPSERV.
In docker-compose I have defined a local host directory and bound it to a named volume. The services and Dockerfiles are meant to share each service's files and run.
As soon as I run docker-compose, the first service copies its files into the "src" host directory and changes the permissions of the "src" folder, preventing the other services from copying their files. This causes the next services to fail to find the appropriate files, and the whole orchestration process fails.
version: "3.9"
networks:
default:
ipam:
config:
- subnet: 172.28.0.2/20
services:
httpserv:
user: root
container_name: httpserver
build: ./HTTPSERV
volumes:
- myapp:/httpApp:rw
networks:
default:
ipv4_address: 172.28.0.5
ports:
- "8080:3000"
rabitQ:
user: root
container_name: rabitQ
image: rabbitmq:3.8-management
networks:
default:
ipv4_address: 172.28.0.4
ports:
- "9000:15672"
imed:
user: root
container_name: IMED-Serv
build: ./IMED
volumes:
- myapp:/imed:rw
networks:
- default
# restart: on-failure
orig:
user: root
container_name: ORIG-Serv
build: ./ORIG
volumes:
- myapp:/orig:rw
networks:
- default
# restart: on-failure
obse:
container_name: OBSE-Serv
build: ./OBSE
volumes:
- myapp:/obse:rw
networks:
- default
# restart: on-failure
depends_on:
- "httpserv"
links:
- httpserv
volumes:
myapp:
driver: local
driver_opts:
type: none
o: bind
device: /home/dockerfiles/hj/a3/src
The content of the Dockerfile is similar for most of the services and is as follows:
FROM python:3.8-slim-buster
WORKDIR /imed
COPY . .
RUN pip install --no-cache-dir -r imed-requirements.txt
RUN chmod 777 ./imed.sh
CMD ["./imed.sh"]
The code has root access, and the UserID and GroupID are set.
I also tried anonymous volumes, but the same issue happens.
In Docker it's often better to avoid "sharing files" as a first-class concept. Imagine running this in a clustered system like Kubernetes; if you have three copies of each service, and they're running in a cloud of a hundred systems, "sharing files" suddenly becomes difficult.
Given the infrastructure you've shown here, you have a couple of straightforward options:
Add an HTTP endpoint to update the file. You'd need to protect this endpoint in some way, since you don't want external callers accessing it; maybe filter it in an Nginx reverse proxy, or use a second HTTP server in the same process. The service that has the updated content would then call something like
r = requests.post('http://webserv/internal/file.txt', data=contents)
r.raise_for_status()
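For the "protect this endpoint" part, a sketch of the nginx filter mentioned above, assuming nginx fronts the web service and the internal endpoints live under /internal/ (a path invented for illustration):

location /internal/ {
    # Internal services reach the app directly over the Docker network,
    # so the proxy can refuse all external access to this path.
    deny all;
}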
Add an HTTP endpoint to the service that owns the file. When the Web server service starts up, and periodically after that, it makes a request
r = requests.get('http://imed/file.txt')
You already have RabbitMQ in this stack; add a RabbitMQ consumer in a separate thread, and post the updated file content to a RabbitMQ topic.
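A minimal sketch of that consumer, assuming the pika client library and a queue named file-updates (both illustrative, not part of the original stack):

import threading
import pika

def consume_updates():
    # Connect to the broker by its Compose service name.
    conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabitQ"))
    channel = conn.channel()
    channel.queue_declare(queue="file-updates")

    def on_message(ch, method, properties, body):
        # Overwrite the local copy with the content the producer published.
        with open("file.txt", "wb") as f:
            f.write(body)

    channel.basic_consume(queue="file-updates",
                          on_message_callback=on_message,
                          auto_ack=True)
    channel.start_consuming()

# Run the consumer alongside the web server in a background thread.
threading.Thread(target=consume_updates, daemon=True).start()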
There's some potential trouble with the "push" models if the Web server service restarts, since it won't be functional until the other service sends it the current version of the file.
If you really do want to do this with a shared filesystem, I'd recommend creating a dedicated directory and dedicated volume to do this. Do not mount a volume over your entire application code. (It will prevent docker build from updating the application, and in the setup you've shown, you're mounting the same application-code volume over every service, so you're running four copies of the same service instead of four different services.)
Adding in the shared volume, but removing a number of other unnecessary options, I'd rewrite the docker-compose.yml file as:
version: '3.9'
services:
  httpserv:
    build: ./HTTPSERV
    ports:
      - "8080:3000"
    volumes:
      - shared:/shared
  rabitQ:
    image: rabbitmq:3.8-management
    hostname: rabitQ # (RabbitMQ specifically needs this setting)
    ports:
      - "9000:15672"
  imed:
    build: ./IMED
    volumes:
      - shared:/shared
  orig:
    build: ./ORIG
  obse:
    build: ./OBSE
    depends_on:
      - httpserv
volumes:
  shared:
I'd probably have the producing service unconditionally copy the file into the volume on startup, and have the consuming service block on it being present. (Don't depend on Docker to initialize the volume; it doesn't work on Docker bind mounts, or on Kubernetes, or if the underlying image has changed.) So long as the files are world-readable the two services' user IDs don't need to match.
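A sketch of that "block until present" startup on the consuming side, assuming the shared volume is mounted at /shared and the producer writes /shared/file.txt (both paths illustrative):

import os
import time

path = "/shared/file.txt"
# Wait for the producer to copy the file into the shared volume
# before serving any requests that depend on it.
while not os.path.exists(path):
    time.sleep(1)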
How to run ory/hydra as a Docker container with a custom configuration file.
Is that the best way to run it, or is providing environment variables better?
Need help!!
I use a Dockerfile with this content:
FROM oryd/hydra:latest
COPY hydra.yml /home/ory/.hydra
COPY hydra.yml /.hydra
EXPOSE 4444 4445 5555
Only one of the COPY lines is needed, but I never know which :)
You can run it with environment variables as well, but I prefer the YAML config; it's easier.
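For reference, a sketch of the environment-variable route (hydra maps config keys to uppercase environment variables, e.g. serve.public.port becomes SERVE_PUBLIC_PORT; DSN=memory here is the in-memory database, just for illustration):

docker run --rm \
  -e DSN=memory \
  -p 4444:4444 -p 4445:4445 \
  oryd/hydra:latest serve all --dangerous-force-http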
Apart from the solution mentioned above, you can also try Docker Compose. Personally, I prefer this as I find it easy to follow and manage.
The official 5-minute tutorial also uses Docker Compose and can be found here: https://www.ory.sh/hydra/docs/5min-tutorial/
I am using PostgreSQL as the database here, but you can replace it with any of the other supported databases, such as MySQL.
More information about supported database configurations is here: https://www.ory.sh/hydra/docs/dependencies-environment/#database-configuration
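For instance, the equivalent MySQL DSN would look roughly like this (the credentials and host mirror the PostgreSQL setup below and are illustrative):

DSN=mysql://auth:secret@tcp(auth-db:3306)/auth?max_conns=20&max_idle_conns=4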
First, create a docker-compose.yml like the one below and then just run docker-compose up:
version: '3.7'
networks:
  intranet:
    driver: bridge
services:
  hydra-migrate:
    depends_on:
      - auth-db
    container_name: hydra-migrate
    image: oryd/hydra:v1.10.6
    environment:
      - DSN=postgres://auth:secret@auth-db:5432/auth?sslmode=disable&max_conns=20&max_idle_conns=4
    command: migrate sql -e --yes
    networks:
      - intranet
  hydra:
    container_name: hydra
    image: oryd/hydra:v1.10.6
    depends_on:
      - auth-db
      - hydra-migrate
    ports:
      - "4444:4444" # Public port
      - "4445:4445" # Admin port
      - "5555:5555" # Port for hydra token user
    command:
      serve -c /etc/hydra/config/hydra.yml all --dangerous-force-http
    restart: on-failure
    networks:
      - intranet
    volumes:
      - type: bind
        source: ./config
        target: /etc/hydra/config
  auth-db:
    image: postgres:alpine
    container_name: auth-db
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=auth
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=auth
    networks:
      - intranet
The hydra-migrate service takes care of the database migrations; we don't actually need to specify an external configuration file for it, and just the DSN as an environment variable suffices.
The hydra service starts up the hydra container; here I have done a volume mount, binding my local config folder to /etc/hydra/config within the container.
Then, when the container spins up, the following command is executed:
serve -c /etc/hydra/config/hydra.yml all --dangerous-force-http, which uses the bound config file.
And here is what my hydra config file looks like:
## ORY Hydra Configuration
version: v1.10.6
serve:
  public:
    cors:
      enabled: true
dsn: postgres://auth:secret@auth-db:5432/auth?sslmode=disable&max_conns=20&max_idle_conns=4
oidc:
  subject_identifiers:
    supported_types:
      - public
      - pairwise
    pairwise:
      salt: youReallyNeedToChangeThis
urls:
  login: http://localhost:4455/auth/login
  consent: http://localhost:4455/auth/consent
  logout: http://localhost:4455/consent
  error: http://localhost:4455/error
  post_logout_redirect: http://localhost:3000/
  self:
    public: http://localhost:4444/
    issuer: http://localhost:4444/
ttl:
  access_token: 1h
  refresh_token: 1h
  id_token: 1h
  auth_code: 1h
oauth2:
  expose_internal_errors: true
secrets:
  cookie:
    - youReallyNeedToChangeThis
  system:
    - youReallyNeedToChangeThis
log:
  leak_sensitive_values: true
  format: json
  level: debug
Having the following architecture:
Microservice 1 + DB (microservice1/docker-compose.yml)
Microservice 2 + DB (microservice2/docker-compose.yml)
Proxy (proxy/docker-compose.yml)
Which of the following options would be the best to deploy in the production environment?
Docker Compose overriding. Have a docker-compose file for each microservice and another for the proxy. At production deployment time, all the compose files would be merged into one with docker-compose -f microservice1/docker-compose.yml -f microservice2/docker-compose.yml -f proxy/docker-compose.yml up. This way, the proxy container, for example nginx, would have access to the microservices and could redirect to one or the other depending on the request.
Shared external network. Have a docker-compose file for each microservice and another for the proxy. First, an external network would have to be created to link the proxy container with the microservices: docker network create nginx_network. Then, in each docker-compose file, this network would be referenced in the necessary containers so that the proxy has visibility of the microservices and can use them in its configuration. An example is in the following link: https://stackoverflow.com/a/48081535/6112286.
The first option is simple, but offers little flexibility when configuring many microservices or applications, since the docker-compose files of all applications would need to be merged to generate the final configuration. The second option uses networks, which are a fundamental pillar of Docker, and does not require all the docker-compose files to be merged.
Of these two options, given the scenario of having several microservices and needing a single proxy to configure access, which would be the best? Why?
Thanks in advance.
There is a third approach, for example documented in https://www.bogotobogo.com/DevOps/Docker/Docker-Compose-Nginx-Reverse-Proxy-Multiple-Containers.php and https://github.com/Einsteinish/Docker-compose-Nginx-Reverse-Proxy-II/. The gist of it is to have the proxy join all the other networks. Thus, you can keep the other compose files, possibly from a software distribution, unmodified.
docker-compose.yml
version: '3'
services:
  proxy:
    build: ./
    networks:
      - microservice1
      - microservice2
    ports:
      - 80:80
      - 443:443
networks:
  microservice1:
    external:
      name: microservice1_default
  microservice2:
    external:
      name: microservice2_default
Proxy configuration
The proxy will refer to the hosts by their names microservice1_app_1 and microservice2_app_1, assuming the services are called app in directories microservice1 and microservice2.
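A sketch of what that proxy configuration might look like in nginx, assuming each app listens on port 80 inside its network and that routing is done by path (the port and paths are illustrative):

server {
    listen 80;

    # Route by path to the two compose projects' app containers,
    # resolved through Docker's embedded DNS.
    location /service1/ {
        proxy_pass http://microservice1_app_1:80/;
    }
    location /service2/ {
        proxy_pass http://microservice2_app_1:80/;
    }
}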
docker-compose is designed to orchestrate multiple containers in one single file. I do not know the content of your docker-compose files, but the right way is to write a single docker-compose.yml that could contain:
version: '3.7'
services:
  microservice1_app:
    image: ...
    volumes: ...
    networks:
      - service1_app
      - service1_db
  microservice1_db:
    image: ...
    volumes: ...
    networks:
      - service1_db
  microservice2_app:
    image: ...
    volumes: ...
    networks:
      - service2_app
      - service2_db
  microservice2_db:
    image: ...
    volumes: ...
    networks:
      - service2_db
  nginx:
    image: ...
    volumes: ...
    networks:
      - default
      - service1_app
      - service2_app
volumes:
  ...
networks:
  service1_app:
  service1_db:
  service2_app:
  service2_db:
  default:
    name: proxy_frontend
    driver: bridge
In this way the nginx container is able to communicate with the microservice1_app container through the microservice1_app hostname. If other hostnames are needed, they can be configured with the aliases subsection within a service's networks section, as sketched below.
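A sketch of that aliases subsection, giving microservice1_app an extra DNS name on its shared network (the alias service1 is invented for illustration):

services:
  microservice1_app:
    networks:
      service1_app:
        aliases:
          - service1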
Security Bonus
In the above configuration, microservice1_db is only visible to microservice1_app (same for microservice2), and nginx is only able to see microservice1_app and microservice2_app while being reachable from outside of Docker (bridge mode).
I have the following docker-compose file content:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    links:
      - search-svc
networks:
  docker_app-network:
    external: true
external_links:
  -search-svc
Basically what I'm trying to do is link the 'local-app' container with the already running 'search-svc' container. When I run docker-compose I get the following error:
The Compose file './docker-compose.yaml' is invalid because:
Invalid top-level property "external_links". Valid top-level sections for this Compose file are: secrets, version, volumes, services, configs, networks, and extensions starting with "x-". You might be seeing this error because you're using the wrong Compose file version. Either specify a supported version (e.g "2.2" or "3.3") and place your service definitions under the services key, or omit the version key and place your service definitions at the root of the file to use version 1.
I have read the documentation but I can't find any solution to my problem.
Can anyone suggest anything that might help?
Thanks in advance
YAML files are whitespace-sensitive. You tried to define external_links at the top level of the file rather than as part of a service. This should be syntactically correct:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    links:
      - search-svc
    external_links:
      - search-svc
networks:
  docker_app-network:
    external: true
That said, linking is deprecated in Docker; it is preferred to use a common network (excluding the default bridge network named bridge) and rely on the integrated DNS server for service discovery. It looks like you have defined your common network but didn't use it. This would place your service on that network and rely on DNS:
version: '3.4'
services:
  local-app:
    build: ./app/
    command: node app
    ports:
      - '7001:7001'
    networks:
      - docker_app-network
networks:
  docker_app-network:
    external: true