I'm using Docker on AWS ECS. I have one EC2 machine running the AWS ECS container agent, and the ECS task consists of 3 containers:
nginx container
application-nodejs container
staticfiles-nodejs-application container.
I want to support very heavy traffic. Do I need to set up an AWS Load Balancer, or is my nginx upstream configuration enough?
nginx conf example:
upstream appwww {
    server app-www:3000;
}
server {
    server_name my.home.net;

    location / {
        proxy_pass http://appwww;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }

    listen 443 ssl http2; # managed by Certbot
    ssl_certificate......; # managed by Certbot
    ssl_certificate_key........ # managed by Certbot
    include /.......# managed by Certbot
    ssl_dhparam /.....pem; # managed by Certbot
}
server {
    if ($host = my.host.net) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name my.host.net;
    return 404; # managed by Certbot
}
Edit
I drew the current architecture and I want to add a load balancer. Where should I put it? Does auto scaling fit this drawing? Should I use one or more EC2 machines? Multiple containers? Multiple upstreams?
I suggest you start by using a load balancer, because:
you can configure SSL and terminate it at the load balancer
you can protect yourself from malicious attacks by configuring the load balancer to integrate with AWS WAF
you could easily add more targets in the future
without a load balancer you have to configure SSL at the application level
it supports health checks
you get a free ACM certificate to use with the load balancer (a request sketch follows below)
SSL certificate renewal is easy (ACM renews its public certificates automatically)
Note: consider using AWS S3 and CloudFront to serve your static content
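For the ACM point, requesting a free public certificate to attach to the load balancer's HTTPS listener is a single call; a minimal sketch, assuming DNS validation and the domain from your config:
aws acm request-certificate \
    --domain-name my.home.net \
    --validation-method DNS
The command returns a certificate ARN, which you attach to the ALB's HTTPS listener once the DNS validation record is in place.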
Introducing a load balancer to your existing architecture
The Application Load Balancer now supports host-based routing, which makes it possible to have multiple domains (or subdomains) pointing to multiple websites. In addition to host-based routing it also supports path-based routing: for example, mydomain.com/web1 can point to website1 while mydomain.com/web2 points to website2.
I can't think of a reason why you would need to use nginx (unless I am missing something).
So, to answer your question, I would do it this way:
introduce an Application Load Balancer
deploy multiple containers via ECS (Fargate)
for each service, have a dedicated target group to manage scaling and health checks
finally, use host-based routing, with s1.mydomain.com, s2.mydomain.com each pointing to a different target group (one per service), as sketched below
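A hedged sketch of one such host-based routing rule with the AWS CLI (the ARNs are placeholders; s1.mydomain.com is just the example subdomain above):
# Route requests whose Host header is s1.mydomain.com to that service's target group
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:...:listener/app/my-alb/... \
    --priority 10 \
    --conditions Field=host-header,Values=s1.mydomain.com \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/s1-service/...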
Reference:
https://aws.amazon.com/blogs/aws/new-host-based-routing-support-for-aws-application-load-balancers/
Hope this helps.
As you are saying that
I want to support very heavy traffic.
I would expect that at some point you will need to scale your AWS ECS cluster horizontally to multiple instances and at that point, you will need an Elastic Load Balancer to balance your traffic between them.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide///service-load-balancing.html
If this statement is true
I want to support very heavy traffic
In addition to ECS tasks, you need to read about different concepts within AWS ECS:
Services
Application Load Balancer
Listeners
Target groups
Auto Scaling (because you're going to handle heavy traffic)
In order to use AWS ECS properly you need to use those services together; a rough sketch of how they connect follows.
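A hedged sketch with the AWS CLI (every name and ARN below is a hypothetical placeholder, not taken from your setup):
# Create an ECS service whose tasks register into an ALB target group
aws ecs create-service \
    --cluster my-cluster \
    --service-name web \
    --task-definition web:1 \
    --desired-count 2 \
    --load-balancers targetGroupArn=arn:aws:elasticloadbalancing:...:targetgroup/web/...,containerName=nginx,containerPort=443
# Register the service's desired count with Application Auto Scaling
aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/my-cluster/web \
    --min-capacity 2 \
    --max-capacity 10
The listener and target group sit in front of the service, and a scaling policy attached to the scalable target adjusts the task count as traffic grows.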
Related
In the past I tried setting up JFrog Artifactory OSS and was able to expose it outside my home network through my reverse proxy. I could push to it via my computer's local CLI and through Drone CI, but it took an abnormally long time (roughly 5 minutes) to push to my own registry, while pushing to Docker Hub or GitLab took a matter of seconds.
My container image is really small (think MBs) and I never have any issues pushing it to any other remote registry. Until now I always thought it might have been the registry itself and the fact that it was running on an old machine.
I recently discovered that my git solution, Gitea, has a registry built in, so I did the same thing: I got everything set up and mapped, and once again it took an abnormally long time (roughly 5 minutes) to push to my own registry (this time backed by Gitea).
This leads me to think my issue is Nginx Proxy Manager related. I found some documentation online, but it was really general and vague; I have my current proxy config below and it still has the issue. Could anyone point me in the right direction? I have also included a few other posts related to this issue.
server {
    set $forward_scheme http;
    set $server "192.168.X.XX";
    set $port 3000;

    listen 8080;
    #listen [::]:8080;
    listen 4443 ssl http2;
    #listen [::]:4443;
    server_name my.domain.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /etc/letsencrypt/live/npm-47/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/npm-47/privkey.pem;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-10_access.log proxy;
    error_log /data/logs/proxy-host-10_error.log warn;

    # Additional fields I added on top of the default Nginx Proxy Manager config
    proxy_buffering off;
    proxy_ignore_headers "X-Accel-Buffering";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # Proxy!
        include conf.d/include/proxy.conf;
    }

    # Custom
    include /data/nginx/custom/server_proxy[.]conf;
}
I also checked the live logs for Gitea and I can see the requests coming in in real time and being processed really fast, but there is always a significant delay before it receives the next request, which makes me think Nginx Proxy Manager is not forwarding the requests correctly or there is some setting I missed. Any help would be greatly appreciated!
Some of the settings I tried came from the sources below:
Another registry
Another stack overflow suggestion
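For reference, the directives these kinds of posts usually suggest adding inside the proxy host's server block look roughly like this (a hedged sketch; I haven't confirmed they fix the delay described above):
# Don't cap upload size, and stream request bodies straight to the backend
# instead of buffering each image layer to disk first.
client_max_body_size 0;
proxy_request_buffering off;
# Give large blob uploads more time before the proxy gives up.
proxy_read_timeout 900;
proxy_send_timeout 900;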
Here's my goal:
admin.domain.com is where we have a Magento 2 instance set up. It's locked down in Nginx to a whitelist of IPs.
api.domain.com has its own whitelist, and it ultimately goes to admin.domain.com/rest/..., preferably without the requester being able to see that.
The idea is to enforce that all API integrations go through the api subdomain, and to hide our admin domain entirely. Note - this is inside a Docker container, not directly on a server.
Currently, I am attempting to accomplish this using proxy_pass and setting the allow and deny blocks accordingly. Here is a snippet of our Nginx configs:
server {
    server_name admin.domain.com;
    # other stuff

    location ~ /(index.php/rest|rest) {
        allow $DOCKER_IP; # Seems to come from Docker Gateway IP as of now
        deny all;
        # other stuff
    }

    location / {
        # other stuff
    }
}
server {
    server_name api.domain.com;
    # other stuff

    location ~ /(index.php/rest|rest) {
        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }

    location / {
        return 403;
    }
}
In theory, this should work. From testing this I noticed that all requests to api.domain.com are forwarded to admin.domain.com, and admin sees the Docker container's gateway IP as the source IP of the request. So I can add the gateway IP in the allow $DOCKER_IP line. The main problem here is finding a dependable way to get this IP, since it changes every time the container is recreated (on each release).
Alternatively, if there's a simpler way to do this, I would prefer that. I'm trying not to over-complicate this, but I'm a little over my head here with Nginx configurations.
So, my questions are:
Am I way over-complicating this, and is there a different approach I should look into?
If not, is there a dependable way to get the Docker container's gateway IP in Nginx, or maybe in the entrypoint, so that I can set it as a variable and place it into the nginx config?
Since the Docker container is ephemeral and the IP can change every time (and it's very hard to pass the user's real IP address all the way through a proxy to the Docker container), it may be a lot simpler to control this with code.
I'd create a new module with a config value for the IP address, which would allow you to edit the IP address from the admin. This is architecturally more scalable as you don't need to rely on a hard-coded IP.
Within this module you'll want to create an event observer on something like the controller_action_predispatch event. You can detect an admin route, and check/prevent access to that route based on the value of the configuration object for the IP address. This way you aren't relying on Docker at all and you would have an admin-editable value to control the IP address/range.
This is how I have solved this for now. I'm still interested in better solutions if possible, but for now this is what I'm doing.
This is a snippet of the Nginx config for the API domain. It has its own whitelist for API access and then reverse proxies to the real domain where M2 is hosted.
server {
    server_name api.domain.com;
    # other stuff

    location ~ /(index.php/rest|rest) {
        # specific whitelist for API access
        include /etc/nginx/conf.d/api.whitelist;

        proxy_set_header Host admin.domain.com;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass https://admin.domain.com;
    }

    location / {
        return 403;
    }
}
And then on the final domain (admin.domain.com) we use this location block to only allow traffic to the API (/rest) that comes from the proxy, so nobody can request our API directly on this domain.
server {
    server_name admin.domain.com;
    # other stuff

    location ~ /(index.php/rest|rest) {
        include /etc/nginx/conf.d/proxy.whitelist;
        allow $DOCKER_IP; # Seems to come from Docker Gateway IP as of now
        deny all;
        # other stuff
    }

    location / {
        # other stuff
    }
}
So, in order to restrict the proxied traffic, the file /etc/nginx/conf.d/proxy.whitelist is generated in the entrypoint.sh of the Docker container. I'm using a template file, proxy.whitelist.template, that looks like this:
# Docker IP
allow $DOCKER_IP;
I did this because we already have a couple of other hard-coded IPs in that file.
Then, in the entrypoint I use the following to find the gateway IP of the Docker container:
export DOCKER_IP=$(route -n | awk '{if($4=="UG")print $2}')
envsubst < "/etc/nginx/conf.d/proxy.whitelist.template" > "/etc/nginx/conf.d/proxy.whitelist"
And so far that seems to be working for me.
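If the image doesn't include the legacy route tool, a hedged alternative using iproute2 should give the same value:
# Same idea with iproute2: print the gateway of the default route
export DOCKER_IP=$(ip route | awk '/^default/ {print $3}')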
I have a frontend Angular application running on an AWS ECS EC2 instance, and it is connected to TCP ports 443 and 80 of a Network Load Balancer. I will have many vhosts configured on this nginx Docker container, with multiple domain names. In the ECS service, the container port to load balance is given as 443; we have to choose either port 443 or 80 of the container to load balance (https://prnt.sc/pocu41). Over HTTPS the site loads fine, but over HTTP I get the error
The plain HTTP request was sent to HTTPS port
I am planning to use the SSL certificate on the Docker container and not SSL on the load balancer. If I choose SSL on the load balancer, then we would need a multi-domain SSL certificate as the Application Load Balancer's default certificate, which may not be feasible when there are hundreds of domains.
My Nginx conf looks like this:
server {
    listen 80;
    server_name example.com;

    root /usr/share/nginx/html/docroot;
    index index.html index.htm;
    include /etc/nginx/mime.types;

    gzip on;
    gzip_min_length 1000;
    gzip_proxied expired no-cache no-store private auth;
    gzip_types text/plain text/css application/json application/javascript application/x-javascript text/xml application/xml application/xml+rss text/javascript;

    location / {
        try_files $uri $uri/ /index.html;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    ssl on;
    ssl_certificate /etc/nginx/ssl/example.com/example.com.crt;
    ssl_certificate_key /etc/nginx/ssl/example.com/example.com.key;
    server_name example.com;

    root /usr/share/nginx/html/docroot;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
Any idea how we can solve this scenario?
I am planning to use the SSL certificate on the Docker container and not SSL on the load balancer. If I choose SSL on the load balancer, then we would need a multi-domain SSL certificate as the Application Load Balancer's default certificate, which may not be feasible when there are hundreds of domains.
This assumption does not seem correct: you can attach a wildcard (*) certificate to the load balancer, or configure multiple certificates from ACM as well. You can use AWS ACM with the load balancer, and it is totally free of cost, so why bother managing SSL at the application level? And why open port 80 at the application level when you can do the redirect with an Application Load Balancer, if an NLB is not a requirement?
AWS Certificate Manager Pricing
Public SSL/TLS certificates provisioned through AWS Certificate Manager are free. You pay only for the AWS resources you create to run your application.
certificate-manager-pricing
Second, is there any special reason for using an NLB? For a web application I would not go for a Network Load Balancer; an NLB makes sense for TCP-level communication. I would go for an Application Load Balancer for HTTP communication, which provides advanced routing such as host-based routing, redirects, and path-based routing, and which could remove the need for Nginx.
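For instance, the HTTP-to-HTTPS redirect can live on the ALB itself, so the container never needs to expose port 80; a hedged sketch with the AWS CLI (the load balancer ARN is a placeholder):
# Listener on port 80 that permanently redirects everything to HTTPS on 443
aws elbv2 create-listener \
    --load-balancer-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/... \
    --protocol HTTP \
    --port 80 \
    --default-actions 'Type=redirect,RedirectConfig={Protocol=HTTPS,Port=443,StatusCode=HTTP_301}'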
Containers are designed for lightweight tasks, and AWS recommends around 300-500 MB of memory, with similar recommendations for CPU.
Do you know the cost of SSL termination at container level?
SSL traffic can be compute intensive since it requires encryption and decryption of traffic. SSL relies on public key cryptography to encrypt communications between the client and server sending messages safely across networks.
Advantage of SSL termination at LB level
SSL termination at load balancer is desired because decryption is resource and CPU intensive. Putting the decryption burden on the load balancer enables the server to spend processing power on application tasks, which helps improve performance. It also simplifies the management of SSL certificates.
new-tls-termination-for-network-load-balancers
ssl-termination
10-tips-to-improve-the-performance-of-your-aws-application
So, based on this, I am not going to answer the stated problem directly; the approach suggested by @Linpy may help if you still want to go that way, and you can also look at dealing-with-nginx-400-the-plain-http-request-was-sent-to-https-port-error
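If you do keep TLS on the container while the NLB forwards both listener ports to the container's port 443, the workaround discussed in that link is roughly the following (a hedged sketch to merge into the existing 443 server block, not a drop-in config):
server {
    listen 443 ssl;
    # ... certificates and site configuration as in the question ...
    # 497 is nginx's non-standard code for "plain HTTP request was sent to HTTPS port";
    # turn such requests into a redirect to the HTTPS version of the same URL.
    error_page 497 =301 https://$host$request_uri;
}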
I have multiple NGINX-uWSGI based Django applications deployed using Docker and hosted on EC2 (currently at different ports like 81, 82, ...). Now I wish to add subdomains so that sub1.domain.com and sub2.domain.com will both work from the same EC2 instance.
I am fine with multiple ports, but mappings like these don't work via DNS settings alone:
sub1.domain.com -> 1.2.3.4:81
sub2.domain.com -> 1.2.3.4:82
What I cannot do:
Multiple IPs ref: allocating a new IP for each deployed subdomain is not possible.
NGINX Proxy ref: this looks like the ideal solution, but it is not maintained by an org like Docker or NGINX, so I am unsure of its security and reliability.
What I am considering:
I am considering writing my own NGINX reverse proxy, similar to Apache Multiple Sub Domains With One IP Address, but then traffic will flow through multiple proxies, since there is already an NGINX-uWSGI proxy in the tech stack.
You can use an nginx upstream:
upstream backend {
    server backend1.example.com weight=5;
    server backend2.example.com:8080;
    server unix:/tmp/backend3;
    server backup1.example.com:8080 backup;
    server backup2.example.com:8080 backup;
}
server {
    server_name sub.test.com www.sub.test.com;

    location / {
        proxy_pass http://backend;
    }
}
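More specifically for your layout, a hedged sketch of the same idea with one server block per subdomain, each proxying to the local port the corresponding app already listens on (81 and 82 are taken from your question; 127.0.0.1 assumes nginx runs on the same host):
server {
    listen 80;
    server_name sub1.domain.com;
    location / {
        proxy_pass http://127.0.0.1:81;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
server {
    listen 80;
    server_name sub2.domain.com;
    location / {
        proxy_pass http://127.0.0.1:82;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
All subdomain DNS records point at the same IP; nginx picks the server block from the Host header, so you don't need a separate proxy project for this.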
We use Docker Swarm with service discovery for a backend REST application. The services in the swarm are configured with endpoint_mode: vip and run in global mode. Nginx proxy-passes to the service discovery aliases. When we update the backend services, nginx sometimes throws a 502, as service discovery may still point to the service being updated.
In such cases, we want to retry the same endpoint again. How can we achieve this?
According to this, we added an upstream with the host's private IP and used proxy_next_upstream error timeout http_502; but the problem still persists.
nginx.conf
upstream servers {
    server 192.168.1.2:443; #private ip of host machine
    server 192.168.1.2:443 backup;
}
server {
    listen 443 ssl http2 default_server;
    listen [::]:443 ssl http2 default_server;

    proxy_next_upstream http_502;

    location /endpoint1 {
        proxy_pass http://docker.service1:8080/endpoint1;
    }
    location /endpoint2 {
        proxy_pass http://docker.service2:8080/endpoint2;
    }
    location /endpoint3 {
        proxy_pass http://docker.service3:8080/endpoint3;
    }
}
Here, if http://docker.service1:8080/endpoint1 throws a 502, we want to hit http://docker.service1:8080/endpoint1 again.
Additional queries:
Is there any way in Docker Swarm to stop service discovery from pointing to a service that is being updated until that service is fully up?
Is the upstream necessary here, since we use Docker service discovery directly?
I suggest you add a health check directly at the container level (here).
By doing so, Docker periodically pings an endpoint you specify; if a container is found unhealthy, it will 1) stop routing traffic to it and 2) kill the container and start a new one. Therefore your upstream will resolve to one of the healthy containers. No need to retry.
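A hedged sketch of adding such a health check to an existing swarm service from the CLI (the service name and the /health endpoint are hypothetical, and curl must exist in the image):
# Swarm only routes traffic to replicas whose health command keeps succeeding
docker service update \
    --health-cmd "curl -f http://localhost:8080/health || exit 1" \
    --health-interval 10s \
    --health-timeout 3s \
    --health-retries 3 \
    service1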
As for your additional questions: on the first one, Docker won't start routing to a container until it is healthy. On the second, nginx is still useful to distribute traffic according to the endpoint URL. But personally I don't think nginx + swarm vip mode is a great choice, because the swarm load balancer is poorly documented, it doesn't support sticky sessions, and you can't have proxy-level health checks; I would use Traefik instead, since it has its own load balancer.