I have the following configuration for my stack:
api:
  deployment_strategy: every_node
  environment:
    - 'DATABASE_URL=postgresql://.....'
    - REDIS_HOST=redis
  image: 'image/image:latest'
  links:
    - redis
  ports:
    - '5000:5000'
lb:
  image: 'dockercloud/haproxy:latest'
  links:
    - api
  ports:
    - '80:80'
  privileged: true
  roles:
    - global
and this is the HAProxy output:
2017-05-26T12:00:51.752500376Z INFO:haproxy:dockercloud/haproxy 1.6.6 has access to the Docker Cloud API - will reload list of backends in real-time
2017-05-26T12:00:51.752599249Z INFO:haproxy:dockercloud/haproxy PID: 5
2017-05-26T12:00:51.883065649Z INFO:haproxy:=> Add task: Websocket open
2017-05-26T12:00:52.884078353Z INFO:haproxy:=> Executing task: Websocket open
2017-05-26T12:00:52.884105435Z INFO:haproxy:==========BEGIN==========
2017-05-26T12:00:53.364820267Z INFO:haproxy:Linked service: API(d73c0091-ae4f-43b8-a3a8-ea11a276652e)
2017-05-26T12:00:53.364872613Z INFO:haproxy:Linked container: API_1(3f981340-9b04-4105-8876-2ad1e5521f5c)
2017-05-26T12:00:53.365695674Z INFO:haproxy:HAProxy configuration:
2017-05-26T12:00:53.365705363Z global
2017-05-26T12:00:53.365708753Z log 127.0.0.1 local0
2017-05-26T12:00:53.365712075Z log 127.0.0.1 local1 notice
2017-05-26T12:00:53.365715245Z log-send-hostname
2017-05-26T12:00:53.365718228Z maxconn 4096
2017-05-26T12:00:53.365721207Z pidfile /var/run/haproxy.pid
2017-05-26T12:00:53.365724305Z user haproxy
2017-05-26T12:00:53.365727513Z group haproxy
2017-05-26T12:00:53.365730447Z daemon
2017-05-26T12:00:53.365733783Z stats socket /var/run/haproxy.stats level admin
2017-05-26T12:00:53.365736704Z ssl-default-bind-options no-sslv3
2017-05-26T12:00:53.365746260Z ssl-default-bind-ciphers xxxxxx
2017-05-26T12:00:53.365752089Z defaults
2017-05-26T12:00:53.365755064Z balance roundrobin
2017-05-26T12:00:53.365758035Z log global
2017-05-26T12:00:53.365761046Z mode http
2017-05-26T12:00:53.365764045Z option redispatch
2017-05-26T12:00:53.365767032Z option httplog
2017-05-26T12:00:53.365769951Z option dontlognull
2017-05-26T12:00:53.365775842Z option forwardfor
2017-05-26T12:00:53.365780388Z timeout connect 5000
2017-05-26T12:00:53.365793420Z timeout client 50000
2017-05-26T12:00:53.365796603Z timeout server 50000
2017-05-26T12:00:53.365799585Z listen stats
2017-05-26T12:00:53.365802356Z bind :1936
2017-05-26T12:00:53.365805270Z mode http
2017-05-26T12:00:53.365808233Z stats enable
2017-05-26T12:00:53.365811235Z timeout connect 10s
2017-05-26T12:00:53.365814235Z timeout client 1m
2017-05-26T12:00:53.365817155Z timeout server 1m
2017-05-26T12:00:53.365827005Z stats hide-version
2017-05-26T12:00:53.365830160Z stats realm Haproxy\ Statistics
2017-05-26T12:00:53.365833322Z stats uri /
2017-05-26T12:00:53.365837063Z stats auth stats:stats
2017-05-26T12:00:53.365839909Z frontend default_port_80
2017-05-26T12:00:53.365842760Z bind :80
2017-05-26T12:00:53.365845760Z reqadd X-Forwarded-Proto:\ http
2017-05-26T12:00:53.365848857Z maxconn 4096
2017-05-26T12:00:53.365851745Z default_backend default_service
2017-05-26T12:00:53.365854664Z backend default_service
2017-05-26T12:00:53.365857581Z server API_1 10.7.0.2:5000 check inter 2000 rise 2 fall 3
2017-05-26T12:00:53.365886854Z INFO:haproxy:Launching HAProxy
2017-05-26T12:00:53.368391859Z INFO:haproxy:HAProxy has been launched(PID: 12)
2017-05-26T12:00:53.368498117Z INFO:haproxy:===========END===========
When I access the HAProxy IP, I get ERR_CONNECTION_REFUSED in Chrome and the API service logs stay empty, but when I access HAProxy on port 5000, the request does hit my API.
I find this very strange, because I thought HAProxy would do this routing for me. Am I missing something? Maybe bind 80:5000?
This very simple example is working for me:
api:
  image: nginx
lb:
  image: 'dockercloud/haproxy:latest'
  links:
    - api
  ports:
    - '80:80'
  privileged: true
(without the roles part, because I'm not using Docker Cloud)
...
lb_1 | INFO:haproxy:HAProxy has been launched(PID: 13)
lb_1 | INFO:haproxy:===========END===========
api_1 | 172.17.0.3 - - [26/May/2017:12:40:36 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36" "172.17.0.1"
...
maybe bind 80:5000?
You shouldn't need to. Accessing :80 should be enough; HAProxy does the rest.
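As a sanity check (just a sketch; <haproxy-node-ip> is a placeholder for the node running the lb container), a plain request to port 80 should be proxied through to API_1 on 10.7.0.2:5000:

  # hit the HAProxy frontend; default_backend default_service forwards to the API container
  curl -v http://<haproxy-node-ip>/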
Related
We have an ASP.NET Core 6 web API inside a Docker image, with HAProxy in front of it directing traffic. The problem is that when we want to change the version of the .NET Core image, we have to stop HAProxy before we can docker-compose up the new image. Something along the lines of:
docker-compose down
systemctl stop haproxy
docker-compose up -d
systemctl start haproxy
because without stopping HAProxy we get a Docker error: Error starting userland proxy: listen tcp4 11.11.0.30:31079: bind: address already in use. Or everything seems fine, but if you curl an endpoint on the .NET Core API the request keeps running; curl -v returns
* Trying 11.11.0.30:31079...
* TCP_NODELAY set
From the logs inside Docker we saw that some requests do get in from the outside world, but only about 0.1% of the total load.
The weird thing is that our sidecar dotnet-monitor image doesn't have these issues.
Side note: the main .NET Core image has the Prometheus .NET library inside, which uses $env:metrics_port to expose metrics data for internal use; that's why we use ASPNETCORE_URLS in docker-compose.
docker-compose.yml
version: '3.6'
services:
  collector:
    image: ${COLLECTOR_IMG}
    restart: always
    command: --urls "http://*:5003;http://*:5004"
    container_name: collector
    environment:
      metrics_port: 5004
    ports:
      - "11.11.0.30:31079:5003"
      - "11.11.0.30:52326:5004"
    sysctls:
      - "net.ipv6.conf.all.disable_ipv6=1"
    networks:
      collector-network:
        ipv4_address: 162.30.337.10
    volumes:
      - dotnet-tmp:/tmp
  dotnet-monitor:
    image: ${MONITOR_IMG}
    restart: always
    command: --no-auth1 --urls http://*:52324
    container_name: dotnet-monitor
    ports:
      - "11.11.0.30:52323:52324"
    networks:
      collector-network:
        ipv4_address: 162.30.337.20
    volumes:
      - dotnet-tmp:/tmp
networks:
  collector-network:
    name: collector-network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 162.30.337.0/24
volumes:
  dotnet-tmp:
    external: false
haproxy config
global
    log /dev/log local0 notice alert
    # log /dev/log local1 notice alert
    maxconn 400000
    chroot /var/lib/haproxy
    stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    option dontlognull
    retries 3
    timeout connect 10s
    timeout client 25s
    timeout server 25s
    maxconn 400000

frontend collector
    bind 145.81.37.211:80
    mode tcp
    option tcplog
    use_backend collector

backend collector
    mode tcp
    balance roundrobin
    server server1 11.11.0.30:31079 check

frontend monitor
    bind 145.81.37.211:52323
    mode tcp
    option tcplog
    use_backend monitor

backend monitor
    mode tcp
    balance roundrobin
    server server2 11.11.0.30:52323 check

listen stats
    bind 11.11.0.30:1936
    option http-use-htx
    mode http
    option forwardfor
    http-request use-service prometheus-exporter if { path /metrics }
    stats enable
    stats hide-version
    stats refresh 30s
    stats show-node
    stats auth admin:bz74ZGws4eJcAmq
    stats uri /stats
I'm trying to load balance a simple Node.js app with 3 instances using docker-compose & nginx. This configuration works on my local machine (a Windows laptop) but doesn't seem to work on an EC2 server.
nginx.conf
http {
    upstream all {
        server nodeapp1:4100;
        server nodeapp2:4200;
        server nodeapp3:4300;
    }

    server {
        listen 8080;
        location / {
            proxy_pass http://all/;
        }
    }
}

events { }
docker-compose.yml
version: '3'
services:
  lb:
    image: nginx
    volumes:
      - ./nginxproxy/nginx.conf:/etc/nginx/nginx.conf
    ports:
      - "3000:8080"
  nodeapp1:
    image: nodeapp
    environment:
      - PORT=4100
    ports:
      - "4100:4100"
  nodeapp2:
    image: nodeapp
    environment:
      - PORT=4200
    ports:
      - "4200:4200"
  nodeapp3:
    image: nodeapp
    environment:
      - PORT=4300
    ports:
      - "4300:4300"
I'm new to Docker, and I'm surprised that this works locally but not on the EC2 instance. The load balancer resolves the upstream URL correctly, but it still says connection refused.
Error:
2022/02/28 20:00:22 [error] 33#33: *9 connect() failed (111: Connection refused) while
connecting to upstream, client: 62.113.237.40, server: , request: "GET / HTTP/1.1",
upstream: "http://172.121.0.5:4100/", host: "18.121.121.23:3000"
For me, neither the service name nor the container IP address worked; the only thing that worked was using the gateway IP of the network, which for the default bridge is 172.17.0.1.
In the server entries, use (gateway IP):(published port of the container), and with this HAProxy connects successfully.
My example with a custom network, fixed IPs and a gateway:
---- nginx config
upstream loadbalancer {
    server 172.17.0.1:8001 weight=5;
    server 172.17.0.1:8002 weight=5;
}
----- haproxy config similar
backend be_pe_8545
    mode http
    balance roundrobin
    server p1 172.20.0.254:18545 check inter 10s
    server p2 172.20.0.254:28545 check inter 10s
----- docker app / network
docker_app: ...
  networks:
    public_network:
      ipv4_address: 172.20.0.50

public_network:
  name: public_network
  driver: bridge
  ipam:
    driver: default
    config:
      - subnet: 172.20.0.0/24
        gateway: 172.20.0.254
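For reference, a fuller version of that compose fragment might look like the sketch below (the service name docker_app and its image are placeholders; the network values are the ones from the snippet above):

services:
  docker_app:
    image: my-app:latest            # placeholder image
    networks:
      public_network:
        ipv4_address: 172.20.0.50

networks:
  public_network:
    name: public_network
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.20.0.0/24
          gateway: 172.20.0.254     # the address used in the haproxy/nginx upstreams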
I currently have a Traefik instance that's being run using the following. It works fine forwarding HTTP connections to the appropriate backends.
services_lb:
  image: traefik:v2.2
  cmd: |
    --entrypoints.web.address=:80
    --entrypoints.websecure.address=:443
    --entrypoints.web.http.redirections.entryPoint.to=websecure
    --entrypoints.web.http.redirections.entryPoint.scheme=https
    --entrypoints.web.http.redirections.entrypoint.permanent=true
    --entrypoints.matrixfederation.address=:8448
    --entrypoints.prosodyc2s.address=:5222
    --entrypoints.prosodys2s.address=:5269
    --providers.docker
    --providers.docker.constraints=Label(`lb.net`,`services`)
    --providers.docker.network=am-services
    --certificatesresolvers.lec.acme.email=notify#battlepenguin.com
    --certificatesresolvers.lec.acme.storage=/letsencrypt/acme.json
    --certificatesresolvers.lec.acme.tlschallenge=true
    --entryPoints.web.forwardedHeaders.trustedIPs=172.50.0.1/24
  ports:
    - 80
    - 443
    # Matrix
    - 8448
    # XMPP
    - 5222
    - 5269
My web and Matrix federation connections work fine as they're all HTTP. But for Prosody (XMPP) I need to forward 5222 and 5269 directly without any HTTP routing. I configured the container like so:
xmpp:
  image: prosody/prosody:0.11
  network:
    - services
    - database
  labels:
    lb.net: services
    traefik.tcp.services.prosodyc2s.loadbalancer.server.port: "5222"
    traefik.tcp.services.prosodys2s.loadbalancer.server.port: "5269"
    traefik.http.routers.am-app-xmpp.entrypoints: "websecure"
    traefik.http.routers.am-app-xmpp.rule: "Host(`xmpp.example.com`)"
    traefik.http.routers.am-app-xmpp.tls.certresolver: "lec"
    traefik.http.services.am-app-xmpp.loadbalancer.server.port: "5280"
  volumes:
    - prosody-config:/etc/prosody:rw
    - services_certs:/certs:ro
    - prosody-logs:/var/log/prosody:rw
    - prosody-modules:/usr/lib/prosody-modules:rw
With the tcp services, I still can't get Traefik to forward the raw TCP connections to this container. I've tried removing the --entrypoints from the Traefik instance and of course, Traefik stopped listening on those ports. I assumed the traefik.tcp.service definition would cause that entrypoint to switch to a TCP passthrough mode, but that isn't the case. I couldn't see anything in the Traefik documentation on putting the entrypoint itself into TCP mode instead of HTTP mode.
How do I pass the raw TCP connection from Traefik to this particular container using labels on the container and CLI options for Traefik?
I figured it out. You can't use standard Traefik TLS offloading here, due to the differences in how Traefik and Prosody handle TLS. I had to disable TLS entirely and use the special HostSNI(`*`) rule below to allow straight pass-throughs. I was also missing the routers that connect the Traefik entrypoints to the TCP services.
labels:
  lb.net: services
  # client to server
  traefik.tcp.routers.prosodyc2s.entrypoints: prosodyc2s
  traefik.tcp.routers.prosodyc2s.rule: HostSNI(`*`)
  traefik.tcp.routers.prosodyc2s.tls: "false"
  traefik.tcp.services.prosodyc2s.loadbalancer.server.port: "5222"
  traefik.tcp.routers.prosodyc2s.service: prosodyc2s
  # server to server
  traefik.tcp.routers.prosodys2s.entrypoints: prosodys2s
  traefik.tcp.routers.prosodys2s.rule: HostSNI(`*`)
  traefik.tcp.routers.prosodys2s.tls: "false"
  traefik.tcp.services.prosodys2s.loadbalancer.server.port: "5269"
  traefik.tcp.routers.prosodys2s.service: prosodys2s
  # web
  traefik.http.routers.am-app-xmpp.entrypoints: "websecure"
  traefik.http.routers.am-app-xmpp.rule: "Host(`xmpp.example.com`)"
  traefik.http.routers.am-app-xmpp.tls.certresolver: "lec"
  traefik.http.services.am-app-xmpp.loadbalancer.server.port: "5280"
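Note that the prosodyc2s and prosodys2s entrypoints referenced by these TCP routers still need to be declared in Traefik's static configuration, exactly as in the original instance above:

  --entrypoints.prosodyc2s.address=:5222
  --entrypoints.prosodys2s.address=:5269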
I have a simple server written in Python that listens on port 8000 inside a private network (HTTP communication). There is now a requirement to switch to HTTPS, and every client that sends a request to the server should be authenticated with its own cert/key pair.
I have decided to use Traefik v2 for this job. Please see the block diagram.
Traefik runs as a Docker image on a host with IP 192.168.56.101. First I wanted to simply forward an HTTP request from a client to Traefik and then on to the Python server running outside Docker on port 8000. I would add the TLS functionality once the forwarding works properly.
However, I cannot figure out how to configure Traefik to reverse proxy from e.g. 192.168.56.101/notify?wrn=1 to the Python server at 127.0.0.1:8000/notify?wrn=1.
When I send the above request to the server (curl "192.168.56.101/notify?wrn=1") I get "Bad Gateway" as the answer. What am I missing here? This is the first time I'm working with Docker and a reverse proxy/Traefik. I believe it has something to do with ports, but I cannot figure it out.
Here is my Traefik configuration:
docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
hostname: "traefik"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./traefik.yml:/traefik.yml:ro"
traefik.yml
## STATIC CONFIGURATION
log:
  level: INFO

api:
  insecure: true
  dashboard: true

entryPoints:
  web:
    address: ":80"

providers:
  docker:
    watch: true
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: "traefik.yml"

## DYNAMIC CONFIGURATION
http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web

  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8000"
First, 127.0.0.1 will resolve to the Traefik container itself, not to the Docker host. You need to provide a private IP of the node, and it needs to be accessible from the Traefik container.
There is a workaround to proxy to the host:
- change 127.0.0.1 to the IP of the docker0 interface, which should be 172.17.0.1, and
- make your Python server listen on all interfaces (0.0.0.0).
If you use the simple Python HTTP server, nothing needs to change; by default it already listens on all interfaces.
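Putting that together, the dynamic section of traefik.yml would point at the docker0 gateway instead of 127.0.0.1 (a sketch assuming the default docker0 address 172.17.0.1 and a Python server listening on 0.0.0.0:8000):

http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web
  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://172.17.0.1:8000"   # docker0 gateway instead of 127.0.0.1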
When I try to connect to my app, I can connect and start sending and receiving ICE candidates, but the negotiation does not complete: the RTC connection state eventually gets to "Checking" and then, after about 30 seconds, drops to "Failed".
I have this working in a local setup, but once I deploy to AWS it starts to fail.
I went and loosened the settings in AWS and opened all the ports, and now I can reach the coturn service (it returns 200 when requested over HTTP), and the Trickle ICE test works fine here.
I am using the Kurento Media Server and hoping to make a websocket connection to that service. As I mentioned, this works locally, so I'm fairly sure there is nothing wrong with how I'm making the request; instead it is a configuration issue with AWS or my docker-compose file.
I have a docker compose file with three apps in it:
version: "3.4"
services:
media-controller:
image: my-custom-images/my-server:latest.version
volumes:
- "tmp-video-storage:/tmp"
ports:
- "8899:8899"
kurento-media-service:
image: kurento/kurento-media-server:6.6.0
volumes:
- "tmp-video-storage:/tmp"
ports:
- "8888:8888"
coturn:
image: my-custom-images/coturn:lastest.version
ports:
- "3478:3478/udp"
- "3478:3478/tcp"
volumes:
tmp-video-storage:
coturn's /etc/turnserver.conf
min-port=49152
max-port=65535
fingerprint
lt-cred-mech
realm=my-domain.com
log-file stdout
user=username-placeholder:password-placeholder
external-ip=public-ip/private-ip
listening-port=3478
Output from Trickle Ice Candidates:
0.004 1 host 1019731727 udp 192.168.1.104 64702 126 | 32543 | 0
0.068 1 srflx 3180321211 udp 10.255.0.2 64702 100 | 32542 | 255
0.091 1 relay 610197926 udp 35.183.10.44 50008 2 | 32542 | 255
0.106 1 host 1917068287 tcp 192.168.1.104 9 90 | 32542 | 255
0.106 Done
0.120