I am running two microservices, say demo1 and demo2, in two containers using Docker. I have configured Zuul in demo1. I want to route from demo1 to demo2, that is, I want to access the API in demo2 from demo1.
demo1 is running on port 8080 and demo2 on port 8030, and I want to access the API like this: "localhost:8030/zuultest/test". But the routing is not working. It works fine if I access demo1 like "localhost:8080/test".
Here is my Zuul configuration in application.yml:
server:
  port: 8030

#TODO: figure out why I need this here and in bootstrap.yml
spring:
  application:
    name: zuul server

endpoints:
  restart:
    enabled: true
  shutdown:
    enabled: true
  health:
    sensitive: false

zuul:
  routes:
    zuultest:
      url: http://localhost:8080
      stripPrefix: false

ribbon:
  eureka:
    enabled: false
You can use the links option in docker-compose.yml to link the two containers.
demo1:
  image: <demo1 image name>
  links:
    - demo2

demo2:
  image: <demo2 image name>
Then, in the zuul:routes url configuration, you can use the container name demo2 instead of its IP.
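For illustration, a sketch of what the route in demo1's application.yml could then look like, assuming demo2's Spring Boot app listens on port 8030 inside its container as described in the question:

zuul:
  routes:
    zuultest:
      url: http://demo2:8030
      stripPrefix: false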
You also need to ensure that the ports in question are open and accessible from the external machine as well.
Better still, you may route the traffic from port 8080 (the default open port) to the desired port, 8030 in your case, as sketched below.
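For example, a compose port mapping that publishes demo2's container port 8030 on host port 8080 could look like this (assuming host port 8080 is free for that purpose):

demo2:
  image: <demo2 image name>
  ports:
    - "8080:8030"   # host port 8080 -> container port 8030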
For explicitly exposing the port, please refer the link:
https://github.com/wsargent/docker-cheat-sheet#exposing-ports
I'm trying to deploy a Docker Swarm of three host nodes with a single replicated service and put an HAProxy in front of it. I want the clients to be able to connect via SSL.
My docker-compose.yml:
version: '3.9'

services:
  proxy:
    image: haproxy
    ports:
      - 443:8080
    volumes:
      - haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    deploy:
      placement:
        constraints: [node.role == manager]
    networks:
      - servers-network

  node-server:
    image: glusk/hackathon-2021:latest
    ports:
      - 8080:8080
    command: npm run server
    deploy:
      mode: replicated
      replicas: 2
    networks:
      - servers-network

networks:
  servers-network:
    driver: overlay
My haproxy.cfg (based on the official example):
# Simple configuration for an HTTP proxy listening on port 80 on all
# interfaces and forwarding requests to a single backend "servers" with a
# single server "server1" listening on 127.0.0.1:8000
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

backend servers
    server server1 127.0.0.1:8000 maxconn 32
My hosts are Lightsail VPS Ubuntu instances and share the same private network.
The node-server service runs each HTTPS server task inside its own container on 0.0.0.0:8080.
The way I'm trying to make this work at the moment is to ssh into the manager node (which also has a static and public IP), copy over my configuration files from above, and run:
docker stack deploy --compose-file=docker-compose.yml hackathon-2021
but it doesn't work.
Well, first of all, regarding SSL (since it's the first thing you mention): you need to configure it using the certificate and listen on port 443, not port 80.
With that modification, your proxy configuration would change to:
global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

frontend http-in
    bind *:80
    default_backend servers

frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/hackaton2021.pem
    default_backend servers
That would be a really simplified configuration for allowing SSL connections.
Now, let's look at access to the different services.
First of all, you cannot access the services on localhost; in fact, you shouldn't even expose the services' ports to the host. The reason? Those applications are already on the same network as HAProxy, so the ideal is to take advantage of Docker's DNS to reach them directly.
In order to do this, first we need to be able to resolve the service names. For that you need to add the following section to your configuration:
resolvers docker
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s
    hold other 10s
    hold refused 10s
    hold nx 10s
    hold timeout 10s
    hold valid 10s
    hold obsolete 10s
The Docker Swarm DNS service is always available at 127.0.0.11.
Now, to your previously existing backend configuration, we have to add the servers, but using service-name discovery:
backend servers
    balance roundrobin
    server-template node- 2 node-server:8080 check resolvers docker init-addr libc,none
If you look at what we are doing, we are creating a server entry for each of the containers discovered in the Swarm within the node-server service (i.e. the replicas), naming them with the prefix node-.
Basically, that is the equivalent of getting the actual IP of each replica and adding them one by one as basic server lines.
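Putting the pieces of this answer together, the full haproxy.cfg could look roughly like this; it simply assembles the snippets above, and the certificate path matches the one assumed earlier:

global
    daemon
    maxconn 256

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms

resolvers docker
    nameserver dns1 127.0.0.11:53
    resolve_retries 3
    timeout resolve 1s
    timeout retry 1s

frontend http-in
    bind *:80
    default_backend servers

frontend https-in
    bind *:443 ssl crt /etc/ssl/certs/hackaton2021.pem
    default_backend servers

backend servers
    balance roundrobin
    server-template node- 2 node-server:8080 check resolvers docker init-addr libc,none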
For deployment, you also have some errors, since we aren't interested in actually exposing the node-server ports to the host, but rather in creating the two replicas and letting HAProxy handle the networking.
For that, we should use the following Docker Compose:
version: '3.9'

services:
  proxy:
    image: haproxy
    ports:
      - 80:80
      - 443:443
    volumes:
      - hackaton2021.pem:/etc/ssl/certs/hackaton2021.pem
      - haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg
    deploy:
      placement:
        constraints: [node.role == manager]

  node-server:
    image: glusk/hackathon-2021:latest
    command: npm run server
    deploy:
      mode: replicated
      replicas: 2
Remember to copy your haproxy.cfg and the self-signed (or real) certificate for your application to the instance before deploying the stack.
Also, when you create that stack it will automatically create a network with the name <STACK_NAME>_default, so you don't need to define a network just for connecting both services.
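As a usage sketch, the deployment from the manager node is then the same command as in the question, and the stack network should show up automatically (the network name below is the one Docker is expected to generate):

docker stack deploy --compose-file=docker-compose.yml hackathon-2021
docker network ls   # should now include hackathon-2021_default (overlay)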
When working with Helm charts (generated by helm create <name>) and specifying a Docker image in values.yaml, such as the image "kubernetesui/dashboard:v2.4.0" whose exposed ports are written as EXPOSE 8443 9090, I found it hard to know how to properly specify these ports in the actual Helm chart files, and was wondering if anyone could explain the topic a bit further.
By my understanding, EXPOSE 8443 9090 means that hostPort "8443" maps to containerPort "9090". In that case it seems clear that service.yaml should specify the ports in a manner similar to the following:
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 8443
      targetPort: 9090
The deployment.yaml file, however, only comes with the field "containerPort" and no port field for the 8443 port (as you can see below). Should I add some field here in deployment.yaml to include port 8443?
spec:
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          ports:
            - name: http
              containerPort: 9090
              protocol: TCP
As of now, when I try to install the Helm chart it gives me an error message: "Container image "kubernetesui/dashboard:v2.4.0" already present on machine", and I've heard that this means the ports in service.yaml are not configured to match the Docker image's exposed ports. I have tested this with a simpler Docker image which only exposes one port, and after adding that port everywhere the error message goes away, so this seems to be true, but I am still confused about how to do it with two exposed ports.
I would really appreciate some help; thank you in advance if you have experience with this and are willing to share.
A Docker image never gets to specify any host resources it will use. If the Dockerfile has EXPOSE with two port numbers, then both ports are exposed (where "expose" means almost nothing in modern Docker). That is: this line says the container listens on both port 8443 and 9090 without requiring any specific external behavior.
In your Kubernetes Pod spec (usually nested inside a Deployment spec), you'd then generally list both ports as containerPorts:. Again, this doesn't really say anything about how a Service uses it.
# inside templates/deployment.yaml
ports:
  - name: http
    containerPort: 9090
    protocol: TCP
  - name: https
    containerPort: 8443
    protocol: TCP
Then in the corresponding Service, you'd republish either or both ports.
# inside templates/service.yaml
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: 80          # default HTTP port
      targetPort: http  # matching name: in pod, could also use 9090
    - port: 443         # default HTTP/TLS port
      targetPort: https # matching name: in pod, could also use 8443
I've chosen to publish the unencrypted and TLS-secured ports on their "normal" HTTP ports, and to bind the service to the pod using the port names.
None of this setup is Helm-specific; here the only Helm template reference is the Service type: (in case the operator needs to publish a NodePort or LoadBalancer service).
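For completeness, a minimal values.yaml sketch that would back the type: reference above; the ClusterIP default here is just an assumption, similar to what helm create generates:

# values.yaml (sketch)
service:
  type: ClusterIP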
I have a simple server written in Python that listens on port 8000 inside a private network (HTTP communication). There is now a requirement to switch to HTTPS communications and every client that sends a request to the server should get authenticated with his own cert/key pair.
I have decided to use Traefik v2 for this job. Please see the block diagram.
Traefik runs as a Docker image on a host that has IP 192.168.56.101. First I wanted to simply forward an HTTP request from a client to Traefik and then to the Python server running outside Docker on port 8000. I would add the TLS functionality once the forwarding is running properly.
However, I cannot figure out how to configure Traefik to reverse proxy from, e.g., 192.168.56.101/notify?wrn=1 to the Python server at 127.0.0.1:8000/notify?wrn=1.
When I try to send the above-mentioned request to the server (curl "192.168.56.101/notify?wrn=1") I get "Bad Gateway" as a response. What am I missing here? This is the first time that I am in contact with Docker and a reverse proxy/Traefik. I believe it has something to do with ports, but I cannot figure it out.
Here is my Traefik configuration:
docker-compose.yml
version: "3.3"
services:
traefik:
image: "traefik:v2.1"
container_name: "traefik"
hostname: "traefik"
ports:
- "80:80"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./traefik.yml:/traefik.yml:ro"
traefik.yml
## STATIC CONFIGURATION
log:
  level: INFO

api:
  insecure: true
  dashboard: true

entryPoints:
  web:
    address: ":80"

providers:
  docker:
    watch: true
    endpoint: "unix:///var/run/docker.sock"
  file:
    filename: "traefik.yml"

## DYNAMIC CONFIGURATION
http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web

  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://127.0.0.1:8000"
First, 127.0.0.1 will resolve to the Traefik container and not to the Docker host. You need to provide a private IP of the node, and it needs to be accessible from the Traefik container.
There is a workaround to proxy to a service running on the host:
Change 127.0.0.1 to the IP of the docker0 interface, which should be 172.17.0.1, and then make sure your Python server listens on all interfaces (0.0.0.0).
If you use the simple Python HTTP server, nothing changes; by default it already listens on all interfaces.
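As a sketch, the dynamic part of the traefik.yml from the question would then point at the docker0 address instead of 127.0.0.1 (assuming the default bridge IP of 172.17.0.1):

http:
  routers:
    to-local-ip:
      rule: "Host(`192.168.56.101`)"
      service: to-local-ip
      entryPoints:
        - web

  services:
    to-local-ip:
      loadBalancer:
        servers:
          - url: "http://172.17.0.1:8000"   # docker0 / host address instead of localhost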
I have a few services in my docker-compose file using Traefik labels.
Now I would like to clean up this file and start using Traefik YAML files.
The problem is that I could not find the equivalent of traefik.http.services.dnsmasq-traefik.loadbalancer.server.port=5380, and there aren't any examples in the docs.
The labels (this works perfectly):
- "traefik.http.routers.dnsmasq.rule=Host(`dnsmasq.docker.localdomain`)"
- "traefik.http.routers.dnsmasq.service=dnsmasq-traefik#docker"
- "traefik.http.services.dnsmasq-traefik.loadbalancer.server.port=5380"
The YAML (not working, gives me a Gateway Timeout):
http:
  routers:
    dnsmasq-preauth:
      entryPoints: [http]
      middlewares: [redirect-to-http]
      service: dnsmasq-preauth
      rule: Host(`dnsmasq.docker.localdomain`)

  services:
    dnsmasq-preauth:
      loadBalancer:
        servers:
          - url: "http://dnsmasq.docker.localdomain:5380"
Whenever I get this Gateway Timeout issue I immediately look in two places:
1. The Traefik config's exposedByDefault setting (see the docs here):
providers:
  docker:
    exposedByDefault: false
    # ...
If exposedByDefault is false, then you need to do #2 in this list.
2. On your specific container you need to set the Docker label:
- traefik.enable=true
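For example, in docker-compose that label sits on the container you want Traefik to pick up (the service name here is just an assumption based on the question):

services:
  dnsmasq:
    # ...
    labels:
      - traefik.enable=true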
I've got a strange issue. I have the following setup:
One Docker host running Traefik as a load balancer, serving multiple sites. The sites are mostly PHP/Apache. HTTPS is managed by Traefik.
Each site is started using a docker-compose YAML containing the following:
version: '2.3'

services:
  redis:
    image: redis:alpine
    container_name: ${PROJECT}-redis
    networks:
      - internal

  php:
    image: registry.gitlab.com/OUR_NAMESPACE/docker/php:${PHP_IMAGE_TAG}
    environment:
      - APACHE_DOCUMENT_ROOT=${APACHE_DOCUMENT_ROOT}
    container_name: ${PROJECT}-php-fpm
    volumes:
      - ${PROJECT_PATH}:/var/www/html:cached
      - .docker/php/php-ini-overrides.ini:/usr/local/etc/php/conf.d/99-overrides.ini
    ports:
      - 80
    networks:
      - proxy
      - internal
    labels:
      - traefik.enable=true
      - traefik.port=80
      - traefik.frontend.headers.SSLRedirect=false
      - traefik.frontend.rule=Host:${PROJECT}
      - "traefik.docker.network=proxy"

networks:
  proxy:
    external:
      name: proxy
  internal:
(As PHP images we use 5.6.33-apache-jessie or 7.1.12-apache, for example.)
In addition to the above, some sites get the following labels:
traefik.docker.network=proxy
traefik.enable=true
traefik.frontend.headers.SSLRedirect=true
traefik.frontend.rule=Host:example.com, www.example.com
traefik.port=80
traefik.protocol=http
What we get is that some requests end in 502 Bad Gateway.
The Traefik debug output shows:
time="2018-03-21T12:20:21Z" level=debug msg="vulcand/oxy/forward/http: Round trip: http://172.18.0.8:80, code: 502, Length: 11, duration: 2.516057159s"
Can someone help with that? It's completely random when it happens.
Our traefik.toml:
debug = true
checkNewVersion = true
logLevel = "DEBUG"
defaultEntryPoints = ["https", "http"]
[accessLog]
[web]
address = ":8080"
[web.auth.digest]
users = ["admin:traefik:some-encoded-pass"]
[entryPoints]
[entryPoints.http]
address = ":80"
# [entryPoints.http.redirect] # had to disable this because HTTPS must be enabled manually (not my decision)
# entryPoint = "https"
[entryPoints.https]
address = ":443"
[entryPoints.https.tls]
[retry]
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"
watch = true
exposedbydefault = false
[acme]
email = "info#example.com"
storage = "acme.json"
entryPoint = "https"
onHostRule = true
[acme.httpChallenge]
entryPoint = "http"
Could the issue be related to using the same docker-compose.yml?
Another reason can be that you are accidentally mapping to the VM's port instead of the container port.
I made a change to the port mapping in my docker-compose file and forgot to update the labeled port, so it was trying to map to a port on the machine that had no process attached to it.
Wrong way:
ports:
  - "8080:8081"
labels:
  - "traefik.http.services.front-web.loadbalancer.server.port=8080"
Right way:
ports:
  - "8080:8081"
labels:
  - "traefik.http.services.front-web.loadbalancer.server.port=8081"
Also, in general, don't do this; instead of publishing ports, try Docker networks, as they are much better and cleaner (a rough sketch follows below). I wrote my configuration documentation about a year ago and this was more of a beginner mistake on my side, but it might help someone :)
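A rough sketch of the network-based approach, assuming a shared external proxy network that Traefik is also attached to (service and image names are illustrative):

services:
  front-web:
    image: my-front-web:latest   # placeholder image
    networks:
      - proxy                    # no published ports needed
    labels:
      - "traefik.enable=true"
      - "traefik.http.services.front-web.loadbalancer.server.port=8081"

networks:
  proxy:
    external: true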
For anyone getting the same issue:
After recreating the network (proxy) and restarting every site/container it seems to work now.
I still don't know where the issue was from.
If you see Bad Gateway with Traefik chances are you have a Docker networking issue. First have a look at this issue and consider this solution. Then take a look at providers.docker.network (Traefik 2.0) or, in your case, the docker.network setting (Traefik 1.7).
You could add a default network here:
[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "example.com"
watch = true
exposedbydefault = false
network = "proxy"
Or define/override it for a given service using the traefik.docker.network label.
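For instance, on the affected service that override is just another label, exactly as already used in the compose file from the question:

labels:
  - "traefik.docker.network=proxy"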
I got the same problem and none of the above-mentioned answers solved it for me. In my case a wrong load balancer port was added. Removing the label or changing it to the correct port did the trick.
- "traefik.http.services.XXX.loadbalancer.server.port=XXX"
In your example you don't have traefik enabled:
traefik.enable=false
Make sure to enable it first and then test your containers.
The error "bad gateway" is returned when the web server in the container doesn't allow traffic from traefik e.g. because of wrong interface binding like localhost instead of 0.0.0.0.
Take Ruby on Rails for example. Its web server puma is configured by default like this (see config/puma.rb):
port ENV.fetch("PORT") { 3000 }
But in order to allow access from traefik puma must bind to 0.0.0.0 like so:
bind "tcp://0.0.0.0:#{ ENV.fetch("PORT") { 3000 } }"
This solved the problem for me.
Another cause can be exposing a container at a port that Traefik already uses.
I forgot to expose the port in my Dockerfile; that's why Traefik did not find a port to route to. So expose the port BEFORE you start the application, e.g. for Node:
#other stuff before...
EXPOSE 3000
CMD ["node", "dist/main" ]
Or, if you have multiple ports open, you have to specify which port Traefik should route the domain to with:
- "traefik.http.services.myservice.loadbalancer.server.port=3000"
Or see the docs.
I faced an issue very close to this one. My problem was not related to network settings or config; after some time we figured out that the port exposed by the backend container was not the same as the port we were mapping to for outside access. The service port was 5000 and we had mapped 9000:9000; the solution was to fix the port mapping to 9000:5000.
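In compose terms, the fix looked roughly like this (the service name is illustrative):

services:
  backend:
    ports:
      # wrong: nothing listens on container port 9000
      # - "9000:9000"
      # right: publish host port 9000 to the container's actual port 5000
      - "9000:5000"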