Eureka client in Docker not connecting with Eureka server

I have one Eureka server:
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
I have one Eureka client:
spring:
  application:
    name: mysearch
server:
  port: 8020
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
  instance:
    preferIpAddress: true
My Eureka client is running in a Docker container:
FROM java:8
COPY ./mysearch.jar /var/tmp/app.jar
EXPOSE 8180
CMD ["java","-jar","/var/tmp/app.jar"]
I am starting the Eureka server with java -jar eureka-server.jar.
After that I am starting the Docker instance of the Eureka client using
sudo docker build -t web . and sudo docker run -p 8180:8020 -it web.
I am able to access both the Eureka client and the server from the browser, but the client is not registering with the Eureka server: it does not show up in the Eureka server dashboard, and I get the errors and warnings below.
WARN 1 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
ERROR 1 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - was unable to send heartbeat!
INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020: registering service...
ERROR 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error
WARN 1 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
WARN 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - registration failed Cannot execute request on any known server
WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
I am doing this on an AWS EC2 Ubuntu instance.
Can anyone please tell me what I am doing wrong here?

server:
  ports:
    - "8761:8761"
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
With the above change, port 8761 is published on the host, so the client can connect to the server.
You are connecting using localhost (http://localhost:8761/eureka), but from inside the container localhost refers to the container itself, not the host.
In the Eureka client config, use the host IP instead of localhost, because with localhost the client looks for port 8761 inside its own container:
http://hostip:8761/eureka
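For example, the client's application.yml could point at the host's address instead of localhost (a minimal sketch; 172.31.0.10 is a placeholder for the EC2 host's private IP, not a value from the question):

eureka:
  client:
    serviceUrl:
      defaultZone: http://172.31.0.10:8761/eureka   # placeholder: replace with the host's IP
  instance:
    preferIpAddress: true

On Docker 20.10 and newer you can alternatively pass --add-host=host.docker.internal:host-gateway to docker run and use http://host.docker.internal:8761/eureka, so no IP has to be hard-coded.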

Make sure you are running in Swarm mode (a single node can also run Swarm):
$ docker swarm init
Create an overlay network so the services can reach each other:
$ docker network create -d overlay mybridge
Set the application property for the Eureka client as below:
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Now create the discovery service (the Eureka discovery server):
$ docker service create -d --name discovery --network mybridge \
--replicas 1 -p 8761:8761 server-discovery
Open your browser and hit any node on port 8761.
Now create the client service:
$ docker service create -d --name goodbyeapp --network mybridge \
--replicas 1 -p 2222:2222 goodbye-service
This will register with the discovery service.
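The same setup can also be written as a stack file (a sketch based on the commands above; server-discovery and goodbye-service are the image names used there, and mybridge is the overlay network created earlier):

version: "3.6"
services:
  discovery:
    image: server-discovery      # Eureka server image from the command above
    ports:
      - "8761:8761"
    networks:
      - mybridge
  goodbyeapp:
    image: goodbye-service       # Eureka client image from the command above
    ports:
      - "2222:2222"
    networks:
      - mybridge
networks:
  mybridge:
    external: true               # created with: docker network create -d overlay mybridge

Deploy it with docker stack deploy -c stack.yml demo (file and stack names are arbitrary). On the shared overlay network the client resolves the server by its service name, which is why defaultZone points at http://discovery:8761/eureka.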

In the container world, the Eureka server's IP address can change every time the Eureka server is restarted, so hard-coding a host IP address in the Eureka server URL does not always work.
In docker-compose.yml, I had to link the Eureka client service to the Eureka server container; until the services were linked, the Eureka client could not connect to the server.
This is already answered in another post recently: Applications not registering to eureka when using docker-compose
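A minimal docker-compose.yml sketch of that idea (the service and image names here are illustrative, not from the question; with current Compose versions the services share a default network and resolve each other by service name, which replaces explicit links):

version: "3"
services:
  eureka-server:
    image: my-eureka-server           # hypothetical server image
    ports:
      - "8761:8761"
  mysearch:
    image: web                        # the client image built above
    ports:
      - "8020:8020"
    depends_on:
      - eureka-server
    environment:
      # the server is reachable by its compose service name, not by localhost or a fixed IP
      - EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka-server:8761/eureka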

Related

Consul: Cannot connect service to Consul server hosted in Docker container

I am trying to connect to a Consul server hosted in a Docker container.
I used the command below to launch the Consul server in Docker:
docker run -d -p 8500:8500 -p 8600:8600/udp --name=CS1 consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0 -bind=0.0.0.0
The Consul server launched successfully and I can access the UI on my host machine using the URL http://localhost:8500/ui/dc1/nodes.
Below is the log for the Consul server as shown by Docker:
==> Starting Consul agent...
Version: '1.9.2'
Node ID: 'cc205268-1f9d-3cf6-2104-f09cbd01dd9d'
Node name: 'server-1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
Now I have a .NET Core 3.1 Web API service, running on the host machine, which I am trying to register with the Consul server.
I'm using IP address 172.17.0.2 and port 8500.
However, I'm unable to register the service; I get the error message below:
The requested address is not valid in its context.
I would appreciate it if anyone could tell me whether I'm doing something wrong, or how to register a service running on the host machine with a Consul server hosted in a Docker container.

All published services within a docker swarm are unreachable, while containers deployed normally work fine

I've run into an issue that seems similar to this one: https://forums.docker.com/t/cant-access-service-in-swarm/63876. My setup is a little bit different though, and I haven't found a solution to my problem yet.
The minimal, reproducible example
Build a swarm cluster of at least 3 Ubuntu 20.04 Docker Swarm managers.
Deploy a service: docker service create --name test_web --replicas 3 --publish published=8080,target=80 nginxdemos/hello
Check that the containers and services were created properly, and observe that connecting to the service fails:
demi-ubu01:~/stacks$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d4a12a3c5448 nginxdemos/hello:latest "nginx -g 'daemon of…" About a minute ago Up About a minute 80/tcp test_web.2.yul33wdycarig3qoxnehgrjrz
demi-ubu01:~/stacks$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
0yqd7gvggwuh test_web replicated 3/3 nginxdemos/hello:latest *:8080->80/tcp
# External test:
demi-ubu01:~/stacks$ curl -I 10.100.4.5:8080
curl: (7) Failed to connect to 10.100.4.5 port 8080: Connection refused
# Inside container to published service port:
demi-ubu01:~/stacks$ docker exec -it d4a12a3c5448 wget http://test_web:8080
Connecting to test_web:8080 (10.0.4.2:8080)
wget: can't connect to remote host (10.0.4.2): Host is unreachable
# Inside container to apps exposed port:
demi-ubu01:~/stacks$ docker exec -it d4a12a3c5448 wget http://localhost:80
Connecting to localhost:80 (127.0.0.1:80)
index.html 100% |****************************| 7217 0:00:00 ETA
The expected result of the first curl command is a 200 OK status.
The detailed report
My setup is 4 nodes in total. They are identical Ubuntu 20.04 KVM virtual machines, all on the same network, with no firewalls between them. I have 3 managers and 1 worker (which I've only added as a troubleshooting step).
:~/stacks$ docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS ENGINE VERSION
kcm5v64psntjxngnqkfdj1jzh * demi-ubu01 Ready Active Reachable 20.10.1
uo3rljg6ax5qkjm898pyym9t1 demi-ubu02 Ready Active Leader 20.10.1
pysnl8sohdp4fv67gui156z4k demi-ubu03 Ready Active Reachable 20.10.1
rp2otsqpnxkgbmxbpkv21yjs6 demi-ubu04 Ready Active 20.10.1
I can run a container normally and reach it on the local host fine.
demi-ubu01:~/stacks$ docker run -p 8080:80 -d nginxdemos/hello
de4d0a937710acb1d6d8ae3b7eb9175860b6614dfd9ce92bc972efe619ae095f
demi-ubu01:~/stacks$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
de4d0a937710 nginxdemos/hello "nginx -g 'daemon of…" 4 seconds ago Up 2 seconds 0.0.0.0:8080->80/tcp pedantic_wiles
demi-ubu01:~/stacks$ curl -I 10.100.4.5:8080
HTTP/1.1 200 OK
Server: nginx/1.13.8
Date: Sat, 19 Dec 2020 17:59:23 GMT
Content-Type: text/html
Connection: keep-alive
Expires: Sat, 19 Dec 2020 17:59:22 GMT
Cache-Control: no-cache
However the same app deployed as a service using the following compose file:
demi-ubu01:~/stacks$ cat test.yml
version: "3.6"
services:
web:
image: nginxdemos/hello:latest
deploy:
replicas: 3
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- target: 80
published: 8080
protocol: tcp
mode: ingress
networks:
- webnet
networks:
webnet:
driver: overlay
It does not become reachable from any of the hosts at all:
demi-ubu01:~/stacks$ docker stack deploy -c test.yml test
Creating network test_webnet
Creating service test_web
demi-ubu01:~/stacks$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
05030ef897a1 nginxdemos/hello:latest "nginx -g 'daemon of…" 10 seconds ago Up 7 seconds 80/tcp test_web.1.kobrpkp68f2qbs4jhd6o8aebg
# Trying on all of the hosts in the cluster. No firewalls here.
demi-ubu01:~/stacks$ curl -I 10.100.4.5:8080
curl: (7) Failed to connect to 10.100.4.5 port 8080: Connection refused
demi-ubu01:~/stacks$ curl -I 10.100.4.9:8080
curl: (7) Failed to connect to 10.100.4.9 port 8080: Connection refused
demi-ubu01:~/stacks$ curl -I 10.100.4.10:8080
curl: (7) Failed to connect to 10.100.4.10 port 8080: Connection refused
demi-ubu01:~/stacks$ curl -I 10.100.4.11:8080
curl: (7) Failed to connect to 10.100.4.11 port 8080: Connection refused
demi-ubu01:~/stacks$ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
elvfm7o4v4zo test_web replicated 3/3 nginxdemos/hello:latest *:8080->80/tcp
I also don't see any port bindings being made on those hosts at all, so it doesn't look like any ports are being published.
INeed2Poo#demi-ubu01:~/stacks$ docker service inspect test_web
[
## https://pastebin.com/WqqyDnVS ##
]
demi-ubu01:~/stacks$ netstat -na | grep LISTEN
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:49152 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:24007 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.53:53 0.0.0.0:* LISTEN
demi-ubu01:~/stacks$ docker network ls
NETWORK ID NAME DRIVER SCOPE
6e5f7e7cebc3 bridge bridge local
7a1155f87a62 docker_gwbridge bridge local
ab32da8ac1ec host host local
46id8wzw4ayf ingress overlay swarm
a24a40ef78f4 none null local
d9l7msysdx8m test_webnet overlay swarm
INeed2Poo#demi-ubu01:~/stacks$ docker network inspect 46id8wzw4ayf
[
https://pastebin.com/JPA0ZBjE
]
I also can't reach the service while exec'ed into one of its containers. From inside the container I can hit the LOCAL app port, but I cannot hit the service by name, even though the container CAN resolve the service name.
## Testing the app's service from the local container fails:
demi-ubu01:~/stacks$ docker exec -it 05030ef897a1 wget http://test_web:8080
Connecting to test_web:8080 (10.0.4.2:8080)
wget: can't connect to remote host (10.0.4.2): Host is unreachable
## Testing the app's local port from the local container is successful:
demi-ubu01:~/stacks$ docker exec -it 05030ef897a1 wget http://localhost:80
Connecting to localhost:80 (127.0.0.1:80)
index.html 100% |****************************| 7217 0:00:00 ETA
demi-ubu01:~/stacks$ docker --version
Docker version 20.10.1, build 831ebea
I've changed the default-addr-pool for the swarm cluster from the original 10.0.0.0/8 network:
demi-ubu01:~$ docker info --format '{{json .Swarm.Cluster.DefaultAddrPool}}'
["10.135.0.0/16"]
I've made sure that I'm not using any overlapping networks that might be causing this, and have gone so far as to completely redeploy the cluster. I've just about exhausted all of my troubleshooting ideas. Any ideas?
Edit/Update: I redeployed using Ubuntu 18.04 as my base image, and the exact same setup on that (deployed using Ansible) seems to work fine... So this is an issue with the current version of Docker on Ubuntu 20.04.
Let me add my response from the Docker forum here as well, as it is most likely the solution:
Is it safe to assume that 10.100.4.5 is one of your nodes' IPs?
The default address pool is 10.0.0.0/8, see: docker info --format '{{json .Swarm.Cluster.DefaultAddrPool}}'
If this is the case, you might find this blog post helpful; you can safely ignore that it refers to Docker EE, the problem and solution are valid for Docker CE as well. You need to alter the default-addr-pool either when initializing the swarm or by modifying each node's /etc/docker/daemon.json configuration file (and then restarting the daemon).
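For example, the pool can be set when the swarm is (re)initialized (a sketch; 172.20.0.0/16 is only an illustrative range that must not overlap your LAN):

# on every node: leave the existing swarm (this discards the current swarm state)
docker swarm leave --force
# on the first manager: re-initialize with a non-overlapping default address pool
docker swarm init --default-addr-pool 172.20.0.0/16
# then re-join the remaining managers/workers using the tokens from docker swarm join-token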

Understanding Docker overlay network

I am using an overlay network to deploy an application on multiple VMs on the same LAN. I am using nginx as the front end for this application, running on host_1. All the containers that are part of the application communicate with each other without any issues. However, HTTP requests from a different VM on the same LAN, say host_2, to the published port 80 of the nginx container (mapped to port 8080 on host_1) time out[1], while HTTP requests to localhost:8080 on host_1 succeed[2]. If I recreate the nginx container without the overlay network, I am able to send HTTP requests[3].
Output of curl -vvv <host_1 IP>:8080 on host_2.
ubuntu#host_2:~$ curl -vvv <host_1>:8080
Rebuilt URL to: <host_1 IP>:8080/
Trying <host_1 IP>...
TCP_NODELAY set
connect to <host_1 IP> port 8080 failed: Connection timed out
Failed to connect to <host_1 IP> port 8080: Connection timed out
Closing connection 0 curl: (7) Failed to connect to <host_1 IP> port 8080: Connection timed out
Output of curl localhost:8080 on host_1.
nginx welcome page
Output of curl -vvv <host_1 IP>:8080 on host_2 when I recreate the container without the overlay network
nginx welcome page
The docker-compose file for the front end is as below:
version: '3'
services:
  nginx-frontend:
    hostname: nginx-frontend
    image: nginx
    ports: ['8080:80']
    restart: always
networks:
  default:
    external: {name: overlay-network}
I checked that nginx and the host are listening on 0.0.0.0:80 and 0.0.0.0:8080 respectively.
Since port 80 of the nginx container is published by mapping it to port 8080 of the host, I should be able to send HTTP requests from any VM on the same LAN as the container's host. Can someone please explain what I am doing wrong, or where my assumptions are wrong?

Connection refused when attempting to connect to a docker container on an EC2

I'm currently running a spring boot application in a docker container on an EC2. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
networks:
  default:
The docker container deploys and comes up as healthy with the healthcheck command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
If I do a docker ps -a, I can see the following for the ports:
0.0.0.0:8893->8893
My ALB health check, however, is returning a 502, so I've temporarily allowed connections from my IP directly to the EC2 instance in the security group. The rules are:
Allow ingress on 8893 from my ALB security group
Allow ingress on 8893 from my IP
Allow egress to anywhere (0.0.0.0/0)
When I try to hit the health check endpoint of my app using the public DNS of the EC2 instance on port 8893 from Postman, I get Error: connect ECONNREFUSED.
If I take my Docker container down and then simulate a web server using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body, which indicates it's not a problem with the security groups.
The Spring Boot actuator endpoint is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 with status UP.
Any suggestions for what I might be missing here, or how I could debug this further? It seems like Docker isn't accepting connections on the port for some reason.

Ngrok: Expose Database in Docker Container

I have a web application built with Elixir that uses a Postgres database in a Docker container (https://hub.docker.com/_/postgres/).
I need to expose both the web interface (running on port 4000) and the database running in the Docker container.
I tried adding this to my ngrok configuration file:
tunnels:
  api:
    addr: 4000
    proto: http
  db:
    addr: 5432
    proto: tcp
Then in my Elixir config/dev.exs I add this under the database configuration:
...
hostname: "TCP_URL_GIVEN_BY_NGROK"
When I attempt to start the application, it reports a failure to connect to the database.
The docker command that I used is:
docker run --name phoenix-pg -e POSTRGRES_PASSWORD=postgres -d postgres
What am I doing wrong?
