I am trying to connect to a Consul server hosted in a Docker container.
I used the command below to launch the Consul server in Docker:
docker run -d -p 8500:8500 -p 8600:8600/udp --name=CS1 consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0 -bind=0.0.0.0
The Consul server is successfully launched and I can access the UI on my host machine using the URL http://localhost:8500/ui/dc1/nodes.
Below is the log for the Consul server as shown by Docker:
==> Starting Consul agent...
Version: '1.9.2'
Node ID: 'cc205268-1f9d-3cf6-2104-f09cbd01dd9d'
Node name: 'server-1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
Now I have a .NET Core 3.1 Web API service, running on the host machine, which I am trying to register with the Consul server.
I'm using IP address 172.17.0.2 and port 8500.
However, I'm unable to register the service, and I get the error message below:
The requested address is not valid in its context.
I would appreciate any suggestions on what I'm doing wrong, or on how to register a service running on the host machine with a Consul server hosted in a Docker container.
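Since port 8500 is published to the host with -p 8500:8500, the agent's HTTP API should be reachable from the host at localhost:8500 rather than at the container IP 172.17.0.2, which is only routable inside Docker's network. A minimal sketch of registering a service against that endpoint with curl — the service name my-api, port 5000, and the payload file are hypothetical, and host.docker.internal is used so the Consul container can reach back to the host on Docker Desktop:
# register.json (hypothetical): { "Name": "my-api", "Address": "host.docker.internal", "Port": 5000 }
curl --request PUT --data @register.json http://localhost:8500/v1/agent/service/register
# list what the agent now knows about
curl http://localhost:8500/v1/agent/services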
On my Windows 10 host machine with Docker Desktop 4.9.1, I want to SSH into a Docker container.
I followed a bunch of tutorials just like this one:
https://phoenixnap.com/kb/how-to-ssh-into-docker-container
From within the container I can ssh into the container using its IP of 172.17.0.2, but from my host machine I cannot.
Confirmation of the IP address:
docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' interesting_meitner
'172.17.0.2'
Ping without response:
ping 172.17.0.2
Pinging 172.17.0.2 with 32 bytes of data:
Request timed out.
Ping statistics for 172.17.0.2:
Packets: Sent = 1, Received = 0, Lost = 1 (100% loss),
SSH with connection timeout:
ssh root@172.17.0.2
ssh: connect to host 172.17.0.2 port 22: Connection timed out
Starting the container (obviously done before trying to connect to it):
docker run -ti with_ssh:new /bin/bash
I have also tried this with options for remapping ports, i.e. -p 22:666 or -p 666:22.
Starting ssh server:
/etc/init.d/ssh start
* Starting OpenBSD Secure Shell server sshd
Checking status:
/etc/init.d/ssh status
* sshd is running
SSH from container into container:
ssh root@172.17.0.2
The authenticity of host '172.17.0.2 (172.17.0.2)' can't be established.
ECDSA key fingerprint is SHA256:471dnz1q83owB/Nu0Qnnyz/Sct4Kwry9Sa9L9pwQeZo.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.17.0.2' (ECDSA) to the list of known hosts.
root@172.17.0.2's password:
Welcome to Ubuntu 18.04.3 LTS (GNU/Linux 5.10.16.3-microsoft-standard-WSL2 x86_64)
[...]
Again, from the Docker host I get a connection timeout. What can I do?
Your Docker container runs in a virtual network you cannot reach from the host (because it is isolated), which is why you cannot ping the container's IP from the host (but your Docker container can, because it is attached to the same network). You can expose the port as you already did with -p 666:22, but then you have to SSH to localhost, not to the IP of the container: ssh -p 666 root@127.0.0.1.
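A minimal sketch of that approach, assuming the with_ssh:new image from the question has openssh-server installed (host port 666 is just an example):
# publish container port 22 on host port 666 and keep sshd in the foreground
docker run -d -p 666:22 --name ssh_test with_ssh:new /usr/sbin/sshd -D
# then, from the Windows host
ssh -p 666 root@127.0.0.1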
You could also configure correct routing from your host's network to the virtual network, and then you could reach the IP directly.
I did not reproduce your setup, but this might work, I guess. Hope it helps.
I'm trying to push an image to a docker registry that I'm running as a container on another machine. I start the registry by mounting the config.yml in an external volume. The default value for the http: addr field in this file is "localhost:5000". That works but I can't push from the other machine. I get the error:
"unable to connect to 192.168.1.149:5000. Do you need an HTTP proxy?"
But these are all machines on a local network, so there should be no proxy needed.
When I set this value to the IP address of the machine, 192.168.1.149:5000, I get the error:
level=fatal msg="listen tcp 192.168.1.149:5000: bind: cannot assign requested address"
My config file looks like this:
version: 0.1
log:
  accesslog:
    disabled: true
  level: debug
  formatter: text
  fields:
    service: registry
    environment: development
loglevel: debug # deprecated: use "log"
storage:
  filesystem:
    rootdirectory: /var/lib/registry
    maxthreads: 100
  delete:
    enabled: false
  redirect:
    disable: false
http:
  addr: localhost:5000
  tls:
    certificate: /etc/docker/registry/server-cert.pem
    key: /etc/docker/registry/server-key.pem
  headers:
    X-Content-Type-Options: [nosniff]
  http2:
    disabled: false
I launch the container like this:
sudo docker run -d -p 5000:5000 --restart=always --name registry -v /etc/docker/registry:/etc/docker/registry -v `pwd`/config.yml:/etc/docker/registry/config.yml registry:2
And my push looks like this:
docker push 192.168.1.149:5000/ironclads-api:1.5
I can ping 192.168.1.149 from the machine I'm trying to push from and I configured the certs according to the docker instructions. Any ideas what might be happening here?
Since it's running in a container and listening on localhost inside that container, which is a different network namespace from your host (and so a different localhost), it's only reachable from inside that container. Instead of listening on localhost or the host IP, listen on all interfaces inside the container. The port forward is what directs traffic from the host into the container.
The fix is to remove localhost and replace it with nothing, or with 0.0.0.0 if you want to bind on all IPv4 interfaces:
http:
  addr: ":5000"
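After editing the mounted config.yml, restarting the container picks up the change; a quick way to confirm the registry answers from the other machine (a sketch, reusing the IP, container name, and image tag from the question):
docker restart registry
# from the other machine: the registry API root should respond on /v2/
curl -vk https://192.168.1.149:5000/v2/
docker push 192.168.1.149:5000/ironclads-api:1.5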
I am trying to access my host system from a Docker container.
I have tried all of the following instead of 127.0.0.1 and localhost:
gateway.docker.internal,
docker.for.mac.host.internal,
host.docker.internal ,
docker.for.mac.host.internal,
docker.for.mac.localhost,
but none seem to work.
If I run my docker run command with --net=host, I can indeed access localhost; however, none of my port mappings get exposed, and the app is inaccessible from outside Docker.
I am using Docker version 20.10.5, build 55c4c88
Some more info: I am running a piece of software called Impervious (a layer on top of the Bitcoin Lightning Network). It needs to connect to my local Polar Lightning node on localhost:10001. Here is the config file the tool itself uses (see the lnd_node section):
# Server configurations
server:
  enabled: true # enable the GRPC/HTTP/websocket server
  grpc_addr: 0.0.0.0:8881 # SET FOR DOCKER
  http_addr: 0.0.0.0:8882 # SET FOR DOCKER

# Redis DB configurations
sqlite3:
  username: admin
  password: supersecretpassword # this will get moved to environment variable or generated dynamically

###### DO NOT EDIT THE BELOW SECTION #####
# Services
service_list:
  - service_type: federate
    active: true
    custom_record_number: 100000
    additional_service_data:
  - service_type: vpn
    active: true
    custom_record_number: 200000
    additional_service_data:
  - service_type: message
    active: true
    custom_record_number: 400000
    additional_service_data:
  - service_type: socket
    active: true
    custom_record_number: 500000
    additional_service_data:
  - service_type: sign
    active: true
    custom_record_number: 800000
    additional_service_data:
###### DO NOT EDIT THE ABOVE SECTION #####

# Lightning
lightning:
  lnd_node:
    ip: host.docker.internal
    port: 10001 # GRPC port of your LND node
    pub_key: 025287d7d6b3ffcfb0a7695b1989ec9a8dcc79688797ac05f886a0a352a43959ce # get your LND pubkey with "lncli getinfo"
    tls_cert: /app/lnd/tls.cert # SET FOR DOCKER
    admin_macaroon: /app/lnd/admin.macaroon # SET FOR DOCKER

federate:
  ttl: 31560000 # Federation auto delete in seconds
  imp_id: YOUR_IMP_ID # plain text string of your IMP node name

vpn:
  price: 100 # per hour
  server_ip: http://host.docker.internal # public IP of your VPN server
  server_port: 51820 # port you want to listen on
  subnet: 10.0.0.0/24 # subnet you want to give to your clients. .1 == your server IP.
  server_pub_key: asdfasdfasdf # get this from your WG public key file
  allowed_ips: 0.0.0.0/0 # what subnets clients can reach. Default is entire world.
  binary_path: /usr/bin/wg # where you installed the "wg" command.
  dns: 8.8.8.8 # set your preferred DNS server here.

socket:
  server_ip: 1.1.1.1 # public IP of your socket server
I run Impervious using the following docker command:
docker run -p8881:8881 -p8882:8882 -v /Users/xxx/dev/btc/impervious/config/alice-config-docker.yml:/app/config/config.yml -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/tls.cert:/app/lnd/tls.cert -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/data/chain/bitcoin/regtest/admin.macaroon:/app/lnd/admin.macaroon -it impant/imp-releases:v0.1.4
but it just hangs when it tries to connect to the node at host.docker.internal
Have you tried docker-mac-net-connect?
The problem is related to macOS. Unlike Docker on Linux, Docker for macOS does not expose container networks directly on the macOS host.
You can use host.docker.internal, which resolves to the localhost of the macOS host.
https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purpose and does not work in a production environment outside of Docker Desktop.
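A quick way to check this from the container side (a sketch; alpine is just a small image, and the nc line assumes the BusyBox nc applet, so results may vary by image):
# does the special name resolve inside a container?
docker run --rm alpine nslookup host.docker.internal
# can a container reach the LND gRPC port on the host?
docker run --rm alpine sh -c 'nc -w 2 host.docker.internal 10001 </dev/null && echo reachable'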
This is about a Mac running the desktop version of Docker.
Docker isn't running directly on the host machine; it uses a kind of virtual machine that includes a Linux kernel. The network of this virtual machine is different from the host machine's network, and the connection from your Mac host to a running Docker container goes through a kind of VPN connection.
When you run your container with the --net=host switch, you connect the container to the virtual machine's network instead of your host machine's network, as would happen on Linux. So trying to connect to 127.0.0.1 or localhost does not reach the running container.
The solution to this issue is to expose the needed ports from the running container:
docker run -p 8080:8080
If you need to expose all ports from your container, you can use the -P switch.
For the opposite direction, connecting from the container to the host, use the host.docker.internal URL from inside the container.
More documentation about Docker Desktop for Mac networking.
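For example, sketched with the image from the question (volume mounts omitted for brevity):
# publish the specific ports you need on the host
docker run -d -p 8881:8881 -p 8882:8882 impant/imp-releases:v0.1.4
# or publish every EXPOSEd port to a random host port and inspect the mapping
docker run -d -P impant/imp-releases:v0.1.4
docker port <container-id>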
I have one eureka server.
server:
  port: 8761
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
I have one eureka client.
spring:
  application:
    name: mysearch
server:
  port: 8020
eureka:
  client:
    serviceUrl:
      defaultZone: http://localhost:8761/eureka
  instance:
    preferIpAddress: true
My eureka client is running in a docker container.
FROM java:8
COPY ./mysearch.jar /var/tmp/app.jar
EXPOSE 8180
CMD ["java","-jar","/var/tmp/app.jar"]
I am starting the Eureka server with java -jar eureka-server.jar.
After that I am starting the Docker instance of the Eureka client using
sudo docker build -t web . and sudo docker run -p 8180:8020 -it web.
I am able to access the Eureka client and server from the browser, but the client is not connecting to the Eureka server, and I cannot see the client in the Eureka server dashboard. I am getting the errors and warnings below.
WARN 1 --- [tbeatExecutor-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
ERROR 1 --- [tbeatExecutor-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - was unable to send heartbeat!
INFO 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020: registering service...
ERROR 1 --- [nfoReplicator-0] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error
WARN 1 --- [nfoReplicator-0] c.n.d.s.t.d.RetryableEurekaHttpClient : Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
WARN 1 --- [nfoReplicator-0] com.netflix.discovery.DiscoveryClient : DiscoveryClient_FLIGHTSEARCH/98b0d95fd668:flightsearch:8020 - registration failed Cannot execute request on any known server
WARN 1 --- [nfoReplicator-0] c.n.discovery.InstanceInfoReplicator : There was a problem with the instance info replicator
I am doing it in an AWS EC2 Ubuntu instance.
Can anyone please tell me what I am doing wrong here?
server:
  ports:
    - "8761:8761"
eureka:
  client:
    registerWithEureka: false
    fetchRegistry: false
With the above change, port 8761 will be exposed on the host and a connection to the server is possible, since you are connecting using localhost ("http://localhost:8761/eureka"), which looks for port 8761 on the host.
In the Eureka client config, use the host IP instead of localhost, because if localhost is used, the client searches for port 8761 within its own container:
http://hostip:8761/eureka
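For instance, the host IP could be passed in at docker run time instead of being baked into the image (a sketch; 192.168.1.10 stands in for the real host IP, and the relaxed-binding environment variable is an assumption — if your Spring Boot version doesn't pick it up, set the property in application.yml instead):
docker run -p 8180:8020 \
  -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://192.168.1.10:8761/eureka \
  -it web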
Make sure you are running in Swarm mode (a single node can also run Swarm).
$ docker swarm init
An overlay network is created so services can ping each other.
$ docker network create -d overlay mybridge
Set the application.properties for the Eureka client as below:
eureka.client.service-url.defaultZone=http://discovery:8761/eureka
Now create the first discovery service (the Eureka discovery server):
$ docker service create -d --name discovery --network mybridge \
--replicas 1 -p 8761:8761 server-discovery
Open your browser and hit any node on port 8761.
Now create the client service:
$ docker service create -d --name goodbyeapp --network mybridge \
--replicas 1 -p 2222:2222 goodbye-service
This will register to the discovery service.
In the container world, the Eureka server IP address can change each time the Eureka server is restarted, so specifying the host IP address in the Eureka server URL doesn't work all the time.
In the docker-compose.yml, I had to link the Eureka client service to the Eureka server container. Until I linked the services, the Eureka client couldn't connect to the server.
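Without compose, the same idea can be sketched with a user-defined bridge network, so the client reaches the server by container name instead of a changing IP (the image name eureka-server-image is hypothetical; web is the client image from the question, and the environment variable assumes Spring Boot relaxed binding):
docker network create eureka-net
docker run -d --name eureka-server --network eureka-net -p 8761:8761 eureka-server-image
docker run -d --name mysearch --network eureka-net -p 8180:8020 \
  -e EUREKA_CLIENT_SERVICEURL_DEFAULTZONE=http://eureka-server:8761/eureka web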
This is already answered in another post recently: Applications not registering to eureka when using docker-compose
I have a web application built with Elixir that uses a Postgres database in a docker container (https://hub.docker.com/_/postgres/).
I need to expose the web interface (running on port 4000) and the database in the docker container.
I tried adding this to my configuration files:
tunnels:
  api:
    addr: 4000
    proto: http
  db:
    addr: 5432
    proto: tcp
Then in my Elixir config/dev.exs I add this under the database configuration:
...
hostname: "TCP_URL_GIVEN_BY_NGRROK"
When I attempt to start the application, it fails to connect to the database.
The docker command that I used is:
docker run --name phoenix-pg -e POSTRGRES_PASSWORD=postgres -d postgres
What am I doing wrong?
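One thing worth double-checking (a sketch; the ngrok hostname 0.tcp.ngrok.io and port 12345 are hypothetical): an ngrok TCP tunnel hands out an address like tcp://0.tcp.ngrok.io:12345, and the Ecto config needs the hostname and port split out rather than the full URL, e.g. hostname: "0.tcp.ngrok.io" plus port: 12345 in config/dev.exs. A quick connectivity test from the machine running the app, assuming psql is installed:
# ngrok prints something like: tcp://0.tcp.ngrok.io:12345 -> localhost:5432
psql -h 0.tcp.ngrok.io -p 12345 -U postgres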