Docker registry can't bind to hostIP:5000 - docker

I'm trying to push an image to a docker registry that I'm running as a container on another machine. I start the registry by mounting the config.yml in an external volume. The default value for the http: addr field in this file is "localhost:5000". That works but I can't push from the other machine. I get the error:
"unable to connect to 192.168.1.149:5000. Do you need an HTTP proxy?"
But these are all machines on a local network, so there should be no proxy needed.
When I set this value to the IP address of the machine, 192.168.1.149:5000, I get the error:
level=fatal msg="listen tcp 192.168.1.149:5000: bind: cannot assign requested address"
My config file looks like this:
version: 0.1
log:
  accesslog:
    disabled: true
  level: debug
  formatter: text
  fields:
    service: registry
    environment: development
loglevel: debug # deprecated: use "log"
storage:
  filesystem:
    rootdirectory: /var/lib/registry
    maxthreads: 100
  delete:
    enabled: false
  redirect:
    disable: false
http:
  addr: localhost:5000
  tls:
    certificate: /etc/docker/registry/server-cert.pem
    key: /etc/docker/registry/server-key.pem
  headers:
    X-Content-Type-Options: [nosniff]
  http2:
    disabled: false
I launch the container like this:
sudo docker run -d -p 5000:5000 --restart=always --name registry -v /etc/docker/registry:/etc/docker/registry -v `pwd`/config.yml:/etc/docker/registry/config.yml registry:2
And my push looks like this:
docker push 192.168.1.149:5000/ironclads-api:1.5
I can ping 192.168.1.149 from the machine I'm trying to push from and I configured the certs according to the docker instructions. Any ideas what might be happening here?

Since the registry is running in a container and listening on localhost inside that container, which is a different network namespace from your host (and so a different localhost), it's only reachable from inside that container. Instead, don't listen on localhost or the host IP; listen on all interfaces inside the container. The port forward then directs traffic from the host into the container.
The fix is to remove localhost and replace it with nothing, or with 0.0.0.0 if you want to bind explicitly to all IPv4 interfaces:
http:
  addr: ":5000"
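If it helps to see why, both error messages fall out of ordinary socket bind semantics. A Python stdlib sketch (ports are ephemeral; 192.0.2.1 is a documentation-range address standing in for an IP that no local interface owns):

```python
import errno
import socket

# Bind to loopback only: the listener is reachable at 127.0.0.1 on this
# host, or, in a container, only inside that container's namespace.
lo = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
lo.bind(("127.0.0.1", 0))
print(lo.getsockname()[0])  # 127.0.0.1

# Bind to all interfaces, which is what addr: ":5000" (or 0.0.0.0) asks for.
any_if = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
any_if.bind(("0.0.0.0", 0))
print(any_if.getsockname()[0])  # 0.0.0.0

# Binding to an address that no local interface owns fails. Inside the
# container, 192.168.1.149 belongs to the host, not the container, hence
# "bind: cannot assign requested address".
try:
    socket.socket(socket.AF_INET, socket.SOCK_STREAM).bind(("192.0.2.1", 0))
except OSError as e:
    print(e.errno == errno.EADDRNOTAVAIL)  # True
```

The third case is exactly the fatal error from the question: inside the container's namespace, the host's 192.168.1.149 is not a local address.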

Related

Cannot access docker host from macos

I am trying to access my host system from a Docker container.
I have tried all the following instead of 127.0.0.1 and localhost:
gateway.docker.internal,
docker.for.mac.host.internal,
host.docker.internal,
docker.for.mac.host.internal,
docker.for.mac.localhost,
but none seem to work.
If I run my docker run command with --net=host, I can indeed access localhost; however, none of my port mappings get exposed, and they are inaccessible from outside Docker.
I am using Docker version 20.10.5, build 55c4c88
Some more info: I am running a piece of software called impervious (a layer on top of the Bitcoin Lightning Network). It needs to connect to my local Polar lightning node on localhost:10001. Here is the config file the tool itself uses (see the lnd section):
# Server configurations
server:
  enabled: true # enable the GRPC/HTTP/websocket server
  grpc_addr: 0.0.0.0:8881 # SET FOR DOCKER
  http_addr: 0.0.0.0:8882 # SET FOR DOCKER

# Redis DB configurations
sqlite3:
  username: admin
  password: supersecretpassword # this will get moved to environment variable or generated dynamically

###### DO NOT EDIT THE BELOW SECTION #####
# Services
service_list:
  - service_type: federate
    active: true
    custom_record_number: 100000
    additional_service_data:
  - service_type: vpn
    active: true
    custom_record_number: 200000
    additional_service_data:
  - service_type: message
    active: true
    custom_record_number: 400000
    additional_service_data:
  - service_type: socket
    active: true
    custom_record_number: 500000
    additional_service_data:
  - service_type: sign
    active: true
    custom_record_number: 800000
    additional_service_data:
###### DO NOT EDIT THE ABOVE SECTION #####

# Lightning
lightning:
  lnd_node:
    ip: host.docker.internal
    port: 10001 # GRPC port of your LND node
    pub_key: 025287d7d6b3ffcfb0a7695b1989ec9a8dcc79688797ac05f886a0a352a43959ce # get your LND pubkey with "lncli getinfo"
    tls_cert: /app/lnd/tls.cert # SET FOR DOCKER
    admin_macaroon: /app/lnd/admin.macaroon # SET FOR DOCKER

federate:
  ttl: 31560000 # federation auto-delete, in seconds
  imp_id: YOUR_IMP_ID # plain text string of your IMP node name

vpn:
  price: 100 # per hour
  server_ip: http://host.docker.internal # public IP of your VPN server
  server_port: 51820 # port you want to listen on
  subnet: 10.0.0.0/24 # subnet you want to give to your clients; .1 == your server IP
  server_pub_key: asdfasdfasdf # get this from your WG public key file
  allowed_ips: 0.0.0.0/0 # what subnets clients can reach; default is the entire world
  binary_path: /usr/bin/wg # where you installed the "wg" command
  dns: 8.8.8.8 # set your preferred DNS server here

socket:
  server_ip: 1.1.1.1 # public IP of your socket server
I run impervious using the following docker command:
docker run -p 8881:8881 -p 8882:8882 \
  -v /Users/xxx/dev/btc/impervious/config/alice-config-docker.yml:/app/config/config.yml \
  -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/tls.cert:/app/lnd/tls.cert \
  -v /Users/xxx/.polar/networks/1/volumes/lnd/alice/data/chain/bitcoin/regtest/admin.macaroon:/app/lnd/admin.macaroon \
  -it impant/imp-releases:v0.1.4
but it just hangs when it tries to connect to the node at host.docker.internal
Have you tried docker-mac-net-connect?
The problem is related to macOS. Unlike Docker on Linux, Docker for macOS does not expose container networks directly on the macOS host.
You can use host.docker.internal, which resolves to the localhost of the macOS host.
https://docs.docker.com/desktop/networking/#i-want-to-connect-from-a-container-to-a-service-on-the-host
The host has a changing IP address (or none if you have no network
access). We recommend that you connect to the special DNS name
host.docker.internal which resolves to the internal IP address used by
the host. This is for development purpose and does not work in a
production environment outside of Docker Desktop.
Mac running the desktop version of Docker.
Docker isn't running directly on the host machine here; it runs inside a virtual machine with its own Linux kernel. The network of this virtual machine is separate from the host machine's network, and Docker Desktop uses a kind of VPN connection to let the Mac host reach running containers.
When you run your container with the --net host switch, you connect it to the virtual machine's network, not to your host machine's network as would happen on Linux. Connecting to 127.0.0.1 or localhost therefore doesn't reach the running container.
The solution is to expose the needed ports from the running container:
docker run -p 8080:8080
If you need to expose all ports from your container, you can use the -P switch.
For the opposite direction, use the host.docker.internal URL from the container.
More documentation about docker desktop for Mac networking

Consul: Cannot connect service to Consul server hosted in Docker container

I am trying to connect to a Consul server hosted in a Docker container.
I used the below command to launch the Consul server on docker,
docker run -d -p 8500:8500 -p 8600:8600/udp --name=CS1 consul agent -server -ui -node=server-1 -bootstrap-expect=1 -client=0.0.0.0 -bind=0.0.0.0
The Consul sever is successfully launched and I can access the UI on my host machine using the URL http://localhost:8500/ui/dc1/nodes.
Below is the log for the Consul server as shown by Docker,
==> Starting Consul agent...
Version: '1.9.2'
Node ID: 'cc205268-1f9d-3cf6-2104-f09cbd01dd9d'
Node name: 'server-1'
Datacenter: 'dc1' (Segment: '<all>')
Server: true (Bootstrap: true)
Client Addr: [0.0.0.0] (HTTP: 8500, HTTPS: -1, gRPC: -1, DNS: 8600)
Cluster Addr: 172.17.0.2 (LAN: 8301, WAN: 8302)
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false, Auto-Encrypt-TLS: false
Now I have a .NET Core 3.1 Web API service which I am trying to register with the Consul server. The service is running on the host machine.
I'm using IP address 172.17.0.2 and Port 8500.
However I'm unable to register the service and I get an error message as below,
The requested address is not valid in its context.
I would appreciate it if anyone could suggest whether I'm doing something wrong, or how to register a service running on the host machine with a Consul server hosted in a Docker container.

docker-compose internal DNS server 127.0.0.11 connection refused

Suddenly when I deployed some new containers with docker-compose the internal hostname resolution didn't work.
When I tried to ping one container from the other using the service name from the docker-compose.yaml file I got ping: bad address 'myhostname'
I checked that the /etc/resolv.conf was correct and it was using 127.0.0.11
When I tried to manually resolve my hostname with either nslookup myhostname. or nslookup myhostname.docker.internal, I got an error:
nslookup: write to '127.0.0.11': Connection refused
;; connection timed out; no servers could be reached
Okay, so the issue is that the Docker DNS server has stopped working. All already-started containers still function, but any new ones started have this issue.
I am running Docker version 19.03.6-ce, build 369ce74
I could of course just restart docker to see if it solves it, but I am also keen on understanding why this issue happened and how to avoid it in the future.
I have a lot of containers started on the server and a total of 25 docker networks currently.
Any ideas on what can be done to troubleshoot? Any known issues that could explain this?
The docker-compose.yaml file I use has worked before, and no changes have been made to it.
Edit: No DNS names at all can be resolved. 127.0.0.11 refuses all connections. I can ping any external IP addresses, as well as the IP of other containers on the same docker network. It is only the 127.0.0.11 DNS server that is not working. 127.0.0.11 still replies to ping from within the container.
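For reference, the refused connection is just what a UDP client sees when nothing is answering on 127.0.0.11:53. A Python stdlib sketch run outside any container (where nothing listens on that loopback address either) shows the same symptom:

```python
import socket

# 127.0.0.11 is in the loopback range. When no resolver is listening on
# port 53 there, the kernel answers the datagram with ICMP port-unreachable,
# which surfaces as "Connection refused" on the next socket operation,
# exactly what nslookup reported from inside the container.
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.settimeout(2)
s.connect(("127.0.0.11", 53))
try:
    s.send(b"probe")  # a real resolver client would send a DNS query here
    s.recv(512)
    print("got a reply")
except ConnectionRefusedError:
    print("connection refused")
except socket.timeout:
    print("timed out")
```

So "connection refused" means the packet reached 127.0.0.11 fine; the embedded DNS process behind it simply wasn't listening any more.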
Make sure you're using a custom bridge network, NOT the default one. As per the Docker docs (https://docs.docker.com/network/bridge/), the default bridge network does not allow automatic DNS resolution:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
I have the same problem. I am using the pihole/pihole Docker container as the sole DNS server on my network. Docker containers on the same host as the Pi-hole server could not resolve domain names.
I resolved the issue based on "hmario"'s response to this forum post.
In brief, modify the pihole docker-compose.yml from:
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 53:53/udp
      - 53:53/tcp
    volumes: [...]
to
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 192.168.1.30:53:53/udp
      - 192.168.1.30:53:53/tcp
    volumes: [...]
where 192.168.1.30 is the IP address of the Docker host.
I'm having exactly the same problem. Following the comment here, I could reproduce the setup without docker-compose, using only docker:
docker network create alpine_net
docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
After stopping docker (systemctl stop docker) and enabling debug output, it gives:
> dockerd --debug
[...]
[resolver] read from DNS server failed, read udp 172.19.0.2:40868->192.168.177.1:53: i/o timeout
[...]
where 192.168.177.1 is the local-network IP of the host that Docker runs on, which is also where Pi-hole is running as the DNS server, working fine for all of my other systems.
I played around with fixing the iptables configuration, but even switching it off completely and opening everything up did not help.
The solution I found, without fully understanding the root cause, was to move the DNS to another server. I installed dnsmasq on a second system with IP 192.168.177.2 that does nothing other than forward all DNS queries back to my Pi-hole server on 192.168.177.1.
After starting Docker on 192.168.177.1 again, with its DNS configured to use 192.168.177.2, everything was working. With this in one terminal:
dockerd --debug --dns 192.168.177.2
and the command from above in another, it worked again:
> docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
search mydomain.local
nameserver 127.0.0.11
options ndots:0
PING www.google.com (172.217.23.4): 56 data bytes
64 bytes from 172.217.23.4: seq=0 ttl=118 time=8.201 ms
--- www.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 8.201/8.201/8.201 ms
So moving the DNS server to another host and adding "dns": ["192.168.177.2"] to my /etc/docker/daemon.json fixed it for me.
Maybe someone else can help explain the root cause of the problem with running the DNS server on the same host as Docker.
First, make sure your container is connected to a custom bridge network. In a custom network, DNS requests inside the container are sent to 127.0.0.11#53 and forwarded to the DNS server of the host machine.
Second, check iptables -L to see if there are Docker-related rules. If there are not, it's probably because iptables was restarted/reset. You'll need to restart the Docker daemon to re-add the rules and make DNS request forwarding work.
I had the same problem; the cause was the host machine's hostname. I checked the hostnamectl result and it looked OK, but the problem was solved by a simple reboot. Before the reboot, the result of cat /etc/hosts was like this:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 localhost HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 localhost HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
and after the reboot, I got this result:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 hostnameIHaveSetuped HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 hostnameIHaveSetuped HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6

pod service not accessible from host

As the title says, I am unable to use a service a certain pod is providing.
The pod is serving a Java REST API service on TCP port 2040, which I should be able to access with some specific curl commands.
Some data from kubectl describe pod:
Status: Running
IP: 172.17.0.32
Node: minikube/13*.20*.13*.14 (I obfuscated my real IP here)
Container ID: docker://b5b16bd7926ce65d4a57212f60c87ea72e161f534a0e1d6925c508dd89ab202e
Ports: 9899/TCP, 1272/TCP, 2040/TCP, 9500/TCP, 9501/TCP, 9502/TCP, 9503/TCP, 9504/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP, 0/TCP
(is this correct?)
I am confused by the fact that :
A) I am perfectly able to issue curl commands on the shell inside the docker container (b5b16bd7926ce65...)
B) The tcp connection from my host to the service the pod is providing is successful :
user#host$ nc -zv 172.17.0.32 2040
Connection to 172.17.0.32 2040 port [tcp/*] succeeded!
C) any curl command (from the host) towards 172.17.0.32:2040 fails with:
504 Gateway Timeout: remote server did not respond to the proxy
The host is running Ubuntu 18.04LTS.
I am behind a corporate proxy, but as this is all done on my local machine, I don't think that could be an issue.
What could be responsible for this behavior?
Thanks in advance.
So the issue here was the proxy: adding --noproxy "*" to the curl command made it work from the host.
However, I do not understand why this is required, since I am hosting everything myself.
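A likely explanation: curl honors the http_proxy/no_proxy environment variables, and a pod IP like 172.17.0.32 doesn't match a typical no_proxy list, so the request is sent to the corporate proxy, which cannot reach a pod that exists only on your machine. The same lookup logic can be sketched with the Python stdlib (the proxy URL is a made-up placeholder):

```python
import os
import urllib.request

# A typical corporate setup: a proxy for HTTP with a narrow bypass list.
# "proxy.example.com" is a hypothetical placeholder, not a real proxy.
os.environ["http_proxy"] = "http://proxy.example.com:3128"
os.environ["no_proxy"] = "localhost,127.0.0.1"

# The pod IP is not on the bypass list, so clients route the request
# through the proxy, which cannot reach a local-only pod.
print(bool(urllib.request.proxy_bypass("172.17.0.32")))  # False

# no_proxy="*" is what curl's --noproxy "*" amounts to: bypass for every host.
os.environ["no_proxy"] = "*"
print(bool(urllib.request.proxy_bypass("172.17.0.32")))  # True
```

So --noproxy "*" forces a direct connection instead of handing the request to the unreachable proxy.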

Register to Eureka from Docker with a custom IP

I'm running Spring Cloud Eureka inside my Docker VM. I have services registering to it, but they use their IP address from inside the Docker VM; to be able to use them properly, I need them to use the IP address I can access from outside the VM.
For example, inside my VM they register using 172.x.x.x, while I access the REST interface from my browser using 192.168.x.x.x. I need them to register as 192.168.x.x.x.
How can I tell my service to register with a specific IP address?
Both previous answers are correct, but I'll make copy-pasting easier.
What you should do is add an environment variable with the host IP when starting your container, and include it in your Spring Boot application.yml file.
application.yml
eureka:
  instance:
    # Necessary for Docker as it doesn't have DNS entries
    prefer-ip-address: true
    # Necessary for Docker, otherwise you will get a 172.0.0.x IP
    ip-address: "${HOST}"
  client:
    serviceUrl:
      # Location of your eureka server
      defaultZone: http://192.168.0.107:8761/eureka/
Running with Docker
docker run -p <port>:<port> -e HOST='192.168.0.106' <image name>
Running with docker-compose
my_service:
  image: image_name
  environment:
    - HOST=192.168.0.106
  ports:
    - your_port:container_port
You can configure it in your application.yml:
eureka:
  instance:
    ipAddress: 192.168.x.x
Register to Eureka with a custom IP and a custom PORT:
server:
  port: 18090
eureka:
  instance:
    prefer-ip-address: true
    ip-address: 10.150.160.21
    non-secure-port: 8080
I ran into a situation where 10.150.160.21:8080 is mapped to 192.168.1.124:18090 through a firewall. The application is running on 192.168.1.124:18090 but has to register to Eureka with 10.150.160.21:8080.
It works for me.
You can use environment variables in the eureka configuration in application.yml
In the following example I am using the $HOST and $PORT environment variables to tell the eureka client what values to use. In my case, these variables are set by Mesos/Marathon. You may find other useful variables set by Docker.
The following works for me:
eureka:
  client:
    serviceUrl:
      defaultZone: http://discovery.marathon.mesos:31444/eureka/
    registerWithEureka: true
    fetchRegistry: true
  instance:
    appname: configserver
    health-check-url: /health
    prefer-ip-address: true
    ip-address: "${HOST}" # mesos/marathon populates this in the environment
    non-secure-port: "${PORT}" # mesos/marathon populates this in the environment
