I'm trying to set up a Cassandra ring with five nodes in Docker using dse-server and dse-studio. The Docker containers are up and running and I can access the Cassandra database and do CRUD operations, but the client does not connect to all of the nodes. I believe I have not created the Docker Compose networks correctly, or it may be another issue. Here is the code for the project:
https://github.com/juanpujazon/DockerCassandraNodes
If I use the connector connecting to 192.168.3.19:9042 I can do the CRUD for the tables, but only the connection to the first node is successful. The CRUD completes successfully, but all the host IPs other than the first one get the error "Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)"
I tried to create a connector adding all the IPs from the different nodes as contact points, but it is not working as intended:
Exception in thread "main" java.lang.IllegalArgumentException: Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:943)
at cassandra.java.client.CassandraConnector.connectNodes(CassandraConnector.java:30)
at cassandra.java.client.Main.main(Main.java:13)
Caused by: java.net.UnknownHostException: Host desconocido ("127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6")
at java.base/java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.base/java.net.InetAddress$PlatformNameService.lookupAllHostAddr(InetAddress.java:933)
at java.base/java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1529)
at java.base/java.net.InetAddress$NameServiceAddresses.get(InetAddress.java:852)
at java.base/java.net.InetAddress.getAllByName0(InetAddress.java:1519)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1377)
at java.base/java.net.InetAddress.getAllByName(InetAddress.java:1305)
at com.datastax.driver.core.Cluster$Builder.addContactPoint(Cluster.java:939)
Any idea about what I should change?
If you can only connect to the cluster on 192.168.3.19, it indicates to me that the containers themselves are not accessible from outside the host. You will need to configure your Docker environment so that the containers are exposed for external access.
For this error:
Connection[/172.30.0.4:9042-1, inFlight=0, closed=false] \
Error connecting to /172.30.0.4:9042 (connection timed out: /172.30.0.4:9042)
you are connecting to the container using the default CQL port 9042 but you've exposed it on a different port in your docker-compose.yml:
ports:
- 9044:9042
I recommend you re-map all the container ports to just 9042 to make it easier for yourself to connect to them. Otherwise, you'll need to specify the port together with the IP addresses when you configure the contact points like:
"ip1:port1", "ip2:port2", "ip3:port3"
I've also noted that you've included localhost in the contact points:
Failed to add contact point: "127.0.0.1";"172.30.0.2";"172.30.0.3";"172.30.0.4";"172.30.0.5";"172.30.0.6"
If you have a node that is only listening for client connections on localhost then it is configured incorrectly and you need to fix its configuration.
Finally, if your goal is to build a cluster for app development, you might want to consider using Astra DB so you don't have to worry about configuring/maintaining your own Cassandra installation. With Astra DB, you can launch a cluster on the free tier with literally just 5 clicks in just over a minute with no credit card required. Cheers!
I have a VM running Ubuntu 16.04, on which I want to deploy an application packaged as a Docker container. The application needs to be able to perform an HTTP request towards a server under VPN (e.g. server1.vpn-remote.com)
I successfully configured the host VM in order to connect to the VPN through openconnect, I can turn this connection on/off using a systemd service.
Unfortunately, when I run docker run mycontainer, neither the host nor the container is able to reach server1.vpn-remote.com. Weirdly enough, there is no error displayed in the VPN connection service logs, which are stuck at the openconnect messages confirming a successful connection.
If I restart the VPN connection after starting mycontainer, the host machine is able to access server1.vpn-remote.com, but not the container. Moreover, if I issue any command like docker run/start/stop/restart on mycontainer or any other container, the connection gets broken again even for the host machine.
NOTE: I already checked on the ip routes and there seems to be no conflict between Docker and VPN subnets.
NOTE: running the container with --net="host" results in both host and container being able to access the VPN but I would like to avoid this option as I will eventually make a docker compose deployment which requires all containers to run in bridge mode.
Thanks in advance for your help
EDIT: I figured out it is a DNS issue, as I'm able to ping the IP corresponding to server1.vpn-remote.com even after the VPN connection seemed to be failing. I'm going through documentation regarding DNS management with Docker and Docker Compose and their usage of the host's /etc/resolv.conf file.
I hope you don't still need help six months later! Some of the details are different, but this sounds a bit like a problem I had. In my case the solution was a bit disappointing: after you've connected to your VPN, restart the docker daemon:
sudo systemctl restart docker
I'm making some inferences here, but it seems that, when the daemon starts, it makes some decisions/configs based on the state of the network at that time. In my case, the daemon starts when I boot up. Unsurprisingly, when I boot up, I haven't had a chance to connect to the VPN yet. As a result, my container traffic, including DNS lookups, went through my regular network directly instead of the VPN.
Hat tip to this answer for guiding me down the correct path.
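If you want to confirm the DNS side from inside the container before and after the daemon restart (assuming the app is JVM-based, which the question doesn't actually state), a throwaway lookup like the following makes the failure mode visible; the hostname is the placeholder from the question:

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsCheck {
    public static void main(String[] args) {
        // Hypothetical hostname from the question; pass your own as an argument.
        String host = args.length > 0 ? args[0] : "server1.vpn-remote.com";
        try {
            for (InetAddress addr : InetAddress.getAllByName(host)) {
                System.out.println(host + " resolves to " + addr.getHostAddress());
            }
        } catch (UnknownHostException e) {
            System.out.println("DNS lookup failed for " + host
                    + " -- the container's resolv.conf is probably not using the VPN's DNS");
        }
    }
}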
I am having this issue
system3:postgres saurabh-gupta2$ docker build -t postgres .
Sending build context to Docker daemon 38.91kB
Step 1/51 : FROM registry.access.redhat.com/rhel7/rhel
Get https://registry.access.redhat.com/v2/: Service Unavailable
docker run -t apline
Unable to find image 'apline:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable.
See 'docker run --help'.
I have looked for solutions that say to set a proxy, but I have already set the proxy for the Wi-Fi.
https://docs.docker.com/docker-for-mac/networking/#httphttps-proxy-support
Still, it is not working.
I have set the proxy for Docker too (in Preferences -> Proxies), and it is still not working.
Docker version 17.12 CE.
I also want to know: if the proxy is the issue, how can I check that it is set, and what is the workaround for this?
Here are a few suggestions:
Try restarting your Docker service.
Check your network connection, for example with the following shell commands:
</dev/tcp/registry-1.docker.io/443 && echo Works || echo Problem
curl https://registry-1.docker.io/v2/ && echo Works || echo Problem
Check your proxy settings (e.g. in /etc/default/docker).
If the above doesn't help, this could be a temporary issue with the Docker services (as the Service Unavailable message suggests).
Related: GH-842 - 503 Service Unavailable at http://hub.docker.com.
I had this problem myself for the past few days, and it simply started working again after that.
You can consider raising the issue in the docker/hub-feedback repo, checking the Docker Community Forums, or contacting Docker Support directly.
docker logout
docker login
This might solve your problem
I tried running this on Windows and got the problem after an update. I tried restarting the Docker service as well as my PC, but nothing worked.
When running:
curl https://registry-1.docker.io/v2/ && echo Works
I got back:
{"errors":[{"code":"UNAUTHORIZED","message":"authentication required","detail":null}]}
Works
Eventually, I tried:
https://github.com/moby/moby/issues/22635#issuecomment-284956961
and changed the fixed DNS address to 8.8.8.8, which worked for me!
I still got the unauthorized message for curl https://registry-1.docker.io/v2/ but I managed to pull images from docker hub.
For me I had this issue when I first installed Docker and ran
docker run hello-world
I got an authentication required error when I ran
curl https://registry-1.docker.io/v2/ && echo Works
All I needed to do was restart macOS and then run the command again; it just started pulling the image and I got the message:
Hello from Docker!
This message shows that your installation appears to be working correctly.
It's clearly a proxy issue: Docker proxies HTTPS connections to the wrong place. Bear in mind that Docker's proxy settings may be different from the operating system's (and curl's). Here's how I managed to solve the issue:
First of all, find out where you are proxying your Docker HTTPS requests:
# docker info | grep Proxy
Http Proxy: http://<my.proxy.server>:8080
Https Proxy: https://<my.proxy.server>:8080
No Proxy: localhost,127.0.0.1
and double-check your HTTPS settings.
In my case, I realized that the "Https Proxy" was set to https://... instead of http://..., so I corrected it in the /etc/sysconfig/docker file (I'm using RHEL7) and, after restarting Docker with:
# systemctl restart docker
the proxy variable shows up successfully updated:
# docker info | grep Proxy
Http Proxy: http://<my.proxy.server>:8080
Https Proxy: http://<my.proxy.server>:8080
No Proxy: localhost,127.0.0.1
and everything works fine :-)
Just to add, in case anyone else comes across this issue.
On a Mac
I had to log out and log back in:
docker logout
docker login
Then it prompts for a username (NOTE: not your email) and password. (You need an account on https://hub.docker.com to pull images down.)
Then it worked for me.
NTLM PROXY AND DOCKER
If your company is behind an MS proxy server that uses the proprietary NTLM protocol, you need to install the Cntlm authentication proxy.
After this, set the proxy in /etc/systemd/system/docker.service.d/http-proxy.conf with the following format:
[Service]
Environment="HTTP_PROXY=http://<<IP OF CNTLM Proxy Server>>:3182"
In addition, you can set the following in the .DockerFile:
export http_proxy=http://<<IP OF CNTLM Proxy Server>>:3182
export https_proxy=http://<<IP OF CNTLM Proxy Server>>:3182
export no_proxy=localhost,127.0.0.1,10.0.2.*
Followed by:
systemctl daemon-reload
systemctl restart docker
This worked for me.
For me the problem was solved by restarting the docker daemon:
sudo systemctl restart docker
One option which worked for me on macOS:
Click on the Docker icon in the tray and open Preferences -> Proxies. Click on Manual Proxy and specify the Web Server (HTTP) proxy and Secure Web Server (HTTPS) proxy in the same format as the HTTPS_PROXY environment variable.
Choose Apply and Restart.
This worked for me.
Try reloading the daemon, then restart the Docker service:
systemctl daemon-reload
I had this same issue when working on an Ubuntu server.
I was getting the following error:
deploy#my-comp:~$ docker login -u my-username -p my-password
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp 35.175.83.85:443: connect: connection refused
Here are the things I tried that did not work:
Restarting the Docker service using sudo systemctl restart docker
Powering off and restarting the Ubuntu server.
Changing the name server to 8.8.8.8 in the /etc/resolv.conf file
Here's what worked for me:
I tried checking if the server has access to the internet using the following netcat command:
nc -vz google.com 443
And it returned this output:
nc: connect to google.com port 443 (tcp) failed: Connection refused
nc: connect to google.com port 443 (tcp) failed: Network is unreachable
Instead of something like this:
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: Connected to 172.217.166.110:443.
Ncat: 0 bytes sent, 0 bytes received in 0.07 seconds.
I tried checking again if the server has access to the internet using the following wget command:
wget -q --spider http://google.com ; echo $?
And it returned:
4
Instead of:
0
Note: Anything other than 0 in the output means your system is not connected to the internet
I then tried one last time to check whether the server had access to the internet, using the following Nmap command:
nmap -p 443 google.com
And it returned:
Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT
Nmap scan report for google.com (216.58.223.238)
Host is up (0.00052s latency).
Other addresses for google.com (not scanned): 2c0f:fb50:4003:802::200e
rDNS record for 216.58.223.238: los02s04-in-f14.1e100.net
PORT STATE SERVICE
443/tcp closed https
Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds
Instead of something like this:
Starting Nmap 7.01 ( https://nmap.org ) at 2021-02-16 11:50 WAT
Nmap scan report for google.com (216.58.223.238)
Host is up (0.00052s latency).
Other addresses for google.com (not scanned): 2c0f:fb50:4003:802::200e
rDNS record for 216.58.223.238: los02s04-in-f14.1e100.net
PORT STATE SERVICE
443/tcp open https
Nmap done: 1 IP address (1 host up) scanned in 1.21 seconds
Note: The state of port 443/tcp is closed instead of open
All this was enough to make me realize that connections to the internet were not allowed on the server.
All I had to do was speak with the team in charge of infrastructure to fix the network connectivity issue to the internet on the server. And once that was fixed my docker command started working fine.
Resources: 9 commands to check if connected to internet with shell script examples
That's all.
I hope this helps
Recheck the proxy settings with the following command:
docker info | grep Proxy
Check VPN connectivity.
If you are not using a VPN, check your network connectivity.
Reinstall Docker and repeat the above steps.
Enjoy.
On my Windows 11 machine, all I did was first log in to my account:
docker login
I got this because a network filter (LuLu on macOS) was blocking traffic to/from Docker-related processes.
I had this issue when I first installed Docker and ran
docker run hello-world
I was on a corporate network and switching to my personal network solved the issue for me.
The answers provided here are amazing, but if you are new to this and don't read the full error, you may miss that it ends with a net/http: TLS handshake timeout message, which means that you have a slow internet connection. So the problem may simply be that.
Toodles
I had the following entries in my /etc/hosts file:
34.228.211.243 registry-1.docker.io
34.205.88.205 auth.docker.io
104.18.121.25 production.cloudflare.docker.com
Just by commenting them out, I fixed the problem.
Many good answers above, but mine is a bit different, involving the Mac and the Docker Desktop UI. In my case, it is a Docker Desktop proxy setting that needs to be turned off when I am outside of the corporate firewall/proxy:
ERROR message from docker CLI:
Username: xxx
Password: ***
Error response from daemon: Get https://registry-1.docker.io/v2/: Service Unavailable
My environment: a Mac with the Docker UI (i.e. Docker Desktop, shown as a whale icon), running outside of the corporate firewall/proxy.
I am able to sign in with the Docker Desktop UI.
However, whether I ran docker login or docker pull, I kept getting the above error, and I got sidetracked into checking the user ID, resetting the daemon, ...
Finally, I got to the Docker Desktop UI. Sure enough, there is a proxy setting that I had set up a long time ago and totally forgotten about!
Yes, when I am outside of the firewall, I need to turn off the proxy setting here:
Docker Desktop -> Preferences -> Resources -> Proxies.
Turn off the manual proxy configuration.
Then docker pull works (without docker login, as I was pulling a public image)!
Thanks
PS: I think the difference in behavior between Docker Desktop and the Docker CLI contributes to the confusion. I am able to log in to Docker through the GUI, while the CLI keeps erroring out without good enough diagnostic information.
Using Linux. For me it worked by doing:
$ docker logout   (log out of hub.docker.com)
$ docker login    (log in to hub.docker.com)
Check whether the Containers feature is enabled or not:
Go to "Turn Windows features on or off", then enable the Containers checkbox.
Restart Windows.
Using the root account instead of my regular user account solved it for me.
I solved this issue with $ sudo docker run hello-world by following the Docker docs.
If you are behind a corporate HTTP proxy server, this may solve your problem.
The Docker docs also cover other situations for the HTTP proxy setting.
In my case, stopping Proxifier fixed it. I added a rule to route any connections from vpnkit.exe as Direct and it now works.
One of the things you might need to check is:
Does the registry require a VPN?
Enable your VPN and try pulling again.
Thanks.
OK, I had a similar issue and nothing seemed to help: I restarted Docker, disabled IPv6, and nslookup and dig all seemed fine.
What worked for me was going to my Docker Desktop -> Preferences -> Experimental Features and unchecking Use new virtualization framework.
The docker login terminal command worked for me.
If your machine requires a VPN, you must connect to the VPN first and then try docker login.
Have you created a repo with the matching tag on the destination Docker Hub? It might be that your container image has nowhere to be pushed to.
Run export DOCKER_CONTENT_TRUST=0 and then try it again.
Use --tls in the pull command.
For example if original pull request is docker pull dgraph/dgraph:v21.03.0
Use this instead : docker --tls pull dgraph/dgraph:v21.03.0
Just rebooting the system helped for me (Windows 10 x64).
We have a couple of Docker containers deployed on ECS. The application inside the container uses remote services, so it needs to access them using their 10.X.X.X private IPs.
We are using Docker 1.13 with CentOS 7 and docker/alpine as our base image. We are also using networkMode: host for our containers. The problem comes when we can successfully run telnet 10.X.X.X 9999 from the host machine, but if we run the same command from inside the container, it just hangs and is not able to connect.
In addition, we have net.ipv4.ip_forward enabled in the host machines (where the container runs) but disabled in the remote machine.
Not sure what could be the issue, maybe iptables?
I have spent the day with the same problem (tried with both network mode 'bridge' and 'host'), and it looks like an issue with using busybox's telnet inside ECS - Alpine's telnet is a symlink to busybox. I don't know enough about busybox/networking to suggest what the root cause is, but I was able to prove the network path was clear by using other tools.
My 'go to' for testing a network path is using netcat as follows. The 'success' or 'failure' message varies from version to version, but a refusal or a timeout (-w#) is pretty obvious. All netcat does here is request a socket - it doesn't actually talk to the listening application, so you need something else to test that.
nc -vz -w2 HOST PORT
My problem today was troubleshooting an app's mongo connection. nc showed the path was clear, but telnet had the same issue as you reported. I ended up installing the mongo client and checking with that, and I could connect properly.
If you need to actually run commands over telnet from inside your ECS container, perhaps try installing a different telnet tool and avoiding the busybox inbuilt one.
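If the container happens to have a JVM available (an assumption; the question only mentions docker/alpine), the same kind of path check can be done without busybox at all, using a plain socket connect with a timeout, which is essentially what the nc -vz probe above does:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortProbe {
    public static void main(String[] args) {
        // Placeholder host/port -- substitute the 10.X.X.X address and port 9999
        // from the question.
        String host = args.length > 0 ? args[0] : "10.0.0.1";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 9999;
        try (Socket socket = new Socket()) {
            // Only requests a TCP connection; it never speaks the application
            // protocol, just like nc -vz.
            socket.connect(new InetSocketAddress(host, port), 2000);
            System.out.println("Open: " + host + ":" + port);
        } catch (IOException e) {
            System.out.println("Failed: " + host + ":" + port + " (" + e.getMessage() + ")");
        }
    }
}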
Note: this question is related to Bluemix Docker support.
I am trying to connect two different Docker Containers deployed in Bluemix. I am getting the exception:
java.net.NoRouteToHostException: No route to host
when I try such connection (Java EE app running on Liberty trying to access MySQL). I tried using both private and public IPs of MySQL Docker Container.
The point is that I am able to access MySQL Docker Container from outside Bluemix. So the IP, port, and MySQL itself are ok.
It seems to be something related to the internal networking of the Docker container support within Bluemix. If I try to access it from inside Bluemix it fails; if I do it from outside, it works. Any help?
UPDATE: I continued investigating as you can see in the comments, and it seems to be a timing issue. I mean, it seems that once the containers are up and running, there is still some connectivity work left undone. If I wait around 1 minute before trying the connection, it works.
60 seconds should be the rule of thumb for the networking to start working after container creation.
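Given that update, a pragmatic workaround on the application side is to retry the connection for up to a minute instead of failing on the first attempt. Below is a minimal sketch using plain JDBC; it assumes MySQL Connector/J is on the classpath, and the URL, credentials and class name are placeholders, not taken from the actual app:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class MySqlRetryConnect {

    // Retry for ~60 seconds (the rule of thumb above), pausing 5 seconds between attempts.
    static Connection connectWithRetry(String url, String user, String pass)
            throws InterruptedException, SQLException {
        SQLException last = null;
        for (int attempt = 1; attempt <= 12; attempt++) {
            try {
                return DriverManager.getConnection(url, user, pass);
            } catch (SQLException e) {
                last = e;
                System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
                Thread.sleep(5000);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Placeholder container address, database name and credentials.
        Connection conn = connectWithRetry(
                "jdbc:mysql://<mysql-container-ip>:3306/mydb", "user", "secret");
        System.out.println("Connected: " + !conn.isClosed());
        conn.close();
    }
}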