MongoDB 5.0 Replication: Mongo::Error::NoServerAvailable (ruby-on-rails)

I would like to know how to fix the following error, which only happens when enabling replication on MongoDB: every member of the cluster is reported as <Server address=db-master:27017 UNKNOWN>.
Error
Mongo::Error::NoServerAvailable (No nearest server is available in cluster: #<Cluster topology=ReplicaSetNoPrimary[db-master:27017,db-node2:27017,db-node1:27017,name=rs0,v=6,e=7fffffff0000000000000017]
servers=[#<Server address=db-master:27017 UNKNOWN>,#<Server address=db-node2:27017 UNKNOWN>,#<Server address=db-node1:27017 UNKNOWN>]> with timeout=30, LT=0.015)
Is this issue due to DNS resolution? Is there a way to specify IP addresses instead of the aliases (from the hosts file) in the cluster topology?
When SSH'd into the primary and secondary nodes, pinging between them works:
[db-node2 server] $ ping db-master
PING db-master ([IP_IS_HERE]) 56(84) bytes of data.
64 bytes from db-master ([IP_IS_HERE]): icmp_seq=1 ttl=63 time=0.153 ms
64 bytes from db-master ([IP_IS_HERE]): icmp_seq=2 ttl=63 time=0.150 ms
mongo.conf
net:
  port: 27017
  bindIp: 0.0.0.0,localhost,127.0.0.1,db-master,[IP_IS_HERE]
  bindIpAll: true
replication:
  replSetName: "rs0"
Ubuntu hosts file
$ cat /etc/hosts
127.0.0.1 localhost
[IP_IS_HERE] db-master
[IP_IS_HERE] db-node1
[IP_IS_HERE] db-node2

Did you start the mongod service on the other hosts?
Your config does not make much sense: bindIp and bindIpAll are mutually exclusive. If you want to permit connections from any host, then use
net:
  port: 27017
  bindIpAll: true
or
net:
  port: 27017
  bindIp: 0.0.0.0
If you want mongod to listen only on specific interfaces, then use
net:
  port: 27017
  bindIp: localhost,db-master,[IP_IS_HERE]
ssh uses port 22, but your MongoDB uses port 27017, so your firewall may block the connection even though ssh works (the same caveat applies to ping, which uses ICMP and says nothing about TCP ports). If you'd like to check whether a connection is possible, you can use the curl command:
curl --connect-timeout 3 --silent --show-error db-node1:27017
If you get the response
It looks like you are trying to access MongoDB over HTTP on the native driver port.
then the connection is possible.
A response of
Connection timed out after 3000 milliseconds
indicates that your firewall is blocking the connection attempt.
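Note also that the Ruby driver discovers member addresses from the replica set configuration itself, not only from the seed list you pass it, so the Rails application host must be able to resolve db-master, db-node1 and db-node2 too. A quick sketch of how to check (hostnames from the question; the commands are standard):
# On the Rails application host: verify the replica-set hostnames resolve
getent hosts db-master db-node1 db-node2
# On the primary: print the host strings the replica set advertises to clients
mongo --host db-master --port 27017 --eval 'printjson(rs.conf().members)'
If the application host cannot resolve those names, add them to its /etc/hosts or reconfigure the replica set with IP addresses; specifying IPs only in the client seed list is not enough.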

Related

Cannot connect to docker mysql that's forwarding to 3306

When I try to connect to a Docker MySQL instance that's running and forwarding to my local TCP:3306, I get the following answer:
mysql -u root -pPASSWORD
mysql: [Warning] Using a password on the command line interface can be insecure.
ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)
but when I do mysql -u root -pPASSWORD -h127.0.0.1 it connects wonderfully.
Any clue?
[UPDATE]
Considering the comments and this post, I created ~/.my.cnf
with this content:
[mysql]
user=root
password=yourpass
host=127.0.0.1
port=3306
With these changes in place, I could connect to localhost via the 127.0.0.1 address.
If you don't specify a host with -h (or a host directive in your .my.cnf), then MySQL defaults to connecting to localhost. Connections to localhost use the UNIX domain socket interface, not TCP/IP. So it's not connecting to a TCP/IP port, and therefore does not get forwarded to your docker container.
This distinction between localhost and 127.0.0.1 is a historical oddity of MySQL. Normally localhost and 127.0.0.1 are assumed to be equivalent. But MySQL treats the hostname "localhost" as special, using it to invoke the UNIX domain socket interface. This is a bit faster than using TCP/IP, but of course only works if the connection is on the local computer.
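If you just want to force a TCP connection without a config file, the standard mysql client options work as well (credentials as in the question):
# Both of these force TCP/IP instead of the UNIX socket:
mysql -u root -p -h 127.0.0.1 -P 3306
mysql -u root -p --protocol=TCP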

Understanding Docker overlay network

I am using an overlay network to deploy an application on multiple VMs on the same LAN. I am using nginx as the front end for this application, running on host_1. All the containers that are part of the application communicate with each other without any issues. But HTTP requests to the published port 80 of the nginx container (mapped to port 8080 on host_1) from a different VM on the same LAN, say host_2, time out [1]. HTTP requests to localhost:8080 on host_1 succeed [2]. And if I start the nginx container without the overlay network, I am able to send HTTP requests from host_2 [3].
Output of curl -vvv <host_1 IP>:8080 on host_2.
ubuntu@host_2:~$ curl -vvv <host_1>:8080
Rebuilt URL to: <host_1 IP>:8080/
Trying <host_1 IP>...
TCP_NODELAY set
connect to <host_1 IP> port 8080 failed: Connection timed out
Failed to connect to <host_1 IP> port 8080: Connection timed out
Closing connection 0
curl: (7) Failed to connect to <host_1 IP> port 8080: Connection timed out
Output of curl localhost:8080 on host_1.
nginx welcome page
Output of curl -vvv <host_1 IP>:8080 on host_2 when I recreate the container without the overlay network:
nginx welcome page
The docker-compose file for the front end is as below:
version: '3'
services:
  nginx-frontend:
    hostname: nginx-frontend
    image: nginx
    ports: ['8080:80']
    restart: always
networks:
  default:
    external: {name: overlay-network}
I checked that nginx and the host are listening on 0.0.0.0:80 and 0.0.0.0:8080 respectively.
Since port 80 of the nginx container is published by mapping it to port 8080 of the host, I should be able to send HTTP requests from any VM on the same LAN as the container's host. Can someone please explain what I am doing wrong, or where my assumptions are wrong?
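One way to narrow this down (a hedged sketch; the commands are standard and <host_1 IP> is the placeholder from the question) is to test raw TCP reachability and check what host_1 actually exposes:
# On host_2: test raw TCP reachability of the published port
nc -zv <host_1 IP> 8080
# On host_1: confirm a listener on 8080 and look for the NAT rule Docker added
sudo ss -ltnp | grep ':8080'
sudo iptables -t nat -vnL | grep 8080
If nc times out while ss shows a listener, a firewall between the VMs (or a missing iptables rule on host_1) is the likely culprit.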

docker-compose internal DNS server 127.0.0.11 connection refused

Suddenly, when I deployed some new containers with docker-compose, the internal hostname resolution stopped working.
When I tried to ping one container from another using the service name from the docker-compose.yaml file, I got ping: bad address 'myhostname'
I checked that the /etc/resolv.conf was correct and it was using 127.0.0.11
When I tried to manually resolve my hostname with either nslookup myhostname. or nslookup myhostname.docker.internal, I got errors:
nslookup: write to '127.0.0.11': Connection refused
;; connection timed out; no servers could be reached
Okay, so the issue is that the Docker DNS server has stopped working. All containers that were already started still function, but any new ones have this issue.
I am running Docker version 19.03.6-ce, build 369ce74
I could of course just restart docker to see if it solves it, but I am also keen on understanding why this issue happened and how to avoid it in the future.
I have a lot of containers started on the server and a total of 25 docker networks currently.
Any ideas on what can be done to troubleshoot? Any known issues that could explain this?
The docker-compose.yaml file I use has worked before, and no changes have been made to it.
Edit: No DNS names at all can be resolved. 127.0.0.11 refuses all connections. I can ping any external IP addresses, as well as the IP of other containers on the same docker network. It is only the 127.0.0.11 DNS server that is not working. 127.0.0.11 still replies to ping from within the container.
Make sure you're using a custom bridge network, NOT the default one. As per the Docker docs (https://docs.docker.com/network/bridge/), the default bridge network does not provide automatic DNS resolution:
Containers on the default bridge network can only access each other by IP addresses, unless you use the --link option, which is considered legacy. On a user-defined bridge network, containers can resolve each other by name or alias.
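A minimal sketch of the difference (image and names are arbitrary): on a user-defined bridge, name resolution via the embedded DNS at 127.0.0.11 works out of the box:
docker network create mynet
docker run -d --name web --network mynet nginx
docker run --rm --network mynet alpine ping -c 1 web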
I had the same problem. I am using the pihole/pihole docker container as the sole DNS server on my network, and Docker containers on the same host as the pihole server could not resolve domain names.
I resolved the issue based on "hmario"'s response to this forum post.
In brief, modify the pihole docker-compose.yml from:
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 53:53/udp
      - 53:53/tcp
    volumes: [...]
to
---
version: '3.7'
services:
  unbound:
    image: mvance/unbound-rpi:1.13.0
    hostname: unbound
    restart: unless-stopped
    ports:
      - 192.168.1.30:53:53/udp
      - 192.168.1.30:53:53/tcp
    volumes: [...]
where 192.168.1.30 is the IP address of the Docker host, so the container no longer claims port 53 on every interface.
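To confirm the new binding took effect, something like this should show port 53 attached only to the LAN address (container name assumed to be unbound, as in the compose file):
sudo ss -lunp | grep ':53'
docker port unbound 53/udp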
I'm having exactly the same problem. Following a comment here, I could reproduce the setup without docker-compose, using only docker:
docker network create alpine_net
docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
Stopping Docker (systemctl stop docker) and enabling debug output gives:
> dockerd --debug
[...]
[resolver] read from DNS server failed, read udp 172.19.0.2:40868->192.168.177.1:53: i/o timeout
[...]
where 192.168.177.1 is the local network IP of the host that Docker runs on, which is also where Pi-hole runs as the DNS server, working fine for all of my other systems.
I played around with the iptables configuration, but even switching it off completely and opening everything up did not help.
The solution I found, without fully understanding the root cause, was to move DNS to another server. I installed dnsmasq on a second system with IP 192.168.177.2 that does nothing but forward all DNS queries back to my Pi-hole server on 192.168.177.1.
Starting Docker on 192.168.177.1 again with DNS configured to use 192.168.177.2, everything was working again:
With this in one terminal
dockerd --debug --dns 192.168.177.2
and the command from above in another, it worked again:
> docker run -it --network alpine_net alpine /bin/sh -c "cat /etc/resolv.conf; ping -c 4 www.google.com"
search mydomain.local
nameserver 127.0.0.11
options ndots:0
PING www.google.com (172.217.23.4): 56 data bytes
64 bytes from 172.217.23.4: seq=0 ttl=118 time=8.201 ms
--- www.google.com ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 8.201/8.201/8.201 ms
So moving the DNS server to another host and adding "dns": ["192.168.177.2"] to my /etc/docker/daemon.json fixed it for me.
Maybe someone else can explain the root cause of the problem with running the DNS server on the same host as Docker.
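For reference, a sketch of that daemon.json change (standard path; the IP is the dnsmasq forwarder from this answer; merge rather than overwrite if the file already has other keys):
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "dns": ["192.168.177.2"]
}
EOF
sudo systemctl restart docker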
First, make sure your container is connected to a custom bridge network. I suppose that by default in a custom network, DNS requests inside the container are sent to 127.0.0.11#53 and forwarded to the DNS server of the host machine.
Second, check iptables -L to see whether the docker-related rules are present. If they are not, it is probably because iptables was restarted/reset; you'll need to restart the Docker daemon to re-add the rules and make DNS request forwarding work again.
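A quick way to check the second point (DOCKER is the standard chain name the daemon creates):
sudo iptables -L DOCKER -n    # fails if Docker's chains have been deleted
sudo systemctl restart docker # re-creates the rules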
I had the same problem; in my case the cause was the host machine's hostname. The hostnamectl output looked okay, but the problem was solved by a simple reboot. Before the reboot, cat /etc/hosts looked like this:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 localhost HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 localhost HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
and after the reboot I got this result:
# The following lines are desirable for IPv4 capable hosts
127.0.0.1 hostnameIHaveSetuped HostnameSetupByISP
127.0.0.1 localhost.localdomain localhost
127.0.0.1 localhost4.localdomain4 localhost4
# The following lines are desirable for IPv6 capable hosts
::1 hostnameIHaveSetuped HostnameSetupByISP
::1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
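If you hit something similar, comparing the configured hostname against what /etc/hosts maps to loopback may save you the reboot guesswork (a hedged sketch; both commands are standard):
hostnamectl --static              # the static hostname you configured
grep '^127\.0\.0\.1' /etc/hosts   # what the hosts file maps to loopback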

Docker: Connection from inside the container to localhost:port Refused

I'm trying to ensure connectivity between the different containers and the localhost address (127.0.0.1) with port 8040 (my web application container runs on this port).
root@a70b20fbda00:~# curl -v http://127.0.0.1
* Rebuilt URL to: http://127.0.0.1/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 80 failed: Connection refused
* Failed to connect to 127.0.0.1 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
This is what I get when I try to connect to localhost from inside the container:
root@a70b20fbda00:~# curl -v http://127.0.0.1:8040
* Rebuilt URL to: http://127.0.0.1:8040/
* Hostname was NOT found in DNS cache
* Trying 127.0.0.1...
* connect to 127.0.0.1 port 8040 failed: Connection refused
* Failed to connect to 127.0.0.1 port 8040: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 127.0.0.1 port 8040: Connection refused
About iptables in each container:
root@a70b20fbda00:~# iptables
bash: iptables: command not found
Connection between the containers is good:
root@635114ca18b7:~# ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1) 56(84) bytes of data.
64 bytes from 172.17.0.1: icmp_seq=1 ttl=64 time=0.061 ms
64 bytes from 172.17.0.1: icmp_seq=2 ttl=64 time=0.253 ms
--- 172.17.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
root@635114ca18b7:~# ping 127.0.0.1
PING 127.0.0.1 (127.0.0.1) 56(84) bytes of data.
64 bytes from 127.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms
64 bytes from 127.0.0.1: icmp_seq=2 ttl=64 time=0.100 ms
--- 127.0.0.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
root@635114ca18b7:~# ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3) 56(84) bytes of data.
64 bytes from 172.17.0.3: icmp_seq=1 ttl=64 time=0.149 ms
64 bytes from 172.17.0.3: icmp_seq=2 ttl=64 time=0.180 ms
--- 172.17.0.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.149/0.164/0.180/0.020 ms
Pinging 127.0.0.1:8040:
root@635114ca18b7:~# ping 127.0.0.1:8040
ping: unknown host 127.0.0.1:8040
What do I need to do in this case?
The overall picture: there are two containers.
The first container runs a Tomcat server that serves my web application, and it works perfectly.
The second container needs to connect to the web application at the URL http://127.0.0.1:8040/my_app.
You will have to use docker run --network host IMAGE:TAG to achieve the desired connection.
Further reading here.
Example:
docker run --network host --name CONTAINER1 IMAGE:tag
docker run --network host --name CONTAINER2 IMAGE:tag
Inside CONTAINER2 you will be able to reach CONTAINER1's service, since both share the host's network stack; address it via localhost and the service's port.
Based on the information provided, it looks like there are two containers. If these two containers are started by Docker without --net=host, then each of them gets a different IP address. Say your first container got 172.17.0.2 and the second one 172.17.0.3.
In this scenario each container gets its own networking stack, so 127.0.0.1 refers to the container's own loopback, not the other container's.
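To make that concrete (names and the 172.17.0.2 address are from the example above; port 8040 is from the question):
docker exec CONTAINER2 curl http://127.0.0.1:8040   # refused: this is CONTAINER2's own loopback
docker exec CONTAINER2 curl http://172.17.0.2:8040  # reaches the first container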
As pointed out by @kakabali, it's possible to run the containers with the host network, sharing the networking stack of the host.
One of the other options is to use the actual IP address of the first container in the second one.
second-container# curl http://172.17.0.2:8040
Or another option is to run the second container as the sidekick/sidecar container sharing the networking stack of the first one.
docker run --net=container:${ID_OF_FIRST_CONTAINER} ${IMAGE_SECOND}:${IMAGE_TAG_SECOND}
Or if you use links correctly:
docker run --name web -itd ${IMAGE_FIRST}:${TAG_FIRST}
docker run --link web -itd ${IMAGE_SECOND}:${TAG_SECOND}
Note: docker --link feature is deprecated.
Another option is to use container management platforms which take care of service discovery for you automatically.
PS: You cannot ping an IP address on a specific port; ping uses ICMP, which has no concept of ports. For more info, click here.
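Since --link is deprecated, a user-defined bridge network gives the same name-based access (a sketch reusing the placeholder images from above):
docker network create app-net
docker run --name web -itd --network app-net ${IMAGE_FIRST}:${TAG_FIRST}
docker run -itd --network app-net ${IMAGE_SECOND}:${TAG_SECOND}
# inside the second container, the first is reachable as http://web:8040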

Docker inside Linux VM cannot connect to web application

My setup is the following:
Host: Win10
Guest: Ubuntu 15.10 (clean install, only docker and nodejs are added)
Base image: https://hub.docker.com/r/microsoft/aspnet/ 1.0.0-beta8-coreclr
Inside the guest I installed Docker and created an image (adding a sample web app, generated with yeoman, on top of the base image above). When I run the image in a container, I can successfully ping the container's IP (e.g. 172.17.0.2) from the Linux guest:
$ sudo docker run -d -p 80:5000 --name web myapp
$ sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' "web"
172.17.0.2
$ ping 172.17.0.2
PING 172.17.0.2 (172.17.0.2) 56(84) bytes of data.
64 bytes from 172.17.0.2: icmp_seq=1 ttl=64 time=0.060 ms
1 packets transmitted, 1 received, 0% packet loss, time 999ms
$ curl 172.17.0.2:80
curl: (7) Failed to connect to 172.17.0.2 port 80: Connection refused
I can also attach to the container and execute commands like ping. However, from the Linux machine (the guest in VirtualBox, the host for Docker) I cannot access the web app hosted inside the container, as seen above. I tried several approaches, like mapping to the host IP addresses, but none of them worked. Does anyone have ideas where to start? Could the issue be that Docker is installed inside a VirtualBox machine?
Thank you in advance.
Edit: Here are the logs from the container:
Could not open /etc/lsb_release. OS version will default to the empty string.
Hosting environment: Production
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.
Your command tells Docker to essentially proxy requests from port 80 of the Linux guest to port 5000 of the container. So the curl command you tried doesn't work because you're hitting port 80 on the container, while the container's service listens on port 5000.
To connect to the container directly, you would use (on the Linux guest):
curl 172.17.0.2:5000
To access via the published port on the Linux guest (from your host):
curl (Linux guest IP)
Or (from the Linux guest):
curl localhost
Edit: This will also prove to be problematic:
Now listening on: http://localhost:5000
You'll want your app inside the container to bind to all interfaces (0.0.0.0) so it listens on the container's assigned IP; bound only to localhost, it won't be accessible.
You might find this example useful:
https://github.com/aspnet/Home/blob/dev/samples/1.0.0-beta8/HelloWeb/project.json
This line specifies that the app binds to all interfaces (using "*") on port 5004:
"kestrel": "Microsoft.AspNet.Hosting --server Microsoft.AspNet.Server.Kestrel --server.urls http://*:5004"
You'll need similar configuration.
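On modern ASP.NET Core images the same binding is usually set through an environment variable instead of project.json (an assumption on my part; ASPNETCORE_URLS did not exist in the beta8 tooling used here):
# Hedged sketch: bind Kestrel to all interfaces via the environment
docker run -d -p 80:5000 -e ASPNETCORE_URLS=http://0.0.0.0:5000 --name web myapp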
