I am trying to run a Docker container that takes CONSUL_URL as an environment variable. For now, I have set up Consul on my localhost and I run it like this:
consul agent -dev -bind=127.0.0.1 -ui-dir /usr/local/Cellar/consul/0.7.0/share/consul/web-ui
I am able to access the Consul UI this way when I go to http://localhost:8500. But now, when I run the Docker container with
docker run -e CONSUL_URL=127.0.0.1:8500 -p 8500:8500 b321825a6c7a
it gives me the following error:
2016/10/05 09:38:38 [ERR] (view) "key_or_default(foo.appconfig.properties/logger.name, "foo_PERF_LOG")" store key: error fetching: Get http://127.0.0.1:8500/v1/kv/foo.appconfig.properties/logger.name?stale=&wait=60000ms: dial tcp 127.0.0.1:8500: getsockopt: connection refused
2016/10/05 09:38:38 [ERR] (runner) watcher reported error: store key: error fetching: Get http://127.0.0.1:8500/v1/kv/foo.appconfig.properties/logger.name?stale=&wait=60000ms: dial tcp 127.0.0.1:8500: getsockopt: connection refused
Consul Template returned errors:
store key: error fetching: Get http://127.0.0.1:8500/v1/kv/foo.appconfig.properties/logger.name?stale=&wait=60000ms: dial tcp 127.0.0.1:8500: getsockopt: connection refused
executing: 'myscript.sh run'
Why am I not able to connect to the Consul URL? I also tried changing the localhost URL to the IP address of my machine, but I get the same error with that too. I have done the port mapping, so I would expect it to work. Where am I going wrong?
You have to use the --net=host flag.
With this flag the container shares the host's network stack, so the exposed ports are bound on all interfaces of the host OS.
docker run --net=host -e CONSUL_URL=127.0.0.1:8500 b321825a6c7a
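Note that --net=host shares the host's network namespace only on Linux. The Homebrew Cellar path above suggests Consul is running on macOS; there, an alternative (a sketch assuming a Docker Desktop version that provides the host.docker.internal name) would be:
docker run -e CONSUL_URL=host.docker.internal:8500 b321825a6c7a
Either way, the root cause is that 127.0.0.1 inside a container is the container's own loopback, not the machine running Consul, and -p 8500:8500 publishes a container port outward rather than making a host port visible inside.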
I'm getting familiar with Docker thanks to my Synology 1515+ NAS.
I have created a SQL Server 2019 container called sqlserver4 that listens on port 1433:
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My_Password" -p 1433:1433 --name sqlserver4 -d mcr.microsoft.com/mssql/server:2019-latest
And I have then created a second one called sqlserver5 that listens on port 1533:
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My_Password" -p 1533:1533 --name sqlserver5 -d mcr.microsoft.com/mssql/server:2019-latest
All good; my two servers are both reachable at the IP address 192.168.1.44.
I can connect through SSMS to the first one, sqlserver4. But when I try to connect to the second one, sqlserver5, I receive the error:
A connection was successfully established with the server, but then an
error occurred during the pre-login handshake. (provider: TCP
Provider, error: 0 - The specified network name is no longer
available.) (Microsoft SQL Server, Error: 64)
It's easy to see where the problem is: even if they are on different ports, 1433 and 1533, the IP address is always the same, 192.168.1.44.
How can I setup a different IP Address for each container?
EDIT:
@David Maze suggested that I stop sqlserver4 and try to connect to sqlserver5. My assumption was wrong: I cannot connect to sqlserver5 either. But, fun fact, the error changes:
A connection was successfully established with the server, but then an
error occurred during the pre-login handshake. (provider: TCP
Provider, error: 0 - An existing connection was forcibly closed by the
remote host.) (Microsoft SQL Server, Error: 10054)
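One observation that is not from the original thread: the SQL Server process inside the container listens on 1433 regardless of what you publish, so -p 1533:1533 forwards host port 1533 to a container port where nothing is listening. Mapping host 1533 to container 1433 should let both containers coexist on the same IP address:
sudo docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=My_Password" -p 1533:1433 --name sqlserver5 -d mcr.microsoft.com/mssql/server:2019-latest
In SSMS you would then connect to 192.168.1.44,1533 (SSMS separates host and port with a comma, not a colon).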
I start auditbeat with:
docker run --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" docker.elastic.co/beats/auditbeat:7.8.1 setup -E setup.kibana.host=localhost:5601 -E output.elasticsearch.hosts=["127.0.0.1:9300"]
but I get the error Exiting: couldn't connect to any of the configured Elasticsearch hosts. Errors: [error connecting to Elasticsearch at http://127.0.0.1:9300: Get http://127.0.0.1:9300: dial tcp 127.0.0.1:9300: connect: connection refused]. I also tried using localhost in output.elasticsearch.hosts. When I send a request with curl http://127.0.0.1:9200 I get a successful response from Elasticsearch.
Also, Elasticsearch itself is running as a Docker container.
You need to use the HTTP port 9200 (the same one you curl with), not the transport port 9300:
-Eoutput.elasticsearch.hosts=["host.docker.internal:9200"]
                               ^
                               |
                           change this
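Put together, the corrected command might look like this (a sketch: host.docker.internal assumes Docker Desktop, and quoting the hosts value keeps the shell from touching the brackets; on plain Linux you would substitute the host's routable IP, or on Docker 20.10+ add --add-host=host.docker.internal:host-gateway):
docker run --cap-add="AUDIT_CONTROL" --cap-add="AUDIT_READ" docker.elastic.co/beats/auditbeat:7.8.1 setup -E setup.kibana.host=host.docker.internal:5601 -E 'output.elasticsearch.hosts=["host.docker.internal:9200"]'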
I'm having problems getting my ssh tunnel working for my container in a Docker swarm cluster.
ssh connection on my local machine:
ssh -L 7180:test.XXX:7180 user@XXX
In my Dockerfile on the remote machine:
EXPOSE 7180
Container start:
docker -H test:2379 --tlsverify run -d -p 7180:7180 --net=my-net
I tried to connect in Firefox via:
localhost:7180
Unfortunately the connection gets refused on the remote machine:
channel 3: open failed: connect failed: Connection refused
"docker container ls" prints following for the ports:
xxx:7180->7180/tcp
Inside my container "netstat -ntlp | grep LISTEN" prints:
tcp 0 0 0.0.0.0:7180 0.0.0.0:* LISTEN -
I'm new to this, but from everything I've read so far this should actually work. I'm using "--net=my-net" because I want to set up my own network later. I had the same issue with "--net=host". What am I doing wrong?
The ssh command should be:
ssh -L 7180:127.0.0.1:7180 user@XXX
And then from your browser, you would go to:
http://127.0.0.1:7180
I've avoided using "localhost" because some machines map this to IPv6 even if you don't have IPv6 configured.
When testing this tunnel, make sure your application is listening on the remote server by doing an ssh to that server and running a curl command directly on the server against 127.0.0.1:7180. If it doesn't work there, repeat your debugging with netstat inside the container and verify the port is published in the "docker ps" output.
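For example, a sketch of those checks, reusing the placeholder host from the question:
ssh user@XXX                                  # log in to the remote machine
curl -v http://127.0.0.1:7180/                # must answer here before the tunnel can work
docker ps --format '{{.Names}}: {{.Ports}}'   # the published ports should include 7180->7180/tcp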
I got it working with
ssh -D localhost:7180 -f -C -q -N user@XXX
and using
xxx:7180
in my browser (instead of localhost).
localhost and --net=host did not work for me with ssh -L.
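Note that -D opens a dynamic SOCKS proxy rather than a plain port forward, so the browser has to be configured to use localhost:7180 as a SOCKS proxy for xxx:7180 to resolve through the tunnel. If you want to check the tunnel without touching browser settings, curl can use it directly (my addition, not from the original answer):
curl --socks5-hostname localhost:7180 http://xxx:7180/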
I have a kubernetes cluster with three nodes: 10.9.84.149, 10.9.105.90 and 10.9.84.149. When my application tries to execute a command inside some pod:
kubectl exec -it <podName>
it sometimes gets an error:
Error from server: error dialing backend: dial tcp 10.9.84.149:10250: getsockopt: connection refused
As far as I could see, everything was fine with the cluster: all kube-system services and pods were running well. Besides, the error only appeared intermittently.
Can anybody help me with this issue?
I got the same error as the one below:
Error from server: Get https://192.168.100.102:10250/containerLogs/default/kubia-n8nv9/kubia: dial tcp 192.168.100.102:10250: connect: no route to host
Disabling the firewall on all nodes was my fix. I figured out that the firewall on my worker nodes was not disabled. I ran the commands below to fix the problem:
systemctl disable firewalld && systemctl stop firewalld
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1...
Removed symlink /etc/systemd/system/basic.target.wants/firewalld.service.
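Disabling the firewall outright is heavy-handed; if you would rather keep firewalld running, opening just the kubelet port on every node should be enough (an untested sketch):
firewall-cmd --permanent --add-port=10250/tcp
firewall-cmd --reload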
Looks like your kubelet process is not running, or keeps restarting.
ss -tnpl |grep 10250
LISTEN 0 128 :::10250 :::* users:(("kubelet",pid=1102,fd=21))
Check that the kubelet process is running.
If it's running, see when it started.
Look at the /var/log/messages file for any issues with the node.
Make sure you don't have a firewall blocking the traffic; a sketch of these checks follows.
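On a systemd-based node those checks could look like this (a sketch; the log file name varies by distro):
systemctl status kubelet                      # active? since when?
journalctl -u kubelet --since "1 hour ago"    # recent restarts and errors
sudo grep -i kubelet /var/log/messages        # RHEL/CentOS; Ubuntu logs to /var/log/syslog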
Update
I believe the culprit is the master, which does not appear to be listening on port 7946. netstat shows that 7946 is listening on the nodes, but not on the master. When I check the syslogs for the nodes, I see the following error:
level=error msg="Failed to join memberlist [10.0.0.12] on retry: 1 error(s) occurred:\n\n* Failed to join 10.0.0.12: dial tcp 10.0.0.12:7946: getsockopt: connection refused"
Original Post
I am running a three node Swarm Mode cluster in AWS; one master and two workers. This is swarm mode, not to be confused with the pre-1.12 Docker Swarm.
I created all of the machines with docker-machine. Each machine is running Ubuntu 15.10 with Docker 1.12.3.
Linux swarm-master-01 4.2.0-42-generic #49-Ubuntu SMP Tue Jun 28 21:26:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Using the master node I have created a service with the following
docker service create --replicas 1 --name myapp -p 3000 myapp
When I run docker service ps myapp I get the following output
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
02awst8p9pezgpkfzqgz8z79t myapp.1 myapp:latest swarm-node-01 Running Running 19 minutes ago
The running task is deployed to swarm-node-01.
I checked the auto-selected port which was published publicly
$ docker service inspect myapp | jq .[].Endpoint.Ports[].PublishedPort
30000
According to the documentation:
External components, such as cloud load balancers, can access the service on the PublishedPort of any node in the cluster whether or not the node is currently running the task for the service. All nodes in the swarm route ingress connections to a running task instance.
But when I try to curl the nodes that do not have the task running, I get connection refused.
$ curl $(docker-machine ip swarm-node-01):30000/stats
{"uptime":"2016-11-09T14:48:35Z","requestCount":7,"statuses":{"200":7},"pid":1,"open_db_conns":0}
$ curl $(docker-machine ip swarm-node-02):30000/stats
curl: (7) Failed to connect to [the IP] port 30000: Connection refused
note: I scrubbed the IP of node-02
My Troubleshooting:
The nodes are both properly connected to the swarm (the check is sketched after this list).
Scaling the service up to 5 (which inherently deploys the task to every node) makes curl work on every node, because the task is then running on each of them.
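The first point can be confirmed from the manager with the standard check (not in the original post):
docker node ls
Both workers should show STATUS Ready and AVAILABILITY Active.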
UPDATE 1
I initialized the swarm with
docker swarm init --advertise-addr 10.0.0.12:2377 --listen-addr 10.0.0.12:2377
I checked the syslogs from the nodes and I'm seeing the following errors
level=error msg="Failed to join memberlist [10.0.0.12] on retry: 1 error(s) occurred:\n\n* Failed to join 10.0.0.12: dial tcp 10.0.0.12:7946: getsockopt: connection refused"
I checked to see if the ingress port was listening and it doesn't seem to be
ubuntu@swarm-master-01:~$ sudo lsof -i :7946
ubuntu@swarm-master-01:~$ cat < /dev/tcp/10.0.0.12/7946
-bash: connect: Connection refused
-bash: /dev/tcp/10.0.0.12/7946: Connection refused
ubuntu@swarm-master-01:~$ cat < /dev/tcp/0.0.0.0/7946
-bash: connect: Connection refused
-bash: /dev/tcp/0.0.0.0/7946: Connection refused
I was able to get around the issue for now, but I don't know what initially caused it. The overlay network port (7946) wasn't listening on swarm-master-01; I figured this out with netstat -nlt. Searching the syslog, I found these errors related to the port:
Nov 8 20:28:20 ubuntu docker[23092]: time="2016-11-08T20:28:20.171385360Z" level=warning msg="2016/11/08 20:28:20 [ERR] memberlist: Failed TCP fallback ping: read tcp 10.0.0.85:54016->10.0.0.13:7946: i/o timeout"
Nov 9 18:26:17 swarm-node-01 docker[714]: time="2016-11-09T18:26:17.573441271Z" level=warning msg="2016/11/09 18:26:17 [ERR] memberlist: Failed to send indirect ping: write udp [::]:7946->10.0.0.38:7946: use of closed network connection"
For some reason docker refused to open this port and listen any more. Here is what I did (albeit undesirable) to circumvent the issue:
Created another node with docker-machine called swarm-master-02
Joined swarm-master-02 to the cluster as a master
Demoted master-01, which made master-02 the leader
Restarted the Docker daemon on each node (might not have been necessary); these steps are sketched in commands below
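In commands, those steps look roughly like this (a sketch; treat the token as a placeholder and run each command on the machine named in the comment):
docker swarm join-token manager                            # on the current leader: prints the join command
docker swarm join --token <manager-token> 10.0.0.12:2377   # on swarm-master-02
docker node demote swarm-master-01                         # on swarm-master-02, once it is a manager
sudo systemctl restart docker                              # on each node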
Now all of the machines are working as expected except for swarm-master-01. One task is running on swarm-node-01, and curl works against all nodes by forwarding the traffic to the proper container on the proper node. However, swarm-master-01 refused to listen on the overlay network, and curl did not work against that node. I was only able to fix swarm-master-01 by completely removing it from the cluster, restarting the Docker daemon, and joining it again as a master. Now 7946 is listening on that machine.
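For reference, these are the ports swarm mode needs open between all nodes; a ufw sketch for the Ubuntu hosts above (my addition, not part of the original fix):
sudo ufw allow 2377/tcp    # cluster management (swarm join)
sudo ufw allow 7946/tcp    # node discovery / memberlist (the port failing above)
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp    # overlay network data path (VXLAN)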