I have six containers running in Docker Swarm: Kafka+Zookeeper, MongoDB, A, B, C and Interface. Interface is the main access point from the public - it is the only container that publishes a port (5683). The Interface container connects to A, B and C during startup. I am using a docker-compose file plus docker stack deploy, and each service has a name which the Interface uses as the hostname. Everything starts successfully and works fine. After some time (20 minutes, an hour, ...) I am no longer able to make requests to the Interface. The Interface receives my requests, but the application has lost its connection to service A, B, C, or to all of them. If I restart the Interface, it is able to reconnect to services A, B and C.
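For context, a minimal sketch of what such a stack file can look like (the image names, the environment variable and the exact layout are placeholders, not the real configuration):

version: "3.7"
services:
  interface:
    image: my-interface            # placeholder image name
    ports:
      - "5683:5683"                # the only published port in the stack
    environment:
      - SERVICE_A_HOST=a           # service names double as hostnames on the overlay network
  a:
    image: my-service-a            # placeholder; b, c, kafka, zookeeper and mongodb are defined
                                   # the same way, with no published ports

The stack is deployed with docker stack deploy -c docker-compose.yml <stack-name>.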
At first I thought it was an application problem, so I exposed two new ports on each service (Interface, A, B, C) and attached a profiler and debugger to them. The applications are running properly: no leaks, no blocked threads, everything working normally and waiting for connections. The debugger shows that when I make a request to the Interface and the Interface tries to call service A, a "Connection reset by peer" exception is thrown.
During this debugging I found out something interesting. I had attached the debugger to the Interface when the services started, and the debugger was also disconnected after some time. Moreover, I was not able to reconnect it until I made a request to the container's application - the problem was a failed handshake.
Another interesting thing I found was that at that point I could not make requests to the Interface either. So I used Wireshark to see what was going on: the SYN/ACK exchange was fine, but then the application posted some data and the Interface responded with FIN,ACK. I assume the same thing happens when the Interface tries to call service A and the connection gets a FIN. The codebase of Interface, A, B and C is the same as far as the Netty server is concerned.
Finally, I don't think it is an application issue. Why? I tried deploying the containers not as Swarm services: I ran each container separately, published its ports, and set the service endpoints to localhost (no overlay network). And it works - the containers run without problems. I also didn't mention at the beginning that the Java applications (Interface, A, B, C) run without problems when they run as standalone applications, i.e. not in Docker.
Could you please help me figure out what the issue could be? Why is Docker closing sockets when the overlay network is used?
I am using the newest Docker; I have also tried older versions.
Finally, I was able to solve the problem.
Once more, what was happening: the Interface opens permanent TCP connections to A, B and C. When you run the services A, B and C as standalone Java applications, everything works. When we dockerize them and run them in Swarm, it worked for only a few minutes. The strange part was that the connection between the Interface and another service was interrupted at the moment a client made a request to the Interface.
After many, many unsuccessful tests and after debugging each container, I tried running each Docker container separately with mapped ports, and specified localhost as the endpoint (each container published its ports and the Interface connected to localhost). Funny thing: it worked. When you run containers like this, a different network driver is used - the bridge driver. If you run them in Swarm, the overlay network driver is used.
So it had to be something in the Docker network, not in the application itself. The next step was a tcpdump from each container after a couple of minutes, when it should have stopped working. It was very interesting:
Client -> Interface (OK, request accepted)
Interface -> A (forwards the request because it belongs to A)
Interface -> A [POST]
A -> Interface [RESET]
A was resetting the open TCP connection after a couple of minutes without traffic. Why?
Docker uses IP Virtual Server (IPVS), and IPVS maintains its own connection table. The default timeout for CLOSE_WAIT connections in the IPVS table is 60 seconds. Hence, when the server sends something after 60 seconds of silence, the IPVS connection entry is no longer available, the packet looks invalid for a new TCP session, and it gets an RST. On the client side, the connection remains in FIN_WAIT2 forever because the app still has the socket open; the kernel's fin_wait timer kicks in only for orphaned TCP sockets.
This is what I read about it and how I understand it. I am not sure my explanation of the problem is correct, but based on these assumptions I implemented a ping-pong between the Interface and the A, B, C services that fires whenever there has been no communication for a little less than 60 seconds. And it's working.
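Since the services are Netty-based, the idle-triggered ping can be sketched roughly like this (an illustration of the idea, not the actual code; the 30-second threshold and the raw "PING" payload are assumptions, and the real message format depends on the protocol between the services):

import java.nio.charset.StandardCharsets;
import java.util.concurrent.TimeUnit;

import io.netty.buffer.Unpooled;
import io.netty.channel.Channel;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelPipeline;
import io.netty.handler.timeout.IdleStateEvent;
import io.netty.handler.timeout.IdleStateHandler;

public class KeepAliveInitializer extends ChannelInitializer<Channel> {
    @Override
    protected void initChannel(Channel ch) {
        ChannelPipeline pipeline = ch.pipeline();
        // Fire an IdleStateEvent when nothing has been written for 30 seconds,
        // i.e. well under the 60-second IPVS timeout described above.
        pipeline.addLast(new IdleStateHandler(0, 30, 0, TimeUnit.SECONDS));
        pipeline.addLast(new ChannelInboundHandlerAdapter() {
            @Override
            public void userEventTriggered(ChannelHandlerContext ctx, Object evt) throws Exception {
                if (evt instanceof IdleStateEvent) {
                    // Keep the long-lived connection "fresh" from IPVS's point of view;
                    // the other side simply answers with a pong.
                    ctx.writeAndFlush(Unpooled.copiedBuffer("PING", StandardCharsets.UTF_8));
                } else {
                    super.userEventTriggered(ctx, evt);
                }
            }
        });
        // ... the existing protocol handlers of Interface/A/B/C would follow here
    }
}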
Got the same issue.
Specified
endpoint_mode: dnsrr
in the deploy properties of the service that plays the "server" role, and it works just fine.
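For reference, a minimal compose sketch of where that setting lives (service and image names are placeholders):

services:
  service-a:                   # the "server" side of the long-lived connection
    image: my-service-a        # placeholder image name
    deploy:
      endpoint_mode: dnsrr     # DNS round-robin: clients resolve the task IPs directly
                               # instead of going through the IPVS virtual IP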
https://forums.docker.com/t/tcp-timeout-that-occurs-only-in-docker-swarm-not-simple-docker-run/58179
Related
I want to have the following setup:
3 Couchbase nodes, each running on a separate container, all in the same cluster
Python application running in another container (querying, inserting, deleting data from the Couchbase cluster)
What I managed to do:
Set up a cluster, bucket, query the bucket via UI (accessed by localhost:8091)
What I didn't manage to do:
Create a connection between a Python application (which will eventually be Dockerized as well; for now, for the sake of simplicity, let's treat it as local) and the (already working) cluster. Unfortunately, I cannot access it via the Docker containers' IPs on port 8091, nor via localhost. The Couchbase documentation is either severely lacking here, or I just don't understand it. I even tried to use the setting-alternate-address option, but without much success (maybe I used it wrongly, so if you have any "how-to's" explaining the process, I'd still be grateful).
The connection works if there is one node, but throws Timeout if I set up 3 nodes.
I would really appreciate any tips leading to solving this problem.
EDIT: Adding code and error message:
# imports as used by the Couchbase Python SDK 4.x (the version implied by the error below)
import os

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

connection_string = "couchbase://localhost"
cluster = Cluster.connect(connection_string, ClusterOptions(PasswordAuthenticator(os.getenv("LOGIN"), os.getenv("PASSWORD"))))
# following a successful authentication, a bucket can be opened.
# access a bucket in that cluster
bucket = cluster.bucket('travel-sample')
coll = bucket.default_collection()
result = coll.get('airline_10')
print(result.content_as[dict])
Error message:
couchbase.exceptions.UnAmbiguousTimeoutException: <ec=14, category=couchbase.common, message=unambiguous_timeout, context=KeyValueErrorContext:{'key': 'airline_10', 'bucket_name': 'travel-sample', 'scope_name': '_default', 'collection_name': '_default', 'opaque': 0}, C Source=C:\Jenkins\workspace\python\sdk\python-scripted-build-pipeline\py-client\src\kv_ops.cxx:209>
Couchbase SDKs need to be able to connect to every node on the cluster.
If you are running an app outside of the Docker host, it cannot connect to every node (you can't expose every node on the same port).
This is exactly why it works fine with one node but not with multiple nodes (more details in the documentation).
If you run the Python app inside of a container that runs in the Docker host, it should connect just fine (or stick to a single node for development - which is mostly fine if you're not testing something specific to clustering/failover/replication).
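As an illustration, a minimal sketch of that container-based approach, assuming the Couchbase nodes run on a user-defined network called couchbase-net and are named cb1, cb2 and cb3 (all names and credentials here are placeholders):

docker network create couchbase-net                          # once
docker run -d --name cb1 --network couchbase-net couchbase   # likewise cb2 and cb3
docker run --rm --network couchbase-net \
    -e LOGIN=Administrator -e PASSWORD=password \
    my-python-app                                            # placeholder image for the app

Inside the app, the connection string then becomes couchbase://cb1 (or couchbase://cb1,cb2,cb3), and the SDK can reach every node by its container name.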
I created the 3 necessary containers for NuoDB using the NuoDB instructions.
My Docker environment runs on a virtual Ubuntu Linux environment (VMware).
Afterwards I tried to access the database from a console application (C# .NET Framework 4.8) using ADO.NET. For this I used the NuGet package "NuoDb.Data.Client" from Nuget.org.
Unfortunately the connection does not work.
If I choose port 8888, my thread hangs indefinitely when I open the connection.
For this reason I tried to open port 48004 to reach the admin container.
That way I get an error message:
"System.IO.IOException: A connection attempt failed because the remote peer did not respond properly after a certain period of time, or the established connection was faulty because the connected host did not respond 172.18.0.4:48006, 172.18.0.4"
Interestingly, if I specify a wrong database name, it throws an error:
No suitable transaction engine found for database.
This tells me that it connects to the admin container.
Does anyone have any idea what I am doing wrong?
The connection works when I establish it with the tool "DbVisualizer".
That tool accesses the transaction engine directly; for this reason I have opened port 48006 on the corresponding container.
But even with these settings it does not work with my console application.
Thanks in advance.
Port 8888 is the REST port that you would use from the administration tool such as nuocmd: it allows you to start/stop engines and perform other administrative commands. You would not use this port for SQL clients (as you discovered). The correct port to use for SQL clients is 48004.
Port 48004 allows a SQL client to connect to a "load balancer" facility that will redirect it to one of the running TEs. It's not the case that the SQL traffic is routed through this load balancer: instead, the load balancer replies to the client with the address/port of one of the TEs then the client will disconnect from the load balancer and re-connect directly to the TE at that address/port. For this reason, all the ports that TEs are listening on must also be open to the client, not just 48004.
You did suggest that you opened these ports, but it's not clear from your post whether you followed all the instructions on the doc page you listed. In particular, were you able to connect to the database using the nuosql command line tool as described here? I strongly recommend that you make sure simple access like this works correctly before you attempt more sophisticated client access such as ADO.NET.
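For example, something along these lines should work from the machine where the client runs before you try ADO.NET (the database name, credentials and ports below are the standard quick-start values and are assumptions, not taken from your setup):

nuosql test@localhost:48004 --user dba --password goalie

If the containers' ports are published to the host, that means publishing both 48004 (the admin/load-balancer port) and the TE port (48006 in your case), because the client reconnects directly to the TE after the redirect.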
I have a series of microservices that I have been testing. Originally they were running on Service Fabric; however, I have switched to Consul, Fabio and Nomad, which I like better.
In development on my machine things work well; however, I am running into some issues actually getting Fabio to work in a cluster.
I have a cluster of 5 nodes each running Consul, Fabio, Nomad.
Each service gets a dynamic port at runtime and successfully registers itself.
On the node which the service is running Fabio correctly forwards traffic.
However, if the same Fabio URL is used on a different node, traffic is forwarded to the correct node/port, but that port is closed there, so the connection doesn't work.
For instance, if ServiceA is running on MachineA on port 1234, then http://MachineA:9999/ServiceA works correctly.
However, http://MachineB:9999/ServiceA fails when the request is forwarded to MachineA on port 1234.
I imagine a solution would be to add firewall rules; however, that requires all the services to run as Admin, which I don't want.
Is there a way to support this through Fabio?
I've tried a simple single-node swarm, just like in part 3 of the Docker tutorial, and I've found that if I use curl I jump between the two replicas, but if I use Chrome then, once I open the page, all following requests are handled by the same replica. I'm sure I'm actually hitting it only once per request, because the counter increases only by 1.
What is happening? Is it some kind of feature of Docker Swarm load balancing? If so, how does it work? No specific request headers are sent to the server, so how would the load balancer recognize me? It can't be the IP, because if I use incognito mode I'm handled by a different replica, and I stick to it as long as I stay in incognito.
It's not a Swarm thing, it's a Chrome thing. curl acts as you'd expect: each command opens a new TCP connection, which shows up as a new connection going through the Swarm VIP load balancer.
Chrome (and other browsers) have lots of ways to keep TCP connections open for future requests (HTTP keep-alive, etc.). That is why it stays connected to the same container: the connection is persistent through the LB to the replica, and the LB only shifts to the "next in the round-robin pool" for a new connection.
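You can see the difference with curl itself; a quick sketch, assuming the tutorial's published port 4000 (adjust to whatever port your service publishes):

# two separate invocations - two separate connections, typically different replicas
curl http://localhost:4000/
curl http://localhost:4000/

# one invocation with two URLs - curl reuses the connection if the server allows
# keep-alive, so both requests land on the same replica
curl http://localhost:4000/ http://localhost:4000/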
I've developed a Grails application and I want my coworkers to be able to test it. They are on my network so I figure they can access it by using my IP address and the port number (8080). I've tried running it according to the steps laid out here and here to no avail.
I noticed that whenever I run the program, even when I follow those instructions, it says:
Grails application running at http://localhost:8080 in environment: development
Basic networking stuff here.
Even though the startup message says the app is running on 127.0.0.1 (localhost) and some port, that port is usually available on all the interfaces of the machine.
If you run netstat -plant you will see which ports are open on the machine, and on which addresses.
Basically, take whatever ipconfig (or ifconfig under Linux) reports as your internal interface address - something like 192.168.1.x.
The app is then available at http://192.168.1.x:8080.
If you can't access it from other machines on the network, start by trying to ping {your machine ip}.
It sounds like network security is stopping one machine from accessing another.
Or, even more likely, your good old MS firewall - try temporarily stopping the security software on your desktop.
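If netstat shows the app bound only to 127.0.0.1 rather than 0.0.0.0, you can make it listen on all interfaces. A sketch, assuming a Grails 3+ (Spring Boot based) application - older Grails versions use different settings:

# grails-app/conf/application.yml
server:
    address: 0.0.0.0
    port: 8080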
It's not clear whether you can access the app yourself on your own machine. It should be available at:
http://localhost:8080/appname
Your co-workers should be able to access the app by changing localhost to your computer name:
http://mycomputername:8080/appname