I want to check that I have properly implemented HttpClientFactory. I have a desktop application that pings my server every 20 seconds. When I open a command prompt and run "netstat -ano | findstr {My server IP}", I can see there are always 2 or 3 connections. As time goes on and I continue to check, the ports slowly change (they go up in their port numbers, and older ports disappear), but there are never more than 2 or 3 connections. Does this mean that the old ports are being released and I am not at risk of port exhaustion? Thanks.
As mentioned above, I am going to begin selling my application very soon and need to be sure that I am not going to exhaust my clients' ports and hinder their networks.
A RabbitMQ server is running locally on Windows 10, and Docker is running on the same machine.
I'm running a device simulator in Docker, and it has to talk to the local RabbitMQ server through MQTT.
It had been working, but one day it stopped.
Here is the device log:
mqtt-client.cpp:322 | Failed to connect to broker at 'xxx#xxx.xxxxxx.com/:1883': code=15, message='Lookup error.'
Keep in mind that calls from Docker (latest version) have been made to a local web server with the exact same domain name:
https-commissioning-channel.cpp:81 | [HttpsCommissioningChannel] using token to contact bootstrap service at 'https://xxx.xxxxxx.com/apibst/alo/v1/bootstrap/device-info'
So you can see the domain name has been resolved. As for the firewall configuration, port 1883 is open (remember that it had been working). RabbitMQ is running.
What might be the issue and what should I do to make the call go through?
As per the comments, xxx#xxx.xxxxxx.com/:1883 should not contain a slash (it should be xxx#xxx.xxxxxx.com:1883) - see the MQTT URI scheme.
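For illustration, connecting with the Eclipse Paho C client would look roughly like the sketch below; the broker host, port, and client ID are placeholders, not the poster's real values. Note that the server URI is scheme://host:port, with no slash after the host.

#include <stdio.h>
#include "MQTTClient.h"

int main(void)
{
    MQTTClient client;
    /* Correct form: scheme://host:port - no slash between host and port */
    int rc = MQTTClient_create(&client, "tcp://broker.example.com:1883",
                               "device-sim-1", MQTTCLIENT_PERSISTENCE_NONE, NULL);
    if (rc != MQTTCLIENT_SUCCESS) {
        fprintf(stderr, "create failed: %d\n", rc);
        return 1;
    }

    MQTTClient_connectOptions conn_opts = MQTTClient_connectOptions_initializer;
    conn_opts.keepAliveInterval = 20;
    conn_opts.cleansession = 1;

    rc = MQTTClient_connect(client, &conn_opts);
    if (rc != MQTTCLIENT_SUCCESS)
        fprintf(stderr, "connect failed: %d\n", rc);
    else
        MQTTClient_disconnect(client, 10000);

    MQTTClient_destroy(&client);
    return rc == MQTTCLIENT_SUCCESS ? 0 : 1;
}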
I wonder if others have noticed this issue with the WSL2 Debian implementation of TCP.
I am connecting from a Docker container running WSL2 Debian v. 20.
The TCP client sends a keep-alive packet every second, which is rather overkill. Then, after roughly 5 minutes, the client terminates the connection without any reason. Is anybody else seeing this behavior?
You can reproduce this by just opening a telnet session to another host, but the behavior happens with other types of sockets too.
And before you ask: this issue is not caused by the server; it does not occur when opening the same TCP connection from other hosts.
[Wireshark dump of the last few seconds of the idle TCP connection]
I had the same problem with Ubuntu on WSL2. An outbound ssh connection closed after a period of time if there was no activity on that connection. This is particularly annoying if you are running an application that produces no screen output.
I suspect that the internal router that connects wsl to the local network dropped the idle TCP connection.
The solution was to shorten the TCP keep-alive timers in /proc/sys/net/ipv4; the following worked for me:
echo 300 > /proc/sys/net/ipv4/tcp_keepalive_time
echo 45 > /proc/sys/net/ipv4/tcp_keepalive_intvl
So I figured this out. Unfortunately, the WSL2 implementation of Debian seems to have this hardcoded in the stack. I tried changing the corresponding socket options on the socket itself, and they didn't cause any change in the behavior.
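For reference, per-socket keep-alive tuning on Linux normally looks like the sketch below: standard SO_KEEPALIVE plus the Linux-specific TCP_KEEPIDLE/TCP_KEEPINTVL/TCP_KEEPCNT options. The values mirror the /proc settings above, and as noted, these reportedly had no effect under WSL2.

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

/* Enable keep-alive on one socket: probe after 300 s idle, every 45 s,
 * give up after 9 failed probes. Returns -1 on error. */
int enable_keepalive(int fd)
{
    int yes = 1, idle = 300, intvl = 45, cnt = 9;

    if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &yes, sizeof(yes)) < 0)
        return -1;
    /* Linux-specific knobs, mirroring the /proc settings above */
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl)) < 0)
        return -1;
    if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPCNT, &cnt, sizeof(cnt)) < 0)
        return -1;
    return 0;
}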
I am attempting to bind a socket to a port below:
if (bind(socket_desc, (struct sockaddr *)&server, sizeof(server)) < 0)
{
    perror("bind failed. Error");
    return 1;
}
puts("bind done");
But it gives:
$ ./serve
Socket created
bind failed. Error: Address already in use
Why does this error occur?
Everyone is correct. However, if you're also busy testing your code, your own application might still "own" the socket if it starts and stops relatively quickly. Try SO_REUSEADDR as a socket option (see the sketch after the quoted explanation below):
What exactly does SO_REUSEADDR do?
This socket option tells the kernel that even if this port is busy (in the TIME_WAIT state), go ahead and reuse it anyway. If it is busy, but with another state, you will still get an address already in use error. It is useful if your server has been shut down, and then restarted right away while sockets are still active on its port. You should be aware that if any unexpected data comes in, it may confuse your server, but while this is possible, it is not likely.
It has been pointed out that "A socket is a 5 tuple (proto, local addr, local port, remote addr, remote port). SO_REUSEADDR just says that you can reuse local addresses. The 5 tuple still must be unique!" by Michael Hunter (mphunter#qnx.com). This is true, and this is why it is very unlikely that unexpected data will ever be seen by your server. The danger is that such a 5 tuple is still floating around on the net, and while it is bouncing around, a new connection from the same client, on the same system, happens to get the same remote port. This is explained by Richard Stevens in "2.7 Please explain the TIME_WAIT state."
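To make that concrete, here is a minimal sketch of setting SO_REUSEADDR before bind(); the port number is just an example:

#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    /* Allow rebinding while old sockets linger in TIME_WAIT */
    int yes = 1;
    if (setsockopt(sock, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof(yes)) < 0) {
        perror("setsockopt(SO_REUSEADDR)");
        return 1;
    }

    struct sockaddr_in server;
    memset(&server, 0, sizeof(server));
    server.sin_family = AF_INET;
    server.sin_addr.s_addr = htonl(INADDR_ANY);
    server.sin_port = htons(8888);          /* example port */

    if (bind(sock, (struct sockaddr *)&server, sizeof(server)) < 0) {
        perror("bind failed. Error");
        return 1;
    }
    puts("bind done");

    close(sock);
    return 0;
}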
You have a process that is already using that port. netstat -tulpn will let you find the process ID of the process that is using a particular port.
Address already in use means that the port you are trying to allocate for your current execution is already allocated to another process.
If you are a developer working on an application that requires lots of testing, you might have an instance of the same application running in the background (maybe you forgot to stop it properly).
So if you encounter this error, just see which application/process is using the port.
On Linux, try netstat -tulpn. This command lists the listening ports along with the processes that own them.
Check whether an application is using your port. If that application or process is an important one, you might want to use another port that is not used by any process/application.
In any case, you can stop the process that is using your port and let your application take it.
If you are in a Linux environment:
Use netstat -tulpn to display the processes.
Use kill <pid> to terminate the process.
If you are using Windows:
Use netstat -a -o -n to check the port usage.
Use taskkill /F /PID <pid> to kill the process.
The error usually means that the port you are trying to open is already being used by another application. Try using netstat to see which ports are open, and then use an available port.
Also check whether you are binding to the right IP address (I am assuming it would be localhost).
If the address is already in use and you just want to kill whichever process is using the port, you can use
lsof -ti:PortNumberGoesHere | xargs kill -9
Source and inspiration: this.
PS: I could not use netstat because it was not already installed.
As mentioned above, the port is already in use.
This could be due to several reasons:
Some other application is already using it.
The port is in CLOSE_WAIT state, i.e. your program is waiting for the other end to close the connection; see https://unix.stackexchange.com/questions/10106/orphaned-connections-in-close-wait-state.
The socket might be in TIME_WAIT state; you can wait, or use the socket option SO_REUSEADDR as mentioned in another post.
Run netstat -a | grep <portno> to check the port's state.
It also happens when you have not given enough permissions (read and write) to your sock file!
Just add the expected permissions to the folder containing your sock file and to the sock file itself:
chmod ug+rw /path/to/your/
chmod ug+rw /path/to/your/file.sock
Then have fun!
I was also facing this problem, but I resolved it.
Make sure that the client-side and server-side programs are in different projects in your IDE (in my case, NetBeans). Then, assuming you're using localhost, implement the two programs as two different projects.
To terminate all node processes:
killall -9 node
First of all, check which ports are listening:
netstat -tlpn
Then select an available port to connect to; you can check a specific one with:
sudo netstat -tlpn | grep ':port'
Apply the fix to both your server and client interfaces: go to the Barrier tab -> Change Settings -> set the port value -> Save/OK.
Check that both the clients and the server have the same port value.
Then reload.
Now it should be OK.
Check for the PID of a running process:
pidof <process-name>
Kill processes:
sudo kill -9 process_id_1 process_id_2 process_id_3
So I am building a web application for university that has a very high tick rate (clients receiving data from a Node server more than 30 times per second via socket.io). This works well in Docker. Now I have installed and configured nginx, and everything works well (no exposed ports, socket still running, etc.), but nginx now logs every single socket connection from every single client to the Docker terminal (with two clients, well above 60 log lines per second). I think this also leads to performance issues and causes small lags for the clients. I did not find any solution in their docs.
I have six containers in total running in Docker swarm: Kafka+Zookeeper, MongoDB, A, B, C, and Interface. Interface is the main access point from the public - only this container publishes a port, 5683. The Interface container connects to A, B, and C during startup. I am using a docker-compose file plus docker stack deploy, and each service has a name which Interface uses as the host. Everything starts successfully and works fine. After some time (20 minutes, 1 hour, ...), I am no longer able to make requests to Interface. Interface receives my requests, but the application has lost its connection to service A, B, C, or all of them. If I restart Interface, it is able to reconnect to services A, B, and C.
I first thought it was an application problem, so I exposed 2 new ports on each service (Interface, A, B, C) and connected a profiler and debugger to them. The applications are running properly: no leaks, no blocked threads, working normally and waiting for connections. The debugger showed me that when I make a request to Interface and Interface tries to request service A, a "Connection reset by peer" exception is thrown.
During this debugging I found out something interesting. I attached the debugger to Interface when the services started, and the debugger was also disconnected after some time - and I was not able to reconnect it until I made a request to the container's application. The problem: the handshake failed.
Another interesting thing I found was that I was not able to reach Interface either. So I used Wireshark to see what was going on: SYN-ACK was fine. Then the application posted some data, and Interface responded with FIN, ACK. I assume this is also what happens when Interface tries to request service A and the connection gets FINed. The codebase of Interface, A, B, and C is the same as far as the netty server is concerned.
Finally, I don't think it's an application issue. Why? I tried deploying the containers not as services: I ran each container separately, published the ports of each, and set the service endpoints to localhost (no overlay network). And it works - the containers run without problems. Also, I didn't mention at the beginning that the Java applications (Interface, A, B, C) run without problems when they run as standalone applications, i.e. not in Docker.
Could you please help me figure out what the issue could be? Why does Docker close sockets in the case of an overlay network?
I am using the newest Docker; I have also tried older versions.
Finally, I was able to solve the problem.
What was happening, one more time: Interface opens a permanent TCP connection to A, B, and C. When you run services A, B, C as standalone Java applications, everything works. When we dockerize them and run them in swarm mode, it works for only a few minutes. The strange part was that the connection between Interface and another service was interrupted at the moment a client made a request to Interface.
After many, many unsuccessful tests and debugging of each container, I tried running each Docker container separately, with mapped ports, and specified localhost as the endpoint (each container exposed its ports, and Interface connected to localhost). Funny thing: it worked. When you run containers like this, a different network driver is used - the bridge driver. If you run them in swarm mode, the overlay network driver is used.
So it had to be something with the Docker network, not with the application itself. The next step was a tcpdump from each container after a couple of minutes, when it should have stopped working. It was very interesting:
Client -> Interface (OK, request accepted)
Interface -> A (forwards the request because it belongs to A)
Interface -> A [POST]
A -> Interface [RESET]
A was resetting the open TCP connection after a couple of minutes without communication. Why?
Docker uses IP Virtual Server (IPVS), and IPVS maintains its own connection table. The default timeout for CLOSE_WAIT connections in the IPVS table is 60 seconds. Hence, when the server sends something after 60 seconds, the IPVS connection is no longer available, and the packet looks invalid for a new TCP session and gets RST. On the client side, the connection remains forever in the FIN_WAIT2 state, because the app still has the socket open; the kernel's fin_wait timer kicks in only for orphaned TCP sockets.
This is what I read about it and how I understand it. I am not sure my explanation of the problem is correct, but based on these assumptions I implemented a ping-pong between Interface and the A, B, C services so that a connection never sits idle for 60 seconds. And it's working.
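The services in question are Java/netty, but the ping-pong idea can be sketched on a plain socket. The sketch below is illustrative only: the 30-second interval and the one-byte ping message are assumptions, chosen to stay well under the 60-second expiry described above.

#include <sys/select.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Send a one-byte application-level "ping" whenever the connection has
 * been idle for 30 seconds, so it never hits the ~60-second IPVS expiry
 * described above. Returns -1 when the connection is gone. */
int keep_link_warm(int fd)
{
    for (;;) {
        fd_set rfds;
        FD_ZERO(&rfds);
        FD_SET(fd, &rfds);
        struct timeval tv = { .tv_sec = 30, .tv_usec = 0 };

        int n = select(fd + 1, &rfds, NULL, NULL, &tv);
        if (n < 0)
            return -1;                      /* select() failed */
        if (n == 0) {                       /* 30 s of silence: ping */
            const char ping = 'p';          /* hypothetical ping byte */
            if (send(fd, &ping, 1, 0) < 0)
                return -1;
        } else {                            /* peer sent data (or a pong) */
            char buf[512];
            ssize_t got = recv(fd, buf, sizeof buf, 0);
            if (got <= 0)
                return -1;                  /* peer closed, or error */
            /* hand real payload bytes to the application here */
        }
    }
}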
Got the same issue.
I specified
endpoint_mode: dnsrr
in the properties of the service that plays the "server" role, and it works just fine.
https://forums.docker.com/t/tcp-timeout-that-occurs-only-in-docker-swarm-not-simple-docker-run/58179