netstat: local address port represented by a string

What ports are represented by the strings irdmi, availant-mgr, etc.?
In general, how do I figure this out? Is it assigned in some file somewhere?
netstat -lp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 *:irdmi *:* LISTEN 4648/python
tcp 0 0 *:availant-mgr *:* LISTEN 1777/sshd
tcp 0 0 *:shell *:* LISTEN 1732/xinetd
tcp 0 0 *:ssh *:* LISTEN 1698/sshd

Use the -n flag to show numerical addresses and ports.
netstat -an

If you run netstat -an, it will list the actual port numbers you are listening on. The name-to-number mapping for named services is kept in /etc/services on the local machine.
Typical protocol ports can be found here: http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
e.g. irdmi is typically 8000, SSH is 22.
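As a quick cross-check, Python's socket module performs the same name-to-number lookup against the local services database (this sketch assumes a standard /etc/services, as found on most Linux systems):

```python
import socket

# Resolve a service name to its port number, and a port number back to its
# name, using the same services database that netstat consults.
print(socket.getservbyname("ssh", "tcp"))   # 22
print(socket.getservbyport(22, "tcp"))      # ssh
```

An unknown name raises OSError, which is why netstat falls back to printing the raw number when no entry exists.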

docker container not able to reach some of host's ports

I have a stack with docker-compose running on a VM.
Here is a sample output of my netstat -tulpn on the VM
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 0.0.0.0:9839 0.0.0.0:* LISTEN
tcp 0 0 127.0.0.1:8484 0.0.0.0:* LISTEN
The Docker container is able to communicate with port 9839 (using 172.17.0.1) but not with port 8484.
Why is that?
That's because the program listening on port 8484 is bound to 127.0.0.1 meaning that it'll only accept connections from localhost.
The one listening on 9839 has bound to 0.0.0.0 meaning it'll accept connections from anywhere.
To make the one listening on 8484 accept connections from anywhere, you need to change the address it binds to. If it's something you've written yourself, you can change it in code. If it's not, there's probably a configuration setting you can set.
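To illustrate the difference, here is a minimal Python sketch (the ports are chosen by the OS here, not the actual 8484/9839 services):

```python
import socket

# Like the service on 8484: bound to loopback, reachable only from this host.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
loopback_only.listen()

# Like the service on 9839: bound to 0.0.0.0, reachable on every local address.
any_address = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
any_address.bind(("0.0.0.0", 0))
any_address.listen()

# netstat would show these as 127.0.0.1:<port> and 0.0.0.0:<port> respectively.
print(loopback_only.getsockname())
print(any_address.getsockname())
```

Changing the bind address from "127.0.0.1" to "0.0.0.0" is exactly the configuration change described above.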

Flink all-zero listening

Flink listens on all-zeros inside the Docker container. Binding to 0.0.0.0 is equivalent to binding every IP address of the local host, so the process listens on all networks. If the management plane, control plane, and user plane are separated on the local host, this violates the system's isolation principle. For example, run the netstat -nultp command in the jobmanager container:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:6123 0.0.0.0:* LISTEN 1100/java
tcp 0 0 0.0.0.0:6124 0.0.0.0:* LISTEN 1100/java
tcp 0 0 0.0.0.0:8081 0.0.0.0:* LISTEN 1100/java
tcp 0 0 0.0.0.0:50100 0.0.0.0:* LISTEN 1100/java
Listening on all-zeros can cause security problems. Can anyone suggest how to solve the all-zero listening problem?

Socket port not opening in Docker Swarm Cluster (Root Cause Identified)

I have the following setup:
Two VMs
created overlay network
created two docker swarm services
docker service create --name karaf1-service --replicas 1 --network karaf_net karaf1:2.0.0
docker service create --name karaf2-service --replicas 1 --network karaf_net karaf2:2.0.0
Now these containers open a server socket at start. I observed that sometimes it is created successfully, but a lot of the time it fails.
ServerSocketFactory.getDefault().createServerSocket(serverPort)
If both containers start on the same node it is mostly successful, but when the containers are created on different nodes it fails almost every time.
Before troubleshooting any network issue, the containers should at least be able to create their sockets.
This container is not able to open the socket:
root@bd48643080b2:/opt/apache/apache-karaf-4.1.5# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8101 0.0.0.0:* LISTEN 61/java
tcp 0 0 127.0.0.1:1099 0.0.0.0:* LISTEN 61/java
tcp 0 0 0.0.0.0:41551 0.0.0.0:* LISTEN 61/java
tcp 0 0 127.0.0.11:44853 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:44444 0.0.0.0:* LISTEN 61/java
The following container is able to create it on port 4550, but sometimes it is vice versa:
root@38d26c7dde1a:/opt/apache/apache-karaf-4.1.5# netstat -tulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:37347 0.0.0.0:* LISTEN 61/java
tcp 0 0 0.0.0.0:8101 0.0.0.0:* LISTEN 61/java
tcp 0 0 0.0.0.0:4550 0.0.0.0:* LISTEN 61/java
tcp 0 0 127.0.0.11:37575 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:1099 0.0.0.0:* LISTEN 61/java
tcp 0 0 127.0.0.1:35321 0.0.0.0:* LISTEN 61/java
tcp 0 0 0.0.0.0:44444 0.0.0.0:* LISTEN 61/java
Root Cause Identified:
As I am creating two services, while creating the first service I pass the second service's name as a hostname to the first service so it can keep verifying the other's status; Java then throws an error on the hostname "karaf2-service":
java.net.UnknownHostException: karaf2-service: Name or service not known
at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
Now I can't add an entry for karaf2-service to /etc/hosts so that the socket doesn't complain, because I don't know which IP will be assigned to the docker swarm service; on an overlay network we mostly communicate via service names.
Any suggestions to resolve this?
The easiest way to do this is to check on container startup whether you can reach the other service, and if not, wait a few seconds and try again.
There are multiple tools to do this, such as wait-for-it: https://github.com/vishnubob/wait-for-it
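A minimal Python sketch of that wait-and-retry approach (the service name and port in the comment are placeholders; adjust timeout and interval to taste):

```python
import socket
import time

def wait_for(host: str, port: int, timeout: float = 30.0,
             interval: float = 2.0) -> bool:
    """Poll until a TCP connection to host:port succeeds, or give up
    after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # create_connection also resolves the hostname, so this covers
            # the UnknownHostException case: DNS for the service name may
            # simply not be ready yet when the container starts.
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# e.g. at startup, before opening our own server socket:
# if not wait_for("karaf2-service", 8101):
#     raise SystemExit("karaf2-service never became reachable")
```

This is essentially what wait-for-it does in shell; doing it inside the application avoids changing the container entrypoint.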

Cannot access HTTPS service from Docker container via Virtual Box

I run an HTTPS web service from a Docker container set up on VirtualBox. Here is my config:
[VirtualBox configuration screenshot]
[Docker configuration screenshot]
Unfortunately, https://127.0.0.1 is not accessible.
The output of the command docker run -it --rm --net=container:$cont_id --pid=container:$cont_id busybox netstat -lntp is:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 21/sshd
tcp 0 0 127.0.0.1:444 0.0.0.0:* LISTEN 319/node
tcp 0 0 127.0.0.1:8081 0.0.0.0:* LISTEN 315/python
tcp 0 0 :::22 :::* LISTEN 21/sshd
tcp 0 0 :::443 :::* LISTEN 319/node
I can't figure out where I'm going wrong (I am still a beginner in port forwarding and networking). Any help appreciated, thanks!

Docker run cannot publish port range despite netstat indicating that ports are available

I am trying to run a Docker image from inside Google Cloud Shell (i.e. on a courtesy Google Compute Engine instance) as follows:
docker run -d -p 20000-30000:10000-20000 -it <image-id> bash -c bash
Prior to this step, netstat -tuapn reported the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8998 0.0.0.0:* LISTEN 249/python
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13080 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13081 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13082 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13083 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13084 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 127.0.0.1:48161 ESTABLISHED -
tcp 0 252 172.17.0.2:22 173.194.92.34:49424 ESTABLISHED -
tcp 0 0 127.0.0.1:48161 127.0.0.1:34490 ESTABLISHED 15784/python
tcp6 0 0 :::22 :::* LISTEN -
So it looks to me as if all the ports between 20000 and 30000 are available, but the run is nevertheless terminated with the following error message:
Error response from daemon: Cannot start container :
failed to create endpoint on network bridge: Timed out
proxy starting the userland proxy
What's going on here? How can I obtain more diagnostic information and ultimately solve the problem (i.e. get my Docker image to run with the whole port range available)?
Opening up ports in a range doesn't currently scale well in Docker. The above will result in 10,000 docker-proxy processes being spawned to support each port, including all the file descriptors needed to support all those processes, plus a long list of firewall rules being added. At some point, you'll hit a resource limit on either file descriptors or processes. See issue 11185 on github for more details.
The only workaround when running on a host you control is to not allocate the ports and manually update the firewall rules yourself. I'm not sure that's even an option with GCE. The best solution is to redesign your requirements to keep the port range small. The last option is to bypass the bridge network entirely and run on the host network with --net=host, where there are no proxies and no extra firewall rules. The latter removes any network isolation you have in the container, so it tends to be recommended against.
