Meaning of SS command output with 3 colons (':::')? - parsing

The increasingly popular ss command (/usr/sbin/ss on RHEL) is a replacement for netstat.
I'm trying to parse the output in Python and I'm seeing some odd data that is not explained in the documentation.
$ ss -an | head
State   Recv-Q Send-Q Local Address:Port   Peer Address:Port
LISTEN  0      0      :::14144             :::*
LISTEN  0      0      127.0.0.1:32000      *:*
LISTEN  0      0      :::3233              :::*
LISTEN  0      0      *:5634               *:*
LISTEN  0      0      :::5634              :::*
So it's obvious what the local address means when it's 127.0.0.1:32000: listening on the loopback interface on port 32000. But what do the 3 colons ::: mean?
Really, it must be two extra colons, since the format is host:port. So what does a host of two colons (::) mean?
I should mention I'm running this on a RHEL/CENTOS box:
Linux boxname 2.6.18-348.3.1.el5 #1 SMP somedate x86_64 x86_64 x86_64 GNU/Linux
This is not explained in any of the online man pages or other discussions I can find.

That's the abbreviated IPv6 address notation: :: stands for a run of consecutive all-zero groups, and since the output format is host:port, a local address of :::14144 is the host :: followed by the port 14144.
:::14144 reads as 0000:0000:0000:0000:0000:0000:0000:0000 port 14144. The all-zeros address is the IPv6 wildcard (the equivalent of 0.0.0.0 in IPv4), so this socket is listening on all IPv6 addresses on port 14144.
:::* reads the same way with a wildcard port: any address, any port.
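For the parsing half of the question: since IPv6 hosts themselves contain colons, split on the last colon only. A minimal Python sketch (standard library only, assuming ss's default addr:port column format):

def split_addr(addr):
    """Split an ss/netstat address like ':::14144' or '127.0.0.1:32000'
    into (host, port): the port is everything after the LAST colon."""
    host, _, port = addr.rpartition(":")
    return host, port

assert split_addr("127.0.0.1:32000") == ("127.0.0.1", "32000")
assert split_addr(":::14144") == ("::", "14144")  # '::' = any IPv6 address
assert split_addr("*:5634") == ("*", "5634")      # '*'  = any IPv4 address
assert split_addr(":::*") == ("::", "*")          # any address, any port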

Related

How to fix "Connection refused" error on ACME certificate challenge with cookiecutter-django

I have created a simple website using cookiecutter-django (using the latest master cloned today). Running the docker-compose setup locally works. Now I would like to deploy the site on digital ocean. To do this, I run the following commands:
$ docker-machine create -d digitalocean --digitalocean-access-token=secret instancename
$ eval "$(docker-machine env instancename)"
$ sudo docker-compose -f production.yml build
$ sudo docker-compose -f production.yml up
In the cookiecutter-django documentation I read
If you are not using a subdomain of the domain name set in the project, then remember to put your staging/production IP address in the DJANGO_ALLOWED_HOSTS environment variable (see Settings) before you deploy your website. Failure to do this will mean you will not have access to your website through the HTTP protocol.
Therefore, in the file .envs/.production/.django I changed the line with DJANGO_ALLOWED_HOSTS from
DJANGO_ALLOWED_HOSTS=.example.com (instead of example.com I use my actual domain)
to
DJANGO_ALLOWED_HOSTS=XXX.XXX.XXX.XX
(with XXX.XXX.XXX.XX being the IP of my digital ocean droplet; I also tried DJANGO_ALLOWED_HOSTS=.example.com and DJANGO_ALLOWED_HOSTS=.example.com,XXX.XXX.XXX.XX with the same outcome)
In addition, I logged in to where I registered the domain and made sure to point the A-Record to the IP of my digital ocean droplet.
With this setup the deployment does not work. I get the following error message:
traefik_1 | time="2019-03-29T21:32:20Z" level=error msg="Unable to obtain ACME certificate for domains \"example.com\" detected thanks to rule \"Host:example.com\" : unable to generate a certificate for the domains [example.com]: acme: Error -> One or more domains had a problem:\n[example.com] acme: error: 400 :: urn:ietf:params:acme:error:connection :: Fetching http://example.com/.well-known/acme-challenge/example-key-here: Connection refused, url: \n"
Unfortunately, I was not able to find a solution for this problem. Any help is greatly appreciated!
Update
When I run netstat -antp on the server as suggested in the comments I get the following output (IPs replaced with placeholders):
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address       Foreign Address      State        PID/Program name
tcp   0      0      0.0.0.0:22          0.0.0.0:*            LISTEN       1590/sshd
tcp   0      0      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:48923   SYN_RECV     -
tcp   0      332    XXX.XXX.XXX.XX:22   ZZ.ZZZ.ZZ.ZZZ:49726  ESTABLISHED  16959/0
tcp   0      1      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:17195   FIN_WAIT1    -
tcp   0      0      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:57909   ESTABLISHED  16958/sshd: [accept
tcp6  0      0      :::2376             :::*                 LISTEN       5120/dockerd
tcp6  0      0      :::22               :::*                 LISTEN       1590/sshd
When I run $ sudo docker-compose -f production.yml up first, netstat -antp returns this:
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address       Foreign Address      State        PID/Program name
tcp   0      0      0.0.0.0:22          0.0.0.0:*            LISTEN       1590/sshd
tcp   0      332    XXX.XXX.XXX.XX:22   ZZ.ZZZ.ZZ.ZZZ:49726  ESTABLISHED  16959/0
tcp   0      0      XXX.XXX.XXX.XX:22   AA.AAA.AAA.A:50098   ESTABLISHED  17046/sshd: [accept
tcp   0      0      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:55652   SYN_RECV     -
tcp   0      0      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:16750   SYN_RECV     -
tcp   0      0      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:31541   SYN_RECV     -
tcp   0      1      XXX.XXX.XXX.XX:22   YYY.YY.Y.YYY:57909   FIN_WAIT1    -
tcp6  0      0      :::2376             :::*                 LISTEN       5120/dockerd
tcp6  0      0      :::22               :::*                 LISTEN       1590/sshd
In my experience, the Droplets are configured as cookiecutter-django needs them and the ports are opened properly, so unless you closed them, you shouldn't have to do anything.
Usually, when this error happens, it's due to a DNS configuration issue: Let's Encrypt was not able to reach your server using the domain example.com. Unfortunately, you're not giving us the actual domain you've used, so I'll try to guess.
You said you've configured an A record to point to your droplet, which is what you should do. However, this change needs to propagate to most of the name servers, which may take time. It might be propagated for you, but if the name server used by Let's Encrypt hasn't picked it up yet, your TLS certificate request will fail.
You can check how well it's propagated using an online tool which checks multiple name servers at once, like https://dnschecker.org/.
From your machine, you can do so using dig (for people interested, I recommend this video):
# Using your default name server
dig example.com
# Using 1.1.1.1 as name server
dig @1.1.1.1 example.com
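If you'd rather script the check, here's a minimal Python sketch using only the standard library (example.com stands in for your real domain, as above). It resolves the name and then attempts the same TCP connection to port 80 that Let's Encrypt's HTTP-01 challenge makes:

import socket

domain = "example.com"  # substitute your actual domain

# What does the domain currently resolve to (via your default resolver)?
addrs = {info[4][0] for info in socket.getaddrinfo(domain, 80, proto=socket.IPPROTO_TCP)}
print(domain, "resolves to:", addrs)

# Can we reach port 80 there, as Let's Encrypt must for the challenge?
for addr in addrs:
    try:
        with socket.create_connection((addr, 80), timeout=5):
            print(addr, "port 80: connection accepted")
    except OSError as exc:
        print(addr, "port 80:", exc)  # e.g. 'Connection refused'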
Hope that helps.

netstat misses some ports [closed]

$ nmap localhost
Starting Nmap 6.40 ( http://nmap.org ) at 2019-02-12 12:59 +00
Nmap scan report for localhost (127.0.0.1)
Host is up (0.0027s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 995 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh
25/tcp  open  smtp
80/tcp  open  http
111/tcp open  rpcbind
443/tcp open  https
Nmap done: 1 IP address (1 host up) scanned in 0.23 seconds
$ sudo netstat -lnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address  Foreign Address  State
tcp   0      0      0.0.0.0:111    0.0.0.0:*        LISTEN
tcp   0      0      0.0.0.0:22     0.0.0.0:*        LISTEN
tcp   0      0      127.0.0.1:25   0.0.0.0:*        LISTEN
tcp6  0      0      :::111         :::*             LISTEN
tcp6  0      0      :::22          :::*             LISTEN
tcp6  0      0      ::1:25         :::*             LISTEN
$
Why are 80 and 443 not captured by netstat?
ss does not report the missing ports either. This is on a CentOS 7 box. Both 80 and 443 are actually open and working, as nmap found out -- curl from another host can pull content as expected.
The special thing is that 80 and 443 are opened by a Docker container running on this host (the commands were run on the host, not in the container, just to be clear). The other three (22, 25, 111) are opened by non-Docker local programs. I'm guessing Docker is doing some voodoo, but I have been unable to find anything useful.
As of v1.7, Docker has a configuration flag --userland-proxy, which can be set to true or false (I believe it may be set to false by default these days). When set to false, instead of using a proxy process to deliver ingress traffic to the container, Docker uses iptables rules to NAT/forward the traffic (hairpin NAT) to the container.
See this article for a more detailed explanation. From what I was able to gather, in most cases when the userland proxy is disabled the port will still show up in netstat, but only to reserve the port so that other host applications can't bind it; the actual data plane follows the rules in iptables (you can inspect them with iptables -t nat -L -n). At the same time, I've come across a bug where that wasn't the case and the port didn't show up in the output of netstat/ss at all.
I believe this is what is happening in your case: you do not see the port in the output of netstat, but traffic can still reach the container because the userland proxy is disabled and iptables magic is used to get there.
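To convince yourself that a port is genuinely reachable even though netstat/ss won't list it, a plain TCP connect test is enough. A minimal Python sketch (standard library only, ports taken from your question):

import socket

# A successful connect proves the iptables/NAT path into the container
# works, regardless of what netstat/ss report.
for port in (22, 25, 80, 111, 443):
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=2):
            print(f"port {port}: open")
    except OSError as exc:
        print(f"port {port}: closed ({exc})")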

How to setup Restund Turn Server with IPv6

I am using Restund for WebRTC. My Restund server currently works with IPv4. I am attempting to update my Restund server to work with both IPv4 and IPv6. I am having some troubles and could use some help.
My dilemma is that my Restund TURN server no longer works over cellular service on iOS devices since the 10.2 update when using T-Mobile or Sprint (note: Verizon is still working). As I understand it, these carriers now communicate only over IPv6, and other carriers have announced they will be switching soon.
One thing I have noticed is the need to use the "local" IPv4 address of my eth0 network device as listed in ifconfig. Because of this, I also added the [::1] entries in case the IPv6 path requires them, as well as the full IPv6 address. So there are three entries each for udp_listen, tcp_listen, and tls_listen.
In my example below, I've changed the real addresses to be example addresses.
I've included my /etc/restund.conf file below.
daemon yes
debug no
realm HOST
syncinterval 600
udp_listen 192.168.1.100:3478
udp_listen [::1]:3478
udp_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3478
udp_sockbuf_size 524288
tcp_listen 192.168.1.100:3478
tcp_listen [::1]:3478
tcp_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3478
tls_listen 192.168.1.100:3479,/etc/cert.pem
tls_listen [::1]:3479,/etc/cert.pem
tls_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3479,/etc/cert.pem
# modules
module_path /usr/local/lib/restund/modules
module stat.so
module binding.so
module auth.so
module turn.so
module syslog.so
module status.so
# auth
auth_nonce_expiry 3600
auth_shared_expiry 86400
# share this with your prosody server
auth_shared yoursecretthing
#auth_shared_rollover incaseyouneedtodokeyrollover
# turn
turn_max_allocations 512
turn_max_lifetime 600
turn_relay_addr 192.168.1.100
#turn_relay_addr6 ::1
turn_relay_addr6 AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
turn_relay_addr6 ::1
# syslog
syslog_facility 24
# status
# 2/2/2017 Apparently only the first status is used, the second one is ignored.
# I verified this by going to:
# http://[AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:8080
# http://PUBLIC_IPV4_ADDR:8080/
# Only one would work at a time.
# So I commented the IPv6 Addresses.
status_udp_addr 192.168.1.100
#status_udp_addr AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
status_udp_port 33000
status_http_addr 192.168.1.100
#status_http_addr AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
status_http_port 8080
After verifying that Restund ran without errors, I used netstat -nlp to verify that the appropriate TCP/UDP ports were now being listened on.
One concern I found in the netstat results: the full IPv6 address shows only 4 of the 8 groups (AAAA:AAAA:AAAA:AAAA instead of AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA). I'm wondering if this is something I should be concerned about.
netstat -nlp
IPv4 && IPv6 [Full Address and ::1]
Proto Recv-Q Send-Q Local Address             Foreign Address  State   PID/Program name
tcp   0      0      192.168.1.100:8080        0.0.0.0:*        LISTEN  11442/restund
tcp   0      0      192.168.1.100:3478        0.0.0.0:*        LISTEN  11442/restund
tcp   0      0      0.0.0.0:22                0.0.0.0:*        LISTEN  1321/sshd
tcp   0      0      192.168.1.100:3479        0.0.0.0:*        LISTEN  11442/restund
tcp6  0      0      AAAA:AAAA:AAAA:AAAA:3478  :::*             LISTEN  11442/restund
tcp6  0      0      ::1:3478                  :::*             LISTEN  11442/restund
tcp6  0      0      :::22                     :::*             LISTEN  1321/sshd
tcp6  0      0      AAAA:AAAA:AAAA:AAAA:3479  :::*             LISTEN  11442/restund
tcp6  0      0      ::1:3479                  :::*             LISTEN  11442/restund
udp   0      0      192.168.1.100:33000       0.0.0.0:*                11442/restund
udp   0      0      192.168.1.100:3478        0.0.0.0:*                11442/restund
udp   0      0      0.0.0.0:68                0.0.0.0:*                927/dhclient
udp6  0      0      AAAA:AAAA:AAAA:AAAA:3478  :::*                     11442/restund
udp6  0      0      ::1:3478                  :::*                     11442/restund
After all of these IPv6 additions to my /etc/restund.conf file, I am still unable to communicate via IPv6. Thanks in advance for any input!
This won't resolve your IPv6 issue, but it should get things working for now.
On January 27, T-Mobile released a carrier update for iOS 10.2.1 (Carrier 27.2):
https://support.t-mobile.com/docs/DOC-32574
Try updating your carrier settings; it may fix the T-Mobile issue:
From the Home screen, tap Settings.
Tap General.
Tap About, then review the Carrier Update field.
It should prompt you to update at this point if you haven't already. See if that resolves your problem with T-Mobile. The update "Adds dual stack to improve app compatibility issues with iOS 10.2".

Docker run cannot publish port range despite netstat indicating that ports are available

I am trying to run a Docker image from inside Google Cloud Shell (i.e. on a courtesy Google Compute Engine instance) as follows:
docker run -d -p 20000-30000:10000-20000 -it <image-id> bash -c bash
Prior to this step, netstat -tuapn reported the following:
Proto Recv-Q Send-Q Local Address    Foreign Address      State        PID/Program name
tcp   0      0      127.0.0.1:8998   0.0.0.0:*            LISTEN       249/python
tcp   0      0      0.0.0.0:80       0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:22       0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:13080    0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:13081    0.0.0.0:*            LISTEN       -
tcp   0      0      127.0.0.1:34490  0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:13082    0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:13083    0.0.0.0:*            LISTEN       -
tcp   0      0      0.0.0.0:13084    0.0.0.0:*            LISTEN       -
tcp   0      0      127.0.0.1:34490  127.0.0.1:48161      ESTABLISHED  -
tcp   0      252    172.17.0.2:22    173.194.92.34:49424  ESTABLISHED  -
tcp   0      0      127.0.0.1:48161  127.0.0.1:34490      ESTABLISHED  15784/python
tcp6  0      0      :::22            :::*                 LISTEN       -
So it looks to me as if all the ports between 20000 and 30000 are available, but the run is nevertheless terminated with the following error message:
Error response from daemon: Cannot start container :
failed to create endpoint on network bridge: Timed out
proxy starting the userland proxy
What's going on here? How can I obtain more diagnostic information and ultimately solve the problem (i.e. get my Docker image to run with the whole port range available)?
Opening up a range of ports doesn't currently scale well in Docker. The above results in roughly 10,000 docker-proxy processes being spawned, one per published port, along with all the file descriptors needed to support those processes, plus a long list of firewall rules. At some point you'll hit a resource limit on either file descriptors or processes. See issue 11185 on github for more details.
The only workaround when running on a host you control is to not publish the ports and update the firewall rules manually; I'm not sure that's even an option with GCE. The best solution is to redesign your requirements to keep the port range small. The last option is to bypass the bridge network entirely with --net=host and run on the host network, where there are no proxies or per-port firewall rules. The latter removes any network isolation the container has, so it tends to be recommended against.
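If you're scripting container startup anyway, here's a minimal sketch of that last option using the Docker SDK for Python (pip install docker; <image-id> is the placeholder from your question). With host networking there is no bridge, hence no docker-proxy processes and no -p mapping at all:

import docker

client = docker.from_env()

# Host networking: the container shares the host's network stack, so
# every port the process binds is reachable directly -- no docker-proxy,
# no per-port firewall rules, but also no network isolation.
container = client.containers.run(
    "<image-id>",          # placeholder image id from the question
    command="bash -c bash",
    network_mode="host",   # replaces -p 20000-30000:10000-20000 entirely
    detach=True,
    tty=True,
)
print(container.short_id)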

netstat local address, port represented by string

What ports are represented by the strings irdmi, availant-mgr, etc.?
In general, how do I figure this out? Are they assigned in some file somewhere?
netstat -lp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address    Foreign Address  State   PID/Program name
tcp   0      0      *:irdmi          *:*              LISTEN  4648/python
tcp   0      0      *:availant-mgr   *:*              LISTEN  1777/sshd
tcp   0      0      *:shell          *:*              LISTEN  1732/xinetd
tcp   0      0      *:ssh            *:*              LISTEN  1698/sshd
Use the -n flag to show numerical addresses and ports instead of names:
netstat -lnp
Without -n, netstat translates port numbers to service names using /etc/services -- that file is where the mapping is assigned, and it answers the "some file somewhere" part of your question.
Typical protocol ports can also be found here: http://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers
e.g. irdmi is typically 8000, and ssh is 22.
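If you need the same lookup programmatically, Python's socket module reads the same services database. A minimal sketch (it assumes the entries exist in your /etc/services, as they do on most Linux systems):

import socket

# Service name -> port number (same mapping netstat uses)
print(socket.getservbyname("irdmi"))  # 8000
print(socket.getservbyname("ssh"))    # 22

# Reverse lookup: port number -> service name
print(socket.getservbyport(8000))     # irdmi
print(socket.getservbyport(22))       # ssh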
