I am using Restund for WebRTC. My Restund server currently works with IPv4. I am attempting to update it to work with both IPv4 and IPv6, but I am having some trouble and could use some help.
My problem is that my Restund TURN server no longer works over cell service on iOS devices since the iOS 10.2 update (when using T-Mobile and Sprint; Verizon still works). As I understand it, these carriers now communicate only over IPv6, and other carriers have announced they will be switching soon.
One thing I have noticed is that I need to use the local IPv4 address of my eth0 interface as listed by ifconfig. Because of that, I also added [::1] entries in case IPv6 requires the equivalent, plus the full public IPv6 address. So there are three entries each for udp_listen, tcp_listen, and tls_listen.
In my example below, I've changed the real addresses to be example addresses.
I've included my /etc/restund.conf file below.
daemon yes
debug no
realm HOST
syncinterval 600
udp_listen 192.168.1.100:3478
udp_listen [::1]:3478
udp_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3478
udp_sockbuf_size 524288
tcp_listen 192.168.1.100:3478
tcp_listen [::1]:3478
tcp_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3478
tls_listen 192.168.1.100:3479,/etc/cert.pem
tls_listen [::1]:3479,/etc/cert.pem
tls_listen [AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:3479,/etc/cert.pem
# modules
module_path /usr/local/lib/restund/modules
module stat.so
module binding.so
module auth.so
module turn.so
module syslog.so
module status.so
# auth
auth_nonce_expiry 3600
auth_shared_expiry 86400
# share this with your prosody server
auth_shared yoursecretthing
#auth_shared_rollover incaseyouneedtodokeyrollover
# turn
turn_max_allocations 512
turn_max_lifetime 600
turn_relay_addr 192.168.1.100
#turn_relay_addr6 ::1
turn_relay_addr6 AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
turn_relay_addr6 ::1
# syslog
syslog_facility 24
# status
# 2/2/2017 Apparently only the first status is used, the second one is ignored.
# I verified this by going to:
# http://[AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA]:8080
# http://PUBLIC_IPV4_ADDR:8080/
# Only one would work at a time.
# So I commented the IPv6 Addresses.
status_udp_addr 192.168.1.100
#status_udp_addr AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
status_udp_port 33000
status_http_addr 192.168.1.100
#status_http_addr AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA
status_http_port 8080
After verifying that Restund ran without errors, I confirmed with netstat -nlp that the appropriate TCP/UDP ports were being listened on.
One concern from the netstat results is that the full IPv6 address only shows 4 of the 8 groups (AAAA:AAAA:AAAA:AAAA instead of AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA:AAAA). I'm wondering whether this is something I should be concerned about (I look into this right after the listing below).
netstat -nlp
IPv4 && IPv6 [Full Address and ::1]
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.1.100:8080 0.0.0.0:* LISTEN 11442/restund
tcp 0 0 192.168.1.100:3478 0.0.0.0:* LISTEN 11442/restund
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1321/sshd
tcp 0 0 192.168.1.100:3479 0.0.0.0:* LISTEN 11442/restund
tcp6 0 0 AAAA:AAAA:AAAA:AAAA:3478 :::* LISTEN 11442/restund
tcp6 0 0 ::1:3478 :::* LISTEN 11442/restund
tcp6 0 0 :::22 :::* LISTEN 1321/sshd
tcp6 0 0 AAAA:AAAA:AAAA:AAAA:3479 :::* LISTEN 11442/restund
tcp6 0 0 ::1:3479 :::* LISTEN 11442/restund
udp 0 0 192.168.1.100:33000 0.0.0.0:* 11442/restund
udp 0 0 192.168.1.100:3478 0.0.0.0:* 11442/restund
udp 0 0 0.0.0.0:68 0.0.0.0:* 927/dhclient
udp6 0 0 AAAA:AAAA:AAAA:AAAA:3478 :::* 11442/restund
udp6 0 0 ::1:3478 :::* 11442/restund
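I suspect the shortened addresses are just netstat truncating the display to fit its column width, rather than a problem with the listeners themselves. A quick way to double-check, assuming a net-tools netstat that supports --wide (iproute2's ss works as an alternative):
netstat -nlp --wide   # print local addresses untruncated
ss -lntup             # iproute2 equivalent; also shows the full IPv6 addresses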
After all of these IPv6 additions to my /etc/restund.conf file, I am still unable to communicate via IPv6. Thanks in advance for any input!
This won't resolve your IPv6 issue, but it should get your setup working again for now.
On January 27, T-Mobile released a carrier update for iOS 10.2.1 (carrier settings 27.2):
https://support.t-mobile.com/docs/DOC-32574
Try updating your carrier settings; it may fix the T-Mobile issue.
From the Home screen, tap Settings.
Tap General.
Tap About, then check the Carrier field.
It should prompt you to update at this point if you haven't already. See if that resolves your problem with T-Mobile. They added an update that "Adds dual stack to improve app compatibility issues with iOS 10.2".
Related
I am running a network server under the jamq user in Docker.
[root@12af450e8259 /]# su jamq -c '/opt/jboss-amq-7-i0/bin/artemis-service start'
Starting artemis-service
artemis-service is now running (25)
I am then trying to list processes and their listening sockets using netstat as root, but for processes running as a different user than mine, I only see - instead of the PID.
[root@12af450e8259 /]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1/sshd
tcp 0 0 0.0.0.0:1883 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:8161 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5445 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:61613 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:61616 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN 1/sshd
I tried adding --privileged to the Docker command line, and that fixes the problem. I then wanted to use more granular capabilities, but I cannot find the right one.
I tried
docker run --rm --cap-add=SYS_ADMIN --cap-add=NET_ADMIN -it myimage:latest bash
but that does not help.
The required capability is --cap-add=SYS_PTRACE. There are various bug reports noting that netstat needs this capability, for example Bug 901754 - SELinux is preventing /usr/bin/netstat from using the 'sys_ptrace' capabilities.
The correct command therefore is
docker run --rm --cap-add=SYS_PTRACE -it myimage:latest bash
[root@f9c4b5fa7d1c /]# netstat -nlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5672 0.0.0.0:* LISTEN 22/java
tcp 0 0 0.0.0.0:61613 0.0.0.0:* LISTEN 22/java
tcp 0 0 0.0.0.0:61616 0.0.0.0:* LISTEN 22/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 92/sshd
tcp 0 0 0.0.0.0:1883 0.0.0.0:* LISTEN 22/java
tcp 0 0 127.0.0.1:8161 0.0.0.0:* LISTEN 22/java
tcp 0 0 0.0.0.0:5445 0.0.0.0:* LISTEN 22/java
tcp6 0 0 :::22 :::* LISTEN 92/sshd
Active UNIX domain sockets (only servers)
Proto RefCnt Flags Type State I-Node PID/Program name Path
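If you start the container with docker-compose instead of docker run, the same capability can be granted there; a minimal sketch, where the service and image names are placeholders:
version: "3"
services:
  amq:
    image: myimage:latest
    cap_add:
      - SYS_PTRACE   # lets netstat inside the container resolve PIDs of processes owned by other users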
I ran netstat -tulpn | grep LISTEN and got the following results:
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 14901/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1011/exim4
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 826/ang
tcp6 0 0 :::80 :::* LISTEN 655/apache2
tcp6 0 0 :::22 :::* LISTEN 14901/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1011/exim4
tcp6 0 0 :::443 :::* LISTEN 655/apache2
How can you close a port?
Does this configuration have any security issue?
The first part of the question: how do you close a port?
You can stop the service listed in the last column, or kill it. Stopping the service makes sure it won't start again on its own; killing the process means some supervisor such as upstart might start it up again.
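For example, on a systemd-based host (the apache2 unit below matches the 655/apache2 entry in the listing; substitute whichever service owns the port you want closed):
sudo systemctl stop apache2      # closes ports 80/443 immediately
sudo systemctl disable apache2   # keeps the unit from starting again at boot
# killing the PID from the last column also works, but a supervisor may respawn it:
# sudo kill 655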
The second part of the question: does this have a security issue?
I would say yes, unless you have some firewall and access control mechanism in place. The reason is that the ssh service is listening for connections from any source. Ideally, you would restrict this at the firewall (or in the ssh config) and only allow known sources to connect. If you want to go one step further, make ssh listen on a port other than the default (22) so that you avoid being seen by the most basic/common scanners. I only mention ssh as an example; you will need to review this periodically as you deploy more software/services on that machine.
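As a concrete sketch of that, assuming ufw is available and using 203.0.113.0/24 as a stand-in for your known source network (the user name and alternate port below are placeholders too):
# allow SSH only from the known network and deny it from everywhere else
sudo ufw allow from 203.0.113.0/24 to any port 22 proto tcp
sudo ufw deny 22/tcp
# or tighten sshd itself in /etc/ssh/sshd_config and restart the service:
# Port 2222
# AllowUsers alice@203.0.113.*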
I just installed a check_mk server. I've done this on other projects, but this is the first time it has failed after creating a site. I checked with netstat and I have this:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:5000 0.0.0.0:* LISTEN 14925/httpd
tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 962/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1318/master
tcp6 0 0 :::111 :::* LISTEN 1/systemd
tcp6 0 0 :::22 :::* LISTEN 962/sshd
tcp6 0 0 ::1:25 :::* LISTEN 1318/master
telnet on port 80 hangs, but ping works. If you have experienced this, any suggestions are welcome.
Regards,
Hassane
Never mind, I found the solution: in my case it was the local firewall that was blocking the connection. Disabling it made everything work.
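Rather than leaving the firewall disabled, the blocked port can also just be opened; a sketch assuming the local firewall is firewalld:
sudo firewall-cmd --list-all                      # inspect what is currently allowed
sudo firewall-cmd --permanent --add-service=http  # open port 80 for the check_mk web interface
sudo firewall-cmd --reload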
Regards,
Hassane
I need to set up GitLab behind Traefik.
Everything works except Git access over SSH from the command line; I don't know how to expose port 22 via Traefik.
Any idea how to set this up? How do I expose port 22 of a Docker container (via Traefik)?
I changed the default port from 22 to 10022.
This is what I get from netstat -tulpn:
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1132/sshd
tcp 0 0 0.0.0.0:5355 0.0.0.0:* LISTEN 1126/systemd-resolv
tcp6 0 0 :::22 :::* LISTEN 1132/sshd
tcp6 0 0 :::443 :::* LISTEN 1590/docker-proxy
tcp6 0 0 :::10022 :::* LISTEN 1440/docker-proxy
tcp6 0 0 :::5355 :::* LISTEN 1126/systemd-resolv
tcp6 0 0 :::80 :::* LISTEN 1602/docker-proxy
tcp6 0 0 :::8080 :::* LISTEN 1578/docker-proxy
udp 0 0 127.0.0.53:53 0.0.0.0:* 1126/systemd-resolv
udp 0 0 0.0.0.0:68 0.0.0.0:* 864/dhclient
udp 0 0 0.0.0.0:5355 0.0.0.0:* 1126/systemd-resolv
udp6 0 0 :::5355 :::* 1126/systemd-resolv
I don't understand why port 10022 is bound by docker-proxy.
When I try:
git push --set-upstream origin master
ssh: connect to host git.myserver.com port 10022: Connection refused
fatal: Could not read from remote repository.
Thank you very much
Traefik is an HTTP reverse proxy, and ssh is not an HTTP protocol. So you'll need to simply publish the container's ssh port on an unused port on the host.
As BMitch said, Traefik won't handle TCP traffic that is not HTTP (and SSH is not HTTP).
See this discussion: https://github.com/containous/traefik/issues/10
I recommend configuring your networking so that the port 22 traffic is routed directly to the container instead of through Traefik.
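A minimal sketch of that, assuming the GitLab container still listens on port 22 internally and that git.myserver.com points at the Docker host (the image tag and repository path are examples):
# publish the container's SSH port directly on the host, bypassing Traefik entirely
docker run -d --name gitlab -p 10022:22 gitlab/gitlab-ce:latest
# then point Git at the non-standard port, either in the remote URL...
git remote set-url origin ssh://git@git.myserver.com:10022/group/repo.git
# ...or once in ~/.ssh/config, so the usual git@git.myserver.com:group/repo.git form keeps working:
# Host git.myserver.com
#     Port 10022
#     User git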
I am trying to run a Docker image from inside Google Cloud Shell (i.e. on a courtesy Google Compute Engine instance) as follows:
docker run -d -p 20000-30000:10000-20000 -it <image-id> bash -c bash
Prior to this step, netstat -tuapn reported the following:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:8998 0.0.0.0:* LISTEN 249/python
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13080 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13081 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13082 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13083 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:13084 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:34490 127.0.0.1:48161 ESTABLISHED -
tcp 0 252 172.17.0.2:22 173.194.92.34:49424 ESTABLISHED -
tcp 0 0 127.0.0.1:48161 127.0.0.1:34490 ESTABLISHED 15784/python
tcp6 0 0 :::22 :::* LISTEN -
So it looks to me as if all the ports between 20000 and 30000 are available, but the run is nevertheless terminated with the following error message:
Error response from daemon: Cannot start container :
failed to create endpoint on network bridge: Timed out
proxy starting the userland proxy
What's going on here? How can I obtain more diagnostic information and ultimately solve the problem (i.e. get my Docker image to run with the whole port range available)?
Opening up a large range of ports doesn't currently scale well in Docker. The above will result in one docker-proxy process being spawned per published port, roughly 10,000 in total, along with all the file descriptors needed to support those processes, plus a long list of firewall rules being added. At some point you'll hit a resource limit on either file descriptors or processes. See issue 11185 on GitHub for more details.
The only workaround when running on a host you control is to not publish the ports at all and update the firewall rules manually; I'm not sure that's even an option with GCE. The best solution is to redesign your requirements to keep the port range small. The last option is to bypass the bridge network entirely with --net=host and run on the host network, where there are no proxies or per-port firewall rules. The latter removes any network isolation you have in the container, so it tends to be recommended against.
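For example, a host-network variant of the original command would look something like this sketch (note that the 20000-30000 to 10000-20000 remapping is lost, because the container binds the host's ports directly):
docker run -d --net=host -it <image-id> bash -c bash
# no -p mapping and no docker-proxy processes: whatever the container listens on
# (10000-20000 here) is bound straight onto the host's interfaces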