I'm trying to set up a Docker container running FreeSWITCH so I can deploy it on a Debian server.
For development, I use a Mac running the FreeSWITCH container inside boot2docker.
I need the container to work in both environments.
I manage to connect to the FS server with a softphone and place a call, but after 32 seconds the call drops.
freeswitch@internal> version
FreeSWITCH Version 1.4.15-1~64bit (-1 64bit)
This is the SIP 200 OK packet that FS sends and expects an answer to:
14 0.029449000 192.168.59.103 192.168.59.3 SIP/SDP 1312 Status: 200 OK |
SIP/2.0 200 OK
Via: SIP/2.0/UDP 192.168.1.141:49822;rport=49822;branch=z9hG4bKPjFf3iaNQ0tDY1fySiz1zGSEXSVFpZeE2b;received=192.168.59.3
From: "1000" <sip:1000@192.168.59.103>;tag=tNpJHmkYg0ke5GYyvIhkdBSIMM.ujzXE
To: <sip:5000@192.168.59.103>;tag=6N3793jeayeUe
Call-ID: 4wWTcxr9Q4OqlgT2Fs-8SeOkLhVYXTLb
CSeq: 10007 INVITE
Contact: <sip:5000@172.17.0.6:5060;transport=udp>
User-Agent: FreeSWITCH-mod_sofia/1.4.15-1~64bit
Accept: application/sdp
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REGISTER, REFER, NOTIFY, PUBLISH, SUBSCRIBE
Supported: timer, path, replaces
Allow-Events: talk, hold, conference, presence, as-feature-event, dialog, line-seize, call-info, sla, include-session-description, presence.winfo, message-summary, refer
Content-Type: application/sdp
Content-Disposition: session
Content-Length: 333
Remote-Party-ID: "5000" <sip:5000@192.168.59.103>;party=calling;privacy=off;screen=no
v=0
o=FreeSWITCH 1422731206 1422731207 IN IP4 172.17.0.6
s=FreeSWITCH
c=IN IP4 172.17.0.6
t=0 0
m=audio 16386 RTP/SAVP 9 101
a=rtpmap:9 G722/8000
a=rtpmap:101 telephone-event/8000
a=fmtp:101 0-16
a=ptime:20
a=rtcp:16387 IN IP4 172.17.0.6
a=crypto:1 AES_CM_128_HMAC_SHA1_80 inline:/NTeSA7Od0+1Uo1/3wIclZwEiKJ+R4Mh8gyTx+5O
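The SDP above already shows the core of the NAT trouble: the connection address it advertises is the Docker-internal one, which the softphone cannot route to. A quick sketch (plain Python, SDP abridged from the capture above) that pulls out that address:

```python
# Abridged copy of the SDP body from the 200 OK above.
SDP = """\
v=0
o=FreeSWITCH 1422731206 1422731207 IN IP4 172.17.0.6
s=FreeSWITCH
c=IN IP4 172.17.0.6
t=0 0
m=audio 16386 RTP/SAVP 9 101
"""

def sdp_connection_address(sdp: str) -> str:
    """Return the address from the first c= line (RFC 4566, section 5.7)."""
    for line in sdp.splitlines():
        if line.startswith("c="):
            # Format: c=<nettype> <addrtype> <connection-address>
            return line.split()[2]
    raise ValueError("no c= line found")

print(sdp_connection_address(SDP))  # 172.17.0.6 -- a Docker-internal address
```

That 172.17.0.6 is reachable only from inside the Docker bridge network, so the softphone has nowhere valid to send its media or its ACK.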
Then this happens:
1745 32.031059000 192.168.59.103 192.168.59.3 SIP 664 Request: BYE sip:29716085@192.168.59.3:49822 |
BYE sip:29716085@192.168.59.3:49822 SIP/2.0
Via: SIP/2.0/UDP 172.17.0.6;rport;branch=z9hG4bK0m429X0ac150r
Max-Forwards: 70
From: <sip:5000@192.168.59.103>;tag=6N3793jeayeUe
To: "1000" <sip:1000@192.168.59.103>;tag=tNpJHmkYg0ke5GYyvIhkdBSIMM.ujzXE
Call-ID: 4wWTcxr9Q4OqlgT2Fs-8SeOkLhVYXTLb
CSeq: 71037748 BYE
Contact: <sip:5000@172.17.0.6:5060;transport=udp>
User-Agent: FreeSWITCH-mod_sofia/1.4.15-1~64bit
Allow: INVITE, ACK, BYE, CANCEL, OPTIONS, MESSAGE, INFO, UPDATE, REGISTER, REFER, NOTIFY, PUBLISH, SUBSCRIBE
Supported: timer, path, replaces
Reason: SIP;cause=408;text="ACK Timeout"
Content-Length: 0
I'm guessing FS never receives the ACK from the softphone because of the various NAT layers, and drops the call assuming it didn't connect.
192.168.1.141 is my Mac's IP address on the LAN (as shown in the Via header of the 200 OK packet)
192.168.59.103 is the boot2docker VM
192.168.59.3 is my Mac on the boot2docker virtual network
172.17.0.xxx is the FS server's IP address on the Docker network (this IP changes, depending on how many containers are/were running before)
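The 32-second figure is itself a strong clue: per RFC 3261, a SIP server gives up waiting for the ACK to its 2xx after 64×T1, and with the default T1 of 500 ms that is exactly 32 s, which matches the `Reason: SIP;cause=408;text="ACK Timeout"` in the BYE. The arithmetic, spelled out:

```python
# RFC 3261: T1 defaults to 500 ms; the INVITE server transaction waits
# 64*T1 for the ACK before giving up (the "ACK Timeout").
T1 = 0.5  # seconds, RFC 3261 default
ack_timeout = 64 * T1
print(ack_timeout)  # 32.0 -- matches the observed call drop at 32 seconds
```

So the call isn't "dropping"; FS is tearing it down on schedule because the ACK never makes it back through the NAT layers.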
This is what I have in my sip_profiles/internal.xml:
<param name="rtp-ip" value="$${local_ip_v4}"/>
<!-- ip address to bind to, DO NOT USE HOSTNAMES ONLY IP ADDRESSES -->
<param name="sip-ip" value="$${local_ip_v4}"/>
<param name="hold-music" value="$${hold_music}"/>
<param name="apply-nat-acl" value="nat.auto"/>
<!-- Docker NAT magic -->
<param name="ext-sip-ip" value="$${external_sip_ip}"/>
<param name="ext-rtp-ip" value="$${external_rtp_ip}"/>
And in my vars.xml:
<X-PRE-PROCESS cmd="set" data="external_rtp_ip=192.168.59.103"/>
<X-PRE-PROCESS cmd="set" data="external_sip_ip=192.168.59.103"/>
From fs_cli:
freeswitch@internal> eval ${external_rtp_ip}
192.168.59.103
freeswitch@internal> eval ${external_sip_ip}
192.168.59.103
freeswitch@internal> eval ${ext-rtp-ip}
-ERR no reply
freeswitch@internal> eval ${ext-sip-ip}
-ERR no reply
I have opened ports 16384 to 16484 UDP for RTP traffic, and 5060, 5070 and 5080 UDP & TCP for SIP, both in FS and on the container.
An echo test reveals that audio flows both ways.
Any idea what is happening and how to fix?
I had a similar problem with my FreeSWITCH installation, and I solved it by configuring the switch to constantly ping the registered softphones, so as to keep the NAT pinholes open. Here are the settings that did it for me:
<param name="nat-options-ping" value="true"/>
<param name="all-reg-options-ping" value="true"/>
<!-- One successful options ping is enough to verify that a phone is reachable -->
<param name="sip-user-ping-min" value="1"/>
<!-- Three pings need to be lost to consider the phone disconnected -->
<param name="sip-user-ping-max" value="4"/>
<param name="unregister-on-options-fail" value="true"/>
<!-- Freeswitch is coded to send
pings at variable intervals with the mean value determined by the
variable below and a normal distribution with a deviation of half the
interval value. -->
<param name="ping-mean-interval" value="15"/>
They go to sip_profiles/internal.xml.
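For what it's worth, the comment about ping-mean-interval can be sanity-checked numerically: with a mean of 15 s and a deviation of half that, the intervals cluster around 15 s, so NAT bindings (which typically expire after 30-60 s of silence) stay warm. A hypothetical simulation of that distribution (plain Python, not FreeSWITCH code):

```python
import random
import statistics

MEAN = 15.0            # ping-mean-interval from the profile above
DEVIATION = MEAN / 2   # "a deviation of half the interval value"

# Draw a large sample of hypothetical ping intervals; clamp at a small
# positive floor, since a real timer cannot fire at a negative interval.
intervals = [max(0.1, random.gauss(MEAN, DEVIATION)) for _ in range(100_000)]
print(round(statistics.mean(intervals), 1))  # close to 15
```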
OK, so here is an update.
It turns out the problem was that FS wasn't able to detect its IP properly, and the packets it sent back contained the wrong IP.
I came up with this script as part of my container boot sequence:
https://github.com/Coaxial/mushimushi/blob/a29ae537314e89bc7f9808c2bd7fdb4917eafa04/lib/freeswitch_conf/start.sh#L7-L21
I then include the resulting XML file into the general config: https://github.com/Coaxial/mushimushi/blob/a29ae537314e89bc7f9808c2bd7fdb4917eafa04/lib/freeswitch_conf/freeswitch.xml#L3
This way I can reference the variables to configure the relevant Sofia profiles: https://github.com/Coaxial/mushimushi/blob/a29ae537314e89bc7f9808c2bd7fdb4917eafa04/lib/freeswitch_conf/autoload_configs/sofia.conf.xml#L29-L32
This works when run in boot2docker but isn't battle tested in "vanilla" docker yet.
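The linked script boils down to: discover the container's own address at boot, then write it into an XML file of X-PRE-PROCESS variables that the main config includes. A hypothetical re-sketch of that idea in Python (function and variable names are my own, not the ones from the repo):

```python
import socket

def container_ip() -> str:
    """Best-effort local-address discovery: connect a UDP socket toward a
    routable address (no packet is actually sent) and read back the
    source address the kernel picked for it."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("198.51.100.1", 53))  # TEST-NET-2 address; never contacted
        return s.getsockname()[0]
    finally:
        s.close()

def vars_xml(ip: str) -> str:
    """Render the pre-process variables that the Sofia profiles reference."""
    return (
        '<include>\n'
        f'  <X-PRE-PROCESS cmd="set" data="external_sip_ip={ip}"/>\n'
        f'  <X-PRE-PROCESS cmd="set" data="external_rtp_ip={ip}"/>\n'
        '</include>\n'
    )

print(vars_xml("192.168.59.103"))
```

Running something like this at container start and writing the result where freeswitch.xml includes it keeps the advertised IP correct no matter what address Docker hands the container.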
I have a Deluge client (in a Docker container; that's likely irrelevant).
I want to be able to connect to the daemon from the outside world while keeping it behind a reverse proxy.
I don't necessarily need TLS, but I suspect HTTP/2 may require it.
What works:
Connecting locally on the network to the Deluge RPC with the Deluge desktop, Android and web UI clients works well.
Sending requests to the nginx server is OK (I can see hits in the nginx logs).
All the surrounding networking (firewalls, port forwarding, DNS) is fine.
What doesn't work:
The Deluge client can't connect to the HTTP server.
nginx config:
server {
server_name deluge.example.com;
listen 58850;
location / {
proxy_pass grpc://localhost:58846;
}
ssl_certificate /etc/ssl/nginx/example.com.pem;
ssl_certificate_key /etc/ssl/nginx/example.com.key;
proxy_request_buffering off;
gzip off;
charset utf-8;
error_log /var/log/nginx/nginx_deluge.log debug;
}
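For future readers: the Deluge daemon's RPC protocol is not HTTP at all (it is its own TLS-wrapped binary protocol), so a `proxy_pass` inside an `http` context cannot work regardless of the scheme used. The usual approach is plain TCP passthrough with nginx's stream module. A sketch only, using the port numbers from the config above and assuming the daemon listens on 58846:

```nginx
# Goes at the top level of nginx.conf, outside any http {} block.
stream {
    server {
        listen 58850;                 # public-facing port from the question
        proxy_pass localhost:58846;   # deluged's own RPC port
    }
}
```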
Major edit:
As it turns out, I believed Deluge's JSON-RPC and gRPC were more similar than just the "RPC" in the name. Hence my original issue, "nginx Deluge RPC doesn't work", is no longer relevant.
Unfortunately, the same issue persists: I still can't connect through the proxy even when using a regular HTTP proxy, while I can make HTTP requests locally.
I will surely post an update or even an answer should I figure it out in the next few days...
When I try to connect with the Deluge client, I get this error message in the log file:
2022/06/14 16:59:55 [info] 1332115#1332115: *7 client sent invalid method while reading client request line, client: <REDACTED IPv4>, server: deluge.example.com, request: " Fu�Uq���U����a(wU=��_`. a��¹�(���O����f�"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http finalize request: 400, "?" a:1, c:1
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 event timer del: 17: 243303738
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http special response: 400, "?"
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 http set discard body
2022/06/14 16:59:55 [debug] 1332115#1332115: *7 HTTP/1.1 400 Bad Request
Server: nginx/1.22.0
Date: Tue, 14 Jun 2022 16:59:55 GMT
Content-Type: text/html
Content-Length: 157
Connection: close
When I change the line listen 58850; to listen 58850 http2;, as I probably should, I get the following error (log verbosity set to "debug"):
2022/06/14 15:04:00 [info] 1007882#1007882: *3654 client sent invalid method while reading
client request line, client: <REDACTED IPv4>,
server: deluge.example.com, request: "x�;x��;%157?O/3/-�#�D��"
The gibberish is seemingly identical when trying to connect from a different device on a different network. Once it was Dx�;x��;%157?O/3/-�#�E� (note the leading D), but all other attempts again lack the leading D.
or this error: (log verbosity set to "info")
2022/06/14 17:09:13 [info] 1348282#1348282: *14 invalid connection preface while processing HTTP/2 connection, client: <REDACTED IPv4>, server: 0.0.0.0:58850
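Both errors are consistent with one explanation: the client is sending a binary protocol (Deluge's RPC runs inside its own TLS session) to a listener expecting plaintext HTTP, so nginx logs the raw bytes as an "invalid method" (or, with http2, as an invalid connection preface). A hypothetical sniff-test showing how to tell the two apart (a TLS handshake starts with byte 0x16 followed by 0x03; an HTTP request starts with an ASCII method):

```python
def classify_first_bytes(data: bytes) -> str:
    """Crudely classify what a client sent first on a socket."""
    if len(data) >= 2 and data[0] == 0x16 and data[1] == 0x03:
        return "tls-handshake"   # TLS record type 22 (handshake), version 3.x
    methods = (b"GET", b"POST", b"PUT", b"HEAD", b"DELETE", b"OPTIONS", b"CONNECT")
    if data.split(b" ", 1)[0] in methods:
        return "http"
    return "unknown-binary"      # what nginx reports as an "invalid method"

print(classify_first_bytes(b"GET / HTTP/1.1\r\n"))      # http
print(classify_first_bytes(b"\x16\x03\x01\x00\xc8"))    # tls-handshake
```

Capturing the first few bytes the Deluge client sends (e.g. with tcpdump) and running them through a check like this would confirm whether it is TLS rather than mangled HTTP.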
I tried decoding the gibberish with various encodings, hoping it was just a badly encoded, more helpful error message or a lead to a solution.
I also looked through the first two pages of Google results, hoping the error messages would point me to a solution someone else has found for my problem.
Environment:
Docker version 20.10.17, build 100c70180f
nginx version: nginx/1.22.0
deluged 2.0.5
libtorrent: 2.0.6.0
I have a cluster consisting of 4 nodes in total, 3 servers and 1 management node, working properly.
At the beginning of the month we planned to patch the OS, and we started with the first server node using this procedure:
Stop service
OS patching
Server restart
Start service
The service on the first patched node, "serverA", fails to restart with this error:
Log entries from the cluster join:
serverA:
| INFO | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: shared=true ordered=false failed to connect to peer 10.237.110.195( Server serverB:9993):1024 because: java.net.ConnectException: Connection timed out (Connection timed out)
| WARN | region-dm-12 | ache.geode.internal.tcp.Connection | --> Connection: Attempting reconnect to peer 10.237.110.195( Server serverB:9993):1024
ServerMgmt:
| WARN | pool-3-thread-1 | tributed.internal.ReplyProcessor21 | --> 15 seconds have elapsed while waiting for replies: <CreateRegionProcessor$CreateRegionReplyProcessor 44180 waiting for 1 replies from [10.237.110.194( Server serverA:632):1024]> on 10.237.110.225( Management:6033):1024 whose current membership list is: [[10.237.110.196( Server serverC:16805):1024, 10.237.110.225( Management:6033):1024, 10.237.110.195( Server serverB:9993):1024, 10.237.110.194( Server serverA:632):1024]]
Connectivity between the systems was verified with tcpdump; UDP 1024 is flowing fine.
We have tried redeploying the service and made numerous attempts, but we always get the same error during startup.
Any suggestions? Thank you.
Marco.
For serverA to log this error message, it was probably able to send UDP messages to serverB but is failing to create a TCP connection. It's hard to say why, though: a firewall issue, some TCP configuration issue...?
Check whether serverB has anything interesting in its logs. Since you are using tcpdump, you should be watching for the TCP connection to serverB:9993, since it looks like that is what failed.
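To separate "network/TCP problem" from "Geode problem", it's also worth probing the failing connection from serverA outside of Geode entirely. A minimal sketch (the host and port in the comment are the ones from your logs; substitute your own):

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connect; True means the three-way handshake completed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From serverA, against the peer that the Geode log says timed out:
# print(tcp_reachable("10.237.110.195", 9993))
```

If this returns False from serverA but True from serverC, the problem is in the network path or serverB's listener, not in Geode's membership logic.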
There is no firewall between the systems. We analyzed the network traffic again during startup of node A, and we can see that communication is established between all systems. What we did detect is that on port 2323, which is configured as the locator, node A sends packets to nodes B and C, but only receives packets back from node C, not from node B. This is, for us, another sign that node B has an issue. Is there a way to check our assumption from node B's side?
A node ip .194
B node ip .195
C node ip .196
Management ip .225
I've got a build of the OpenVPN3 client library (https://github.com/OpenVPN/openvpn3) connecting to an OpenVPN 2 server (2.4.4). This works for my Mac and Windows builds, but fails when the client is iOS.
The iOS client appears to connect, in the sense that I get my custom up script invoked and I can see what I assume are keepalive/heartbeat packets going back and forth between client and server. The client doesn't time out as long as these packets are allowed to continue. However, as soon as the client attempts to access any web page over the tunnel, I get packets dropped on the server side with errors like the following:
Fri Mar 15 20:08:27 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=10 seen
Fri Mar 15 20:08:28 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
Fri Mar 15 20:08:29 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=5 seen
Fri Mar 15 20:08:30 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=9 seen
Fri Mar 15 20:08:31 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=8 seen
Fri Mar 15 20:08:32 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=2 seen
Fri Mar 15 20:08:34 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=13 seen
Fri Mar 15 20:08:38 2019 11e9-475e-04b1a640-b6f1-dda173e0051f/10.101.172.10:65334 IP packet with unknown IP version=7 seen
I'm using the same server and client configs for iOS as I was using when the client was Mac and Windows.
Server configs:
port 1194
proto udp
dev tun
ca /opt/certs/ca-cert.pem
cert /opt/certs/server.pem
key /opt/certs/server-key.pem
dh /opt/certs/dh2048.pem
tls-auth /opt/certs/ta.key 0
server 10.8.0.0 255.255.0.0
keepalive 5 15
verb 3
script-security 3
client-connect "/usr/local/bin/sdp-updown"
client-disconnect "/usr/local/bin/sdp-updown"
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
Client configs:
dev tun
proto udp
remote ... server and port omitted
remote-cert-tls server
key-direction 1
server-poll-timeout 5
cipher AES-256-CBC
tls-cipher TLS-DHE-RSA-WITH-AES-256-CBC-SHA256
comp-lzo
... routes omitted
<ca>
... CA omitted
</ca>
<cert>
... cert omitted
</cert>
<key>
... private key omitted
</key>
<tls-auth>
... OpenVPN static key omitted
</tls-auth>
I've tried a number of different settings for cipher and tls-cipher. When those settings are supported on both sides I can get connected, but I still get the same IP packet with unknown IP version error. When either cipher or tls-cipher isn't supported on the server or the client, we fail to negotiate TLS and don't get connected at all.
I found a number of troubleshooting forum posts regarding this error, and most of them are resolved by setting the compression settings to the same value on both ends. My iOS client build seems to think it has no ability to perform compression, even though I think I've linked successfully against the LZ4 library: I compiled LZ4 for iOS and passed LZ4=1 when building a dylib for OpenVPN itself. However, when the iOS client connects it reports settings like:
ENV[IV_AUTO_SESS] = 1
ENV[IV_COMP_STUBv2] = 1
ENV[IV_COMP_STUB] = 1
ENV[IV_LZO_STUB] = 1
ENV[IV_PROTO] = 2
ENV[IV_TCPNL] = 1
ENV[IV_NCP] = 2
ENV[IV_PLAT] = ios
ENV[IV_VER] = 3.1.2
I notice that this does not include IV_LZ4, which I take to mean the client thinks it can't perform compression. That said, even when my configs disable compression I get the same results. I tried omitting any compression setting at all, comp-lzo no, compress stub, and compress stub-v2; none of these resulted in any different behavior.
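The error itself is mechanical: the server reads the first byte of each decapsulated packet and takes its high nibble as the IP version, expecting 4 or 6. Random-looking versions (10, 7, 5, 9, 13...) therefore mean the decapsulated plaintext is effectively garbage, which is exactly what a compression/framing mismatch between the peers produces. A sketch of the check (my own illustration, not OpenVPN's actual code):

```python
def ip_version(first_byte: int) -> int:
    """High nibble of the first payload byte, as a tun driver reads it."""
    return first_byte >> 4

print(ip_version(0x45))  # 4  -- normal IPv4 header byte (version 4, IHL 5)
print(ip_version(0x60))  # 6  -- IPv6
print(ip_version(0xA3))  # 10 -- garbage, like the "unknown IP version=10" log
```

This supports the compression-mismatch theory: the tunnel and keepalives work (control channel fine), but data-channel payloads are being (de)framed differently on each end.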
My questions are thus:
What could be the cause of my IP packet with unknown IP version errors when actually sending packets over the data channel?
If what I'm seeing is actually a compression setting error, how do I convince OpenVPN to disable compression entirely? Alternatively, what have I done wrong to link LZ4 into my iOS OpenVPN dylib?
I am trying to decode a camera RTSP stream using the ffmpeg libraries inside an Ubuntu Docker container. The ffmpeg debug output shows that it successfully negotiates the RTSP digest authentication (RTSP/1.0 200 OK) and receives an SPS (NALU 7) and PPS (NALU 8), but nothing after that. It times out, retries, etc. That doesn't really make sense to me.
The same code compiled and run locally (not in Docker) works fully.
Also, if I decode a file, the code works fine both locally and in the Docker container, so the basic ffmpeg decode path is working. The difficulty is with the stream interface running in Docker.
Is there additional authentication through the Docker interface, or maybe port access, or something? I'm not much of a networking guy, so I'm really lost at this point.
The ffmpeg log is below, and my docker run command is:
docker run -it --name VideoRx videorx:latest (also tried with -p 554)
Any help will be very much appreciated.
Thanks,
Wayne
avformat_version(): 3756900 Build: 3756900 Ident: Lavf57.83.100
avformat_open_input(): rtsp://admin:public_pwd@192.168.1.237
Probing rtsp score:100 size:0
[tcp # 0x56263b430a20] No default whitelist set
[rtsp # 0xaddr1] Sending:
OPTIONS rtsp://192.168.1.237:554 RTSP/1.0
... [snipped]
Initial authentication handshake (OPTIONS, DESCRIBE, SETUP).
All success, server replies: 'RTSP/1.0 200 OK'
....
[rtsp # 0xaddr1] Sending:
PLAY rtsp://192.168.1.237:554/ RTSP/1.0
Range: npt=0.000-
CSeq: 5
User-Agent: Lavf57.83.100
Session: 420467284
Authorization: Digest username="admin", realm="IP Camera(C1003)", nonce="129b254c8da4e0ffb530f64f79938bcd", uri="rtsp://192.168.1.237:554/", response="82c6c0f1fadea3739846866e8e50e855"
--
[rtsp # 0xaddr1] line='RTSP/1.0 200 OK'
[rtsp # 0xaddr1] line='CSeq: 5'
[rtsp # 0xaddr1] line='Session: 420467284'
[rtsp # 0xaddr1] line='RTP-Info: url=rtsp://192.168.1.237:554/trackID=1;seq=43938;rtptime=4022155312'
[rtsp # 0xaddr1] line='Date: Thu, Aug 02 2018 15:53:00 GMT'
[rtsp # 0xaddr1] line=''
avformat_open_input(): Success erc: 0
avformat_find_stream_info()
[h264 # 0xaddr2] nal_unit_type: 7, nal_ref_idc: 3
[h264 # 0xaddr2] nal_unit_type: 8, nal_ref_idc: 3
[rtsp # 0xaddr1] UDP timeout, retrying with TCP
[rtsp # 0xaddr1] ...
... Stalls waiting for additional packets
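Update with a hedged suspicion: the log stalls exactly where RTP over UDP should start arriving. In a bridged container, the camera streams RTP back to UDP ports that were never published, so the packets die at Docker's NAT; the "UDP timeout, retrying with TCP" line fits that. Two things that may be worth trying (the image/container names are the ones from my run command; -rtsp_transport is a standard ffmpeg option):

```shell
# Option 1: give the container the host's network stack, so the camera's
# RTP/UDP replies can reach the process directly (bypasses the bridge NAT).
docker run -it --network host --name VideoRx videorx:latest

# Option 2: force RTP interleaved over the existing RTSP/TCP connection
# from the start instead of waiting for the UDP timeout fallback, e.g.
# when probing with the ffmpeg CLI:
ffprobe -rtsp_transport tcp "rtsp://admin:public_pwd@192.168.1.237"
```

In library code, the equivalent of option 2 is setting "rtsp_transport" to "tcp" in the options dictionary passed to avformat_open_input().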
I am running Nitrogen 2.0.x on Windows 7 Home Premium, on an HP Pavilion Entertainment PC laptop.
Nitrogen starts with inets, and I have failed to change or dictate the IP address of the web server.
Once it starts, it tells me to go to my browser and hit http://localhost:8000, as in the shell output below:
erl -make
Starting Nitrogen on Inets (http://localhost:8000)...
Eshell V5.8.4 (abort with ^G)
Hitting the link in almost all available browsers shows that the page could not be found. When I ask the emulator about the ports, this is its output:
(motv@josh.ekampus.internal)1> inet:i().
Port Module Recv Sent Owner Local Address Foreign Address State
3109 inet6_tcp 0 0 *:8000 *:* ACCEPTING
618 inet_tcp 0 0 *:9543 *:* ACCEPTING
637 inet_tcp 4 19 localhost:9544 localhost:4369 CONNECTED
Port Module Recv Sent Owner Local Address Foreign Address State
ok
(motv@josh.ekampus.internal)2>
I have a strong suspicion that inet6_tcp means it is listening over IPv6 while inet_tcp means IPv4, though I'm not very sure about this. But all in all, I cannot connect to my Nitrogen. These are the running applications:
(motv@josh.ekampus.internal)2> application:which_applications().
[{quickstart,"Nitrogen Quickstart",[]},
{inets,"INETS CXC 138 49","5.6"},
{nprocreg,"NProcReg - Simple Erlang Process Registry.",
"0.1"},
{stdlib,"ERTS CXC 138 10","1.17.4"},
{kernel,"ERTS CXC 138 10","2.14.4"}]
(motv@josh.ekampus.internal)3>
Can someone explain why I cannot reach my local Nitrogen framework by just hitting http://localhost:8000 in the browser, given the observations above? And how can I connect to it from my browser?
Some guesses:
Did you try http://127.0.0.1:8000 ?
If that doesn't work, can you start Erlang with forced IPv4 support (I think):
-proto_dist inet_tcp
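Building on the guesses above: the inet:i() output shows the :8000 listener owned by inet6_tcp, so whether http://localhost:8000 works depends on which address family "localhost" resolves to first and whether the IPv6 socket accepts IPv4-mapped connections. A quick, Nitrogen-agnostic way to inspect the resolution order on the affected machine (plain Python; 8000 is the port from the question):

```python
import socket

# Ask the resolver what "localhost" maps to, in the order a browser
# would try the addresses.
for family, _type, _proto, _canon, addr in socket.getaddrinfo(
        "localhost", 8000, proto=socket.IPPROTO_TCP):
    name = {socket.AF_INET: "IPv4", socket.AF_INET6: "IPv6"}.get(family, "?")
    print(name, addr[0])
```

If "localhost" resolves to 127.0.0.1 first while the server only listens on the IPv6 wildcard, the browser's first connection attempt can fail, which matches the symptoms described.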