Connecting to Janus server always hangs with hangup message from Janus - iOS

I have a problem connecting to the Janus janus.plugin.videoroom plugin from an iOS device using Swift.
Although every step appears to take place correctly, the Janus server sends the following message:
{
  "janus": "hangup",
  "session_id": 3201104494179497,
  "sender": 7759980289270843,
  "reason": "ICE failed"
}
and then disconnects.
Debugging the messages exchanged while connecting to Janus leads me to the following observations:
1- RTCIceGatheringState never changes to Completed.
2- The generated candidates look like the following:
candidate:3215141415 1 udp 1686052607 w.x.y.z 57168 typ srflx raddr w.x.y.z rport 57168 generation 0 ufrag 340a network-id 1 network-cost 10
As you can see, the words "video" and "audio" are replaced by 1 and 0 respectively in the generated candidates.
Do you have any idea about these two observations, and why Janus sends the "ICE failed" message?

I found that the reason for getting the "hangup" message was that I did not set the received jsep (from Janus) on my peer connection.
After setting that jsep answer as the remote description, the "hangup" message was gone!
1- RTCIceGatheringState never changes to Completed
The problem of never reaching the "Completed" state for RTCIceGatheringState was caused by the "continualGatheringPolicy" option in the peer connection configuration, which was set to "gatherContinually". After setting it to "gatherOnce", the Completed state was seen! :)
2- The generated candidates are like the following:
It seems it is normal to have either audio/video or 0/1 here.
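For reference, here is a minimal Swift sketch of the two fixes above, assuming the GoogleWebRTC iOS framework; the handleJanusJsep function and the STUN URL are illustrative placeholders, not part of the Janus API:

import WebRTC

// Gather candidates once so RTCIceGatheringState can actually reach .complete.
let config = RTCConfiguration()
config.iceServers = [RTCIceServer(urlStrings: ["stun:stun.example.org:3478"])]
config.continualGatheringPolicy = .gatherOnce
// ... create the RTCPeerConnection from an RTCPeerConnectionFactory with this config ...

// When the Janus event message carries a jsep answer, apply it as the remote description.
func handleJanusJsep(type: String, sdp: String, peerConnection: RTCPeerConnection) {
    guard type == "answer" else { return }
    let answer = RTCSessionDescription(type: .answer, sdp: sdp)
    peerConnection.setRemoteDescription(answer) { error in
        if let error = error {
            print("setRemoteDescription failed: \(error)")
        }
    }
}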


mqtt.client:connect() "not established" callback is unexpectedly called after a disconnect

The documentation for mqtt.client:connect() states that the last arg is a "callback function for when the connection could not be established".
I have a case where mqtt.client:connect() succeeds, so the "not established" callback is not called (correct behavior). But, later, when my mqtt broker goes down, the "not established" callback function gets unexpectedly activated.
I have the following code:
function handle_mqtt_error (client, reason)
print("mqtt connect failed, reason = "..reason..". Trying again shortly.")
tmr.create():alarm(10 * 1000, tmr.ALARM_SINGLE, do_mqtt_connect)
end
function do_mqtt_connect ()
print("connecting---")
m:connect(MQTT_HOST, MQTT_PORT, 1, function(client)
print("mqtt connected")
client:publish("topic/status", "online", 1, 1)
end,
handle_mqtt_error
)
print("returning---")
end
-- init mqtt client
m = mqtt.Client(MQTT_CLIENT_ID, 120, MQTT_USER, MQTT_PASS)
-- connect to mqtt
print("Starting Test")
do_mqtt_connect()
I see the output from the test begin, as expected, with:
Starting Test
connecting---
returning---
mqtt connected
At this point, I kill my mqtt broker, and I unexpectedly see:
mqtt connect failed, reason = -5. Trying again shortly.
connecting---
returning---
mqtt connect failed, reason = -5. Trying again shortly.
connecting---
returning---
And, happily, but unexpectedly, when I restart my broker, I see:
mqtt connected
So, it appears that handle_mqtt_error() is not only called "when the connection could not be established". It appears that it is also called if mqtt.client:connect() successfully establishes a connection and the connection is later broken.
======= New Information =======
I downloaded the "dev" tree and used the Docker image to build the firmware. Within mqtt.c, I enabled NODE_DBG. The interesting lines are:
enter mqtt_socket_reconnected.
mqtt connect failed, reason = -5. Trying again shortly.
enter mqtt_socket_disconnected.
leave mqtt_socket_disconnected.
leave mqtt_socket_reconnected.
The "mqtt connect failed..." message is printed by handle_mqtt_error(), which is my "connect failed" callback.
Here's my theory. When my test starts, do_mqtt_connect() calls mqtt_socket_connect(), which does this:
espconn_regist_reconcb(pesp_conn, mqtt_socket_reconnected);
This sets reconnect_callback (in app/lwip/app/espconn.c). Later, after my broker goes down and comes back up, espconn_tcp_reconnect() is called (in app/lwip/app/espconn_tcp.c). It calls the reconnect_callback, which is mqtt_socket_reconnected(), which calls handle_mqtt_error().
So, I think the end result doesn't match the documentation, but it works out okay for me. If the behavior did match the documentation, I would just add some Lua code to handle the "offline" event and try to reestablish the mqtt connection. I just thought someone might be interested that the behavior doesn't match the documentation.
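For anyone who wants the documented behavior anyway, here is a minimal sketch of that workaround, using the NodeMCU client's "offline" event and reusing do_mqtt_connect() from above:

-- Reconnect via the documented "offline" event rather than relying on the
-- "not established" callback firing after a broken connection.
m:on("offline", function(client)
  print("mqtt connection lost. Trying again shortly.")
  tmr.create():alarm(10 * 1000, tmr.ALARM_SINGLE, do_mqtt_connect)
end)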

housekeeper [deleted 105926 hist/trends, 0 items, 0 events, 0 sessions, 0 alarms, 0 audit items in 3.718012 sec, idle for 1 hour(s)]

On my Zabbix server, the log shows:
# sudo tail -f /var/log/zabbix/zabbix_server.log
housekeeper [deleted 105926 hist/trends, 0 items, 0 events, 0 sessions, 0 alarms, 0 audit items in 3.718012 sec, idle for 1 hour(s)]
and after this it fails to send the alerts:
5243:20171213:180658.517 Failed sending data to the peer: DATA failed: 550
5243:20171213:180702.182 Failed sending data to the peer: DATA failed: 550
5243:20171213:180705.812 Failed sending data to the peer: DATA failed: 550
Can you help me understand why this occurs and give me a solution?
I solved this.
The cause was that I had configured email alerts to each person multiple times, in the sense that a user would get more than 3 alerts for a problem plus an OK alert. I reduced this to a single alert per problem per person.
When the **Zabbix alerter processes exceed 75% busy**, this error occurs:
Zabbix alerter processes more than 75% busy
and Zabbix is not able to send alerts to all peer hosts.

iOS & Safari 11 WebRTC does not gather STUN/TURN Trickle ICE Candidates

My web application is failing to gather WebRTC relay ICE candidates via a CoTURN server when using Safari 11 on iOS 11 (iPhone 5s & iPhone 7) or desktop. The web application (which establishes a one-way audio only WebRTC peer connection) works fine between the real browsers (Chrome and Firefox) either direct or via CoTURN relay, and I normally get 6-15 ICE candidates on these browsers.
I have a (frankly, unnecessary) call to getUserMedia on the receiving side, which allows host ICE candidates to be produced by Safari. (Note: the user must approve audio and/or video access before Safari will provide host ICE candidates, even on a receive-only end; this is out of "privacy" concerns. I'm past that hurdle, but mentioning it so you won't hit it too.) Before I added the getUserMedia call, I received no ICE candidates. Now I receive two: one with a private IPv4 address and another with an IPv6 address. This is enough to get the app working properly on the same machine or local network, so I'm pretty confident in the other parts of the application code. I am not sure whether my problem is in the application code or the CoTURN server.
Example of the ICE candidates received:
{"candidate":{"candidate":"candidate:622522263 1 udp 2113937151 172.27.0.65 56182 typ host generation 0 ufrag r23H network-cost 50","sdpMid":"audio","sdpMLineIndex":0,"usernameFragment":"r23H"}}
I'm pretty sure the RTCIceServer dictionary for my RTCPeerConnection is in line with the following standards:
https://w3c.github.io/webrtc-pc/webrtc.html
https://www.rfc-editor.org/rfc/rfc7064
https://www.rfc-editor.org/rfc/rfc7065
And I've tried multiple variations of parameters:
// For Example:
var RPCconfig = {
  iceServers: [{
    urls: "turn:Example.live",
    username: "un",
    credential: "pw"
  }]
};

// Or:
var RPCconfig = {
  iceServers: [{
    urls: "turns:Example.live",
    username: "un",
    credential: "pw",
    credentialType: "password"
  }, {
    urls: "stun:Example.live"
  }]
};

// And even more desperate attempts...
var RPCconfig = {
  iceServers: [{
    urls: "turn:Example.live?transport=tcp",
    username: "un",
    credential: "pw",
    credentialType: "password"
  }]
};
Here's an example of the signaling process log to give an idea of what is going on. This is from the receiving side, which is Safari 11. The other browser was Chrome (compare 6 vs. 2 ICE candidates). The state changes refer to oniceconnectionstatechange.
SDP Offer received.
Sending signal SDP
Sending signal IceCandidate
Sending signal IceCandidate
ICE Candidate Received
4:08:25 AM State Change -> checking
ICE Candidate Received
ICE Candidate Received
ICE Candidate Received
ICE Candidate Received
ICE Candidate Received
4:08:40 AM State Change -> failed
CoTURN is configured quite liberally in terms of accepting every possible transport method as far as I am aware. It works well for providing ICE Candidates and as a relay for the other browsers.
Any direction would be greatly appreciated, even if it is just a sample RTCIceServer dictionary that works or a proven TURN server to try.

LuaSocket - TCP 2nd message not sending

I've been searching Google for a while and there seem to be no solutions to this problem.
I am using LuaSocket as a simple way to connect to an external server I created, and I am able to connect to it successfully and send a message.
However, when I try to send a second message later on, the external server does not seem to receive it, even though I am still connected to the socket.
socket = require("socket")
host, port = ip, port
tcp = assert(socket.tcp())
tcp:settimeout(0)
tcp:connect(host, port);

msg = {
  ["status"]="connect",
  ["usrName"]=username
}
msg = Json.Encode(msg)
tcp:send(msg); -- The server received this message.

-- Later in my code, I attempt to send another message.
msg = {
  ["status"]="anotherMessage",
  ["usrName"]=username
};
msg = Json.Encode(msg)
tcp:send(msg); -- This message is not received, even though I'm still connected.
You need to show what happens on the other side, as it may simply not be reading even though the connection is open. You also don't say what exactly happens when the "message is not sending": do you get an error? Does the script finish but the message never arrives?
There are several things you can try (a combined sketch follows this list):
Switch to the (default) synchronous send until you get it working; remove tcp:settimeout(0), as your send may simply fail with a "timeout" error if the other side is not ready to read the message.
Check the error message from the :send call to see whether it is timing out:
local ok, err = tcp:send(msg)
Use socket.select to check whether the other side is ready to accept the message you are sending.
Try adding "\r\n" at the end of your serialized JSON.
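Here is a minimal sketch combining those suggestions, reusing tcp, msg and Json.Encode from the question; the 1-second select timeout is an arbitrary choice:

-- Blocking socket (no settimeout(0)), newline-terminated payload, result checked.
local payload = Json.Encode(msg) .. "\r\n"

-- Optionally confirm the socket is ready for writing (waits up to 1 second).
local _, writable = socket.select(nil, { tcp }, 1)
if #writable > 0 then
  local last, err, partial = tcp:send(payload)
  if not last then
    print("send failed: " .. tostring(err) .. ", last byte sent: " .. tostring(partial))
  end
end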

How can I apply timeout function to LMAX Disruptor Queue?

To developers/users of LMAX Disruptor http://code.google.com/p/disruptor/ :
My question:
Can anyone suggest an approach for applying a timeout function to the Disruptor, e.g. using an EventHandler?
Here is one scenario that came up in my line of work:
Outbox - messages sent to the Server over a network
Inbox - ACK messages received from the Server
ACK Handler - marks outbox messages as ACKed
Timeout Handler - marks outbox messages as NACKed (much needed, but where does it fit into the Disruptor design?)
Does anyone share the same opinion?
Or can anyone point out why it is unnecessary?
I hope the ensuing debate will be brief.
Thank you.
To clarify: the timeout handler would "fire" after a certain period of time if a message could not be delivered?
The way it works with the Disruptor is that you have a ring buffer for inbound and a ring buffer for outbound messages. So an email comes in and you place it into the inbound ring buffer using an appropriate event. Then you process the message (i.e. decode, analyze, log, store) and send it along to another system by placing it into the outbound ring buffer. Another handler takes the message and stores it in a database or sends it to another server using SMTP. If an error/timeout etc. occurs, you create an event in the inbound ring buffer signaling the error (NACK) and process this message. Does that make sense?
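A minimal sketch of that inbound/outbound layout with the Disruptor DSL; the event classes and handler logic (OutboxEvent, AckEvent, the fake "delivered" flag) are illustrative assumptions, only the com.lmax.disruptor types are real:

import com.lmax.disruptor.EventHandler;
import com.lmax.disruptor.RingBuffer;
import com.lmax.disruptor.dsl.Disruptor;
import com.lmax.disruptor.util.DaemonThreadFactory;

public class AckNackSketch {

    // Outbound: messages waiting to be sent to the server.
    static class OutboxEvent { long msgId; String payload; }

    // Inbound: ACKs received from the server, or locally generated NACKs.
    static class AckEvent { long msgId; boolean ack; }

    public static void main(String[] args) throws InterruptedException {
        // Inbound ring buffer: marks outbox messages as ACKed or NACKed.
        Disruptor<AckEvent> inbound =
                new Disruptor<>(AckEvent::new, 1024, DaemonThreadFactory.INSTANCE);
        EventHandler<AckEvent> ackHandler = (event, sequence, endOfBatch) -> {
            if (event.ack) {
                System.out.println("ACK  -> mark outbox msg " + event.msgId + " as delivered");
            } else {
                System.out.println("NACK -> mark outbox msg " + event.msgId + " for retry");
            }
        };
        inbound.handleEventsWith(ackHandler);
        RingBuffer<AckEvent> inboundRing = inbound.start();

        // Outbound ring buffer: sends messages; on error/timeout it publishes a
        // NACK event back onto the inbound ring buffer.
        Disruptor<OutboxEvent> outbound =
                new Disruptor<>(OutboxEvent::new, 1024, DaemonThreadFactory.INSTANCE);
        EventHandler<OutboxEvent> sendHandler = (event, sequence, endOfBatch) -> {
            boolean delivered = false; // pretend the send timed out
            if (!delivered) {
                inboundRing.publishEvent((ack, seq) -> {
                    ack.msgId = event.msgId;
                    ack.ack = false;
                });
            }
        };
        outbound.handleEventsWith(sendHandler);
        RingBuffer<OutboxEvent> outboundRing = outbound.start();

        // Producer side: enqueue an outbound message.
        outboundRing.publishEvent((e, seq) -> { e.msgId = 42; e.payload = "hello"; });

        Thread.sleep(500); // let the daemon handler threads run for this demo
        outbound.shutdown();
        inbound.shutdown();
    }
}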
