If the receiver is not responding, is there any way to get the connection state of GCKSession & GCKCastSession? - ios

iOS version: iOS 10.3
Using the released iOS Cast SDK v3.4
Hi, I'm having trouble with the 'suspendSessionsWhenBackgrounded' option of GCKCastOptions. I set the option to YES so that I can control the RemoteMediaClient of the CastSession from MPRemoteCommandCenter while the app is in the background.
But when I control the RemoteMediaClient through MPRemoteCommandCenter in the background, I get the following error messages in the logger and the cast session is suspended.
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke [(null)] Error reading from SSL buffer to stream buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] -[GCKCastDeviceConnector heartbeatChannelDidTimeout:] Receiver not responding to heartbeat, disconnecting.
I get these errors, but the states of GCKCastSession and GCKSession are still GCKConnectionStateConnected.
When I received the error messages, I printed the connection states from the session manager:
GCKSessionManager *sessionManager = [GCKCastContext sharedInstance].sessionManager;
sessionManager.hasConnectedSession ==> YES;
sessionManager.hasConnectedCastSession ==> YES;
sessionManager.currentSession.connectionState ==> GCKConnectionStateConnected
sessionManager.currentCastSession.connectionState ==> GCKConnectionStateConnected
I would expect that when the GCKSession moves to a disconnecting or disconnected state, the session states would also be set to GCKConnectionStateDisconnected.
1) When I get these error messages, is there any way to get a valid connection state for GCKSession & GCKCastSession?
2) How can I get a notification when the GCKSession times out?
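For reference, a minimal sketch of the listener-based route, assuming GCKSessionManagerListener's suspend/end callbacks actually fire on a heartbeat timeout (the observer class name and registration spot are placeholders):

#import <GoogleCast/GoogleCast.h>

// Placeholder observer registered with the shared session manager.
@interface MySessionObserver : NSObject <GCKSessionManagerListener>
@end

@implementation MySessionObserver

// Called when the SDK suspends the session (backgrounding with
// suspendSessionsWhenBackgrounded = YES, connection loss, etc.).
- (void)sessionManager:(GCKSessionManager *)sessionManager
     didSuspendSession:(GCKSession *)session
            withReason:(GCKConnectionSuspendReason)reason {
  NSLog(@"Session suspended, reason: %ld", (long)reason);
}

// Called when the session ends, with the error that ended it (if any).
- (void)sessionManager:(GCKSessionManager *)sessionManager
         didEndSession:(GCKSession *)session
             withError:(NSError *)error {
  NSLog(@"Session ended, error: %@", error);
}

@end

// Registration, e.g. in application:didFinishLaunchingWithOptions::
// self.sessionObserver = [[MySessionObserver alloc] init];
// [[GCKCastContext sharedInstance].sessionManager addListener:self.sessionObserver];

If those callbacks are delivered on the heartbeat timeout, that would cover question 2; question 1 (a connectionState value that actually reflects the timeout) would still be open.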

Related

CHHapticEngine not running

After a day of playing with the CHHapticEngine and custom haptic patterns, using AHAP files in particular (including adding sound samples), I have lost the sound and haptics on the device / iPhone. After a factory reset of the iPhone there are still no haptics / sound on the device (no ringer, no vibrations, etc.). When trying to start the haptics engine and play any pattern, I see the following output in the Xcode debugger:
2020-04-29 19:44:31.034051+0200 ProjectName[18431:5043001] [hapi] CHHapticEngine.mm:2354:-[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
2020-04-29 19:44:31.034398+0200 ProjectName[18431:5042880] [Feedback] failed to start core haptics engine for <_UIFeedbackCoreHapticsEngine: 0x28268f170>: Error Domain=com.apple.CoreHaptics Code=1852797029 "(null)"
2020-04-29 19:44:34.076491+0200 ProjectName[18431:5043124] [hapi] CHHapticEngine.mm:2354:-[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
2020-04-29 19:44:34.076703+0200 ProjectName[18431:5043124] [hapi] HapticUtils.h:56:_Haptic_Check: -[CHHapticEngine checkEngineState:]: self.running error -4805
2020-04-29 19:44:34.077641+0200 ProjectName[18431:5043124] error = Error Domain=com.apple.CoreHaptics Code=-4805 "(null)" UserInfo={Error =self.running}
The question is: is it possible to damage the hardware from code?
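For context, a minimal sketch of the start-and-play flow being described, assuming the standard CHHapticEngine setup (the wrapper method name and the AHAP file name are placeholders). The "self.running error -4805" line suggests the engine is simply not running when playback is attempted, which is why the sketch installs stopped/reset handlers and restarts the engine on reset:

#import <CoreHaptics/CoreHaptics.h>

// Minimal sketch: create the engine, install stopped/reset handlers,
// start it, then play a pattern from an AHAP file in the bundle.
- (void)startHapticsAndPlayPattern {
  if (![CHHapticEngine capabilitiesForHardware].supportsHaptics) {
    NSLog(@"Device does not support haptics");
    return;
  }

  NSError *error = nil;
  CHHapticEngine *engine = [[CHHapticEngine alloc] initAndReturnError:&error];
  if (!engine) {
    NSLog(@"Engine creation failed: %@", error);
    return;
  }

  engine.stoppedHandler = ^(CHHapticEngineStoppedReason reason) {
    NSLog(@"Haptic engine stopped, reason: %ld", (long)reason);
  };
  __weak CHHapticEngine *weakEngine = engine;
  engine.resetHandler = ^{
    // The haptic server was reset; the engine must be restarted before
    // any further playback.
    NSError *restartError = nil;
    [weakEngine startAndReturnError:&restartError];
  };

  if (![engine startAndReturnError:&error]) {
    // In the logs above it is the start call that fails
    // (com.apple.CoreHaptics error 1852797029).
    NSLog(@"Engine start failed: %@", error);
    return;
  }

  // "Pattern.ahap" is a placeholder for the actual AHAP file name.
  NSURL *ahapURL = [[NSBundle mainBundle] URLForResource:@"Pattern" withExtension:@"ahap"];
  if (ahapURL && ![engine playPatternFromURL:ahapURL error:&error]) {
    NSLog(@"Playing pattern failed: %@", error);
  }
}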

Vidyo.io call recording issue

I'm developing a web application integrating Vidyo and I need to record video calls. I followed the documentation guide and installed a Media Bridge Docker image on my server. I have no problems connecting to the media bridge, but it effectively records neither video nor audio. The output .flv file shows only a 10-second black screen with the display names of the call participants.
My configuration file is:
width=1280
height=720
fps=30
kbps=2000
layout=1
maxParticipants=8
overlay=1
videoCodec=H264
audioCodec=AAC
maxCallDuration=180
presentationAllowed=1
presWidth=1280
presHeight=720
presFps=5
presKbps=300
destination="flv:/opt/vidyo/recordingTest.flv"
resourceId="test_recording"
token=<TOKEN HERE>
host="prod.vidyo.io"
mediaPorts="50000-50100"
publicIp="127.0.0.1"
log=debug
And the errors in the output log file are:
[1050]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1208 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: special voice processing should be set to off prior to setting voice processing off.
[1050]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1219 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: unable to set voice processing off.
[1050]: LmiTransport ERROR [System thread] LmiTcpTransport.c:1435 LmiTcpTransportConstructAddressFromString Error resolving address roni.eng.vidyo.com:443: UnknownHost
[1050]: LmiSignaling ERROR [System thread] LmiStackConnection.c:36 LmiStackConnectionConstructOutbound Could not create connection to ed3df5eef18e3b0d
[1050]: XmppGateway ERROR [System thread] xmpp.c:1010 makeXmppCall failed: LmiUserLogin
[1050]: leg ERROR [System thread] leg.c:845 legStart epMakeCall failed. protocol: xmpp dest: room:demoRoom~token:<TOKEN>~server:roni.eng.vidyo.com config: addOverlay:on~caFile:/opt/vidyo/conf/openssl/certs/ca-certificates.crt~dropCall:on
[1050]: gwCall ERROR [System thread] call.c:1744 main failed: legStart 0
[126]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1208 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: special voice processing should be set to off prior to setting voice processing off.
[126]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1219 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: unable to set voice processing off.
[126]: writer ERROR [System thread] writer.c:876 writerUpdateStats failed: Assign audio stats
[126]: rtmp ERROR [System thread] rtmp.c:662 endpointGetStats failed: Update stats
[126]: leg ERROR [System thread] leg.c:1567 legGetStats failed: epGetStats failed
[126]: ScipXmppCommon ERROR xmpp scip_xmpp_common.c:269 sxcRemoteSourceAdded We already have video remote source - skip this one
[126]: LmiRtp ERROR xmpp LmiRtpSessionInline.h:294 LmiRtpSessionGetActiveRtpDestination 3LI/IGh+pOy/zbUmmgAa conn 1: Can't get active RTP destination from stopped session
[126]: LmiRtp ERROR xmpp LmiRtpSessionInline.h:294 LmiRtpSessionGetActiveRtpDestination 3LI/IGh+pOy/zbUmmgAa conn 2: Can't get active RTP destination from stopped session
[126]: XmppGateway ERROR xmpp xmpp.c:586 selectedParticipantListChanged failed: selected_participants_do_show_selected_n
[126]: leg ERROR videoRenderer leg.c:388 epCaptureVideo leg: xmpp get video frame failed
[126]: gwCall ERROR gw-tp-1 call.c:1472 peerCallEndTask legEndCall failed
In case it helps, try this; it worked for me. You don't need a SIP configuration:
#main video/audio settings
width=1280
height=720
fps=30
kbps=2000
layout=1
maxParticipants=8
overlay=1
videoCodec=H264
audioCodec=PCM
maxCallDuration=180 # duration in minutes
#Presentation settings
presentationAllowed=1 #0 - ignore presentations 1 - replace main video with presentation
presWidth=1280
presHeight=720
presFps=5
presKbps=300
destination="flv:/opt/vidyo/<call_id>.flv"
#vidyo.io connection info
resourceId="<room_id>"
token="<call_token>"
host="prod.vidyo.io"

Starting AudioKit results in AudioHAL_Client errors 50% of the time

I've been using AudioKit for over 8 months, but recently I've run into a weird issue.
When I start AudioKit, in roughly 50% of cases the audio stops playing after a few seconds and I get a stream of lower-level AudioHAL_Client errors:
2019-03-14 17:17:15.567027+0100 TestApp[68164:1626512] [AudioHAL_Client] HALC_ProxyIOContext.cpp:1399:IOWorkLoop: HALC_ProxyIOContext::IOWorkLoop: failed to send the final message to the server, Error: 0x10000003
2019-03-14 17:17:16.104180+0100 TestApp[68164:1626365] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
or:
2019-03-15 08:15:33.756244+0100 macOSDevelopment[47186:2925180] [AudioHAL_Client] HALC_ProxyIOContext.cpp:1399:IOWorkLoop: HALC_ProxyIOContext::IOWorkLoop: failed to send the final message to the server, Error: 0x10000003
2019-03-15 08:15:34.290366+0100 macOSDevelopment[47186:2925038] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
2019-03-15 08:15:34.290431+0100 macOSDevelopment[47186:2925038] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
It is not related to my specific app: the same thing happens when I build the AudioKit macOS development app, and I've also tried it with a clean macOS project.
This is enough to trigger the bug:
AudioKit.output = AKMixer()
AudioKit.start()
The same thing happens when I connect an AKOscillator instead of an AKMixer.
I've tried to debug this, but I cannot figure out what's going wrong.

Why can't I create an RTMP session with VideoCore?

I am getting the following log while connecting to an Adobe FMS server.
I notice that the "/publicorigin/" part of my URL has disappeared in the log (see the sketch after the log below).
2016-04-16 05:37:49.661 SampleBroadcaster-Swift[2016:531501] Creating context
2016-04-16 05:37:52.037 SampleBroadcaster-Swift[2016:531569] rtmp://sample.com:1935/publicorigin/160416_05_0?14991850%3Alv259769050%3A68%3A1460752176%3A0%3A1460752469%3A91d440bb3a1ef858
2016-04-16 05:37:52.037 SampleBroadcaster-Swift[2016:531569] lv259769050
Connecting:sample.com:1935, stream name:160416_05_0?14991850%3Alv259769050%3A68%3A1460752176%3A0%3A1460752469%3A91d440bb3a1ef858/lv259769050
ClientState: 1
ClientState: 2
ClientState: 3
Want read:4096, read:1410
ClientState: 4
Not enough s1 size
Want read:2686, read:1410
ClientState: 5
Not enough s2 size
Want read:1276, read:253
ClientState: 6
Tracking command(1, connect)
Want read:4096, read:200
Steam in buffer size:200
First byte:0x2, header type:0
Handle message:5
Received server window size: 2500000
Steam in buffer size:184
First byte:0x2, header type:0
Handle message:6
Received peer bandwidth limit: 49152 type: 2
Steam in buffer size:167
First byte:0x3, header type:0
Handle message:20
Received invoke
Received invoke _error
Steam in buffer size:30
First byte:0x3, header type:0
Handle message:20
Received invoke
Received invoke close
Want read:4096, read:30
Steam in buffer size:30
First byte:0x3, header type:0
Handle message:20
Received invoke
Received invoke close
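As a point of reference, here is a minimal Objective-C sketch of passing the URL and the stream name separately, so the application path cannot be dropped when the string is parsed. It assumes the sample's VCSimpleSession (exposed here as self.session) and its startRtmpSessionWithURL:andStreamKey: method; whether the server then accepts the percent-encoded query string is a separate matter:

// Sketch: keep the application name ("publicorigin") in the RTMP URL and
// pass only the stream name, with its query string, as the stream key.
// self.session is assumed to be the sample app's VCSimpleSession instance.
- (void)startBroadcast {
  NSString *rtmpURL   = @"rtmp://sample.com:1935/publicorigin";
  NSString *streamKey = @"160416_05_0?14991850%3Alv259769050%3A68%3A1460752176%3A0%3A1460752469%3A91d440bb3a1ef858";
  [self.session startRtmpSessionWithURL:rtmpURL andStreamKey:streamKey];
}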

Sudden exception during UDP streaming

I have an open UDP connection that streams video for several hours between two machines on different VLANs.
After several hours I get the following exception on the server side (the transmitter):
System.Net.Sockets.SocketException: A blocking operation was interrupted by a call to WSACancelBlockingCall
   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
From that moment on, from time to time (not on every send), I see the following:
System.Net.Sockets.SocketException: A non-blocking socket operation could not be completed immediately
   at System.Net.Sockets.Socket.Send(Byte[] buffer, Int32 offset, Int32 size, SocketFlags socketFlags)
On the client side I see no exception or abnormal behavior.
Is it possible that I am getting this exception due to a network problem, for example something in the switch?
Any other ideas about what could cause these exceptions?
Thanks
I will make a wild guess about the WSACancelBlockingCall exception.
You are probably closing the socket from another thread, or the socket is somehow being disposed by the garbage collector.
