I'm developing a web application integrating Vidyo and I need to record video calls. I followed the documentation guide and installed the Media Bridge Docker image on my server. I have no problem connecting to the Media Bridge, but it doesn't actually record video or audio: the output .flv file shows only a 10-second black screen with the display names of the call participants.
My configuration file is:
width=1280
height=720
fps=30
kbps=2000
layout=1
maxParticipants=8
overlay=1
videoCodec=H264
audioCodec=AAC
maxCallDuration=180
presentationAllowed=1
presWidth=1280
presHeight=720
presFps=5
presKbps=300
destination="flv:/opt/vidyo/recordingTest.flv"
resourceId="test_recording"
token=<TOKEN HERE>
host="prod.vidyo.io"
mediaPorts="50000-50100"
publicIp="127.0.0.1"
log=debug
And the errors in the output log file are:
[1050]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1208 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: special voice processing should be set to off prior to setting voice processing off.
[1050]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1219 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: unable to set voice processing off.
[1050]: LmiTransport ERROR [System thread] LmiTcpTransport.c:1435 LmiTcpTransportConstructAddressFromString Error resolving address roni.eng.vidyo.com:443: UnknownHost
[1050]: LmiSignaling ERROR [System thread] LmiStackConnection.c:36 LmiStackConnectionConstructOutbound Could not create connection to ed3df5eef18e3b0d
[1050]: XmppGateway ERROR [System thread] xmpp.c:1010 makeXmppCall failed: LmiUserLogin
[1050]: leg ERROR [System thread] leg.c:845 legStart epMakeCall failed. protocol: xmpp dest: room:demoRoom~token:<TOKEN>~server:roni.eng.vidyo.com config: addOverlay:on~caFile:/opt/vidyo/conf/openssl/certs/ca-certificates.crt~dropCall:on
[1050]: gwCall ERROR [System thread] call.c:1744 main failed: legStart 0
[126]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1208 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: special voice processing should be set to off prior to setting voice processing off.
[126]: LmiAudioProcessing ERROR [System thread] LmiAudioProcessing.c:1219 LmiAudioProcessingSetVoiceProcessing scip_xmpp_audio_processing audio processing: unable to set voice processing off.
[126]: writer ERROR [System thread] writer.c:876 writerUpdateStats failed: Assign audio stats
[126]: rtmp ERROR [System thread] rtmp.c:662 endpointGetStats failed: Update stats
[126]: leg ERROR [System thread] leg.c:1567 legGetStats failed: epGetStats failed
[126]: ScipXmppCommon ERROR xmpp scip_xmpp_common.c:269 sxcRemoteSourceAdded We already have video remote source - skip this one
[126]: LmiRtp ERROR xmpp LmiRtpSessionInline.h:294 LmiRtpSessionGetActiveRtpDestination 3LI/IGh+pOy/zbUmmgAa conn 1: Can't get active RTP destination from stopped session
[126]: LmiRtp ERROR xmpp LmiRtpSessionInline.h:294 LmiRtpSessionGetActiveRtpDestination 3LI/IGh+pOy/zbUmmgAa conn 2: Can't get active RTP destination from stopped session
[126]: XmppGateway ERROR xmpp xmpp.c:586 selectedParticipantListChanged failed: selected_participants_do_show_selected_n
[126]: leg ERROR videoRenderer leg.c:388 epCaptureVideo leg: xmpp get video frame failed
[126]: gwCall ERROR gw-tp-1 call.c:1472 peerCallEndTask legEndCall failed
In case it helps, try the configuration below; it worked for me. You don't need a SIP configuration. Compared with the config above, this one uses audioCodec=PCM instead of AAC and drops the mediaPorts/publicIp/log overrides (publicIp="127.0.0.1" in particular may keep remote media from ever reaching the bridge).
#main video/audio settings
width=1280
height=720
fps=30
kbps=2000
layout=1
maxParticipants=8
overlay=1
videoCodec=H264
audioCodec=PCM
maxCallDuration=180 # duration in minutes
#Presentation settings
presentationAllowed=1 # 0 - ignore presentations, 1 - replace main video with presentation
presWidth=1280
presHeight=720
presFps=5
presKbps=300
destination="flv:/opt/vidyo/<call_id>.flv"
#vidyo.io connection info
resourceId="<room_id>"
token="<call_token>"
host="prod.vidyo.io"
Related
After a day of playing with CHHapticEngine and custom haptic patterns, using AHAP files in particular (including adding sound samples), I have lost sound and haptics on the device (an iPhone). Even after a factory reset of the iPhone there are still no haptics or sounds on the device (no ringer, no vibrations, etc.). When trying to start the haptics engine and play any pattern, I see the following output in the Xcode debugger:
2020-04-29 19:44:31.034051+0200 ProjectName[18431:5043001] [hapi] CHHapticEngine.mm:2354:-[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
2020-04-29 19:44:31.034398+0200 ProjectName[18431:5042880] [Feedback] failed to start core haptics engine for <_UIFeedbackCoreHapticsEngine: 0x28268f170>: Error Domain=com.apple.CoreHaptics Code=1852797029 "(null)"
2020-04-29 19:44:34.076491+0200 ProjectName[18431:5043124] [hapi] CHHapticEngine.mm:2354:-[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
2020-04-29 19:44:34.076703+0200 ProjectName[18431:5043124] [hapi] HapticUtils.h:56:_Haptic_Check: -[CHHapticEngine checkEngineState:]: self.running error -4805
2020-04-29 19:44:34.077641+0200 ProjectName[18431:5043124] error = Error Domain=com.apple.CoreHaptics Code=-4805 "(null)" UserInfo={Error =self.running}
The question is: is it even possible to damage the hardware from code?
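For what it's worth, error code 1852797029 decodes to the FourCC 'nope' (CoreHaptics reports errors as four-character codes), which suggests the engine is refusing to start rather than that the hardware is damaged. Below is a minimal sketch, assuming iOS 13+, of starting the engine defensively and logging failures instead of assuming hardware damage; a real app should also keep a strong reference to the engine:

import CoreHaptics

// Minimal sketch, assuming iOS 13+. Checks hardware support, installs
// reset/stopped handlers, and logs start failures.
func playTransientHaptic() {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else {
        print("This device does not support haptics")
        return
    }
    do {
        let engine = try CHHapticEngine()
        engine.resetHandler = {
            print("Haptic engine reset; restart it before playing again")
        }
        engine.stoppedHandler = { reason in
            print("Haptic engine stopped, reason: \(reason.rawValue)")
        }
        try engine.start()
        let event = CHHapticEvent(eventType: .hapticTransient,
                                  parameters: [],
                                  relativeTime: 0)
        let pattern = try CHHapticPattern(events: [event], parameters: [])
        let player = try engine.makePlayer(with: pattern)
        try player.start(atTime: CHHapticTimeImmediate)
    } catch {
        print("Haptics failed: \(error)") // e.g. the 'nope' error above
    }
}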
I've been using AudioKit for over 8 months, but recently I've run into a weird issue.
When I start AudioKit, in (roughly) 50% of the cases the audio stops playing after a few seconds and I get a stream of lower-level AudioHAL_Client errors:
2019-03-14 17:17:15.567027+0100 TestApp[68164:1626512] [AudioHAL_Client] HALC_ProxyIOContext.cpp:1399:IOWorkLoop: HALC_ProxyIOContext::IOWorkLoop: failed to send the final message to the server, Error: 0x10000003
2019-03-14 17:17:16.104180+0100 TestApp[68164:1626365] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
or:
2019-03-15 08:15:33.756244+0100 macOSDevelopment[47186:2925180] [AudioHAL_Client] HALC_ProxyIOContext.cpp:1399:IOWorkLoop: HALC_ProxyIOContext::IOWorkLoop: failed to send the final message to the server, Error: 0x10000003
2019-03-15 08:15:34.290366+0100 macOSDevelopment[47186:2925038] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
2019-03-15 08:15:34.290431+0100 macOSDevelopment[47186:2925038] [AudioHAL_Client] HALC_ShellPlugIn.cpp:817:HAL_HardwarePlugIn_ObjectHasProperty: HAL_HardwarePlugIn_ObjectHasProperty: no object
It is not related to my specific app, because when I build the AudioKit macOS development app, the same happens. I've also tried it with a clean macOS project.
This is enough to trigger the bug:
AudioKit.output = AKMixer()
try AudioKit.start()
The same happens when I connect an AKOscillator instead of an AKMixer.
I've tried to debug this, but I cannot figure out what's going wrong.
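For reference, here is the same repro with explicit error handling, as a minimal sketch assuming AudioKit 4.x (where AudioKit.start() throws):

import AudioKit

// Minimal sketch, assuming AudioKit 4.x.
let mixer = AKMixer()
AudioKit.output = mixer
do {
    try AudioKit.start()
} catch {
    // Note: the AudioHAL_Client messages above come from a lower layer and
    // are not surfaced here; this only catches engine start failures.
    print("AudioKit failed to start: \(error)")
}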
I am working on PJSIP. Two-way video and audio calls work fine, but when the app is in the background and a new incoming call arrives, CallKit shows the incoming call and I pick it up. The app then moves from the background to the foreground, but video is not showing while audio works. If I make a call in the foreground, video shows on both ends.
Please find the logs below:
15:48:19.251 ios_opengl_dev .......Failed to initialize iOS OpenGL because we are in background
15:48:19.251 vid_port.c .......Closing OpenGL renderer..
15:48:19.319 pjsua_vid.c .......Window 0: destroying..
15:48:19.319 pjsua_media.c ....pjsua_vid_channel_update() failed for call_id 0 media 1: video subsystem not initialized (PJMEDIA_EVID_INIT)
15:48:19.319 pjsua_media.c ....Error updating media call00:1: video subsystem not initialized (PJMEDIA_EVID_INIT)
15:48:19.319 pjsua_app.c ...Call 0 media 0 [type=audio], status is Active
15:48:19.319 pjsua_aud.c ...Conf connect: 3 --> 0
15:48:19.319 pjsua_app.c ....Turning sound device ON
15:48:19.319 conference.c ....Port 3 (sip:linchpin@192.168.1.7) transmitting to port 0 (Master/sound)
15:48:19.319 pjsua_aud.c ...Conf connect: 0 --> 3
15:48:19.319 conference.c ....Port 0 (Master/sound) transmitting to port 3 (sip:linchpin@192.168.1.7)
15:48:19.319 pjsua_app.c ...Call 0 media 1 [type=video], status is Error
15:48:19.319 pjsua_app.c ...Just rejected incoming video offer on call 0, use "vid call enable 1" or "vid call add" to enable video!
Add some delay before answering the call and it will work, as PJSIP sometimes takes time to initialize the camera. See the sketch after the version info below.
iOS version: 10.3
Using released iOS SDK v3.4
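A minimal sketch of the delayed-answer workaround, assuming CallKit's CXProviderDelegate; answerIncomingCall() is a hypothetical wrapper around your pjsua answer code:

import CallKit

// Minimal sketch of the delayed-answer workaround, assuming CXProviderDelegate.
final class CallProviderDelegate: NSObject, CXProviderDelegate {
    func providerDidReset(_ provider: CXProvider) {
        // Hypothetical: hang up all PJSIP calls here.
    }

    func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
        // Give PJSIP time to initialize the video subsystem/camera after the
        // app returns to the foreground before fulfilling the answer action.
        DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
            self.answerIncomingCall()
            action.fulfill()
        }
    }

    private func answerIncomingCall() {
        // Hypothetical wrapper around pjsua_call_answer(callId, 200, nil, nil).
    }
}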
Hi, I'm having trouble with the 'setSuspendSessionsWhenBackgrounded' option of GCKCastOptions. I set the option to YES so that I could control the RemoteMediaClient of the CastSession via MPRemoteCommandCenter in the background.
But when I control the RemoteMediaClient using MPRemoteCommandCenter in the background, I get these error messages in the logger and the CastSession gets suspended:
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke [(null)] Error reading from SSL buffer to stream buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] __46-[GCKNSSLSocket sslPrivateInitWithBufferSize:]_block_invoke.23 [(null)] Error writing from stream buffer to SSL buffer, status: -9805
[--Logger--] -[GCKCastDeviceConnector heartbeatChannelDidTimeout:] Receiver not responding to heartbeat, disconnecting.
I got these errors, but the states of the GCKCastSession and GCKSession were still GCKConnectionStateConnected.
When I got the error messages, I printed the connection states of the sessionManager:
GCKSessionManager *sessionManager = [GCKCastContext sharedInstance].sessionManager;
sessionManager.hasConnectedSession ==> YES;
sessionManager.hasConnectedCastSession ==> YES;
sessionManager.currentSession.connectionState ==> GCKConnectionStateConnected
sessionManager.currentCastSession.connectionState ==> GCKConnectionStateConnected
I think that if the GCKSession changes to a disconnecting or disconnected state, the states of the sessions should also be set to GCKConnectionStateDisconnected.
1) When I get these error messages, is there any way to get a valid connection state for the GCKSession and GCKCastSession?
2) How can I get a notification when the GCKSession times out?
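Not a full answer, but a minimal sketch of observing session transitions with a GCKSessionManagerListener (assuming Google Cast iOS SDK v4), which reports suspend/end events that polling connectionState can miss:

import GoogleCast

// Minimal sketch, assuming Google Cast iOS SDK v4. Keep a strong reference
// to the observer; the session manager does not retain its listeners.
final class CastStateObserver: NSObject, GCKSessionManagerListener {
    func register() {
        GCKCastContext.sharedInstance().sessionManager.add(self)
    }

    func sessionManager(_ sessionManager: GCKSessionManager,
                        didSuspend session: GCKCastSession,
                        with reason: GCKConnectionSuspendReason) {
        print("Cast session suspended, reason: \(reason.rawValue)")
    }

    func sessionManager(_ sessionManager: GCKSessionManager,
                        didEnd session: GCKSession,
                        withError error: Error?) {
        print("Cast session ended: \(error?.localizedDescription ?? "none")")
    }
}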
I am using a demo app from the Dragon Dictation API. I have made no modifications to the demo app, so I don't think there is anything wrong with it. When I run it on my phone, it launches fine. I click the record button and talk to it. Then it tries to connect to the server, but it gives me an error saying it can't connect to the speech server.
The output says:
2013-08-10 13:54:11.582 Recognizer[655:907] set session Active 0
2013-08-10 13:54:11.803 Recognizer[655:907] sample rate = 44100.000000
2013-08-10 13:54:11.823 Recognizer[655:907] audio input route(iOS5 or above): MicrophoneBuiltIn
2013-08-10 13:54:11.828 Recognizer[655:907] audiosource = MicrophoneBuiltIn
2013-08-10 13:54:11.889 Recognizer[655:907] [NMSP_ERROR] check status Error: 696e6974 init -> line: 485
2013-08-10 13:54:11.979 Recognizer[655:907] Application windows are expected to have a root view controller at the end of application launch
2013-08-10 13:54:13.513 Recognizer[655:907] Recognizing type:'websearch' Language Code: 'en_US' using end-of-speech detection:2.
2013-08-10 13:54:14.517 Recognizer[655:907] Recording started.
2013-08-10 13:54:16.490 Recognizer[655:907] Recording finished.
2013-08-10 13:54:26.903 Recognizer[655:4103] [NMSP_ERROR] Connection timed out!
2013-08-10 13:54:27.167 Recognizer[655:907] Got error.
2013-08-10 13:54:27.170 Recognizer[655:907] Session id [(null)].
I have no clue what's going on here, and any help would be greatly appreciated.
If, when you try to record, it immediately says "Cancelled" and shows an error like "recorder is null" or "[NMSP_ERROR] check status Error: 696e6974 init -> line: 485", this probably means either something is wrong with your SpeechKit keys or the SpeechKit servers are down. Double-check your keys and/or try again later.
Reference: http://www.raywenderlich.com/60870/building-ios-app-like-siri
In my case, the error was that I called the cancel: method on a nil SKRecognizer object.
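A minimal sketch of the guard that avoids this, assuming Nuance SpeechKit's SKRecognizer bridged to Swift; voiceSearch is a hypothetical optional property holding the current recognizer:

// Minimal sketch; `voiceSearch` is a hypothetical optional SKRecognizer.
func cancelRecognitionSafely() {
    guard let recognizer = voiceSearch else {
        print("SKRecognizer is nil; nothing to cancel")
        return
    }
    recognizer.cancel()
}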