How to get the current stream position using the Google Cast framework in iOS

I am working on an application that connects to a TV through a Chromecast device; to achieve this I have used the Google Cast framework in my project.
I am facing a problem when I try to access the approximate stream position of the video using the statements below:
GCKMediaControlChannel *mediaControlChannel = [[GCKMediaControlChannel alloc] init];
NSLog(@"Approximate stream position is %f", mediaControlChannel.approximateStreamPosition);
But this results in a time difference of 20 seconds.
I tried the statements below to get the exact stream position:
GCKMediaStatus *deviceStatus = [[GCKMediaStatus alloc] initWithSessionID:sessionId mediaInformation:self.mediaInformation];
NSLog(@"Stream Position %f", self.deviceStatus.streamPosition);
The above initializer takes two parameters, and the session ID must be passed as an integer. However, we receive the session ID as an alphanumeric string, and converting it to an integer results in 0.
Can anyone help me get the session ID as an integer, or suggest a different method to get the current stream position?

I use the following code to retrieve the stream position of the video that is currently being cast, and it's pretty accurate! The Google Cast SDK version that we use in our project is 3.5.3.
let position = Double(GCKCastContext.sharedInstance().sessionManager.currentSession!.remoteMediaClient!.mediaStatus!.streamPosition)
Hope it helps!

approximateStreamPosition should give you a pretty accurate time, certainly not off by 20 seconds. You can take a look at our iOS reference app for an example.

You can use approximateStreamPosition for this.
The code below will print a more accurate time:
if let position = GCKCastContext.sharedInstance().sessionManager.currentSession?.remoteMediaClient?.approximateStreamPosition() {
    print("current time ", position)
}
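For continuous updates, here is a small sketch (my addition, not from the answers above) that polls the same API once per second; it assumes an active cast session and Google Cast SDK v3:
import GoogleCast

// Poll the approximate position once per second while a session is active.
let positionTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    guard let client = GCKCastContext.sharedInstance().sessionManager.currentSession?.remoteMediaClient else { return }
    // approximateStreamPosition() interpolates between media status updates,
    // so it stays reasonably accurate without issuing extra requests.
    print("current time", client.approximateStreamPosition())
}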

How to customize the WebRTC video source?

Does someone know how to change the video source of WebRTC (https://cocoapods.org/pods/libjingle_peerconnection)?
I am working on a screen-sharing app.
At the moment, I retrieve the rendered frames in real time in a CVPixelBuffer. Does someone know how I could add my frames as the video source?
Is it possible to set a video source other than the camera device? If yes, what format does the video have to be in, and how do I do it?
Thanks.
let factory = RTCPeerConnectionFactory()
let videoSource: RTCVideoSource = factory.videoSource()
videoSource.capturer(videoCapturer, didCapture: videoFrame!)
Mounis' answer is wrong; it leads to nothing, at least not at the time of this writing. Simply nothing happens.
In fact, you would need to satisfy this delegate method:
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame;
(Note the difference from the Swift version: didCapture vs. didCaptureVideoFrame.)
Since this delegate method is, for unclear reasons, not available at the Swift level (the compiler says you have to use didCapture, since it was renamed from didCaptureVideoFrame in Swift 3), you have to put the code in an Objective-C class. I copied this and this (which is part of this sample project) into my project and made my videoCapturer an instance of ARDExternalSampleCapturer:
self.videoCapturer = ARDExternalSampleCapturer(delegate: videoSource)
and within the capture callback I call it:
let capturer = self.videoCapturer as? ARDExternalSampleCapturer
capturer?.didCapture(sampleBuffer)
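Putting the pieces together, a minimal sketch (my addition) of what such a ReplayKit handler could look like, assuming ARDExternalSampleCapturer from the AppRTC sample project is bridged into the target; this mirrors what the sample's ARDBroadcastSampleHandler does rather than being a drop-in implementation:
import ReplayKit
import WebRTC

class ScreenSampleHandler: RPBroadcastSampleHandler {
    let factory = RTCPeerConnectionFactory()
    var videoSource: RTCVideoSource?
    var videoCapturer: ARDExternalSampleCapturer?

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        let source = factory.videoSource()
        videoSource = source
        // The ObjC capturer forwards CMSampleBuffers to the video source.
        videoCapturer = ARDExternalSampleCapturer(delegate: source)
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video else { return }
        // Forward each rendered screen frame into WebRTC.
        videoCapturer?.didCapture(sampleBuffer)
    }
}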

AVMutableMetadataItem's time & duration INVALID after reading

I have a question.
Recently I needed to add custom tags to recorded video (local video on the device, not streamed video). The task is to add event-specific tags to the video, whose positions could be set by pressing forward/backward buttons, as in any player.
It is not important whether the movie file is in mov or mp4 format.
I searched the forum and found several samples showing how to add metadata using AVAssetExportSession, and that worked.
However, when I tried to add metadata using AVAssetWriter, I wasn't able to append the attributes to the video.
What I do not understand is why, after adding an attribute, the returned time & duration properties are always invalid.
For instance, let's say I have a video with a duration of 2 seconds.
I have tried different key spaces, but I am not able to write keys from the ID3 space.
Is ID3 used for streamed video? (As far as I understand, ID3 is metadata for .mp3.) That might be why I was not able to write it into an MPEG-4 file.
I also used QuickTimeUserData & ISOUserData, but again the results are the same.
Here is an example:
AVMutableMetadataItem *item2 = [AVMutableMetadataItem new];
item2.keySpace = AVMetadataKeySpaceiTunes;
item2.key = AVMetadataiTunesMetadataKeyUserComment;
item2.value = @"One two three";
item2.duration = CMTimeMakeWithSeconds(1, 1);
item2.time = CMTimeMakeWithSeconds(0, 1);
After reading it back, I got the following:
AVMutableMetadataItem: 0xa4301f0, keySpace=itsk, key=\U00a9cmt, commonKey=(null), locale= (null), value=One two three, time={INVALID}, duration={INVALID}, extras={\n dataType = 1;\n}
I would like to use the time & duration properties of the metadata items instead of writing custom data and post-processing it.
Ideally it would be great to append an array of items with time = t1, duration = d1, ..., (tn, dn).
Does anyone know how to accomplish that?
I ended up with a solution that adds chapters to the video file instead of using metadata.
I looked at the available libraries and took mpv4lib.
The library is currently not compiled for iOS, so I ported the source project to a static library for the iOS platform.
That library allows adding custom "atoms" to an mp4 file, one of which is the QuickTime text track containing chapters.
I did something similar to that post.
The library is located here.
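For reference, on newer SDKs (iOS 8+) timed metadata can also be written with AVAssetWriterInputMetadataAdaptor; the time range of each appended group is what gives the items a valid time & duration when read back. A sketch under that assumption (writer here stands for an already-configured AVAssetWriter), not the approach taken in the answer above:
#import <AVFoundation/AVFoundation.h>

// Describe the metadata to be written so the writer input gets a matching format hint.
NSArray *specs = @[@{
    (NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_Identifier :
        AVMetadataIdentifierQuickTimeMetadataTitle,
    (NSString *)kCMMetadataFormatDescriptionMetadataSpecificationKey_DataType :
        (NSString *)kCMMetadataBaseDataType_UTF8
}];
CMFormatDescriptionRef desc = NULL;
CMMetadataFormatDescriptionCreateWithMetadataSpecifications(
    kCFAllocatorDefault, kCMMetadataFormatType_Boxed, (__bridge CFArrayRef)specs, &desc);

AVAssetWriterInput *metadataInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeMetadata
                                       outputSettings:nil
                                     sourceFormatHint:desc];
AVAssetWriterInputMetadataAdaptor *adaptor =
    [AVAssetWriterInputMetadataAdaptor assetWriterInputMetadataAdaptorWithAssetWriterInput:metadataInput];
[writer addInput:metadataInput]; // writer is the assumed AVAssetWriter

AVMutableMetadataItem *item = [AVMutableMetadataItem new];
item.identifier = AVMetadataIdentifierQuickTimeMetadataTitle;
item.value = @"One two three";
item.dataType = (NSString *)kCMMetadataBaseDataType_UTF8;

// The group's time range becomes the item's time & duration on readback.
CMTimeRange range = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 600),
                                    CMTimeMakeWithSeconds(1, 600));
[adaptor appendTimedMetadataGroup:
    [[AVTimedMetadataGroup alloc] initWithItems:@[item] timeRange:range]];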

Configuring multiple devices in PortAudio: Invalid device error

This query is regarding the PortAudio framework. A little background before I ask the question: I am working on an application in PortAudio to output audio through a multichannel (= 8) device. However, the device I am using does not expose itself as a single 8-channel device; instead it shows up in my device list as 4 stereo devices. While searching for an approach to handle this, I learned that WinMME in PortAudio supports multiple devices.
Now, I went through the appropriate header file ("pa_win_wmme.h") and followed the suggestions there, but I get the 'Invalid device' error (error number -9996) after calling Pa_OpenStream(). The header file does in fact specify the right parameters to use when configuring multiple devices to avoid this error, but in spite of following them I still get the error.
So I wanted to know whether anybody has faced a similar issue and whether I have missed or wrongly configured anything.
I am pasting the relevant snippets of code below for reference:
PaStreamParameters outputParameters;
PaWinMmeStreamInfo wmmeStreamInfo;
PaWinMmeDeviceAndChannelCount wmmeDeviceAndNumChannels;
...
...
outputParameters.device = paUseHostApiSpecificDeviceSpecification;
outputParameters.channelCount = 8;
outputParameters.sampleFormat = paFloat32; /* 32 bit floating point processing */
outputParameters.hostApiSpecificStreamInfo = NULL;
wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo);
wmmeStreamInfo.hostApiType = paMME;
wmmeStreamInfo.version = 1;
wmmeStreamInfo.flags = paWinMmeUseMultipleDevices;
wmmeDeviceAndNumChannels.channelCount = 2;
wmmeDeviceAndNumChannels.device = 3;
wmmeStreamInfo.devices = &wmmeDeviceAndNumChannels;
wmmeStreamInfo.deviceCount = 4;
outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo;
The device id = 3 was obtained through:
Pa_GetHostApiInfo( Pa_HostApiTypeIdToHostApiIndex( paMME ) )->defaultOutputDevice
I hope I have made the query clear enough. Will be happy to provide more details if required.
Thanks.
I finally figured out the mistake :-)
The configuration for multiple devices must be passed as an array. For instance, in the above case
wmmeDeviceAndNumChannels must be an array of 4, with each element's device field containing the device index of one of the 4 stereo devices, while each element's channelCount remains 2. outputParameters.channelCount still has to be the aggregate number of channels, i.e. 8. With this I was able to run the application with a single stream and, of course, without any errors related to an invalid device or an invalid number of channels. :-)
Thanks.
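In code, the fix described above might look like the following sketch (the indices in stereoDeviceIndex are placeholders you would obtain by enumerating the WinMME devices):
/* One PaWinMmeDeviceAndChannelCount entry per stereo device. */
PaWinMmeDeviceAndChannelCount wmmeDevices[4];
for (int i = 0; i < 4; ++i) {
    wmmeDevices[i].device = stereoDeviceIndex[i]; /* Pa index of each stereo device */
    wmmeDevices[i].channelCount = 2;              /* 2 channels per device */
}
wmmeStreamInfo.devices = wmmeDevices;
wmmeStreamInfo.deviceCount = 4;
outputParameters.channelCount = 8;                /* aggregate channel count */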
Based on the code pasted above, it looks like you are trying to call open on a single 8-channel device. Instead, you will have to get the Pa index of all four devices and call open four times, once for each stereo device. You will then have four interleaved stereo streams to maintain. My guess is that changing channelCount = 8 to channelCount = 2 will allow the first stream to open.

WinRT Geolocator always returns the same position

We have observed a strange behavior with the WinRT Geolocator in one of our apps. The user clicks a button in the app to get the current position. This works fine the first time, but all subsequent clicks on the button return the same coordinates, even though we moved more than a kilometer.
The application runs on a ThinkPad, and we've installed an application called "GPS Satellite"; if we switch to that application, get a coordinate, and return to our app, then the Geolocator returns the correct coordinates. So we know the GPS is working fine, but it seems the coordinates are kept in a cache, even though we've set an expiration of a few hundred milliseconds.
private async void ExecuteObtenirCoordGPSCommand()
{
    try
    {
        Geolocator geolocator = new Geolocator();
        geolocator.DesiredAccuracy = PositionAccuracy.High;

        // Make the request for the current position: 200 ms maximum age, 5 s timeout
        Geoposition pos = await geolocator.GetGeopositionAsync(
            new TimeSpan(0, 0, 0, 0, 200), new TimeSpan(0, 0, 5));

        Place.Latitude = pos.Coordinate.Latitude;
        Place.Longitude = pos.Coordinate.Longitude;
    }
    catch
    {
        GPSMsgErreur = "The GPS is unavailable";
    }
}
We've tried to put an expiration on the GetGeopositionAsync call, but it didn't solve the problem.
We've also tried moving the Geolocator variable to the class level, with the same result.
Any ideas?
Not sure if this is your issue, but the API you are using is tagged as obsolete in this post:
http://msdn.microsoft.com/en-us/library/windows/apps/windows.devices.geolocation.geocoordinate
Try using:
Place.Latitude = pos.Coordinate.Point.Position.Latitude;
Place.Longitude = pos.Coordinate.Point.Position.Longitude;
You may also use:
double someVar = pos.Coordinate.Accuracy;
to figure out the margin of error on the device. Maybe you were not far enough from your first location and your second location was within the margin of error.
I can also tell you that I have software built with Visual Studio 2013 for Windows (WinRT) that runs on a ThinkPad using the same objects you are using, and it works fine.
The main differences between mine and yours are that I am using the API shown above, that I did not use the following statement:
geolocator.DesiredAccuracy = PositionAccuracy.High;
and that I did not pass any parameters to the GetGeopositionAsync method.
I hope this helps.
Cheers, Hans.
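If the cached fix persists, another thing worth trying (my suggestion, not something from the answer above) is subscribing to PositionChanged, so the sensor keeps delivering fresh fixes instead of answering one-shot requests from its cache:
using Windows.Devices.Geolocation;

// Sketch: continuous position updates instead of one-shot requests.
Geolocator geolocator = new Geolocator();
geolocator.ReportInterval = 1000;   // at most one fix per second (milliseconds)
geolocator.MovementThreshold = 1;   // meters of movement before a new update

geolocator.PositionChanged += (sender, args) =>
{
    BasicGeoposition position = args.Position.Coordinate.Point.Position;
    double latitude = position.Latitude;
    double longitude = position.Longitude;
};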

Camera programming in BlackBerry

The following code returns null:
byte[] image1 = _videoControl.getSnapshot(null);
Any suggestions, please?
A few important points about the VideoControl.getSnapshot method:
Some manufacturers may not implement the getSnapshot() method.
The viewfinder must actually be visible on the screen prior to calling getSnapshot().
If you attempt to take pictures too quickly, getSnapshot() may return null; the camera requires time to clear out its buffer and prepare for the next shot.
You may check the MMAPI system property "video.snapshot.encodings" before capturing:
if (System.getProperty("video.snapshot.encodings") == null) {
    // getSnapshot() is not supported
}
You may read this chapter from the book "Advanced BlackBerry Development":
http://books.google.com/books?id=F4Qu-lpoVncC&pg=PA53&lpg=PA53#v=onepage&q&f=false
Since the VideoControl.getSnapshot method is not supported by all devices, I'd recommend another approach. You can start the native BlackBerry Camera app with this line of code:
Invoke.invokeApplication(Invoke.APP_TYPE_CAMERA, new CameraArguments());
and then catch the taken image using a FileSystemJournalListener, as sketched below.
The BlackBerry SDK on your PC contains samples; search for the 'fileexplorerdemo' sample for the rest of the details.
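A minimal sketch of that journal-listener pattern (my addition, based on the common BlackBerry approach rather than on the fileexplorerdemo sample itself):
import net.rim.device.api.io.file.FileSystemJournal;
import net.rim.device.api.io.file.FileSystemJournalEntry;
import net.rim.device.api.io.file.FileSystemJournalListener;

public class CameraJournalListener implements FileSystemJournalListener {
    private long lastUSN; // last update sequence number we have processed

    public void fileJournalChanged() {
        long nextUSN = FileSystemJournal.getNextUSN();
        for (long usn = nextUSN - 1; usn >= lastUSN; --usn) {
            FileSystemJournalEntry entry = FileSystemJournal.getEntry(usn);
            if (entry != null
                    && entry.getEvent() == FileSystemJournalEntry.FILE_ADDED
                    && entry.getPath().endsWith(".jpg")) {
                String imagePath = entry.getPath(); // the photo just taken
                // ... process the image here ...
                break;
            }
        }
        lastUSN = nextUSN;
    }
}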
