How to get values for primary light intensity, etc., from ARDirectionalLightEstimate - iOS

So I'm trying to use the front camera of an iPhone XR to get the approximate location of light sources. I decided to use ARDirectionalLightEstimate but I can't figure out how to access it; I can easily access the lightEstimate property.
The docs say that the lightEstimate property of each frame holds an instance of ARDirectionalLightEstimate, but I can't access it using the dot operator. I even tried to type cast it to ARDirectionalLightEstimate (like I saw someone doing; I can't find the link right now but I will update), but that didn't work either. I am inexperienced in Swift so it's possible I messed up somewhere.

ARDirectionalLightEstimate is a subclass of ARLightEstimate, so to access its properties you need to cast lightEstimate:
let lightEstimate = sceneView?.session.currentFrame?.lightEstimate
if let directionalLightEstimate = lightEstimate as? ARDirectionalLightEstimate {
    // Add logic here
    let primaryLightIntensity = directionalLightEstimate.primaryLightIntensity
    // ...
}
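As far as I know, ARKit only provides an ARDirectionalLightEstimate while a face tracking session with light estimation enabled is running, which matches your front-camera setup. A minimal sketch of the whole flow, assuming a non-optional ARSCNView named sceneView:
// Run a face tracking session with light estimation turned on.
let configuration = ARFaceTrackingConfiguration()
configuration.isLightEstimationEnabled = true
sceneView.session.run(configuration)

// Later, e.g. in session(_:didUpdate:) or renderer(_:updateAtTime:):
if let directional = sceneView.session.currentFrame?.lightEstimate as? ARDirectionalLightEstimate {
    print("primary light intensity: \(directional.primaryLightIntensity)")   // in lumens
    print("primary light direction: \(directional.primaryLightDirection)")   // simd_float3
}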

Related

How to customize the WebRTC video source?

Does someone know how to change the WebRTC (https://cocoapods.org/pods/libjingle_peerconnection) video source?
I am working on a screen sharing app.
At the moment, I retrieve the rendered frames in real time as CVPixelBuffers. Does someone know how I could add my frames as a video source?
Is it possible to set another video source instead of the camera device source? If yes, which format does the video have to be in, and how do I do it?
Thanks.
var connectionFactory : RTCPeerConnectionFactory = RTCPeerConnectionFactory()
let videoSource : RTCVideoSource = connectionFactory.videoSource()
videoSource.capturer(videoCapturer, didCapture: videoFrame!)
Mounis' answer is wrong. This leads to nothing, at least not at the time of this writing; there is simply nothing happening.
In fact, you would need to satisfy this delegate:
- (void)capturer:(RTCVideoCapturer *)capturer didCaptureVideoFrame:(RTCVideoFrame *)frame;
(Note the difference from the Swift version: didCapture vs. didCaptureVideoFrame.)
Since this delegate is, for unclear reasons, not available at the Swift level (the compiler says you have to use didCapture, since it was renamed from didCaptureVideoFrame with Swift 3), you have to put the code into an ObjC class. I copied this and this (which is part of this sample project) into my project, made my videoCapturer an instance of ARDExternalSampleCapturer
self.videoCapturer = ARDExternalSampleCapturer(delegate: videoSource)
and within the capture callback I'm calling it
let capturer = self.videoCapturer as? ARDExternalSampleCapturer
capturer?.didCapture(sampleBuffer)
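For context, a minimal sketch of how the pieces fit together, assuming ARDExternalSampleCapturer from the WebRTC AppRTCMobile sample has been bridged into the Swift target and the frames come from a ReplayKit broadcast extension (the handler class and property names here are illustrative, not part of the SDK):
import ReplayKit
import WebRTC

class ScreenBroadcastHandler: RPBroadcastSampleHandler {
    // Created elsewhere from the peer connection factory.
    var videoSource: RTCVideoSource?
    var videoCapturer: RTCVideoCapturer?

    override func broadcastStarted(withSetupInfo setupInfo: [String: NSObject]?) {
        // ARDExternalSampleCapturer converts CMSampleBuffers into RTCVideoFrames
        // and forwards them to its delegate (the RTCVideoSource).
        if let source = videoSource {
            videoCapturer = ARDExternalSampleCapturer(delegate: source)
        }
    }

    override func processSampleBuffer(_ sampleBuffer: CMSampleBuffer,
                                      with sampleBufferType: RPSampleBufferType) {
        guard sampleBufferType == .video else { return }
        (videoCapturer as? ARDExternalSampleCapturer)?.didCapture(sampleBuffer)
    }
}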

Dynamic naming of objects in AudioKit (SpriteKit)

I am trying to create an app similar to the Reactable.
The user will be able to drag "modules" like an oscillator or filter from a menu into the "play area" and the module will be activated.
I am thinking of initializing the modules as they intersect with the "play area" background object. However, this requires me to name the modules automatically, i.e.:
let osci = AKOscillator()
where osci will automatically count up to be:
let osci1 = AKOscillator()
let osci2 = AKOscillator()
...
etc.
How will I be able to do this?
Thanks
edit: I am trying to use an array, created as
var osciArray = [AKOscillator]()
and in my function to add an oscillator, this is my code:
let oscis = AKOscillator()
osciArray.append(oscis)
osciArray[oscCounter].frequency = freqValue
osciArray[oscCounter].amplitude = 0.5
osciArray[oscCounter].start()
selectedNode.userData = ["counter": oscCounter]
oscCounter += 1
currentOutput = osciArray[oscCounter]
AudioKit.output = currentOutput
AudioKit.start()
My app builds fine, but once it starts running in the Simulator I get the error: fatal error: Index out of range.
I haven't used AudioKit, but I read about it a while ago and I have quite a big interest in it. From what I understand from the documentation, it's structured pretty much like SpriteKit: nodes connected together.
I guess then that most classes in the library derive from a base class, just like everything in SpriteKit derives from the SKNode class.
Since you are linking the AudioKit nodes with visual representations via SpriteKit nodes, why don't you simply subclass SKSpriteNode and add an optional audioNode property typed as the AudioKit base class?
That way you can just use SpriteKit to interact directly with the stored audio node property.
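A minimal sketch of that idea, assuming AudioKit's AKNode base class and a made-up ModuleSprite subclass:
import SpriteKit
import AudioKit

class ModuleSprite: SKSpriteNode {
    // Each visual module carries its own AudioKit node (oscillator, filter, ...).
    var audioNode: AKNode?
}

// Usage: attach an oscillator when the module is dropped into the play area.
let module = ModuleSprite(color: .orange, size: CGSize(width: 64, height: 64))
module.audioNode = AKOscillator()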
There's a lot of AudioKit-related code in your question, but to answer it you only have to look at oscCounter. You don't show its initial value, but I am guessing it starts at zero. You then increment it by 1 and try to access osciArray[oscCounter], which has only one element, so it can only be accessed as osciArray[0]. Move the increment below the lines that use the index and you'll be better off. Furthermore, your oscillators look like local variables, so they'll be lost once the scope ends; declare them as instance variables of your class (or whatever this is part of).
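A hedged sketch of that fix, reusing the names from the question and assuming osciArray, oscCounter, currentOutput, freqValue and selectedNode are instance properties:
let osc = AKOscillator()
osc.frequency = freqValue
osc.amplitude = 0.5
osciArray.append(osc)

selectedNode.userData = ["counter": oscCounter]
currentOutput = osciArray[oscCounter]   // oscCounter is still a valid index here
AudioKit.output = currentOutput
AudioKit.start()
osc.start()

oscCounter += 1                         // increment only after the index has been used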

Unable to set CEMarker's collisionPriority

I'm trying to "mimic" the BusinessLayer functionality by creating a CEMarkerGroup for my own markers, then setting the following:
CEMarkerGroup *myGroup = [self.mapView markerGroupWithName:@"myMarkers"];
[myGroup setShouldTestForCollisions:YES];
And then, according to Citymaps' current documentation, I try to set an individual collisionPriority value on each marker like this:
[marker setCollisionPriority:25.0f]; //<-- ERROR!!, or
marker.collisionPriority = 25.0f; //<-- same ERROR
[myGroup addMarker:marker];
The error is: No visible @interface for 'CEMarker' declares the selector 'setCollisionPriority:'
As my goal is to approximate Citymaps' very slick behavior of avoiding marker overlaps, does anyone know of a workaround for this issue, or perhaps another approach altogether? Much thanks!
I am a developer at Citymaps. Thank you for your interest in our SDK!
Our documentation got a bit ahead of itself. Turns out, we never exposed the collisionPriority property. I've given myself a ticket to do so, and will let you know immediately when a new build is out containing this change.

Is Swift on OSX different from the iOS version?

I want to get the bounds of a CALayer.
On OSX I have to do this
let bounds = rootLayer?.bounds
on iOS, this
let bounds = rootLayer.bounds
and then to assign this bounds to another layer I have to do this
on OSX
anotherLayer.bounds = bounds!
on iOS
anotherLayer.bounds = bounds
Why? Is Swift different on iOS and OSX? That would be awful.
In both cases rootLayer is set like this:
let rootLayer = vista.layer
Swift the language is the same. The UI frameworks and APIs are not, which is expected since the UI paradigms are not the same.
I've got no idea what API you are populating vista from, but somewhere along the line you have an API that's defined slightly differently on OSX and iOS: on OSX vista.layer returns an optional, and on iOS it returns a non-optional. When you find where that is, you can correct for it and unwrap the optional there.
Lastly, implicit typing is doing you no favors here. If you change your rootLayer declaration to this:
let rootLayer: MyLayerTypeHere = vista.layer
Then you'll get a compile time error right here if the type is returned as an optional, since MyLayerTypeHere and MyLayerTypeHere? are different types.
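To make the point concrete, a minimal sketch assuming vista is an NSView on macOS, where .layer is CALayer? (on iOS, UIView.layer is non-optional, so no unwrapping is needed):
// macOS: unwrap the optional backing layer once, at the source.
guard let rootLayer: CALayer = vista.layer else {
    fatalError("vista has no backing layer")   // or handle it gracefully
}
anotherLayer.bounds = rootLayer.bounds   // no force-unwraps needed afterwards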

How to get the current stream position using the Google Cast framework in iOS

I am working on an application in which I connect to the TV using a Chromecast device. To achieve this I have used the Google Cast framework in my project.
I am facing a problem when I try to access the approximate stream position of the video using the statements below:
GCKMediaControlChannel *mediaControlChannel = [[GCKMediaControlChannel alloc] init];
NSLog(@"Approximate stream position is %f", mediaControlChannel.approximateStreamPosition);
But this results in a time difference of 20 seconds.
I tried the statements below to get the exact stream position:
GCKMediaStatus *deviceStatus = [[GCKMediaStatus alloc] initWithSessionID:sessionId mediaInformation:self.mediaInformation];
NSLog(@"Stream Position %f", self.deviceStatus.streamPosition);
Since the above initializer takes two parameters, we need to pass the session ID as an integer, but we get it as an alphanumeric string, and converting that to an integer results in 0.
Can anyone help me get the session ID as an integer, or suggest a different way to get the current stream position?
I use the following code to retrieve the stream position of the video that is currently being cast, and it's pretty accurate! The Google Cast SDK version that we use in our project is 3.5.3.
let position = Double(GCKCastContext.sharedInstance().sessionManager.currentSession!.remoteMediaClient!.mediaStatus!.streamPosition)
Hope it helps!
approximateStreamPosition should give you a pretty accurate time, certainly not off by 20 seconds. You can take a look at our iOS reference app for an example.
You can use approximateStreamPosition for the same purpose.
The code below will print a more accurate time.
if let position = GCKCastContext.sharedInstance().sessionManager.currentSession?.remoteMediaClient?.approximateStreamPosition() {
    print("current time ", position)
}
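If you need the position continuously (e.g. for a progress bar), here is a hedged sketch that polls it once per second, assuming a connected cast session with active media:
import GoogleCast

let progressTimer = Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    guard let client = GCKCastContext.sharedInstance().sessionManager
            .currentCastSession?.remoteMediaClient else { return }
    // approximateStreamPosition() interpolates between media status updates,
    // so it stays reasonably accurate between server pushes.
    print("current position: \(client.approximateStreamPosition()) s")
}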
