Is it possible to use ARCore with CameraX?

I am trying to use ARCore to return a depth image and also use CameraX to return an RGB image.
I can do both individually, but when I combine them, CameraX doesn't work.
I see that I must enable the shared camera, but as far as I have searched, that seems to be possible only with the Camera2 API.
Does anyone know of a way to use CameraX instead?

Unfortunately, only one client can use a camera at a time, and both ARCore and CameraX assume they're the only user.
That would require explicitly sharing a camera instance between the two, and while I believe ARCore has some provisions for this, I don't believe CameraX is able to use ARCore's interfaces.
So if you need the RGB image, you probably need to figure out how to ask ARCore for it, and not use CameraX at all.
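If it helps, here is a rough sketch of what "asking ARCore for it" can look like. It uses the ARCore NDK C API as I remember it (the Java API has the equivalent Frame.acquireCameraImage() and Frame.acquireDepthImage16Bits() calls), so double-check the names against arcore_c_api.h; the function, its call site, and the surrounding session/frame setup are assumptions, and the depth calls need a session configured with depth enabled on a device that supports the Depth API.

```cpp
#include <cstdint>

#include "arcore_c_api.h"  // ARCore NDK header

// Assumed to be called once per render tick, after ArSession_update() has
// produced a fresh ArFrame, on a session configured with
// ArConfig_setDepthMode(..., AR_DEPTH_MODE_AUTOMATIC).
void GrabRgbAndDepth(ArSession* session, ArFrame* frame) {
  // Camera image (YUV_420_888) -- the same frames CameraX would have given you.
  ArImage* camera_image = nullptr;
  if (ArFrame_acquireCameraImage(session, frame, &camera_image) == AR_SUCCESS) {
    const uint8_t* y_plane = nullptr;
    int32_t y_length = 0;
    ArImage_getPlaneData(session, camera_image, /*plane_index=*/0, &y_plane, &y_length);
    // ... convert YUV to RGB here, or feed the planes into your own pipeline ...
    ArImage_release(camera_image);
  }

  // Depth image: one uint16 per pixel, distance in millimetres.
  ArImage* depth_image = nullptr;
  if (ArFrame_acquireDepthImage16Bits(session, frame, &depth_image) == AR_SUCCESS) {
    const uint8_t* depth_data = nullptr;
    int32_t depth_length = 0;
    ArImage_getPlaneData(session, depth_image, /*plane_index=*/0, &depth_data, &depth_length);
    // ... use the depth values ...
    ArImage_release(depth_image);
  }
}
```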

Related

Direct2D versus Direct3D for digital video rendering

I need to render video from multiple IP cameras into several controls within the client application.
On top of the video, I should be able to add some OSD such as timestamp and camera name.
What I'm trying to do has nothing to do with 3D since we're talking about digital video with some text on it.
Which API is more suitable for this purpose? Direct3D or Direct2D?
Performance should also be a consideration here.
It used to be that Direct2D was a poor choice for Windows Phone (if you care about that platform) because it wasn't supported, but Windows Phone 8.1 has it now, so that's less of an issue.
My experience with D2D was that it offered fast, high quality 2D rendering, and I would say it is a good choice.
You might want to take a look at this article on Code Project. That looks appropriate for your purposes.
If you are certain you only need MS system support, then you're all set.
Another way to go would be a cross platform system like nanovg, which offers nice 2D rendering and would work on a Mac. Of course, you'd need to figure out how to do the video part on non windows systems.
Regarding D3D, you could certainly do it that way, but my guess would be it would make some things trickier to do. Don't forget you can combine the two as well...
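For the OSD part specifically, the per-frame Direct2D work is small. Below is a rough sketch (not tested here) of drawing a decoded video frame as an ID2D1Bitmap and laying a timestamp/camera-name string over it with DirectWrite; the factories, render target, bitmap, text format, and brush are assumed to have been created during initialisation, and the function name is just for illustration.

```cpp
#include <windows.h>
#include <d2d1.h>
#include <dwrite.h>
#include <string>

// Per-frame draw: the decoded video frame as a bitmap, then the OSD on top.
// rt, frameBitmap, dwriteFactory, textFormat and brush are assumed to have been
// created at startup (CreateHwndRenderTarget, CreateBitmap, DWriteCreateFactory,
// CreateTextFormat, CreateSolidColorBrush).
void DrawVideoWithOsd(ID2D1HwndRenderTarget* rt,
                      ID2D1Bitmap* frameBitmap,
                      IDWriteFactory* dwriteFactory,
                      IDWriteTextFormat* textFormat,
                      ID2D1SolidColorBrush* brush,
                      const std::wstring& osd)  // e.g. camera name + timestamp
{
    rt->BeginDraw();

    // Stretch the video frame to fill the control.
    D2D1_SIZE_F size = rt->GetSize();
    D2D1_RECT_F dest = D2D1::RectF(0.0f, 0.0f, size.width, size.height);
    rt->DrawBitmap(frameBitmap, &dest);

    // Lay the OSD string over the video in the top-left corner.
    IDWriteTextLayout* layout = nullptr;
    if (SUCCEEDED(dwriteFactory->CreateTextLayout(osd.c_str(),
                                                  static_cast<UINT32>(osd.size()),
                                                  textFormat, size.width, 32.0f,
                                                  &layout))) {
        rt->DrawTextLayout(D2D1::Point2F(8.0f, 8.0f), layout, brush);
        layout->Release();
    }

    rt->EndDraw();  // real code should handle D2DERR_RECREATE_TARGET here
}
```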

Capture depth information from SoftKinetic DS311 using OpenCV?

I wonder if there is a way to get depth information from a SoftKinetic DS311 using only OpenCV?
No, it's not possible. You need to set up a callback using the SoftKinetic SDK to be notified of new depth frames, and then convert those to cv::Mat.
You might be interested in this project, since it shows how to do exactly that.
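The OpenCV half of that is straightforward once the callback hands you a raw buffer. The sketch below assumes the SoftKinetic/DepthSense callback gives you the frame dimensions and a pointer to 16-bit depth samples; the exact callback signature and the helper name come from that assumption, not from OpenCV.

```cpp
#include <cstdint>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>

// Called from the SoftKinetic/DepthSense "new depth sample" callback.
// `data` is assumed to point at width*height 16-bit depth values (millimetres).
void onDepthFrame(const int16_t* data, int width, int height)
{
    // Wrap the SDK buffer without copying...
    cv::Mat depth(height, width, CV_16SC1, const_cast<int16_t*>(data));

    // ...then clone it, because the SDK owns `data` and will reuse it.
    cv::Mat depthCopy = depth.clone();

    // Example: normalise to 8 bits for display.
    cv::Mat display;
    cv::normalize(depthCopy, display, 0, 255, cv::NORM_MINMAX, CV_8UC1);
    cv::imshow("depth", display);
    cv::waitKey(1);
}
```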

iOS graphics engines

I am new to iOS programming and am interested in working with images. Basically, I want to be able to obtain the RGB tuples (0-255 per channel) of every pixel in a given image. What would be the best way of doing this? Would I need to use OpenGL, or something similar?
Thanks
If you want to work with images, get a copy of Apple's 'Quartz 2D Programming Guide'. If you want even more detailed how-to, get a copy of the "Programming with Quartz" book on Amazon (it says Mac in the title as it predates iOS).
Essentially you are going to take images, draw them into bitmap contexts, then determine the RGBA layout by querying the image.
If you want to use system resources to assist you in making certain types of changes to images, there is an OS X framework recently moved to iOS called the Accelerate framework, and it has a lot of functions for image manipulation (vImage).
For reading and writing images to the file system look at Apple's 'Image I/O Guide'. For advanced filtering there is Core Image, which allows you to apply filters to images.
EDIT: If you have any interest in really fast GPU-accelerated code that performs some sophisticated filtering, you can check out Brad Larson's GPUImage project on GitHub.
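For the bitmap-context approach described above, the core of it is only a few Core Graphics calls (it's a C API, so this sketch is plain C/C++). The helper name and the RGBA8888 layout are just illustrative choices.

```cpp
#include <CoreGraphics/CoreGraphics.h>
#include <cstdint>
#include <vector>

// Draw a CGImage into an RGBA8888 bitmap context and read the pixels back.
// Returns a tightly packed buffer of width*height*4 bytes (R, G, B, A per pixel).
std::vector<uint8_t> ReadPixels(CGImageRef image)
{
    const size_t width  = CGImageGetWidth(image);
    const size_t height = CGImageGetHeight(image);
    const size_t bytesPerRow = width * 4;

    std::vector<uint8_t> pixels(bytesPerRow * height);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pixels.data(), width, height,
                                                 8,            // bits per component
                                                 bytesPerRow,
                                                 colorSpace,
                                                 kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(colorSpace);

    // Render the image into our buffer; afterwards pixels[] holds the RGBA tuples.
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
    CGContextRelease(context);

    // Example: the red value of the pixel at (x, y) is
    //   pixels[y * bytesPerRow + x * 4 + 0]
    return pixels;
}
```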

iOS CGImageRef Pixel Shader

I am working on an image processing app for the iOS, and one of the various stages of my application is a vector based image posterization/color detection.
Now, I've written the code that can, per-pixel, determine the posterized color, but going through each and every pixel in an image would, I imagine, be quite demanding for the processor of an iOS device. As such, I was wondering if it is possible to use the graphics processor instead;
I'd like to create a sort of "pixel shader" which uses OpenGL-ES, or some other rendering technology to process and posterize the image quickly. I have no idea where to start (I've written simple shaders for Unity3D, but never done the underlying programming for them).
Can anyone point me in the correct direction?
I'm going to come at this sideways and suggest you try out Brad Larson's GPUImage framework, which describes itself as "a BSD-licensed iOS library that lets you apply GPU-accelerated filters and other effects to images, live camera video, and movies". I haven't used it, and I assume you'll need to do some GL reading to add your own filtering, but it handles so much of the boilerplate and provides so many prepackaged filters that it's definitely worth looking into. It doesn't sound like you're otherwise particularly interested in OpenGL, so there's no real reason to dig into it beyond that.
I will add the sole consideration that under iOS 4 I found it often faster to do this kind of work on the CPU (using GCD to distribute it amongst cores) than on the GPU, wherever I needed to read the results back at the end for any sort of serial access. That's because OpenGL is generally designed so that you upload an image and it converts it into whatever format it wants; if you want to read it back, it converts it to the format you expect to receive and copies it to where you want it. So what you save on the GPU you pay for again, because the GL driver has to shunt and rearrange memory. As of iOS 5, Apple have introduced a special mechanism that effectively gives you direct CPU access to OpenGL's texture store, so that's probably not a concern any more.
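To make the CPU option concrete: posterization is a pure per-pixel quantisation, so it parallelises trivially with GCD. A rough sketch (the function name and the RGBA8888 buffer layout are assumptions) that does on the CPU what a posterize fragment shader would do per fragment, roughly floor(color * levels + 0.5) / levels:

```cpp
#include <cstddef>
#include <cstdint>
#include <dispatch/dispatch.h>  // GCD, available on iOS/macOS (Apple clang with blocks)

// CPU-side posterization of an RGBA8888 buffer, one row per GCD iteration.
// `levels` is the number of colour levels to keep per channel (e.g. 4).
void PosterizeRGBA(uint8_t* pixels, size_t width, size_t height, int levels)
{
    const float step = 255.0f / static_cast<float>(levels - 1);

    dispatch_apply(height,
                   dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                   ^(size_t y) {
        uint8_t* row = pixels + y * width * 4;
        for (size_t x = 0; x < width; ++x) {
            for (int c = 0; c < 3; ++c) {  // leave alpha untouched
                uint8_t v = row[x * 4 + c];
                // Snap the channel to the nearest of `levels` evenly spaced values.
                int bucket = static_cast<int>(v / 255.0f * (levels - 1) + 0.5f);
                row[x * 4 + c] = static_cast<uint8_t>(bucket * step);
            }
        }
    });
}
```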

Creating 3D model using set of 2D images on Windows

I want to create a 3D model from a set of 2D images on Windows, which can then be sent through a web service to an iPhone and displayed there.
I know it can be done through OpenGL, but I don't know how to start. Also, if I succeed in creating it, will it be compatible with the iPhone, since the iPhone uses OpenGL ES?
Thanks in advance.
What kind of transformation do you have in mind to create the 3D models? I once worked on an application using such a concept to create a model from three images of an object. It didn't really work well. The models that could be created were very limited.
OpenGL does not have built-in functionality to do this kind of thing. Are there any reasons why you do not want to use a real 3D model? It sounds as if you are looking for a quick solution to your problem, but I'm afraid that if you do not have any OpenGL experience, you should prepare for a lot of learning.
If you want to create 3D models automatically from 2D photos, you're going to have a fair bit of work to do. AFAIK, this is not something where you can get a cheap pre-packaged solution. Autodesk charge a small fortune for ImageModeler.
MeshLab may be a good starting point, but even that can't automatically convert photos to a 3D model AFAIK.
Take a look at David Lowe's site. I found the "Distinctive image features from scale-invariant keypoints" paper quite interesting, though I haven't re-read it in a while. If nothing else, this should give you some idea of why this is far from a trivial problem.
