My question is: how can I get the raw image with the help of the driver, in order to pass it to the controller software so that it can display the game frames in VR mode?
I looked in the documentation and found this:
IVROverlay::GetOverlayImageData
Is this the correct way to extract the image buffer?
I am currently trying to use ROS's cv_bridge to convert between OpenCV Mat images and ROS sensor_msgs/Image messages. I am not posting this question on the ROS answers site but here, because I've already read in this answer that cv_bridge apparently does not fill in or carry over the header of the ROS Image (with the timestamp) during this conversion.
So my remaining question is more on the OpenCV side:
Do OpenCV Mat images have any timestamp info embedded in them? If so, how can I access it?
OpenCV Mat images don't have any timing info built into them. You can see the class reference for them here.
You can, however, get the timestamp from your video capture source. It has a property, CAP_PROP_POS_MSEC, that returns the position of the current frame within the video source. You can put that into your ROS message header, although you might have to do some extra work to convert the time from the video's timebase into ROS's.
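For illustration, here's a minimal Python sketch of that idea, assuming a file-backed capture source and ROS 1 with rospy and cv_bridge (the file name is a placeholder):

```python
import cv2
import rospy
from cv_bridge import CvBridge

cap = cv2.VideoCapture("input.mp4")  # placeholder source
bridge = CvBridge()

ok, frame = cap.read()
if ok:
    # Position of the current frame within the video, in milliseconds.
    msec = cap.get(cv2.CAP_PROP_POS_MSEC)
    msg = bridge.cv2_to_imgmsg(frame, encoding="bgr8")
    # Note: this stamp is relative to the start of the video, not to the
    # ROS clock/epoch -- converting between the two timebases is up to you.
    msg.header.stamp = rospy.Time.from_sec(msec / 1000.0)
```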
I'm an undergraduate student and I'm programming a HumanSeg iPhone app. I have to read raw frames from the camera, but I found the code in the official guide isn't clear enough for me to understand.
When I get the frames from the camera (as CMSampleBuffers), I need to modify them: I have to do some padding and resizing, and turn them into the CVPixelBuffer format to feed them to a Core ML MobileNet.
I've been searching for a solution for weeks, but unfortunately I've found nothing. The official guide told me that these buffers "don't offer direct access to inner data".
I even tried using a context to draw the buffer into a UIImage and then draw it back into a CVPixelBuffer, and I found this process terribly slow, just as the official guide says. Since I'm doing video processing, this method is unacceptable.
What am I supposed to do or read? I really appreciate your help.
I'm looking for a way to inspect the ICC color profile data provided by CGColorSpace's copyICCData() method.
Specifically, I'm loading PNG images into UIImages on iOS, and trying to find a way to use let iccData: CFData? = aUIImage.cgImage!.colorSpace!.copyICCData() to determine the gamma of the image file. This is for a game that uses 3D rendering: if the source image has a standard 2.2 gamma, I'll load the image data into a texture as sRGB (e.g. MTLPixelFormatRGBA8Unorm_sRGB), and if it has a gamma of 1.0 I'll instead load it as a linear texture (e.g. MTLPixelFormatRGBA8Unorm).
Note: The solution of just passing a UIImage/CGImage to the rendering system (SceneKit/Metal) and letting it sort it out won't work here because:
1. Some of the rendering I'm doing is assembling 2D images into a 3D texture, so that's something I need to do with raw data, not something I can just read from a standard image file format.
2. I'm specifically trying to pass gamma-1.0 images into the rendering system to avoid the overhead of sRGB→linear conversion (rendering is in linear space).
Also: manual ICC-parsing solutions, Apple-API-based solutions, and open-source library suggestions are all acceptable answers. This is not specifically a request for a tool recommendation (any solution that works is a good one), but from my research, manual ICC parsing would be unwieldy, and Apple's APIs don't seem to expose any ICC properties. So I believe the most likely answer is a pointer to some library that I haven't been able to find via Google, GitHub, CocoaPods, or Stack Overflow, and it will be gladly accepted.
Your best bet is to use SampleICC (https://sourceforge.net/projects/sampleicc/). Just get the profile data as you described, then use OpenIccProfile to load it. From there, get a reference to the header structure (.m_Header) and pull the info you need. I'd also recommend taking a look at RefIccMAX (https://github.com/InternationalColorConsortium/RefIccMAX), which is a newer version of the same library, but not ready for prime time.
The Raspberry Pi Camera v1 contains an OmniVision OV5647 sensor, which offers up to 10-bit raw RGB data. Using OpenCV's cvQueryFrame I get only 8-bit data. I am only interested in grayscale imagery; how do I get the 10-bit data?
There may be simpler options available, but here are a couple of possible ideas. I have not coded or tested either, as I normally would - sorry.
Option 1.
Use "Video for Linux" (v4l2) and open the camera, do the ioctl()s and manage the buffers yourself - great link here.
Option 2.
Use popen() to start raspivid and tell it you want the raw option (--raw), then grab the raw data off the end of the JPEG, using the information on Bayer decoding from here. Other, somewhat simpler-to-follow information is available in section 5.11 here.
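As a rough Python sketch of option 2 (using subprocess rather than popen(), and raspistill, whose --raw flag appends the Bayer data to the end of the JPEG as described in the next answer): the 6404096-byte tail size and "BRCM" marker for the v1 camera are taken from my reading of the picamera docs, so treat them as assumptions.

```python
import subprocess

# Capture a still; --raw appends the 10-bit packed Bayer data to the JPEG.
subprocess.run(["raspistill", "--raw", "-o", "capture.jpg"], check=True)

with open("capture.jpg", "rb") as f:
    blob = f.read()

# For the v1 (OV5647) camera the appended block is reportedly 6404096 bytes
# and begins with the marker b"BRCM" -- an assumption, verify for your setup.
raw = blob[-6404096:]
assert raw[:4] == b"BRCM", "no raw Bayer block found at the end of the JPEG"
```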
Assuming you want to capture RAW data from still images and not necessarily video, there are two options I know of:
Option 1: picamera
picamera is a Python library that will let you capture data to a stream. Be sure to read the docs as it's pretty tricky to work with.
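As a minimal sketch of what that looks like (based on the "Raw Bayer data captures" section of the picamera docs; untested here):

```python
import io
import picamera

stream = io.BytesIO()
with picamera.PiCamera() as camera:
    # bayer=True appends the raw 10-bit Bayer sensor data to the JPEG.
    camera.capture(stream, format='jpeg', bayer=True)

jpeg_plus_bayer = stream.getvalue()  # the raw block sits at the end of this buffer
```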
Option 2: raspistill
You can also shell out to raspistill to capture your image file, and then process that however you want. If you want to work with the raw data (captured via raspistill --raw), you can use picamraw, either on- or off-board the Pi.
Even though we're a heavily Python shop, my team went with option 2 (in combination with picamraw, which we released ourselves) because picamera was not stable enough.
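For reference, picamraw usage looks roughly like this (argument and attribute names per its README as I recall them, so treat this as a sketch rather than gospel):

```python
from picamraw import PiRawBayer, PiCameraVersion

raw_bayer = PiRawBayer(
    filepath='capture.jpg',             # a JPEG taken with raspistill --raw
    camera_version=PiCameraVersion.V1,  # v1 = OV5647, as in the question
    sensor_mode=0,
)
bayer_array = raw_bayer.bayer_array     # 16-bit NumPy array of the 10-bit data
rgb = raw_bayer.to_rgb_array()          # demosaiced RGB, if you need it
```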
Is it possible to process (filter) HDR images through Core Image? I couldn't find much documentation on this, so I was wondering if someone might have an answer. I do know that it is possible to do the working-space computations with RGBAh when you initialize a CIContext, so I figured that if we can do computations with floating-point image formats, it should be possible...
If it is not possible, what are the alternatives for producing HDR effects on iOS?
EDIT: I thought I'd try to be a bit more concise. My understanding is that HDR images can be saved as .jpg, .png, and other image formats by clamping the pixel values. However, I'm more interested in doing tone mapping through Core Image on an HDR image that has not been converted yet. The issue is creating a CIImage from an HDR image, presumably one with the .hdr extension.
EDIT 2: Maybe it would be useful to use CGImageCreate, along with CGDataProviderCreateWithFilename?
I hope you have a basic understanding of how HDR works. An HDR file is generated by capturing two or more images at different exposures and combining them. So even if there were something like a .hdr file, it would be a container format with more than one JPEG in it. Technically you cannot give two image files at once as input to a generic CIFilter.
And in iOS, as I remember, it's not possible to access the original set of photos behind an HDR capture, only the processed final output. Even if you could, you'd have to do the HDR merge manually and generate a single HDR PNG/JPEG anyway before feeding it to a CIFilter.
Since there are people asking for a Core Image HDR algorithm, I decided to share my code on GitHub. See:
https://github.com/schulz0r/CoreImage-HDR
It is the Robertson HDR algorithm, so you cannot use RAW images. Please see the unit tests if you want to know how to get the camera response and obtain the HDR image. Core Image saturates pixel values outside [0.0 ... 1.0], so the HDR result is scaled into that interval.
Coding with Metal always leads to messy code for me, so I decided to use MetalKitPlus, which you have to include in your project. You can find it here:
https://github.com/LRH539/MetalKitPlus
I think you have to check out the dev/v2.0.0 branch. I will merge this into master in the future.
Edit: Just clone the master branch of MetalKitPlus. Also, I added a more detailed description to my CI-HDR project.
You can now (iOS 10+) capture RAW images (encoded on 12 bits) and then filter them however you like using CIFilter. You might not get a dynamic range as wide as the one you get from bracketed captures; nevertheless, it is still wider than what 8-bit captures give you.
Check Apple's documentation for capturing and processing RAW images.
I also recommend watching Apple's WWDC 2016 video on the subject (skip ahead to the RAW processing part).