Display CVPixelBuffer with YUV422 format on iOS Simulator

I'm using OpenGL ES to display CVPixelBuffers on iOS. The OpenGL pipeline uses the fast texture upload APIs (CVOpenGLESTextureCache*). When running my app on an actual device the display is great, but on the simulator it's not the same (I understand that those APIs don't work on the simulator).
I noticed that when using the simulator the pixel format is kCVPixelFormatType_422YpCbCr8, so I'm trying to extract the Y and UV components and use glTexImage2D to upload them, but I'm getting incorrect results. For now I'm concentrating on the Y component only, and the result looks as if the image is half the expected width and duplicated, if that makes sense.
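For reference, here is a minimal sketch of what I mean by extracting the Y component on the CPU, assuming the '2vuy' byte order (Cb Y0 Cr Y1) and a luminance texture yTexture created elsewhere:

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>

    // De-interleave the Y component from a '2vuy' buffer
    // (kCVPixelFormatType_422YpCbCr8, byte order Cb Y0 Cr Y1) and upload
    // it as a GL_LUMINANCE texture. yTexture is assumed to exist already.
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    size_t width  = CVPixelBufferGetWidth(pixelBuffer);
    size_t height = CVPixelBufferGetHeight(pixelBuffer);
    size_t stride = CVPixelBufferGetBytesPerRow(pixelBuffer);
    const uint8_t *src = CVPixelBufferGetBaseAddress(pixelBuffer);

    uint8_t *yPlane = malloc(width * height);
    for (size_t row = 0; row < height; row++) {
        const uint8_t *in = src + row * stride;
        uint8_t *out = yPlane + row * width;
        for (size_t col = 0; col < width; col++) {
            out[col] = in[col * 2 + 1]; // luma bytes sit at odd offsets in '2vuy'
        }
    }

    glBindTexture(GL_TEXTURE_2D, yTexture);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // rows of `width` bytes may not be 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)width, (GLsizei)height,
                 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, yPlane);

    free(yPlane);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

(De-interleaving is needed because '2vuy' keeps luma and chroma bytes interleaved in a single plane.)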
I would like to hear from someone who has successfully displayed YUV422 video frames on the iOS Simulator whether I'm on the right track, and I'd appreciate any pointers on how to solve my problem.
Thanks!

Related

Do all image assets in an iOS port of a LIBGDX game need to be powers of two?

I am writing an Android game with Libgdx, and I want the option of porting it to iOS in the future. I have read comments saying that all image assets need to be sized in powers of two for iOS. Is this true for all assets/textures, and do the width and height need to be the same, i.e. 256 x 256, or is 256 x 128 OK?
I am pretty sure that iOS supports arbitrary texture sizes; I have never heard of a need to resize downloaded images before displaying them.
I can say for certain that I have never encountered an Android phone that rejects arbitrary texture sizes either. My game was released with textures of various sizes for a year without problems, until I switched to using a TextureAtlas. That's what I would recommend to you, too.
If you are using Scene2d.ui and a Skin, just use SkinComposer to create your TextureAtlas. If you don't use Scene2d.ui, you can use the TexturePacker GUI.

GLKTextureLoader not loading jpg on "The new iPad"

I'm trying to create a cube map from six JPG files from the web in GLKit. It works great on my iPhone 6 Plus, but when I run the same code on "The new iPad" the cube map is just black when applied to an object. If I try the same thing with PNG files, it works. Is there anything specific that needs to be done to load JPGs correctly on certain hardware?
The error from cubeMapWithContentsOfFiles is nil, so it appears GLKit thinks it loaded the texture properly.
Here is a demo project http://s.swic.name/Yw8F
If the dimensions of the textures you're generating are themselves determined by the device's display dimensions (e.g. rendering a full-screen UIView to a texture), then the resulting cube map could easily fall within the GL_MAX_TEXTURE_SIZE limit on some devices but exceed it on larger ones. What are the pixel dimensions of your cube map on the iPhone 6 Plus vs the iPad 3rd generation? If they exceed 4096 in either dimension you could be in trouble.
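As a quick sanity check (a minimal sketch, assuming a current EAGL context), you can query the limits at runtime and compare them against your cube-map face dimensions:

    #import <Foundation/Foundation.h>
    #import <OpenGLES/ES2/gl.h>

    // Query the device's texture size limits at runtime (requires a
    // current EAGL context).
    GLint maxTextureSize = 0, maxCubeMapSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    glGetIntegerv(GL_MAX_CUBE_MAP_TEXTURE_SIZE, &maxCubeMapSize);
    NSLog(@"max 2D texture: %d, max cube-map face: %d",
          maxTextureSize, maxCubeMapSize);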

Animation Lags on iOS devices for apps designed in Adobe Flash Pro

I am developing an iOS app and an Android app using Adobe AIR and Flash CS6. The app contains lots of animations. Since bitmap images do not give good quality, I have kept the images in vector form only. It runs fine on Android devices, but when I publish it to an iOS device many of the animations lag. How can I solve this without affecting the quality of my animations? I am using AIR SDK version 4.0 and the GPU rendering mode. Any help would be appreciated.
There are a few things you could try:
use TweenMax/TweenLite for your animations, as the GreenSock library is optimized for performance
set cacheAsBitmap to true on the vectors you're animating
convert vectors to cached bitmap data (http://esdot.ca/site/2012/fast-rendering-in-air-cached-spritesheets)
see whether "direct" rendering mode yields better performance; in my experience, GPU mode is not well suited to vectors

CILanczosScaleTransform filter yields GL_INVALID_VALUE

In my photo editing iOS app I want to scale down large images before sending them into my filter pipeline. I'm using the CILanczosScaleTransform filter of Apple's Core Image framework for that.
Now for some images I get a black screen in the result. I enabled the "OpenGL ES Error" breakpoint in Xcode and found that a GL_INVALID_VALUE error is thrown inside Core Image, specifically when it tries to create an intermediate texture for the Lanczos filter.
After countless experiments I found that it only happens if the resulting image would have a width greater than 2048 pixels.
So, for example, images taken with the built-in camera in landscape mode and scaled down to 4 MP (2369 x 1769) cause this error. Portrait images of the same size (height > 2048) work without problems.
If I use the CIAffineTransform filter instead, I don't get that error, but I'd prefer to use Lanczos since it yields better results.
I tested on an iPad Mini (non-retina, iOS 7.1) and an iPhone 5s (iOS 7.0.6) with the same result.
Any ideas on what is causing this issue? I searched online but was not able to find any documented or undocumented restrictions concerning image sizes for this filter.
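For reference, the scaling step looks roughly like this (a simplified sketch; inputImage and targetWidth stand in for my actual values):

    #import <CoreImage/CoreImage.h>

    // Simplified scaling step; inputImage (a CIImage) and targetWidth
    // are placeholders for the real values.
    CIFilter *scaleFilter = [CIFilter filterWithName:@"CILanczosScaleTransform"];
    [scaleFilter setValue:inputImage forKey:kCIInputImageKey];
    CGFloat scale = targetWidth / inputImage.extent.size.width;
    [scaleFilter setValue:@(scale) forKey:kCIInputScaleKey];
    [scaleFilter setValue:@1.0 forKey:kCIInputAspectRatioKey];
    // Rendering this output triggers GL_INVALID_VALUE once its width
    // exceeds 2048 pixels.
    CIImage *scaledImage = [scaleFilter valueForKey:kCIOutputImageKey];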

iPad retina screen recording

Two parts:
Correct me if I'm wrong, but there isn't a standard video file format that holds 2048 x 1536 frames, is there? (That is, recording the full resolution of the iPad Retina display is impossible?)
My app uses a glReadPixels call to record the screen and appends the pixel buffers to an AVAssetWriterInputPixelBufferAdaptor. If the video needs to be resized for export, what's the best way to do this? I'm trying right now with AVMutableVideoCompositionLayerInstruction objects and CGAffineTransforms, but it's not working. Any ideas?
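Here's roughly what I'm trying for the resize (simplified; asset stands in for my loaded AVAsset):

    #import <AVFoundation/AVFoundation.h>

    // Scale the composition down at export time; `asset` stands in for
    // the loaded AVAsset.
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    videoComposition.renderSize = CGSizeMake(1024, 768); // half of 2048 x 1536
    videoComposition.frameDuration = CMTimeMake(1, 30);

    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    AVMutableVideoCompositionLayerInstruction *layerInstruction =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    [layerInstruction setTransform:CGAffineTransformMakeScale(0.5, 0.5) atTime:kCMTimeZero];

    AVMutableVideoCompositionInstruction *instruction =
        [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration);
    instruction.layerInstructions = @[layerInstruction];
    videoComposition.instructions = @[instruction];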
Thanks
Sam
Yes, it is possible; my app also records large-frame video.
Don't use glReadPixels; it causes a lot of delay, especially if you record big frames like 2048 x 1536.
Since iOS 5.0 you can use a faster way based on the texture cache (link).
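A rough sketch of that approach (eaglContext, pixelBuffer, width and height are assumed to exist already; the pixel buffer should be BGRA and created with kCVPixelBufferOpenGLESCompatibilityKey):

    #import <CoreVideo/CoreVideo.h>
    #import <OpenGLES/ES2/gl.h>
    #import <OpenGLES/ES2/glext.h>

    // Render into a CVPixelBuffer-backed texture so the frame never has
    // to be read back with glReadPixels.
    CVOpenGLESTextureCacheRef textureCache = NULL;
    CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext,
                                 NULL, &textureCache);

    CVOpenGLESTextureRef renderTexture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
        pixelBuffer, NULL, GL_TEXTURE_2D, GL_RGBA,
        (GLsizei)width, (GLsizei)height, GL_BGRA, GL_UNSIGNED_BYTE, 0,
        &renderTexture);

    // Attach the texture to a framebuffer and draw the scene into it; the
    // rendered pixels land directly in pixelBuffer, ready to append via
    // the AVAssetWriterInputPixelBufferAdaptor.
    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
                  CVOpenGLESTextureGetName(renderTexture));
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                           CVOpenGLESTextureGetName(renderTexture), 0);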
