On Android, it is possible to make the camera write its output directly to an OpenGL texture (of type GL_TEXTURE_EXTERNAL_OES), avoiding buffers on the CPU altogether.
Is such a thing possible on iOS?
The output you get from the camera on iOS is a CMSampleBufferRef with a CVPixelBufferRef inside. Since iOS 5, the Core Video framework provides CVOpenGLESTextureCache, which lets you create an OpenGL ES texture backed by a CVPixelBufferRef, avoiding any copies.
Check the RosyWriter sample on Apple's developer website; it's all there.
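If a minimal sketch helps, the core of that approach looks roughly like this in Swift, assuming an AVCaptureVideoDataOutput configured for kCVPixelFormatType_32BGRA and an existing EAGLContext (the class and method names here are mine, not part of the RosyWriter sample):

import AVFoundation
import CoreVideo
import OpenGLES

final class CameraTextureProvider: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private var textureCache: CVOpenGLESTextureCache?
    private var latestTexture: CVOpenGLESTexture?   // keep a reference while the GPU still uses it

    init?(context: EAGLContext) {
        super.init()
        // The cache is tied to the EAGLContext the textures will be used in.
        guard CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, context, nil, &textureCache) == kCVReturnSuccess else {
            return nil
        }
    }

    // Set this object as the sample buffer delegate of the AVCaptureVideoDataOutput.
    // EAGLContext.setCurrent(context) must be in effect on the queue this delegate runs on.
    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        var texture: CVOpenGLESTexture?
        // Creates a GL texture backed by the camera's pixel buffer, with no CPU-side copy.
        // GL_BGRA matches the kCVPixelFormatType_32BGRA output configured on the capture output.
        let err = CVOpenGLESTextureCacheCreateTextureFromImage(
            kCFAllocatorDefault, cache, pixelBuffer, nil,
            GLenum(GL_TEXTURE_2D), GL_RGBA,
            GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
            GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
            GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &texture)

        guard err == kCVReturnSuccess, let cameraTexture = texture else { return }
        latestTexture = cameraTexture

        // Bind it like any other texture and set filtering before sampling from it.
        glBindTexture(CVOpenGLESTextureGetTarget(cameraTexture),
                      CVOpenGLESTextureGetName(cameraTexture))
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MIN_FILTER), GL_LINEAR)
        glTexParameteri(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_MAG_FILTER), GL_LINEAR)

        // Flush so the cache can recycle textures that are no longer referenced.
        CVOpenGLESTextureCacheFlush(cache, 0)
    }
}

RosyWriter does essentially this, and additionally renders into pixel buffers obtained through a second texture cache before handing them to an AVAssetWriter.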
I am very new to the concept and use of shaders in SpriteKit.
I found this tutorial on how to render a Mandelbrot fractal with a custom shader file - Fractal.fsh - attached to a Color Sprite's Custom Shader property.
https://www.weheartswift.com/fractals-Xcode-6/
It works fine, and I thought to myself that learning about OpenGL ES and custom shaders in SpriteKit would be a fun exercise.
According to Apple though, OpenGL ES is deprecated as of iOS 12.
https://developer.apple.com/library/archive/documentation/3DDrawing/Conceptual/OpenGLES_ProgrammingGuide/BestPracticesforShaders/BestPracticesforShaders.html
My question is this:
Does this mean that custom shaders for use in SpriteKit should be written in Metal from now on?
I have tried to figure out how to rewrite the Fractal.fsh shader code referred to in the first link in Metal, but I have not yet been able to find any resources on converting existing custom SKShaders from OpenGL ES to Metal. To be clear, I am NOT looking for someone to rewrite that code in Metal, only a pointer in the right direction.
UPDATE:
Based on the answer from @Knight0fDragon, I will try to clarify my question:
The documentation on the SKShader class states that:
"An SKShader object holds a custom OpenGL ES fragment shader."
https://developer.apple.com/documentation/spritekit/skshader
So if an SKShader object holds a custom OpenGL ES fragment shader, what will it hold once support for OpenGL ES is dropped?
How would one go about creating a custom fragment shader for use in SpriteKit if one cannot use OpenGL ES as of iOS 12?
At first I thought that the *.fsh file containing the GLSL code could simply be replaced with a *.metal file containing equivalent Metal code, but that assessment was clearly too naive (I tried, and I couldn't assign the *.metal file to the Color Sprite's Custom Shader property).
From the documentation on "Executing Shaders in Metal and OpenGL":
On devices that support it, the GLSL code you provide to SKShader is automatically converted to Metal shading language and run on a Metal renderer.
So, as I understand it, SpriteKit will use Metal as a backend where it is available and convert your shaders for you when compiling them. I did not find an option to write the shaders directly in Metal.
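In other words, for now you still write the fragment shader in GLSL, hand the source to SKShader, and let SpriteKit translate it. A minimal sketch (the pulsing tint and the u_intensity uniform are made-up examples; u_texture, v_tex_coord and u_time are SpriteKit's built-in shader symbols):

import SpriteKit

let source = """
void main() {
    // SpriteKit supplies u_texture, v_tex_coord and u_time; u_intensity is a custom uniform.
    vec4 color = texture2D(u_texture, v_tex_coord);
    float pulse = 0.5 + 0.5 * sin(u_time);
    gl_FragColor = vec4(color.rgb * pulse * u_intensity, color.a);
}
"""

let shader = SKShader(source: source)
shader.uniforms = [SKUniform(name: "u_intensity", float: 1.0)]

let sprite = SKSpriteNode(color: .red, size: CGSize(width: 200, height: 200))
sprite.shader = shader   // on devices with Metal, SpriteKit converts this GLSL for you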
According to Apple (see near the bottom of the page at this link)...
Apps built using OpenGL ES will continue to run in iOS 12, but OpenGL ES is deprecated in iOS 12. Games and graphics-intensive apps that previously used OpenGL ES should now adopt Metal [emphasis added].
If you are starting from scratch, I suggest you write shaders in Metal.
I recently started building a game for iOS using my iPhone 5s for testing.
However, I recently decided to try it on my iPhone SE to see how it looks; all of the materials on all of the objects appear to be missing, though the trails of the particle effects and the line renderers still keep their color.
It could be a shader issue or a problem with the graphics API Unity is using. If you're not using the standard shader, make sure that your shader is compatible with mobile devices. Also make sure that it's included in the project by creating a folder named Resources and moving your shader into it.
If you're using one of the standard shaders that come with Unity, then the issue is likely not the shader but the selected graphics API. It's probably using Metal, which is causing the issue. Use OpenGL ES instead of Metal.
Disable Auto Graphics API, then change the iOS Graphics API to OpenGLES2 or OpenGLES3 in Unity's Player Settings.
How can I resize a Texture2D in SharpDX? I'm using SharpDX to duplicate the screen, and I use Media Foundation to encode the texture into a video file. My problem is that when I open an application in fullscreen that has a different resolution from the system resolution, I get a blank screen in my recording. Is there a way to resize the texture before encoding it with Media Foundation without hurting performance? I'm using hardware acceleration. Thanks.
It depends on how exactly you use Media Foundation, but you can use the Video Processor MFT explicitly. If you use IMFMediaSession/IMFTopology, you add this MFT to the topology; if you use the Sink Writer, you initialize the MFT and process the samples with it yourself. In that case you need to supply the DX device manager to the MFT, using MFT_MESSAGE_SET_D3D_MANAGER.
This MFT is only available on Windows 8 and higher. So if you need to support an older Windows version you can use Video Resizer, but it is not hardware accelerated.
Another option to resize the texture would be to create a render target of the desired size and draw the texture to it. After that you need to feed that render target to the encoder.
I am writing an iOS app for live object detection. I need to resize (scale) the video input and use the Metal framework afterwards, but I didn't find any API provided by Apple to do such a thing.
I can crop it but scaling it seems harder.
Does anyone know how to do it?
This question is about iOS. On Android, it is very easy to use OpenGL ES 2.0 to render a texture to a view (for previewing) or to send it to an encoder (for file writing). I haven't been able to find any iOS tutorial that achieves video playback (previewing a video effect from a file) and video recording (saving a video with an effect) with shader effects. Is this possible on iOS?
I've come across a shader demo called GLCameraRipple, but I have no clue how to use it more generically, e.g. with AVFoundation.
[EDIT]
I stumbled upon this tutorial about OpenGL ES, AVFoundation and video merging on iOS while searching for a snippet. That's another interesting entry point.
It's all very low-level stuff over in iOS land, with a whole bunch of pieces to connect.
The main thing you're likely to be interested in is CVOpenGLESTextureCache. As the CV prefix implies, it's part of Core Video; its primary point of interest here is CVOpenGLESTextureCacheCreateTextureFromImage, which "creates a live binding between the image buffer and the underlying texture object". The documentation further gives explicit advice on using such an image as a GL_COLOR_ATTACHMENT, i.e. the texture ID returned is usable both as a source and as a destination for OpenGL.
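For the destination direction, here is a small sketch in Swift of what that attachment looks like. The function name is mine, and renderTexture is assumed to be the CVOpenGLESTexture the cache created for the pixel buffer you intend to render into:

import CoreVideo
import OpenGLES

// Attach the texture that the cache created for a CVPixelBuffer to a framebuffer,
// so that subsequent draw calls render straight into that pixel buffer.
func bindAsRenderTarget(_ renderTexture: CVOpenGLESTexture, framebuffer: GLuint) {
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), framebuffer)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER),
                           GLenum(GL_COLOR_ATTACHMENT0),
                           CVOpenGLESTextureGetTarget(renderTexture),
                           CVOpenGLESTextureGetName(renderTexture),
                           0)
    // Anything drawn now lands directly in the CVPixelBuffer backing the texture.
    assert(glCheckFramebufferStatus(GLenum(GL_FRAMEBUFFER)) == GLenum(GL_FRAMEBUFFER_COMPLETE))
}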
The texture is bound to a CVImageBuffer, one concrete type of which is CVPixelBuffer. You can supply pixel buffers to an AVAssetWriterInputPixelBufferAdaptor wired to an AVAssetWriter in order to write them out to a video file.
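Sketched in Swift, that writing side might look something like this (the sizes, settings and helper names are illustrative assumptions, not a fixed recipe):

import AVFoundation
import CoreVideo

// Rough sketch of the writing side, assuming 1280x720 BGRA output.
func makeWriter(to url: URL) throws -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: 1280,
        AVVideoHeightKey: 720
    ])
    input.expectsMediaDataInRealTime = true
    // The adaptor owns a pixel buffer pool; after writer.startWriting() and
    // writer.startSession(atSourceTime:), adaptor.pixelBufferPool vends CVPixelBuffers
    // that can be bound through the texture cache and rendered into
    // (the GL_COLOR_ATTACHMENT use mentioned above).
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: 1280,
            kCVPixelBufferHeightKey as String: 720
        ])
    writer.add(input)
    return (writer, adaptor)
}

func appendFrame(_ pixelBuffer: CVPixelBuffer,
                 at time: CMTime,
                 using adaptor: AVAssetWriterInputPixelBufferAdaptor) {
    // Only append once OpenGL has finished rendering into the buffer (glFinish or a fence).
    if adaptor.assetWriterInput.isReadyForMoreMediaData {
        let ok = adaptor.append(pixelBuffer, withPresentationTime: time)
        assert(ok, "append failed; inspect the AVAssetWriter's error")
    }
}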
In the other direction, an AVAssetReaderOutput attached to an AVAssetReader will vend CMSampleBuffers, which can be queried for attached image buffers (if you've got video coming in and not just audio, there'll be some) that can then be mapped into OpenGL via a texture cache.
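And the reading side, sketched under the assumption that you want BGRA pixel buffers you can feed straight into the same texture cache (the helper names are mine):

import AVFoundation
import CoreMedia

// Pull decoded frames from a file as BGRA pixel buffers.
func makeVideoReader(for asset: AVAsset) throws -> (AVAssetReader, AVAssetReaderTrackOutput)? {
    guard let track = asset.tracks(withMediaType: .video).first else { return nil }
    let reader = try AVAssetReader(asset: asset)
    let output = AVAssetReaderTrackOutput(track: track, outputSettings: [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
    ])
    reader.add(output)
    guard reader.startReading() else { return nil }
    return (reader, output)
}

func nextPixelBuffer(from output: AVAssetReaderTrackOutput) -> CVPixelBuffer? {
    guard let sample = output.copyNextSampleBuffer() else { return nil }  // nil at end of file
    return CMSampleBufferGetImageBuffer(sample)                           // the attached image buffer
}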