iOS framework: MTLLibrary newFunctionWithName fails when cikernel flag is set on client code

I have an iOS framework using Metal; it sets up a pipeline descriptor as follows:
...
_metalVars.mtlLibrary = [_metalVars.mtlDevice newLibraryWithFile:libraryPath error:&error];
id <MTLFunction> vertexFunction = [_metalVars.mtlLibrary newFunctionWithName:@"vertexShader"];
id <MTLFunction> fragmentFunction = [_metalVars.mtlLibrary newFunctionWithName:@"fragmentShader"];
MTLRenderPipelineDescriptor *pipelineStateDescriptor = [[MTLRenderPipelineDescriptor alloc] init];
pipelineStateDescriptor.label = @"MyPipeline";
pipelineStateDescriptor.vertexFunction = vertexFunction;
pipelineStateDescriptor.fragmentFunction = fragmentFunction;
pipelineStateDescriptor.vertexDescriptor = mtlVertexDescriptor;
pipelineStateDescriptor.colorAttachments[0].pixelFormat = MTLPixelFormatRGBA8Unorm;
_metalVars.mtlPipelineState = [_metalVars.mtlDevice newRenderPipelineStateWithDescriptor:pipelineStateDescriptor error:&error];
...
This framework is used by an iOS app, and has been working fine until the flag
-fcikernel
was set under Other Metal Compiler Flags, and
-cikernel
under Other Metal Linker Flags, in the client app's Build Settings.
(flags set on client, per https://developer.apple.com/documentation/coreimage/cikernel/2880194-kernelwithfunctionname.)
With those changes in the client app, the framework code above now fails on the last line, at the call to newRenderPipelineStateWithDescriptor, with the error
validateWithDevice:2556: failed assertion `Render Pipeline Descriptor Validation
vertexFunction must not be nil
and I've verified that the framework is now returning nil for the vertex function at
id <MTLFunction> vertexFunction = [_metalVars.mtlLibrary newFunctionWithName:@"vertexShader"];
but works fine if I remove the flags on the client app.
Questions:
1.) I understood that Metal shaders were precompiled; why would changing a build setting in the client code cause this runtime failure in the unchanged framework?
2.) How can this be resolved?

Those flags are meant to be used when compiling Core Image kernels written in Metal. They change the way the code is translated in a way that is not compatible with "regular" Metal shader code.
The problem with setting the Core Image flags on the project level is that all .metal files are affected—regardless of whether they contain Core Image kernels or regular shader code.
The best way around this is to use a different file extension for files containing Core Image Metal kernels, such as .ci.metal. Custom build rules can then be used to compile those kernels separately from the rest of the Metal codebase.
I recommend this session from WWDC 2020 which describes the process in detail.
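The build-rule approach boils down to two toolchain invocations per `.ci.metal` file: compile with the Core Image flag, then link the result into its own metallib, separate from the app's default.metallib. Here is a sketch of such a build-rule script; the Xcode build-rule variables (`INPUT_FILE_PATH`, `INPUT_FILE_BASE`, `DERIVED_FILE_DIR`, `METAL_LIBRARY_OUTPUT_DIR`) are standard, but treat the exact output paths as placeholders to adapt to your project:

```shell
# Custom build rule script for source files matching *.ci.metal
# (runs once per matching input file).

# Compile the Core Image kernel source with -fcikernel ...
xcrun metal -c -fcikernel "${INPUT_FILE_PATH}" \
    -o "${DERIVED_FILE_DIR}/${INPUT_FILE_BASE}.air"

# ... and link it with -cikernel into its own metallib, so the
# regular shaders keep compiling without the Core Image flags.
xcrun metallib -cikernel "${DERIVED_FILE_DIR}/${INPUT_FILE_BASE}.air" \
    -o "${METAL_LIBRARY_OUTPUT_DIR}/${INPUT_FILE_BASE}.metallib"
```

With the flags scoped to the rule instead of the whole target, the framework's regular shader library compiles normally again and newFunctionWithName: stops returning nil.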

Related

Replacement for a custom CIFilter in iOS 12.

Since iOS 12, CIColorKernel(source: "kernel string") is deprecated. Does anybody know what Apple's replacement for it is?
I am searching for a way to write a custom CIFilter in Swift. Maybe there is an open-source library?
It was announced back at WWDC 2017 that custom filters can also be written with Metal Shading Language -
https://developer.apple.com/documentation/coreimage/writing_custom_kernels
So now apparently they are getting rid of Core Image Kernel Language altogether.
Here's a quick intro to writing a CIColorKernel with Metal -
https://medium.com/@shu223/core-image-filters-with-metal-71afd6377f4
Writing kernels with Metal is actually easier, the only gotcha is that you need to specify 2 compiler flags in the project (see the article above).
I attempted to follow along with these blog posts and the apple docs, but this integration between CoreImage and Metal is quite confusing. After much searching, I ended up creating an actual working example iOS app that demonstrates how to write a Metal kernel grayscale function and have it process the CoreImage pipeline.
You can use it like this:
let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
let data = try! Data(contentsOf: url)
let kernel = try! CIKernel(functionName: "monochrome", fromMetalLibraryData: data)
let sampler = CISampler(image: inputImage)
let outputImage = kernel.apply(extent: inputImage.extent, roiCallback: { _, rect in rect }, arguments: [sampler])
According to Apple:
"You need to set these flags to use MSL as the shader language for a CIKernel. You must specify some options in Xcode under the Build Settings tab of your project's target. The first option you need to specify is an -fcikernel flag in the Other Metal Compiler Flags option. The second is to add a user-defined setting with a key called MTLLINKER_FLAGS with a value of -cikernel."
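In xcconfig form, Apple's two options correspond to something like the following fragment (MTL_COMPILER_FLAGS is the xcconfig name behind Other Metal Compiler Flags; verify the exact keys against your Xcode version):

```
MTL_COMPILER_FLAGS = -fcikernel
MTLLINKER_FLAGS = -cikernel
```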

Metal Shader Debugging - Capture GPU Frame

I want to debug my Metal shader, but the "Capture GPU Frame" button is not visible and unavailable in the debug menu.
My scheme was initially set up like this:
Capture GPU Frame: Automatically Enabled
Metal API Validation: Enabled
However, when I change the Capture GPU Frame option to Metal, I do see the capture button, but now my app crashes when I try to make the render command encoder:
commandBuffer.makeRenderCommandEncoder(descriptor: ...)
validateRenderPassDescriptor:644: failed assertion `Texture at colorAttachment[0] has usage (0x01) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
Question one: Why do I need to specify the usage? (It works in Automatically Enabled mode)
Question two: How do I specify the MTLTextureUsageRenderTarget?
Running betas; Xcode 10 and iOS 12.
With newer versions of Xcode you need to explicitly set MTLTextureDescriptor.usage, for my case (a render target) that looks like this:
textureDescriptor.usage = MTLTextureUsageRenderTarget|MTLTextureUsageShaderRead;
The above setting indicates that the texture can be used as a render target and can also be read by another shader afterwards. As mentioned above, you may also want to set the framebufferOnly property; here is how I do that:
if (isCaptureRenderedTextureEnabled) {
    mtkView.framebufferOnly = false;
}
Note that framebufferOnly is left at its default of true for the optimized case (isCaptureRenderedTextureEnabled = false). Setting it to false makes it easy to inspect the data that will be rendered in the view (the output of the shader).
Specify usage purpose
textureDescriptor.usage = [.renderTarget , .shaderRead]
or
textureDescriptor.usage = MTLTextureUsage(rawValue: MTLTextureUsage.renderTarget.rawValue | MTLTextureUsage.shaderRead.rawValue)
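Putting the pieces together, a minimal Swift sketch of creating a render-target texture with the right usage flags might look like this (the pixel format and dimensions are placeholders, not anything the question prescribes):

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm,    // placeholder format
    width: 1024, height: 1024,   // placeholder size
    mipmapped: false)
// Allow the texture to be rendered into and then read by another shader.
descriptor.usage = [.renderTarget, .shaderRead]
let texture = device.makeTexture(descriptor: descriptor)
```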

Mixed topology (quad/tri) with ModelIO

I'm importing some simple OBJ assets using ModelIO like so:
let mdlAsset = MDLAsset(url: url, vertexDescriptor: nil, bufferAllocator: nil, preserveTopology: true, error: nil)
... and then adding them to a SceneKit SCN file. But, whenever I have meshes that have both quads/tris (often the case, for example eyeball meshes), the resulting mesh is jumbled:
Incorrect mesh topology
Re-topologizing isn't a good option since I sometimes have low-poly meshes with very specific topology, so I can't just set preserveTopology to false... I need a result with variable topology (i.e. MDLGeometryType.variableTopology).
How do I import these files correctly preserving their original topology?
I reported this as a bug at Apple Bug Reporter on 25th of November, bug id: 35687088
Summary: SCNSceneSourceLoadingOptionPreserveOriginalTopology does not actually preserve the original topology. Instead, it converts the geometry to all quads, messing up the 3D model badly. Based on its name it should behave exactly like preserveTopology of Model IO asset loading.
Steps to Reproduce: Load an OBJ file that has both triangles and polygons using SCNSceneSourceLoadingOptionPreserveOriginalTopology and load the same file into an MDLMesh using preserveTopology of ModelIO. Notice how it only works properly for the latter. Even when you create a new SCNGeometry based on the MDLMesh, it will "quadify" the mesh again to contain only quads (while it should support 3-gons and up).
On December 13th I received a reply with a request for sample code and assets, which I supplied two days later. I have not received a reply since (hopefully just because they are busy catching up after the holiday season...).
As I mentioned in my bug report's summary, loading the asset with Model I/O does work properly, but then when you create a SCNNode based on that MDLMesh it ends up messing up the geometry again.
In my case the OBJ files I load have a known format, as they are always files exported by my app (no normals, colors, or UVs). So what I do is load the information of the MDLMesh (buffers, face topology, etc.) manually into arrays, from which I then create an SCNGeometry by hand. I don't have a complete, separate piece of code for that, as it is long and mixed with code specific to my app, and it's in Objective-C. But to illustrate:
NSError *scnsrcError;
MDLAsset *asset = [[MDLAsset alloc] initWithURL:objURL vertexDescriptor:nil bufferAllocator:nil preserveTopology:YES error:&scnsrcError];
NSLog(@"%@", scnsrcError.localizedDescription);
MDLMesh *newMesh = (MDLMesh *)[asset objectAtIndex:0];
for (MDLSubmesh *faces in newMesh.submeshes) {
    //MDLSubmesh *faces = newMesh.submeshes.firstObject;
    MDLMeshBufferData *topo = faces.topology.faceTopology;
    MDLMeshBufferData *vertIx = faces.indexBuffer;
    MDLMeshBufferData *verts = newMesh.vertexBuffers.firstObject;
    int faceCount = (int)faces.topology.faceCount;
    // Each entry holds the number of vertices in one face.
    int8_t *faceIndexValues = malloc(faceCount * sizeof(int8_t));
    memcpy(faceIndexValues, topo.data.bytes, faceCount * sizeof(int8_t));
    int32_t *vertIndexValues = malloc(faces.indexCount * sizeof(int32_t));
    memcpy(vertIndexValues, vertIx.data.bytes, faces.indexCount * sizeof(int32_t));
    SCNVector3 *vertValues = malloc(newMesh.vertexCount * sizeof(SCNVector3));
    memcpy(vertValues, verts.data.bytes, newMesh.vertexCount * sizeof(SCNVector3));
    ....
    ....
}
In short, the preserveTopology option in SceneKit isn't working properly. To get from the working version in Model I/O to SceneKit I basically had to write my own converter.
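For the SceneKit side of such a converter, SCNGeometryElement can represent mixed topology directly via the .polygon primitive type, whose element data begins with the per-face vertex counts followed by all the indices. A hedged Swift sketch under that documented layout (the sample faces and variable names are illustrative, not from the answer's code):

```swift
import SceneKit

// One quad (4 vertices) and one triangle (3 vertices) sharing a vertex pool.
let polygonSizes: [Int32] = [4, 3]                // vertices per face
let indices: [Int32] = [0, 1, 2, 3,  0, 3, 4]     // quad indices, then triangle indices

// For .polygon, the element data is the size list followed by the index list.
var elementData = Data()
polygonSizes.withUnsafeBufferPointer { elementData.append(Data(buffer: $0)) }
indices.withUnsafeBufferPointer { elementData.append(Data(buffer: $0)) }

let element = SCNGeometryElement(
    data: elementData,
    primitiveType: .polygon,
    primitiveCount: polygonSizes.count,
    bytesPerIndex: MemoryLayout<Int32>.size)
// Combine with an SCNGeometrySource of positions to build the geometry:
// let geometry = SCNGeometry(sources: [positionSource], elements: [element])
```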

Why doesn't CFBundleGetDataPointerForName work for iOS 8-style frameworks?

Typically in iOS you can load constants at runtime using CFBundleGetDataPointerForName. However, I'm finding that CFBundleGetDataPointerForName always returns NULL pointers for constants defined in iOS-8 style frameworks, like the ones discussed in WWDC 2014 Session 416: Building Modern Frameworks.
To replicate, I'm just creating a framework with the Xcode framework template, adding a build script to make a universal binary, and dragging the created framework into another project. Then I try to load a constant from it like so:
void *versionNumberConstant = CFBundleGetDataPointerForName(CFBundleGetMainBundle(), CFSTR("TestFramework2VersionNumber"));
NSLog(@"Version number was %@", versionNumberConstant ? @"PRESENT" : @"NULL"); // Prints NULL
For replication purposes, this repo has the framework project, and this repo tries to load a constant from that framework using CFBundleGetDataPointerForName.
I can also replicate this issue with frameworks other developers have made, such as AdMob's SDK.
Just in case it's an issue with the pointer not being in CFBundleGetMainBundle(), I've also looped through every CFBundleRef and can't load the constant from any of them:
CFArrayRef array = CFBundleGetAllBundles();
for (int i = 0; i < CFArrayGetCount(array); i++) {
    CFBundleRef bundleRef = (CFBundleRef)CFArrayGetValueAtIndex(array, i);
    void *versionPointer = CFBundleGetDataPointerForName(bundleRef, CFSTR("TestFramework2VersionNumber"));
    NSLog(@"Version pointer is %@", versionPointer ? @"PRESENT" : @"NULL");
}
Does anyone know why this is, or if there's some build setting I can change to fix it?
Version info:
Xcode: Version 6.2 6C131e
iOS: 8.2 (12D508)
Device: iPhone 6 / iPhone 6 Simulator
Ok, the issue was that I needed to load the specific bundle for that framework, like so:
NSURL *testFramework2URL = [[[[NSBundle mainBundle] resourceURL] URLByAppendingPathComponent:@"Frameworks" isDirectory:YES] URLByAppendingPathComponent:@"TestFramework2.framework"];
CFBundleRef bundleRef = CFBundleCreate(NULL, (__bridge CFURLRef)testFramework2URL);
NSLog(@"Bundle Ref = %@", bundleRef);
double *manualVersion = CFBundleGetDataPointerForName(bundleRef, CFSTR("TestFramework2VersionNumber"));
NSLog(@"Manual version = %@", manualVersion ? @"present" : @"NULL"); // Manual version = present
double version = *manualVersion;
NSLog(@"Version = %g", version); // Version = 1
I can also achieve this using CFBundleGetAllBundles(), despite what I said in the question (I must have made a mistake when originally using CFBundleGetAllBundles()).

OpenCV and iPhone

I am writing an application to create a movie file from a bunch of images on an iPhone, using OpenCV. I downloaded OpenCV static libraries for ARM (the iPhone's native instruction architecture) and the libraries were generated just fine. There were no problems linking to the libraries.
As a first step, I was trying to create an .avi file from a single image, to see if it works. But cvCreateVideoWriter always returns NULL. I did some searching and I believe it's due to a missing codec. I am trying this on the iPhone simulator. This is what I do:
- (void)viewDidLoad {
    [super viewDidLoad];
    UIImage *anImage = [UIImage imageNamed:@"1.jpg"];
    IplImage *img_color = [self CreateIplImageFromUIImage:anImage];
    // The image gets created just fine
    CvVideoWriter *writer =
        cvCreateVideoWriter("out.avi", CV_FOURCC('P','I','M','1'),
                            25, cvSize(320,480), 1);
    // writer is always NULL
    int result = cvWriteFrame(writer, img_color);
    NSLog(@"\n%d", result);
    // hence this is also 0 all the time
    cvReleaseVideoWriter(&writer);
}
I am not sure about the second parameter: what sort of codec does it select, and what exactly does it do?
I am a n00b at this. Any suggestions?
On *nix flavors, OpenCV uses ffmpeg under the covers to encode video files, so you need to make sure your static libraries are built with ffmpeg support. The second parameter, CV_FOURCC('P','I','M','1'), is the FOURCC code describing the video format/codec you are requesting, in this case the MPEG1 codec. Check out fourcc.org for a complete listing (not all of which work in ffmpeg).
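As background, the FOURCC code is nothing more than four ASCII character bytes packed into one little-endian 32-bit integer, which is all the CV_FOURCC macro does. A minimal Swift equivalent (the function name is illustrative, not an OpenCV API):

```swift
// Pack a four-character code into a little-endian 32-bit integer,
// matching OpenCV's CV_FOURCC('P','I','M','1') style macro.
func fourcc(_ code: String) -> UInt32 {
    precondition(code.utf8.count == 4, "a FOURCC is exactly four characters")
    return code.utf8.enumerated().reduce(0) { value, pair in
        value | UInt32(pair.element) << (8 * UInt32(pair.offset))
    }
}

let mpeg1 = fourcc("PIM1")  // same value as CV_FOURCC('P','I','M','1')
```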
