MTLTexture only showing one color of the image - ios

I was following raywenderlich's Metal tutorial, but got stuck rendering a texture on a plane: it seems to show only one color of the image, not the entire image. I'm running on an iPad with iOS 12.3.
Here's a repo for the project: https://github.com/TheJoseph-Dev/MyMetalProgram
Can anyone help me?

In your Renderer implementation, set a breakpoint on the line that reads:
private lazy var device = metalView.device
And run your code.
At the point at which this line is executed, the metalView exists, but the device on that metalView is nil. Similar problems affect the renderer's other lazy properties.
You may wish to use a less complex property style, as the properties are not being initialized when the view is in the state you expect. I suspect that the view will not create resources like its device until it is attached to a window, which happens after viewDidLoad.
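A minimal sketch of one way around this (names are illustrative, not the tutorial's exact code): create the device yourself and hand it to the view, rather than lazily reading metalView.device before it has been set:

    import MetalKit

    final class Renderer: NSObject {
        let device: MTLDevice
        let commandQueue: MTLCommandQueue

        init?(metalView: MTKView) {
            // Create the device explicitly instead of reading metalView.device,
            // which is still nil at this point in the view's lifecycle.
            guard let device = MTLCreateSystemDefaultDevice(),
                  let queue = device.makeCommandQueue() else { return nil }
            metalView.device = device
            self.device = device
            self.commandQueue = queue
            super.init()
        }
    }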

Related

AudioKit shouldOptimizeForRealtimePlot causing plot to go blank

I've got an iOS project set up in Xcode utilizing AudioKit to display a realtime plot of a chosen onboard microphone on the iOS device.
I am currently attempting to set the shouldOptimizeForRealtimePlot = true flag on an AKNodeOutputPlot, but I've found that if I add that line of code at all, whether it's set to true or false (in the ViewController in which the plot is displayed, or as a key-value in IB), the plot no longer displays.
If I remove the line of code, my plot resumes functionality as normal.
Has anyone else run into this?
My plot is set up mostly in IB; I pretty much just use the VC to set the input node and pause/resume the plot, but if you'd like to see any extra code, or a screenshot of my setup in IB, I'm happy to oblige.
Thank you for your assistance.
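For reference, the setup described above looks roughly like this (outlet and node names are illustrative, since the plot itself lives in IB):

    import UIKit
    import AudioKit
    import AudioKitUI

    class ViewController: UIViewController {
        @IBOutlet weak var plot: AKNodeOutputPlot!
        let mic = AKMicrophone()

        override func viewDidLoad() {
            super.viewDidLoad()
            plot.node = mic
            // Adding this line at all, whether true or false, blanks the plot:
            plot.shouldOptimizeForRealtimePlot = true
        }
    }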

Swift - Strange text rendering at runtime

I am working on an iOS app right now. I usually use the Interface Builder and recently it has been producing an issue. When the storyboard is seen in Xcode the labels on everything look fine, but some UI elements don't render properly at runtime. This has also happened for some of the images in this project. The only font used is the system font.
I have tried readjusting the font and using attributed text, but this hasn't worked for the specific label shown. Just to mention, this app is a shared project through Git so it may be an issue with it pulling incorrectly or something, but that seems odd for it to affect the text after it has been changed and adjusted.
(Screenshot of the fuzzy label text posted on imgur.com.)
The result should be crystal-clear text on iOS, but instead it's the kind of "fuzzy" text you would expect from running Windows XP on a 480p screen. What could be causing this issue?
It seems like the layer of a superview of the UILabel in the provided image is set to pre-render. This is good for performance reasons but may not always look as good, as is seen here.
If you're setting a custom layer on a superview of the UILabel, try setting the layer's shouldRasterize to false.
Ex: exampleLayer.shouldRasterize = false
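Not from the original answer, but worth knowing: if you do need rasterization for performance, you can instead keep it enabled and match the rasterization scale to the screen, so the bitmap is rendered at Retina resolution rather than the default scale of 1.0:

    // Keep rasterization, but render the cached bitmap at the screen's scale:
    exampleLayer.shouldRasterize = true
    exampleLayer.rasterizationScale = UIScreen.main.scale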

How to set preferredRenderingAPI for ARSCNView in Xcode IB

In my ARKit application, ARSCNView is initialized and attached to the ViewController internally, based on the storyboard file structure. So in viewDidLoad I already have an initialized view.
The problem is that it uses the default rendering API, Metal, but I would like to change it to OpenGL ES2. In Apple's documentation I read that preferredRenderingAPI can somehow be changed in the IB inspector, but I don't understand how, and there don't seem to be any examples of how to do this.
Or can I still change it from code, even though the view is initialized from IB?
Just open your Main.storyboard (or whatever you have) in Xcode and navigate to the node Scene View of type SCNView in there. The Attributes inspector should allow you to change the renderer (Rendering API).
I've just tested it, but my project didn't seem to work the same as it did with Metal.
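If you'd rather try it from code, note that the rendering API is an init-time option, so the view has to be created programmatically rather than reconfigured after IB instantiates it. A sketch (assumed to run inside a view controller; also note that ARKit itself requires Metal on supported devices, so OpenGL ES may simply not take effect):

    import SceneKit
    import ARKit

    // e.g. in viewDidLoad, replacing the storyboard-created view:
    let options: [String: Any] = [
        SCNView.Option.preferredRenderingAPI.rawValue: SCNRenderingAPI.openGLES2.rawValue
    ]
    let sceneView = ARSCNView(frame: view.bounds, options: options)
    view.addSubview(sceneView)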

Inspect SpriteKit Scene at Runtime

I am generating my levels in SpriteKit via code (in Swift), and it's not working out entirely how I'd like. Is there a way to inspect the scene in the simulator at runtime, e.g. so I can view how things are being placed outside the visible screen?
Thanks!
Believe me, building levels via code is rarely a good idea. There are exceptions, but if you need static levels you should find another way to define them.
How can you see your entire scene?
AFAIK you can't with Xcode 7; however, you can change how the scene is resized in order to show the full scene inside the view. Depending on the size of your level, you can judge whether this solution fits your case.
Just open GameViewController.swift and change this
scene.scaleMode = .AspectFill
into this
scene.scaleMode = .AspectFit
Now your scene will be compressed to fit the size of the screen. Of course don't forget to restore the original value once you are done.
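In context, the change lives in GameViewController's viewDidLoad; a minimal sketch using the Swift 2 / Xcode 7 era API discussed above:

    override func viewDidLoad() {
        super.viewDidLoad()
        if let scene = GameScene(fileNamed: "GameScene") {
            let skView = self.view as! SKView
            // Temporarily show the whole scene, compressed to fit the view:
            scene.scaleMode = .AspectFit
            skView.presentScene(scene)
        }
    }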

'Capture GPU frame' first frame for iOS app

My application performs several rendering operations on the first frame (I am using Metal, although I think the same applies to GLES). For example, it renders to targets that are used in subsequent frames, but not updated after that. I am trying to debug some of draw calls from these rendering operations, and I would like to use the 'GPU Capture Frame' functionality to do so. I have used it in the past for on-demand GPU frame debugging, and it is very useful.
Unfortunately, I can't seem to find a way to capture the first frame. For example, this option is unavailable when broken in the debugger (setting a breakpoint before the first frame). The Xcode behaviors also don't seem to allow for capturing the frame once debugging starts. There also doesn't appear to even be an API for performing GPU captures, in Metal APIs or the CAMetalLayer.
Has anybody done this successfully?
I've come across this again, and figured it out properly now. I'll add this as a separate answer, since it's a completely different approach from my other answer.
First, some background. There are three components to capturing a GPU frame:
1. Telling Xcode that you want to capture a GPU frame. In typical documented use, you do this manually by clicking the GPU Frame Capture "camera" button in Xcode.
2. Indicating the start of the next frame to capture. Normally, this occurs at the next occurrence of MTLCommandBuffer presentDrawable:, which is invoked to present the framebuffer to the underlying view.
3. Indicating the end of the frame being captured. Normally, this occurs at the next-but-one occurrence of MTLCommandBuffer presentDrawable:.
In capturing the first frame, or activity before the first frame, only the third of these is available, so we need an alternate way to perform the first two items:
To tell Xcode to begin capturing a frame, add a breakpoint in Xcode at a line in your code somewhere before the point at which you want to start capturing a frame. Right-click the breakpoint, select Edit Breakpoint... from the pop-up menu, and add a Capture GPU Frame action to the breakpoint.
To indicate the start of the frame to capture, before the first occurrence of MTLCommandBuffer presentDrawable:, you can use the MTLCommandQueue insertDebugCaptureBoundary method. For example, you could invoke this method as soon as you instantiate the MTLCommandQueue, to immediately begin capturing everything submitted to the queue. Make sure the breakpoint in item 1 will be triggered before the point this code is invoked.
To indicate the end of the captured frame, you can either rely on the first normal occurrence of MTLCommandBuffer presentDrawable:, or you can add a second invocation of MTLCommandQueue insertDebugCaptureBoundary.
Finally, the MTLCommandQueue insertDebugCaptureBoundary method does not actually cause the frame to be captured. It just marks a boundary point, so you can leave it in your code for future debugging use. Wrap it in a DEBUG compilation conditional if you want it gone from production code.
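Putting items 2 and 3 together, a minimal sketch in Swift (assuming you create the queue at startup):

    import Metal

    let device = MTLCreateSystemDefaultDevice()!
    let commandQueue = device.makeCommandQueue()!

    #if DEBUG
    // Marks the start of the "frame" Xcode will capture once the
    // breakpoint's Capture GPU Frame action has fired.
    commandQueue.insertDebugCaptureBoundary()
    #endif

    // ... encode and commit the first-frame rendering work here ...

    #if DEBUG
    // Marks the end of the captured range (or let the first
    // presentDrawable: serve as the end boundary instead).
    commandQueue.insertDebugCaptureBoundary()
    #endif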
Try...
[myMTLCommandEncoder insertDebugSignpost:@"com.apple.GPUTools.event.debug-frame"];
To be honest, I haven't tried it myself, but it's analogous to the similar
glInsertEventMarkerEXT(0, "com.apple.GPUTools.event.debug-frame")
documented for OpenGL ES, and there is some mention on the web of it working for Metal.
First, some context: I usually use Metal for parallel compute, and in that case the GPU Capture Frame button is always grey. There are two ways I've found so far that work.
In iOS 11:
you can use [[MTLCaptureManager sharedCaptureManager] startCaptureWithDevice:m_Device]; to capture a frame, so you can profile compute shader performance.
Lower than iOS 11 (MTLCaptureManager and MTLCaptureScope are new in iOS 11.0):
you can use a breakpoint, then edit its action to Capture GPU Frame.
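For reference, the MTLCaptureManager route looks roughly like this in Swift (iOS 11+):

    import Metal

    let device = MTLCreateSystemDefaultDevice()!
    let captureManager = MTLCaptureManager.shared()

    captureManager.startCapture(device: device)
    // ... encode and commit the compute work you want to inspect ...
    captureManager.stopCapture()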
