I'm currently trying to draw a graphic that will be animated using Metal in Swift. I have successfully drawn a single frame of my graphic. The graphic is simple, as you can see from this image. What I can't figure out is how to multisample the drawing. There seem to be few references on Metal in general, especially regarding the Swift syntax.
self.metalLayer = CAMetalLayer()
self.metalLayer.device = self.device
self.metalLayer.pixelFormat = .BGRA8Unorm
self.metalLayer.framebufferOnly = true
self.metalLayer.frame = self.view.frame
self.view.layer.addSublayer(self.metalLayer)
self.renderer = SunRenderer(device: self.device, frame: self.view.frame)
let defaultLibrary = self.device.newDefaultLibrary()
let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment")
let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex")
let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperation.Add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperation.Add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactor.SourceAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactor.SourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactor.OneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactor.OneMinusSourceAlpha
The question: how do I smooth these edges?
UPDATE:
I have implemented a multisample texture and set the sampleCount to 4, but I don't notice any difference, so I suspect I did something wrong.
FINAL:
So, in the end, it does appear the multisampling works. Initially, I had extra vertices wrapping these "rays" with a 0 alpha, which is a trick to make smoother edges. With those vertices, multisampling didn't seem to improve the edges. When I reverted to 4 vertices per ray, the multisampling improved their edges.
let defaultLibrary = self.device.newDefaultLibrary()
let fragmentProgram = defaultLibrary!.newFunctionWithName("basic_fragment")
let vertexProgram = defaultLibrary!.newFunctionWithName("basic_vertex")
let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .BGRA8Unorm
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = true
pipelineStateDescriptor.sampleCount = 4
pipelineStateDescriptor.colorAttachments[0].rgbBlendOperation = MTLBlendOperation.Add
pipelineStateDescriptor.colorAttachments[0].alphaBlendOperation = MTLBlendOperation.Add
pipelineStateDescriptor.colorAttachments[0].sourceRGBBlendFactor = MTLBlendFactor.SourceAlpha
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactor.SourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationRGBBlendFactor = MTLBlendFactor.OneMinusSourceAlpha
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactor.OneMinusSourceAlpha
let desc = MTLTextureDescriptor()
desc.textureType = MTLTextureType.Type2DMultisample
desc.width = Int(self.view.frame.width)
desc.height = Int(self.view.frame.height)
desc.sampleCount = 4
desc.pixelFormat = .BGRA8Unorm
desc.usage = [.RenderTarget] // note: Metal validation requires render-target usage for a colour attachment
self.sampletex = self.device.newTextureWithDescriptor(desc)
// When rendering
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = sampletex
renderPassDescriptor.colorAttachments[0].resolveTexture = drawable.texture
renderPassDescriptor.colorAttachments[0].loadAction = .Clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 23/255.0, green: 26/255.0, blue: 31/255.0, alpha: 0.0)
renderPassDescriptor.colorAttachments[0].storeAction = .MultisampleResolve
let commandBuffer = commandQueue.commandBuffer()
let renderEncoder = commandBuffer.renderCommandEncoderWithDescriptor(renderPassDescriptor)
renderEncoder.setRenderPipelineState(pipelineState)
This is substantially simpler with MTKView (just set sampleCount to your desired number of MSAA samples on both the view and the pipeline descriptor, as sketched below), but here are the steps for rolling your own.
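The MTKView route, as a minimal sketch (the view and device names are assumptions, not the asker's code):

import MetalKit

let mtkView = MTKView(frame: view.bounds, device: device)
mtkView.sampleCount = 4                   // MSAA sample count for the view's render pass
pipelineStateDescriptor.sampleCount = 4   // must match the view's sampleCount

MTKView then creates the multisample texture and resolves it into the drawable for you via its currentRenderPassDescriptor. The manual steps follow.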
When creating a render pipeline state, set the sampleCount of your render pipeline state descriptor to your multisample count.
At startup, and whenever the layer resizes, create a multisample texture with dimensions equal to your layer's drawable size by creating a texture descriptor whose textureType is MTLTextureType2DMultisample and whose sampleCount is your multisample count. If you are using a depth and/or stencil buffer, set these properties on their descriptors as well.
When rendering, set the MSAA texture as the texture of the render pass descriptor's primary color attachment, and set the current drawable's texture as the resolveTexture.
Set the storeAction of the color attachment to MTLStoreActionMultisampleResolve so that the MSAA texture is resolved into the renderbuffer at the end of the pass.
Draw and present as you normally would.
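As a companion to step 2, here is a minimal modern-Swift sketch of (re)creating the MSAA texture at the drawable size (assuming a CAMetalLayer and a sample count of 4; names are placeholders):

func makeMultisampleTexture(device: MTLDevice, layer: CAMetalLayer, sampleCount: Int = 4) -> MTLTexture? {
    // Call at startup and again whenever the layer's drawableSize changes
    let desc = MTLTextureDescriptor()
    desc.textureType = .type2DMultisample
    desc.pixelFormat = layer.pixelFormat          // must match the pipeline's colour format
    desc.width = Int(layer.drawableSize.width)
    desc.height = Int(layer.drawableSize.height)
    desc.sampleCount = sampleCount
    desc.usage = .renderTarget                    // required for use as a colour attachment
    desc.storageMode = .private                   // GPU-only; the resolve texture is what gets read
    return device.makeTexture(descriptor: desc)
}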
Related
I'm trying to achieve this mosaic light show effect for my background view with the CAReplicatorLayer object:
https://downloops.com/stock-footage/mosaic-light-show-blue-illuminated-pixel-grid-looping-background/
Each tile/CALayer is a single image that was replicated horizontally & vertically. That part I have done.
It seems to me this task is broken into at least 4 separate parts:
Pick a random tile
Select a random range of color offset for the selected tile
Apply that color offset over a specified duration in seconds
If the random color offset exceeds a specific threshold then apply a glow effect with the color offset animation.
But I'm not actually sure this would be the correct algorithm (see the sketch below).
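Worth noting for step 1: CAReplicatorLayer instances are not real, individually addressable layers, so picking a random tile requires a plain grid of CALayers instead. A minimal sketch of steps 1-3 under that assumption, with hypothetical names:

func animateRandomTile(in tiles: [CALayer], duration: CFTimeInterval) {
    // 1. Pick a random tile
    guard let tile = tiles.randomElement() else { return }
    // 2. Pick a random target colour (a stand-in for "random colour offset")
    let target = UIColor(hue: .random(in: 0...1), saturation: 0.9, brightness: 0.9, alpha: 1).cgColor
    // 3. Animate to it over the given duration, then reverse back
    let anim = CABasicAnimation(keyPath: "backgroundColor")
    anim.fromValue = tile.backgroundColor
    anim.toValue = target
    anim.duration = duration
    anim.autoreverses = true
    tile.add(anim, forKey: "colorPulse")
}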
My current code was taken from this tutorial:
https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
Animations are not my strong suit, and I don't actually know how to apply a continuous/repeating animation to all tiles. Here is my current code:
@IBOutlet var animationView: UIView!
func cleanUpAnimationView() {
self.animationView.layer.removeAllAnimations()
self.animationView.layer.sublayers?.removeAll()
}
/// Start a background animation with a replicated pattern image in tiled formation.
func setupAnimationView(withPatternImage patternImage: UIImage, animate: Bool = true) {
// Tutorial: https://www.swiftbysundell.com/articles/ca-gems-using-replicator-layers-in-swift/
let imageSize = patternImage.size.halve
self.cleanUpAnimationView()
// Animate pattern image
let replicatorLayer = CAReplicatorLayer()
replicatorLayer.frame.size = self.animationView.frame.size
replicatorLayer.masksToBounds = true
self.animationView.layer.addSublayer(replicatorLayer)
// Give the replicator layer a sublayer to replicate
let imageLayer = CALayer()
imageLayer.contents = patternImage.cgImage
imageLayer.frame.size = imageSize
replicatorLayer.addSublayer(imageLayer)
// Tell the replicator layer how many copies (or instances) of the image need to be rendered. We won't see more than one, though, since by default they are all rendered/stacked on top of each other.
let instanceCount = self.animationView.frame.width / imageSize.width
replicatorLayer.instanceCount = Int(ceil(instanceCount))
// Instance offsets & transforms are needed to move them
// 'CATransform3D' transform will be used on each instance: shifts them to the right & reduces the red & green color component of each instance's tint color.
// Shift each instance by the width of the image
replicatorLayer.instanceTransform = CATransform3DMakeTranslation(imageSize.width, 0, 0)
// Reduce the red & green color component of each instance, effectively making each copy more & more blue while horizontally repeating the gradient pattern
let colorOffset = -1 / Float(replicatorLayer.instanceCount)
replicatorLayer.instanceRedOffset = colorOffset
replicatorLayer.instanceGreenOffset = colorOffset
//replicatorLayer.instanceBlueOffset = colorOffset
//replicatorLayer.instanceColor = UIColor.random.cgColor
// Extend the original pattern to also repeat vertically using another tint color gradient
let verticalReplicatorLayer = CAReplicatorLayer()
verticalReplicatorLayer.frame.size = self.animationView.frame.size
verticalReplicatorLayer.masksToBounds = true
verticalReplicatorLayer.instanceBlueOffset = colorOffset
self.animationView.layer.addSublayer(verticalReplicatorLayer)
let verticalInstanceCount = self.animationView.frame.height / imageSize.height
verticalReplicatorLayer.instanceCount = Int(ceil(verticalInstanceCount))
verticalReplicatorLayer.instanceTransform = CATransform3DMakeTranslation(0, imageSize.height, 0)
verticalReplicatorLayer.addSublayer(replicatorLayer)
guard animate else { return }
// Set both the horizontal & vertical replicators to add a slight delay to all animations applied to the layer they're replicating
let delay = TimeInterval(0.1)
replicatorLayer.instanceDelay = delay
verticalReplicatorLayer.instanceDelay = delay
// This will make the image layer change color
let animColor = CABasicAnimation(keyPath: "instanceRedOffset")
animColor.duration = animationDuration
animColor.fromValue = verticalReplicatorLayer.instanceRedOffset
animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
animColor.autoreverses = true
animColor.repeatCount = .infinity
replicatorLayer.add(animColor, forKey: "colorshift")
let animColor1 = CABasicAnimation(keyPath: "instanceGreenOffset")
animColor1.duration = animationDuration
animColor1.fromValue = verticalReplicatorLayer.instanceGreenOffset
animColor1.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
animColor1.autoreverses = true
animColor1.repeatCount = .infinity
replicatorLayer.add(animColor1, forKey: "colorshift1")
let animColor2 = CABasicAnimation(keyPath: "instanceBlueOffset")
animColor2.duration = animationDuration
animColor2.fromValue = verticalReplicatorLayer.instanceBlueOffset
animColor2.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
animColor2.autoreverses = true
animColor2.repeatCount = .infinity
replicatorLayer.add(animColor2, forKey: "colorshift2")
}
let imageSize = patternImage.size.halve
and
animColor.toValue = -1 / Float(Int.random(replicatorLayer.instanceCount-1))
both generated errors.
I removed the halve and commented out the animColor lines, and the code runs and animates. I could not get ANY replicator layer to display or animate at all (not even the most basic Apple or tutorial code) until I used your code. Thank you so much!
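For reference, a hedged guess at compiling replacements for those two erroring lines (assuming halve was a missing CGSize extension meaning half the size, and guarding the denominator against zero):

let imageSize = CGSize(width: patternImage.size.width / 2, height: patternImage.size.height / 2)
animColor.toValue = -1 / Float(Int.random(in: 1 ..< max(2, replicatorLayer.instanceCount)))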
For offscreen rendering in Metal, I create the texture like this:
let textureDescriptors = MTLTextureDescriptor()
textureDescriptors.textureType = MTLTextureType.type2D
let screenRatio = UIScreen.main.scale
textureDescriptors.width = Int((DrawingManager.shared.size?.width)!) * Int(screenRatio)
textureDescriptors.height = Int((DrawingManager.shared.size?.height)!) * Int(screenRatio)
textureDescriptors.pixelFormat = .bgra8Unorm
textureDescriptors.storageMode = .shared
textureDescriptors.usage = [.renderTarget, .shaderRead]
ssTexture = device.makeTexture(descriptor: textureDescriptors)
ssTexture.label = "ssTexture"
Here the texture is cleared to a transparent color. Is it possible to load an image into a texture, and is it possible to render that image texture in the draw method?
let renderPass = MTLRenderPassDescriptor()
renderPass.colorAttachments[0].loadAction = .clear
renderPass.colorAttachments[0].clearColor = MTLClearColorMake( 0.0, 0.0, 0.0, 0.0)
renderPass.colorAttachments[0].texture = ssTexture
renderPass.colorAttachments[0].storeAction = .store
I'm not sure what you're asking.
There's MTKTextureLoader (in the MetalKit framework) for creating textures initialized with the contents of an image.
You can use the replace(region:...) methods of MTLTexture to fill all or part of a texture with image data.
You can use MTLBlitCommandEncoder to copy data from one texture to (all or part of) another or from a buffer to a texture.
You can draw to a texture or write to it from a compute shader.
It's a general-purpose API. There are many ways to do the things you seem to be asking. What have you tried? In what way did those attempts fail to achieve what you want?
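For the first option, a minimal sketch using MTKTextureLoader (the asset name "brush" is a placeholder, not from the question):

import MetalKit
import UIKit

func loadImageTexture(device: MTLDevice) throws -> MTLTexture {
    let loader = MTKTextureLoader(device: device)
    return try loader.newTexture(name: "brush",                  // hypothetical asset-catalog name
                                 scaleFactor: UIScreen.main.scale,
                                 bundle: nil,
                                 options: [.textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue),
                                           .SRGB: false])
}

The resulting texture could then be copied into ssTexture with a blit encoder or drawn onto it as a textured quad.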
I wrote the following code to implement additive blending for offscreen rendering to a Metal texture but it crashes if Metal API Validation is enabled:
validateRenderPassDescriptor:487: failed assertion `Texture at colorAttachment[0] has usage (0x02) which doesn't specify MTLTextureUsageRenderTarget (0x04)'
Here is the code:
let renderPipelineDescriptorGreen = MTLRenderPipelineDescriptor()
renderPipelineDescriptorGreen.vertexFunction = vertexFunctionGreen
renderPipelineDescriptorGreen.fragmentFunction = fragmentFunctionAccumulator
renderPipelineDescriptorGreen.colorAttachments[0].pixelFormat = .bgra8Unorm
renderPipelineDescriptorGreen.colorAttachments[0].isBlendingEnabled = true
renderPipelineDescriptorGreen.colorAttachments[0].rgbBlendOperation = .add
renderPipelineDescriptorGreen.colorAttachments[0].sourceRGBBlendFactor = .one
renderPipelineDescriptorGreen.colorAttachments[0].destinationRGBBlendFactor = .one
All I want to implement is additive color blending, something like this in OpenGL ES:
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glEnable(GL_BLEND);
What is wrong in my code?
OK, I found the solution. The solution was to set the texture usage flag MTLTextureUsage.renderTarget in addition to (or in place of, depending upon usage) .shaderRead or .shaderWrite when creating the texture:
let textureDescriptor = MTLTextureDescriptor()
textureDescriptor.textureType = .type3D
textureDescriptor.pixelFormat = .bgra8Unorm
textureDescriptor.width = 256
textureDescriptor.height = 1
textureDescriptor.usage = .renderTarget
let texture = device.makeTexture(descriptor: textureDescriptor)
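If the same texture is also sampled by a shader later (typical for offscreen accumulation), combine the flags rather than replacing .shaderRead:

textureDescriptor.usage = [.renderTarget, .shaderRead]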
I have a custom geometry quadrangle and my texture image is displaying on it, but I want it to display as an Aspect Fill, rather than stretching or compressing to fit the space.
I'm applying the same texture to multiple walls in a room so if the image is wallpaper, it has to look correct.
Is there a way to use the following and also determine how it fills?
quadNode.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "wallpaper3.jpg")
Thanks
[UPDATE]
let quadNode = SCNNode(geometry: quad)
let (min, max) = quadNode.boundingBox
let width = CGFloat(max.x - min.x)
let height = CGFloat(max.y - min.y)
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "wallpaper3.jpg")
material.diffuse.contentsTransform = SCNMatrix4MakeScale(Float(width), Float(height), 1)
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
quadNode.geometry?.firstMaterial = material
I think this might help you. It's in Objective-C, but it should be understandable:
CGFloat width = self.planeGeometry.width;
CGFloat height = self.planeGeometry.length;
SCNMaterial *material = self.planeGeometry.materials[4];
material.diffuse.contentsTransform = SCNMatrix4MakeScale(width, height, 1);
material.diffuse.wrapS = SCNWrapModeRepeat;
material.diffuse.wrapT = SCNWrapModeRepeat;
The plane geometry is defined as follows:
self.planeGeometry = [SCNBox boxWithWidth:width height:planeHeight length:length chamferRadius:0];
//planeHeight = 0.01;
I'm using this to show horizontal planes, and the material doesn't get stretched out; it merely extends (repeats). Hoping that's what you need.
The dimensions of the plane are defined as follows (in case they're needed):
float width = anchor.extent.x;
float length = anchor.extent.z;
This is done in the initWithAnchor: method, which uses the ARPlaneAnchor found on a plane.
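For a true aspect fill (cropping instead of tiling), one approach is to scale the texture coordinates and centre the cropped region. A hedged Swift sketch, reusing width, height, and material from the update above (imageSize is an assumption: the image's point size):

let imageSize = UIImage(named: "wallpaper3.jpg")!.size
let imageAspect = imageSize.width / imageSize.height
let quadAspect = width / height
// Scale factors < 1 sample a sub-rectangle of the image, i.e. zoom in and crop
var scaleX: CGFloat = 1
var scaleY: CGFloat = 1
if quadAspect > imageAspect {
    scaleY = imageAspect / quadAspect   // quad relatively wider: crop top and bottom
} else {
    scaleX = quadAspect / imageAspect   // quad relatively taller: crop left and right
}
var transform = SCNMatrix4MakeScale(Float(scaleX), Float(scaleY), 1)
// m41/m42 hold the matrix's translation row; centre the sampled region
transform.m41 = Float((1 - scaleX) / 2)
transform.m42 = Float((1 - scaleY) / 2)
material.diffuse.contentsTransform = transform
material.diffuse.wrapS = .clamp
material.diffuse.wrapT = .clamp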
I need to clear the depth buffer, for which I use glClear(GL_DEPTH_BUFFER_BIT) in OpenGL. How do I do this in Metal? I have gone through Apple's documentation, and there is no hint about it.
The short answer is that to clear the depth buffer you add these two lines before beginning a render pass:
mRenderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
mRenderPassDescriptor.depthAttachment.clearDepth = 1.0f;
And you cannot do a clear without ending and restarting a render pass.
Long answer:
In Metal, you have to specify that you want the colour and depth buffers cleared when you start rendering to an MTLTexture. There is no clear function like in OpenGL.
To do this, in your MTLRenderPassDescriptor, set depthAttachment.loadAction to MTLLoadActionClear and depthAttachment.clearDepth to 1.0f.
You may also want to set colorAttachments[0].loadAction to MTLLoadActionClear to clear the colour buffer.
This render pass descriptor is then passed to your call to MTLCommandBuffer::renderCommandEncoderWithDescriptor.
If you do want to clear a depth or colour buffer midway through rendering you have to call endEncoding on MTLRenderCommandEncoder, and then start encoding again with depthAttachment.loadAction set to MTLLoadActionClear.
To explain the solution more clearly, here is sample code.
Before starting rendering:
void prepareRendering(){
CMDBuffer = [_commandQueue commandBuffer]; // get command Buffer
drawable = [_metalLayer nextDrawable]; // get drawable from metalLayer
renderingTexture = drawable.texture; // set that as the rendering texture
setupRenderPassDescriptorForTexture(drawable.texture); // set the depth and colour buffer properties
RenderCMDBuffer = [CMDBuffer renderCommandEncoderWithDescriptor:_renderPassDescriptor];
RenderCMDBuffer.label = @"MyRenderEncoder";
setUpDepthState(CompareFunctionLessEqual, true, false);
[RenderCMDBuffer setDepthStencilState:_depthState];
[RenderCMDBuffer pushDebugGroup:@"DrawCube"];
}
void setupRenderPassDescriptorForTexture(id <MTLTexture> texture)
{
if (_renderPassDescriptor == nil)
_renderPassDescriptor = [MTLRenderPassDescriptor renderPassDescriptor];
// set color buffer properties
_renderPassDescriptor.colorAttachments[0].texture = texture;
_renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionClear;
_renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColorMake(1.0f, 1.0f,1.0f, 1.0f);
_renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
// set depth buffer properties
MTLTextureDescriptor* desc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat: MTLPixelFormatDepth32Float width: texture.width height: texture.height mipmapped: NO];
_depthTex = [device newTextureWithDescriptor: desc];
_depthTex.label = @"Depth";
_renderPassDescriptor.depthAttachment.texture = _depthTex;
_renderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
_renderPassDescriptor.depthAttachment.clearDepth = 1.0f;
_renderPassDescriptor.depthAttachment.storeAction = MTLStoreActionDontCare;
}
Render your contents here:
Render();
After rendering (similar to the OpenGL ES 2 method [_context presentRenderbuffer:_colorRenderBuffer]):
void endDisplay()
{
[RenderCMDBuffer popDebugGroup];
[RenderCMDBuffer endEncoding];
[CMDBuffer presentDrawable:drawable];
[CMDBuffer commit];
_currentDrawable = nil;
}
The above methods clear the depth and colour buffers for each frame.
To clear the depth buffer midway through rendering:
void clearDepthBuffer(){
// end encoding the render command buffer
[RenderCMDBuffer popDebugGroup];
[RenderCMDBuffer endEncoding];
// here MTLLoadActionClear will clear your last drawn depth values
_renderPassDescriptor.depthAttachment.loadAction = MTLLoadActionClear;
_renderPassDescriptor.depthAttachment.clearDepth = 1.0f;
_renderPassDescriptor.depthAttachment.storeAction = MTLStoreActionDontCare;
// here MTLLoadActionLoad will reuse your last drawn color buffer
_renderPassDescriptor.colorAttachments[0].loadAction = MTLLoadActionLoad;
_renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreActionStore;
RenderCMDBuffer = [CMDBuffer renderCommandEncoderWithDescriptor:_renderPassDescriptor];
RenderCMDBuffer.label = @"MyRenderEncoder";
[RenderCMDBuffer pushDebugGroup:@"DrawCube"];
}
Here is the Swift 5 version. Run your first render pass:
// RENDER PASS 1
renderPassDescriptor = view.currentRenderPassDescriptor
if let renderPassDescriptor = renderPassDescriptor, let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
renderEncoder.label = "First Render Encoder"
renderEncoder.pushDebugGroup("First Render Debug")
// render stuff here...
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()
}
Then clear the depth buffer, but keep the colour buffer:
renderPassDescriptor = view.currentRenderPassDescriptor
// Schedule Metal to clear the depth buffer
renderPassDescriptor!.depthAttachment.loadAction = MTLLoadAction.clear
renderPassDescriptor!.depthAttachment.clearDepth = 1.0
renderPassDescriptor!.depthAttachment.storeAction = MTLStoreAction.dontCare
// Schedule Metal to reuse the previous colour buffer
renderPassDescriptor!.colorAttachments[0].loadAction = MTLLoadAction.load
renderPassDescriptor!.colorAttachments[0].storeAction = MTLStoreAction.store
Then run your second render:
if let renderPassDescriptor = renderPassDescriptor, let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) {
renderEncoder.label = "Second Render"
renderEncoder.pushDebugGroup("Second Render Debug")
// render stuff here...
renderEncoder.popDebugGroup()
renderEncoder.endEncoding()
}