I want to disable clamping between the farthest and closest points. I already tried modifying the sampler to disable clamp-to-edge (constexpr sampler s(address::clamp_to_zero)), and it worked as expected for the edges, but coordinates between the farthest and closest points are still being clamped.
Current unwanted result:
https://gph.is/g/ZyWjkzW
Expected result:
https://i.imgur.com/GjvwgyU.png
I also tried encoder.setDepthClipMode(.clip), but it didn't work.
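For reference, the same clamp_to_zero address mode can also be configured from the host side with a sampler state instead of a constexpr sampler in the shader. This is only a minimal sketch; the filtering modes and the sampler index are placeholders, not taken from the code below:

// Host-side equivalent of constexpr sampler s(address::clamp_to_zero).
// Filtering and the sampler index here are placeholders.
let samplerDescriptor = MTLSamplerDescriptor()
samplerDescriptor.minFilter = .linear
samplerDescriptor.magFilter = .linear
samplerDescriptor.sAddressMode = .clampToZero
samplerDescriptor.tAddressMode = .clampToZero
let samplerState = device.makeSamplerState(descriptor: samplerDescriptor)
encoder.setFragmentSamplerState(samplerState, index: 0)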
Some portions of code:
// Render pipeline descriptor
let descriptor = MTLRenderPipelineDescriptor()
descriptor.colorAttachments[0].pixelFormat = .rgba16Float
descriptor.colorAttachments[1].pixelFormat = .rgba16Float
descriptor.depthAttachmentPixelFormat = .invalid

// Render pass descriptor (separate snippet)
let descriptor = MTLRenderPassDescriptor()
descriptor.colorAttachments[0].texture = outputColorTexture
descriptor.colorAttachments[0].clearColor = clearColor
descriptor.colorAttachments[0].loadAction = .load
descriptor.colorAttachments[0].storeAction = .store
descriptor.colorAttachments[1].texture = outputDepthTexture
descriptor.colorAttachments[1].clearColor = clearColor
descriptor.colorAttachments[1].loadAction = .load
descriptor.colorAttachments[1].storeAction = .store
descriptor.renderTargetWidth = Int(drawableSize.width)
descriptor.renderTargetHeight = Int(drawableSize.height)
guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor) else { throw RenderingError.makeDescriptorFailed }
encoder.setDepthClipMode(.clip)
encoder.setRenderPipelineState(pipelineState)
encoder.setFragmentTexture(inputColorTexture, index: 0)
encoder.setFragmentTexture(inputDepthTexture, index: 1)
encoder.setFragmentBuffer(uniformsBuffer, offset: 0, index: 0)
encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
encoder.endEncoding()
I have loaded an audio file and created an input and an output buffer.
But when I follow Apple's post, I get an output signal with distorted sound.
private func extractSignal(input: AVAudioPCMBuffer, output: AVAudioPCMBuffer) {
    let count = 256
    let forward = vDSP.DCT(previous: nil, count: count, transformType: .II)!
    let inverse = vDSP.DCT(previous: nil, count: count, transformType: .III)!
    // Iterates over the signal.
    input.iterate(signalCount: 32000) { step, signal in
        var series = forward.transform(signal)
        series = vDSP.threshold(series, to: 0.0003, with: .zeroFill) // What should this threshold be?
        var inversed = inverse.transform(series)
        let divisor = Float(count / 2)
        inversed = vDSP.divide(inversed, divisor)
        // Code: write inversed to output buffer.
        output.frameLength = AVAudioFrameCount(step * signal.count + signal.count)
    }
}
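For what it's worth, the forward/inverse pair can be sanity-checked in isolation. The sketch below uses a made-up 256-sample ramp and shows the DCT-II to DCT-III round trip with the count / 2 rescaling, leaving the thresholding step out:

import Accelerate

// Round-trip sanity check for the DCT-II / DCT-III pair, without thresholding.
let count = 256
let signal = (0..<count).map { Float($0) / Float(count) }   // made-up test signal

let forward = vDSP.DCT(previous: nil, count: count, transformType: .II)!
let inverse = vDSP.DCT(previous: nil, count: count, transformType: .III)!

let series = forward.transform(signal)
var restored = inverse.transform(series)

// DCT-II followed by DCT-III scales the signal by count / 2, so divide it back out.
restored = vDSP.divide(restored, Float(count / 2))

// restored should now match signal to within floating-point error.
let maxError = zip(signal, restored).map { abs($0.0 - $0.1) }.max() ?? 0
print("max round-trip error:", maxError)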
I created a 3D object using Blender and exported it as an OBJ file, and I tried to render it using Metal by following the http://metalbyexample.com/modern-metal-1 tutorial. But some parts of my 3D object are missing; they are not rendered properly.
Here is my 3D object in Blender:
Here is my rendered object in Metal:
Here is my Blender file:
https://gofile.io/?c=XfQYLK
How should I fix this?
I have already rendered some other shapes (rectangle, circle, star) successfully, but the problem is with this shape. I did not change the way I create the shape or the way it is exported from Blender. Even though I did everything the same way, the problem is still there.
Here is how I load the OBJ file:
private var vertexDescriptor: MTLVertexDescriptor!
private var meshes: [MTKMesh] = []

private func loadResource() {
    let modelUrl = Bundle.main.url(forResource: self.meshName, withExtension: "obj")!
    let vertexDescriptor = MDLVertexDescriptor()
    vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0)
    vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeNormal, format: .float3, offset: MemoryLayout<Float>.size * 3, bufferIndex: 0)
    vertexDescriptor.attributes[2] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate, format: .float2, offset: MemoryLayout<Float>.size * 6, bufferIndex: 0)
    vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<Float>.size * 8)
    self.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(vertexDescriptor)
    let bufferAllocator = MTKMeshBufferAllocator(device: self.device)
    let asset = MDLAsset(url: modelUrl, vertexDescriptor: vertexDescriptor, bufferAllocator: bufferAllocator)
    (_, meshes) = try! MTKMesh.newMeshes(asset: asset, device: device)
}
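For the record, the converted descriptor also has to be attached to the render pipeline so the [[stage_in]] layout in the shaders matches the mesh buffers. A minimal sketch of that step (library and mtkView are placeholder names, not part of the code above):

// Sketch: the converted vertex descriptor must also be set on the render pipeline.
// "library" and "mtkView" are placeholder names.
let pipelineDescriptor = MTLRenderPipelineDescriptor()
pipelineDescriptor.vertexDescriptor = self.vertexDescriptor   // from loadResource()
pipelineDescriptor.vertexFunction = library.makeFunction(name: "vertex_3d")
pipelineDescriptor.fragmentFunction = library.makeFunction(name: "fragment_3d")
pipelineDescriptor.colorAttachments[0].pixelFormat = mtkView.colorPixelFormat
let renderPipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)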
Here are my vertex and fragment shaders:
struct VertexOut {
    float4 position [[position]];
    float4 eyeNormal;
    float4 eyePosition;
    float2 texCoords;
};

vertex VertexOut vertex_3d(VertexIn vertexIn [[stage_in]])
{
    VertexOut vertexOut;
    vertexOut.position = float4(vertexIn.position, 1);
    vertexOut.eyeNormal = float4(vertexIn.normal, 1);
    vertexOut.eyePosition = float4(vertexIn.position, 1);
    vertexOut.texCoords = vertexIn.texCoords;
    return vertexOut;
}

fragment float4 fragment_3d(VertexOut fragmentIn [[stage_in]]) {
    return float4(0.33, 0.53, 0.25, 0.5);
}
And here is my command encoder:
func render(commandEncoder: MTLRenderCommandEncoder) {
    commandEncoder.setRenderPipelineState(self.renderPipelineState)
    let mesh = meshes[0]
    let vertexBuffer = mesh.vertexBuffers.first!
    commandEncoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: 0)
    let indexBuffer = mesh.submeshes[0].indexBuffer
    commandEncoder.drawIndexedPrimitives(type: mesh.submeshes[0].primitiveType,
                                         indexCount: mesh.submeshes[0].indexCount,
                                         indexType: mesh.submeshes[0].indexType,
                                         indexBuffer: indexBuffer.buffer,
                                         indexBufferOffset: indexBuffer.offset)
    commandEncoder.endEncoding()
}
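Worth noting: the render function above only draws the first submesh of the first mesh, so if the OBJ exports as several meshes or submeshes the rest are never drawn, which by itself can make parts of the model disappear. A sketch that walks all of them (same encoder, purely illustrative):

// Sketch: draw every submesh of every mesh instead of just meshes[0].submeshes[0].
for mesh in meshes {
    for (index, vertexBuffer) in mesh.vertexBuffers.enumerated() {
        commandEncoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: index)
    }
    for submesh in mesh.submeshes {
        commandEncoder.drawIndexedPrimitives(type: submesh.primitiveType,
                                             indexCount: submesh.indexCount,
                                             indexType: submesh.indexType,
                                             indexBuffer: submesh.indexBuffer.buffer,
                                             indexBufferOffset: submesh.indexBuffer.offset)
    }
}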
Presenting to the drawable is handled in a different place.
How should I properly render my 3D object using Metal?
I made this public repo: https://github.com/danielrosero/ios-touchingMetal, and I think it is a great starting point for 3D rendering with Metal, with textures and a compute function.
You should just change the models inside; check the init(view: MTKView) method in Renderer.swift.
// Create the MTLTextureLoader options that we need according to each model case. Some of them are flipped, and so on.
let textureLoaderOptionsWithFlip: [MTKTextureLoader.Option : Any] = [.generateMipmaps : true, .SRGB : true, .origin : MTKTextureLoader.Origin.bottomLeft]
let textureLoaderOptionsWithoutFlip: [MTKTextureLoader.Option : Any] = [.generateMipmaps : true, .SRGB : true]
// ****
// Initializing the models, set their position, scale and do a rotation transformation
// Cat model
cat = Model(name: "cat", vertexDescriptor: vertexDescriptor, textureFile: "cat.tga", textureLoaderOptions: textureLoaderOptionsWithFlip)
cat.transform.position = [-1, -0.5, 1.5]
cat.transform.scale = 0.08
cat.transform.rotation = vector_float3(0, radians(fromDegrees: 180), 0)
// ****
// Dog model
dog = Model(name: "dog", vertexDescriptor: vertexDescriptor, textureFile: "dog.tga", textureLoaderOptions: textureLoaderOptionsWithFlip)
dog.transform.position = [1, -0.5, 1.5]
dog.transform.scale = 0.018
dog.transform.rotation = vector_float3(0, radians(fromDegrees: 180), 0)
// ****
This is the way I import models in my implementation; check Model.swift:
//
// Model.swift
// touchingMetal
//
// Created by Daniel Rosero on 1/8/20.
// Copyright © 2020 Daniel Rosero. All rights reserved.
//

import Foundation
import MetalKit

// This extension lets the Model class carry an MTLTexture attribute
// so it can be identified and used in the Renderer. This eases loading when there are multiple models in the scene.
extension Model: Texturable {
}

class Model {
    let mdlMeshes: [MDLMesh]
    let mtkMeshes: [MTKMesh]
    var texture: MTLTexture?
    var transform = Transform()
    let name: String

    // To create a model, pass a name to use as an identifier,
    // a reference to the vertexDescriptor, the image name (with extension) of the texture,
    // and the dictionary of MTKTextureLoader.Options.
    init(name: String, vertexDescriptor: MDLVertexDescriptor, textureFile: String, textureLoaderOptions: [MTKTextureLoader.Option : Any]) {
        let assetUrl = Bundle.main.url(forResource: name, withExtension: "obj")!
        let allocator = MTKMeshBufferAllocator(device: Renderer.device)
        let asset = MDLAsset(url: assetUrl, vertexDescriptor: vertexDescriptor, bufferAllocator: allocator)
        let (mdlMeshes, mtkMeshes) = try! MTKMesh.newMeshes(asset: asset, device: Renderer.device)
        self.mdlMeshes = mdlMeshes
        self.mtkMeshes = mtkMeshes
        self.name = name
        texture = setTexture(device: Renderer.device, imageName: textureFile, textureLoaderOptions: textureLoaderOptions)
    }
}
If the 3D model is not triangulated properly, it will misbehave in Metal. To render a 3D model correctly, turn on the Triangulate Faces option when exporting from the modeling software to an OBJ file. This turns all faces into triangles, so Metal does not have to re-triangulate them. The process may change the vertex order, but the 3D model itself will not change; only the order of its vertices does.
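If re-exporting with Triangulate Faces is not possible right away, one way to at least detect non-triangulated geometry at load time is to inspect the submeshes after MTKMesh.newMeshes. A small sketch building on the loadResource() code from the question:

// Sketch: warn if any submesh is not made of triangles after loading.
for mesh in meshes {
    for submesh in mesh.submeshes where submesh.primitiveType != .triangle {
        print("Warning: submesh \(submesh.name) is not triangulated (\(submesh.primitiveType))")
    }
}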
I'm trying to create a pipeline state in Swift and the app crashes with SIGABRT. I've put the call to newRenderPipelineStateWithDescriptor inside a do/catch block. Why is it not catching the error?
Here's the code:
let defaultLibrary = device.newDefaultLibrary()!
let fragmentProgram = defaultLibrary.newFunctionWithName("passThroughFragment")!
let vertexProgram = defaultLibrary.newFunctionWithName("passGeometry")!
// check TexturedVertex
let vertexDesc = MTLVertexDescriptor()
vertexDesc.attributes[0].format = .Float3
vertexDesc.attributes[0].offset = 0
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.attributes[1].format = .Float3
vertexDesc.attributes[0].offset = sizeof(Vec3)
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.attributes[2].format = .Float2
vertexDesc.attributes[0].offset = sizeof(Vec3) * 2
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.layouts[0].stepFunction = .PerVertex
vertexDesc.layouts[0].stride = sizeof(TexturedVertex)
let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = fragmentProgram
//pipelineStateDescriptor.vertexDescriptor = vertexDesc
pipelineStateDescriptor.colorAttachments[0].pixelFormat = view.colorPixelFormat
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = false
pipelineStateDescriptor.sampleCount = view.sampleCount
do {
    // SIGABRT will happen here when enabling .vertexDescriptor = vertexDesc
    try pipelineState = device.newRenderPipelineStateWithDescriptor(pipelineStateDescriptor)
} catch let error {
    print("Failed to create pipeline state, error \(error)")
}
The code above doesn't crash until I enable this line:
pipelineStateDescriptor.vertexDescriptor = vertexDesc
To give a bit of background, my vertex shader used to receive a packed_float4 buffer as input but I'm trying to update it to a struct, as defined in the MTLVertexDescriptor above.
The frustrating thing is that the app just crashes without giving any hints of what's wrong.
Is there any way to get error messages when calling device.newRenderPipelineStateWithDescriptor?
Edit:
This fixes the crash:
vertexDesc.attributes[0].format = .Float3
vertexDesc.attributes[0].offset = 0
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.attributes[1].format = .Float3
vertexDesc.attributes[1].offset = sizeof(Vec3)
vertexDesc.attributes[1].bufferIndex = 0
vertexDesc.attributes[2].format = .Float2
vertexDesc.attributes[2].offset = sizeof(Vec3) * 2
vertexDesc.attributes[2].bufferIndex = 0
But apparently there's currently no way to make newRenderPipelineStateWithDescriptor throw an error for this.
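For completeness, here is a sketch of how the corrected setup looks with the current Swift API names (MemoryLayout instead of sizeof, makeRenderPipelineState(descriptor:) instead of newRenderPipelineStateWithDescriptor); Vec3 and TexturedVertex are the question's own types, and pipelineStateDescriptor/device are the same objects as above:

// Sketch with current API names; the call still throws for shader/format mismatches,
// but a malformed vertex descriptor can abort instead of throwing.
let vertexDesc = MTLVertexDescriptor()
vertexDesc.attributes[0].format = .float3
vertexDesc.attributes[0].offset = 0
vertexDesc.attributes[0].bufferIndex = 0
vertexDesc.attributes[1].format = .float3
vertexDesc.attributes[1].offset = MemoryLayout<Vec3>.stride   // .stride assumes Vec3 is a packed 3-float struct
vertexDesc.attributes[1].bufferIndex = 0
vertexDesc.attributes[2].format = .float2
vertexDesc.attributes[2].offset = MemoryLayout<Vec3>.stride * 2
vertexDesc.attributes[2].bufferIndex = 0
vertexDesc.layouts[0].stepFunction = .perVertex
vertexDesc.layouts[0].stride = MemoryLayout<TexturedVertex>.stride

pipelineStateDescriptor.vertexDescriptor = vertexDesc
do {
    pipelineState = try device.makeRenderPipelineState(descriptor: pipelineStateDescriptor)
} catch {
    print("Failed to create pipeline state, error \(error)")
}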
I am having trouble setting up a kAudioUnitSubType_NBandEQ in Swift. Here is my code to initialize the EQ:
var cd: AudioComponentDescription = AudioComponentDescription(
    componentType: OSType(kAudioUnitType_Effect),
    componentSubType: OSType(kAudioUnitSubType_NBandEQ),
    componentManufacturer: OSType(kAudioUnitManufacturer_Apple),
    componentFlags: 0,
    componentFlagsMask: 0)
// Add the node to the graph
status = AUGraphAddNode(graph, &cd, &MyAppNode)
println(status)
// Once the graph has been opened get an instance of the equalizer
status = AUGraphNodeInfo(graph, self.MyAppNode, nil, &MyAppUnit)
println(status)
var eqFrequencies: [UInt32] = [ 32, 250, 500, 1000, 2000, 16000 ]
status = AudioUnitSetProperty(
    self.MyAppUnit,
    AudioUnitPropertyID(kAUNBandEQProperty_NumberOfBands),
    AudioUnitScope(kAudioUnitScope_Global),
    0,
    eqFrequencies,
    UInt32(eqFrequencies.count * sizeof(UInt32))
)
println(status)
status = AudioUnitInitialize(self.MyAppUnit)
println(status)
var ioUnitOutputElement:AudioUnitElement = 0
var samplerOutputElement:AudioUnitElement = 0
AUGraphConnectNodeInput(graph, sourceNode, sourceOutputBusNumber, self.MyAppNode, 0)
AUGraphConnectNodeInput(graph, self.MyAppNode, 0, destinationNode, destinationInputBusNumber)
and then, to apply changes to the frequency gains, my code is as follows:
if (MyAppUnit == nil) { return }
else {
    var bandValue0: Float32 = tenBands.objectAtIndex(0) as! Float32
    var bandValue1: Float32 = tenBands.objectAtIndex(1) as! Float32
    var bandValue2: Float32 = tenBands.objectAtIndex(2) as! Float32
    var bandValue3: Float32 = tenBands.objectAtIndex(3) as! Float32
    var bandValue4: Float32 = tenBands.objectAtIndex(4) as! Float32
    var bandValue5: Float32 = tenBands.objectAtIndex(5) as! Float32

    AudioUnitSetParameter(self.MyAppUnit, 0, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue0, 0)
    AudioUnitSetParameter(self.MyAppUnit, 1, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue1, 0)
    AudioUnitSetParameter(self.MyAppUnit, 2, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue2, 0)
    AudioUnitSetParameter(self.MyAppUnit, 3, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue3, 0)
    AudioUnitSetParameter(self.MyAppUnit, 4, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue4, 0)
    AudioUnitSetParameter(self.MyAppUnit, 5, AudioUnitScope(kAudioUnitScope_Global), 0, bandValue5, 0)
}
Can anyone point out what I am doing wrong here? I think it is related to the second argument of AudioUnitSetParameter. I have tried AudioUnitParameterID(0) and AudioUnitParameterID(kAUNBandEQParam_Gain + 1) for this value, but those don't seem to work at all. Any help is appreciated!
Adding this comment as an answer because comments are insufficient.
The following code is in Objective-C, but it should help identify your problem.
There are a number of places this might fail. Firstly, you should check the status of AudioUnitSetParameter, and indeed of all the AudioUnit calls, as this will give you a clearer idea of where your code is failing.
I've done this successfully in Objective-C and have a test app I can make available if you need it, which shows the complete graph setup and sets the bands and gains by moving a slider. Back to your specific question: the following works just fine for me and might help you rule out a particular section.
You can try to obtain the current gain; this will indicate whether your bands are set up correctly.
- (AudioUnitParameterValue)gainForBandAtPosition:(uint)bandPosition
{
    AudioUnitParameterValue gain;
    AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;

    OSStatus status = AudioUnitGetParameter(equalizerUnit,
                                            parameterID,
                                            kAudioUnitScope_Global,
                                            0,
                                            &gain);
    if (status != noErr) {
        @throw [NSException exceptionWithName:@"gettingParamGainErrorException"
                                       reason:[NSString stringWithFormat:@"OSStatus Error on getting EQ Gain! Status returned %d.", (int)status]
                                     userInfo:nil];
    }
    return gain;
}
Then setting the gain can be done in the following way:
- (void)setGain:(AudioUnitParameterValue)gain forBandAtPosition:(uint)bandPosition
{
    AudioUnitParameterID parameterID = kAUNBandEQParam_Gain + bandPosition;

    OSStatus status = AudioUnitSetParameter(equalizerUnit,
                                            parameterID,
                                            kAudioUnitScope_Global,
                                            0,
                                            gain,
                                            0);
    if (status != noErr) {
        @throw [NSException exceptionWithName:@"settingParamGainAudioErrorException"
                                       reason:[NSString stringWithFormat:@"OSStatus Error on setting EQ gain! Status returned %d.", (int)status]
                                     userInfo:nil];
    }
}
Finally, what value are you trying to set? If I'm not mistaken, the valid range is -125.0 to 25.0.
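A rough Swift translation of the setter above, assuming the question's MyAppUnit (passed here as equalizerUnit) is a valid, non-nil AudioUnit:

import AudioToolbox

// Sketch: select a band's gain parameter via kAUNBandEQParam_Gain + band index.
func setGain(_ gain: AudioUnitParameterValue, forBand band: UInt32, on equalizerUnit: AudioUnit) {
    let parameterID = AudioUnitParameterID(kAUNBandEQParam_Gain) + band
    let status = AudioUnitSetParameter(equalizerUnit,
                                       parameterID,
                                       AudioUnitScope(kAudioUnitScope_Global),
                                       0,
                                       gain,
                                       0)
    if status != noErr {
        print("Error setting EQ gain for band \(band): \(status)")
    }
}

So instead of passing 0...5 as the parameter ID, you would call something like setGain(bandValue0, forBand: 0, on: MyAppUnit).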
I have the following MonoTouch code, which can change the saturation, but I am also trying to change the hue.
float hue = 0;
float saturation = 1;

if (colorCtrls == null)
    colorCtrls = new CIColorControls() {
        Image = CIImage.FromCGImage (originalImage.CGImage)
    };
else
    colorCtrls.Image = CIImage.FromCGImage(originalImage.CGImage);

colorCtrls.Saturation = saturation;
var output = colorCtrls.OutputImage;
var context = CIContext.FromOptions(null);
var result = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(result);
Hue is part of a different filter, so you'll need to use CIHueAdjust instead of CIColorControls to control it.
Here's what I ended up doing to add hue:
var hueAdjust = new CIHueAdjust() {
    Image = CIImage.FromCGImage(originalImage.CGImage),
    Angle = hue // Default is 0
};
var output = hueAdjust.OutputImage;
var context = CIContext.FromOptions(null);
var cgimage = context.CreateCGImage(output, output.Extent);
return UIImage.FromImage(cgimage);
However, this does not work on Retina devices; the image returned is scaled incorrectly.