Objective-C to Swift conversion for unions - iOS

Hi, I am trying to convert the Objective-C code below into Swift, but I am struggling with unions, which are supported in C but not directly in Swift.
I am not sure how to convert the union type below and pass it to MTLTexture's getBytes:
union {
    float f[2];
    unsigned char bytes[8];
} u;
The last part, where I print these float values with a log statement, also needs converting.
It would be great to get a working Swift conversion of the code snippet below.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLCommandQueue> queue = [device newCommandQueue];
id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];

MTKTextureLoader *textureLoader = [[MTKTextureLoader alloc] initWithDevice:device];
id<MTLTexture> sourceTexture = [textureLoader newTextureWithCGImage:image.CGImage options:nil error:nil];

CGColorSpaceRef srcColorSpace = CGColorSpaceCreateDeviceRGB();
CGColorSpaceRef dstColorSpace = CGColorSpaceCreateDeviceGray();
CGColorConversionInfoRef conversionInfo = CGColorConversionInfoCreate(srcColorSpace, dstColorSpace);
MPSImageConversion *conversion = [[MPSImageConversion alloc] initWithDevice:device
                                                                   srcAlpha:MPSAlphaTypeAlphaIsOne
                                                                  destAlpha:MPSAlphaTypeAlphaIsOne
                                                            backgroundColor:nil
                                                             conversionInfo:conversionInfo];

MTLTextureDescriptor *grayTextureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatR16Unorm
                                                                                                 width:sourceTexture.width
                                                                                                height:sourceTexture.height
                                                                                             mipmapped:NO];
grayTextureDescriptor.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> grayTexture = [device newTextureWithDescriptor:grayTextureDescriptor];
[conversion encodeToCommandBuffer:commandBuffer sourceTexture:sourceTexture destinationTexture:grayTexture];

MTLTextureDescriptor *textureDescriptor = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:grayTexture.pixelFormat
                                                                                             width:sourceTexture.width
                                                                                            height:sourceTexture.height
                                                                                         mipmapped:NO];
textureDescriptor.usage = MTLTextureUsageShaderWrite | MTLTextureUsageShaderRead;
id<MTLTexture> texture = [device newTextureWithDescriptor:textureDescriptor];

MPSImageLaplacian *imageKernel = [[MPSImageLaplacian alloc] initWithDevice:device];
[imageKernel encodeToCommandBuffer:commandBuffer sourceTexture:grayTexture destinationTexture:texture];

MPSImageStatisticsMeanAndVariance *meanAndVariance = [[MPSImageStatisticsMeanAndVariance alloc] initWithDevice:device];
MTLTextureDescriptor *varianceTextureDescriptor = [MTLTextureDescriptor
    texture2DDescriptorWithPixelFormat:MTLPixelFormatR32Float
                                 width:2
                                height:1
                             mipmapped:NO];
varianceTextureDescriptor.usage = MTLTextureUsageShaderWrite;
id<MTLTexture> varianceTexture = [device newTextureWithDescriptor:varianceTextureDescriptor];
[meanAndVariance encodeToCommandBuffer:commandBuffer sourceTexture:texture destinationTexture:varianceTexture];

[commandBuffer commit];
[commandBuffer waitUntilCompleted];

union {
    float f[2];
    unsigned char bytes[8];
} u;
MTLRegion region = MTLRegionMake2D(0, 0, 2, 1);
[varianceTexture getBytes:u.bytes bytesPerRow:2 * 4 fromRegion:region mipmapLevel:0];

NSLog(@"mean: %f", u.f[0] * 255);
NSLog(@"variance: %f", u.f[1] * 255 * 255);
A Swift representation of this would be great.

You can use a struct instead, like this, and add an extension to provide a description for logging.
struct u {
    var bytes: [UInt8] = [0, 0, 0, 0, 0, 0, 0, 0]
    var f: [Float32] {
        set {
            var f = newValue
            memcpy(&bytes, &f, 8)
        }
        get {
            var f: [Float32] = [0, 0]
            var b = bytes
            memcpy(&f, &b, 8)
            return f
        }
    }
}
extension u: CustomStringConvertible {
    var description: String {
        let bytesString = bytes.map { "\($0)" }.joined(separator: " ")
        return "floats : \(f[0]) \(f[1]) - bytes : \(bytesString)"
    }
}
var test = u()
print(test)
test.f = [3.14, 1.618]
print(test)
test.bytes = [195, 245, 72, 64, 160, 26, 207, 63]
print(test)
Log:
floats : 0.0 0.0 - bytes : 0 0 0 0 0 0 0 0
floats : 3.14 1.618 - bytes : 195 245 72 64 160 26 207 63
floats : 3.14 1.618 - bytes : 195 245 72 64 160 26 207 63

You don't need the whole union for getBytes to work; only u.bytes is used there, and it can be converted as
var bytes = [UInt8](repeating: 0, count: 8)
That is an array of length 8 (with an arbitrary initial value of 0 in each element), and you pass it to getBytes as an UnsafeMutableRawPointer:
varianceTexture.getBytes(&bytes, ...)
As for the union itself, there are many ways to represent it. For example, as a tuple:
var u = ([Float](repeating: 0.0, count: 2), [UInt8](repeating: 0, count: 8))
And in that case you pass it as
varianceTexture.getBytes(&u.1, ...)
Or you could make it a class or struct in a similar way.
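For completeness, here is a minimal sketch of the whole snippet in Swift. This is untested illustration code, not a verified port: it assumes image is a UIImage with a non-nil cgImage, and error handling is collapsed into force-unwraps.

import MetalKit
import MetalPerformanceShaders

let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let commandBuffer = queue.makeCommandBuffer()!

let textureLoader = MTKTextureLoader(device: device)
let sourceTexture = try! textureLoader.newTexture(cgImage: image.cgImage!, options: nil)

// RGB -> gray conversion
let srcColorSpace = CGColorSpaceCreateDeviceRGB()
let dstColorSpace = CGColorSpaceCreateDeviceGray()
let conversionInfo = CGColorConversionInfo(src: srcColorSpace, dst: dstColorSpace)!
let conversion = MPSImageConversion(device: device,
                                    srcAlpha: .alphaIsOne,
                                    destAlpha: .alphaIsOne,
                                    backgroundColor: nil,
                                    conversionInfo: conversionInfo)
let grayDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r16Unorm,
                                                              width: sourceTexture.width,
                                                              height: sourceTexture.height,
                                                              mipmapped: false)
grayDescriptor.usage = [.shaderWrite, .shaderRead]
let grayTexture = device.makeTexture(descriptor: grayDescriptor)!
conversion.encode(commandBuffer: commandBuffer,
                  sourceTexture: sourceTexture,
                  destinationTexture: grayTexture)

// Laplacian
let laplacianDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: grayTexture.pixelFormat,
                                                                   width: sourceTexture.width,
                                                                   height: sourceTexture.height,
                                                                   mipmapped: false)
laplacianDescriptor.usage = [.shaderWrite, .shaderRead]
let laplacianTexture = device.makeTexture(descriptor: laplacianDescriptor)!
MPSImageLaplacian(device: device).encode(commandBuffer: commandBuffer,
                                         sourceTexture: grayTexture,
                                         destinationTexture: laplacianTexture)

// Mean and variance, written into a 2 x 1 R32Float texture
let varianceDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .r32Float,
                                                                  width: 2,
                                                                  height: 1,
                                                                  mipmapped: false)
varianceDescriptor.usage = .shaderWrite
let varianceTexture = device.makeTexture(descriptor: varianceDescriptor)!
MPSImageStatisticsMeanAndVariance(device: device).encode(commandBuffer: commandBuffer,
                                                         sourceTexture: laplacianTexture,
                                                         destinationTexture: varianceTexture)

commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Read the two 32-bit floats back; a plain byte array replaces the C union.
var bytes = [UInt8](repeating: 0, count: 8)
varianceTexture.getBytes(&bytes,
                         bytesPerRow: 2 * 4,
                         from: MTLRegionMake2D(0, 0, 2, 1),
                         mipmapLevel: 0)
let mean = bytes.withUnsafeBytes { $0.load(fromByteOffset: 0, as: Float.self) }
let variance = bytes.withUnsafeBytes { $0.load(fromByteOffset: 4, as: Float.self) }
print("mean: \(mean * 255)")
print("variance: \(variance * 255 * 255)")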

Related

Distorted cv::Mat converted from CMSampleBuffer of video frame

I use AVAssetReader/AVAssetReaderTrackOutput to get CMSampleBuffers from a video. But when I convert a CMSampleBuffer to cv::Mat, the Mat is a distorted image.
Video decode code:
@objc open func startReading() -> Void {
    if let reader = try? AVAssetReader.init(asset: _asset) {
        let videoTrack = _asset.tracks(withMediaType: .video).compactMap { $0 }.first
        let options = [kCVPixelBufferPixelFormatTypeKey: Int(kCVPixelFormatType_32BGRA)]
        let readerOutput = AVAssetReaderTrackOutput.init(track: videoTrack!, outputSettings: options as [String: Any])
        reader.add(readerOutput)
        reader.startReading()
        var count = 0
        // reading
        while (reader.status == .reading && videoTrack?.nominalFrameRate != 0) {
            let sampleBuffer = readerOutput.copyNextSampleBuffer()
            _delegate?.reader(self, newFrameReady: sampleBuffer, count)
            count = count + 1
        }
        _delegate?.readerDidFinished(self, totalFrameCount: count)
    }
}
Image conversion code:
// convert sampleBuffer in callback of video reader
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
char *baseBuffer = (char*)CVPixelBufferGetBaseAddress(imageBuffer);
cv::Mat cvImage = cv::Mat((int)height, (int)width, CV_8UC3);
cv::MatIterator_<cv::Vec3b> it_start = cvImage.begin<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> it_end = cvImage.end<cv::Vec3b>();
long cur = 0;
while (it_start != it_end) {
    // opt pixel
    long p_idx = cur * 4;
    char b = baseBuffer[p_idx];
    char g = baseBuffer[p_idx + 1];
    char r = baseBuffer[p_idx + 2];
    cv::Vec3b newpixel(b, g, r);
    *it_start = newpixel;
    cur++;
    it_start++;
}
UIImage *tmpImg = MatToUIImage(cvImage);
(Preview image of the distorted tmpImg omitted.)
I find that some videos work fine but some do not. Any help is appreciated!
Finally I figured out that this bug is caused by the padding bytes in the sampleBuffer.
Many APIs pad extra bytes at the end of each image row to optimize the memory layout for SIMD, which processes pixels in parallel.
The code below works.
cv::Mat cvImage = cv::Mat((int)height, (int)width, CV_8UC3);
cv::MatIterator_<cv::Vec3b> it_start = cvImage.begin<cv::Vec3b>();
cv::MatIterator_<cv::Vec3b> it_end = cvImage.end<cv::Vec3b>();
long cur = 0;
// Padding bytes are added after the image bytes of each row
size_t padding = CVPixelBufferGetBytesPerRow(imageBuffer) - width * 4;
size_t offset = 0;
while (it_start != it_end) {
    // opt pixel
    long p_idx = cur * 4 + offset;
    char b = baseBuffer[p_idx];
    char g = baseBuffer[p_idx + 1];
    char r = baseBuffer[p_idx + 2];
    cv::Vec3b newpixel(b, g, r);
    *it_start = newpixel;
    cur++;
    it_start++;
    if (cur % width == 0) {
        offset = offset + padding;
    }
}
UIImage *tmpImg = MatToUIImage(cvImage);
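As an aside (a sketch of my own, not part of the original fix): you can avoid the manual loop entirely by constructing the cv::Mat with the buffer's real stride and letting OpenCV strip the padding and the alpha channel:

// Wrap the locked BGRA pixel buffer, passing its actual bytes-per-row as the step
size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);
cv::Mat bgra((int)height, (int)width, CV_8UC4, baseBuffer, stride);
// cvtColor copies into a tightly packed 3-channel Mat, so it stays valid after unlocking
cv::Mat cvImage;
cv::cvtColor(bgra, cvImage, cv::COLOR_BGRA2BGR);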

Swift 3 CGContext Memory Leak

I'm using a CGBitmapContext to convert colour spaces to ARGB and get the pixel data values. I malloc space for the bitmap context and free it after I'm done, but I am still seeing a memory leak in Instruments. I'm likely doing something wrong, so any help would be appreciated.
Here is the ARGBBitmapContext function
func createARGBBitmapContext(width: Int, height: Int) -> CGContext {
    // Get image width, height
    let pixelsWide = width
    let pixelsHigh = height
    let bitmapBytesPerRow = pixelsWide * 4
    let bitmapByteCount = bitmapBytesPerRow * pixelsHigh
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Here is the malloc call that Instruments complains about
    let bitmapData = malloc(bitmapByteCount)
    let context = CGContext(data: bitmapData, width: pixelsWide, height: pixelsHigh, bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
    // Do I need to free something here first?
    return context!
}
Here is where I use the context to retrieve all the pixel values as a list of UInt8s (and where the memory leak occurs):
extension UIImage {
    func ARGBPixelValues() -> [UInt8] {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        var pixels = [UInt8](repeating: 0, count: width * height * 3)
        let rect = CGRect(x: 0, y: 0, width: width, height: height)
        let context = createARGBBitmapContext(width: width, height: height)
        context.clear(rect)
        context.draw(self.cgImage!, in: rect)
        var location = 0
        if let data = context.data {
            while location < (width * height) {
                let arrOffset = 3 * location
                let offset = 4 * location
                let R = data.load(fromByteOffset: offset + 1, as: UInt8.self)
                let G = data.load(fromByteOffset: offset + 2, as: UInt8.self)
                let B = data.load(fromByteOffset: offset + 3, as: UInt8.self)
                pixels[arrOffset] = R
                pixels[arrOffset + 1] = G
                pixels[arrOffset + 2] = B
                location += 1
            }
            free(context.data) // Free the data consumed, perhaps this isn't right?
        }
        return pixels
    }
}
Instruments reports a malloc leak of 1.48 MiB, which is right for my image size (540 × 720). I free the data, but apparently that is not right.
I should mention that I know you can pass nil to the CGContext init (and it will manage the memory), but I'm curious why using malloc creates an issue. Is there something more I should know? (I'm more familiar with Obj-C.)
Because CoreGraphics is not handled by ARC (like all other C libraries), you need to wrap your code in an autoreleasepool, even in Swift. That is particularly true if you are not on the main thread (which you should not be, if CoreGraphics is involved... .userInitiated or lower is appropriate).
func myFunc() {
    for _ in 0 ..< makeMoneyFast {
        autoreleasepool {
            // Create CGImageRef etc...
            // Do Stuff... whir... whiz... PROFIT!
        }
    }
}
For those that care, your Objective-C should also be wrapped like:
BOOL result = NO;
NSMutableData* data = [[NSMutableData alloc] init];
@autoreleasepool {
    CGImageRef image = [self CGImageWithResolution:dpi
                                          hasAlpha:hasAlpha
                                     relativeScale:scale];
    NSAssert(image != nil, @"could not create image for TIFF export");
    if (image == nil)
        return nil;
    CGImageDestinationRef destRef = CGImageDestinationCreateWithData((CFMutableDataRef)data, kUTTypeTIFF, 1, NULL);
    CGImageDestinationAddImage(destRef, image, (CFDictionaryRef)options);
    result = CGImageDestinationFinalize(destRef);
    CFRelease(destRef);
}
if (result) {
    return [data copy];
} else {
    return nil;
}
See this answer for details.
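Applied to the code in the question, a minimal sketch (assuming you keep the malloc-based context) is to pair the allocation with a free via defer and run the whole extraction inside an autoreleasepool:

extension UIImage {
    func ARGBPixelValues() -> [UInt8] {
        return autoreleasepool {
            let width = Int(self.size.width)
            let height = Int(self.size.height)
            let context = createARGBBitmapContext(width: width, height: height)
            // Free the malloc'd backing store no matter how we leave this scope
            defer { free(context.data) }
            var pixels = [UInt8](repeating: 0, count: width * height * 3)
            // ... draw the image and copy the channel bytes exactly as in the question ...
            return pixels
        }
    }
}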

How to extract pixel data for processing from CMSampleBuffer using Swift in iOS 9?

I am writing an app in Swift which employs the Scandit barcode scanning SDK. The SDK permits you to access camera frames directly and provides the frame as a CMSampleBuffer. They provide documentation in Objective-C, which I am having trouble getting to work in Swift. I do not know if the problem is in porting the code, or if there is something amiss with the sample buffer itself, perhaps due to a change in Core Media since their documentation was generated.
Their API exposes the frame as follows (Objective-C):
@interface YourViewController () <SBSProcessFrameDelegate>
...
- (void)barcodePicker:(SBSBarcodePicker*)barcodePicker
      didProcessFrame:(CMSampleBufferRef)frame
              session:(SBSScanSession*)session {
    // Process the frame yourself.
}
Building from several answers here on SO, I attempt to process the frame with:
let imageBuffer = CMSampleBufferGetImageBuffer(frame)!
CVPixelBufferLockBaseAddress(imageBuffer, 0)
let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let width = CVPixelBufferGetWidth(imageBuffer)
let height = CVPixelBufferGetHeight(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
let context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, bitmapInfo.rawValue)
let quartzImage = CGBitmapContextCreateImage(context)
CVPixelBufferUnlockBaseAddress(imageBuffer,0)
let image = UIImage(CGImage: quartzImage!)
But, this fails with:
Jan 29 09:01:30 Scandit[1308] <Error>: CGBitmapContextCreate: invalid data bytes/row: should be at least 7680 for 8 integer bits/component, 3 components, kCGImageAlphaNoneSkipFirst.
Jan 29 09:01:30 Scandit[1308] <Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
fatal error: unexpectedly found nil while unwrapping an Optional value
The fatal error is in attempting to resolve a UIImage from quartzImage.
The width, height, and bytesPerRow are (at the base address):
Width: 1920
Height: 1080
Bytes per row: 2904
As passed from the delegate, here is what the buffer contains according to CMSampleBufferGetFormatDescription(frame):
Optional(<CMVideoFormatDescription 0x1447dafa0 [0x1a1864b68]> {
mediaType:'vide'
mediaSubType:'420f'
mediaSpecific: {
codecType: '420f' dimensions: 1920 x 1080
}
extensions: {<CFBasicHash 0x1447dba10 [0x1a1864b68]>{type = immutable dict, count = 6,
entries =>
0 : <CFString 0x19d28b678 [0x1a1864b68]>{contents = "CVImageBufferYCbCrMatrix"} = <CFString 0x19d28b6b8 [0x1a1864b68]>{contents = "ITU_R_601_4"}
1 : <CFString 0x19d28b7d8 [0x1a1864b68]>{contents = "CVImageBufferTransferFunction"} = <CFString 0x19d28b698 [0x1a1864b68]>{contents = "ITU_R_709_2"}
2 : <CFString 0x19d2b65c0 [0x1a1864b68]>{contents = "CVBytesPerRow"} = <CFNumber 0xb00000000000b582 [0x1a1864b68]>{value = +2904, type = kCFNumberSInt32Type}
3 : <CFString 0x19d2b6640 [0x1a1864b68]>{contents = "Version"} = <CFNumber 0xb000000000000022 [0x1a1864b68]>{value = +2, type = kCFNumberSInt32Type}
5 : <CFString 0x19d28b758 [0x1a1864b68]>{contents = "CVImageBufferColorPrimaries"} = <CFString 0x19d28b698 [0x1a1864b68]>{contents = "ITU_R_709_2"}
6 : <CFString 0x19d28b818 [0x1a1864b68]>{contents = "CVImageBufferChromaLocationTopField"} = <CFString 0x19d28b878 [0x1a1864b68]>{contents = "Center"}
}
}
})
I realize there may be multiple "planes" here, but even with:
let pixelBufferBytesPerRow0 = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 0)
let pixelBufferBytesPerRow1 = CVPixelBufferGetBytesPerRowOfPlane(imageBuffer, 1)
This gives:
Pixel buffer bytes per row (Plane 0): 1920
Pixel buffer bytes per row (Plane 1): 1920
I don't understand that discrepancy.
I also attempted to process each pixel individually, as it is clear the buffer contains some manner of YCbCr, but that fails every way I have tried. The Scandit API suggests (Objective-C):
// Get the buffer info for the YCbCrBiPlanar format.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
But, I cannot find a Swift implementation that permits access to the buffer info using CVPlanarPixelBufferInfo... everything I have tried fails, so I am unable to determine the offset for "Y", "Cr", etc.
How can I access the pixel data in the buffer? Is this a problem with the CMSampleBuffer the SDK is passing, a problem with iOS9, or both?
Working from Codo's "hints" and integrating with Objective-C code in the Scandit documentation, I worked out a solution in Swift. Though I accepted Codo's answer as it helped tremendously, I'm also answering my own question in the hopes that a complete solution would help someone in the future:
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
let width = CVPixelBufferGetWidth(pixelBuffer)
let height = CVPixelBufferGetHeight(pixelBuffer)
let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
let lumaBuffer = UnsafeMutablePointer<UInt8>(lumaBaseAddress)
let chromaBuffer = UnsafeMutablePointer<UInt8>(chromaBaseAddress)
var rgbaImage = [UInt8](count: 4 * width * height, repeatedValue: 0)
for var x = 0; x < width; x++ {
    for var y = 0; y < height; y++ {
        let lumaIndex = x + y * lumaBytesPerRow
        let chromaIndex = (y / 2) * chromaBytesPerRow + (x / 2) * 2
        let yp = lumaBuffer[lumaIndex]
        let cb = chromaBuffer[chromaIndex]
        let cr = chromaBuffer[chromaIndex + 1]
        let ri = Double(yp) + 1.402 * (Double(cr) - 128)
        let gi = Double(yp) - 0.34414 * (Double(cb) - 128) - 0.71414 * (Double(cr) - 128)
        let bi = Double(yp) + 1.772 * (Double(cb) - 128)
        let r = UInt8(min(max(ri, 0), 255))
        let g = UInt8(min(max(gi, 0), 255))
        let b = UInt8(min(max(bi, 0), 255))
        rgbaImage[(x + y * width) * 4] = b
        rgbaImage[(x + y * width) * 4 + 1] = g
        rgbaImage[(x + y * width) * 4 + 2] = r
        rgbaImage[(x + y * width) * 4 + 3] = 255
    }
}
let colorSpace = CGColorSpaceCreateDeviceRGB()
let dataProvider: CGDataProviderRef = CGDataProviderCreateWithData(nil, rgbaImage, 4 * width * height, nil)!
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
let cgImage: CGImageRef = CGImageCreate(width, height, 8, 32, width * 4, colorSpace!, bitmapInfo, dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)!
let image: UIImage = UIImage(CGImage: cgImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
Despite iterating through the entire 8.3MP image, the code executes very quickly. I freely admit that I don't have a deep understanding of Core Media frameworks, but I believe this means the code is executing on the GPU. But, I would appreciate any comments on the code to make it more efficient, or to improve the "Swiftness" as I am completely an amateur.
This is not a complete answer, just some hints:
Scandit uses the YCbCrBiPlanar format. It has a Y byte for each pixel and a Cb and a Cr byte for each group of 2x2 pixels. The Y values are on the first plane, the Cb and Cr values on the second plane.
If the image is w x h pixels large, then the first plane contains h rows of w bytes (and maybe some padding for each line).
The second plane contains h / 2 rows of w / 2 pairs of bytes. Each pair consists of a Cb and a Cr value. Again, each row might have some padding at the end.
So the value of Y for the pixel at position (x, y) can be found at the address:
Y: baseAddressPlane1 + y * bytesPerRowPlane1 + x
And the values Cb and Cr for the pixel at position (x, y) can be found at the addresses:
Cb: baseAddressPlane2 + (y / 2) * bytesPerRowPlane2 + (x / 2) * 2
Cr: baseAddressPlane2 + (y / 2) * bytesPerRowPlane2 + (x / 2) * 2 + 1
The divisions by 2 are integer divisions that discard the fractional part.

Loading Texture2D data in DirectX 11 Compute Shader

I am trying to read some data from a Texture2D in a DirectX 11 compute shader; however, the Load function of the Texture2D object keeps returning 0 even though the texture is filled with the same float value everywhere.
It is a 160 * 120 Texture2D with DXGI_FORMAT_R32G32B32A32_FLOAT. The following code is how I created this resource:
HRESULT TestResources(ID3D11Device* pd3dDevice, ID3D11DeviceContext* pImmediateContext) {
    float *test = new float[4 * 80 * 60 * 4]; // 80 * 60, 4 channels, 1 big texture containing 4 subimages of 80 * 60
    for (int i = 0; i < 4 * 80 * 60 * 4; i++) test[i] = 0.7f;
    HRESULT hr = S_OK;

    D3D11_TEXTURE2D_DESC RTtextureDesc;
    ZeroMemory(&RTtextureDesc, sizeof(D3D11_TEXTURE2D_DESC));
    RTtextureDesc.Width = 160;
    RTtextureDesc.Height = 120;
    RTtextureDesc.MipLevels = 1;
    RTtextureDesc.ArraySize = 1;
    RTtextureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
    RTtextureDesc.SampleDesc.Count = 1;
    RTtextureDesc.SampleDesc.Quality = 0;
    RTtextureDesc.Usage = D3D11_USAGE_DYNAMIC;
    RTtextureDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
    RTtextureDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    RTtextureDesc.MiscFlags = 0;

    D3D11_SUBRESOURCE_DATA InitData;
    InitData.pSysMem = test;
    InitData.SysMemPitch = sizeof(float) * 4;
    V_RETURN(pd3dDevice->CreateTexture2D(&RTtextureDesc, &InitData, &m_pInputTex2Ds));
    //V_RETURN(pd3dDevice->CreateTexture2D(&RTtextureDesc, NULL, &m_pInputTex2Ds));

    D3D11_SHADER_RESOURCE_VIEW_DESC SRViewDesc;
    ZeroMemory(&SRViewDesc, sizeof(SRViewDesc));
    SRViewDesc.Format = RTtextureDesc.Format;
    SRViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
    SRViewDesc.Texture2D.MostDetailedMip = 0;
    SRViewDesc.Texture2D.MipLevels = 1;
    V_RETURN(pd3dDevice->CreateShaderResourceView(m_pInputTex2Ds, &SRViewDesc, &m_pInputTexSRV));

    delete[] test;
    return hr;
}
And then I try to run dispatch with X = Y = 2 and Z = 1 like the following:
void ComputeShaderReduction::ExecuteComputeShader(ID3D11DeviceContext* pd3dImmediateContext, UINT uInputNum, ID3D11UnorderedAccessView** ppUAVInputs, UINT X, UINT Y, UINT Z) {
    pd3dImmediateContext->CSSetShader(m_pComputeShader, nullptr, 0);
    pd3dImmediateContext->CSSetShaderResources(0, 1, &m_pInputTexSRV); // test code
    pd3dImmediateContext->CSSetUnorderedAccessViews(0, uInputNum, ppUAVInputs, nullptr);
    //pd3dImmediateContext->CSSetUnorderedAccessViews(0, 1, &m_pGPUOutUAVs, nullptr);
    pd3dImmediateContext->UpdateSubresource(m_pConstBuf, 0, nullptr, &m_ConstBuf, 0, 0);
    pd3dImmediateContext->CSSetConstantBuffers(0, 1, &m_pConstBuf);
    pd3dImmediateContext->Dispatch(X, Y, Z);
    pd3dImmediateContext->CSSetShader(nullptr, nullptr, 0);

    ID3D11UnorderedAccessView* ppUAViewnullptr[1] = { nullptr };
    pd3dImmediateContext->CSSetUnorderedAccessViews(0, 1, ppUAViewnullptr, nullptr);
    ID3D11ShaderResourceView* ppSRVnullptr[1] = { nullptr };
    pd3dImmediateContext->CSSetShaderResources(0, 1, ppSRVnullptr);
    ID3D11Buffer* ppCBnullptr[1] = { nullptr };
    pd3dImmediateContext->CSSetConstantBuffers(0, 1, ppCBnullptr);
}
Then I wrote a very simple compute shader to try to read the data from the Texture2D and output it. The compute shader looks like this:
#define subimg_dim_x 80
#define subimg_dim_y 60

Texture2D<float4> BufferIn : register(t0);
StructuredBuffer<float> Test : register(t1);
RWStructuredBuffer<float> BufferOut : register(u0);

groupshared float sdata[subimg_dim_x];

[numthreads(subimg_dim_x, 1, 1)]
void CSMain(uint3 DTid : SV_DispatchThreadID,
            uint3 threadIdx : SV_GroupThreadID,
            uint3 groupIdx : SV_GroupID) {
    sdata[threadIdx.x] = 0.0;
    GroupMemoryBarrierWithGroupSync();
    if (threadIdx.x == 0) {
        float4 num = BufferIn.Load(uint3(groupIdx.x, groupIdx.y, 1));
        //BufferOut[groupIdx.y * 2 + groupIdx.x] = 2.0; // This one gives me 2.0 as output in the console
        BufferOut[groupIdx.y * 2 + groupIdx.x] = num.x; // This one keeps giving me 0.0, even though in the texture r = g = b = a = 0.7, so it should print 0.7 in the console
    }
    GroupMemoryBarrierWithGroupSync();
}
I think the way I print the compute shader result on the CPU end is correct.
void ComputeShaderReduction::CopyToCPUBuffer(ID3D11Device* pdevice, ID3D11DeviceContext* pd3dImmediateContext, ID3D11Buffer* pGPUOutBufs) {
    D3D11_BUFFER_DESC desc;
    ZeroMemory(&desc, sizeof(desc));
    pGPUOutBufs->GetDesc(&desc);
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
    desc.Usage = D3D11_USAGE_STAGING;
    desc.BindFlags = 0;
    desc.MiscFlags = 0;
    if (!m_pCPUOutBufs && SUCCEEDED(pdevice->CreateBuffer(&desc, nullptr, &m_pCPUOutBufs))) {
        pd3dImmediateContext->CopyResource(m_pCPUOutBufs, pGPUOutBufs);
    }
    else pd3dImmediateContext->CopyResource(m_pCPUOutBufs, pGPUOutBufs);

    D3D11_MAPPED_SUBRESOURCE MappedResource;
    float *p;
    pd3dImmediateContext->Map(m_pCPUOutBufs, 0, D3D11_MAP_READ, 0, &MappedResource);
    p = (float*)MappedResource.pData;
    for (int i = 0; i < 4; i++) printf("%d %f\n", i, p[i]);
    pd3dImmediateContext->Unmap(m_pCPUOutBufs, 0);
    printf("\n");
}
The buffer bound to the UAV has only 4 elements. So if all the float values in my Texture2D are 0.7, I should see four 0.7s printed by the CopyToCPUBuffer function instead of 0.0s.
Does anyone know what could be wrong in my code, or can someone provide an entire example or a tutorial that shows how to read a DirectX 11 Texture2D's data in a compute shader correctly?
Thanks in advance.
The following is wrong, for a start: the pitch of your input data is the number of bytes per row of the texture, not per pixel.
InitData.SysMemPitch = sizeof(float) * 4;
Secondly:
float4 num = BufferIn.Load(uint3(groupIdx.x, groupIdx.y, 1));
You're trying to load data from the second mip of the texture, but it only has one mip level.
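A sketch of the two corrected lines (the rest of the question's code stays as it is):

// Pitch is bytes per full row: 160 texels * 4 channels * sizeof(float)
InitData.SysMemPitch = RTtextureDesc.Width * 4 * sizeof(float);

// In the shader, the third coordinate of Load is the mip level; the only mip is 0
float4 num = BufferIn.Load(uint3(groupIdx.x, groupIdx.y, 0));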

Get CIColorCube Filter Working In Swift

I am trying to get the CIColorCube filter working. However, the Apple documentation only provides a poorly explained reference example here:
// Allocate memory
const unsigned int size = 64;
float *cubeData = (float *)malloc(size * size * size * sizeof(float) * 4);
float rgb[3], hsv[3], *c = cubeData;
// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++) {
    rgb[2] = ((double)z)/(size-1); // Blue value
    for (int y = 0; y < size; y++) {
        rgb[1] = ((double)y)/(size-1); // Green value
        for (int x = 0; x < size; x++) {
            rgb[0] = ((double)x)/(size-1); // Red value
            // Convert RGB to HSV
            // You can find publicly available rgbToHSV functions on the Internet
            rgbToHSV(rgb, hsv);
            // Use the hue value to determine which to make transparent
            // The minimum and maximum hue angle depends on
            // the color you want to remove
            float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;
            // Calculate premultiplied alpha values for the cube
            c[0] = rgb[0] * alpha;
            c[1] = rgb[1] * alpha;
            c[2] = rgb[2] * alpha;
            c[3] = alpha;
            c += 4; // advance our pointer into memory for the next color value
        }
    }
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
                                    length:cubeDataSize
                              freeWhenDone:YES];
CIColorCube *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:@"inputCubeData"];
So I have attempted to translate this over to Swift with the following:
var filter = CIFilter(name: "CIColorCube")
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setDefaults()
var size: UInt = 64
var floatSize = UInt(sizeof(Float))
var cubeDataSize: size_t = size * size * size * floatSize * 4
var colorCubeData: [Float] = [
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1,
    0, 0, 0, 1
]
var cubeData: NSData = NSData(bytesNoCopy: colorCubeData, length: cubeDataSize)
However I get an error when trying to create the cube data:
"Extra argument 'bytesNoCopy' in call"
Basically I am creating the cubeData wrong. Can you advise me on how to properly create the cubeData object in Swift?
Thanks!
Looks like you are after the chroma key filter recipe described here. Here's some code that works. You get a filter for the color you want to make transparent, described by its hue (HSV) angle:
func RGBtoHSV(r: Float, g: Float, b: Float) -> (h: Float, s: Float, v: Float) {
    var h: CGFloat = 0
    var s: CGFloat = 0
    var v: CGFloat = 0
    let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (Float(h), Float(s), Float(v))
}

func colorCubeFilterForChromaKey(hueAngle: Float) -> CIFilter {
    let hueRange: Float = 60 // degrees size pie shape that we want to replace
    let minHueAngle: Float = (hueAngle - hueRange / 2.0) / 360
    let maxHueAngle: Float = (hueAngle + hueRange / 2.0) / 360
    let size = 64
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var rgb: [Float] = [0, 0, 0]
    var hsv: (h: Float, s: Float, v: Float)
    var offset = 0
    for z in 0 ..< size {
        rgb[2] = Float(z) / Float(size) // blue value
        for y in 0 ..< size {
            rgb[1] = Float(y) / Float(size) // green value
            for x in 0 ..< size {
                rgb[0] = Float(x) / Float(size) // red value
                hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2])
                // the condition checking hsv.s may need to be removed for your use-case
                let alpha: Float = (hsv.h > minHueAngle && hsv.h < maxHueAngle && hsv.s > 0.5) ? 0 : 1.0
                cubeData[offset] = rgb[0] * alpha
                cubeData[offset + 1] = rgb[1] * alpha
                cubeData[offset + 2] = rgb[2] * alpha
                cubeData[offset + 3] = alpha
                offset += 4
            }
        }
    }
    let b = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
    let data = b as NSData
    let colorCube = CIFilter(name: "CIColorCube", withInputParameters: [
        "inputCubeDimension": size,
        "inputCubeData": data
    ])
    return colorCube!
}
Then, to get your filter, call:
let chromaKeyFilter = colorCubeFilterForChromaKey(hueAngle: 120)
I used 120 for your standard green screen.
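To then key out the color from an image, here is a minimal usage sketch (kCIInputImageKey and outputImage are standard CIFilter API; ciImage is assumed to be your input CIImage):

chromaKeyFilter.setValue(ciImage, forKey: kCIInputImageKey)
if let keyed = chromaKeyFilter.outputImage {
    // composite `keyed` over a background, or render it with a CIContext
}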
I believe you want to use NSData(bytes: UnsafePointer<Void>, length: Int) instead of NSData(bytesNoCopy: UnsafeMutablePointer<Void>, length: Int). Make that change and calculate the length in the following way and you should be up and running.
let colorCubeData: [Float] = [
0, 0, 0, 1,
1, 0, 0, 1,
0, 1, 0, 1,
1, 1, 0, 1,
0, 0, 1, 1,
1, 0, 1, 1,
0, 1, 1, 1,
1, 1, 1, 1
]
let cubeData = NSData(bytes: colorCubeData, length: colorCubeData.count * sizeof(Float))
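Note that this 8-entry table corresponds to a cube dimension of 2 (2 × 2 × 2 RGBA entries), so a sketch of wiring it up would be:

let filter = CIFilter(name: "CIColorCube")!
filter.setValue(2, forKey: "inputCubeDimension") // data length must equal dimension^3 * 4 floats
filter.setValue(cubeData, forKey: "inputCubeData")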
