DJI Osmo Mobile video preview - iOS

I want to create a sample app for the DJI Osmo Mobile 2, but when I tried to fetch the camera from DJIHandheld it was always nil. How can I use the native camera? I tried to map the CMSampleBuffer of AVCaptureVideoDataOutputSampleBufferDelegate to UnsafeMutablePointer<UInt8> in the captureOutput delegate method, but the preview was always black.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    // Bi-planar layout: plane 0 is luma (Y), plane 1 is interleaved chroma (CbCr)
    let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
    let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    let lumaBuffer = lumaBaseAddress?.assumingMemoryBound(to: UInt8.self)
    let chromaBuffer = chromaBaseAddress?.assumingMemoryBound(to: UInt8.self)
    var rgbaImage = [UInt8](repeating: 0, count: 4 * width * height)
    for x in 0 ..< width {
        for y in 0 ..< height {
            let lumaIndex = x + y * lumaBytesPerRow
            let chromaIndex = (y / 2) * chromaBytesPerRow + (x / 2) * 2
            let yp = lumaBuffer?[lumaIndex]
            let cb = chromaBuffer?[chromaIndex]
            let cr = chromaBuffer?[chromaIndex + 1]
            // BT.601 YCbCr -> RGB
            let ri = Double(yp!) + 1.402 * (Double(cr!) - 128)
            let gi = Double(yp!) - 0.34414 * (Double(cb!) - 128) - 0.71414 * (Double(cr!) - 128)
            let bi = Double(yp!) + 1.772 * (Double(cb!) - 128)
            let r = UInt8(min(max(ri, 0), 255))
            let g = UInt8(min(max(gi, 0), 255))
            let b = UInt8(min(max(bi, 0), 255))
            rgbaImage[(x + y * width) * 4] = b
            rgbaImage[(x + y * width) * 4 + 1] = g
            rgbaImage[(x + y * width) * 4 + 2] = r
            rgbaImage[(x + y * width) * 4 + 3] = 255
        }
    }
    // Balance the lock taken above before leaving the callback
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let data = NSData(bytes: &rgbaImage, length: rgbaImage.count)
    // Note: videoBuffer is never freed here; this assumes VideoPreviewer copies the pushed bytes.
    let videoBuffer = UnsafeMutablePointer<UInt8>.allocate(capacity: data.length)
    data.getBytes(videoBuffer, length: data.length)
    VideoPreviewer.instance().push(videoBuffer, length: Int32(data.length))
}
I don't know if this is the correct way.
PS: VideoPreviewer is based on ffmpeg.

The Osmo Mobile 2 does not come with its own camera, so the SDK is not going to return a camera instance; this is different from the other Osmo models that do have a camera. You will need to build your code to interact directly with your iOS device's camera rather than going through the Osmo Mobile 2.
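For what it's worth, if VideoPreviewer is an ffmpeg-based H.264 decoder (as the PS suggests), pushing raw RGBA bytes into it would also explain the black preview. For a native-camera preview, AVCaptureVideoPreviewLayer needs no pixel conversion at all; a minimal sketch (the class name and preset choices are mine):
import AVFoundation
import UIKit

final class NativeCameraPreviewController: UIViewController {
    private let session = AVCaptureSession()

    override func viewDidLoad() {
        super.viewDidLoad()
        session.sessionPreset = .high
        // Use the iPhone's back camera directly; the Osmo Mobile 2 is gimbal-only.
        guard let device = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                   for: .video, position: .back),
              let input = try? AVCaptureDeviceInput(device: device),
              session.canAddInput(input) else { return }
        session.addInput(input)

        // The preview layer renders frames from the session without any manual conversion.
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = view.bounds
        view.layer.addSublayer(previewLayer)

        session.startRunning()
    }
}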

Related

TensorFlowLite.Tensor to UIImage

I am new to Swift, TFLite, and iOS. I managed to convert and run my model. However, at the end I need to reconstruct an image. My TFLite model returns a TFLite.Tensor of Float32, 4-D, shape (1, height, width, 3).
let outputTensor: Tensor
outputTensor = try myInterpreter.output(at: 0)
I am looking to make an RGB picture without alpha. In Python it would look like this:
Image.fromarray((np.array(outputTensor.data) * 255).astype(np.uint8))
From my understanding, the best way would be to make a CVPixelBuffer, apply a Core Image transformation (for the ×255), and finally make the UIImage. I am deeply lost in the iOS docs; there are many possibilities. Does the community have a suggestion?
From Google's example, an extension of UIImage can be coded:
extension UIImage {
    convenience init?(data: Data, size: CGSize) {
        let width = Int(size.width)
        let height = Int(size.height)
        // Interpret the tensor bytes as Float32 values (requires a Data.toArray helper; see below).
        let floats = data.toArray(type: Float32.self)
        let bufferCapacity = width * height * 4
        let unsafePointer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferCapacity)
        let unsafeBuffer = UnsafeMutableBufferPointer<UInt8>(
            start: unsafePointer,
            count: bufferCapacity)
        defer {
            unsafePointer.deallocate()
        }
        for x in 0..<width {
            for y in 0..<height {
                let floatIndex = (y * width + x) * 3   // tensor is RGB, 3 floats per pixel
                let index = (y * width + x) * 4        // bitmap is RGBX, 4 bytes per pixel
                // Clamp before converting: UInt8(_:) traps on values outside 0...255.
                let red = UInt8(min(max(floats[floatIndex] * 255, 0), 255))
                let green = UInt8(min(max(floats[floatIndex + 1] * 255, 0), 255))
                let blue = UInt8(min(max(floats[floatIndex + 2] * 255, 0), 255))
                unsafeBuffer[index] = red
                unsafeBuffer[index + 1] = green
                unsafeBuffer[index + 2] = blue
                unsafeBuffer[index + 3] = 0            // ignored: alpha is skipped below
            }
        }
        let outData = Data(buffer: unsafeBuffer)
        // Construct image from output tensor data
        let alphaInfo = CGImageAlphaInfo.noneSkipLast
        let bitmapInfo = CGBitmapInfo(rawValue: alphaInfo.rawValue)
            .union(.byteOrder32Big)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard
            let imageDataProvider = CGDataProvider(data: outData as CFData),
            let cgImage = CGImage(
                width: width,
                height: height,
                bitsPerComponent: 8,
                bitsPerPixel: 32,
                bytesPerRow: MemoryLayout<UInt8>.size * 4 * width,
                space: colorSpace,
                bitmapInfo: bitmapInfo,
                provider: imageDataProvider,
                decode: nil,
                shouldInterpolate: false,
                intent: .defaultIntent
            )
        else {
            return nil
        }
        self.init(cgImage: cgImage)
    }
}
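Note that toArray(type:) is not a standard Data method; it is a helper that ships with Google's TensorFlow Lite example code. A minimal sketch of it:
extension Data {
    /// Reinterprets the raw bytes as an array of the given numeric type.
    func toArray<T>(type: T.Type) -> [T] where T: ExpressibleByIntegerLiteral {
        var array = [T](repeating: 0, count: count / MemoryLayout<T>.stride)
        _ = array.withUnsafeMutableBytes { copyBytes(to: $0) }
        return array
    }
}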
Then the image can easily be constructed from the output of the TFLite inference:
let outputTensor: Tensor
outputTensor = try decoder.output(at: 0)
image = UIImage(data: outputTensor.data, size: size) ?? UIImage()

CVPixelBuffer resulting in a garbage image on the device, while working as expected on the simulator

I am trying to create an image out of artificially created data and want to use CVPixelBuffer:
private func RGBAImage(width w: Int, height h: Int) -> UIImage? {
    let width = w * Int(UIScreen.main.scale)
    let height = h * Int(UIScreen.main.scale)
    // Prepare artificial data
    let dataPtr = UnsafeMutablePointer<UInt8>.allocate(capacity: width * height * 4)
    for i in 0..<width {
        for j in 0..<height {
            dataPtr[4 * (i + j * width)] = UInt8(sin(Double(i) * 0.01 * .pi / Double(UIScreen.main.scale)) * 127 + 127)
            dataPtr[4 * (i + j * width) + 1] = UInt8(255)
            dataPtr[4 * (i + j * width) + 2] = UInt8(0)
            dataPtr[4 * (i + j * width) + 3] = UInt8(0)
        }
    }
    // Convert data into CVPixelBuffer
    var pxBuffer: CVPixelBuffer?
    CVPixelBufferCreateWithBytes(
        kCFAllocatorDefault,
        width,
        height,
        kCVPixelFormatType_32ARGB,
        dataPtr,
        width * 4,
        nil,
        nil,
        [kCVPixelBufferIOSurfacePropertiesKey: [:]] as CFDictionary,
        &pxBuffer
    )
    dataPtr.deallocate()
    guard let cvPxBuffer = pxBuffer else {
        return nil
    }
    // Generate image from CVPixelBuffer
    let ciImage = CIImage(cvImageBuffer: cvPxBuffer)
    return UIImage(ciImage: ciImage, scale: UIScreen.main.scale, orientation: .up)
}
The code works fine on the simulator, but the same code shows garbage results on the device. What am I missing here? Any suggestion is welcome.
Figured it out myself. I still don't know why CVPixelBufferCreateWithBytes doesn't work, but I was able to make it work by creating the pixel buffer with CVPixelBufferCreate and setting the value of each RGBA component one by one. This should be a better approach as well, since I don't need to create an array first.
Here is the working code for both device and simulator:
private func RGBAImage(width w: Int, height h: Int) -> UIImage? {
    let width = w * Int(UIScreen.main.scale)
    let height = h * Int(UIScreen.main.scale)
    let bytesPerPixel = 4
    // Create CVPixelBuffer with artificial data
    var pxBuffer: CVPixelBuffer?
    CVPixelBufferCreate(
        kCFAllocatorDefault,
        width,
        height,
        kCVPixelFormatType_32ARGB,
        nil,
        &pxBuffer)
    guard let cvPxBuffer = pxBuffer else {
        return nil
    }
    CVPixelBufferLockBaseAddress(cvPxBuffer, CVPixelBufferLockFlags(rawValue: 0))
    let bufferWidth = CVPixelBufferGetWidth(cvPxBuffer)
    let bufferHeight = CVPixelBufferGetHeight(cvPxBuffer)
    // On device the buffer's rows may be padded, so use the buffer's own
    // bytes-per-row instead of assuming width * 4.
    let bytesPerRow = CVPixelBufferGetBytesPerRow(cvPxBuffer)
    guard let baseAddress = CVPixelBufferGetBaseAddress(cvPxBuffer) else {
        return nil
    }
    for row in 0..<bufferHeight {
        var pixel = baseAddress + row * bytesPerRow
        for col in 0..<bufferWidth {
            // 32ARGB byte order: A, R, G, B
            let alpha = pixel
            alpha.storeBytes(of: UInt8(sin(Double(col) * 0.01 * .pi / Double(UIScreen.main.scale)) * 127 + 127), as: UInt8.self)
            let red = pixel + 1
            red.storeBytes(of: 255, as: UInt8.self)
            let green = pixel + 2
            green.storeBytes(of: 0, as: UInt8.self)
            let blue = pixel + 3
            blue.storeBytes(of: 0, as: UInt8.self)
            pixel += bytesPerPixel
        }
    }
    CVPixelBufferUnlockBaseAddress(cvPxBuffer, CVPixelBufferLockFlags(rawValue: 0))
    // Generate image from CVPixelBuffer
    let ciImage = CIImage(cvImageBuffer: cvPxBuffer)
    return UIImage(ciImage: ciImage, scale: UIScreen.main.scale, orientation: .up)
}
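A follow-up note (mine, not the original answerer's): UIImage(ciImage:) is rendered lazily, and some consumers only handle CGImage-backed images, so rendering through a CIContext before wrapping can be more robust:
// Render the CIImage eagerly into a CGImage before wrapping it in a UIImage.
let context = CIContext()
if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
    return UIImage(cgImage: cgImage, scale: UIScreen.main.scale, orientation: .up)
}
return nil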

How to read and log the raw pixels of an image in Swift on iOS

I need to read the pixel values of an image and iterate over them to print them in the Swift output. I have written this so far, using an RGBAImage class to read out pixels. I get lost going from the CGContextRef to the iteration. I tried to write it from a CGImage, porting pixel-data code from Objective-C since I wanted to work in Swift.
func createRGBAPixel(inImage: CGImageRef) -> CGContextRef {
    // Image width, height
    let pixelWidth = CGImageGetWidth(inImage)
    let pixelHeight = CGImageGetHeight(inImage)
    // Declaring number of bytes
    let bytesPerRow = Int(pixelWidth) * 4
    let byteCount = bytesPerRow * Int(pixelHeight)
    // RGB color space
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Allocating image data
    let mapData = malloc(byteCount)
    let mapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    // Create bitmap context
    let context = CGBitmapContextCreate(mapData, pixelWidth, pixelHeight, Int(8), Int(bytesPerRow), colorSpace, mapInfo.rawValue)
    let pixelImage = CGBitmapContextCreate(pixels, pixelWidth, pixelHeight, bitsPerComponent, bytesPerRow, colorSpace, mapInfo)
    let CGContextRef = pixelImage
    let CGContextDrawImage(context, CGRectMake(0, 0, pixelWidth, pixelHeight), inImage)
    // Iterating and logging
    print("Logging pixel counts")
    let pixels = calloc(pixelHeight * pixelWidth, sizeof(UInt32))
    let myImage = CGImageRef: inImage
    let myRGBA = RGBAImage(image: myImage)! // RGBAImage class to read pixels.
    var number = 0
    var currentPixel: Int32 = 0
    currentPixel = pixels * UInt32
    for number in 0..<pixelHeight {
        for number in 0..<pixelWidth {
            var color = color * currentPixel
            print((pixel.red + pixel.green + pixel.blue) / 3.0)
            currentPixel++
        }
    }
    return context!
}
I created a small class for this:
class ImagePixelReader {
    enum Component: Int {
        case r = 0
        case g = 1
        case b = 2
        case alpha = 3
    }
    struct Color {
        var r: UInt8
        var g: UInt8
        var b: UInt8
        var a: UInt8
        var uiColor: UIColor {
            return UIColor(red: CGFloat(r)/255.0, green: CGFloat(g)/255.0,
                           blue: CGFloat(b)/255.0, alpha: CGFloat(a)/255.0)
        }
    }
    let image: UIImage
    private var data: CFData
    private let pointer: UnsafePointer<UInt8>
    private let scale: Int
    init?(image: UIImage) {
        self.image = image
        guard let cfdata = self.image.cgImage?.dataProvider?.data,
            let pointer = CFDataGetBytePtr(cfdata) else {
                return nil
        }
        self.scale = Int(image.scale)
        self.data = cfdata
        self.pointer = pointer
    }
    func componentAt(_ component: Component, x: Int, y: Int) -> UInt8 {
        assert(CGFloat(x) < image.size.width)
        assert(CGFloat(y) < image.size.height)
        let pixelPosition = (Int(image.size.width) * y * scale + x) * 4 * scale
        return pointer[pixelPosition + component.rawValue]
    }
    func colorAt(x: Int, y: Int) -> Color {
        assert(CGFloat(x) < image.size.width)
        assert(CGFloat(y) < image.size.height)
        let pixelPosition = (Int(image.size.width) * y * scale + x) * 4 * scale
        return Color(r: pointer[pixelPosition + Component.r.rawValue],
                     g: pointer[pixelPosition + Component.g.rawValue],
                     b: pointer[pixelPosition + Component.b.rawValue],
                     a: pointer[pixelPosition + Component.alpha.rawValue])
    }
}
How to use:
if let reader = ImagePixelReader(image: yourImage) {
//get alpha or color
let alpha = reader.componentAt(.alpha, x: 10, y:10)
let color = reader.colorAt(x:10, y: 10).uiColor
//getting all the pixels you need
var values = ""
//iterate over all pixels
for x in 0 ..< Int(image.size.width){
for y in 0 ..< Int(image.size.height){
let color = reader.colorAt(x: x, y: y)
values += "[\(x):\(y):\(color)] "
}
//add new line for every new row
values += "\n"
}
print(values)
}
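One caveat worth adding (my note, not part of the original answer): the class assumes the backing store is tightly packed RGBA at 4 bytes per pixel. Real CGImages can use padded rows or other component orders, so a safer index derives the stride from the image itself:
// Sketch of a stride-aware pixel index (assumes a CGImage-backed UIImage):
if let cgImage = yourImage.cgImage {
    let bytesPerRow = cgImage.bytesPerRow        // may be larger than width * 4
    let bytesPerPixel = cgImage.bitsPerPixel / 8 // usually 4, but not guaranteed
    let x = 10, y = 10
    let pixelPosition = y * bytesPerRow + x * bytesPerPixel
    // then read pointer[pixelPosition + componentOffset] as in the class above
}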

UIImage built from CMSampleBuffer not retained and causing EXC_BAD_ACCESS

I am building a UIImage from a CMSampleBuffer. From the main thread, I call a function to access the pixel data in the CMSampleBuffer and convert the YCbCr planes into an ABGR bitmap which I wrap in a UIImage. I call the function from the main thread with:
let priority = DISPATCH_QUEUE_PRIORITY_DEFAULT
dispatch_async(dispatch_get_global_queue(priority, 0), { () -> Void in
    let image = self.imageFromSampleBuffer(frame)
    dispatch_async(dispatch_get_main_queue(), { () -> Void in
        self.testView.image = image
        self.testView.hidden = false
    })
})
This maintains responsiveness of the UI and main thread as I would hope. The function processing the buffer is:
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)
    let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
    let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    let lumaBuffer = UnsafeMutablePointer<UInt8>(lumaBaseAddress)
    let chromaBuffer = UnsafeMutablePointer<UInt8>(chromaBaseAddress)
    var rgbaImage = [UInt8](count: 4*width*height, repeatedValue: 0)
    for var x = 0; x < width; x++ {
        for var y = 0; y < height; y++ {
            let lumaIndex = x+y*lumaBytesPerRow
            let chromaIndex = (y/2)*chromaBytesPerRow+(x/2)*2
            let yp = lumaBuffer[lumaIndex]
            let cb = chromaBuffer[chromaIndex]
            let cr = chromaBuffer[chromaIndex+1]
            let ri = Double(yp) + 1.402 * (Double(cr) - 128)
            let gi = Double(yp) - 0.34414 * (Double(cb) - 128) - 0.71414 * (Double(cr) - 128)
            let bi = Double(yp) + 1.772 * (Double(cb) - 128)
            let r = UInt8(min(max(ri,0), 255))
            let g = UInt8(min(max(gi,0), 255))
            let b = UInt8(min(max(bi,0), 255))
            rgbaImage[(x + y * width) * 4] = b
            rgbaImage[(x + y * width) * 4 + 1] = g
            rgbaImage[(x + y * width) * 4 + 2] = r
            rgbaImage[(x + y * width) * 4 + 3] = 255
        }
    }
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let dataProvider: CGDataProviderRef = CGDataProviderCreateWithData(nil, rgbaImage, 4 * width * height, nil)!
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
    let cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace!, bitmapInfo, dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)!
    let image = UIImage(CGImage: cgImage)
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
    return image
}
If I put a breakpoint just before the function returns, I can use Quick Look and see the image (and it is what I would expect). However, once the function returns, I cannot use the image anywhere else, and Quick Look always fails. If I attempt to set a UIImageView to the returned image, nothing in the UI changes:
testView.image = image // The UIImageView does not update.
If I try to access the image in any other way (e.g., to attempt to save it to Parse), the code crashes with EXC_BAD_ACCESS. Again, if I save the image to Parse within the above function, it appears in the backend database as expected.
I have also tried calling the processing function without dispatching to global and main queues by calling the function directly. The results are always the same.
I believe this is because the image is not retained. I have tried defining both the image and the CGImage context at the class and file level, but neither changes the outcome. I thought this would maintain a reference, but it apparently does not. I am new enough to Swift that I clearly do not understand how ARC is working in this case.
There were also a few times while debugging that the first Quick Look from within the function showed as "unavailable"... but waiting a few seconds and clicking again resulted in the image appearing. Is it possible it is just taking longer for the data to be made available, perhaps GPU->CPU? If so, how do I check or delay to avoid the crash?
How do I maintain a reference? Is there a better way to handle the image created from the CMSampleBuffer?
The problem is the way in which the CGImage is being created; using dataProvider and CGImageCreate is the specific issue. CGDataProviderCreateWithData does not copy the bytes it is given, so the provider ends up pointing at the storage of the local rgbaImage array, which is gone once the function returns:
let dataProvider = CGDataProviderCreateWithData(nil, rgbaImage, 4 * width * height, nil)!
let cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace!, bitmapInfo, dataProvider, nil, true, CGColorRenderingIntent.RenderingIntentDefault)!
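A smaller fix along the same lines (my sketch, in the same Swift 2-era style as the question, not the route the answer takes below) is to hand the provider memory it owns, by copying the bytes into an NSData first:
// Copy the pixel bytes into an NSData so the provider (and the CGImage)
// own the memory independently of the local rgbaImage array.
let imageData = NSData(bytes: rgbaImage, length: 4 * width * height)
let dataProvider = CGDataProviderCreateWithCFData(imageData)!
let cgImage = CGImageCreate(width, height, 8, 32, width * 4, colorSpace!,
                            bitmapInfo, dataProvider, nil, true,
                            CGColorRenderingIntent.RenderingIntentDefault)!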
A working solution using CGBitmapContextGetData and CGBitmapContextCreateImage follows:
func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, 0)
    let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0)
    let chromaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    let lumaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0)
    let chromaBytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1)
    let lumaBuffer = UnsafeMutablePointer<UInt8>(lumaBaseAddress)
    let chromaBuffer = UnsafeMutablePointer<UInt8>(chromaBaseAddress)
    let contextBytesPerRow = Int(width) * 4
    let contextByteCount = contextBytesPerRow * Int(height)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Note: bitmapData is never freed here; in a long-running capture session
    // it should be released once the context is no longer needed.
    let bitmapData = malloc(contextByteCount)
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue | CGBitmapInfo.ByteOrder32Little.rawValue)
    let context = CGBitmapContextCreate(bitmapData, width, height, 8, contextBytesPerRow, colorSpace, bitmapInfo.rawValue)
    // Write straight into the context's backing store; CGBitmapContextCreateImage
    // below makes its own copy, so the resulting UIImage stays valid.
    let data = CGBitmapContextGetData(context)
    let rgbaImage = UnsafeMutablePointer<UInt8>(data)
    for var x = 0; x < width; x++ {
        for var y = 0; y < height; y++ {
            let lumaIndex = x+y*lumaBytesPerRow
            let chromaIndex = (y/2)*chromaBytesPerRow+(x/2)*2
            let yp = lumaBuffer[lumaIndex]
            let cb = chromaBuffer[chromaIndex]
            let cr = chromaBuffer[chromaIndex+1]
            let ri = Double(yp) + 1.402 * (Double(cr) - 128)
            let gi = Double(yp) - 0.34414 * (Double(cb) - 128) - 0.71414 * (Double(cr) - 128)
            let bi = Double(yp) + 1.772 * (Double(cb) - 128)
            let r = UInt8(min(max(ri,0), 255))
            let g = UInt8(min(max(gi,0), 255))
            let b = UInt8(min(max(bi,0), 255))
            rgbaImage[(x + y * width) * 4] = b
            rgbaImage[(x + y * width) * 4 + 1] = g
            rgbaImage[(x + y * width) * 4 + 2] = r
            rgbaImage[(x + y * width) * 4 + 3] = 255
        }
    }
    let quartzImage = CGBitmapContextCreateImage(context)
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
    let image = UIImage(CGImage: quartzImage!, scale: CGFloat(1.0), orientation: UIImageOrientation.Right)
    return (image)
    // frontCameraImageOrientation = UIImageOrientation.LeftMirrored
    // backCameraImageOrientation = UIImageOrientation.Right
}
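As a side note (not part of the original answer): the per-pixel Swift loop is slow at camera frame rates. Accelerate's vImage performs the same YpCbCr-to-RGBA conversion in one vectorized call; a sketch in current Swift (the function name is mine, and it assumes a bi-planar full-range buffer to match the full-range BT.601 arithmetic above):
import Accelerate

func bgraImageBuffer(from pixelBuffer: CVPixelBuffer) -> vImage_Buffer? {
    CVPixelBufferLockBaseAddress(pixelBuffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(pixelBuffer, .readOnly) }

    var srcYp = vImage_Buffer(
        data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0),
        height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 0)),
        width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 0)),
        rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0))
    var srcCbCr = vImage_Buffer(
        data: CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1),
        height: vImagePixelCount(CVPixelBufferGetHeightOfPlane(pixelBuffer, 1)),
        width: vImagePixelCount(CVPixelBufferGetWidthOfPlane(pixelBuffer, 1)),
        rowBytes: CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 1))

    // Destination buffer; the caller is responsible for free(dest.data).
    var dest = vImage_Buffer()
    vImageBuffer_Init(&dest, srcYp.height, srcYp.width, 32, vImage_Flags(kvImageNoFlags))

    // Full-range BT.601 conversion, mirroring the manual math above.
    var info = vImage_YpCbCrToARGB()
    var pixelRange = vImage_YpCbCrPixelRange(Yp_bias: 0, CbCr_bias: 128,
                                             YpRangeMax: 255, CbCrRangeMax: 255,
                                             YpMax: 255, YpMin: 1,
                                             CbCrMax: 255, CbCrMin: 0)
    vImageConvert_YpCbCrToARGB_GenerateConversion(
        kvImage_YpCbCrToARGBMatrix_ITU_R_601_4, &pixelRange, &info,
        kvImage420Yp8_CbCr8, kvImageARGB8888, vImage_Flags(kvImageNoFlags))

    // permuteMap reorders the nominal ARGB output to BGRA, matching the
    // byte order the answer writes by hand.
    let permuteMap: [UInt8] = [3, 2, 1, 0]
    let error = vImageConvert_420Yp8_CbCr8ToARGB8888(
        &srcYp, &srcCbCr, &dest, &info, permuteMap, 255,
        vImage_Flags(kvImageNoFlags))
    if error != kvImageNoError {
        free(dest.data)
        return nil
    }
    return dest // wrap dest.data in a CGImage/UIImage as in the answer above
}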

How to use LUT png for CIColorCube filter?

I would like to use a lookup table png (example) as color cube data for the CIColorCube filter in Swift. All I have tried (and found) so far are examples with a computed color cube, as in this example.
How can I read a png as lookup data?
I have now used this and this project to adapt their Objective-C implementation for Swift:
func colorCubeFilterFromLUT(imageName: NSString) -> CIFilter? {
    let kDimension: UInt = 64
    let lutImage = UIImage(named: imageName)!.CGImage
    let lutWidth = CGImageGetWidth(lutImage!)
    let lutHeight = CGImageGetHeight(lutImage!)
    let rowCount = lutHeight / kDimension
    let columnCount = lutWidth / kDimension
    if ((lutWidth % kDimension != 0) || (lutHeight % kDimension != 0) || (rowCount * columnCount != kDimension)) {
        NSLog("Invalid colorLUT %@", imageName)
        return nil
    }
    let bitmap = self.createRGBABitmapFromImage(lutImage)
    let size = Int(kDimension) * Int(kDimension) * Int(kDimension) * sizeof(Float) * 4
    let data = UnsafeMutablePointer<Float>(malloc(UInt(size)))
    var bitmapOffset: Int = 0
    var z: UInt = 0
    for (var row: UInt = 0; row < rowCount; row++) {
        for (var y: UInt = 0; y < kDimension; y++) {
            var tmp = z
            for (var col: UInt = 0; col < columnCount; col++) {
                for (var x: UInt = 0; x < kDimension; x++) {
                    let alpha = Float(bitmap[Int(bitmapOffset)]) / 255.0
                    let red = Float(bitmap[Int(bitmapOffset+1)]) / 255.0
                    let green = Float(bitmap[Int(bitmapOffset+2)]) / 255.0
                    let blue = Float(bitmap[Int(bitmapOffset+3)]) / 255.0
                    var dataOffset = Int(z * kDimension * kDimension + y * kDimension + x) * 4
                    data[dataOffset] = red
                    data[dataOffset + 1] = green
                    data[dataOffset + 2] = blue
                    data[dataOffset + 3] = alpha
                    bitmapOffset += 4
                }
                z++
            }
            z = tmp
        }
        z += columnCount
    }
    let colorCubeData = NSData(bytesNoCopy: data, length: size, freeWhenDone: true)
    // create CIColorCube filter
    var filter = CIFilter(name: "CIColorCube")
    filter.setValue(colorCubeData, forKey: "inputCubeData")
    filter.setValue(kDimension, forKey: "inputCubeDimension")
    return filter
}
func createRGBABitmapFromImage(inImage: CGImage) -> UnsafeMutablePointer<Float> {
    // Get image width, height
    let pixelsWide = CGImageGetWidth(inImage)
    let pixelsHigh = CGImageGetHeight(inImage)
    // Declare the number of bytes per row. Each pixel in the bitmap in this
    // example is represented by 4 bytes; 8 bits each of red, green, blue, and alpha.
    let bitmapBytesPerRow = Int(pixelsWide) * 4
    let bitmapByteCount = bitmapBytesPerRow * Int(pixelsHigh)
    // Use the generic RGB color space.
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Allocate memory for image data. This is the destination in memory
    // where any drawing to the bitmap context will be rendered.
    let bitmapData = malloc(CUnsignedLong(bitmapByteCount)) // bitmap
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
    // Create the bitmap context. We want pre-multiplied RGBA, 8 bits
    // per component. Regardless of what the source image format is
    // (CMYK, grayscale, and so on) it will be converted over to the format
    // specified here by CGBitmapContextCreate.
    let context = CGBitmapContextCreate(bitmapData, pixelsWide, pixelsHigh, 8, UInt(bitmapBytesPerRow), colorSpace, bitmapInfo)
    let rect = CGRect(x: 0, y: 0, width: Int(pixelsWide), height: Int(pixelsHigh))
    // Draw the image to the bitmap context. Once we draw, the memory
    // allocated for the context for rendering will then contain the
    // raw image data in the specified color space.
    CGContextDrawImage(context, rect, inImage)
    // Now we can get a pointer to the image data associated with the bitmap context.
    // var data = CGBitmapContextGetData(context)
    // var dataType = UnsafeMutablePointer<Float>(data)
    // return dataType
    var convertedBitmap = malloc(UInt(bitmapByteCount * sizeof(Float)))
    vDSP_vfltu8(UnsafePointer<UInt8>(bitmapData), 1, UnsafeMutablePointer<Float>(convertedBitmap), 1, vDSP_Length(bitmapByteCount))
    free(bitmapData)
    return UnsafeMutablePointer<Float>(convertedBitmap)
}
Also see this answer.
Thought I would update this for Swift 3.0. This also works for JPG and PNG 3D color LUTs:
fileprivate func colorCubeFilterFromLUT(imageName: String) -> CIFilter? {
    let size = 64
    let lutImage = UIImage(named: imageName)!.cgImage
    let lutWidth = lutImage!.width
    let lutHeight = lutImage!.height
    let rowCount = lutHeight / size
    let columnCount = lutWidth / size
    if ((lutWidth % size != 0) || (lutHeight % size != 0) || (rowCount * columnCount != size)) {
        NSLog("Invalid colorLUT %@", imageName)
        return nil
    }
    let bitmap = getBytesFromImage(image: UIImage(named: imageName))!
    let floatSize = MemoryLayout<Float>.size
    // Capacity is in Float elements: size^3 cube entries * 4 components each.
    let cubeData = UnsafeMutablePointer<Float>.allocate(capacity: size * size * size * 4)
    var z = 0
    var bitmapOffset = 0
    for _ in 0 ..< rowCount {
        for y in 0 ..< size {
            let tmp = z
            for _ in 0 ..< columnCount {
                for x in 0 ..< size {
                    let alpha = Float(bitmap[bitmapOffset]) / 255.0
                    let red = Float(bitmap[bitmapOffset+1]) / 255.0
                    let green = Float(bitmap[bitmapOffset+2]) / 255.0
                    let blue = Float(bitmap[bitmapOffset+3]) / 255.0
                    let dataOffset = (z * size * size + y * size + x) * 4
                    cubeData[dataOffset + 3] = alpha
                    cubeData[dataOffset + 2] = red
                    cubeData[dataOffset + 1] = green
                    cubeData[dataOffset + 0] = blue
                    bitmapOffset += 4
                }
                z += 1
            }
            z = tmp
        }
        z += columnCount
    }
    let colorCubeData = NSData(bytesNoCopy: cubeData, length: size * size * size * 4 * floatSize, freeWhenDone: true)
    // create CIColorCube filter
    let filter = CIFilter(name: "CIColorCube")
    filter?.setValue(colorCubeData, forKey: "inputCubeData")
    filter?.setValue(size, forKey: "inputCubeDimension")
    return filter
}
fileprivate func getBytesFromImage(image: UIImage?) -> [UInt8]? {
    var pixelValues: [UInt8]?
    if let imageRef = image?.cgImage {
        let width = imageRef.width
        let height = imageRef.height
        let bitsPerComponent = 8
        let bytesPerRow = width * 4
        let totalBytes = height * bytesPerRow
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var intensities = [UInt8](repeating: 0, count: totalBytes)
        let contextRef = CGContext(data: &intensities, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        contextRef?.draw(imageRef, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height)))
        pixelValues = intensities
    }
    // Return the optional as-is; force-unwrapping would crash when cgImage is nil.
    return pixelValues
}
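A quick usage sketch (my addition; "lut_64" and "photo" are placeholder asset names) showing the filter applied to an image:
// Apply the LUT-derived color cube to an input image.
if let filter = colorCubeFilterFromLUT(imageName: "lut_64"),
   let input = UIImage(named: "photo"),
   let ciInput = CIImage(image: input) {
    filter.setValue(ciInput, forKey: kCIInputImageKey)
    if let output = filter.outputImage {
        let context = CIContext()
        if let cgImage = context.createCGImage(output, from: output.extent) {
            let filtered = UIImage(cgImage: cgImage)
            // use `filtered`...
        }
    }
}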
