Swift 2.2 - Count Black Pixels in UIImage - ios

I need to count all the black pixels in a UIImage. I have found code that could work, but it is written in Objective-C. I have tried to convert it to Swift, but I get lots of errors and I cannot find a way to fix them.
What's the best way to do this in Swift?
Simple Image
Objective-C:
/**
* Structure to keep one pixel in RRRRRRRRGGGGGGGGBBBBBBBBAAAAAAAA format
*/
struct pixel {
    unsigned char r, g, b, a;
};
/**
* Process the image and return the number of pure red pixels in it.
*/
- (NSUInteger)processImage:(UIImage *)image
{
    NSUInteger numberOfRedPixels = 0;
    // Allocate a buffer big enough to hold all the pixels
    struct pixel *pixels = (struct pixel *)calloc(1, image.size.width * image.size.height * sizeof(struct pixel));
    if (pixels != NULL)
    {
        // Create a new bitmap
        CGContextRef context = CGBitmapContextCreate(
            (void *)pixels,
            image.size.width,
            image.size.height,
            8,
            image.size.width * 4,
            CGImageGetColorSpace(image.CGImage),
            kCGImageAlphaPremultipliedLast
        );
        if (context != NULL)
        {
            // Draw the image into the bitmap
            CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, image.size.width, image.size.height), image.CGImage);
            // Now that we have the image drawn in our own buffer, we can loop over the
            // pixels to process it. This simple case simply counts all pixels that have
            // a pure red component. There are probably more efficient and interesting
            // ways to do this, but the important part is that the pixels buffer can be
            // read directly.
            NSUInteger numberOfPixels = image.size.width * image.size.height;
            // Walk with a cursor so the original pointer can still be freed below.
            struct pixel *p = pixels;
            while (numberOfPixels > 0) {
                if (p->r == 255) {
                    numberOfRedPixels++;
                }
                p++;
                numberOfPixels--;
            }
            CGContextRelease(context);
        }
        free(pixels);
    }
    return numberOfRedPixels;
}
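In Swift, the per-pixel scan translates almost directly. Here is a minimal, self-contained sketch of just the counting loop, adapted to count black pixels as the question asks (the CGBitmapContext setup is omitted; `buffer` stands in for the RGBA8888 bytes the context drew into, and `countBlackPixels` is my name, not an API):

```swift
// Count pure-black pixels in a tightly packed RGBA8888 buffer, as produced by
// a bitmap context configured with kCGImageAlphaPremultipliedLast.
func countBlackPixels(in buffer: [UInt8], width: Int, height: Int) -> Int {
    var count = 0
    for pixelIndex in 0..<(width * height) {
        let offset = pixelIndex * 4
        // A pixel is "black" when all three color channels are zero.
        if buffer[offset] == 0 && buffer[offset + 1] == 0 && buffer[offset + 2] == 0 {
            count += 1
        }
    }
    return count
}

// 2x1 image: one black pixel, one red pixel.
let sample: [UInt8] = [0, 0, 0, 255,  255, 0, 0, 255]
print(countBlackPixels(in: sample, width: 2, height: 1)) // prints 1
```

The same loop works regardless of how the buffer was filled, as long as the layout really is 4 bytes per pixel with color before alpha.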

Much faster is to use Accelerate's vImageHistogramCalculation to get a histogram of the different channels in your image:
let img: CGImage = CIImage(image: image!)!.cgImage!
let imgProvider: CGDataProvider = img.dataProvider!
let imgBitmapData: CFData = imgProvider.data!
var imgBuffer = vImage_Buffer(data: UnsafeMutableRawPointer(mutating: CFDataGetBytePtr(imgBitmapData)),
                              height: vImagePixelCount(img.height),
                              width: vImagePixelCount(img.width),
                              rowBytes: img.bytesPerRow)
var alpha = [vImagePixelCount](repeating: 0, count: 256)
var red = [vImagePixelCount](repeating: 0, count: 256)
var green = [vImagePixelCount](repeating: 0, count: 256)
var blue = [vImagePixelCount](repeating: 0, count: 256)
let alphaPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: alpha)
let redPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: red)
let greenPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: green)
let bluePtr = UnsafeMutablePointer<vImagePixelCount>(mutating: blue)
var rgba: [UnsafeMutablePointer<vImagePixelCount>?] = [redPtr, greenPtr, bluePtr, alphaPtr]
let error = vImageHistogramCalculation_ARGB8888(&imgBuffer, &rgba, UInt32(kvImageNoFlags))
After this runs, alpha, red, green, and blue are histograms of the colors in your image. If red, green, and blue each have counts only in the 0th bucket, while alpha has counts only in the last bucket, your image is black.
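That check amounts to something like the following sketch, written over plain Swift arrays so it stands alone (`isAllBlack` is my name, not a vImage API; the real code would pass in the histograms filled in above):

```swift
// Decide "all black and fully opaque" from four 256-bucket channel histograms:
// every pixel must land in bucket 0 of R, G, and B, and in bucket 255 of alpha.
func isAllBlack(red: [UInt], green: [UInt], blue: [UInt], alpha: [UInt],
                pixelCount: UInt) -> Bool {
    return red[0] == pixelCount
        && green[0] == pixelCount
        && blue[0] == pixelCount
        && alpha[255] == pixelCount
}

// Hypothetical histograms for a 4-pixel, fully opaque, all-black image.
var r = [UInt](repeating: 0, count: 256); r[0] = 4
var g = [UInt](repeating: 0, count: 256); g[0] = 4
var b = [UInt](repeating: 0, count: 256); b[0] = 4
var a = [UInt](repeating: 0, count: 256); a[255] = 4
print(isAllBlack(red: r, green: g, blue: b, alpha: a, pixelCount: 4)) // prints true
```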
If you don't even want to check multiple arrays, you can use vImageMatrixMultiply to combine your different channels:
let readableMatrix: [[Int16]] = [
    [3, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 0, 0, 0],
    [0, 0, 0, 0]
]
var matrix = [Int16](repeating: 0, count: 16)
for i in 0...3 {
    for j in 0...3 {
        matrix[(3 - j) * 4 + (3 - i)] = readableMatrix[i][j]
    }
}
vImageMatrixMultiply_ARGB8888(&imgBuffer, &imgBuffer, matrix, 3, nil, nil, UInt32(kvImageNoFlags))
If you stick this in before the histogramming, your imgBuffer will be modified in place to average the RGB in each pixel, writing the average out to the B channel. As such, you can just check the blue histogram instead of all three.
(btw, the best description of vImageMatrixMultiply I've found is in the source code, like at https://github.com/phracker/MacOSX-SDKs/blob/2d31dd8bdd670293b59869335d9f1f80ca2075e0/MacOSX10.7.sdk/System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/Headers/Transform.h#L21)

I ran into a similar issue now, where I needed to determine if an image was 100% black. The following code will return the number of pure black pixels it finds in an image.
However, if you want to bump the threshold up, you can change the compare value, and allow it to tolerate a wider range of possible colors.
import UIKit

extension UIImage {
    var blackPixelCount: Int {
        var count = 0
        for x in 0..<Int(size.width) {
            for y in 0..<Int(size.height) {
                count += isPixelBlack(CGPoint(x: CGFloat(x), y: CGFloat(y))) ? 1 : 0
            }
        }
        return count
    }

    private func isPixelBlack(_ point: CGPoint) -> Bool {
        let pixelData = cgImage?.dataProvider?.data
        let pointerData: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo = Int((size.width * point.y) + point.x) * 4
        let maxValue: CGFloat = 255.0
        let compare: CGFloat = 0.01
        if (CGFloat(pointerData[pixelInfo]) / maxValue) > compare { return false }
        if (CGFloat(pointerData[pixelInfo + 1]) / maxValue) > compare { return false }
        if (CGFloat(pointerData[pixelInfo + 2]) / maxValue) > compare { return false }
        return true
    }
}
You call this with:
let count = image.blackPixelCount
The one caveat is that this is a very slow process, even on small images.


CGContext.init() -- NULL color space no longer allowed

TL;DR: In legacy Obj-C code, the color space param value was NULL. That is not allowed in the Swift equivalent. What value to use?
I have inherited code that reads:
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(
    pixel, 1, 1, 8, 1, NULL, (CGBitmapInfo)kCGImageAlphaOnly
);
The port to Swift 4 CGContext is straightforward, except for that NULL color space value. Using a plausible value, I am getting nil back from CGContext.init?(). My translation is:
var pixelValue = UInt8(0)
var pixel = Data(buffer: UnsafeBufferPointer(start: &pixelValue, count: 1))
let context = CGContext(
    data: &pixel,
    width: 1,
    height: 1,
    bitsPerComponent: 8,
    bytesPerRow: 1,
    space: CGColorSpace(name: CGColorSpace.genericRGBLinear)!,
    bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue
)! // Returns nil; unwrapping crashes
Q: What is the appropriate value for space? (The value I provide is not nil; it's the CGContext() call itself that returns nil.)
Setting the environment variable CGBITMAP_CONTEXT_LOG_ERRORS yields an error log like this:
Assertion failed: (0), function get_color_model_name,
file /BuildRoot/Library/Caches/com.apple.xbs/Sources/Quartz2D_Sim/
Quartz2D-1129.2.1/CoreGraphics/API/CGBitmapContextInfo.c, line 210.
For some more backstory, the context was used to find the alpha value of a single pixel in a UIImage in the following way:
unsigned char pixel[1] = {0};
CGContextRef context = CGBitmapContextCreate(pixel,1, 1, 8, 1, NULL, (CGBitmapInfo)kCGImageAlphaOnly);
UIGraphicsPushContext(context);
[image drawAtPoint:CGPointMake(-point.x, -point.y)];
UIGraphicsPopContext();
CGContextRelease(context);
CGFloat alpha = pixel[0]/255.0;
(I do have possible alternatives for finding alpha, but in the interest of leaving legacy code alone, would like to keep it this way.)
I recently worked on a similar topic; maybe this code sample will help someone:
let image = UIImage(named: "2.png")
guard let cgImage = image?.cgImage else {
    fatalError()
}
let width = cgImage.width
let height = cgImage.height
// CGColorSpaceCreateDeviceGray - 1 component, 8 bits
// i.e. 1 px = 1 byte
let bytesPerRow = width
let bitmapByteCount = width * height
let bitmapData: UnsafeMutablePointer<UInt8> = .allocate(capacity: bitmapByteCount)
defer {
    bitmapData.deallocate()
}
bitmapData.initialize(repeating: 0, count: bitmapByteCount)
guard let context = CGContext(data: bitmapData, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: bytesPerRow,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue) else {
    fatalError()
}
// Draw the image into the context
let rect = CGRect(x: 0, y: 0, width: width, height: height)
context.draw(cgImage, in: rect)
// Enumerate all pixels
for row in 0..<height {
    for col in 0..<width {
        let alphaValue = bitmapData[row * width + col]
        if alphaValue != 0 {
            // visible pixel
        }
    }
}
Here’s how to determine whether a pixel is transparent:
let info = CGImageAlphaInfo.alphaOnly.rawValue
let pixel = UnsafeMutablePointer<UInt8>.allocate(capacity: 1)
defer {
    pixel.deinitialize(count: 1)
    pixel.deallocate()
}
pixel[0] = 0
let sp = CGColorSpaceCreateDeviceGray()
let context = CGContext(data: pixel,
                        width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 1,
                        space: sp, bitmapInfo: info)!
UIGraphicsPushContext(context)
im.draw(at: CGPoint(x: -point.x, y: -point.y))
UIGraphicsPopContext()
let p = pixel[0]
let alpha = Double(p) / 255.0
let transparent = alpha < 0.01
For the record, here is how I wound up doing it. It hasn't (yet) misbehaved, so on the principle of "If it ain't broke, don't fix it" I'll leave it. (I have added self for clarity.) But you can be sure that I will paste Matt's code right in there, in case I need it in the future. Thanks Matt!
// Note that "self" is a UIImageView; "point" is the point under consideration.
let im = self.image!
// TODO: Why is this clamping necessary? We get points outside our size.
var x = point.x
var y = point.y
if x < 0 { x = 0 } else if x > im.size.width - 1 { x = im.size.width - 1 }
if y < 0 { y = 0 } else if y > im.size.height - 1 { y = im.size.height - 1 }
let screenWidth = self.bounds.width
let intrinsicWidth = im.size.width
x *= im.scale * intrinsicWidth/screenWidth
y *= im.scale * intrinsicWidth/screenWidth
let pixData = im.cgImage?.dataProvider?.data
let data = CFDataGetBytePtr(pixData!)
let pixIndex = Int(((Int(im.size.width*im.scale) * Int(y)) + Int(x)) * 4)
let r = data?[pixIndex]
let g = data?[pixIndex + 1]
let b = data?[pixIndex + 2]
let α = data?[pixIndex + 3]
let red = CGFloat(r!)/255
let green = CGFloat(g!)/255
let blue = CGFloat(b!)/255
let alpha = CGFloat(α!)/255

Swift 3 CGContext Memory Leak

I'm using a CGBitmapContext to convert color spaces to ARGB and get the pixel data values. I malloc space for the bitmap context and free it after I'm done, but I am still seeing a memory leak in Instruments. I'm probably doing something wrong, so any help would be appreciated.
Here is the ARGBBitmapContext function
func createARGBBitmapContext(width: Int, height: Int) -> CGContext {
    // Get image width, height
    let pixelsWide = width
    let pixelsHigh = height
    let bitmapBytesPerRow = pixelsWide * 4
    let bitmapByteCount = bitmapBytesPerRow * pixelsHigh
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    // Here is the malloc call that Instruments complains of
    let bitmapData = malloc(bitmapByteCount)
    let context = CGContext(data: bitmapData, width: pixelsWide, height: pixelsHigh,
                            bitsPerComponent: 8, bytesPerRow: bitmapBytesPerRow,
                            space: colorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
    // Do I need to free something here first?
    return context!
}
Here is where I use the context to retrieve all the pixel values as a list of UInt8s (and where the memory leak occurs):
extension UIImage {
    func ARGBPixelValues() -> [UInt8] {
        let width = Int(self.size.width)
        let height = Int(self.size.height)
        var pixels = [UInt8](repeating: 0, count: width * height * 3)
        let rect = CGRect(x: 0, y: 0, width: width, height: height)
        let context = createARGBBitmapContext(width: width, height: height)
        context.clear(rect)
        context.draw(self.cgImage!, in: rect)
        var location = 0
        if let data = context.data {
            while location < (width * height) {
                let arrOffset = 3 * location
                let offset = 4 * location
                let R = data.load(fromByteOffset: offset + 1, as: UInt8.self)
                let G = data.load(fromByteOffset: offset + 2, as: UInt8.self)
                let B = data.load(fromByteOffset: offset + 3, as: UInt8.self)
                pixels[arrOffset] = R
                pixels[arrOffset + 1] = G
                pixels[arrOffset + 2] = B
                location += 1
            }
            free(context.data) // Free the data consumed, perhaps this isn't right?
        }
        return pixels
    }
}
Instruments reports a malloc error of 1.48 MiB, which is right for my image size (540 x 720). I free the data, but apparently that is not right.
I should mention that I know you can pass nil to the CGContext initializer (and it will manage memory itself), but I'm more curious why using malloc creates an issue. Is there something more I should know? (I'm more familiar with Obj-C.)
Because CoreGraphics is not handled by ARC (like all other C libraries), you need to wrap your code in an autoreleasepool, even in Swift. This applies particularly if you are not on the main thread (which you should not be, if CoreGraphics is involved; .userInitiated or lower is appropriate).
func myFunc() {
    for _ in 0 ..< makeMoneyFast {
        autoreleasepool {
            // Create CGImageRef etc...
            // Do stuff... whir... whiz... PROFIT!
        }
    }
}
For those that care, your Objective-C should also be wrapped like:
BOOL result = NO;
NSMutableData *data = [[NSMutableData alloc] init];
@autoreleasepool {
    CGImageRef image = [self CGImageWithResolution:dpi
                                          hasAlpha:hasAlpha
                                     relativeScale:scale];
    NSAssert(image != nil, @"could not create image for TIFF export");
    if (image == nil)
        return nil;
    CGImageDestinationRef destRef = CGImageDestinationCreateWithData((CFMutableDataRef)data, kUTTypeTIFF, 1, NULL);
    CGImageDestinationAddImage(destRef, image, (CFDictionaryRef)options);
    result = CGImageDestinationFinalize(destRef);
    CFRelease(destRef);
}
if (result) {
    return [data copy];
} else {
    return nil;
}
See this answer for details.
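Separately from the autoreleasepool point, it helps to pair each allocation with its matching release on every exit path: data obtained from malloc is released with free, and `defer` guarantees that even on early returns. A small standalone sketch of the pattern (the sizes match the 540 x 720 image mentioned above, but are otherwise illustrative):

```swift
import Foundation

// Allocate a bitmap-sized buffer with malloc and guarantee the matching
// free() on every exit path using defer.
func pixelBufferDemo() -> Int {
    let width = 540, height = 720
    let byteCount = width * height * 4
    guard let bitmapData = malloc(byteCount) else { return 0 }
    defer { free(bitmapData) }   // runs even on early returns
    memset(bitmapData, 0, byteCount)
    return byteCount
}

print(pixelBufferDemo()) // prints 1555200
```

Note that 1,555,200 bytes is about 1.48 MiB, which is exactly the allocation Instruments flags in the question.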

Image comparison to identify and map identical pixels

I'm building this for iOS using Swift — either via CoreImage or GPUImage, but if I can build it in Python or Node/JavaScript, that'd work too. Feel free to answer abstractly, or in a different language entirely — I'll accept any answer that roughly describes how I might go about accomplishing this.
Consider the following two "images": two fabricated 3x3-pixel grids, each representing an image of 9 pixels.
Let's assume I process the original image (left) with a shader that changes the color of some, but not all of the pixels. The resulting image on the right is the same, but for 3 pixels — #2, #3, and #6:
I'm trying to find a means of comparing all of the pixels in both images and logging the x,y position of pixels that haven't changed during the filter process. In this case, when comparing the left to right, I'd need to know that #1, #4, #5, #7, #8, and #9 remained unchanged.
Assuming your images before and after are the same size, all you need to do is loop through each pixel and compare them, which you can do with a pointer. I certainly don't claim this is the fastest method, but it should work. (Note that you can compare all 32 bits at once with a UInt32 pointer, but I am doing it byte-wise just to illustrate where the RGBA values are, in case you need them.) Also note that because Quartz was written for the Mac, which uses Cartesian coordinates while iOS and UIKit do not, it's possible your data is upside down (mirrored around the X-axis). You will have to check; it depends on how the internal bitmap is represented.
func difference(leftImage: UIImage, rightImage: UIImage) {
    let width = Int(leftImage.size.width)
    let height = Int(leftImage.size.height)
    guard leftImage.size == rightImage.size else {
        return
    }
    if let cfData1: CFData = leftImage.cgImage?.dataProvider?.data,
        let l = CFDataGetBytePtr(cfData1),
        let cfData2: CFData = rightImage.cgImage?.dataProvider?.data,
        let r = CFDataGetBytePtr(cfData2) {
        let bytesPerPixel = 4
        let firstPixel = 0
        let lastPixel = (width * height - 1) * bytesPerPixel
        let range = stride(from: firstPixel, through: lastPixel, by: bytesPerPixel)
        for pixelAddress in range {
            if l.advanced(by: pixelAddress).pointee != r.advanced(by: pixelAddress).pointee ||         // Red
               l.advanced(by: pixelAddress + 1).pointee != r.advanced(by: pixelAddress + 1).pointee || // Green
               l.advanced(by: pixelAddress + 2).pointee != r.advanced(by: pixelAddress + 2).pointee || // Blue
               l.advanced(by: pixelAddress + 3).pointee != r.advanced(by: pixelAddress + 3).pointee {  // Alpha
                print(pixelAddress)
                // do stuff here
            }
        }
    }
}
If you need a faster method, write a shader that deltas each pixel and writes the result out to a texture. Any pixels that are not clear black (i.e. 0,0,0,0) in the output are different between the images. Shaders are not my area of expertise, so I will leave it to someone else to write one. Also, on some architectures it's expensive to read back from graphics memory, so you will have to test whether this is really better than doing it in main memory (it may also depend on image size, because you have to amortize the setup cost of the textures and shaders).
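The byte-wise loop above can also be tightened into the 32-bit-at-a-time comparison the answer mentions. Here is a standalone sketch over plain byte arrays, with CoreGraphics omitted (`differingPixelIndices` is a made-up helper name):

```swift
// Compare two RGBA8888 buffers pixel-at-a-time by reinterpreting each
// 4-byte pixel as a single UInt32, collecting the indices of pixels
// that differ in any channel.
func differingPixelIndices(_ left: [UInt8], _ right: [UInt8]) -> [Int] {
    precondition(left.count == right.count && left.count % 4 == 0)
    var diffs: [Int] = []
    left.withUnsafeBytes { lRaw in
        right.withUnsafeBytes { rRaw in
            let l = lRaw.bindMemory(to: UInt32.self)
            let r = rRaw.bindMemory(to: UInt32.self)
            for i in 0..<l.count where l[i] != r[i] {
                diffs.append(i)
            }
        }
    }
    return diffs
}

// Two 2-pixel buffers differing only in the second pixel's green byte.
let a: [UInt8] = [0, 0, 0, 255,  10, 20, 30, 255]
let b: [UInt8] = [0, 0, 0, 255,  10, 21, 30, 255]
print(differingPixelIndices(a, b)) // prints [1]
```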
I use another option: a slightly modified version of Facebook's snapshot-comparison code.
The original code is here.
func compareWithImage(_ referenceImage: UIImage, tolerance: CGFloat = 0) -> Bool {
    guard size.equalTo(referenceImage.size) else {
        return false
    }
    guard let cgImage = cgImage, let referenceCGImage = referenceImage.cgImage else {
        return false
    }
    let minBytesPerRow = min(cgImage.bytesPerRow, referenceCGImage.bytesPerRow)
    let referenceImageSizeBytes = Int(referenceImage.size.height) * minBytesPerRow
    let imagePixelsData = UnsafeMutablePointer<Pixel>.allocate(capacity: cgImage.width * cgImage.height)
    let referenceImagePixelsData = UnsafeMutablePointer<Pixel>.allocate(capacity: cgImage.width * cgImage.height)
    defer {
        // Memory from UnsafeMutablePointer.allocate is paired with
        // deallocate(), not free(); defer also covers the early returns.
        imagePixelsData.deallocate()
        referenceImagePixelsData.deallocate()
    }
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue & CGBitmapInfo.alphaInfoMask.rawValue)
    guard let colorSpace = cgImage.colorSpace, let referenceColorSpace = referenceCGImage.colorSpace else { return false }
    guard let imageContext = CGContext(data: imagePixelsData, width: cgImage.width, height: cgImage.height, bitsPerComponent: cgImage.bitsPerComponent, bytesPerRow: minBytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) else { return false }
    guard let referenceImageContext = CGContext(data: referenceImagePixelsData, width: referenceCGImage.width, height: referenceCGImage.height, bitsPerComponent: referenceCGImage.bitsPerComponent, bytesPerRow: minBytesPerRow, space: referenceColorSpace, bitmapInfo: bitmapInfo.rawValue) else { return false }
    imageContext.draw(cgImage, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    referenceImageContext.draw(referenceCGImage, in: CGRect(x: 0, y: 0, width: referenceImage.size.width, height: referenceImage.size.height))
    var imageEqual = true
    // Do a fast compare if we can
    if tolerance == 0 {
        imageEqual = memcmp(imagePixelsData, referenceImagePixelsData, referenceImageSizeBytes) == 0
    } else {
        // Go through each pixel in turn and see if it is different
        let pixelCount = referenceCGImage.width * referenceCGImage.height
        let imagePixels = UnsafeMutableBufferPointer<Pixel>(start: imagePixelsData, count: cgImage.width * cgImage.height)
        let referenceImagePixels = UnsafeMutableBufferPointer<Pixel>(start: referenceImagePixelsData, count: referenceCGImage.width * referenceCGImage.height)
        var numDiffPixels = 0
        for i in 0..<pixelCount {
            // If this pixel is different, increment the pixel diff count and see
            // if we have hit our limit.
            let p1 = imagePixels[i]
            let p2 = referenceImagePixels[i]
            if p1.value != p2.value {
                numDiffPixels += 1
                let percent = CGFloat(numDiffPixels) / CGFloat(pixelCount)
                if percent > tolerance {
                    imageEqual = false
                    break
                }
            }
        }
    }
    return imageEqual
}
struct Pixel {
    var value: UInt32
    var red: UInt8 {
        get { return UInt8(value & 0xFF) }
        set { value = UInt32(newValue) | (value & 0xFFFFFF00) }
    }
    var green: UInt8 {
        get { return UInt8((value >> 8) & 0xFF) }
        set { value = (UInt32(newValue) << 8) | (value & 0xFFFF00FF) }
    }
    var blue: UInt8 {
        get { return UInt8((value >> 16) & 0xFF) }
        set { value = (UInt32(newValue) << 16) | (value & 0xFF00FFFF) }
    }
    var alpha: UInt8 {
        get { return UInt8((value >> 24) & 0xFF) }
        set { value = (UInt32(newValue) << 24) | (value & 0x00FFFFFF) }
    }
}
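The accessors assume the red byte is the lowest-order byte of `value` (RGBA byte order read on a little-endian device). A quick standalone check of that packing, reimplementing just the shifts the setters perform:

```swift
import Foundation

// Pack RGBA channel bytes into a UInt32 the same way the Pixel struct's
// setters do: R in the low byte, A in the high byte.
func pack(r: UInt8, g: UInt8, b: UInt8, a: UInt8) -> UInt32 {
    return UInt32(r) | (UInt32(g) << 8) | (UInt32(b) << 16) | (UInt32(a) << 24)
}

let opaqueRed = pack(r: 255, g: 0, b: 0, a: 255)
print(String(format: "0x%08X", opaqueRed)) // prints 0xFF0000FF
```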

GLKit texture jagged

I'm trying to implement GLKit texture painting, and it looks too jagged.
I create the texture from a .png; I got this code from Apple's GLPaint example:
private func texture(fromName name: String) -> textureInfo_t {
    var texId: GLuint = 0
    var texture: textureInfo_t = (0, 0, 0)
    let brushImage = UIImage(named: name)!.cgImage!
    let width: size_t = brushImage.width
    let height: size_t = brushImage.height
    var brushData = [GLubyte](repeating: 0, count: width * height * 4)
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
    let brushContext = CGContext(data: &brushData, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: brushImage.colorSpace!, bitmapInfo: bitmapInfo)
    brushContext?.draw(brushImage, in: CGRect(x: 0.0, y: 0.0, width: width.g, height: height.g))
    glGenTextures(1, &texId)
    // Bind the texture name.
    glBindTexture(GL_TEXTURE_2D.ui, texId)
    // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
    glTexParameteri(GL_TEXTURE_2D.ui, GL_TEXTURE_MIN_FILTER.ui, GL_LINEAR)
    // Specify a 2D texture image, providing a pointer to the image data in memory
    glTexImage2D(GL_TEXTURE_2D.ui, 0, GL_RGBA, width.i, height.i, 0, GL_RGBA.ui, GL_UNSIGNED_BYTE.ui, brushData)
    // Release the image data; it's no longer needed
    texture.id = texId
    texture.width = width.i
    texture.height = height.i
    return texture
}
and rendering while painting
private func renderLine(from _start: CGPoint, to _end: CGPoint) {
    struct Static {
        static var vertexBuffer: [GLfloat] = []
    }
    var count = 0
    EAGLContext.setCurrent(context)
    glBindFramebuffer(GL_FRAMEBUFFER.ui, viewFramebuffer)
    // Convert locations from points to pixels
    let scale = self.contentScaleFactor
    var start = _start
    start.x *= scale
    start.y *= scale
    var end = _end
    end.x *= scale
    end.y *= scale
    // Allocate vertex array buffer
    // Add points to the buffer so there are drawing points every X pixels
    count = max(Int(ceilf(sqrtf((end.x - start.x).f * (end.x - start.x).f + (end.y - start.y).f * (end.y - start.y).f) / kBrushPixelStep.f)), 1)
    Static.vertexBuffer.reserveCapacity(count * 2)
    Static.vertexBuffer.removeAll(keepingCapacity: true)
    for i in 0..<count {
        Static.vertexBuffer.append(start.x.f + (end.x - start.x).f * (i.f / count.f))
        Static.vertexBuffer.append(start.y.f + (end.y - start.y).f * (i.f / count.f))
    }
    // Load data into the vertex buffer object
    glBindBuffer(GL_ARRAY_BUFFER.ui, vboId)
    glBufferData(GL_ARRAY_BUFFER.ui, count * 2 * MemoryLayout<GLfloat>.size, Static.vertexBuffer, GL_DYNAMIC_DRAW.ui)
    glEnableVertexAttribArray(ATTRIB_VERTEX.ui)
    glVertexAttribPointer(ATTRIB_VERTEX.ui, 2, GL_FLOAT.ui, GL_FALSE.ub, 0, nil)
    // Draw
    glUseProgram(program[PROGRAM_POINT].id)
    glDrawArrays(GL_POINTS.ui, 0, count.i)
    // Display the buffer
    glBindRenderbuffer(GL_RENDERBUFFER.ui, viewRenderbuffer)
    context.presentRenderbuffer(GL_RENDERBUFFER.l)
}
How can I improve the texture quality?
UPDATE:
Even with a bigger-resolution .png the result is the same. What am I doing wrong?
Here is the 1024x1024 .png with a transparent background that I'm using:

Get CIColorCube Filter Working In Swift

I am trying to get the CIColorCube filter working. However, Apple's documentation only provides a poorly explained reference example here:
// Allocate memory
const unsigned int size = 64;
float *cubeData = (float *)malloc(size * size * size * sizeof(float) * 4);
float rgb[3], hsv[3], *c = cubeData;
// Populate cube with a simple gradient going from 0 to 1
for (int z = 0; z < size; z++) {
    rgb[2] = ((double)z)/(size-1); // Blue value
    for (int y = 0; y < size; y++) {
        rgb[1] = ((double)y)/(size-1); // Green value
        for (int x = 0; x < size; x++) {
            rgb[0] = ((double)x)/(size-1); // Red value
            // Convert RGB to HSV
            // You can find publicly available rgbToHSV functions on the Internet
            rgbToHSV(rgb, hsv);
            // Use the hue value to determine which to make transparent
            // The minimum and maximum hue angle depends on
            // the color you want to remove
            float alpha = (hsv[0] > minHueAngle && hsv[0] < maxHueAngle) ? 0.0f : 1.0f;
            // Calculate premultiplied alpha values for the cube
            c[0] = rgb[0] * alpha;
            c[1] = rgb[1] * alpha;
            c[2] = rgb[2] * alpha;
            c[3] = alpha;
            c += 4; // advance our pointer into memory for the next color value
        }
    }
}
// Create memory with the cube data
NSData *data = [NSData dataWithBytesNoCopy:cubeData
                                    length:cubeDataSize
                              freeWhenDone:YES];
CIFilter *colorCube = [CIFilter filterWithName:@"CIColorCube"];
[colorCube setValue:@(size) forKey:@"inputCubeDimension"];
// Set data for cube
[colorCube setValue:data forKey:@"inputCubeData"];
So I have attempted to translate this over to Swift with the following:
var filter = CIFilter(name: "CIColorCube")
filter.setValue(ciImage, forKey: kCIInputImageKey)
filter.setDefaults()
var size: UInt = 64
var floatSize = UInt(sizeof(Float))
var cubeDataSize:size_t = size * size * size * floatSize * 4
var colorCubeData:Array<Float> = [
0,0,0,1,
0,0,0,1,
0,0,0,1,
0,0,0,1,
0,0,0,1,
0,0,0,1,
0,0,0,1,
0,0,0,1
]
var cubeData:NSData = NSData(bytesNoCopy: colorCubeData, length: cubeDataSize)
However I get an error when trying to create the cube data:
"Extra argument 'bytesNoCopy' in call"
Basically I am creating the cubeData wrong. Can you advise me on how to properly create the cubeData object in Swift?
Thanks!
Looks like you are after the chroma key filter recipe described here. Here's some code that works. You get a filter for the color you want to make transparent, described by its HSV angle:
func RGBtoHSV(r: Float, g: Float, b: Float) -> (h: Float, s: Float, v: Float) {
    var h: CGFloat = 0
    var s: CGFloat = 0
    var v: CGFloat = 0
    let col = UIColor(red: CGFloat(r), green: CGFloat(g), blue: CGFloat(b), alpha: 1.0)
    col.getHue(&h, saturation: &s, brightness: &v, alpha: nil)
    return (Float(h), Float(s), Float(v))
}
func colorCubeFilterForChromaKey(hueAngle: Float) -> CIFilter {
    let hueRange: Float = 60 // degrees size pie shape that we want to replace
    let minHueAngle: Float = (hueAngle - hueRange / 2.0) / 360
    let maxHueAngle: Float = (hueAngle + hueRange / 2.0) / 360
    let size = 64
    var cubeData = [Float](repeating: 0, count: size * size * size * 4)
    var rgb: [Float] = [0, 0, 0]
    var hsv: (h: Float, s: Float, v: Float)
    var offset = 0
    for z in 0 ..< size {
        rgb[2] = Float(z) / Float(size) // blue value
        for y in 0 ..< size {
            rgb[1] = Float(y) / Float(size) // green value
            for x in 0 ..< size {
                rgb[0] = Float(x) / Float(size) // red value
                hsv = RGBtoHSV(r: rgb[0], g: rgb[1], b: rgb[2])
                // the condition checking hsv.s may need to be removed for your use case
                let alpha: Float = (hsv.h > minHueAngle && hsv.h < maxHueAngle && hsv.s > 0.5) ? 0 : 1.0
                cubeData[offset] = rgb[0] * alpha
                cubeData[offset + 1] = rgb[1] * alpha
                cubeData[offset + 2] = rgb[2] * alpha
                cubeData[offset + 3] = alpha
                offset += 4
            }
        }
    }
    let b = cubeData.withUnsafeBufferPointer { Data(buffer: $0) }
    let data = b as NSData
    let colorCube = CIFilter(name: "CIColorCube", withInputParameters: [
        "inputCubeDimension": size,
        "inputCubeData": data
    ])
    return colorCube!
}
Then to get your filter call
let chromaKeyFilter = colorCubeFilterForChromaKey(hueAngle: 120)
I used 120 for your standard green screen.
I believe you want to use NSData(bytes: UnsafePointer<Void>, length: Int) instead of NSData(bytesNoCopy: UnsafeMutablePointer<Void>, length: Int). Make that change and calculate the length in the following way and you should be up and running.
let colorCubeData: [Float] = [
    0, 0, 0, 1,
    1, 0, 0, 1,
    0, 1, 0, 1,
    1, 1, 0, 1,
    0, 0, 1, 1,
    1, 0, 1, 1,
    0, 1, 1, 1,
    1, 1, 1, 1
]
let cubeData = NSData(bytes: colorCubeData, length: colorCubeData.count * sizeof(Float))
