Swift Image Processing: Calculate Texture of UIImage as 1 Double - ios

I'm writing an iOS app for image processing and I'm currently developing a method to extract the texture of a UIImage. The method takes a UIImage as input and gives one value (a Double) as output. The higher the texture, the higher the output value. That way, input images can be classified from low texture to high texture. Please see Photo 1 to understand the difference between a textured and a smooth image.
Photo 1: Images with high vs low texture
My method is based on the statistical definition of texture (see this Purdue University presentation for more info on texture: http://www.cyto.purdue.edu/cdroms/micro2/content/education/wirth06.pdf).
Photo 2: Definition of pixel, square, input image
The input images measure exactly 600 x 600 px. To determine the overall texture of an image, I divide it into squares of 3 x 3 pixels. For each pixel in a square, I calculate the grayscale value. After that, the minimum and maximum grayscale values are found for each square, and the difference is defined as maximum - minimum. This difference is calculated for every 3 x 3 square in the image, and the sum of all those differences gives the 'Texture' (i.e. Texture = Σ over all squares of (max - min)). The higher this sum of differences, the higher the texture in the image. Please see Photo 2 for more info.
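In other words, for one square the computation boils down to this (a sketch; gray is a hypothetical row-major array of grayscale values in 0...1 covering the whole 600 x 600 image):

// Difference (max - min) of the 3x3 square whose top-left pixel is (x0, y0).
func squareDifference(gray: [Double], width: Int, x0: Int, y0: Int) -> Double {
    var minValue = 1.0
    var maxValue = 0.0
    for y in y0 ..< y0 + 3 {
        for x in x0 ..< x0 + 3 {
            let v = gray[y * width + x]
            minValue = min(minValue, v)
            maxValue = max(maxValue, v)
        }
    }
    return maxValue - minValue
}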
First, a pixel, a square and an image are defined in Swift:
struct Pixel {
    var value: UInt32

    var red: UInt8 {
        get { return UInt8(value & 0xFF) }
        set { value = UInt32(newValue) | (value & 0xFFFFFF00) }
    }
    var green: UInt8 {
        get { return UInt8((value >> 8) & 0xFF) }
        set { value = (UInt32(newValue) << 8) | (value & 0xFFFF00FF) }
    }
    var blue: UInt8 {
        get { return UInt8((value >> 16) & 0xFF) }
        set { value = (UInt32(newValue) << 16) | (value & 0xFF00FFFF) }
    }
    var alpha: UInt8 {
        get { return UInt8((value >> 24) & 0xFF) }
        set { value = (UInt32(newValue) << 24) | (value & 0x00FFFFFF) }
    }
}
struct Square {
    var pixels: UnsafeMutableBufferPointer<Pixel>
    let width: Int = 3
    let height: Int = 3

    init?(zeroX: Int, zeroY: Int) {
        //let coordinates = CGPoint(x: zeroX, y: zeroY)
        let bitsPerComponent = 8
        let bytesPerPixel = 4
        let bytesPerRow = width * bytesPerPixel
        let squareData = UnsafeMutablePointer<Pixel>.allocate(capacity: width * height)
        // The 4-byte Pixel layout corresponds to an RGBA context; with a gray colorspace, CGContext creation fails.
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedLast.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        guard let imageContext = CGContext(data: squareData, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }
        // Still to do: draw the 3x3 region of the source image whose top-left pixel is (zeroX, zeroY), e.g.
        //imageContext.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        pixels = UnsafeMutableBufferPointer<Pixel>(start: squareData, count: width * height)
    }
}
struct RGBA {
    var pixels: UnsafeMutableBufferPointer<Pixel>
    var width: Int
    var height: Int

    init?(image: UIImage) {
        guard let cgImage = image.cgImage else { return nil } // 1
        width = Int(image.size.width)
        height = Int(image.size.height)
        let bitsPerComponent = 8 // 2
        let bytesPerPixel = 4
        let bytesPerRow = width * bytesPerPixel
        let imageData = UnsafeMutablePointer<Pixel>.allocate(capacity: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB() // 3
        var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.premultipliedLast.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
        guard let imageContext = CGContext(data: imageData, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }
        imageContext.draw(cgImage, in: CGRect(origin: .zero, size: image.size))
        pixels = UnsafeMutableBufferPointer<Pixel>(start: imageData, count: width * height)
    }
}
Now that these three types are defined, a method is needed to calculate the difference value of a single square:
func getSquareDifference(xValue: Int, yValue: Int) -> Double {
    guard let square = Square(zeroX: xValue, zeroY: yValue) else { return 0 }
    var whiteValues = [Double]()
    for y in 0 ..< square.height {
        for x in 0 ..< square.width {
            var white: CGFloat = 0.0
            var alpha: CGFloat = 0.0
            let index = y * square.width + x
            let pixel = square.pixels[index]
            // UIColor expects components in 0...1, so the 0...255 channel values are divided by 255.
            let pixelColor = UIColor(red: CGFloat(pixel.red) / 255.0, green: CGFloat(pixel.green) / 255.0, blue: CGFloat(pixel.blue) / 255.0, alpha: CGFloat(pixel.alpha) / 255.0)
            _ = pixelColor.getWhite(&white, alpha: &alpha)
            whiteValues.append(Double(white))
        }
    }
    guard let min = whiteValues.min(), let max = whiteValues.max() else { return 0 }
    return max - min
}
Photo 3: For-loop iteration over all 40,000 squares
Finally, the program needs to run this getSquareDifference(x, y) method for every square in the image. The first square starts at (0, 0), then it shifts to the square at (3, 0), and so on until the first horizontal line is finished (200 squares). Then it takes the square at (0, 3) and does the same thing for the second line, and so on for all the vertical lines (200 squares). Please see Photo 3 for more info.
func determineTexture(inputImage: UIImage) -> Double {
    let rgba = RGBA(image: inputImage)
    var texture: Double = 0.0
    for y in 0..<rgba!.height {
        for x in 0..<rgba!.width {
            // For all squares, calculate the difference value and add it to the Double 'texture'
            // let difference = square.getSquareDifference(x, y)
            // texture += difference
        }
    }
    return texture
}
As you can see, this method is not finished yet. There are still some problems with:
struct Square: how to define the square whose upper-left pixel is at (x, y)?
func getSquareDifference(x, y): not yet tested, because of the remaining problems with struct Square.
func determineTexture(inputImage): how to make a for-loop that calculates the 'difference value' of a square, adds that value to 'var texture: Double', shifts to the next square, and repeats this process until the difference values of all 40,000 squares (200 x 200) are calculated (a sketch follows below).
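For that last point, a stride-based loop over the square origins could look like this (a sketch, untested; it assumes getSquareDifference is reworked to read pixels from the image's RGBA buffer, so the signature getSquareDifference(rgba:xValue:yValue:) used here is hypothetical):

// Walk the 200 x 200 grid of 3x3 squares in a 600 x 600 image.
func determineTexture(inputImage: UIImage) -> Double {
    guard let rgba = RGBA(image: inputImage) else { return 0 }
    var texture = 0.0
    // Step by 3 so each iteration lands on the top-left pixel of one square.
    for y in stride(from: 0, to: rgba.height - 2, by: 3) {
        for x in stride(from: 0, to: rgba.width - 2, by: 3) {
            texture += getSquareDifference(rgba: rgba, xValue: x, yValue: y)
        }
    }
    return texture
}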
It would be awesome if someone knows a possible solution to these problems! If this method works, it would be useful for everyone who needs image classification in his/her app.
*Note: if you want to make sure every image measures exactly 600 x 600 px, I would recommend the following method (sd_resizedImage(with:scaleMode:) comes from the SDWebImage library):
let compressedInputImage = inputImage.sd_resizedImage(with: CGSize(width: 600, height: 600), scaleMode: .aspectFill)

Related

CIFilter color cube data loading

I have around 50 3D LUTs (stored as PNG images, each 900KB in size) and use the CIColorCube filter to generate a filtered image. I use UICollectionView to display the filtered thumbnails (100x100) for each LUT (like in the Photos app). The problem is that UICollectionView scrolling becomes extremely slow (nowhere close to the smoothness of the Photos app) when I generate the filtered images as the user scrolls. I thought of pre-generating the filtered images, but the problem is that it takes around 150 milliseconds to generate cubeData from a LUT PNG, so for 50 thumbnails it takes around 7-8 seconds to prepare the filtered thumbnails, which is too long. And this is exactly the culprit for the scrolling performance as well. I am wondering what I can do to make it smooth like the Photos app or other photo editing apps. Here is my code to generate cube data from a LUT PNG. I believe the fix is more of a Core Image/Metal trick than a UIKit/DispatchQueue/NSOperation-based fix.
public static func colorCubeDataFromLUTPNGImage(_ image: UIImage, lutSize: Int) -> Data? {
    let size = lutSize
    let lutImage = image.cgImage!
    let lutWidth = lutImage.width
    let lutHeight = lutImage.height
    let rowCount = lutHeight / size
    let columnCount = lutWidth / size
    if (lutWidth % size != 0) || (lutHeight % size != 0) || (rowCount * columnCount != size) {
        NSLog("Invalid colorLUT")
        return nil
    }
    let bitmap = getBytesFromImage(image: image)!
    let floatSize = MemoryLayout<Float>.size
    // capacity is measured in Floats (size^3 entries x 4 components); the byte count below multiplies by floatSize
    let cubeData = UnsafeMutablePointer<Float>.allocate(capacity: size * size * size * 4)
    var z = 0
    var bitmapOffset = 0
    for _ in 0 ..< rowCount {
        for y in 0 ..< size {
            let tmp = z
            for _ in 0 ..< columnCount {
                for x in 0 ..< size {
                    let alpha = Float(bitmap[bitmapOffset]) / 255.0
                    let red = Float(bitmap[bitmapOffset + 1]) / 255.0
                    let green = Float(bitmap[bitmapOffset + 2]) / 255.0
                    let blue = Float(bitmap[bitmapOffset + 3]) / 255.0
                    let dataOffset = (z * size * size + y * size + x) * 4
                    cubeData[dataOffset + 3] = alpha
                    cubeData[dataOffset + 2] = red
                    cubeData[dataOffset + 1] = green
                    cubeData[dataOffset + 0] = blue
                    bitmapOffset += 4
                }
                z += 1
            }
            z = tmp
        }
        z += columnCount
    }
    let colorCubeData = Data(bytesNoCopy: cubeData, count: size * size * size * 4 * floatSize, deallocator: Data.Deallocator.free)
    return colorCubeData
}
fileprivate static func getBytesFromImage(image: UIImage?) -> [UInt8]? {
    var pixelValues: [UInt8]?
    if let imageRef = image?.cgImage {
        let width = imageRef.width
        let height = imageRef.height
        let bitsPerComponent = 8
        let bytesPerRow = width * 4
        let totalBytes = height * bytesPerRow
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var intensities = [UInt8](repeating: 0, count: totalBytes)
        let contextRef = CGContext(data: &intensities, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
        contextRef?.draw(imageRef, in: CGRect(x: 0.0, y: 0.0, width: CGFloat(width), height: CGFloat(height)))
        pixelValues = intensities
    }
    // return the optional instead of force-unwrapping, so a nil image doesn't crash
    return pixelValues
}
And here is my code for UICollectionViewCell setup:
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let lutPath = self.lutPaths[indexPath.item]
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FilterCell", for: indexPath) as! FilterCell
    if let lutImage = UIImage(contentsOfFile: lutPath) {
        let renderer = CIFilter(name: "CIColorCube")!
        let lutData = ColorCubeHelper.colorCubeDataFromLUTPNGImage(lutImage, lutSize: 64)
        renderer.setValue(lutData!, forKey: "inputCubeData")
        renderer.setValue(64, forKey: "inputCubeDimension")
        renderer.setValue(inputCIImage, forKey: kCIInputImageKey)
        let outputImage = renderer.outputImage!
        let cgImage = self.ciContext.createCGImage(outputImage, from: outputImage.extent)!
        cell.configure(image: UIImage(cgImage: cgImage))
    } else {
        NSLog("LUT not found at \(indexPath.item)")
    }
    return cell
}
Here is, I think, a more efficient way to get the bytes out of the LUT image with the correct component ordering, while staying within Core Image all the way through until the image is rendered to the screen.
guard let image = CIImage(contentsOf: url) else {
    return
}
let totalPixels = Int(image.extent.width) * Int(image.extent.height)
let pixelData = UnsafeMutablePointer<simd_float4>.allocate(capacity: totalPixels)
// [.workingColorSpace: NSNull()] below is important.
// We don't want any color conversion when rendering pixels to the buffer.
let context = CIContext(options: [.workingColorSpace: NSNull()])
context.render(image,
               toBitmap: pixelData,
               rowBytes: Int(image.extent.width) * MemoryLayout<simd_float4>.size,
               bounds: image.extent,
               format: .RGBAf, // Float32 per component, in that order
               colorSpace: nil)
let dimension = cbrt(Double(totalPixels))
let data = Data(bytesNoCopy: pixelData, count: totalPixels * MemoryLayout<simd_float4>.size, deallocator: .free)
The assumption that Core Image uses the BGRA format is incorrect. Core Image uses RGBA color formats (RGBA8, RGBAf, RGBAh and so forth). The CIColorCube lookup table is laid out in BGR order, but the colors themselves are in RGBAf format, where each component is represented by a 32-bit floating point number.
Of course, for the code above to work, the LUT image has to be laid out in a certain way. Here is an example of an identity LUT PNG: Identity LUT
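For completeness, here is a sketch of feeding the rendered buffer to the filter (inputCIImage is the image being filtered, as in the question's code; dimension comes from the cbrt computation above and is assumed to work out to a whole number such as 64):

let colorCube = CIFilter(name: "CIColorCube")!
colorCube.setValue(Int(dimension), forKey: "inputCubeDimension")
colorCube.setValue(data, forKey: "inputCubeData")
colorCube.setValue(inputCIImage, forKey: kCIInputImageKey)
let filteredImage = colorCube.outputImage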
BTW, please check out this app: https://apps.apple.com/us/app/filter-magic/id1594986951. Fresh off the press. It doesn't have the feature of loading the lookup table from a LUT png (yet), but it has a host of other useful features and provides a rich playground to experiment with every single filter out there.
We found that you can render the LUT image directly into a float-based context to get the format that is needed by CIColorCube:
// render LUT into a 32-bit float context, since that's the data format needed by CIColorCube
let pixelData = UnsafeMutablePointer<simd_float4>.allocate(capacity: lutImage.width * lutImage.height)
let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.floatComponents.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
guard let bitmapContext = CGContext(data: pixelData,
                                    width: lutImage.width,
                                    height: lutImage.height,
                                    bitsPerComponent: MemoryLayout<simd_float4.Scalar>.size * 8,
                                    bytesPerRow: MemoryLayout<simd_float4>.size * lutImage.width,
                                    space: lutImage.colorSpace ?? CGColorSpace(name: CGColorSpace.sRGB)!,
                                    bitmapInfo: bitmapInfo)
else {
    assertionFailure("Failed to create bitmap context for conversion")
    return nil
}
bitmapContext.draw(lutImage, in: CGRect(x: 0, y: 0, width: lutImage.width, height: lutImage.height))
let lutData = Data(bytesNoCopy: pixelData, count: bitmapContext.bytesPerRow * bitmapContext.height, deallocator: .free)
However, if I remember correctly, we had to swap the red and blue components in our LUT images since Core Image uses the BGRA format (as you do in your code as well).
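If you do find you need that swap, it can be done in place on the float buffer before wrapping it in Data (a sketch, using pixelData and lutImage from the snippet above):

// Swap R and B in every RGBA float pixel.
for i in 0 ..< lutImage.width * lutImage.height {
    let p = pixelData[i]
    pixelData[i] = simd_float4(p.z, p.y, p.x, p.w)
}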
Also, in your collection view delegate, it will probably improve performance if you return the cell as fast as possible and post the generation of the thumbnail image to a background queue that sets the cell's image when done. Like this:
func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
    let lutPath = self.lutPaths[indexPath.item]
    let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FilterCell", for: indexPath) as! FilterCell
    DispatchQueue.global(qos: .background).async {
        if let lutImage = UIImage(contentsOfFile: lutPath) {
            let renderer = CIFilter(name: "CIColorCube")!
            let lutData = ColorCubeHelper.colorCubeDataFromLUTPNGImage(lutImage, lutSize: 64)
            renderer.setValue(lutData!, forKey: "inputCubeData")
            renderer.setValue(64, forKey: "inputCubeDimension")
            renderer.setValue(inputCIImage, forKey: kCIInputImageKey)
            let outputImage = renderer.outputImage!
            let cgImage = self.ciContext.createCGImage(outputImage, from: outputImage.extent)!
            DispatchQueue.main.async {
                cell.configure(image: UIImage(cgImage: cgImage))
            }
        } else {
            NSLog("LUT not found at \(indexPath.item)")
        }
    }
    return cell
}
I finally got it working by using vDSP functions, which make cube data generation so fast that scrolling is smooth on an iPhone X without using any background queues for loading textures!
public static func cubeDataForLut64(_ lutImage: NSImage) -> Data? {
    // NSImage is the macOS entry point; on iOS, take a UIImage and use its cgImage instead.
    guard let lutCgImage = lutImage.cgImage else {
        return nil
    }
    return cubeDataForLut64(lutCgImage)
}
private static func cubeDataForLut64(_ lutImage: CGImage) -> Data? {
    let cubeDimension = 64
    let cubeFloatCount = cubeDimension * cubeDimension * cubeDimension * 4
    let cubeSize = cubeFloatCount * MemoryLayout<Float>.size // byte count
    let imageWidth = lutImage.width
    let imageHeight = lutImage.height
    let rowCount = imageHeight / cubeDimension
    let columnCount = imageWidth / cubeDimension
    // a valid 64x64x64 LUT must satisfy all three conditions, hence && rather than ||
    guard (imageWidth % cubeDimension == 0) && (imageHeight % cubeDimension == 0) && (rowCount * columnCount == cubeDimension) else {
        print("Invalid LUT")
        return nil
    }
    let bitmapData = createRGBABitmapFromImage(lutImage)
    let cubeData = UnsafeMutablePointer<Float>.allocate(capacity: cubeFloatCount)
    var bitmapOffset: Int = 0
    var z: Int = 0
    for _ in 0 ..< rowCount { // ROW
        for y in 0 ..< cubeDimension {
            let tmp = z
            for _ in 0 ..< columnCount { // COLUMN
                let dataOffset = (z * cubeDimension * cubeDimension + y * cubeDimension) * 4
                var divider: Float = 255.0
                // divide a whole row of 64 RGBA floats by 255 in one vectorized call
                vDSP_vsdiv(&bitmapData[bitmapOffset], 1, &divider, &cubeData[dataOffset], 1, UInt(cubeDimension) * 4)
                bitmapOffset += cubeDimension * 4
                z += 1
            }
            z = tmp
        }
        z += columnCount
    }
    free(bitmapData)
    return Data(bytesNoCopy: cubeData, count: cubeSize, deallocator: .free)
}
fileprivate static func createRGBABitmapFromImage(_ image: CGImage) -> UnsafeMutablePointer<Float> {
    let bitsPerPixel = 32
    let bitsPerComponent = 8
    let bytesPerPixel = bitsPerPixel / bitsPerComponent // 4 bytes = RGBA
    let imageWidth = image.width
    let imageHeight = image.height
    let bitmapBytesPerRow = imageWidth * bytesPerPixel
    let bitmapByteCount = bitmapBytesPerRow * imageHeight
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapData = malloc(bitmapByteCount)
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue).rawValue
    let context = CGContext(data: bitmapData, width: imageWidth, height: imageHeight, bitsPerComponent: bitsPerComponent, bytesPerRow: bitmapBytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
    let rect = CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight)
    context?.draw(image, in: rect)
    // Convert the UInt8 byte array to single-precision Floats
    let convertedBitmap = malloc(bitmapByteCount * MemoryLayout<Float>.size)
    vDSP_vfltu8(UnsafePointer<UInt8>(bitmapData!.assumingMemoryBound(to: UInt8.self)), 1,
                UnsafeMutablePointer<Float>(convertedBitmap!.assumingMemoryBound(to: Float.self)), 1,
                vDSP_Length(bitmapByteCount))
    free(bitmapData)
    return UnsafeMutablePointer<Float>(convertedBitmap!.assumingMemoryBound(to: Float.self))
}

swift - speed improvement in UIView pixel per pixel drawing

Is there a way to improve the speed/performance of drawing pixel by pixel into a UIView?
The current implementation, for a 500x500 pixel UIView, is terribly slow.
class CustomView: UIView {
    public var context = UIGraphicsGetCurrentContext()
    public var redvalues = [[CGFloat]](repeating: [CGFloat](repeating: 1.0, count: 500), count: 500)
    public var start = 0 {
        didSet {
            self.setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        context = UIGraphicsGetCurrentContext()
        for yindex in 0...499 {
            for xindex in 0...499 {
                context?.setStrokeColor(UIColor(red: redvalues[xindex][yindex], green: 0.0, blue: 0.0, alpha: 1.0).cgColor)
                context?.setLineWidth(2)
                context?.beginPath()
                context?.move(to: CGPoint(x: CGFloat(xindex), y: CGFloat(yindex)))
                context?.addLine(to: CGPoint(x: CGFloat(xindex) + 1.0, y: CGFloat(yindex)))
                context?.strokePath()
            }
        }
    }
}
Thank you very much
When drawing individual pixels, you can use a bitmap context. A bitmap context takes raw pixel data as an input.
The context copies your raw pixel data so you don't have to use paths, which are likely much slower. You can then get a CGImage by using context.makeImage().
The image can then be used in an image view, which would eliminate the need to redraw the whole thing every frame.
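A minimal sketch of that idea (assuming 8-bit RGBA pixels with premultiplied alpha; fill the buffer however you like before making the image):

// One UInt32 per pixel, RGBA8.
let width = 500, height = 500
// 0xFF0000FF lays out in memory as R=255, G=0, B=0, A=255 on little-endian devices (opaque red).
var pixels = [UInt32](repeating: 0xFF0000FF, count: width * height)
var image: UIImage?
pixels.withUnsafeMutableBytes { buffer in
    guard let base = buffer.baseAddress,
          let context = CGContext(data: base,
                                  width: width, height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue),
          let cgImage = context.makeImage() else { return }
    image = UIImage(cgImage: cgImage)
}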
If you don't want to manually create a bitmap context, you can use
UIGraphicsBeginImageContext(size)
let context = UIGraphicsGetCurrentContext()
// draw everything into the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then you can use a UIImageView to display the rendered image.
It is also possible to draw into a CALayer, which does not need to be redrawn every frame but only when resized.
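For example, a CGImage produced by makeImage() can be handed straight to a layer's contents, with no draw(_:) override at all (a sketch; bitmapContext is assumed to be a CGContext like the ones above):

// The layer simply displays the bitmap; update contents only when the pixels change.
if let cgImage = bitmapContext?.makeImage() {
    view.layer.contents = cgImage
}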
That's how it looks now. Are there any further optimizations possible, or not?
public struct rgba {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

public let imageview = UIImageView()

override func viewDidLoad() {
    super.viewDidLoad()
    let width_input = 500
    let height_input = 500
    let redPixel = rgba(r: 255, g: 0, b: 0, a: 255)
    let greenPixel = rgba(r: 0, g: 255, b: 0, a: 255)
    let bluePixel = rgba(r: 0, g: 0, b: 255, a: 255)
    var pixelData = [rgba](repeating: redPixel, count: Int(width_input * height_input))
    pixelData[1] = greenPixel
    pixelData[3] = bluePixel
    self.view.addSubview(imageview)
    imageview.frame = CGRect(x: 100, y: 100, width: 600, height: 600)
    imageview.image = draw(pixel: pixelData, width: width_input, height: height_input)
}

func draw(pixel: [rgba], width: Int, height: Int) -> UIImage {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let data = UnsafeMutableRawPointer(mutating: pixel)
    let bitmapContext = CGContext(data: data,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 4 * width,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    let image = bitmapContext?.makeImage()
    return UIImage(cgImage: image!)
}
I took the answer from Manuel and got it working in Swift 5. The main sticking point was clearing the dangling-pointer warning that Xcode 12 now raises.
var image: CGImage?
pixelData.withUnsafeMutableBytes { (rawBufferPtr: UnsafeMutableRawBufferPointer) in
    if let rawPtr = rawBufferPtr.baseAddress {
        let bitmapContext = CGContext(data: rawPtr,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4 * width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        image = bitmapContext?.makeImage()
    }
}
I did have to move away from the rgba struct approach for front-loading the data, and moved to direct UInt32 values derived from the rawValues in an enum. The 'append' or 'replaceInRange' approach to updating an existing array took hours (my bitmap was LARGE) and ended up exhausting the swap space on my computer.
enum Color: UInt32 { // All 4 bytes long with full opacity
    case red = 4278190335 // 0xFF0000FF
    case yellow = 4294902015
    case orange = 4291559679
    case pink = 4290825215
    case violet = 4001558271
    case purple = 2147516671
    case green = 16711935
    case blue = 65535 // 0x0000FFFF
}
With this approach I was able to quickly build a Data buffer of the required size via:
func prepareColorBlock(c: Color) -> Data {
    var rawData = withUnsafeBytes(of: c.rawValue) { Data($0) }
    rawData.reverse() // Byte order is reversed when defined
    var dataBlock = Data()
    dataBlock.reserveCapacity(100)
    for _ in stride(from: 0, to: 100, by: 1) {
        dataBlock.append(rawData)
    }
    return dataBlock
}
With that, I just appended each of these blocks into my mutable Data instance 'pixelData' and we are off. You can tweak how the data is assembled; I just wanted to generate some color bars in a UIImageView to validate the work. For an 800x600 view, it took about 2.3 seconds to generate and render the whole thing.
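Putting the pieces together might look like this (a sketch; the 800x600 size matches the example above, and prepareColorBlock is the helper from before):

// 800 x 600 pixels, 4 bytes each, appended 100 pixels at a time.
let byteCount = 800 * 600 * 4
var pixelData = Data()
pixelData.reserveCapacity(byteCount)
while pixelData.count < byteCount {
    pixelData.append(prepareColorBlock(c: .red))
}
// pixelData can now be handed to the withUnsafeMutableBytes snippet above.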
Again, hats off to Manuel for pointing me in the right direction.

Why does my CGImage not display correctly after setNeedsDisplay

I'm drawing a color wheel in my app. I do this by generating a CGImage in a separate function and using CGContextDrawImage in drawRect to put it onscreen.
On the initial presentation it looks fine, but after I call setNeedsDisplay (or setNeedsDisplayInRect) the image turns black/distorted. I must be doing something stupid, but I can't see what.
The drawRect code looks like:
override func drawRect(rect: CGRect) {
    if let context = UIGraphicsGetCurrentContext() {
        let wheelFrame = CGRectMake(0, 0, circleRadius*2, circleRadius*2)
        CGContextSaveGState(context)
        //create clipping mask for circular wheel
        CGContextAddEllipseInRect(context, wheelFrame)
        CGContextClip(context)
        //draw the wheel
        if colorWheelImage != nil {
            CGContextDrawImage(context, CGRectMake(0, 0, circleRadius*2, circleRadius*2), colorWheelImage)
        }
        CGContextRestoreGState(context)
        //draw a selector element
        self.drawSliderElement(context)
    }
}
The CGImage generation function:
func generateColorWheelImage() {
    let width = Int(circleRadius*2)
    let height = Int(circleRadius*2)
    var pixels = [PixelData]()
    for yIndex in 0..<width {
        for xIndex in 0..<height {
            pixels.append(colorAtPoint(CGPoint(x: xIndex, y: yIndex)))
        }
    }
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo.ByteOrderDefault
    let bitsPerComponent: Int = 8
    let bitsPerPixel: Int = 24
    let renderingIntent = CGColorRenderingIntent.RenderingIntentDefault
    assert(pixels.count == Int(width * height))
    var data = pixels
    let providerRef = CGDataProviderCreateWithData(nil, data, data.count * sizeof(PixelData), nil)
    colorWheelImage = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, width * Int(sizeof(PixelData)), rgbColorSpace, bitmapInfo, providerRef, nil, true, renderingIntent)
}
I finally found the answer (here: https://stackoverflow.com/a/10798750/5233176) to this problem when I started running the code on an iPad rather than the simulator, and received a BAD_ACCESS error rather than seeing a distorted image.
As that answer explains: '[The] CMSampleBuffer is used directly to create a CGImageRef, so [the] CGImageRef will become invalid once the buffer is released.'
Hence the problem was with this line:
let providerRef = CGDataProviderCreateWithData(nil, data, data.count * sizeof(PixelData), nil)
And the problem can be fixed by using a copy of the buffer, like so:
let providerData = NSData(bytes: data, length: data.count * sizeof(PixelData))
let provider = CGDataProviderCreateWithCFData(providerData)
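(For reference, in Swift 3 and later the same copy-based fix could be written as follows; a sketch using the same data array and PixelData type:)

let providerData = Data(bytes: data, count: data.count * MemoryLayout<PixelData>.stride) as CFData
let provider = CGDataProvider(data: providerData)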

Convert a Double Array to UIImage in swift

Is it possible to convert a double array to a UIImage using a UIImage function?
var ImageDouble = [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
The UIImage function asks for a String. I tried converting the individual elements of the array to strings, and the whole thing to a string; I get a nil result. Can someone help?
[Edit:]
As it seems, I was not really clear about my problem, so to make it clearer: I am developing software which crops the numbers & operators out of a mathematical expression and shows me all the chunks one by one. Now, how do I go about it in Swift?
Step-1: I have a grayscale image provided. I read it using a UIImage function. I convert it to a custom RGBA type as follows, so that I can work at the pixel level the way we do in MATLAB, Python or C++. The code is inspired by suggestions from many websites.
struct Pixel {
    var value: UInt32

    var red: UInt8 {
        get { return UInt8(value & 0xFF) }
        set { value = UInt32(newValue) | (value & 0xFFFFFF00) }
    }
    var green: UInt8 {
        get { return UInt8((value >> 8) & 0xFF) }
        set { value = (UInt32(newValue) << 8) | (value & 0xFFFF00FF) }
    }
    var blue: UInt8 {
        get { return UInt8((value >> 16) & 0xFF) }
        set { value = (UInt32(newValue) << 16) | (value & 0xFF00FFFF) }
    }
    var alpha: UInt8 {
        get { return UInt8((value >> 24) & 0xFF) }
        set { value = (UInt32(newValue) << 24) | (value & 0x00FFFFFF) }
    }
}

struct RGBA {
    var pixels: UnsafeMutableBufferPointer<Pixel>
    var width: Int
    var height: Int

    init?(image: UIImage) {
        guard let cgImage = image.CGImage else { return nil }
        width = Int(image.size.width)
        height = Int(image.size.height)
        let bitsPerComponent = 8
        let bytesPerPixel = 4
        let bytesPerRow = width * bytesPerPixel
        let imageData = UnsafeMutablePointer<Pixel>.alloc(width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
        guard let imageContext = CGBitmapContextCreate(imageData, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo) else { return nil }
        CGContextDrawImage(imageContext, CGRect(origin: CGPointZero, size: image.size), cgImage)
        pixels = UnsafeMutableBufferPointer<Pixel>(start: imageData, count: width * height)
    }

    func toUIImage() -> UIImage? {
        let bitsPerComponent = 8
        let bytesPerPixel = 4
        let bytesPerRow = width * bytesPerPixel
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
        bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
        let imageContext = CGBitmapContextCreateWithData(pixels.baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo, nil, nil)
        guard let cgImage = CGBitmapContextCreateImage(imageContext) else { return nil }
        let image = UIImage(CGImage: cgImage)
        return image
    }
}
Step-2: Now I take one channel of the image [Red: it does not matter which one I take, since all the channels have the same values] and apply my custom median filter with a 5x5 kernel.
Step-3: Next, I binarize the image using the Niblack approximation, where I use an averaging filter and the standard deviation.
Step-4: After that, I use connected component labelling to separate out the different connected components of the image.
Step-5: Finally, I need to crop the labelled components and resize them. For cropping from the original image, I know the location thanks to a smart location algorithm. For resizing, I want to use a Core Graphics resizing filter. However, for that I need to convert my current output [a two-dimensional array, or flattened] to UIImage or CGImage.
That's my real question: how do I convert a two-dimensional array (or a flattened array) of type Array<Double> or Array<Int> to UIImage or CGImage?
Try this:
var ImageDouble = [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
let images = ImageDouble.map{ UIImage(named: "\($0)")}
The map function will transform your array into another array.
Just try this:
var str: [String] = []
for i in 0 ..< ImageDouble.count {
    str.append(String(ImageDouble[i]))
}
print(str)
But why would you pass a String of numbers to a UIImage? Are the images in your assets named as numbers?
Looks like you want to render 1, 2, etc. into a bitmap.
First you need a function to do that, and then use flatMap on the array of doubles with it. This ensures you get an array of UIImages in a safe way, by filtering out any nils that result when image creation fails.
func createNumberImage(number: Double) -> UIImage? {
    //Set a frame that you want.
    let frame = CGRect(x: 0, y: 0, width: 200, height: 200)
    let view = UIView(frame: frame)
    view.backgroundColor = UIColor.redColor()
    let label = UILabel(frame: frame)
    view.addSubview(label)
    label.font = UIFont.systemFontOfSize(100)
    label.text = String(format: "%.1lf", arguments: [number])
    label.textAlignment = NSTextAlignment.Center
    UIGraphicsBeginImageContext(view.bounds.size)
    view.layer.renderInContext(UIGraphicsGetCurrentContext()!)
    let screenShot = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return screenShot
}
let doubles = [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]
//Get the array of UIImage objects.
let images = doubles.flatMap { createNumberImage($0) }
You can use a playground to verify this works. You might also want to set proper background and text colors, and a font size, to suit your requirements.

Gray scaling the entire view

I am working on a Swift-based application which shows some events. Each event has a time limit, after which the event expires.
I want to make the event screen grayscale after the event expires, for which I have tried things like:
Mask view
if let maskImage = UIImage(named: "MaskImage") {
    myView.maskView = UIImageView(image: maskImage)
}
The above solution did not work for me, as my event screen contains colored images as well
Recursively fetching all subviews and trying to set their backgroundColor, which did not work either
Changing the alpha value of all subviews, which only shows a more faded, whiter color
Query: my event screen has many colorful images and some colorful labels; how can I make all of these grayscale?
Any help would be appreciated.
After some research, I've achieved the gray-scaling effect over the entire view with something like this:
/**
 To convert an image to grayscale
 - parameter image: The UIImage to be converted
 - returns: The UIImage after conversion
 */
static func convertToGrayScale(_ image: UIImage?) -> UIImage? {
    if image == nil {
        return nil
    }
    let imageRect: CGRect = CGRect(x: 0, y: 0, width: image!.size.width, height: image!.size.height)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let width = image!.size.width
    let height = image!.size.height
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let context = CGContext(data: nil, width: Int(width), height: Int(height), bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    context?.draw(image!.cgImage!, in: imageRect)
    let imageRef = context?.makeImage()
    let newImage = UIImage(cgImage: imageRef!)
    return newImage
}
The method above will work for all colored images in the presented view, and the same converted image can be used for myView.maskView = UIImageView(image: maskImage).
This will grayscale your entire view. For gray-scaling only labels and text, I used this:
// Convert to grayscale
func convertToGrayScaleColor() -> UIColor? {
    if self == UIColor.clear {
        return UIColor.clear
    }
    var fRed: CGFloat = 0
    var fGreen: CGFloat = 0
    var fBlue: CGFloat = 0
    var fAlpha: CGFloat = 0
    if self.getRed(&fRed, green: &fGreen, blue: &fBlue, alpha: &fAlpha) {
        if fRed == 0 && fGreen == 0 && fBlue == 0 {
            return UIColor.gray
        } else {
            let grayScaleColor = (fRed + fGreen + fBlue) / 3
            return UIColor(red: grayScaleColor, green: grayScaleColor, blue: grayScaleColor, alpha: fAlpha)
        }
    } else {
        print("Could not extract RGBA components, so rolling back to default UIColor.grayColor()")
        return UIColor.gray
    }
}
The method above is part of an extension on UIColor. The code is written in Swift 3.0 using Xcode 8.1.
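Usage is then a one-liner wherever a color needs to be desaturated (a sketch, assuming the extension above):

// Desaturate a label when its event expires.
label.textColor = label.textColor.convertToGrayScaleColor()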
Please feel free to drop a comment if concerned or confused about anything in the answer.
