I am trying to do a very simple map of an image from one UIImageView to another UIImageView via a CGContext, so that I can manipulate the pixel RGB values. This is the code I have used, which lets me go through the pixels of the initial image and create the output image, but I don't know how to extract each pixel's RGB value as I go. Can anyone suggest how?
var sampleImage = UIImage()

func map() {
    self.sampleImage = imageViewInput.image!
    let imageRect = CGRectMake(0, 0, self.sampleImage.size.width, self.sampleImage.size.height)
    UIGraphicsBeginImageContext(sampleImage.size)
    let context: CGContextRef = UIGraphicsGetCurrentContext()
    CGContextSaveGState(context)
    CGContextDrawImage(context, imageRect, sampleImage.CGImage)
    var pixelCount = 0
    for var i = CGFloat(0); i < sampleImage.size.width; i++ {
        for var j = CGFloat(0); j < sampleImage.size.height; j++ {
            pixelCount++
            let index = pixelCount
            let red: CGFloat = // Get the red component value of the pixel
            let green: CGFloat = // Get the green component value of the pixel
            let blue: CGFloat = // Get the blue component value of the pixel
            CGContextSetRGBFillColor(context, red, green, blue, 1)
            CGContextFillRect(context, CGRectMake(i, j, 1, 1))
        }
    }
    CGContextRestoreGState(context)
    let img: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    imageViewOutput.image = img
}
I have two image views, one with an image, another with an image defined with CGContext methods, both with the same image size and image view size, on top of each other. In the storyboard, I can set both image views to "Aspect Fit", so users on different devices can still see the image. However, when I go to draw something on the overlaid second image view, it does not scale it accordingly (or relative to the first image view, even though they are the same size). How do I go about making the second image in the overlaid image view the same scale as the image below?
Example Code:
import CoreGraphics
import UIKit

class Map: UIViewController, UIScrollViewDelegate {
    @IBOutlet weak var scrollView: UIScrollView!
    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var drawnImageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        self.scrollView.minimumZoomScale = 1.0
        self.scrollView.maximumZoomScale = 6.0
        let data = grabData()
        print(data!)
        var img = UIImage(named: "floor1")
        print(img!.size)
        imageView.image = img
        img = draw(imageView.bounds.size, data: data!)
        print(img!.size)
        drawnImageView.image = img
        for c in drawnImageView.constraints {
            if c.identifier == "constraintImageHeight" {
                c.constant = img!.size.height * drawnImageView.bounds.width / img!.size.width
                break
            }
        }
    }
}
func draw(img: UIImage) -> UIImage {
    UIGraphicsBeginImageContext(img.size)
    let context = UIGraphicsGetCurrentContext()
    // Drawing
    let color = UIColor(red: 0.67, green: 0.4, blue: 0.56, alpha: 1)
    CGContextSetStrokeColorWithColor(context, color.CGColor)
    CGContextSetLineWidth(context, 2.0)
    var y = 0
    for _ in 0..<100 {
        let b = CGRect(origin: CGPoint(x: 0, y: y), size: CGSize(width: 1200, height: 1))
        CGContextStrokeRect(context, b)
        y += 30
    }
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
Update 1:
If I load the iPhone 5 simulator, the line doesn't show up in the same place in relation to the photo as it does in the iPhone 6 simulator. (The code below replaces everything after '// Drawing'.)
func draw(size: CGSize, data: [TotalLine]) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    let screen: CGRect = UIScreen.mainScreen().bounds
    let x = screen.size.width, y = screen.size.height
    // Drawing
    //let color = UIColor(red: 0.67, green: 0.4, blue: 0.56, alpha: 1)
    let color = UIColor.redColor()
    CGContextSetStrokeColorWithColor(context, color.CGColor)
    CGContextSetLineWidth(context, 2.0)
    let line = CGRect(origin: CGPoint(x: (x/16), y: (y*0.502)), size: CGSize(width: (x/20), height: 2))
    CGContextStrokeRect(context, line)
    let test = CGRect(origin: CGPoint(x: 0, y: 0), size: CGSize(width: x, height: y))
    CGContextStrokeRect(context, test)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
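A likely cause of the device-dependent offset (an editorial aside, not from the original thread): the positions are derived from UIScreen.mainScreen().bounds, which differs between the iPhone 5 and iPhone 6, rather than from the size passed into the function. A minimal sketch computing the line's position from the drawing size itself, so it lands at the same relative spot on every device:

```swift
// Sketch (assumes Swift 2-era UIKit APIs, as in the rest of this thread).
// Positions are fractions of the context's own size, not of UIScreen bounds,
// so the result is identical on every device.
func drawDeviceIndependent(size: CGSize) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetStrokeColorWithColor(context, UIColor.redColor().CGColor)
    CGContextSetLineWidth(context, 2.0)
    // Same fractions the original used, but relative to `size`
    let line = CGRect(x: size.width / 16, y: size.height * 0.502,
                      width: size.width / 20, height: 2)
    CGContextStrokeRect(context, line)
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage
}
```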
Update 2:
iPhone 6:
iPhone 5:
I want that red line to show up in between the rooms, like in the iPhone 6 screenshot. In the iPhone 5, it is slightly lower.
Update 3:
Printing image views:
iPhone 5:
Drawn Image View: frame = (0 0; 600 536);
Image View: frame = (0 0; 600 536);
iPhone 6:
Drawn Image View: frame = (0 0; 600 536);
Image View: frame = (0 0; 600 536);
Try to replace this line:
UIGraphicsBeginImageContext(img.size)
with this:
UIGraphicsBeginImageContextWithOptions(img.size, false, UIScreen.main.scale)
It's not clear why you use the hardcoded values 100 and 1200. If your original image is big enough (taller than 3000 or wider than 1200), your lines won't fill the whole image. Also, you don't actually need the original image to create the overlay. You just need to know the size, right? Try this method instead:
func createLinesImage(size: CGSize) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()!
    let color = UIColor(red: 0.67, green: 0.4, blue: 0.56, alpha: 1)
    context.setStrokeColor(color.cgColor)
    context.setLineWidth(2.0)
    var y = CGFloat(0)
    while y < size.height {
        let lineRect = CGRect(x: 0, y: y, width: size.width, height: 1)
        context.stroke(lineRect)
        y += 30
    }
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newImage!
}
Now, If you want to draw lines only over the original image, call your method like this:
imageView2.image = createLinesImage(size: originalImage.size)
If you need to fill the whole imageView with lines, even if there are blank zones for your original image, use this line:
imageView2.image = createLinesImage(size: imageView2.bounds.size)
I have a table view with a label array, and a UIView with a UIImage array:
var menuItems = ["News", "Programm"]
var currentItem = "News"
var menuImages: [UIImage] = [UIImage(named: "news.png")!, UIImage(named: "programm_100.png")!]
The text and its color of the label is set by:
cell.titleLabel.text = menuItems[indexPath.row]
cell.titleLabel.textColor = (menuItems[indexPath.row] == currentItem) ? UIColor.whiteColor() : UIColor.grayColor()
Now I want the images to appear a bit grey, too.
I've tried to set it with cell.titlePicture.alpha.
How can I set the UIView's alpha generically depending on the current item?
You can use an Extension. Just create a new Swift file and implement this:
import UIKit

extension UIImage {
    func alpha(value: CGFloat) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(self.size, false, 0.0)
        let ctx = UIGraphicsGetCurrentContext()
        let area = CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height)
        CGContextScaleCTM(ctx, 1, -1)
        CGContextTranslateCTM(ctx, 0, -area.size.height)
        CGContextSetBlendMode(ctx, CGBlendMode.Normal)
        CGContextSetAlpha(ctx, value)
        CGContextDrawImage(ctx, area, self.CGImage)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
Then apply it when setting your image:
UIImage(named: "news.png")!.alpha(0.5)
Assuming titlePicture is a UIImageView, you can edit the alpha of it directly:
cell.titlePicture.alpha = (menuItems[indexPath.row] == currentItem) ? 0.6 : 1.0
Tweak the alpha values to your liking. Example output:
alpha = 1.0:
alpha = 0.6:
I am trying to find out how to create an oval gradient which is not a perfect circle. For example an american football/rugby/eye shape instead of a circle.
At the moment I have created a CALayer subclass, shown below, which draws a circle. How can I get this to an oval shape (as mentioned above)?
class CircleGradientLayer: CALayer {
    var startRadius: CGFloat?
    var startCenter: CGPoint?
    var endCenter: CGPoint?
    var endRadius: CGFloat?
    var locations: [CGFloat]?
    var colors: [UIColor]?

    override func drawInContext(ctx: CGContext) {
        super.drawInContext(ctx)
        if let colors = self.colors, let locations = self.locations, let startRadius = self.startRadius, let startCenter = self.startCenter, let endCenter = self.endCenter, let endRadius = self.endRadius {
            var colorSpace: CGColorSpaceRef?
            var components = [CGFloat]()
            for i in 0 ..< colors.count {
                let colorRef = colors[i].CGColor
                let colorComponents = CGColorGetComponents(colorRef)
                let numComponents = CGColorGetNumberOfComponents(colorRef)
                if colorSpace == nil {
                    colorSpace = CGColorGetColorSpace(colorRef)
                }
                for j in 0 ..< numComponents {
                    components.append(colorComponents[j])
                }
            }
            if let colorSpace = colorSpace {
                let gradient = CGGradientCreateWithColorComponents(colorSpace, components, locations, locations.count)
                CGContextDrawRadialGradient(ctx, gradient, startCenter, startRadius, endCenter, endRadius, CGGradientDrawingOptions.DrawsAfterEndLocation)
            }
        }
    }
}
One way to do it would be to set your layer's transform by scaling the X up by some factor. Note that CATransform3DMakeScale takes three scale factors (x, y, z). For example:
layer.transform = CATransform3DMakeScale(1.5, 1.0, 1.0)
Not sure if that's the best solution or not, but it should work!
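An alternative sketch (my addition, not from the original answer): instead of transforming the whole layer, scale the context's CTM inside drawInContext before drawing the radial gradient, so only the gradient is stretched into an ellipse. This assumes the same properties as the class above, and `makeGradient()` is a hypothetical helper standing in for the gradient-building loop already shown; Swift 2-era API:

```swift
// Sketch: stretch a circular radial gradient into an ellipse by scaling
// the CTM around the gradient's center. xScale > 1 widens it horizontally.
override func drawInContext(ctx: CGContext) {
    super.drawInContext(ctx)
    guard let gradient = makeGradient(),      // hypothetical helper building the CGGradient
          let center = startCenter,
          let radius = endRadius else { return }
    let xScale: CGFloat = 1.5
    CGContextSaveGState(ctx)
    // Translate so the scale pivots on the center, keeping the gradient in place
    CGContextTranslateCTM(ctx, center.x, center.y)
    CGContextScaleCTM(ctx, xScale, 1.0)
    CGContextTranslateCTM(ctx, -center.x, -center.y)
    CGContextDrawRadialGradient(ctx, gradient, center, 0, center, radius,
                                CGGradientDrawingOptions.DrawsAfterEndLocation)
    CGContextRestoreGState(ctx)
}
```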
I'm trying to edit images by changing the pixels.
I have the following code:
let imageRect = CGRectMake(0, 0, self.image.image!.size.width, self.image.image!.size.height)
UIGraphicsBeginImageContext(self.image.image!.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, self.image.image!.CGImage)
for x in 0 ..< Int(self.image.image!.size.width) {
    for y in 0 ..< Int(self.image.image!.size.height) {
        var red = 0
        if y % 2 == 0 {
            red = 255
        }
        // CGFloat(red) / 255.0, not CGFloat(red/255): the latter is integer division
        CGContextSetRGBFillColor(context, CGFloat(red) / 255.0, 0.5, 0.5, 1)
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
self.image.image = UIGraphicsGetImageFromCurrentImageContext()
I'm looping through all the pixels and changing the value of each pixel, then converting it back to an image. What I want to do is somehow get the value of the current pixel (in the y for-loop) and do something with that data. I have not found anything on the internet about this particular problem.
Under the covers, UIGraphicsBeginImageContext creates a CGBitmapContext. You can get access to the context's pixel storage using CGBitmapContextGetData. The problem with this approach is that the UIGraphicsBeginImageContext function chooses the byte order and color space used to store the pixel data. Those choices (particularly the byte order) could change in future versions of iOS (or even on different devices).
So instead, let's create the context directly with CGBitmapContextCreate, so we can be sure of the byte order and color space.
In my playground, I've added a test image named pic@2x.jpeg.
import XCPlayground
import UIKit
let image = UIImage(named: "pic.jpeg")!
XCPCaptureValue("image", value: image)
Here's how we create the bitmap context, taking the image scale into account (which you didn't do in your question):
let rowCount = Int(image.size.height * image.scale)
let columnCount = Int(image.size.width * image.scale)
let stride = 64 * ((columnCount * 4 + 63) / 64)
let context = CGBitmapContextCreate(nil, columnCount, rowCount, 8, stride,
CGColorSpaceCreateDeviceRGB(),
CGBitmapInfo.ByteOrder32Little.rawValue |
CGImageAlphaInfo.PremultipliedLast.rawValue)
Next, we adjust the coordinate system to match what UIGraphicsBeginImageContextWithOptions would do, so that we can draw the image correctly and easily:
CGContextTranslateCTM(context, 0, CGFloat(rowCount))
CGContextScaleCTM(context, image.scale, -image.scale)
UIGraphicsPushContext(context!)
image.drawAtPoint(CGPointZero)
UIGraphicsPopContext()
Note that UIImage.drawAtPoint takes image.orientation into account. CGContextDrawImage does not.
Now let's get a pointer to the raw pixel data from the context. The code is clearer if we define a structure to access the individual components of each pixel:
struct Pixel {
    var a: UInt8
    var b: UInt8
    var g: UInt8
    var r: UInt8
}

let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))
Note that the order of the Pixel members is defined to match the specific bits I set in the bitmapInfo argument to CGBitmapContextCreate.
Now we can loop over the pixels. Note that we use rowCount and columnCount, computed above, to visit all the pixels, regardless of the image scale:
for y in 0 ..< rowCount {
    if y % 2 == 0 {
        for x in 0 ..< columnCount {
            let pixel = pixels.advancedBy(y * stride / sizeof(Pixel.self) + x)
            pixel.memory.r = 255
        }
    }
}
Finally, we get a new image from the context:
let newImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!, scale: image.scale, orientation: UIImageOrientation.Up)
XCPCaptureValue("newImage", value: newImage)
The result, in my playground's timeline:
Finally, note that if your images are large, going through pixel by pixel can be slow. If you can find a way to perform your image manipulation using Core Image or GPUImage, it'll be a lot faster. Failing that, using Objective-C and manually vectorizing it (using NEON intrinsics) may provide a big boost.
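To illustrate the Core Image suggestion (my sketch, not part of the original answer): a whole-image channel adjustment can be expressed as a single CIColorMatrix filter, which the GPU applies without a per-pixel CPU loop. Swift 2-era API; it sets the red channel to full everywhere, a simpler variant of the every-other-row example above:

```swift
import CoreImage
import UIKit

// Sketch: force the red channel to 1.0 via CIColorMatrix instead of
// touching pixels one by one. r' = 0*r + bias(1.0).
func maximizeRed(image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }
    let filter = CIFilter(name: "CIColorMatrix")!
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIVector(x: 0, y: 0, z: 0, w: 0), forKey: "inputRVector")
    filter.setValue(CIVector(x: 1, y: 0, z: 0, w: 0), forKey: "inputBiasVector")
    guard let output = filter.outputImage else { return nil }
    let ciContext = CIContext(options: nil)
    let cgImage = ciContext.createCGImage(output, fromRect: output.extent)
    return UIImage(CGImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}
```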
Ok, I think I have a solution that should work for you in Swift 2.
Credit goes to this answer for the UIColor extension below.
Since I needed an image to test this on I chose a slice (50 x 50 - top left corner) of your gravatar...
So the code below converts this:
To this:
This works for me in a playground - all you should have to do is copy and paste into a playground to see the result:
//: Playground - noun: a place where people can play
import UIKit
import XCPlayground
extension CALayer {
    func colorOfPoint(point: CGPoint) -> UIColor {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, bitmapInfo.rawValue)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.renderInContext(context!)
        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
        //println("point color - red:\(red) green:\(green) blue:\(blue)")
        return UIColor(red: red, green: green, blue: blue, alpha: alpha)
    }
}

extension UIColor {
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r, g, b, a)
    }
}
//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string:"https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50);
let imageSlice = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea);
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)
let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))
let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)
UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)
for x in 0 ..< Int(image.size.width) {
    for y in 0 ..< Int(image.size.height) {
        let pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        } else {
            // Components are in the 0...1 range, so full red is 1.0, not 255
            CGContextSetRGBFillColor(context, 1.0, 0.5, 0.5, 1)
        }
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()
I hope this works for you. I had fun playing around with this!
This answer assumes you have a CGContext of the image created. An important part of the answer is rounding up the row offset to a multiple of 8 to ensure this works on any image size, which I haven't seen in other solutions online.
Swift 5
func colorAt(x: Int, y: Int) -> UIColor {
    // Capacity is in bytes: bytesPerRow * height, not width * height
    let capacity = context.bytesPerRow * context.height
    let widthMultiple = 8
    let rowOffset = ((context.width + widthMultiple - 1) / widthMultiple) * widthMultiple // Round up to multiple of 8
    let data: UnsafeMutablePointer<UInt8> = context.data!.bindMemory(to: UInt8.self, capacity: capacity)
    let offset = 4 * ((y * rowOffset) + x)
    let red = data[offset + 2]
    let green = data[offset + 1]
    let blue = data[offset]
    let alpha = data[offset + 3]
    return UIColor(red: CGFloat(red) / 255.0, green: CGFloat(green) / 255.0,
                   blue: CGFloat(blue) / 255.0, alpha: CGFloat(alpha) / 255.0)
}
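For context (my sketch, not part of the original answer): one way such a CGContext might be created so that the byte layout read above (blue at offset 0, red at offset 2, i.e. BGRA in memory) actually holds:

```swift
import UIKit

// Sketch: a bitmap context whose memory layout matches the colorAt(x:y:)
// accessor above: 32-bit little-endian with alpha first, which stores
// pixels as B, G, R, A in memory. Note the accessor's rowOffset rounding
// assumes the stride CoreGraphics picks equals 4 * rowOffset bytes; using
// context.bytesPerRow directly would avoid that assumption.
func makeContext(for image: UIImage) -> CGContext? {
    guard let cgImage = image.cgImage else { return nil }
    let context = CGContext(data: nil,
                            width: cgImage.width,
                            height: cgImage.height,
                            bitsPerComponent: 8,
                            bytesPerRow: 0,   // let CoreGraphics choose the row stride
                            space: CGColorSpaceCreateDeviceRGB(),
                            bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                      | CGBitmapInfo.byteOrder32Little.rawValue)
    context?.draw(cgImage,
                  in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    return context
}
```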
I can't manage to display 2 rectangles with gradients on the same view. My following code only shows the first rectangle. If I omit rectangle1, then rectangle2 is displayed; in combination, only rectangle1 shows.
I like to display the blue rectangle1 ...
... and the red rectangle2 with different gradient ...
... at the same time.
I have the following code for this:
func draw_2_gradient_rectangles() {
    let locations: [CGFloat] = [0.0, 1.0]
    let colorspace = CGColorSpaceCreateDeviceRGB()

    // first rectangle
    let context = UIGraphicsGetCurrentContext()
    let colors = [UIColor.blueColor().CGColor,
                  UIColor.whiteColor().CGColor]
    let gradient = CGGradientCreateWithColors(colorspace, colors, locations)
    var startPoint1 = CGPoint()
    var endPoint1 = CGPoint()
    startPoint1.x = 0.0
    startPoint1.y = 10.0
    endPoint1.x = 100
    endPoint1.y = 10
    let rectangle_main1 = CGRectMake(15, 0, 100, 30)
    CGContextAddRect(context, rectangle_main1)
    CGContextClip(context)
    CGContextDrawLinearGradient(context, gradient, startPoint1, endPoint1, 0)

    // second rectangle
    let context2 = UIGraphicsGetCurrentContext()
    let colors2 = [UIColor.redColor().CGColor,
                   UIColor.whiteColor().CGColor]
    let gradient2 = CGGradientCreateWithColors(colorspace, colors2, locations)
    var startPoint2 = CGPoint()
    var endPoint2 = CGPoint()
    startPoint2.x = 100
    startPoint2.y = 10.0
    endPoint2.x = 10.0
    endPoint2.y = 10.9
    let rectangle_main2 = CGRectMake(15, 50, 100, 30)
    CGContextAddRect(context2, rectangle_main2)
    CGContextClip(context2)
    CGContextDrawLinearGradient(context2, gradient2, startPoint2, endPoint2, 0)
}
What am I doing wrong? Any help?
UIGraphicsGetCurrentContext() does not create a context; it just gives you a reference to the current one.
This means context is the same as context2, and in context you already clipped the drawing area. The second rectangle falls outside that clipping area, so nothing is drawn for it.
What you need to do is save the drawing state before each rectangle creation code with :
CGContextSaveGState(context);
and restore it before doing the second rectangle code with
CGContextRestoreGState(context);
This will make sure the clipping area is reset before drawing the second rectangle. e.g:
CGContextSaveGState(context);
// Create rectangle1
CGContextRestoreGState(context);
CGContextSaveGState(context);
// Create rectangle2
CGContextRestoreGState(context);
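Putting the answer's suggestion into the question's code (my sketch, adapted from the original function; I also moved the second gradient's axis down to y = 60 so it runs through the second rectangle rather than above it):

```swift
// Sketch: two clipped linear gradients in one context. Each clip is wrapped
// in save/restore so it does not shrink the drawing area for the next one.
func draw_2_gradient_rectangles() {
    let locations: [CGFloat] = [0.0, 1.0]
    let colorspace = CGColorSpaceCreateDeviceRGB()
    let context = UIGraphicsGetCurrentContext()

    // first rectangle: clip, draw, restore so the clip is discarded
    CGContextSaveGState(context)
    let gradient1 = CGGradientCreateWithColors(colorspace,
        [UIColor.blueColor().CGColor, UIColor.whiteColor().CGColor], locations)
    CGContextAddRect(context, CGRectMake(15, 0, 100, 30))
    CGContextClip(context)
    CGContextDrawLinearGradient(context, gradient1,
        CGPoint(x: 0, y: 10), CGPoint(x: 100, y: 10), [])
    CGContextRestoreGState(context)

    // second rectangle: a fresh clip now applies to the full view again
    CGContextSaveGState(context)
    let gradient2 = CGGradientCreateWithColors(colorspace,
        [UIColor.redColor().CGColor, UIColor.whiteColor().CGColor], locations)
    CGContextAddRect(context, CGRectMake(15, 50, 100, 30))
    CGContextClip(context)
    CGContextDrawLinearGradient(context, gradient2,
        CGPoint(x: 100, y: 60), CGPoint(x: 10, y: 60), [])
    CGContextRestoreGState(context)
}
```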