I've created several UIViews inside of a UIScrollView that resize dynamically based on values I type into Height and Width text fields. Once the UIView resizes I save the contents of the UIScrollView as PDF data.
I find that the dimensions of the UIView within the PDF (when measured in Adobe Illustrator) are always rounded to a third.
For example:
1.5 -> 1.333
1.75 -> 1.666
I check the constant values each time before the constraints are updated and they are accurate. Can anyone explain why the UIViews have incorrect dimensions once rendered as a PDF?
@IBAction func updateDimensions(_ sender: Any) {
    guard let length = NumberFormatter().number(from: lengthTextField.text ?? "") else { return }
    guard let width = NumberFormatter().number(from: widthTextField.text ?? "") else { return }
    guard let height = NumberFormatter().number(from: heightTextField.text ?? "") else { return }

    let flapHeight = CGFloat(truncating: width) / 2
    let lengthFloat = CGFloat(truncating: length)
    let widthFloat = CGFloat(truncating: width)
    let heightFloat = CGFloat(truncating: height)

    UIView.animate(withDuration: 0.3) {
        self.faceAWidthConstraint.constant = lengthFloat
        self.faceAHeightConstraint.constant = heightFloat
        self.faceBWidthConstraint.constant = widthFloat
        self.faceA1HeightConstraint.constant = flapHeight
        self.view.layoutIfNeeded()
    }
}
func createPDFfrom(aView: UIView, saveToDocumentsWithFileName fileName: String) {
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, aView.bounds, nil)
    UIGraphicsBeginPDFPage()
    guard let pdfContext = UIGraphicsGetCurrentContext() else { return }

    aView.layer.render(in: pdfContext)
    UIGraphicsEndPDFContext()

    if let documentDirectories = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true).first {
        let documentsFileName = documentDirectories + "/" + fileName
        debugPrint(documentsFileName)
        pdfData.write(toFile: documentsFileName, atomically: true)
    }
}
You should not be using layer.render(in:) to render your PDF. The reason it's always a third is that you must be on a 3x device (it would be halves on a 2x device and whole points on a 1x device), so there are 3 pixels per point. When iOS converts your constraints to pixels, the best it can do is round to the nearest third, because it has to pick an integer pixel. A PDF can have much higher pixel density (or use vector art with effectively infinite resolution), so instead of using layer.render(in:), which dumps the rasterized pixels of the layer into your PDF, you should draw the contents into the PDF context manually (i.e. with UIBezierPath, UIImage.draw, and so on). That lets the PDF capture the full resolution of any rasterized images you have, and lets it keep any vectors you use as vectors rather than degrading them into pixels constrained by whichever device screen you happen to be on.
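For illustration, a minimal sketch of drawing vectors straight into the PDF context (the faceFrames array and pageBounds here are hypothetical stand-ins for the rects you already compute from your constraint constants, not your exact layout):
// Sketch only: draws each face as a vector rectangle instead of rasterizing the layer.
func createVectorPDF(faceFrames: [CGRect], pageBounds: CGRect) -> Data {
    let pdfData = NSMutableData()
    UIGraphicsBeginPDFContextToData(pdfData, pageBounds, nil)
    UIGraphicsBeginPDFPage()

    for frame in faceFrames {
        // UIBezierPath emits true vector geometry, so the PDF keeps the exact
        // fractional dimensions instead of snapping them to 1/3-point pixels.
        let path = UIBezierPath(rect: frame)
        UIColor.black.setStroke()
        path.lineWidth = 0.5
        path.stroke()
    }

    UIGraphicsEndPDFContext()
    return pdfData as Data
}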
Related
I am working on a video editing app where each video gets squared in such a way that no portion of the video is cropped. For a portrait video this leaves black areas on the left and right, and for a landscape video on the top and bottom. The black areas are part of the video itself; they are not added by AVPlayerViewController. Here is a sample:
I need to cover these black portions with some CALayers.
What will be the frame (CGRect) of the CALayer?
I am getting the video dimensions with the naturalSize property, which includes the black portions.
Is there any way to get the video dimensions without the black portions (i.e. the dimensions of the actual video content), or
is there any way to get the CGRect of the black areas of the video?
func initAspectRatioOfVideo(with fileURL: URL) -> Double {
    let resolution = resolutionForLocalVideo(url: fileURL)
    guard let width = resolution?.width, let height = resolution?.height else {
        return 0
    }
    return Double(height / width)
}

private func resolutionForLocalVideo(url: URL) -> CGSize? {
    guard let track = AVURLAsset(url: url).tracks(withMediaType: AVMediaType.video).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(size.width), height: abs(size.height))
}
This is a more concise version of Vlad Pulichev's answer.
var aspectRatio: CGFloat! // use the function to assign your variable
func getVideoResolution(url: String) -> CGFloat? {
    guard let track = AVURLAsset(url: URL(string: url)!).tracks(withMediaType: AVMediaType.video).first else { return nil }
    let size = track.naturalSize.applying(track.preferredTransform)
    return abs(size.height) / abs(size.width)
}
All I want to do is take the basic ARKit view and turn it into a black and white view. Right now the view renders normally and I have no idea how to add the filter. Ideally the black and white filter would also be applied to a screenshot when one is taken.
import UIKit
import SceneKit
import ARKit
class ViewController: UIViewController, ARSCNViewDelegate {

    @IBOutlet var sceneView: ARSCNView!

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.delegate = self
        sceneView.showsStatistics = true
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let configuration = ARWorldTrackingConfiguration()
        sceneView.session.run(configuration)
    }

    override func viewWillDisappear(_ animated: Bool) {
        super.viewWillDisappear(animated)
        sceneView.session.pause()
    }

    @IBAction func changeTextColour(){
        let snapShot = self.augmentedRealityView.snapshot()
        UIImageWriteToSavedPhotosAlbum(snapShot, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
    }
}
If you want to apply the filter in real-time, the best way to achieve that is to use SCNTechnique. Techniques are used for postprocessing and allow us to render an SCNView's content in several passes – exactly what we need (first render the scene, then apply an effect to it).
Here's the example project.
Plist setup
First, we need to describe a technique in a .plist file.
Here's a screenshot of a plist that I've come up with (for better visualization):
And here's its source:
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>sequence</key>
    <array>
        <string>apply_filter</string>
    </array>
    <key>passes</key>
    <dict>
        <key>apply_filter</key>
        <dict>
            <key>metalVertexShader</key>
            <string>scene_filter_vertex</string>
            <key>metalFragmentShader</key>
            <string>scene_filter_fragment</string>
            <key>draw</key>
            <string>DRAW_QUAD</string>
            <key>inputs</key>
            <dict>
                <key>scene</key>
                <string>COLOR</string>
            </dict>
            <key>outputs</key>
            <dict>
                <key>color</key>
                <string>COLOR</string>
            </dict>
        </dict>
    </dict>
</dict>
</plist>
The topic of SCNTechniques is quite broad and I will only quickly cover the things we need for the case at hand. To get a real understanding of what they are capable of, I recommend reading Apple's comprehensive documentation on techniques.
Technique description
passes is a dictionary containing a description of each pass you want the SCNTechnique to perform.
sequence is an array that specifies the order in which these passes are performed, referenced by their keys.
You do not specify the main render pass here (meaning whatever is rendered without applying SCNTechniques) – it is implied, and its resulting color can be accessed using the COLOR constant (more on that in a bit).
So the only "extra" pass (besides the main one) that we are going to do is apply_filter, which converts colors to black and white (it can be named whatever you want, just make sure it uses the same key in passes and sequence).
Now to the description of the apply_filter pass itself.
Render pass description
metalVertexShader and metalFragmentShader – names of Metal shader functions that are going to be used for drawing.
draw defines what the pass is going to render. DRAW_QUAD stands for:
Render only a rectangle covering the entire bounds of the view. Use
this option for drawing passes that process image buffers output by
earlier passes.
which means, roughly speaking, that we are going to be rendering a plain "image" with our render pass.
inputs specifies the input resources that we will be able to use in shaders. As I said earlier, COLOR refers to the color data provided by the main render pass.
outputs specifies the outputs. They can be color, depth or stencil, but we only need a color output. The COLOR value means that we are, simply put, rendering "directly" to the screen (as opposed to rendering into intermediate targets, for example).
Metal shader
Create a .metal file with the following contents:
#include <metal_stdlib>
using namespace metal;

#include <SceneKit/scn_metal>

struct VertexInput {
    float4 position [[ attribute(SCNVertexSemanticPosition) ]];
    float2 texcoord [[ attribute(SCNVertexSemanticTexcoord0) ]];
};

struct VertexOut {
    float4 position [[position]];
    float2 texcoord;
};

// metalVertexShader
vertex VertexOut scene_filter_vertex(VertexInput in [[stage_in]])
{
    VertexOut out;
    out.position = in.position;
    out.texcoord = float2((in.position.x + 1.0) * 0.5, (in.position.y + 1.0) * -0.5);
    return out;
}

// metalFragmentShader
fragment half4 scene_filter_fragment(VertexOut vert [[stage_in]],
                                     texture2d<half, access::sample> scene [[texture(0)]])
{
    constexpr sampler samp = sampler(coord::normalized, address::repeat, filter::nearest);
    constexpr half3 weights = half3(0.2126, 0.7152, 0.0722);

    half4 color = scene.sample(samp, vert.texcoord);
    color.rgb = half3(dot(color.rgb, weights));

    return color;
}
Notice that the function names for the fragment and vertex shaders must match the names specified for the pass in the plist file.
To get a better understanding of what the VertexInput and VertexOut structures mean, refer to the SCNProgram documentation.
The given vertex function can be used in pretty much any DRAW_QUAD render pass. It basically gives us normalized screen-space coordinates (accessed via vert.texcoord in the fragment shader).
The fragment function is where all the "magic" happens. There, you can manipulate the texture that you've got from the main pass. Using this setup you can potentially implement a ton of filters/effects and more.
In our case, I used a basic desaturation (zero saturation) formula to get the black and white colors: the fragment shader replaces each pixel's RGB with its luminance, i.e. the dot product with the Rec. 709 weights (0.2126, 0.7152, 0.0722).
Swift setup
Now, we can finally use all of this in the ARKit/SceneKit.
let plistName = "SceneFilterTechnique" // the name of the plist you've created

guard let url = Bundle.main.url(forResource: plistName, withExtension: "plist") else {
    fatalError("\(plistName).plist does not exist in the main bundle")
}

guard let dictionary = NSDictionary(contentsOf: url) as? [String: Any] else {
    fatalError("Failed to parse \(plistName).plist as a dictionary")
}

guard let technique = SCNTechnique(dictionary: dictionary) else {
    fatalError("Failed to initialize a technique using \(plistName).plist")
}
and just set it as the technique of the ARSCNView:
sceneView.technique = technique
That's it. Now the whole scene is going to be rendered in grayscale including when taking snapshots.
Filter ARSCNView Snapshot: If you want to create a black and white screenshot of your ARSCNView, you can do something like this, which returns a UIImage in grayscale (here augmentedRealityView refers to an ARSCNView):
/// Converts A UIImage To A High Contrast GrayScaleImage
///
/// - Returns: UIImage
func highContrastBlackAndWhiteFilter() -> UIImage? {

    //1. Convert It To A CIImage
    guard let convertedImage = CIImage(image: self) else { return nil }

    //2. Set The Filter Parameters
    let filterParameters = [kCIInputBrightnessKey: 0.0,
                            kCIInputContrastKey: 1.1,
                            kCIInputSaturationKey: 0.0]

    //3. Apply The Basic Filter To The Image
    let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)

    //4. Set The Exposure
    let exposure = [kCIInputEVKey: NSNumber(value: 0.7)]

    //5. Process The Image With The Exposure Setting
    let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)

    //6. Create A CG GrayScale Image
    guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }

    return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
}
An example of using this therefore could be like so:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    //1. Create A UIImageView Dynamically
    let imageViewResult = UIImageView(frame: CGRect(x: 0, y: 0, width: self.view.bounds.width, height: self.view.bounds.height))
    self.view.addSubview(imageViewResult)

    //2. Create The Snapshot & Get The Black & White Image
    guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
    imageViewResult.image = snapShotImage

    //3. Remove The ImageView After A Delay Of 5 Seconds
    DispatchQueue.main.asyncAfter(deadline: .now() + 5) {
        imageViewResult.removeFromSuperview()
    }
}
Which will yield a result something like this:
In order to make your code reusable you could also create an extension of `UIImage`:
//------------------------
//MARK: UIImage Extensions
//------------------------

extension UIImage {

    /// Converts A UIImage To A High Contrast GrayScaleImage
    ///
    /// - Returns: UIImage
    func highContrastBlackAndWhiteFilter() -> UIImage? {

        //1. Convert It To A CIImage
        guard let convertedImage = CIImage(image: self) else { return nil }

        //2. Set The Filter Parameters
        let filterParameters = [kCIInputBrightnessKey: 0.0,
                                kCIInputContrastKey: 1.1,
                                kCIInputSaturationKey: 0.0]

        //3. Apply The Basic Filter To The Image
        let imageToFilter = convertedImage.applyingFilter("CIColorControls", parameters: filterParameters)

        //4. Set The Exposure
        let exposure = [kCIInputEVKey: NSNumber(value: 0.7)]

        //5. Process The Image With The Exposure Setting
        let processedImage = imageToFilter.applyingFilter("CIExposureAdjust", parameters: exposure)

        //6. Create A CG GrayScale Image
        guard let grayScaleImage = CIContext().createCGImage(processedImage, from: processedImage.extent) else { return nil }

        return UIImage(cgImage: grayScaleImage, scale: self.scale, orientation: self.imageOrientation)
    }
}
Which you can then use easily like so:
guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }
Remember that you should place your extension above your class declaration, e.g.:
extension UIImage{
}
class ViewController: UIViewController, ARSCNViewDelegate {
}
So based on the code provided in your question you would have something like this:
/// Creates A Black & White ScreenShot & Saves It To The Photo Album
@IBAction func changeTextColour(){

    //1. Create A Snapshot
    guard let snapShotImage = self.augmentedRealityView.snapshot().highContrastBlackAndWhiteFilter() else { return }

    //2. Save It To The Photos Album
    UIImageWriteToSavedPhotosAlbum(snapShotImage, self, #selector(image(_:didFinishSavingWithError:contextInfo:)), nil)
}

/// Callback To Check Whether The Image Has Been Saved
@objc func image(_ image: UIImage, didFinishSavingWithError error: Error?, contextInfo: UnsafeRawPointer) {

    if let error = error {
        print("Error Saving ARKit Scene \(error)")
    } else {
        print("ARKit Scene Successfully Saved")
    }
}
Live Rendering In Black & White:
Using this brilliant answer here by diviaki I was also able to get the entire camera feed to render in Black and White using the following methods:
1st. Register for the ARSessionDelegate like so:
augmentedRealitySession.delegate = self
2nd. Then in the following delegate callback add the following:
//-----------------------
//MARK: ARSessionDelegate
//-----------------------
extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate frame: ARFrame) {

        /*
        Full Credit To https://stackoverflow.com/questions/45919745/reliable-access-and-modify-captured-camera-frames-under-scenekit
        */

        //1. Convert The Current Frame To Black & White
        guard let currentBackgroundFrameImage = augmentedRealityView.session.currentFrame?.capturedImage,
              let pixelBufferAddressOfPlane = CVPixelBufferGetBaseAddressOfPlane(currentBackgroundFrameImage, 1) else { return }

        let x: size_t = CVPixelBufferGetWidthOfPlane(currentBackgroundFrameImage, 1)
        let y: size_t = CVPixelBufferGetHeightOfPlane(currentBackgroundFrameImage, 1)

        //2. Writing 128 (neutral chroma) into the CbCr plane strips the colour, leaving only luminance
        memset(pixelBufferAddressOfPlane, 128, Int(x * y) * 2)
    }
}
Which successfully renders the camera feed Black & White:
Filtering Elements Of An SCNScene In Black & White:
As @Confused rightly said, if you decide that you want the camera feed to be in colour but the contents of your AR experience to be in black and white, you can apply a filter directly to an SCNNode using its filters property, which is simply:
An array of Core Image filters to be applied to the rendered contents
of the node.
Let's say, for example, that we dynamically create 3 SCNNodes with a sphere geometry; we can apply a Core Image filter to these directly like so:
/// Creates 3 Objects And Adds Them To The Scene (Rendering Them In GrayScale)
func createObjects(){

    //1. Create An Array Of UIColors To Set As The Geometry Colours
    let colours = [UIColor.red, UIColor.green, UIColor.yellow]

    //2. Create An Array Of The X Positions Of The Nodes
    let xPositions: [CGFloat] = [-0.3, 0, 0.3]

    //3. Create The Nodes & Add Them To The Scene
    for i in 0 ..< 3{

        let sphereNode = SCNNode()
        let sphereGeometry = SCNSphere(radius: 0.1)
        sphereGeometry.firstMaterial?.diffuse.contents = colours[i]
        sphereNode.geometry = sphereGeometry
        sphereNode.position = SCNVector3(xPositions[i], 0, -1.5)
        augmentedRealityView.scene.rootNode.addChildNode(sphereNode)

        //a. Create A Black & White Filter
        guard let blackAndWhiteFilter = CIFilter(name: "CIColorControls", withInputParameters: [kCIInputSaturationKey: 0.0]) else { return }
        blackAndWhiteFilter.name = "bw"
        sphereNode.filters = [blackAndWhiteFilter]
        sphereNode.setValue(CIFilter(), forKeyPath: "bw")
    }
}
Which will yield a result something like the following:
For a full list of these filters you can refer to the following: CoreImage Filter Reference
Example Project: Here is a complete Example Project which you can download and explore for yourself.
Hope it helps...
The snapshot object should be a UIImage. Apply filters to this UIImage by importing the CoreImage framework and then applying Core Image filters to it. You should adjust the exposure and control values on the image. For more implementation details check this answer. Since iOS 6 you can also use the CIColorMonochrome filter to achieve the same effect.
Here is the Apple documentation for all the available filters. Click on each filter to see the visual effect it produces on an image.
Here is the Swift 4 code.
func imageBlackAndWhite() -> UIImage? {
    if let beginImage = CoreImage.CIImage(image: self) {
        let paramsColor: [String: Double] = [kCIInputBrightnessKey: 0.0,
                                             kCIInputContrastKey: 1.1,
                                             kCIInputSaturationKey: 0.0]
        let blackAndWhite = beginImage.applyingFilter("CIColorControls", parameters: paramsColor)

        let paramsExposure: [String: AnyObject] = [kCIInputEVKey: NSNumber(value: 0.7)]
        let output = blackAndWhite.applyingFilter("CIExposureAdjust", parameters: paramsExposure)

        guard let processedCGImage = CIContext().createCGImage(output, from: output.extent) else {
            return nil
        }
        return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
    }
    return nil
}
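As a rough sketch of the CIColorMonochrome route mentioned above, written in the same UIImage-extension style as imageBlackAndWhite() (the parameter keys are the standard kCIInputColorKey and kCIInputIntensityKey; adjust the tint colour and intensity to taste):
func imageMonochrome() -> UIImage? {
    guard let beginImage = CIImage(image: self) else { return nil }

    // CIColorMonochrome remaps the image to shades of a single colour (here: gray).
    let output = beginImage.applyingFilter("CIColorMonochrome",
                                           parameters: [kCIInputColorKey: CIColor(color: .gray),
                                                        kCIInputIntensityKey: 1.0])

    guard let processedCGImage = CIContext().createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: processedCGImage, scale: self.scale, orientation: self.imageOrientation)
}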
This might be the easiest and fastest way to do this:
Apply a CoreImage Filter to the Scene:
https://developer.apple.com/documentation/scenekit/scnnode/1407949-filters
This filter gives a very good impression of a black and white photograph, with good transitions through grays: https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIPhotoEffectMono
You could also use this one, whose results are easy to shift in hue, too:
https://developer.apple.com/library/content/documentation/GraphicsImaging/Reference/CoreImageFilterReference/index.html#//apple_ref/doc/filter/ci/CIColorMonochrome
And here, in Japanese, is a demonstration of filters, SceneKit and ARKit working together: http://appleengine.hatenablog.com/entry/advent20171215
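If you go that route, the wiring is minimal; a hedged sketch (sceneView is assumed to be your ARSCNView, and note that node filters affect the SceneKit content rendered under that node, not necessarily the camera background):
// Sketch: apply CIPhotoEffectMono to everything rendered under the root node.
if let mono = CIFilter(name: "CIPhotoEffectMono") {
    sceneView.scene.rootNode.filters = [mono]
}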
I've been generating QR Codes using the CIQRCodeGenerator CIFilter and it works very well:
But when I resize the UIImageView and generate again
@IBAction func sizeSliderValueChanged(_ sender: UISlider) {
    qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sender.value), y: CGFloat(sender.value))
}
I get a weird Border/DropShadow around the image sometimes:
How can I prevent it from appearing at all times or remove it altogether?
I have no idea what it is exactly, a border, a dropShadow or a Mask, as I'm new to Swift/iOS.
Thanks in advance!
PS. I didn't post any of the QR-Code generating code as it's pretty boilerplate and can be found in many tutorials out there, but let me know if you need it
EDIT:
code to generate the QR Code Image
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
        return nil
    }
    filter.setValue(data, forKey: "inputMessage")

    guard let qrEncodedImage = filter.outputImage else {
        return nil
    }

    let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
    let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
    let transform = CGAffineTransform(scaleX: scaleX, y: scaleY)

    if let outputImage = filter.outputImage?.applying(transform) {
        return UIImage(ciImage: outputImage)
    }
    return nil
}
Code for button pressed
@IBAction func generateCodeButtonPressed(_ sender: CustomButton) {
    if codeTextField.text == "" {
        return
    }

    let newEncodedMessage = codeTextField.text!
    let encodedImage: UIImage = generateQRCode(from: newEncodedMessage)!

    qrImageView.image = encodedImage
    qrImageView.transform = CGAffineTransform(scaleX: CGFloat(sizeSlider.value), y: CGFloat(sizeSlider.value))
    qrImageView.layer.minificationFilter = kCAFilterNearest
    qrImageView.layer.magnificationFilter = kCAFilterNearest
}
It's a little hard to be sure without the code you're using to generate the image for the image view, but that looks like a resizing artifact. The CIImage may be black or transparent outside the edges of the QR code, and when the image view's size doesn't match the image's intended size, the edges get fuzzy and either the image outside its boundaries or the image view's background color starts bleeding in. You might be able to fix it by setting the image view layer's minification/magnification filters to "nearest neighbor", like so:
imageView.layer.minificationFilter = kCAFilterNearest
imageView.layer.magnificationFilter = kCAFilterNearest
Update, after seeing the code you added: you're currently resizing the image twice, first with the call to applying(transform) and then by setting a transform on the image view itself. I suspect the first resize is adding the blurriness, which the minification/magnification filter I suggested earlier then can't fix. Try shortening generateQRCode to this:
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)

    guard let filter = CIFilter(name: "CIQRCodeGenerator") else {
        return nil
    }
    filter.setValue(data, forKey: "inputMessage")

    if let qrEncodedImage = filter.outputImage {
        // filter.outputImage is a CIImage, so wrap it with init(ciImage:)
        return UIImage(ciImage: qrEncodedImage)
    }
    return nil
}
I think the problem here is that you're trying to resize it to a non-square size (your scaleX isn't always the same as scaleY), while the QR code is always square, so both sides should use the same scale factor to get a non-blurred image.
Something like:
let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
let scale = max(scaleX, scaleY)
let transform = CGAffineTransform(scaleX: scale, y: scale)
will make sure you have a "non-bordered/non-blurred/squared" UIImage.
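Folding that into the function from the question, one possible revision might look like this (same structure, just a single scale factor):
private func generateQRCode(from string: String) -> UIImage? {
    let data = string.data(using: String.Encoding.ascii)
    guard let filter = CIFilter(name: "CIQRCodeGenerator") else { return nil }
    filter.setValue(data, forKey: "inputMessage")
    guard let qrEncodedImage = filter.outputImage else { return nil }

    // One scale factor for both axes keeps the square code square and crisp.
    let scaleX = qrImageView.frame.size.width / qrEncodedImage.extent.size.width
    let scaleY = qrImageView.frame.size.height / qrEncodedImage.extent.size.height
    let scale = max(scaleX, scaleY)

    return UIImage(ciImage: qrEncodedImage.applying(CGAffineTransform(scaleX: scale, y: scale)))
}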
I guess the issue is with the image (PNG) file, not with your UIImageView. Try using another image and I hope it will work!
I have a byte array that comes from a fingerprint sensor device. I want to create a bitmap out of it. I have tried a few examples, but all I get is a nil UIImage.
If there are any steps to do this, please tell me.
Thanks.
This is what my func does:
func didFingerGrabDataReceived(data: [UInt8]) {
    if data[0] == 0 {
        let width1 = Int16(data[0]) << 8
        let finalWidth = Int(Int16(data[1]) | width1)

        let height1 = Int16(data[2]) << 8
        let finalHeight = Int(Int16(data[3]) | height1)

        var finalData: [UInt8] = [UInt8]()

        // I don't want the first 8 bytes, so remove them
        for i in 8 ..< data.count {
            finalData.append(data[i])
        }

        dispatch_async(dispatch_get_main_queue()) { () -> Void in
            let msgData = NSMutableData(bytes: finalData, length: finalData.count)
            let ptr = UnsafeMutablePointer<UInt8>(msgData.mutableBytes)

            let colorSpace = CGColorSpaceCreateDeviceGray()
            if colorSpace == nil {
                self.showToast("color space is nil")
                return
            }

            let bitmapContext = CGBitmapContextCreate(ptr, finalWidth, finalHeight, 8, 4 * finalWidth, colorSpace, CGImageAlphaInfo.Only.rawValue)
            if bitmapContext == nil {
                self.showToast("context is nil")
                return
            }

            let cgImage = CGBitmapContextCreateImage(bitmapContext)
            if cgImage == nil {
                self.showToast("image is nil")
                return
            }

            let newimage = UIImage(CGImage: cgImage!)
            self.imageViewFinger.image = newimage
        }
    }
}
I am getting a distorted image. Can someone please help?
The significant issue here is that when you called CGBitmapContextCreate, you specified that you're building an alpha channel alone, and your data buffer clearly uses one byte per pixel, but for the "bytes per row" parameter you specified 4 * width. It should just be width. You generally use 4x when you're capturing four bytes per pixel (e.g. RGBA), but since your buffer uses one byte per pixel, you should remove that 4x factor.
Personally, I'd also advise a range of other improvements, namely:
The only thing that should be dispatched to the main queue is the updating of the UIKit control
You can retire finalData, as you don't need to copy from one buffer to another, but rather you can build msgData directly.
You should probably bypass the creation of your own buffer completely, though, and call CGBitmapContextCreate with nil for the data parameter, in which case, it will create its own buffer which you can retrieve via CGBitmapContextGetData. If you pass it a buffer, it assumes you'll manage this buffer yourself, which we're not doing here.
If you create your own buffer and don't manage that memory properly, you'll experience difficult-to-reproduce errors where it looks like it works, but suddenly you'll see the buffer corrupted for no reason in seemingly similar situations. By letting Core Graphics manage the memory, these sorts of problems are prevented.
I might separate the conversion of this byte buffer to a UIImage from the updating of the UIImageView.
So that yields something like:
func mask(from data: [UInt8]) -> UIImage? {
    guard data.count >= 8 else {
        print("data too small")
        return nil
    }

    let width  = Int(data[1]) | Int(data[0]) << 8
    let height = Int(data[3]) | Int(data[2]) << 8

    let colorSpace = CGColorSpaceCreateDeviceGray()

    guard
        data.count >= width * height + 8,
        let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width, space: colorSpace, bitmapInfo: CGImageAlphaInfo.alphaOnly.rawValue),
        let buffer = context.data?.bindMemory(to: UInt8.self, capacity: width * height)
    else {
        return nil
    }

    for index in 0 ..< width * height {
        buffer[index] = data[index + 8]
    }

    return context.makeImage().flatMap { UIImage(cgImage: $0) }
}
And then
if let image = mask(from: data) {
    DispatchQueue.main.async {
        self.imageViewFinger.image = image
    }
}
For Swift 2 rendition, see previous revision of this answer.
Is there a way to check the image dimensions (i.e. height and width) before downloading (or partially downloading) the image from a URL? I have found ways to get the image size, but that doesn't help.
Basically I want to calculate the correct height of a UITableView row before the image is downloaded. Is this possible?
You can do a partial download of the image data and then extract the image size from that. You will have to get the data structure of the image format you are using and parse it to some extent. It is possible and not that hard if you are comfortable with lower-level coding.
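A hedged sketch of that idea, using an HTTP Range request plus ImageIO (this assumes the server honours Range headers and that the image header fits in the first 64 KB, which is usually but not always the case):
import Foundation
import ImageIO

// Sketch: download only a prefix of the file and try to read the pixel size from it.
func fetchImageSize(from url: URL, completion: @escaping (CGSize?) -> Void) {
    var request = URLRequest(url: url)
    request.setValue("bytes=0-65535", forHTTPHeaderField: "Range")

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard
            let data = data,
            let source = CGImageSourceCreateWithData(data as CFData, nil),
            let properties = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
            let width = properties[kCGImagePropertyPixelWidth] as? Int,
            let height = properties[kCGImagePropertyPixelHeight] as? Int
        else {
            completion(nil)
            return
        }
        completion(CGSize(width: width, height: height))
    }.resume()
}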
You can do it by reading the image's header details.
In Swift 3.0 the code below will help you:
if let imageSource = CGImageSourceCreateWithURL(url! as CFURL, nil) {
    if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
        let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
        let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
        print("the image width is: \(pixelWidth)")
        print("the image height is: \(pixelHeight)")
    }
}
Create an IBOutlet for the height constraint.
E.g. here the image comes from the server in a 16:9 ratio.
This will automatically adjust the height for all screen sizes. The image view's contentMode is Aspect Fit.
override func viewDidLoad() {
    cnstHeight.constant = (self.view.frame.width / 16) * 9
}
Swift 4 Method:
func getImageDimensions(from url: URL) -> (width: Int, height: Int) {
    if let imageSource = CGImageSourceCreateWithURL(url as CFURL, nil) {
        if let imageProperties = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil) as Dictionary? {
            let pixelWidth = imageProperties[kCGImagePropertyPixelWidth] as! Int
            let pixelHeight = imageProperties[kCGImagePropertyPixelHeight] as! Int
            return (pixelWidth, pixelHeight)
        }
    }
    return (0, 0)
}
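Example usage, for instance to size a table-row image's height constraint before the download finishes (imageURL, imageView and imageHeightConstraint are hypothetical names for your own data and outlets):
let (width, height) = getImageDimensions(from: imageURL)
if width > 0 {
    // Preserve the remote image's aspect ratio while the actual download happens elsewhere.
    imageHeightConstraint.constant = imageView.frame.width * CGFloat(height) / CGFloat(width)
}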