Read (custom) EXIF data from PNG file - iOS

For an app, I'm trying to parse a .vcf file containing all of my firm's colleagues. Some of them have no real photo and instead automatically get a dummy photo inserted. To make the app future-proof, I don't want to check for a resolution of 500x500, even though that would work right now. The idea from the department responsible for generating the vcf was to add a comment to the dummy photo base file they always use. I tried reading that comment in Swift, but have no luck, as you can see in my test playground code:
import UIKit
import ImageIO

let photo = UIImage(named: "bild")!
let photoData = UIImagePNGRepresentation(photo)!
let base64String = photoData.base64EncodedString()
let photoSource = CGImageSourceCreateWithData(photoData as CFData, nil)!
for (key, value) in CGImageSourceCopyPropertiesAtIndex(photoSource, 0, nil) as! [String : Any] {
    print("\(key): \(value)")
}
Output:
PixelWidth: 500
Depth: 8
ProfileName: sRGB IEC61966-2.1
HasAlpha: 1
ColorModel: RGB
{PNG}: {
    Chromaticities = (
        "0.3127",
        "0.329",
        "0.64",
        "0.33",
        "0.3",
        "0.6000000000000001",
        "0.15",
        "0.06"
    );
    Gamma = "0.45455";
    InterlaceType = 0;
    sRGBIntent = 0;
}
PixelHeight: 500
Meanwhile, the output of exiftool in Terminal shows this for the same image (see especially the User Comment and Document Name custom fields):
➔ exiftool bild.png
ExifTool Version Number : 10.50
File Name : bild.png
Directory : .
File Size : 4.2 kB
File Modification Date/Time : 2017:05:06 12:51:23+02:00
File Access Date/Time : 2017:05:06 12:51:24+02:00
File Inode Change Date/Time : 2017:05:06 12:51:23+02:00
File Permissions : rw-r--r--
File Type : PNG
File Type Extension : png
MIME Type : image/png
Image Width : 500
Image Height : 500
Bit Depth : 8
Color Type : Palette
Compression : Deflate/Inflate
Filter : Adaptive
Interlace : Noninterlaced
Palette : (Binary data 477 bytes, use -b option to extract)
Transparency : 0
Background Color : 0
Pixels Per Unit X : 2835
Pixels Per Unit Y : 2835
Pixel Units : meters
Modify Date : 2017:05:05 08:04:36
Exif Byte Order : Big-endian (Motorola, MM)
Document Name : dummy
X Resolution : 72
Y Resolution : 72
Resolution Unit : inches
Y Cb Cr Positioning : Centered
Exif Version : 0231
Components Configuration : Y, Cb, Cr, -
User Comment : dummy
Flashpix Version : 0100
Color Space : Uncalibrated
Image Size : 500x500
Megapixels : 0.250
I already tried accessing the User Comment via kCGImagePropertyExifUserComment, but this returns nil; I suspect it would only return a value if the code above also worked as expected (dict here is the properties dictionary from CGImageSourceCopyPropertiesAtIndex):
let userComment = dict[kCGImagePropertyExifUserComment as String] // User Comment is set --> but this returns nil
let pixelWidth = dict[kCGImagePropertyPixelWidth as String] // As a reference that this does normally work --> shows 500 as expected
Do you have any suggestions for how to add a comment to the image that is readable from Swift code?

Here is a complete example showing how to create an image, save it as a PNG with metadata, then retrieve that metadata from the file. You should be able to paste this into an iOS Playground.
//: Playground - noun: a place where people can play
import UIKit
import ImageIO
import MobileCoreServices

var str = "Hello, playground"

if let image = createImage() {
    let pngDictionary : NSDictionary = [
        kCGImagePropertyPNGTitle : "Smile for the Camera",
        kCGImagePropertyPNGAuthor : "Smiles-R-Us",
        kCGImagePropertyPNGCopyright : "©2017 Smiles-R-Us",
        kCGImagePropertyPNGCreationTime : String(describing: Date()),
        kCGImagePropertyPNGDescription : "Have a Nice Day!"
    ]
    let imageMetadata : NSDictionary = [ kCGImagePropertyPNGDictionary : pngDictionary ]
    let tempURL = FileManager.default.temporaryDirectory
    let filePath = tempURL.appendingPathComponent("Smile.png") as NSURL
    let imageDestination = CGImageDestinationCreateWithURL(filePath, kUTTypePNG, 1, nil)
    if let destination = imageDestination {
        CGImageDestinationAddImage(destination, image.cgImage!, imageMetadata)
        CGImageDestinationFinalize(destination)
    }
    if let imageSource = CGImageSourceCreateWithURL(filePath, nil) {
        print(CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil))
    }
    print(filePath)
}
func createImage() -> UIImage? {
    let bounds = CGRect(x: 0, y: 0, width: 200, height: 200)
    UIGraphicsBeginImageContext(bounds.size)
    if let cgContext = UIGraphicsGetCurrentContext() {
        let inset = bounds.insetBy(dx: 20, dy: 20)
        cgContext.clear(bounds)
        cgContext.saveGState()
        cgContext.setStrokeColor(UIColor.black.cgColor)
        cgContext.setFillColor(UIColor.black.cgColor)
        cgContext.setLineWidth(2.0)
        cgContext.strokeEllipse(in: inset)
        let eyeLevel = inset.maxY - (inset.height * 0.618)
        cgContext.fillEllipse(in: CGRect(x: inset.minX + inset.width * 0.3,
                                         y: eyeLevel, width: 10, height: 10))
        cgContext.fillEllipse(in: CGRect(x: inset.minX + inset.width * 0.6,
                                         y: eyeLevel, width: 10, height: 10))
        cgContext.addArc(center: CGPoint(x: inset.midX, y: inset.midY),
                         radius: (inset.width / 2.0 - 20),
                         startAngle: 2.61, endAngle: 0.52, clockwise: true)
        cgContext.strokePath()
    }
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return image
}
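Worth noting for the original question: UIImagePNGRepresentation re-encodes the image, and a UIImage does not carry the source file's EXIF or PNG text chunks, so the metadata is gone before CGImageSourceCreateWithData ever runs. Reading the file bytes directly preserves it. A minimal sketch, assuming the image ships in the bundle as bild.png (whether ImageIO exposes a PNG's embedded EXIF block can vary by OS version, so verify against your actual file):
import ImageIO

// Create the image source from the file URL so the original bytes,
// including metadata chunks, are what ImageIO parses.
if let url = Bundle.main.url(forResource: "bild", withExtension: "png"),
   let source = CGImageSourceCreateWithURL(url as CFURL, nil),
   let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [String : Any] {

    // EXIF block: exiftool's "User Comment" should surface here.
    if let exif = props[kCGImagePropertyExifDictionary as String] as? [String : Any] {
        print("User comment: \(exif[kCGImagePropertyExifUserComment as String] ?? "none")")
    }

    // PNG text chunks (tEXt/iTXt) land in the {PNG} dictionary instead.
    if let png = props[kCGImagePropertyPNGDictionary as String] as? [String : Any] {
        print("PNG properties: \(png)")
    }
}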

Related

How to split a file's content string into slices in Swift?

I'm trying to split a string into slices and send them to the server. For example: I have a file whose content string has a length of 813753, and I have to split it into slices with a maximum size of 10000. All the code below runs inside the file picker delegate method:
func documentPicker(_ controller: UIDocumentPickerViewController, didPickDocumentsAt urls: [URL]) {
    let file = urls[0]
    do {
        let fileData = try Data.init(contentsOf: file.absoluteURL)
        let encodedString = String.init(data: fileData, encoding: .isoLatin1)!
        let fileName = file.lastPathComponent
        let time = Int64(NSDate().timeIntervalSince1970)
    } catch {
        print(error) // handle the read failure appropriately
    }
}
After the file is selected, I first work out the number of slices, rounding up when the division leaves a remainder:
let sliceCounts = encodedString.count % 10000 > 0 ? encodedString.count / 10000 + 1 : encodedString.count / 10000
Then, in a loop, I try to get the file content slices:
for slice in 0...sliceCounts {
    let partSize = slice * 10000
    let content = encodedString.count % 10000 > 0 && partSize >= encodedString.count
        ? substring(from: partSize - 10000, to: encodedString.count, s: encodedString)
        : substring(from: partSize, to: partSize + 10000, s: encodedString)
}
I use this method to extract a part of the string:
func substring(from: Int, to: Int, s: String) -> String {
    let start = s.index(s.startIndex, offsetBy: from)
    let end = s.index(s.startIndex, offsetBy: to)
    return String(s[start..<end])
}
And the problem is that I receive this error:
Fatal error: String index is out of bounds: file Swift/StringCharacterView.swift, line 60
It happens when I try to get the last slice. I think the problem is connected with this condition:
encodedString.count % 10000 > 0 && partSize >= encodedString.count
I also suspect I'm passing bad indexes to my method, given the error. Maybe someone will see where I've made a mistake and can help :)
I see a few issues in your for loop: first, you loop one iteration too many, and second, at the end of the string you need to account for the last chunk being smaller than maxSize.
for slice in 0..<sliceCounts {
    let partSize = slice * maxSize
    let content: String
    if partSize + maxSize > encodedString.count {
        // The last chunk is whatever remains and may be shorter than maxSize.
        content = String(encodedString.suffix(encodedString.count - partSize))
    } else {
        content = substring(from: partSize, to: partSize + maxSize, s: encodedString)
    }
}
Note that I used a constant maxSize instead of a hardcoded value in my code.
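As an aside: since the string was decoded with .isoLatin1 (one byte per character), the same chunking can be done on the raw Data before decoding, which sidesteps String indexes entirely. A minimal sketch of that alternative, keeping the question's 10000-byte chunk size:
import Foundation

// Split data into chunks of at most maxSize bytes; the last chunk is
// simply whatever remains, so no special-casing is needed.
func chunks(of data: Data, maxSize: Int) -> [Data] {
    stride(from: 0, to: data.count, by: maxSize).map { offset in
        data.subdata(in: offset..<min(offset + maxSize, data.count))
    }
}

// Usage inside documentPicker:
// let parts = chunks(of: fileData, maxSize: 10000)
// let strings = parts.map { String(data: $0, encoding: .isoLatin1)! }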

AVAudioRecorder generates strange WAV file (wrong header)

How can I get only the PCM data from an AVAudioRecorder file?
These are the settings I use to record the file:
let settings : [String : Any] = [
    AVFormatIDKey: Int(kAudioFormatLinearPCM),
    AVSampleRateKey: Int(stethoscopeSampleRateDefault),
    AVNumberOfChannelsKey: 1,
    AVEncoderAudioQualityKey: AVAudioQuality.medium.rawValue,
]
The outcome of this is a strange WAV file with a strange header. How can I extract only the PCM data from it?
The actual sound data in a WAV file lives in the "data" subchunk of that file - this format description might help you visualize the structure you'll have to navigate. What may be tripping you up is that Apple includes an extra subchunk called "fllr" that precedes the sound data, so you have to seek past that too. Fortunately, every subchunk carries an id and a size, so finding the data subchunk is still relatively straightforward.
1. Open the file using FileHandle.
2. Seek to byte 12, which gets you past the header and puts you at the beginning of the first subchunk (should be "fmt").
3. Read 4 bytes and convert them to a string, then read 4 more bytes and convert them to an integer. The string is the subchunk name, and the integer is the size of that subchunk. If the string is not "data", seek forward "size" bytes and repeat step 3.
4. Read the rest of the file - this is your PCM data.
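A minimal sketch of those steps with FileHandle (assuming little-endian chunk sizes per the WAV spec; the file URL is whatever AVAudioRecorder recorded to):
import Foundation

// Walk the subchunks of a WAV file and return the payload of the
// "data" subchunk, i.e. the raw PCM samples.
func pcmData(fromWavAt url: URL) throws -> Data? {
    let handle = try FileHandle(forReadingFrom: url)
    defer { handle.closeFile() }
    handle.seek(toFileOffset: 12)                    // skip "RIFF", file size, "WAVE"
    while true {
        let header = handle.readData(ofLength: 8)    // 4-byte id + 4-byte size
        guard header.count == 8 else { return nil }  // hit EOF without finding "data"
        let id = String(bytes: header.prefix(4), encoding: .ascii) ?? ""
        let bytes = [UInt8](header.suffix(4))
        let size = UInt32(bytes[0]) | UInt32(bytes[1]) << 8
                 | UInt32(bytes[2]) << 16 | UInt32(bytes[3]) << 24
        if id == "data" {
            return handle.readData(ofLength: Int(size))
        }
        handle.seek(toFileOffset: handle.offsetInFile + UInt64(size))  // skip fmt, fllr, ...
    }
}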
With Jamie's guidance I managed to solve this. Here is my code:
// Note: `.uint32` below is a small custom Data extension not shown in the
// post; presumably something like:
// extension Data { var uint32: UInt32 { withUnsafeBytes { $0.load(as: UInt32.self) } } }
func extractSubchunks(data: Data) -> RiffFile? {
    var data = data
    var chunks = [SubChunk]()
    let position = data.subdata(in: 8..<12)
    let filelength = Int(data.subdata(in: 4..<8).uint32)
    let wave = String(bytes: position, encoding: .utf8) ?? "NoName"
    guard wave == "WAVE" else {
        print("File is \(wave) not WAVE")
        return nil
    }
    data.removeSubrange(0..<12)
    print("Found chunks")
    while data.count != 0 {
        let position = data.subdata(in: 0..<4)
        let length = Int(data.subdata(in: 4..<8).uint32)
        guard let current = String(bytes: position, encoding: .utf8) else {
            return nil
        }
        data.removeSubrange(0..<8)
        let chunkData = data.subdata(in: 0..<length)
        data.removeSubrange(0..<length)
        let subchunk = SubChunk(name: current, size: length, data: chunkData)
        chunks.append(subchunk)
        print(subchunk.debugDescription)
    }
    let riff = RiffFile(size: filelength, subChunks: chunks)
    return riff
}
Here's the definition for RiffFile and SubChunk structs:
struct RiffFile {
    var size : Int
    var subChunks : [SubChunk]
}

struct SubChunk {
    var debugDescription: String {
        return "name : \(name) size : \(size) dataAssignedsize : \(data.count)"
    }
    var name : String
    var size : Int
    var data : Data
}
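With those structs in place, pulling the PCM bytes out is just a lookup over the parsed subchunks. A short usage sketch, assuming wavData holds the file's contents:
// wavData: the WAV file loaded into memory, e.g. via try Data(contentsOf: fileURL).
if let riff = extractSubchunks(data: wavData),
   let pcm = riff.subChunks.first(where: { $0.name == "data" })?.data {
    print("Extracted \(pcm.count) bytes of PCM audio from \(riff.subChunks.count) chunks")
}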

Generating random data using Metal Performance Shaders

I am trying to generate some random integer data for my app with the GPU using MPSMatrixRandom, and I have two questions.
What is the difference between MPSMatrixRandomMTGP32 and MPSMatrixRandomPhilox?
I understand that these two shaders use different algorithms, but what are the differences between them? Does the performance or output of these two algorithms differ, and if so, how?
What code can you use to implement these shaders?
I tried to implement them myself, but my app consistently crashes with vague error messages. I'd like to see an example implementation of this being done properly.
Here's a sample demonstrating how to generate random matrices using these two kernels:
import Foundation
import Metal
import MetalPerformanceShaders

let device = MTLCreateSystemDefaultDevice()!
let commandQueue = device.makeCommandQueue()!

let rows = 8
let columns = 8
let matrixDescriptor = MPSMatrixDescriptor(rows: rows,
                                           columns: columns,
                                           rowBytes: MemoryLayout<Float>.stride * columns,
                                           dataType: .float32)
let mtMatrix = MPSMatrix(device: device, descriptor: matrixDescriptor)
let phMatrix = MPSMatrix(device: device, descriptor: matrixDescriptor)

let distribution = MPSMatrixRandomDistributionDescriptor.uniformDistributionDescriptor(withMinimum: -1.0, maximum: 1.0)
let mtKernel = MPSMatrixRandomMTGP32(device: device,
                                     destinationDataType: .float32,
                                     seed: 0,
                                     distributionDescriptor: distribution)
let phKernel = MPSMatrixRandomPhilox(device: device,
                                     destinationDataType: .float32,
                                     seed: 0,
                                     distributionDescriptor: distribution)

let commandBuffer = commandQueue.makeCommandBuffer()!
mtKernel.encode(commandBuffer: commandBuffer, destinationMatrix: mtMatrix)
phKernel.encode(commandBuffer: commandBuffer, destinationMatrix: phMatrix)
#if os(macOS)
mtMatrix.synchronize(on: commandBuffer)
phMatrix.synchronize(on: commandBuffer)
#endif
commandBuffer.commit()
commandBuffer.waitUntilCompleted() // Only necessary to ensure GPU->CPU sync for display

print("Mersenne Twister values:")
let mtValues = mtMatrix.data.contents().assumingMemoryBound(to: Float.self)
for row in 0..<rows {
    for col in 0..<columns {
        print("\(mtValues[row * columns + col])", terminator: " ")
    }
    print("")
}
print("")
print("Philox values:")
let phValues = phMatrix.data.contents().assumingMemoryBound(to: Float.self)
for row in 0..<rows {
    for col in 0..<columns {
        print("\(phValues[row * columns + col])", terminator: " ")
    }
    print("")
}
I can't comment on the statistical properties of these generators; I'd refer you to the papers mentioned in the comments.
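Since the question mentions integer data: a possible variation is to request 32-bit integer output with the default distribution descriptor, which produces uniformly distributed random bits. This is a sketch based on my reading of the MPS headers; the pairing of the default distribution with a .uInt32 destination is an assumption to verify on your OS version:
// Assumption: .default() maps to +[MPSMatrixRandomDistributionDescriptor
// defaultDistributionDescriptor], which yields uniform random bits.
// Reuses device, commandQueue, rows and columns from above.
let intDescriptor = MPSMatrixDescriptor(rows: rows,
                                        columns: columns,
                                        rowBytes: MemoryLayout<UInt32>.stride * columns,
                                        dataType: .uInt32)
let intMatrix = MPSMatrix(device: device, descriptor: intDescriptor)
let intKernel = MPSMatrixRandomPhilox(device: device,
                                      destinationDataType: .uInt32,
                                      seed: 42,
                                      distributionDescriptor: .default())
let intCommandBuffer = commandQueue.makeCommandBuffer()!
intKernel.encode(commandBuffer: intCommandBuffer, destinationMatrix: intMatrix)
intCommandBuffer.commit()
intCommandBuffer.waitUntilCompleted()
let intValues = intMatrix.data.contents().assumingMemoryBound(to: UInt32.self)
print("First random integer: \(intValues[0])")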

How to properly render a 3D model using Metal on iOS?

I created a 3D object using Blender, exported it as an OBJ file, and tried to render it with Metal by following the http://metalbyexample.com/modern-metal-1 tutorial. But some parts of my 3D object are missing - they are not rendered properly.
Here is my 3D object in Blender: (screenshot)
Here is my rendered object in Metal: (screenshot)
Here is my Blender file: https://gofile.io/?c=XfQYLK
How should I fix this?
I have already rendered other shapes (a rectangle, a circle, a star) successfully. The problem is only with this shape, even though I created and exported it from Blender in exactly the same way.
Here is how I load the OBJ file:
private var vertexDescriptor: MTLVertexDescriptor!
private var meshes: [MTKMesh] = []

private func loadResource() {
    let modelUrl = Bundle.main.url(forResource: self.meshName, withExtension: "obj")
    let vertexDescriptor = MDLVertexDescriptor()
    vertexDescriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition, format: .float3, offset: 0, bufferIndex: 0)
    vertexDescriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeNormal, format: .float3, offset: MemoryLayout<Float>.size * 3, bufferIndex: 0)
    vertexDescriptor.attributes[2] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate, format: .float2, offset: MemoryLayout<Float>.size * 6, bufferIndex: 0)
    vertexDescriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<Float>.size * 8)
    self.vertexDescriptor = MTKMetalVertexDescriptorFromModelIO(vertexDescriptor)
    let bufferAllocator = MTKMeshBufferAllocator(device: self.device)
    let asset = MDLAsset(url: modelUrl, vertexDescriptor: vertexDescriptor, bufferAllocator: bufferAllocator)
    (_, meshes) = try! MTKMesh.newMeshes(asset: asset, device: device)
}
Here are my vertex and fragment shaders:
struct VertexOut {
    float4 position [[position]];
    float4 eyeNormal;
    float4 eyePosition;
    float2 texCoords;
};

vertex VertexOut vertex_3d(VertexIn vertexIn [[stage_in]])
{
    VertexOut vertexOut;
    vertexOut.position = float4(vertexIn.position, 1);
    vertexOut.eyeNormal = float4(vertexIn.normal, 1);
    vertexOut.eyePosition = float4(vertexIn.position, 1);
    vertexOut.texCoords = vertexIn.texCoords;
    return vertexOut;
}

fragment float4 fragment_3d(VertexOut fragmentIn [[stage_in]]) {
    return float4(0.33, 0.53, 0.25, 0.5);
}
And here is my command encoder:
func render(commandEncoder: MTLRenderCommandEncoder) {
    commandEncoder.setRenderPipelineState(self.renderPipelineState)
    let mesh = meshes[0]
    let vertexBuffer = mesh.vertexBuffers.first!
    commandEncoder.setVertexBuffer(vertexBuffer.buffer, offset: vertexBuffer.offset, index: 0)
    let indexBuffer = mesh.submeshes[0].indexBuffer
    commandEncoder.drawIndexedPrimitives(type: mesh.submeshes[0].primitiveType,
                                         indexCount: mesh.submeshes[0].indexCount,
                                         indexType: mesh.submeshes[0].indexType,
                                         indexBuffer: indexBuffer.buffer,
                                         indexBufferOffset: indexBuffer.offset)
    commandEncoder.endEncoding()
}
Presenting to the drawable is handled in a different place.
How should I properly render my 3D object using Metal?
I made this public repo: https://github.com/danielrosero/ios-touchingMetal, and I think it is a great starting point for 3D rendering with Metal, with textures and a compute function.
You should just change the models inside; check Renderer.swift's init(view: MTKView) method.
// Create the MTLTextureLoader options that we need according to each model case. Some of them are flipped, and so on.
let textureLoaderOptionsWithFlip: [MTKTextureLoader.Option : Any] = [.generateMipmaps : true, .SRGB : true, .origin : MTKTextureLoader.Origin.bottomLeft]
let textureLoaderOptionsWithoutFlip: [MTKTextureLoader.Option : Any] = [.generateMipmaps : true, .SRGB : true]

// Initializing the models, set their position, scale and do a rotation transformation
// Cat model
cat = Model(name: "cat", vertexDescriptor: vertexDescriptor, textureFile: "cat.tga", textureLoaderOptions: textureLoaderOptionsWithFlip)
cat.transform.position = [-1, -0.5, 1.5]
cat.transform.scale = 0.08
cat.transform.rotation = vector_float3(0, radians(fromDegrees: 180), 0)

// Dog model
dog = Model(name: "dog", vertexDescriptor: vertexDescriptor, textureFile: "dog.tga", textureLoaderOptions: textureLoaderOptionsWithFlip)
dog.transform.position = [1, -0.5, 1.5]
dog.transform.scale = 0.018
dog.transform.rotation = vector_float3(0, radians(fromDegrees: 180), 0)
This is the way I import models in my implementation; check Model.swift:
//
// Model.swift
// touchingMetal
//
// Created by Daniel Rosero on 1/8/20.
// Copyright © 2020 Daniel Rosero. All rights reserved.
//

import Foundation
import MetalKit

// This extension allows creating a MTLTexture attribute inside this Model class
// in order to be identified and used in the Renderer. This eases the loading
// in case of multiple models in the scene.
extension Model : Texturable {
}

class Model {
    let mdlMeshes: [MDLMesh]
    let mtkMeshes: [MTKMesh]
    var texture: MTLTexture?
    var transform = Transform()
    let name: String

    // In order to create a model, you need to pass a name to use as an identifier,
    // a reference to the vertexDescriptor, the image name with extension for the texture,
    // and the dictionary of MTKTextureLoader.Options.
    init(name: String, vertexDescriptor: MDLVertexDescriptor, textureFile: String, textureLoaderOptions: [MTKTextureLoader.Option : Any]) {
        let assetUrl = Bundle.main.url(forResource: name, withExtension: "obj")
        let allocator = MTKMeshBufferAllocator(device: Renderer.device)
        let asset = MDLAsset(url: assetUrl, vertexDescriptor: vertexDescriptor, bufferAllocator: allocator)
        let (mdlMeshes, mtkMeshes) = try! MTKMesh.newMeshes(asset: asset, device: Renderer.device)
        self.mdlMeshes = mdlMeshes
        self.mtkMeshes = mtkMeshes
        self.name = name
        texture = setTexture(device: Renderer.device, imageName: textureFile, textureLoaderOptions: textureLoaderOptions)
    }
}
If the 3D model is not triangulated properly, it will misbehave in Metal. To render a 3D model correctly, turn on the Triangulate Faces option when exporting from your modeling software to an OBJ file. This converts all faces to triangles, so Metal will not have to re-triangulate them. Note that this process may change the vertex order, but the 3D model itself will not change; only the order of its vertices will.
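As a quick way to confirm the export worked before debugging anything else, you can check that every submesh actually arrived as triangles. A small sketch against the MTKMesh API, where meshes is the array returned by MTKMesh.newMeshes:
// Each MTKSubmesh reports the primitive type it will be drawn with;
// anything other than .triangle suggests the OBJ wasn't fully triangulated.
for mesh in meshes {
    for submesh in mesh.submeshes where submesh.primitiveType != .triangle {
        print("Warning: submesh '\(submesh.name)' has non-triangle topology")
    }
}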

Proper way of Encoding image as base64String

Currently I am trying to send an image to the backend to upload it in my project. I have seen all the possible answers on Stack Overflow and elsewhere, but I cannot successfully send the data to the backend. Even when I do send it, the image decoded on the backend side is not in the proper format, due to some problem (most probably whitespace, as far as I can tell).
Code to encode:
let imageData1 : NSData = UIImageJPEGRepresentation(slctdImage, 0.1)!
let base64StringNew1 = imageData1.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0))
Now the interesting part:
When I decode that string locally using Swift code, I get the image back and can display it in an image view. But when I paste the same string into an online Base64 converter, I get no result.
Code used for decoding:
let decodedData = NSData(base64EncodedString:base64StringNew1, options: NSDataBase64DecodingOptions.IgnoreUnknownCharacters)
So what might the problem be? Can anyone please suggest the correct way to upload images to a backend using a Base64 string?
Try this:
func encodeImage(dataImage: UIImage) -> String {
    let imageData = UIImagePNGRepresentation(dataImage)
    return imageData!.base64EncodedStringWithOptions([])
}
I checked the output in http://codebeautify.org/base64-to-image-converter and it works.
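For reference, the same round trip with current Swift API names (Data's built-in Base64 support; jpegData(compressionQuality:) replaces UIImageJPEGRepresentation) might look like this sketch:
import UIKit

// Encode: compress the image, then Base64-encode the bytes.
func encodeImage(_ image: UIImage) -> String? {
    guard let data = image.jpegData(compressionQuality: 0.9) else { return nil }
    return data.base64EncodedString()
}

// Decode: reverse the steps; returns nil for invalid Base64 input.
func decodeImage(_ base64: String) -> UIImage? {
    guard let data = Data(base64Encoded: base64, options: .ignoreUnknownCharacters) else { return nil }
    return UIImage(data: data)
}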
Below is code for image encoding:
let image: UIImage = imgProfilePic.image!
let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.3, 0.3))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen
UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))
let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

var imageData = UIImageJPEGRepresentation(scaledImage, 0.9)
var base64String = imageData.base64EncodedStringWithOptions(NSDataBase64EncodingOptions(rawValue: 0)) // encode the image

var cd = CoreDataUser(pstrContext: "this")
var params = "strUsername=" + cd.getUsername()
params = params + "&strPassword=" + cd.getPassword()
params = params + "&blbProfilePic=" + base64String
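One caution about the params string above: Base64 output can contain '+', '/' and '=', and in a form-encoded body a raw '+' is decoded as a space on the server, which corrupts the image (very likely the whitespace problem described in the question). Percent-encoding the value before appending avoids this; a sketch using current Swift naming:
// Percent-encode everything outside the alphanumeric set so '+', '/' and '='
// survive application/x-www-form-urlencoded transport intact.
let safeBase64 = base64String.addingPercentEncoding(withAllowedCharacters: .alphanumerics) ?? base64String
params = params + "&blbProfilePic=" + safeBase64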
And the PHP code where the Base64 string is decoded and displayed in the browser:
if ($rows) {
    foreach ($rows as $row) {
        $data = base64_decode($row["fblbProfilePic"]);
        $image = imagecreatefromstring($data);
        header('Content-Type: Image/jpeg');
        imagejpeg($image);
        //file_put_contents("test.jpg", $data);
        //var_dump($data);
        //echo base64_decode($row["fblbPicture"]);
        //echo '<img src="data:image/jpg;base64,' . $row["fblbPicture"] . '" />';
    }
}
