If this seems like a dupe, sorry. I'm trying to ask a very specific question, and I'm not sure my searching has really led me to the right place. Anyway, here's the setup: take a picture with the iPhone camera, turn it into Base64 string data, shove it up the wire to a Node API, and turn that into a file to push onto S3. Pretty straightforward.
General disclaimers apply; I'd prefer to use a B64 string in JSON for simplicity and universality, and I'll withhold further comments on the silliness of form-encoded uploads :)
Here's my very simple Swift code to produce B64, turn it back into an image, and display it as a proof that the stuff works - at least in Apple land.
Note: "redButton" is one of the assets in my app. I switched to it for the sake of sending much smaller packets to the API while testing, but the results are the same. Thanks.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    // build data packet
    imagePicker.dismiss(animated: true, completion: nil)
    //let image:UIImage = (info[UIImagePickerControllerOriginalImage] as? UIImage)!
    let image: UIImage = UIImage(named: "redButton")!
    showAlert(title: "size", msg: String(describing: image.size))
    let data = UIImageJPEGRepresentation(image, 0.5)
    let b64 = data!.base64EncodedData() //.base64EncodedString() //options: Data.Base64EncodingOptions.lineLength64Characters)
    let b64String = b64.base64EncodedString()
    debugPrint("len=" + String(describing: b64String.lengthOfBytes(using: String.Encoding.utf8)))
    let newDataString = Data.init(base64Encoded: b64String)
    let newData = Data.init(base64Encoded: newDataString!)
    let newImage = UIImage.init(data: newData!)
    tmpImage.image = newImage
}
That all works. I see the image in the little UIImage.
So at least fully in the Apple camp, the img->b64->img works.
However...
However, when I copy the resulting blob of Base64-encoded string data and manually paste it into the src attribute of a data: URI, it does NOT display the expected image; the browser just shows a broken image. I.e.:
<html>
<body>
<img src="data:image/jpg;base64,LzlqLzRBQVFTa1pKUmdBQkFRQUFTQUJJ (brevity)...">
</body>
</html>
So, am I doing something wrong in my proofing in the HTML page? Am I expecting the wrong thing from what Apple calls Base64 string data? Or is my sleep-deprived brain just missing something painfully obvious?
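One way to rule out the HTML side is to build a known-good data: URI from bytes you control and compare it against the app-generated one. A minimal Node sketch (the four sample bytes are just a JPEG header stub, not a real image):

```javascript
// A JPEG always begins with the SOI marker FF D8. These four bytes are
// only a header stub, enough to show what a correct data: URI looks like.
// For a real test, read an actual file instead, e.g. fs.readFileSync('test.jpg').
const bytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);

// Encode exactly once and prepend the data: URI header.
const dataUri = 'data:image/jpeg;base64,' + bytes.toString('base64');
console.log(dataUri); // data:image/jpeg;base64,/9j/4A==
```

A correctly encoded JPEG data URI always starts with `data:image/jpeg;base64,/9j/`; the blob in the question starts with `LzlqLzRB...` instead, which is itself the Base64 encoding of the text `/9j/...`.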
It eventually gets sent to the server in an HTTP POST, per normal means, as a dictionary turned into JSON via the JSON encoding support in newer Swift.
var params = ["image": [ "content_type": "image/jpg", "filename":"test.jpg", "file_data": b64String] ]
And for the sake of completeness, here's the Node code where I reconstitute this data into binary and push it up to S3. In every case, the file is not recognized as a proper JPG.
var b64 = req.body.image.file_data;
var base64data = Buffer.from(b64, 'base64'); // Buffer.from replaces the deprecated new Buffer(...) in current Node
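As a debugging aid, here is a hedged sketch of a magic-byte sanity check on the decoded bytes; the sample string stands in for `req.body.image.file_data`, which isn't available outside the request handler:

```javascript
// Decode the Base64 payload exactly once.
const b64 = '/9j/4A=='; // stand-in for req.body.image.file_data
const fileBytes = Buffer.from(b64, 'base64');

// Every JPEG begins with the SOI marker FF D8. If this check fails,
// the string was probably Base64-encoded more than once upstream.
const looksLikeJpeg = fileBytes[0] === 0xff && fileBytes[1] === 0xd8;
console.log(looksLikeJpeg); // true
```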
I'm on the home stretch of a crunch-time product that we're shoving at investors next week, and apparently this is a critical feature to show off for that meeting.
Tell me I'm missing something painfully obvious, and that I'm stupid. I welcome it, please. It has to be something stupid, right?
Thanks!
Here:
let b64 = data!.base64EncodedData()
let b64String = b64.base64EncodedString()
you encode the given data twice. It should be just
let b64String = data!.base64EncodedString()
Your “in Apple land” test works because
let newDataString = Data.init(base64Encoded: b64String )
let newData = Data.init(base64Encoded: newDataString! )
also decodes the Base64 twice. That would now be just
let newData = Data(base64Encoded: b64String)
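The effect is easy to reproduce outside Swift. A minimal Node sketch of the same bug (the four sample bytes are a made-up JPEG header stub):

```javascript
// Any JPEG starts with FF D8 FF; these four bytes are a header stub.
const jpegBytes = Buffer.from([0xff, 0xd8, 0xff, 0xe0]);

// Correct: encode once. The result starts with "/9j/", the telltale
// Base64 prefix of JPEG data.
const once = jpegBytes.toString('base64');
console.log(once); // /9j/4A==

// Bug: Base64-encode the Base64 string itself a second time.
const twice = Buffer.from(once, 'utf8').toString('base64');

// A browser decodes a data: URI exactly once, so it recovers the inner
// Base64 text rather than the JPEG bytes, and shows a broken image.
const decodedOnce = Buffer.from(twice, 'base64');
console.log(decodedOnce.toString('utf8')); // /9j/4A==  (text, not image bytes)
console.log(decodedOnce[0] === 0xff);      // false: no JPEG magic bytes
```

This also explains why the blob in the question begins with `LzlqLzRB...`: that is the Base64 encoding of the text `/9j/4AAQ...`, which is itself Base64.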
Related
I'm currently trying to do something fairly simple: encode an image to Base64 and then decode it back in Swift 5.
I've been able to see that my image is indeed correct, but it seems like whenever I try to decode the Base64 string back in Swift, it won't load into my UIImageView at all.
I've run the Base64 string through online decoders and the image is correctly formatted.
Did I do anything stupid? Thanks so much for any help you can provide!
The Encoding Process (currently seems to be working)
let b64 = UIImagePNGRepresentation(tempImage)
var tempImage2 = b64?.base64EncodedString(options: .endLineWithLineFeed)
if tempImage2 == nil {
    tempImage2 = ""
}
And the decoding process / loading into the image view:
if let data = Data(base64Encoded: tempImage2!, options: .ignoreUnknownCharacters) {
    let useImage = UIImage(data: data)
    imageView.image = useImage
    print("and now the image view should show it??")
}
On printing the Base64 string, everything seems correct. As soon as I run the decoding, however, nothing is loaded into my UIImageView, just blankness.
Turns out the above code works for encoding and decoding Base64 in Swift. The issue was with what happened after this code.
I was messing up some things with the imageView, reusing it as a subview and greatly complicating the process.
Thanks for the help, everyone!
Is it possible to compare two PFFiles?
I am trying to check whether a downloaded PFFile:
let imageFile = object["groupImage"] as! PFFile
is equal to a mock data created by me like this:
let imageData = UIImagePNGRepresentation(UIImage(named: "dot.png")!)
uploadMock = PFFile(name: "mock", data: imageData!)
The comparison, however, does not work:
if (uploadMock?.isEqual(imageFile))! {
    print(true)
} else {
    print(false)
}
will always give false, even though the images are the same.
It seems it would be necessary to download the image first. I tried to work around it by checking the filename (which used to work, until I transferred to another database).
Any ideas?
Alright, I am not familiar with structs or the situation I am dealing with in Swift, but what I need to do is create an iMessage in my iMessage app extension with a sticker in it, meaning the image part of the iMessage is set to the sticker.
I have pored over Apple's docs and https://www.captechconsulting.com/blogs/ios-10-imessages-sdk-creating-an-imessages-extension but I do not understand how to do this or really how structs work. I read up on structs, but that has not helped me accomplish what Apple does in their sample code (downloadable from Apple).
What Apple does is first compose a message, which I understood, taking their struct as a parameter, but I take a sticker instead:
guard let conversation = activeConversation else { fatalError("Expected a conversation") }

// Create a new message with the same session as any currently selected message.
let message = composeMessage(with: MSSticker, caption: "sup", session: conversation.selectedMessage?.session)

// Add the message to the conversation.
conversation.insert(message) { error in
    if let error = error {
        print(error)
    }
}
They then do this (this is directly from sample code) to compose the message:
fileprivate func composeMessage(with iceCream: IceCream, caption: String, session: MSSession? = nil) -> MSMessage {
    var components = URLComponents()
    components.queryItems = iceCream.queryItems

    let layout = MSMessageTemplateLayout()
    layout.image = iceCream.renderSticker(opaque: true)
    layout.caption = caption

    let message = MSMessage(session: session ?? MSSession())
    message.url = components.url!
    message.layout = layout

    return message
}
Basically, this line is what I'm having the problem with, as I need to set my sticker as the image:
layout.image = iceCream.renderSticker(opaque: true)
Apple does a whole complicated function in renderSticker to pull the image part out of their stickers, and I have tried their way, but I think this is better:
let img = UIImage(contentsOfURL: square.imageFileURL)
layout.image = img
layout.image needs a UIImage, and I can get the imageFileURL from the sticker; I just can't get it into a UIImage. I get an error that it does not match available overloads.
What can I do here? How can I insert the image from my sticker into a message? How can I get an image from its imageFileURL?
I'm not sure what exactly the question is, but I'll try to address as much as I can --
As rmaddy mentioned, if you want to create an image given a file location, simply use the UIImage constructor he specified.
As far as sending just a sticker (which you asked about in the comments on rmaddy's answer), you can insert just a sticker into an iMessage conversation. This functionality is available as part of an MSConversation. Here is a link to the documentation:
https://developer.apple.com/reference/messages/msconversation/1648187-insert
The active conversation can be accessed from your MSMessagesAppViewController.
There is no init(contentsOfURL:) initializer for UIImage. The closest one is init(contentsOfFile:).
To use that one with your file URL you can do:
let img = UIImage(contentsOfFile: square.imageFileURL.path)
This might be an amateur question, but although I have searched Stack Overflow extensively, I haven't been able to find an answer to my specific problem.
I was successful in creating a GIF file from an array of images by following a GitHub example:
func createGIF(with images: [NSImage], name: NSURL, loopCount: Int = 0, frameDelay: Double) {
    let destinationURL = name
    let destinationGIF = CGImageDestinationCreateWithURL(destinationURL, kUTTypeGIF, images.count, nil)!

    // This dictionary controls the delay between frames.
    // If you don't specify this, CGImage will apply a default delay.
    let properties = [
        (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): frameDelay]
    ]

    for img in images {
        // Convert an NSImage to CGImage, fitting within the specified rect
        let cgImage = img.CGImageForProposedRect(nil, context: nil, hints: nil)!

        // Add the frame to the GIF image
        CGImageDestinationAddImage(destinationGIF, cgImage, properties)
    }

    // Write the GIF file to disk
    CGImageDestinationFinalize(destinationGIF)
}
Now, I would like to turn the actual GIF into NSData so I can upload it to Firebase, and be able to retrieve it on another device.
To achieve my goal, I have two options: either find out how to extract the GIF created by the code above (which seems to be written directly to the file), or use the images in the function's parameters to create a new GIF but keep it in NSData format.
Does anybody have any ideas on how to do this?
Since nobody stepped in for over six months, I will just put the answer from @Sachin Vas's comment here:
You can get the data using NSData(contentsOf: URL)
I'm working with AEXML to write and read XML documents in Swift. I have the writing working no problem, and I have everything set up for the reading, but I can't seem to turn the saved XML text into the document object: it only ever gets the first element and none of the children. I've tried removing all the line breaks and spaces, but still nothing. The content reads into the String just fine, and I've tried converting the data back to a string; it isn't getting mangled in conversion. Is this even possible with AEXML, or am I just doing it wrong?
let doc = AEXMLDocument()
let content = try String(contentsOf: NSURL(string: file) as! URL)
let data = content.data(using: .utf8)!
let xml = NSString(data: data, encoding: String.Encoding.utf8.rawValue)
try doc.loadXMLData(data)
So I figured out that I was actually using an outdated version of AEXML, which clearly wasn't working anymore. The updated code looks like this:
let content = try String(contentsOf: NSURL(string: file) as! URL)
let data = content.data(using: .utf8)!
let options = AEXMLOptions()
let doc = try AEXMLDocument(xml: data, options: options)