How to get more specific information about an image using Firebase Cloud Vision (ML Kit) - iOS

I am using the Firebase Cloud Vision (ML Kit) API to label images.
I am able to get information about an image back, but it is not specific.
Example: when I take and upload a picture of a MacBook, the output is "notebook, laptop, electronic device, etc.".
But I want its brand name, like "Apple MacBook".
I have seen a few apps doing this, but I could not find any information on how, so I am posting here.
Please suggest or guide me if anyone has come across this.
My Code:
func pickedImage(image: UIImage) {
    imageView.image = image
    imageView.contentMode = .scaleAspectFit
    guard let image = imageView.image else { return }

    // let onCloudLabeler = Vision.vision().cloudImageLabeler(options: options)
    let onCloudLabeler = Vision.vision().cloudImageLabeler()

    // Define the metadata for the image.
    let imageMetadata = VisionImageMetadata()
    imageMetadata.orientation = .topLeft

    // Initialize a VisionImage object with the given UIImage.
    let visionImage = VisionImage(image: image)
    visionImage.metadata = imageMetadata

    onCloudLabeler.process(visionImage) { labels, error in
        guard error == nil, let labels = labels, !labels.isEmpty else {
            let errorString = error?.localizedDescription ?? "No results returned."
            print("Label detection failed with error: \(errorString)")
            //self.showResults()
            return
        }

        var results = [String]()
        let resultsText = labels.map { label -> String in
            results.append(label.text)
            return "Label: \(label.text), " +
                "Confidence: \(label.confidence ?? 0), " +
                "EntityID: \(label.entityID ?? "")"
        }.joined(separator: "\n")
        //self.showResults()

        print(results.count)
        print(resultsText)
        self.labelTxt.text = results.joined(separator: ",")
        results.removeAll()
    }
}

If you've seen other apps doing something that your app doesn't do, those other apps are likely using a different ML model than the one you're using.
If you want to accomplish the same using ML Kit for Firebase, you can use a custom model that you either trained yourself or got from another source.

As Puf said, the apps you saw are probably using their own custom ML model. ML Kit now supports creating custom image classification models from your own training data. Check out the AutoML Vision Edge functionality here: https://firebase.google.com/docs/ml-kit/automl-vision-edge
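For illustration, here is a minimal sketch of what labeling with a bundled AutoML Vision Edge model looks like with the ML Kit for Firebase SDK. The manifest path, confidence threshold, and function name are assumptions, and the exact class and initializer names vary between SDK versions, so treat this as the shape of the API rather than a drop-in implementation:

import FirebaseMLVision
import FirebaseMLVisionAutoML

func labelWithCustomModel(_ visionImage: VisionImage) {
    // "automl/manifest.json" is an assumed path to the model manifest
    // downloaded from the Firebase console and added to the app bundle.
    guard let manifestPath = Bundle.main.path(forResource: "manifest",
                                              ofType: "json",
                                              inDirectory: "automl") else { return }
    let localModel = AutoMLLocalModel(manifestPath: manifestPath)

    // Configure an on-device labeler backed by the custom model.
    let options = VisionOnDeviceAutoMLImageLabelerOptions(localModel: localModel)
    options.confidenceThreshold = 0.5
    let labeler = Vision.vision().onDeviceAutoMLImageLabeler(options: options)

    // Process the image exactly like the cloud labeler in the question.
    labeler.process(visionImage) { labels, error in
        guard error == nil, let labels = labels else { return }
        for label in labels {
            print("Label: \(label.text), Confidence: \(label.confidence ?? 0)")
        }
    }
}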

Related

Unable to detect QRCode from image using MLKit

I am using ML Kit to detect a QR code from an image. For Android it works properly; for iOS I am using the pod below:
pod 'GoogleMLKit/BarcodeScanning'
Here is the sample code that detects a QR code from an image picked from the gallery. Every time, the features array comes back empty.
let format: BarcodeFormat = BarcodeFormat.all
let barcodeOptions = BarcodeScannerOptions(formats: format)
let visionImage = VisionImage(image: image)
visionImage.orientation = image.imageOrientation
let barcodeScanner = BarcodeScanner.barcodeScanner(options: barcodeOptions)
barcodeScanner.process(visionImage) { features, error in
    guard error == nil, let features = features, !features.isEmpty else {
        // Error handling
        return
    }
    // Recognized barcodes
    print("Data :: \(features.first?.rawValue ?? "")")
}
We noticed this may happen when there is no padding around the QR code. I tried adding some padding to the image, and it works after that. Could you confirm that it works for you?
In the meantime, ML Kit is also working on a public document about this limitation. Thanks for reporting this.
Julie from the ML Kit team
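For reference, here is a minimal sketch of one way to add that padding (quiet zone) before scanning. The addingPadding helper, the white background, and the 40-point default are assumptions for illustration, not part of the ML Kit API:

import UIKit

// Draws the original image centered on a larger white canvas so the QR code
// gets a quiet zone (padding) around it before it is handed to the scanner.
func addingPadding(to image: UIImage, padding: CGFloat = 40) -> UIImage {
    let paddedSize = CGSize(width: image.size.width + padding * 2,
                            height: image.size.height + padding * 2)
    let renderer = UIGraphicsImageRenderer(size: paddedSize)
    return renderer.image { context in
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: paddedSize))
        image.draw(at: CGPoint(x: padding, y: padding))
    }
}

// Usage: scan the padded copy instead of the original gallery image.
// let visionImage = VisionImage(image: addingPadding(to: image))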

Swift: Process UIImage data for use in Firebase custom TFLite model

I am using Swift, Firebase, and Tensorflow to build an image recognition model. I have a re-trained MobileNet model that takes an input array of [1,224,224,3] copied into my Xcode bundle, and when I try to add data from an image as an input, I get the error: Input 0 should have 602112 bytes, but found 627941 bytes. I am using the following code:
let input = ModelInputs()
do {
    let newImage = image.resizeTo(size: CGSize(width: 224, height: 224))
    let data = UIImagePNGRepresentation(newImage)
    // Store input data in `data`
    // ...
    try input.addInput(data)
    // Repeat as necessary for each input index
} catch let error as NSError {
    print("Failed to add input: \(error.localizedDescription)")
}
interpreter.run(inputs: input, options: ioOptions) { outputs, error in
    guard error == nil, let outputs = outputs else {
        print(error!.localizedDescription) // ERROR BEING CALLED HERE
        return
    }
    // Process outputs
    print(outputs)
    // ...
}
How can I re-process the image data so that it is 602112 bytes? I am quite confused; if someone could help me, it would be great :)
Please check out the Quick Start iOS demo app in Swift on how to use a custom TFLite model:
https://github.com/firebase/quickstart-ios/tree/master/mlmodelinterpreter
In particular, I think this is what you are looking for:
https://github.com/firebase/quickstart-ios/blob/master/mlmodelinterpreter/MLModelInterpreterExample/UIImage%2BTFLite.swift#L47
Good luck!
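For context, the size mismatch comes from UIImagePNGRepresentation: it returns compressed PNG bytes, while the model expects a raw 1 x 224 x 224 x 3 tensor of Float32 values, i.e. 224 * 224 * 3 * 4 = 602,112 bytes. Below is a minimal sketch (not the quickstart helper itself) of extracting and normalizing the RGB pixels; the function name rgbData and the [0, 1] normalization are assumptions (some retrained MobileNets expect [-1, 1]):

import UIKit
import CoreGraphics

// Converts a UIImage into the raw 1 x 224 x 224 x 3 Float32 buffer the
// re-trained MobileNet expects (224 * 224 * 3 * 4 bytes = 602,112 bytes).
func rgbData(from image: UIImage,
             size: CGSize = CGSize(width: 224, height: 224)) -> Data? {
    guard let cgImage = image.cgImage else { return nil }
    let width = Int(size.width)
    let height = Int(size.height)
    let bytesPerPixel = 4

    // Redraw the image into a fixed-size RGBA bitmap so the pixel count
    // matches the model's input dimensions.
    var pixelData = [UInt8](repeating: 0, count: width * height * bytesPerPixel)
    let drawn: Bool = pixelData.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: width * bytesPerPixel,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.noneSkipLast.rawValue)
        else { return false }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))
        return true
    }
    guard drawn else { return nil }

    // Drop the alpha channel and normalize each RGB byte to a Float32 in [0, 1].
    var floats = [Float32]()
    floats.reserveCapacity(width * height * 3)
    for pixelIndex in 0..<(width * height) {
        let offset = pixelIndex * bytesPerPixel
        floats.append(Float32(pixelData[offset]) / 255.0)     // R
        floats.append(Float32(pixelData[offset + 1]) / 255.0) // G
        floats.append(Float32(pixelData[offset + 2]) / 255.0) // B
    }
    return floats.withUnsafeBufferPointer { Data(buffer: $0) }
}

// Usage (hedged): pass the resulting Data to input.addInput(...) instead of the PNG bytes.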

Why is the Vision framework unable to align two images?

I'm trying to take two images using the camera, and align them using the iOS Vision framework:
func align(firstImage: CIImage, secondImage: CIImage) {
    // Shadow the parameter so the completion handler can overwrite it with the aligned copy.
    var secondImage = secondImage
    let request = VNTranslationalImageRegistrationRequest(
        targetedCIImage: firstImage) { request, error in
        if error != nil {
            fatalError()
        }
        let observation = request.results!.first
            as! VNImageTranslationAlignmentObservation
        secondImage = secondImage.transformed(
            by: observation.alignmentTransform)
        let compositedImage = firstImage.applyingFilter(
            "CIAdditionCompositing",
            parameters: ["inputBackgroundImage": secondImage])
        // Save the compositedImage to the photo library.
    }
    try! visionHandler.perform([request], on: secondImage)
}
let visionHandler = VNSequenceRequestHandler()
But this produces grossly mis-aligned images:
You can see that I've tried three different types of scenes — a close-up subject, an indoor scene, and an outdoor scene. I tried more outdoor scenes, and the result is the same in almost every one of them.
I was expecting a slight misalignment at worst, but not such a complete misalignment. What is going wrong?
I'm not passing the orientation of the images into the Vision framework, but that shouldn't be a problem for aligning images. It's a problem only for things like face detection, where a rotated face isn't detected as a face. In any case, the output images have the correct orientation, so orientation is not the problem.
My compositing code is working correctly. It's only the Vision framework that's a problem. If I remove the calls to the Vision framework and put the phone on a tripod, the composition works perfectly; there's no misalignment. So the problem is the Vision framework.
This is on iPhone X.
How do I get Vision framework to work correctly? Can I tell it to use gyroscope, accelerometer and compass data to improve the alignment?
You should set secondImage as the targeted image, and perform the handler with firstImage.
I used your compositing approach.
Check out this example from MLBoy:
let request = VNTranslationalImageRegistrationRequest(targetedCIImage: image2, options: [:])
let handler = VNImageRequestHandler(ciImage: image1, options: [:])
do {
    try handler.perform([request])
} catch let error {
    print(error)
}
guard let observation = request.results?.first as? VNImageTranslationAlignmentObservation else { return }
let alignmentTransform = observation.alignmentTransform
image2 = image2.transformed(by: alignmentTransform)
let compositedImage = image1.applyingFilter("CIAdditionCompositing", parameters: ["inputBackgroundImage": image2])

Adding image from Firebase to UITableViewCell

I want to retrieve the image that is stored in the storage of a user and place it next to his name in a custom UITableViewCell. The problem now is that the table view loads before the images are done downloading (I think?), causing the application to crash because the image array is nil.
So what is the correct way to load the table view? For the user experience, I think the cell should be shown even if the images aren't done downloading, presenting a default image that is saved in the assets. I thought about making an array of UIImages that point to the default loading asset and swapping each one for the profile picture once it is done downloading, but I really have no clue how to do that. This is what I got so far for downloading the image:
let storage = FIRStorage.storage()
let storageRef = storage.reference(forURL: "link.appspot.com")
channelRef?.observeSingleEvent(of: .value, with: { (snapshot) in
    if let snapDict = snapshot.value as? [String: AnyObject] {
        for each in snapDict {
            let UIDs = each.value["userID"] as? String
            if let allUIDS = UIDs {
                let profilePicRef = storageRef.child(allUIDS + "/profile_picture.png")
                profilePicRef.data(withMaxSize: 1 * 500 * 500) { data, error in
                    if let error = error {
                        print("Failed to download profile picture: \(error.localizedDescription)")
                    }
                    if data != nil {
                        self.playerImages.append(UIImage(data: data!)!)
                    }
                }
            }
            let userNames = each.value["username"] as? String
            if let users = userNames {
                self.players.append(users)
            }
        }
    }
    self.tableView.reloadData()
})
This is in the cellForRow
cell.playersImage.image = playerImages[indexPath.row] as UIImage
My rules (I haven't changed them from the defaults):
service firebase.storage {
  match /b/omega-towers-f5beb.appspot.com/o {
    match /{allPaths=**} {
      allow read, write: if request.auth != null;
    }
  }
}
Thank you.
Regarding user experience, you are correct. It is standard to show some sort of default image while loading an image from a URL. A great library for image caching and showing default assets in the meantime is AlamofireImage.
Vandan Patel's answer is correct in saying you need to ensure your array is not nil when loading the table view. Using the AlamofireImage library, you are given a completion block to handle any extra work you would like to do with your image.
This is all assuming you are getting a correct image URL back for your Firebase users.
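A minimal sketch of that pattern, assuming a custom PlayerCell with a playersImage outlet and a parallel array of profile-picture URLs; older AlamofireImage releases spell the method af_setImage(withURL:placeholderImage:), newer ones af.setImage:

import UIKit
import AlamofireImage

class PlayerCell: UITableViewCell {
    @IBOutlet weak var playersImage: UIImageView!
    @IBOutlet weak var nameLabel: UILabel!
}

class PlayersViewController: UITableViewController {
    var players = [String]()
    var profilePicURLs = [URL]()

    override func tableView(_ tableView: UITableView,
                            numberOfRowsInSection section: Int) -> Int {
        return players.count
    }

    override func tableView(_ tableView: UITableView,
                            cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "PlayerCell",
                                                 for: indexPath) as! PlayerCell
        cell.nameLabel.text = players[indexPath.row]
        // Show the bundled default image immediately; AlamofireImage swaps in the
        // downloaded picture once it arrives and caches the result for reuse.
        cell.playersImage.af_setImage(withURL: profilePicURLs[indexPath.row],
                                      placeholderImage: UIImage(named: "defaultProfile"))
        return cell
    }
}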
You should call tableView.reloadData() when the images are done downloading. One important thing: initialize your playerImages as playerImages = [UIImage]() instead of playerImages: [UIImage]!. If it starts out as an empty array, it can never be nil.
Update:
if let players = playerImages {
    //code
}

Swift - How to retrieve multiple images at once (GCD)?

Let me give you some insight into my application.
To put it briefly, I am creating a social-networking app. Each post consists of an image, a profile picture, and a caption, and each post lives in my MySQL database. I am using my own framework to retrieve each post. However, once I retrieve a post I still have to fetch the profile picture and image using the URLs that came back from the database. I would like to retrieve all the images at once rather than one after another.
As of now, there are about 5 posts in the database. Loading the necessary images for one post takes about 4 seconds. Right now I load the images for one post, then retrieve the next, in sequential order, so the whole process takes around 20 seconds. If I had 50 posts, it would take an extremely long time to load them all. I have some knowledge of GCD (Grand Central Dispatch queues), but I don't know how to implement it in my app.
Here is my code for retrieving my posts and images:
ConnectionManager.sharedInstance.retrievePosts(UserInformationInstance.SCHOOL) {
    (result: AnyObject) in
    if let posts = result as? [[String: AnyObject]] {
        print("Retrieved \(posts.count) posts.")
        for post in posts {
            let postIDCurrent = post["id"] as? Int
            var UPVOTES = 0;
            var UPVOTED: Bool!
            var query = ""
            if let profilePictureCurrent = post["profile_picture"] {
                // Loading profile picture image
                let url = NSURL(string: profilePictureCurrent as! String)
                let data = NSData(contentsOfURL: url!)
                let image = UIImage(data: data!)
                UserInformationInstance.postsProfilePictures.append(image!)
                print("added profile pic")
            } else {
                print("error")
            }
            if let postPictureCurrent = post["image"] {
                if (postPictureCurrent as! String != "") {
                    // Loading image associated with post
                    let url = NSURL(string: postPictureCurrent as! String)
                    let data = NSData(contentsOfURL: url!)
                    let image = UIImage(data: data!)
                    let imageArray: [AnyObject] = [postIDCurrent!, image!]
                    UserInformationInstance.postsImages.append(imageArray)
                    print("added image pic")
                }
            } else {
                print("error")
            }
            UserInformationInstance.POSTS.append(post)
        }
    } else {
        self.loadSearchUsers()
    }
}
So my question is, how can I retrieve all the images at the same time instead of retrieving one after the other?
It would be great if someone could give an explanation as well as some code :)
I would recommend revising your approach. If your server is fine (it's not busy and is well reachable), so that downloading is limited by the device's network bandwidth (X Mbps), then it does not matter whether you download the images concurrently or sequentially; the total time is the same.
Let me show this. Downloading 10 files of size Y MB simultaneously takes as long in total as downloading them one after another, because the speed per file is 10 times slower:
X/10 - download speed per file
Time = Amount / Speed
T = Y / (X/10) = 10 * Y / X
Now if you are downloading sequentially:
T = 10 * (Y / X) = 10 * Y / X
Instead, I would recommend showing the posts immediately once you retrieve them from the database, then starting the image downloads asynchronously and setting each image once it's downloaded. That's standard practice in the industry; consider the Facebook, Twitter, and Instagram apps.
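A minimal sketch of that approach; the loadImage(from:completion:) helper and the cell/property names are assumptions for illustration. Because each URLSession data task runs independently, the images for multiple posts download in parallel rather than one after another:

import UIKit

// Hypothetical helper: downloads an image off the main thread and hands the
// result back on the main queue, where all UIKit work must happen.
func loadImage(from url: URL, completion: @escaping (UIImage?) -> Void) {
    URLSession.shared.dataTask(with: url) { data, _, _ in
        let image = data.flatMap { UIImage(data: $0) }
        DispatchQueue.main.async {
            completion(image)
        }
    }.resume()
}

// In cellForRowAt: show the post (text, placeholder) immediately, then let the
// download fill the image in whenever it finishes.
//
// cell.postImageView.image = UIImage(named: "placeholder")
// if let url = URL(string: post["image"] as? String ?? "") {
//     loadImage(from: url) { image in
//         // Guard against reused cells showing a stale image.
//         if let image = image, tableView.indexPath(for: cell) == indexPath {
//             cell.postImageView.image = image
//         }
//     }
// }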
