Can we train CreateML on an iOS device using the user's data? - ios

I understand that we can train an ML model on macOS with the Xcode Create ML GUI, as well as in a macOS playground. The problem I have is training a similar model on the user's device, using their own data. I'm wondering if that's possible.
Can we train a CreateML text classifier on the user's device? I did some research but could not find an answer. Mostly, people talk about deploying a trained model to iOS, but I want to train on iOS.
P.S. I also had a look at updatable Core ML models, which do not seem to support text classifiers; they only support KNN models and shallow neural networks.
More specifically: can we even use this MLTextClassifier initializer to create a model on iOS? The conflicting information is that Apple's Create ML main page says you need to train on a Mac, but this API seems to indicate that it supports iOS, which really confuses me.
init(trainingData: [String : [String]], parameters: MLTextClassifier.ModelParameters)

The CreateML module does work on iOS (since iOS 15). It just doesn't work in the iOS simulator.
You can surround all your training code with
#if canImport(CreateML)
...
#endif
so that it only runs when you are on a real device. Admittedly, this is rather inconvenient...
As for how to use the CreateML API, you can follow the guide here. The code would look something like this. Note that I've updated some of the deprecated (since iOS 16) code in the guide to use the newest APIs.
import CoreML
import CreateML
import NaturalLanguage
import TabularData

// training...
let sentimentClassifier = try MLTextClassifier(trainingData: [
    "positive": [...],
    "negative": [...],
    "neutral": [...],
])

// write to file for later use...
let metadata = MLModelMetadata(author: "John Appleseed",
                               shortDescription: "A model trained to classify movie review sentiment",
                               version: "1.0")
try sentimentClassifier.write(to: URL(fileURLWithPath: "/path/to/save/SentimentClassifier.mlmodel"),
                              metadata: metadata)

// or use it immediately:
print(sentimentClassifier.prediction(from: "foo bar baz"))

// ... at some later point
// note: MLModel(contentsOf:) expects a *compiled* model, so compile the saved .mlmodel first
let compiledURL = try MLModel.compileModel(at: URL(fileURLWithPath: "/path/to/save/SentimentClassifier.mlmodel"))
let model = try MLModel(contentsOf: compiledURL)
let nlModel = try NLModel(mlModel: model)
print(nlModel.predictedLabel(for: "foo bar baz") ?? "no label")
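One practical note for on-device training: the "/path/to/save" placeholder above isn't writable inside an iOS app sandbox. Here is a minimal sketch (reusing sentimentClassifier and metadata from the snippet above, and keeping the placeholder file name) that saves to and loads from the app's Documents directory instead:
import CoreML
import NaturalLanguage

// Sketch: pick a writable location in the app sandbox for the trained model.
// `sentimentClassifier` and `metadata` are the values created in the snippet above.
let documentsURL = FileManager.default.urls(for: .documentDirectory,
                                            in: .userDomainMask)[0]
let modelURL = documentsURL.appendingPathComponent("SentimentClassifier.mlmodel")

try sentimentClassifier.write(to: modelURL, metadata: metadata)

// Later: compile the saved .mlmodel in place and load it for prediction.
let compiledURL = try MLModel.compileModel(at: modelURL)
let nlModel = try NLModel(mlModel: MLModel(contentsOf: compiledURL))
print(nlModel.predictedLabel(for: "foo bar baz") ?? "no label")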

Related

[iOS][AVFoundation] Need to get similar classes to MediaCodec in Android

Is there any class in iOS that returns encoder/decoder capabilities, just like Android's MediaCodec/MediaCodecList (https://developer.android.com/reference/android/media/MediaCodec)?
I need to get the fps/profile/level and width/height supported for each profile of the H.264 and HEVC codecs.
I found AVCaptureSession related to this, but it may not be what I need, since we only need to support AVPlayer (and the camera is not part of the flow).
AVFoundation supports only a very limited set of formats (see this StackOverflow answer).
So if you want to query for codec info, I recommend using ffmpeg; to be specific, ffmpeg-kit. Check here for the full API documentation.
I wrote you a sample of how to use it on iOS to query for codec info (or anything else you want). Please check it out:
Showcase
let mediaInfoSession = FFprobeKit.getMediaInformation(kSampleFilePath)
let mediaInfo = mediaInfoSession?.getMediaInformation()
let props = mediaInfo?.getAllProperties()
let duration = mediaInfo?.getDuration()
let bitRate = mediaInfo?.getBitrate()
...
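To get closer to the per-stream profile/level/fps details the question asks about, here is a rough sketch built on the same ffmpeg-kit session. The stream accessors (getStreams(), getCodec(), getWidth(), getAverageFrameRate()) and the "profile"/"level" property keys are my reading of the ffmpeg-kit / ffprobe output, so treat them as assumptions and verify against the current API docs:
import ffmpegkit

// Sketch: iterate the video streams reported by ffprobe and print codec details.
let session = FFprobeKit.getMediaInformation(kSampleFilePath)
if let streams = session?.getMediaInformation()?.getStreams() as? [StreamInformation] {
    for stream in streams where stream.getType() == "video" {
        // getAllProperties() exposes the raw ffprobe JSON for the stream,
        // which is where "profile" and "level" normally live.
        let props = stream.getAllProperties() as? [String: Any] ?? [:]
        print("codec:   \(stream.getCodec() ?? "?")")
        print("size:    \(stream.getWidth() ?? 0) x \(stream.getHeight() ?? 0)")
        print("fps:     \(stream.getAverageFrameRate() ?? "?")")
        print("profile: \(props["profile"] ?? "?")")
        print("level:   \(props["level"] ?? "?")")
    }
}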
Feel free to contact me.

iOS prediction app throws an error when predicting with the .mlmodel on iOS 14

I am trying to use my .mlmodel to predict on iOS 14, using the same method described in the Apple document linked below:
https://developer.apple.com/documentation/coreml/integrating_a_core_ml_model_into_your_app
I get the error below and cannot understand what is wrong. My two inputs and the output are all set as expected, and the model seems to have been converted correctly from TF2 to an mlmodel. Any suggestions on how to analyze and fix the issue?
/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-124.0.29/MPSNeuralNetwork/Filters/MPSCNNKernel.mm:752: failed assertion `[MPSCNNConvolution encode...] Error: destination may not be nil.
This is an error from Metal, which is used to run the model on the GPU. You can try running it on the CPU instead, to see if that works without errors. (If running on the CPU also gives errors, something in your model is wrong.)
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly
let model = YourModel(configuration: config)
...
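If you want to pin down whether only the GPU path is at fault, one approach is to run the same prediction under different compute-unit settings and compare. This is only a sketch: YourModel and inputFeatures below are placeholders for your auto-generated model class and a feature provider matching its two inputs, and the Metal assertion above aborts the process rather than throwing, so run the CPU-only case first.
import CoreML

// Diagnostic sketch: `YourModel` is a placeholder for your generated model class,
// `inputFeatures` for an MLFeatureProvider that matches its inputs.
func runPrediction(label: String,
                   units: MLComputeUnits,
                   inputFeatures: MLFeatureProvider) {
    do {
        let config = MLModelConfiguration()
        config.computeUnits = units
        let model = try YourModel(configuration: config)
        // The generated wrapper exposes the underlying MLModel as `.model`.
        let output = try model.model.prediction(from: inputFeatures)
        print("\(label): prediction succeeded, outputs: \(output.featureNames)")
    } catch {
        print("\(label): failed with \(error)")
    }
}

// CPU first: if this succeeds but the GPU-enabled run hits the Metal assertion,
// the GPU path is the culprit rather than the model itself.
runPrediction(label: "cpuOnly", units: .cpuOnly, inputFeatures: inputFeatures)
runPrediction(label: "all", units: .all, inputFeatures: inputFeatures)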

ML Kit face recognition not working on iOS

I'm working on an app that does facial recognition. One of the steps includes detecting the user's smile. For that, I am currently using Google's ML Kit. The application works fine on Android, but when I run it on iOS (iPhone XR and others) it does not recognize any faces in any image. I have already followed every step on how to integrate iOS and Firebase, and it runs fine.
Here's my code. It always fails on length == 0, as if the image contained no faces. The image passed as a parameter comes from the image_picker plugin.
Future<Face> verifyFace(File thisImage) async {
  var beforeTime = new DateTime.now();
  final image = FirebaseVisionImage.fromFile(thisImage);
  final faceDetector = FirebaseVision.instance.faceDetector(
    FaceDetectorOptions(
      mode: FaceDetectorMode.accurate,
      enableClassification: true,
    ),
  );
  var processedImages = await faceDetector.processImage(image);
  print('Processing time: ' +
      DateTime.now().difference(beforeTime).inMilliseconds.toString());
  if (processedImages.length == 0) {
    throw new NoFacesDetectedException();
  } else if (processedImages.length == 1) {
    Face face = processedImages.first;
    if (face.smilingProbability == null) {
      throw new LipsNotFoundException();
    } else {
      return face;
    }
  } else if (processedImages.length > 1) {
    throw new TooManyFacesDetectedException();
  }
}
If someone has any tips or can tell me what I am doing wrong, I would be very grateful.
I know this is an old issue, but I was having the same problem, and it turns out I just forgot to add the pod 'Firebase/MLVisionFaceModel' to the Podfile.
There is configuration in so many places that I'd rather leave you this video (although maybe you have already seen it) so you can see some code and how Matt Sullivan builds the same thing you are trying to do.
Let me know if you have already seen it, and please add an example repo I could work with so I can see your exact code.
From what I can tell, ML Kit face detection does work on iOS but very poorly.
It doesn't even seem worth it to use the SDK.
The docs do say that the face itself must be at least 100x100 px. In my testing, though, the face needs to be at least 700 px across for the SDK to detect it.
The SDK on Android works super well even on small image sizes (200x200px in total).

Replacement for a custom CIFilter in iOS 12.

Since iOS 12, CIColorKernel(source: "kernel string") is deprecated. Does anybody know what Apple's replacement for that is?
I am looking for a way to write a custom CIFilter in Swift. Maybe there is an open-source library?
It was announced back at WWDC 2017 that custom filters can also be written with Metal Shading Language -
https://developer.apple.com/documentation/coreimage/writing_custom_kernels
So now apparently they are getting rid of Core Image Kernel Language altogether.
Here's a quick intro to writing a CIColorKernel with Metal -
https://medium.com/@shu223/core-image-filters-with-metal-71afd6377f4
Writing kernels with Metal is actually easier; the only gotcha is that you need to specify two compiler flags in the project (see the article above).
I attempted to follow along with these blog posts and the Apple docs, but this integration between Core Image and Metal is quite confusing. After much searching, I ended up creating an actual working example iOS app that demonstrates how to write a Metal kernel grayscale function and have it process the Core Image pipeline.
You can use it like this:
// load the compiled Metal library and look up the "monochrome" kernel function
let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
let data = try! Data(contentsOf: url)
let kernel = try! CIKernel(functionName: "monochrome", fromMetalLibraryData: data)
// sample the input image and apply the kernel over its full extent
let sampler = CISampler(image: inputImage)
let outputImage = kernel.apply(extent: inputImage.extent, roiCallback: { _, rect in rect }, arguments: [sampler])
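If you want the kernel to behave like a regular CIFilter (which is what the question asks for), one option is to wrap it in a small CIFilter subclass. This is just a sketch: the class name MonochromeFilter is mine, and it assumes the "monochrome" function above is compiled into default.metallib.
import CoreImage

// Sketch: wrap the Metal-backed CIKernel so it can be used like any other CIFilter.
final class MonochromeFilter: CIFilter {
    @objc dynamic var inputImage: CIImage?

    private static let kernel: CIKernel = {
        let url = Bundle.main.url(forResource: "default", withExtension: "metallib")!
        let data = try! Data(contentsOf: url)
        return try! CIKernel(functionName: "monochrome", fromMetalLibraryData: data)
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage else { return nil }
        return MonochromeFilter.kernel.apply(extent: inputImage.extent,
                                             roiCallback: { _, rect in rect },
                                             arguments: [CISampler(image: inputImage)])
    }
}

// usage:
// let filter = MonochromeFilter()
// filter.inputImage = someCIImage
// let result = filter.outputImage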
According to Apple:
"You need to set these flags to use MSL as the shader language for a CIKernel. You must specify some options in Xcode under the Build Settings tab of your project's target. The first option you need to specify is an -fcikernel flag in the Other Metal Compiler Flags option. The second is to add a user-defined setting with a key called MTLLINKER_FLAGS with a value of -cikernel."

Not Getting QR Code Data Using AVFoundation Framework

I used the AVFoundation framework's delegate methods to read QR codes. It reads almost all QR codes and returns data for them. But when I try some QR codes (e.g. the QR image below), it detects that it is a QR code but does not return any data for it.
Your sample is triggering an internal (C++) exception. It seems to be getting caught around [AVAssetCache setMaxSize:], which suggests either that the data in this particular sample is corrupt, or that it's simply too large for AVFoundation to handle.
As it's an internal exception, it fails (mostly) silently. The exception occurs when you try to extract the stringValue from your AVMetadataMachineReadableCodeObject.
So if you test for the existence of your AVMetadataMachineReadableCodeObject, you will get YES, whereas if you test for stringValue you will get NO.
AVMetadataMachineReadableCodeObject *readableObject =
    (AVMetadataMachineReadableCodeObject *)[self.previewLayer
        transformedMetadataObjectForMetadataObject:metadataObject];

BOOL foundObject = readableObject != nil;             // returns YES
BOOL foundString = readableObject.stringValue != nil; // returns NO + triggers internal exception
It's probably best to test for the string, rather than the object, and ignore any result that returns NO.
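For reference, here is what that defensive check can look like in Swift inside the metadata delegate callback. This is only a sketch: ScannerViewController and previewLayer are placeholder names for your own view controller and its AVCaptureVideoPreviewLayer.
import AVFoundation

// Sketch: only act on codes whose stringValue can actually be extracted, and
// silently skip the ones that trigger the internal exception described above.
extension ScannerViewController: AVCaptureMetadataOutputObjectsDelegate {
    func metadataOutput(_ output: AVCaptureMetadataOutput,
                        didOutput metadataObjects: [AVMetadataObject],
                        from connection: AVCaptureConnection) {
        for object in metadataObjects {
            guard let readable = previewLayer.transformedMetadataObject(for: object)
                    as? AVMetadataMachineReadableCodeObject,
                  let value = readable.stringValue else {
                continue // the object exists but its payload can't be read; ignore it
            }
            print("QR payload: \(value)")
        }
    }
}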
Update
In your comment you ask about a native framework solution that will read this barcode. AVFoundation is the native framework for barcode reading, so if it fails on your sample, you will have to look for third-party solutions.
zxing offers an iOS port, but it looks to be old and unsupported.
zbarSDK used to be a good solution but also seems to be unsupported past iOS 4. As AVFoundation now has built-in barcode reading, this is unsurprising.
This solution by Accusoft does read the sample, but it is proprietary and really pricey.
I do wonder about the content of your sample, though - it looks either corrupt or like some kind of exotic encoding...
