I'm developing a custom Flutter plugin in which I send the camera image from Flutter to Swift and create a UIImage there, using the Flutter camera plugin (https://pub.dev/packages/camera).
To do that, I send the camera image bytes using this method:
startImageStream((CameraImage img) {
  sendFrameBytes(bytesList: img.planes.map((plane) {
    return plane.bytes;
  }).toList());
});
Here, planes contains a single plane whose bytes hold the RGBA data of the image.
On the Swift side, I get the RGBA bytes as an NSArray and create a UIImage like this:
func detectFromFrame1(args: NSDictionary, result: FlutterResult) {
    let rgbaPlanes = args["bytesList"] as! NSArray
    let rgbaTypedData = rgbaPlanes[0] as! FlutterStandardTypedData
    let rgbaUint8 = [UInt8](rgbaTypedData.data)
    let data = NSData(bytes: rgbaUint8, length: rgbaUint8.count)
    let uiimage = UIImage(data: data as Data) // always nil
    print(uiimage)
}
The problem is that rgbaTypedData, rgbaUint8, and data are all non-empty, yet the created uiimage is always nil. I don't understand where the problem is.
I have the same issue. A workaround I use is to convert the image to JPEG in Flutter and hand those bytes to the iOS / native code.
The downside is that it's slow and not usable for real-time processing.
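A likely reason for the nil in the original question: UIImage(data:) only decodes encoded image formats such as PNG or JPEG, not raw pixel buffers. To avoid the JPEG round-trip entirely, the raw bytes can be wrapped in a CGImage on the Swift side. A minimal sketch, assuming BGRA8888 pixels (the camera plugin's default on iOS) and that the frame's width and height are also sent over the channel; both are assumptions, not part of the original code:

import UIKit

// Sketch: build a UIImage from raw BGRA8888 bytes received from Dart.
func imageFromRawBytes(_ bytes: Data, width: Int, height: Int) -> UIImage? {
    // BGRA little-endian with premultiplied alpha.
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
        .union(.byteOrder32Little)
    guard let provider = CGDataProvider(data: bytes as CFData),
          let cgImage = CGImage(width: width,
                                height: height,
                                bitsPerComponent: 8,
                                bitsPerPixel: 32,
                                bytesPerRow: width * 4,
                                space: CGColorSpaceCreateDeviceRGB(),
                                bitmapInfo: bitmapInfo,
                                provider: provider,
                                decode: nil,
                                shouldInterpolate: false,
                                intent: .defaultIntent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}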
Update:
Code Sample (Flutter & TFLite package)
Packages:
https://pub.dev/packages/image and
https://pub.dev/packages/tflite
CODE:
_cameraController.startImageStream((_availableCameraImage) {
  imglib.Image img = imglib.Image.fromBytes(
      _availableCameraImage.planes[0].width,
      _availableCameraImage.planes[0].height,
      _availableCameraImage.planes[0].bytes);
  // Encode to JPEG first, then hand the encoded bytes to the native side.
  Uint8List imgByte = imglib.encodeJpg(img);
  Tfliteswift.detectObjectOnBinary(binary: imgByte);
});
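On the native side of that workaround, the JPEG bytes decode directly, since UIImage(data:) understands encoded formats. A sketch under assumptions: the detectFromJpegFrame name and the "binary" argument key are illustrative here, not the tflite plugin's actual API.

import UIKit
import Flutter

// Sketch: decode JPEG bytes received from Dart into a UIImage.
func detectFromJpegFrame(args: NSDictionary, result: FlutterResult) {
    guard let typedData = args["binary"] as? FlutterStandardTypedData,
          let image = UIImage(data: typedData.data) else {
        result(nil)
        return
    }
    // Run detection on `image` here, then report back to Dart.
    result("image decoded: \(image.size)")
}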
Is it possible to display a PKCanvasView drawing on macOS that was previously created on an iOS device (data transfer takes place with Core Data and CloudKit)?
You can initialize a new PKDrawing object from your drawing data and generate an NSImage from it:
import PencilKit

do {
    let pkDrawing = try PKDrawing(data: drawingData)
    let nsImage = pkDrawing.image(from: pkDrawing.bounds, scale: view.window?.backingScaleFactor ?? 1)
} catch {
    print(error)
}
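For completeness, the drawing data on the iOS side would typically be produced with PKDrawing's dataRepresentation() before being stored in Core Data / CloudKit. A one-line sketch, where canvasView stands in for your PKCanvasView:

import PencilKit

// On the iOS device: serialize the drawing to Data for storage/sync.
let drawingData = canvasView.drawing.dataRepresentation()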
I am using ML Kit to detect QR codes in images. On Android it works properly; on iOS I am using the pod below:
pod 'GoogleMLKit/BarcodeScanning'
Here is sample code that detects a QR code in an image picked from the gallery. Every time, the features array comes back empty.
let format: BarcodeFormat = BarcodeFormat.all
let barcodeOptions = BarcodeScannerOptions(formats: format)
let visionImage = VisionImage(image: image)
visionImage.orientation = image.imageOrientation
let barcodeScanner = BarcodeScanner.barcodeScanner(options: barcodeOptions)
barcodeScanner.process(visionImage) { features, error in
    guard error == nil, let features = features, !features.isEmpty else {
        // Error handling
        return
    }
    // Recognized barcodes
    print("Data :: \(features.first?.rawValue ?? "")")
}
We noticed this may happen when there is no padding around the QR code. I tried adding some padding to the image, and it works after that. Could you confirm that it works for you as well?
In the meantime, ML Kit is working on a public document about this limitation. Thanks for reporting this.
Julie from ML Kit team
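A minimal sketch of that padding workaround: redraw the picked image centered on a larger white canvas before handing it to the scanner (the 40-point inset is an arbitrary assumption; anything that gives the code a quiet zone should work):

import UIKit

// Redraws `image` on a larger white canvas so the QR code has a quiet zone.
func addPadding(to image: UIImage, inset: CGFloat = 40) -> UIImage {
    let paddedSize = CGSize(width: image.size.width + inset * 2,
                            height: image.size.height + inset * 2)
    let renderer = UIGraphicsImageRenderer(size: paddedSize)
    return renderer.image { context in
        UIColor.white.setFill()
        context.fill(CGRect(origin: .zero, size: paddedSize))
        image.draw(at: CGPoint(x: inset, y: inset))
    }
}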
I'm using Xcode 8.3 with Swift 3. I have written a method named pdfFromData(data:) that builds a PDF document from Data. Whenever I build my project, the build never finishes: the compiler hangs as soon as it starts compiling the file containing pdfFromData(data:). (In Xcode 8.2 with Swift 3 it worked fine.) Whenever I comment out this method and build, everything works fine.
func pdfFromData(data: Data) -> CGPDFDocument? { // Form a PDF document from the data.
    if let pdfData = data as? CFData {
        if let provider = CGDataProvider(data: pdfData) {
            let pdfDocument = CGPDFDocument(provider)
            return pdfDocument
        }
    }
    return nil
}
What's wrong with this method? I want to build my project with this method as well. Thanks in advance.
I tried debugging your issue. This is what I found out:
if let pdfData = data as? CFData {
}
The above line, which casts an object of type Data to CFData, is where the compiler takes so long.
Replacing that with the following piece of code significantly reduces your build time.
let pdfNsData: NSData = NSData(data: data) // convert `Data` to `NSData`
if let cfPdfData: CFData = pdfNsData as? CFData {
    // cast `NSData` to `CFData`
}
NSData and CFData are toll-free bridged.
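Putting the fix into the original method might look like this (a sketch; the unconditional as CFData is fine because the toll-free bridge always succeeds):

import CoreGraphics
import Foundation

// Form a PDF document from the data, bridging through NSData to avoid
// the slow Data-to-CFData conditional cast.
func pdfFromData(data: Data) -> CGPDFDocument? {
    let pdfData = NSData(data: data) as CFData
    guard let provider = CGDataProvider(data: pdfData) else { return nil }
    return CGPDFDocument(provider)
}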
Please let me know if there's any doubt.
This might be an amateur question, but although I have searched Stack Overflow extensively, I haven't been able to find an answer for my specific problem.
I was successful in creating a GIF file from an array of images by following a GitHub example:
import Cocoa
import CoreServices
import ImageIO

func createGIF(with images: [NSImage], name: NSURL, loopCount: Int = 0, frameDelay: Double) {
    let destinationURL = name
    let destinationGIF = CGImageDestinationCreateWithURL(destinationURL, kUTTypeGIF, images.count, nil)!

    // This dictionary controls the delay between frames
    // If you don't specify this, CGImage will apply a default delay
    let properties = [
        (kCGImagePropertyGIFDictionary as String): [(kCGImagePropertyGIFDelayTime as String): frameDelay]
    ]

    for img in images {
        // Convert an NSImage to CGImage, fitting within the specified rect
        let cgImage = img.CGImageForProposedRect(nil, context: nil, hints: nil)!
        // Add the frame to the GIF image
        CGImageDestinationAddImage(destinationGIF, cgImage, properties)
    }

    // Write the GIF file to disk
    CGImageDestinationFinalize(destinationGIF)
}
Now, I would like to turn the actual GIF into NSData so I can upload it to Firebase, and be able to retrieve it on another device.
To achieve my goal, I have two options: either figure out how to use the code above to extract the GIF it creates (which seems to be written straight to the file), or use the images in the function's parameters to create a new GIF but keep it in NSData form.
Does anybody have any ideas on how to do this?
Since nobody went ahead for over six months, I will just put the answer from @Sachin Vas's comment here:
You can get the data using NSData(contentsOf: URL)
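Applied to the function above, that might look like this (a sketch; destinationURL is the URL the GIF was written to):

// Read the finalized GIF back from disk for the upload.
if let gifData = NSData(contentsOf: destinationURL as URL) {
    // hand gifData to Firebase here
}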
I'm writing a Swift chat app using JSQMessagesViewController as well as PubNub. I have no problem getting text messages in real time and displaying them correctly, but I'm stuck on retrieving image messages. I can send images without any problems, but when the receiver gets an image, it arrives as NSCFString data. The output of print(message.data.message) in PubNub's didReceiveMessage function is:
<UIImage: 0x155d52020>, {256, 342}
and the output of print(message.data) is:
{ message = "<UIImage: 0x155d52020>, {256, 342}"; subscribedChannel = aUpVlGKxjR; timetoken = 14497691787509050; }
Does anyone know how to convert this data to a UIImage?
What you're receiving is just the UIImage object's description string, not the image data. You need to encode the UIImage as Base64, send that string in the PubNub message, and then decode the Base64 back into a UIImage on the receiving end.
Encode:
let imageData = UIImagePNGRepresentation(image)!
let imageString = imageData.base64EncodedString()
Decode:
let imageData = Data(base64Encoded: imageString)!
let image = UIImage(data: imageData)
Reference: Convert between UIImage and Base64 string