I am currently working on a feature where I want to crop an image taken from the camera on iOS (Android is already working).
I am currently using this library:
https://github.com/jbowmanp1107/ImageCropper.Maui
I have raised an issue:
https://github.com/jbowmanp1107/ImageCropper.Maui/issues/7
The error below shows up in the file 'PlatformImageCropper.cs':
An error occurred: 'Could not create an native instance of the type 'Bind_TOCropViewController.TOCropViewController': the native class hasn't been loaded.
I have also tried the Xamarin 'MediaPlugin' library:
https://github.com/jamesmontemagno/MediaPlugin
and I was able to run the project on Android, but it crashes on iOS, and the cropping feature is only available on iOS anyway:
https://github.com/jamesmontemagno/MediaPlugin#allow-cropping
Can anyone please help with this feature? It would be great!
The Image in MAUI has a Clip property; you can set different shape geometries on it to achieve the cropping effect you want. For example:
// Create a circular clip region
var clip = new EllipseGeometry(new Point(100, 100), 100, 100);
// Create a rectangular clip region
var clip1 = new RectangleGeometry(new Rect(100, 100, 100, 100));
image.Clip = clip1;
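Note that Clip only masks how the Image is rendered on screen; the underlying image data is unchanged. If you need an actually cropped bitmap (which the question asks for), here is a minimal sketch using the SkiaSharp NuGet package (an assumption on my part, not something the Clip approach requires):

using SkiaSharp;

// Hypothetical helper: returns PNG bytes for the cropRect region of imageData.
static byte[] CropImage(byte[] imageData, SKRectI cropRect)
{
    using var original = SKBitmap.Decode(imageData);
    using var cropped = new SKBitmap(cropRect.Width, cropRect.Height);
    using (var canvas = new SKCanvas(cropped))
    {
        var source = SKRect.Create(cropRect.Left, cropRect.Top, cropRect.Width, cropRect.Height);
        var dest = SKRect.Create(0, 0, cropRect.Width, cropRect.Height);
        // Copy only the selected region of the source into the new bitmap
        canvas.DrawBitmap(original, source, dest);
    }
    using var image = SKImage.FromBitmap(cropped);
    return image.Encode(SKEncodedImageFormat.Png, 100).ToArray();
}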
I am trying to build an iOS application that uses Firebase MLKit to recognize text from live camera frames.
I found an Android sample here:
https://medium.com/digital-curry/firebase-mlkit-textdetection-in-android-using-firebase-ml-vision-apis-with-live-camera-72ef47ad4ebd
Does anyone know of a good sample for iOS?
Text Detection From Firebase MLKit:
You can find demo code for image-to-text conversion using Firebase MLKit on iOS here:
https://github.com/sayaleepote/TextDetect
For live recognition, you can create a custom camera, take a picture in the background periodically, and detect text from each image, roughly as sketched below.
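A rough sketch of that loop in C# (ICameraCapture and ITextRecognizer are hypothetical placeholders for whatever camera and MLKit wrappers your project uses):

using System;
using System.Threading;
using System.Threading.Tasks;

public interface ICameraCapture { Task<byte[]> TakePictureAsync(); }
public interface ITextRecognizer { Task<string> RecognizeAsync(byte[] image); }

public class LiveTextReader
{
    private readonly ICameraCapture camera;
    private readonly ITextRecognizer recognizer;

    public LiveTextReader(ICameraCapture camera, ITextRecognizer recognizer)
    {
        this.camera = camera;
        this.recognizer = recognizer;
    }

    public async Task RunAsync(CancellationToken token, Action<string> onText)
    {
        while (!token.IsCancellationRequested)
        {
            // Capture a frame in the background and run text detection on it
            byte[] frame = await camera.TakePictureAsync();
            string text = await recognizer.RecognizeAsync(frame);
            if (!string.IsNullOrEmpty(text))
                onText(text);

            // Throttle the loop; one frame per second keeps CPU and battery usage sane
            await Task.Delay(TimeSpan.FromSeconds(1), token);
        }
    }
}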
Custom Camera View for Capturing Images:
For a custom camera view, you can use the sample code below.
Link: https://github.com/AlexLittlejohn/ALCameraViewController
Note: You can also achieve this feature using the iOS native Vision framework.
Refer to this link for the iOS native framework approach:
https://stackoverflow.com/questions/50918310
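If you go the native route from C# (Xamarin.iOS / .NET for iOS), a minimal sketch of single-image text recognition with the Vision bindings (iOS 13+) looks like this; it is a sketch, not production code:

using System;
using System.Linq;
using CoreGraphics;
using Foundation;
using Vision;

// Recognize text in a single CGImage using the Vision framework.
static void RecognizeText(CGImage image, Action<string> onText)
{
    var request = new VNRecognizeTextRequest((req, error) =>
    {
        if (error != null) return;
        foreach (var observation in req.GetResults<VNRecognizedTextObservation>())
        {
            // Take the most confident candidate for each detected line
            var candidate = observation.TopCandidates(1).FirstOrDefault();
            if (candidate != null)
                onText(candidate.String);
        }
    });

    var handler = new VNImageRequestHandler(image, new NSDictionary());
    handler.Perform(new VNRequest[] { request }, out NSError performError);
}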
Let me know if you have any questions.
Thanks.
I am making an augmented reality app to demonstrate the features of a MacBook, and I used the Vuforia SDK.
Here is my problem:
1) I tried the Vuforia Core Features sample and used Image Targets. With Image Targets, it shows only one image at a time. I have attached the output in the image below.
2) My expectation is to show multiple text or image overlays while pointing the camera at the real MacBook, as in the image below.
Please guide me to achieve this.
The Vuforia iOS SDK uses OpenGL ES to load 3D objects, which is unfriendly to use.
You can use SceneKit instead: put your objects in a scene, set up a rectangle node, and place a model at each of the four corners of the rectangle. When the image track succeeds, load your scene (a sketch follows below).
How to use SceneKit with Vuforia? Check this: https://github.com/yshrkt/VuforiaSampleSwift
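To illustrate the scene setup, here is a minimal sketch using the SceneKit bindings from C# (Xamarin.iOS / .NET for iOS, to match the other code on this page); the sizes are placeholder assumptions you would match to your image target:

using SceneKit;

// Build a scene with a flat "tracking rectangle" and a marker at each corner.
var scene = SCNScene.Create();

float width = 0.2f, height = 0.15f;   // target size in scene units (assumption)
var rectangle = SCNNode.FromGeometry(SCNPlane.Create(width, height));
scene.RootNode.AddChildNode(rectangle);

// Place a small box at each of the four corners of the rectangle
var corners = new[]
{
    new SCNVector3(-width / 2,  height / 2, 0),
    new SCNVector3( width / 2,  height / 2, 0),
    new SCNVector3(-width / 2, -height / 2, 0),
    new SCNVector3( width / 2, -height / 2, 0),
};
foreach (var corner in corners)
{
    var marker = SCNNode.FromGeometry(SCNBox.Create(0.02f, 0.02f, 0.02f, 0f));
    marker.Position = corner;
    rectangle.AddChildNode(marker);
}
// When Vuforia reports a successful image track, attach this scene to your
// SCNView; the repo linked above shows the Vuforia-to-SceneKit glue code.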
I have Augmented Reality functionality made using Unity + the Vuforia plugin, which I integrated into an iOS application. The app uses the camera as the background, and when you point the camera at a marker, a 3D object appears on it.
My task is to add buttons which will start and stop capturing video (or images) from the camera. The output should be a video of the camera scene plus the 3D object.
I did some investigation, but the only solution I found is to convert the view of the AVCaptureVideoPreviewLayer, on which the camera preview is showing, to a video (or image). But in my opinion, this solution is inefficient and inflexible.
Is there any way to get a current instance of the AVCaptureSession from Unity (or maybe Vuforia plugin)? Or maybe there is another way to solve my problem?
Any advice or guides would be very helpful.
I don't think you should use AVCaptureSession to get the preview, or even do the capture operation, in Cocoa Touch; instead, you should capture the image in Unity and pass the data to the native Cocoa Touch API.
Here is the link describing how to capture a screenshot in Unity.
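Since Unity scripts are C#, a minimal sketch of that approach looks like this (_SaveScreenshot is a hypothetical native function you would implement on the Objective-C side):

using System;
using System.Collections;
using System.Runtime.InteropServices;
using UnityEngine;

public class ArScreenshot : MonoBehaviour
{
    // Hypothetical Objective-C entry point implemented on the native side
    [DllImport("__Internal")]
    private static extern void _SaveScreenshot(IntPtr data, int length);

    public void Capture()
    {
        StartCoroutine(CaptureAtEndOfFrame());
    }

    private IEnumerator CaptureAtEndOfFrame()
    {
        // Wait until rendering (camera feed + 3D content) has finished
        yield return new WaitForEndOfFrame();

        var tex = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);
        tex.ReadPixels(new Rect(0, 0, Screen.width, Screen.height), 0, 0);
        tex.Apply();

        byte[] png = tex.EncodeToPNG();
        Destroy(tex);

        // Hand the encoded image to the native side
        var handle = GCHandle.Alloc(png, GCHandleType.Pinned);
        _SaveScreenshot(handle.AddrOfPinnedObject(), png.Length);
        handle.Free();
    }
}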
I am new to Vuforia (Qualcomm) projects for AR. I used Unity3D to generate the project (Image Target framework). After building and running the app from Unity3D, the iOS app is able to render the camera on screen, but when I move the camera over the target image, it does not render the 3D object (a cube).
I visited this link: Vuforia Portal
and went through the step-by-step process:
Add the Vuforia extension in Unity3D.
Add a target to a database in the Target Manager.
Download the package and import it into Unity3D.
Drag the Image Target into the scene and set its dataset to the downloaded package.
Add a cube as the 3D object on the Image Target.
Build and run the app on iOS.
The camera renders on screen, but the cube is not displayed in 3D on the image target.
Also, my Unity3D project is not generating the file format given in the Vuforia sample applications.
Please help regarding this.
Have you enabled the dataset in the ARCamera as well?
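If it helps, activating a dataset from script with the classic Vuforia Unity extension API looks roughly like this ("MyDataset" is a placeholder for your database name):

using UnityEngine;
using Vuforia;

public class DataSetLoader : MonoBehaviour
{
    void Start()
    {
        // Load and activate the dataset at startup
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        DataSet dataSet = tracker.CreateDataSet();

        if (dataSet.Load("MyDataset"))
        {
            tracker.Stop();
            tracker.ActivateDataSet(dataSet);
            tracker.Start();
        }
        else
        {
            Debug.LogError("Failed to load dataset.");
        }
    }
}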
@VinuPriya: If you are not using a marker, you should use the OpenCV framework to detect the object where you want to place your 3D object. I have not implemented drag-and-drop for 3D objects; I change 3D objects on click.
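For example, a minimal markerless detection sketch with the OpenCvSharp package (an assumption; any OpenCV binding works) using template matching:

using OpenCvSharp;

// Find where a reference object appears in a camera frame; anchor the 3D
// object at the returned location when the match confidence is high enough.
static Point? FindObject(Mat frame, Mat template, double threshold = 0.8)
{
    using var result = new Mat();
    Cv2.MatchTemplate(frame, template, result, TemplateMatchModes.CCoeffNormed);
    Cv2.MinMaxLoc(result, out double minVal, out double maxVal, out Point minLoc, out Point maxLoc);
    return maxVal >= threshold ? maxLoc : (Point?)null;
}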
I am trying to make an iPhone app that asks players to draw a line along the red dotted line, as illustrated in the image, using a SINGLE TOUCH. My question is: how do I check whether the user has completed the path (it need not be accurate)? P.S. I'm using the iOS 4 SDK.
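One common approach (sketched here in C# to match the other snippets on this page; the same logic translates directly to Objective-C): sample the dotted guide line into points, mark each point as hit when the touch passes within a tolerance of it, and treat the path as complete once most points have been hit.

using System;
using System.Collections.Generic;

class PathCompletionChecker
{
    private readonly List<(double X, double Y)> samples; // points along the dotted line
    private readonly bool[] hit;
    private readonly double tolerance;

    public PathCompletionChecker(List<(double X, double Y)> pathSamples, double tolerance = 20)
    {
        samples = pathSamples;
        hit = new bool[pathSamples.Count];
        this.tolerance = tolerance;
    }

    // Call this from your touch-moved handler with each touch location
    public void OnTouchMoved(double x, double y)
    {
        for (int i = 0; i < samples.Count; i++)
        {
            if (hit[i]) continue;
            double dx = samples[i].X - x, dy = samples[i].Y - y;
            if (Math.Sqrt(dx * dx + dy * dy) <= tolerance)
                hit[i] = true;
        }
    }

    // "Need not be accurate": complete once most sample points were touched
    public bool IsComplete(double requiredFraction = 0.9)
    {
        int count = 0;
        foreach (bool h in hit) if (h) count++;
        return count >= requiredFraction * hit.Length;
    }
}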