Best way to extract text from a photo/image in React Native? - ios

I am building an app with React Native.
After a user takes a photo of an invoice, I would like to extract some key data from the text in the image. I know I will need an OCR library of some sort. Is there an easy solution for this? I've seen react-native-text-detector. Is that my best option, or is there a better one?

You can use react-native-firebase-mlkit. It offers a lot more functionality than just OCR, and it supports both on-device and cloud-based recognition depending on your needs.
Here is the library's GitHub page.
It's a wrapper for Google's ML Kit.
Here's a simple example of how to use it:
import RNMlKit from 'react-native-firebase-mlkit';

export class textRecognition extends Component {
    ...

    async takePicture() {
        if (this.camera) {
            const options = { quality: 0.5, base64: true, skipProcessing: true, forceUpOrientation: true };
            const data = await this.camera.takePictureAsync(options);

            // for on-device (supports Android and iOS)
            const deviceTextRecognition = await RNMlKit.deviceTextRecognition(data.uri);
            console.log('Text Recognition On-Device', deviceTextRecognition);

            // for cloud (at the moment supports only Android)
            const cloudTextRecognition = await RNMlKit.cloudTextRecognition(data.uri);
            console.log('Text Recognition Cloud', cloudTextRecognition);
        }
    }

    ...
}

Related

Identifying the primary Electron window

I have some code that is shared between multiple renderers in Electron. I want those renderers to know whether they are the main window or one of the child windows. I'm wondering if there's a quick way for a renderer to know what its ID is.
Currently I am using the following to determine whether a renderer is the main one or not.
In the renderer JavaScript:
import { ipcRenderer } from 'electron';
const isMainRenderer = ipcRenderer.sendSync('main-renderer-check');
In the main/background JavaScript:
import { ipcMain } from 'electron';

ipcMain.on('main-renderer-check', (event) => {
    event.returnValue = event.sender.id === 2;
});
This works, but it seems a bit of a convoluted way to work this out.
Is there another way that is more direct?
According to Electron's documentation on ipcRenderer, the event.sender.id property is equal to the ID of the webContents from which the message originated.
Therefore it should be possible to retrieve the current window's unique ID via its WebContents using Electron's remote module:
import { remote } from 'electron';
const isMainRenderer = remote.getCurrentWebContents().id === 2;
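If hard-coding the ID of 2 feels fragile, one alternative (a sketch of my own, not something from the documentation) is to have the main process store the main window's webContents ID in a global and read it from renderers via remote.getGlobal. Here mainWindow is assumed to be the variable holding your main BrowserWindow:
// In the main process, after creating the main BrowserWindow:
global.mainWebContentsId = mainWindow.webContents.id;

// In any renderer:
import { remote } from 'electron';
const isMainRenderer = remote.getCurrentWebContents().id === remote.getGlobal('mainWebContentsId');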

How to ZIP (compress) an SQLITE database file in Cordova?

I need a "support" mode for my Cordova app currently running on Windows Mobile and iOS. For this purpose, I need to compress an sqlite database file and upload it to a server. The database has to be compressed as it might grow over 250MB and the upload has to work without a wifi connection.
Searching the web brought up different approaches but all of them were outdated or did only solve my problem for either iOS or Windows Mobile. For example, when using the Cordova file plug-in I've encountered this in the plug-in documentation:
Supported Platforms: Android, iOS, OS X, Windows*, Browser
* These platforms do not support FileReader.readAsArrayBuffer nor FileWriter.write(blob).
This was my approach: Cordova - Zip files and folders on iOS
Any ideas?
I would suggest giving FileReader() a second chance.
In my case, which may be very similar to yours, I read a file using FileReader.readAsArrayBuffer and then compress it using the JSZip library: http://stuartk.com/jszip
Contrary to the cordova-plugin-file API documentation (https://cordova.apache.org/docs/en/latest/reference/cordova-plugin-file/), which says for "Windows*" that "These platforms do not support FileReader.readAsArrayBuffer nor FileWriter.write(blob)", I found that readAsArrayBuffer does work on the Windows UWP platform, just more slowly.
In my case, with a file of approx. 50 MB, I had to wait nearly 2 minutes for the whole process to finish!
Try following this example:
You'll need to adapt the paths, but this runs on Windows UWP and iOS (I didn't test it with Android, but that was not your question).
Also, you'll need to implement your own error handler (errorHandler).
This solution uses Promises, since you have to wait for the file to be read and compressed.
PS1: Always make sure the deviceready event has fired before accessing plugins.
PS2: You may get a permission error on the database file; this can happen because it is still being used by another process.
Be sure the database is closed.
SQLite:
var sqlite = window.sqlitePlugin.openDatabase({ name: 'yourdb.db', location: 'default' });

sqlite.close(function () {
    console.log("DONE closing db");
}, function (error) {
    console.log("ERROR closing db");
    console.log(JSON.stringify(error));
});
"ZIP" function:
function zipFile(sourceFileName, targetFileName) {
    return new Promise(function (resolve, reject) {
        window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, function (fs) {
            fs.root.getFile(sourceFileName, { create: false, exclusive: false }, function (fe) {
                fe.file(function (file) {
                    var reader = new FileReader();
                    reader.onloadend = function (data) {
                        var zip = new JSZip();
                        zip.file(sourceFileName, data.target.result);
                        zip.generateAsync({
                            type: "blob",
                            compression: "DEFLATE",
                            compressionOptions: {
                                level: 9
                            }
                            // level 9 means max. compression
                            // this may also take some time depending on the size of your file
                            // I tested it with a 50 MB file, it took about 65 sec.
                        }).then(
                            // following is post-zip in order to transfer the file to a server
                            function (blob) {
                                fs.root.getFile(targetFileName, { create: true, exclusive: false }, function (newzip) {
                                    writeFile(newzip, blob, "application/zip").then(function () {
                                        var f = blob;
                                        var zipReader = new FileReader();
                                        zipReader.onloadend = function (theFile) {
                                            var base64 = window.btoa(theFile.target.result);
                                            resolve(base64);
                                        };
                                        // need to "resolve" the zipped file as base64 in order to include it in my REST post (server upload)
                                        zipReader.readAsBinaryString(f);
                                    });
                                });
                            }
                        );
                    };
                    reader.readAsArrayBuffer(file);
                    // this may take some time depending on the size of your file
                    // I tested it with a 50 MB file, it took about 72 sec.
                }, errorHandler);
            }, errorHandler);
        });
    });
}
Example call:
if (window.cordova) {
    document.addEventListener('deviceready', function () {
        zipFile("yourDatabaseFileName.db", "compressedDatabaseFile.zip");
    });
}
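Note that the zipFile function above relies on a writeFile helper that the answer doesn't show. A minimal sketch of what such a helper could look like, using the FileWriter from cordova-plugin-file (my assumption, not the answer's original code):
// Hedged sketch of the writeFile helper used above (not part of the original answer).
// fileEntry is a FileEntry from cordova-plugin-file, blob is the zipped Blob;
// the contentType argument is kept only to match the call site, as the Blob already carries its MIME type.
function writeFile(fileEntry, blob, contentType) {
    return new Promise(function (resolve, reject) {
        fileEntry.createWriter(function (writer) {
            writer.onwriteend = function () {
                resolve(fileEntry);
            };
            writer.onerror = reject;
            writer.write(blob);
        }, reject);
    });
}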
Why not use the sqlite ".dump" command as a query, get the result via a stream, and then compress the output? Even though the text dump will be larger, it will reach a reasonable size once compressed. I think there are some very good text-only compression algorithms as well.
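The Cordova SQLite plugin doesn't expose the ".dump" shell command directly, but a rough equivalent is to export rows as text and compress the string. A minimal sketch under those assumptions (cordova-sqlite-storage plus JSZip; "items" is a hypothetical table name):
// Hedged sketch: export rows as JSON text and compress them with JSZip.
// Assumes cordova-sqlite-storage and JSZip; "items" is a hypothetical table name.
var db = window.sqlitePlugin.openDatabase({ name: 'yourdb.db', location: 'default' });

db.executeSql('SELECT * FROM items', [], function (res) {
    var rows = [];
    for (var i = 0; i < res.rows.length; i++) {
        rows.push(res.rows.item(i));
    }
    var zip = new JSZip();
    zip.file('dump.json', JSON.stringify(rows));
    zip.generateAsync({ type: 'blob', compression: 'DEFLATE' }).then(function (blob) {
        // the compressed blob can then be uploaded to the server
        console.log('compressed dump size: ' + blob.size);
    });
}, function (error) {
    console.log('export failed: ' + JSON.stringify(error));
});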

How to use CDK overlay while leaving an existing component in the foreground?

The Angular Material CDK library provides various features including overlays. All the examples I can find show how to display a new component on top of the overlay. My goal is a little different. I want to display an existing component (one of several on the screen) on top of the overlay.
The behavior I have in mind is that when the user goes into a kind of editing mode on a particular object, the component representing that object would sort of "float" on top of an overlay, until editing is done or cancelled.
Is there any straightforward way to do that? It seems that the cdkConnectedOverlay directive might be useful, but I can't figure out how to make it work.
The Angular CDK provides two ways to achieve that (directives and services).
Using the Overlay service, you need to call the create method and pass the positionStrategy option:
@Component({
    ....
})
class AppComponent {
    @ViewChild('button') buttonRef: ElementRef;

    ...

    ngOnInit() {
        const overlayRef = this.overlay.create({
            positionStrategy: this.getOverlayPosition(),
            height: '400px',
            width: '600px',
        });

        const userProfilePortal = new ComponentPortal(UserProfile);
        overlayRef.attach(userProfilePortal);
    }

    getOverlayPosition(): PositionStrategy {
        this.overlayPosition = this.overlay.position()
            .connectedTo(
                this.buttonRef,
                { originX: 'start', originY: 'bottom' },
                { overlayX: 'start', overlayY: 'top' }
            );

        return this.overlayPosition;
    }

    ...
}
I made an example to show you how to use the CDK overlay services and classes:
Overlay demo
If you prefer the directive approach, look at this Medium article and check the examples that use the directive:
Material CDK Overlay with RxJS
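For illustration, here is a minimal sketch of the directive approach (my own example, not taken from the article): it anchors a cdkConnectedOverlay to an origin element and opens it over a backdrop while a flag is set. The isEditing flag and the edit markup are hypothetical placeholders.
import { Component } from '@angular/core';

// Assumes OverlayModule from '@angular/cdk/overlay' is imported in the NgModule.
@Component({
    selector: 'app-editable-card',
    template: `
        <!-- the on-screen element that anchors the overlay -->
        <button cdkOverlayOrigin #trigger="cdkOverlayOrigin" (click)="isEditing = true">Edit</button>

        <!-- rendered on top of a backdrop while isEditing is true -->
        <ng-template
            cdkConnectedOverlay
            [cdkConnectedOverlayOrigin]="trigger"
            [cdkConnectedOverlayOpen]="isEditing"
            [cdkConnectedOverlayHasBackdrop]="true"
            (backdropClick)="isEditing = false">
            <div class="edit-panel">...edit form goes here...</div>
        </ng-template>
    `,
})
export class EditableCardComponent {
    isEditing = false;
}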

Copy layer-style(effect) with settings

I am writing a script for Photoshop and I'm looking for a way to copy a layer style from one layer to another. The applied layer style can vary, so I must be able to look for any possible style and copy it. I found some code to copy a layer style, but the settings won't be copied. Using the ScriptingListener does not help me much because it's all hardcoded.
Is there a way to also copy the settings of a style? And a way to do this for all possible styles?
It is my understanding that Adobe does not provide any methods for retrieving styles or style properties in the scripting interface. Apparently, though, this can be done manually through Action Manager code. This post in the Adobe forums discusses some of the ways to go about doing that: How to get the style of a layer using Photoshop Scripting ?
I haven't tested this, but this may be what you are looking for:
if (app.documents.length > 0 && app.activeDocument.layers.length > 1) {
    transferEffects(app.activeDocument.layers.getByName("styleLayer"), app.activeDocument.activeLayer);
}

// function to copy the layer effects of one layer and apply them to another one
function transferEffects(layer1, layer2) {
    app.activeDocument.activeLayer = layer1;
    try {
        // "CpFX" = copy layer effects (Action Manager event ID)
        var id157 = charIDToTypeID("CpFX");
        executeAction(id157, undefined, DialogModes.ALL);

        app.activeDocument.activeLayer = layer2;

        // "PaFX" = paste layer effects (Action Manager event ID)
        var id158 = charIDToTypeID("PaFX");
        executeAction(id158, undefined, DialogModes.ALL);
    } catch (e) {
        alert("the layer has no effects");
        app.activeDocument.activeLayer = layer2;
    }
}

How to hide YouTube video after it ends using googleapis package?

I want to work with the googleapis package. I simply want to hide a YouTube video after it ends. I have the following code:
import 'dart:html';

import 'package:googleapis/youtube/v3.dart';
import 'package:googleapis_auth/auth_io.dart';

final _credentials = new ServiceAccountCredentials.fromJson(r'''
{
  "private_key_id": ...,
  "private_key": ...,
  "client_email": ...,
  "client_id": ...,
  "type": "service_account"
}
''');

const _SCOPES = const [StorageApi.DevstorageReadOnlyScope];

main() {
  // insert stylesheet for video (width, height, border, etc.) - works well
  LinkElement styles = new LinkElement();
  styles
    ..href = 'iframe.css'
    ..rel = 'stylesheet'
    ..type = 'text/css';
  document.head.append(styles);

  // add iframe element - works also well
  IFrameElement video = new IFrameElement();
  video.attributes = {
    'allowfullscreen': '',
    'seamless': '',
    'src': 'http://www.youtube.com/embed/ORsFFjt1x6Q?autoplay=1&enablejsapi=1'
  };
  document.body.insertAdjacentElement('afterBegin', video);

  // check if the video has ended - probably doesn't work
  if (video.contentDocument.onLoad) {
    if (video.getPlayerState == 0) {
      video.remove();
    }
  }
}
What am I doing wrong?
Your video variable is of type IFrameElement, and you can't run callMethod of a JsObject on it.
The YouTube API package for Dart is in beta from Google and is marked as deprecated in the Dart documentation.
Dart has an onEnded event stream for IFrameElement, Element, and MediaElement; it is still experimental but may work for you.
I suggest using the YouTube iframe JavaScript API in your Dart code!
These sources explain what you need:
https://www.dartlang.org/articles/js-dart-interop/
https://developers.google.com/youtube/iframe_api_reference
If you need more information, tell me what you want to know.
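To illustrate the pattern those links describe, here is a plain JavaScript sketch (my own illustration, not code from the answer); it uses the iframe API's onStateChange event to hide the player when playback ends. The 'player' element ID is an assumption.
// Hedged sketch of the YouTube iframe API pattern referenced above (plain JavaScript).
// Assumes https://www.youtube.com/iframe_api has been loaded on the page
// and that an element with id="player" hosts the video.
var player;

// the iframe API calls this global function once it has loaded
function onYouTubeIframeAPIReady() {
    player = new YT.Player('player', {
        videoId: 'ORsFFjt1x6Q',
        events: {
            onStateChange: function (event) {
                // hide the player once playback has finished
                if (event.data === YT.PlayerState.ENDED) {
                    player.getIframe().style.display = 'none';
                }
            }
        }
    });
}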
