How to persist data in an Electron app? [closed]

I've been scouring the Electron documentation to try and figure out how to persist data in an Electron app. For example, in iOS or OS X, you could use NSUserDefaults to store user settings and preferences. I would like to do something similar. How can I persist data in an Electron app?

Currently, NeDB is the only embedded persistent database suggested or featured for Electron by Electron itself: http://electron.atom.io/community/
It can also be useful for storing user settings, if the settings are complex.
Why might NeDB be the better solution in this case?
Embedded persistent or in-memory database for Node.js, nw.js, Electron and browsers, 100% JavaScript, no binary dependency. API is a subset of MongoDB's and it's plenty fast. - NeDB
Creating or loading a database:
var Datastore = require('nedb');
var db = new Datastore({ filename: 'path/to/datafile', autoload: true });
// You can issue commands right away
Inserting a document:
var doc = {
  hello: 'world',
  n: 5,
  today: new Date(),
  nedbIsAwesome: true,
  notthere: null,
  notToBeSaved: undefined,  // Will not be saved
  fruits: ['apple', 'orange', 'pear'],
  infos: { name: 'nedb' }
};

db.insert(doc, function (err, newDoc) {  // Callback is optional
  // newDoc is the newly inserted document, including its _id
  // newDoc has no key called notToBeSaved since its value was undefined
});
Finding documents:
// Finding all inhabited planets in the solar system
db.find({ system: 'solar', inhabited: true }, function (err, docs) {
  // docs is an array containing only the Earth document
});
The list goes on...
Update - September 2019
As of 2019, this is no longer the recommended answer. See the answers of @jviotti and @Tharanga below.

There is an NPM module I wrote called electron-json-storage that is meant to abstract this out and provide a nice and easy interface to the developer.
The module internally reads/writes JSON to/from app.getPath('userData'):
const storage = require('electron-json-storage');

// Write
storage.set('foobar', { foo: 'bar' }).then(function () {
  // Read
  storage.get('foobar').then(function (object) {
    console.log(object.foo);
    // will print "bar"
  });
});

There is a nice module for storing user data in Electron. It's called electron-store.
Installation
$ npm install electron-store
Sample usage (copied from the GitHub page):
const Store = require('electron-store');
const store = new Store();
store.set('unicorn', '🦄');
console.log(store.get('unicorn'));
//=> '🦄'
// Use dot-notation to access nested properties
store.set('foo.bar', true);
console.log(store.get('foo'));
//=> {bar: true}
store.delete('unicorn');
console.log(store.get('unicorn'));
//=> undefined
This module has many features and several advantages over window.localStorage.
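For example, electron-store lets you seed default values that apply until the user overwrites them. A minimal sketch (the defaults option is from the module's README; the key names here are made up):
const Store = require('electron-store');

// Values under `defaults` are returned until they are explicitly set
const store = new Store({
  defaults: {
    theme: 'light',
    windowBounds: { width: 800, height: 600 }
  }
});

console.log(store.get('theme')); //=> 'light' on first run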

Electron views are rendered by Chromium, which gives you access to the web-based localStorage API. It's good for simple and easy settings storage.
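For instance, from any renderer process (this is the standard web API, nothing Electron-specific):
// Persists across app restarts, scoped to the app
localStorage.setItem('theme', 'dark');
console.log(localStorage.getItem('theme') || 'light'); //=> 'dark' on later launches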
If you need something more powerful, or need storage access from the main script, you can use one of the numerous Node-based storage modules. Personally I like lowdb (see the sketch below).
With most Node storage modules, you will need to provide a file location. Try:
const { app } = require('electron');
console.log(app.getPath('userData'));
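Putting the two together, here is a rough sketch with lowdb (v1-style API with the FileSync adapter; the file and key names are illustrative):
const { app } = require('electron');
const path = require('path');
const low = require('lowdb');
const FileSync = require('lowdb/adapters/FileSync');

// Keep the database file in the per-user data directory
const adapter = new FileSync(path.join(app.getPath('userData'), 'db.json'));
const db = low(adapter);

// Seed defaults, then read and write with lodash-style chains
db.defaults({ settings: {} }).write();
db.set('settings.theme', 'dark').write();
console.log(db.get('settings.theme').value()); //=> 'dark'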

You have multiple options other than those mentioned in the other answers.
Option 1 - SQL data
If you want to store data in an SQL database, you can use https://github.com/mapbox/node-sqlite3
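A minimal sketch of that option (the settings table and key names are made up; the database file is kept in the userData directory described under Option 2):
const path = require('path');
const { app } = require('electron');
const sqlite3 = require('sqlite3');

const db = new sqlite3.Database(path.join(app.getPath('userData'), 'app.db'));

db.serialize(() => {
  // Create the table once, then upsert a value and read it back
  db.run('CREATE TABLE IF NOT EXISTS settings (key TEXT PRIMARY KEY, value TEXT)');
  db.run('INSERT OR REPLACE INTO settings (key, value) VALUES (?, ?)', ['theme', 'dark']);
  db.each('SELECT key, value FROM settings', (err, row) => {
    if (!err) console.log(row.key, '=', row.value);
  });
});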
Option 2 - Configuration data
Or, if you are storing configuration data, you can write it directly to the OS's userData directory:
const electron = require('electron');
const fs = require('fs');
const path = require('path');

const dataPath = electron.app.getPath('userData');
const filePath = path.join(dataPath, 'config.json');

function writeData(key, value) {
  const contents = parseData();
  contents[key] = value;
  fs.writeFileSync(filePath, JSON.stringify(contents));
}

function readData(key) {
  const contents = parseData();
  return contents[key];
}

function parseData() {
  const defaultData = {};
  try {
    return JSON.parse(fs.readFileSync(filePath));
  } catch (error) {
    return defaultData;
  }
}
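Usage is then just a matter of calling the helpers:
writeData('theme', 'dark');
console.log(readData('theme')); // 'dark'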

There is a module that provides simple methods to get and set JSON files in this directory, creates subdirectories if needed, and supports both callbacks and promises:
https://github.com/ran-y/electron-storage
Readme:
Installation
$ npm install --save electron-storage
Usage
const storage = require('electron-storage');
API
storage.get(filePath, cb)
storage.get(filePath, (err, data) => {
  if (err) {
    console.error(err);
  } else {
    console.log(data);
  }
});

storage.get(filePath)
storage.get(filePath)
  .then(data => {
    console.log(data);
  })
  .catch(err => {
    console.error(err);
  });

storage.set(filePath, data, cb)
storage.set(filePath, data, (err) => {
  if (err) {
    console.error(err);
  }
});

storage.set(filePath, data)
storage.set(filePath, data)
  .then(data => {
    console.log(data);
  })
  .catch(err => {
    console.error(err);
  });

storage.isPathExists(path, cb)
storage.isPathExists(path, (itDoes) => {
  if (itDoes) {
    console.log('pathDoesExists !');
  }
});

storage.isPathExists(path)
storage.isPathExists(path)
  .then(itDoes => {
    if (itDoes) {
      console.log('pathDoesExists !');
    }
  });

You can go for IndexedDB, which is most likely suitable for client-side app needs due to:
Its built-in versioning mechanism. Client-side applications often face version fragmentation, as users don't all update to the new version at the same time, so checking the version of the existing database and upgrading it accordingly is a good idea.
It's schemaless, which allows the flexibility of adding more data to the client's storage (which happens pretty often, in my experience) without having to migrate the database to a new version, unless you create new indices.
It supports a wide range of data: basic types as well as blob data (files, images).
All in all, it's a good choice. The only caveat is that Chromium may automatically wipe out IndexedDB to reclaim disk space when storage is under strain if navigator.storage.persist is not set, or the host machine may crash and leave IndexedDB in a corrupted state.
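As a rough illustration of the versioning mechanism, here is a minimal sketch of opening a versioned database from a renderer (the database, store, and key names are made up):
// Bumping the version number triggers onupgradeneeded on existing clients
const request = indexedDB.open('app-db', 2);

request.onupgradeneeded = (event) => {
  const db = event.target.result;
  if (!db.objectStoreNames.contains('settings')) {
    db.createObjectStore('settings', { keyPath: 'key' });
  }
};

request.onsuccess = (event) => {
  const db = event.target.result;
  const tx = db.transaction('settings', 'readwrite');
  tx.objectStore('settings').put({ key: 'theme', value: 'dark' });
};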

There are a plethora of ways to persist data in Electron, and choosing the right approach depends essentially on your use case(s). If it's only about saving application settings, you can use simple mechanisms such as flat files or the HTML5 Storage APIs; for advanced data requirements, you should opt for large-scale database solutions such as MySQL or MongoDB (with or without ORMs).
You can check this list of methods/tools to persist data in Electron apps.

Related

How to structure data in Firestore using swift [duplicate]

The documentation does not have any examples on how to add a subcollection to a document. I know how to add document to a collection and how to add data to a document, but how do I add a collection (subcollection) to a document?
Shouldn't there be some method like this:
dbRef.document("example").addCollection("subCollection")
Edit 13 Jan 2021:
According to the updated documentation regarding array membership, it is now possible to filter data based on array values using the whereArrayContains() method. A simple example would be:
CollectionReference citiesRef = db.collection("cities");
citiesRef.whereArrayContains("regions", "west_coast");
This query returns every city document where the regions field is an array that contains west_coast. If the array has multiple instances of the value you query on, the document is included in the results only once.
Assuming we have a chat application with a database structure in which each room document holds a messages subcollection:
To write a subcollection in a document, use the following code:
DocumentReference messageRef = db
        .collection("rooms").document("roomA")
        .collection("messages").document("message1");
Creating a messages collection and calling addDocument() 1000 times will be expensive for sure, but this is how Firestore works. You can switch to the Firebase Realtime Database if you want, where the number of writes doesn't matter. Regarding supported data types in Firestore, you can in fact use an array, because it is supported. In the Firebase Realtime Database you could also use an array, but there it is an anti-pattern. One of the many reasons Firebase recommends against using arrays is that it makes the security rules impossible to write.
Cloud Firestore can store arrays, but it does not support querying array members or updating single array elements. However, you can still model this kind of data by leveraging the other capabilities of the Cloud Firestore. Here is the documentation where it is very well explained.
You also cannot create a subcollection with 1000 messages, add all of them to the database, and expect it to be considered a single record. It will be considered one write operation for every message, 1000 operations in total. The structure above does not show how to retrieve data; it shows a database layout in which you have something like this:
collection -> document -> subCollection -> document
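For completeness, here is a sketch of reading the documents back out of such a subcollection (v8-style web API, reusing the rooms/messages names from above):
db.collection('rooms').doc('roomA').collection('messages').get()
  .then(function (querySnapshot) {
    querySnapshot.forEach(function (doc) {
      console.log(doc.id, '=>', doc.data());
    });
  });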
Here's a variation where the subcollection stores ID values at the collection level, rather than inside a document where the subcollection would be a field with additional data.
This is useful for connecting a one-to-many ID mapping without having to drill through an additional document:
function fireAddStudentToClassroom(studentUserId, classroomId) {
  var db = firebase.firestore();
  var studentsClassroomRef = db
    .collection('student_class').doc(classroomId)
    .collection('students');
  studentsClassroomRef
    .doc(studentUserId)
    .set({})
    .then(function () {
      console.log('Document Added ');
    })
    .catch(function (error) {
      console.error('Error adding document: ', error);
    });
}
Thanks to @Alex's answer
This answer is a bit off from the original question, which explicitly asks about adding a collection to a document. However, after searching for a solution to this scenario and not finding any mention in the docs or on SO, this post seems like a reasonable place to share the findings.
Here's my code:
firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).set(wordData)
  .then(function() {
    console.log("Collection added to Firestore!");
    var promises = [];
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('AudioSources').doc($scope.accentDialect).set(accentDialectObject));
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('FunFacts').doc($scope.longLanguage).set(funFactObject));
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('Translations').doc($scope.translationLongLanguage).set(translationObject));
    Promise.all(promises).then(function() {
      console.log("All subcollections were added!");
    })
    .catch(function(error) {
      console.log("Error adding subcollections to Firestore: " + error);
    });
  })
  .catch(function(error) {
    console.log("Error adding document to Firestore: " + error);
  });
This makes a collection EnglishWords, which has a document 'of'. The document 'of' has three subcollections: AudioSources (recordings of the word in American and British accents), FunFacts, and Translations. The subcollection Translations has one document: Spanish. The Spanish document has three key-value pairs, telling you that 'de' is the Spanish translation of 'of'.
The first line of the code creates the collection EnglishWords. We wait for the promise to resolve with .then, and then we create the three subcollections. Promise.all tells us when all three subcollections are set.
IMHO, I use arrays in Firestore when the entire array is uploaded and downloaded together, i.e., I don't need to access individual elements. For example, an array of the letters of the word 'of' would be ['o', 'f']. The user can ask, "How do I spell 'of'?" The user isn't going to ask, "What's the second letter in 'of'?"
I use collections when I need to access individual elements, a.k.a. documents. With the older Firebase Realtime Database, I had to download arrays and then iterate through them with forEach to get the element I wanted. This was a lot of code, and with a deep data structure and/or large arrays I was downloading tons of data that I didn't need and slowing my app down by running forEach loops on large arrays. Firestore puts the iterators in the database, on their end, so that I can request a single element and it sends me just that element, saving me bandwidth and making my app run faster. This might not matter for a web app, if your computer has a broadband connection, but for mobile apps with poor data connections and slow devices this is important.
From the docs:
You do not need to "create" or "delete" collections. After you create the first document in a collection, the collection exists. If you delete all of the documents in a collection, it no longer exists.
I faced the same issue and solved it with the answer of @Thomas David Kehoe:
db.collection("First collection Name").doc("Id of the document").collection("Nested collection Name").add({
//your data
}).then((data) => {
console.log(data.id);
console.log("Document has added")
}).catch((err) => {
console.log(err)
})
Too late for an answer, but here is what worked for me:
mFirebaseDatabaseReference?.collection("conversations")?.add(Conversation("User1"))
    ?.addOnSuccessListener { documentReference ->
        Log.d(TAG, "DocumentSnapshot written with ID: " + documentReference.id)
        mFirebaseDatabaseReference?.collection("conversations")?.document(documentReference.id)?.collection("messages")?.add(Message(edtMessage?.text.toString()))
    }?.addOnFailureListener { e ->
        Log.w(TAG, "Error adding document", e)
    }
Add a success listener for adding the document and use the Firebase-generated ID for the path.
Use this ID in the complete path for the new collection you want to add, i.e.:
dbReference.collection('yourCollectionName').document(firebaseGeneratedID).collection('yourCollectionName').add(yourDocumentPOJO/Object)
Okay, so I recently faced a similar problem given the recent update to the Firebase/Firestore documentation, and here is a solution that worked for me:
const sendMessage = async () => {
  await setDoc(doc(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME, nanoid()), {
    text: 'this is a sample text',
    createdAt: serverTimestamp(),
    name: currentUser?.firstName + ' ' + currentUser?.lastName,
    photoUrl: currentUser?.photoUrl,
    userId: currentUser?.id,
  });
}
You can find a similar example in the docs
https://firebase.google.com/docs/firestore/data-model#web-version-9_3
If you wish to listen for live updates, you can use a similar method as follows:
const messagesRef = collection(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME)

const liveUpdate = async () => {
  const queryObj = query(messagesRef, orderBy("createdAt"), limit(25));
  onSnapshot(queryObj, (querySnapshot) => {
    const msgArr: any = [];
    querySnapshot.forEach((doc) => {
      msgArr.push({ id: doc.id, ...doc.data() });
    });
    console.log(msgArr);
  });
}
There is no separate method to add a sub-collection to a document.
You can just call the collection method itself: if the collection exists, it will be referenced; otherwise a new one is created.
dbRef.document("example").collection("subCollection")

Cloud functions in Firebase trigger are not part of the transaction update

I am using the realtime database and I am using transactions to ensure the integrity of my data set. In my example below I am updating currentTime on every update.
export const updateTime = functions.database.ref("/users/{userId}/projects/{projectId}")
  .onUpdate((snapshot) => {
    const beforeData = snapshot.before.val();
    const afterData = snapshot.after.val();
    if (beforeData.currentTime !== afterData.currentTime) {
      return Promise.resolve();
    } else {
      return snapshot.after.ref.update({ currentTime: new Date().getTime() })
        .catch((err) => {
          console.error(err);
        });
    }
  });
It seems the cloud function is not part of the transaction, but triggers multiple updates in my clients, which I try to avoid.
For example, I watched this starter tutorial which replaces :pizza: with a pizza emoji. In my client I would see :pizza: for one frame before it gets replaced with the emoji. I know, the pizza tutorial is just an example, but I am running into a similar issue. Any advice is highly appreciated!
Indeed, Cloud Functions don't run as part of the database transaction. They run after the database has been updated, and receive "before" and "after" snapshots of the affected data.
If you want a Cloud Function to serve as an approval process, the idiomatic approach is to have the clients write to a different location (typically called a pending queue) that the function listens to. The function then performs whatever operation it wants and writes the result to the final location.
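A sketch of that pattern (the paths and function name are made up; v1 functions API):
const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Clients write drafts to /pending/messages; the function approves them and
// moves them to /messages, so clients only ever render approved data.
exports.approveMessage = functions.database.ref('/pending/messages/{msgId}')
  .onCreate((snapshot, context) => {
    const message = snapshot.val();
    // ...run whatever validation or transformation is needed here...
    return admin.database().ref(`/messages/${context.params.msgId}`)
      .set(message)
      .then(() => snapshot.ref.remove());
  });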

Origin of files `holoclient.js` and `holoclient.map` in holochain application?

In https://github.com/holochain/holochat-rust, how are the files ui/holoclient.js and ui/holoclient.map obtained ?
Also, is there any official documentation about this that I missed? And is this still the way to get a UI to talk to the Holochain container?
ui/holoclient.js is a tiny library that makes it much easier to talk to a running Holochain app instance. The current way of connecting your GUI to an instance is a JSON-RPC-like process via a local WebSocket connection. It's intended as a nice wrapper to make zome function calls feel like local, in-browser function calls. Documentation is currently very light, but it shouldn't take much to figure out how it's supposed to work using the example. In a nutshell:
const url = 'ws://localhost:3000/'
window.holoclient.connect(url).then(({call, close}) => {
  document.getElementById('form').addEventListener('submit', e => {
    e.preventDefault()
    // First, get a list of locally running Holochain instances...
    call('info/instances')().then(info => {
      // Now that we have instance info, we can make zome calls into any of them
      // by referring to them by DNA hash (and agent ID) as specified in our
      // container config.
      // Search for the instance we're looking for, given known DNA and agent
      // hashes.
      const matchingInstances = Object.entries(info)
        .find(([id, value]) => value.dna === 'blog_dna_hash' && value.agent === 'my_agent_hash')
      const instance = getInstance(info, 'the_dna_hash', 'the_agent_hash')
      const content = document.querySelector('#message').value
      // Make another zome call now
      call(instance, 'blog', 'main', 'create_post')({
        content: content
      })
    })
  })
})
It's written in TypeScript, which would mean that ui/holoclient.map is a 'source map', a file which maps line numbers in the compiled JavaScript file to line numbers in the original TypeScript source. Both Chrome and Firefox look for and use those source maps when you're debugging your JS.

What local databases are available/recommended in trigger.io apps, if any?

I'm investigating building a mobile app with trigger.io, but I'm not finding good documentation on local database options. My app will send data to an external API, but needs to be able to store data locally as a draft (if the user is offline, the API is unavailable, whatever).
I see that there's a prefs module for storing data, but it does not seem like the right thing (correct me if I'm wrong). What options are recommended here? Is there something analogous to the SQLite plugin for PhoneGap, perhaps?
This probably depends on what your usage patterns are going to be.
For example, forge.prefs could get a bit fiddly if you want to do any kind of interesting queries, but could work well if you just want to persist a single JavaScript object structure. Using window.localStorage is likely to have similar pros/cons.
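If a single draft object is all you need, the forge.prefs route might look roughly like this (a sketch; check the prefs module docs for the exact callback signatures):
// Persist the draft as one serialized value
forge.prefs.set('draft', JSON.stringify({ title: 'Hello', body: '...' }), function () {
  forge.logging.info('draft saved');
});

// Read it back later
forge.prefs.get('draft', function (value) {
  var draft = value ? JSON.parse(value) : null;
  // ...restore the draft into the UI...
});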
Alternatively, you can use the WebSQL API in your JavaScript: http://docs.trigger.io/en/v1.4/release-notes.html#v1-3-5. You don't need to use a module for this, it should work for any Android or iOS app built with Forge. This essentially gives you an SQLite database accessible from JavaScript. To give you a feel for the API, here's an example:
// create db
var db = openDatabase('mydb', '1.0', 'example database', 2 * 1024 * 1024);
db.transaction(function (tx) {
  tx.executeSql('CREATE TABLE IF NOT EXISTS foo (id unique, text)');
  tx.executeSql('INSERT INTO foo (id, text) VALUES (1, "foobar")');
});

// query db
db.transaction(function (tx) {
  tx.executeSql('SELECT * FROM foo', [], function (tx, results) {
    var rows = results.rows;
    for (var i = 0; i < rows.length; ++i) {
      forge.logging.info("row text: " + rows.item(i).text);
    }
  });
});
You should be able to find some tutorials on the web!

Server-side internationalization for Backbone and Handlebars

I'm working on a Grails / Backbone / Handlebars application that's a front end to a much larger legacy Java system in which (for historical & customizability reasons) internationalization messages are deep in a database hidden behind a couple of SOAP services which are in turn hidden behind various internal Java libraries. Getting at these messages from the Grails layer is easy and works fine.
What I'm wondering, though, is how to get (for instance) internationalized labels into my Handlebars templates.
Right now, I'm using GSP fragments to generate the templates, including a custom tag that gets the message I'm interested in, something like:
<li><myTags:message msgKey="title"/> {{title}}</li>
However, for performance and code layout reasons I want to get away from GSP templates and get them into straight HTML. I've looked a little into client-side internationalization options such as i18n.js, but they seem to depend on the existence of a messages file I haven't got. (I could generate it, possibly, but it would be ginormous and expensive.)
So far the best thing I can think of is to wedge the labels into the Backbone model as well, so I'd end up with something like
<li>{{titleLabel}} {{title}}</li>
However, this really gets away from the ideal of building the Backbone models on top of a nice clean RESTful JSON API -- either the JSON returned by the RESTful service is cluttered up with presentation data (i.e., localized labels), or I have to do additional work to inject the labels into the Backbone model -- and cluttering up the Backbone model with presentation data seems wrong as well.
I think what I'd like to do, in terms of clean data and clean APIs, is write another RESTful service that takes a list of message keys and similar, and returns a JSON data structure containing all the localized messages. However, questions remain:
What's the best way to indicate (probably in the template) what message keys are needed for a given view?
What's the right format for the data?
How do I get the localized messages into the Backbone views?
Are there any existing Javascript libraries that will help, or should I just start making stuff up?
Is there a better / more standard alternative approach?
I think you could create quite an elegant solution by combining Handlebars helpers and some regular expressions.
Here's what I would propose:
Create a service which takes in a JSON array of message keys and returns a JSON object, where the keys are the message keys and the values are the localized texts.
Define a Handlebars helper which takes in a message key (matching the message keys on the server) and outputs the translated text. Something like {{localize "messageKey"}}. Use this helper for all template localization.
Write a template preprocessor which greps the message keys from a template and makes a request to your service. The preprocessor caches all message keys it gets, and only requests the ones it doesn't already have.
You can either call this preprocessor on-demand when you need to render your templates, or call it up-front and cache the message keys, so they're ready when you need them.
To optimize further, you can persist the cache to browser local storage.
Here's a little proof of concept. It doesn't yet have local storage persistence or support for fetching the texts of multiple templates at once for caching purposes, but it was easy enough to hack together that I think with some further work it could work nicely.
The client API could look something like this:
var localizer = new HandlebarsLocalizer();

// compile a template
var html = $("#tmpl").html();
localizer.compile(html).done(function(template) {
  //..template is now localized and ready to use
});
Here's the source for the lazy reader:
var HandlebarsLocalizer = function() {

  var _templateCache = {};
  var _localizationCache = {};

  //fetches texts, adds them to cache, resolves deferred with template
  var _fetch = function(keys, template, deferred) {
    $.ajax({
      type: 'POST',
      dataType: 'json',
      url: '/echo/json',
      data: JSON.stringify({
        keys: keys
      }),
      success: function(response) {
        //handle response here, this is just dummy
        _.each(keys, function(key) { _localizationCache[key] = "(" + key + ") localized by server"; });
        console.log(_localizationCache);
        deferred.resolve(template);
      },
      error: function() {
        deferred.reject();
      }
    });
  };

  //precompiles html into a Handlebars template function and fetches all required
  //localization keys. Returns a promise of template.
  this.compile = function(html) {
    var cacheObject = _templateCache[html],
        deferred = new $.Deferred();

    //cached -> return
    if (cacheObject && cacheObject.ready) {
      deferred.resolve(cacheObject.template);
      return deferred.promise();
    }

    //grep all localization keys from template
    var regex = /{{\s*?localize\s*['"](.*)['"]\s*?}}/g, required = [], match;
    while ((match = regex.exec(html))) {
      var key = match[1];
      //if we don't have this key yet, we need to fetch it
      if (!_localizationCache[key]) {
        required.push(key);
      }
    }

    //not cached -> create
    if (!cacheObject) {
      cacheObject = {
        template: Handlebars.compile(html),
        ready: (required.length === 0)
      };
      _templateCache[html] = cacheObject;
    }

    //we have all the localization texts ->
    if (cacheObject.ready) {
      deferred.resolve(cacheObject.template);
    }
    //we need some more texts ->
    else {
      deferred.done(function() { cacheObject.ready = true; });
      _fetch(required, cacheObject.template, deferred);
    }

    return deferred.promise();
  };

  //translates given key
  this.localize = function(key) {
    return _localizationCache[key] || "TRANSLATION MISSING:" + key;
  };

  //make localize function available to templates
  Handlebars.registerHelper('localize', this.localize);
}
We use http://i18next.com for internationalization in a Backbone/Handlebars app. (And Require.js which also loads and compiles the templates via plugin.)
i18next can be configured to load resources dynamically. It supports JSON in a gettext format (supporting plural and context variants).
Example from their page on how to load remote resources:
var option = {
  resGetPath: 'resources.json?lng=__lng__&ns=__ns__',
  dynamicLoad: true
};
i18n.init(option);
(You will of course need more configuration like setting the language, the fallback language etc.)
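For example, the extra options might look like this (a sketch using the option names from the i18next docs of that era):
i18n.init({
  lng: 'de',               // current language
  fallbackLng: 'en',       // used when a key has no German text
  resGetPath: 'resources.json?lng=__lng__&ns=__ns__',
  dynamicLoad: true
}, function (t) {
  // translations are loaded; t('some.key') works here
});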
You can then configure a Handlebars helper that calls i18next on the provided variable (simplest version, no plural, no context):
// namespace: "translation" (default)
Handlebars.registerHelper('_', function (i18n_key) {
  i18n_key = Handlebars.compile(i18n_key)(this);
  var result = i18n.t(i18n_key);
  if (!result) {
    console.log("ERROR : Handlebars-Helpers : no translation result for " + i18n_key);
  }
  return new Handlebars.SafeString(result);
});
And in your template you can either provide a dynamic variable that expands to the key:
<li>{{_ titleLabeli18nKey}} {{title}}</li>
or specify the key directly:
<li>{{_ "page.fancy.title"}} {{title}}</li>
For localization of datetime we use http://momentjs.com (conversion to local time, formatting, translation etc.).
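For instance (moment.locale() is the standard way to switch languages in later moment versions):
moment.locale('de');
console.log(moment().format('LLLL')); // locale-aware date/time formatting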
