I'm investigating building a mobile app with trigger.io, but I'm not finding good documentation on local database options. My app will send data to an external API, but needs to be able to store data locally as a draft (if the user is offline, the API is unavailable, whatever).
I see that there's a prefs module for storing data, but it does not seem like the right thing (correct me if I'm wrong). What options are recommended here? Is there something analogous to the SQLite plugin for PhoneGap, perhaps?
This probably depends on what your usage patterns are going to be.
For example, forge.prefs could get a bit fiddly if you want to do any kind of interesting queries, but could work well if you just want to persist a single JavaScript object structure. Using window.localStorage is likely to have similar pros/cons.
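If the draft really is a single object, a minimal sketch along these lines might be enough (the key name and draft shape here are purely illustrative):

// Hypothetical helpers that keep one pending draft in window.localStorage
var DRAFT_KEY = 'pendingDraft';

function saveDraft(draft) {
    // serialize the whole draft object under a single key
    window.localStorage.setItem(DRAFT_KEY, JSON.stringify(draft));
}

function loadDraft() {
    var raw = window.localStorage.getItem(DRAFT_KEY);
    return raw ? JSON.parse(raw) : null;
}

function clearDraft() {
    window.localStorage.removeItem(DRAFT_KEY);
}

// Usage: save while offline, re-read and clear once the API call succeeds
saveDraft({ title: 'My post', body: 'Written offline' });
console.log(loadDraft());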
Alternatively, you can use the WebSQL API in your JavaScript: http://docs.trigger.io/en/v1.4/release-notes.html#v1-3-5. You don't need to use a module for this, it should work for any Android or iOS app built with Forge. This essentially gives you an SQLite database accessible from JavaScript. To give you a feel for the API, here's an example:
// create db
var db = openDatabase('mydb', '1.0', 'example database', 2 * 1024 * 1024);
db.transaction(function (tx) {
    tx.executeSql('CREATE TABLE IF NOT EXISTS foo (id unique, text)');
    tx.executeSql('INSERT INTO foo (id, text) VALUES (1, "foobar")');
});

// query db
db.transaction(function (tx) {
    tx.executeSql('SELECT * FROM foo', [], function (tx, results) {
        var rows = results.rows;
        for (var i = 0; i < rows.length; ++i) {
            forge.logging.info("row text: " + rows.item(i).text);
        }
    });
});
You should be able to find some tutorials about it on the web!
The documentation does not have any examples on how to add a subcollection to a document. I know how to add document to a collection and how to add data to a document, but how do I add a collection (subcollection) to a document?
Shouldn't there be some method like this:
dbRef.document("example").addCollection("subCollection")
Edit 13 Jan 2021:
According to the updated documentation regarding array membership, it is now possible to filter data based on array values using the whereArrayContains() method. A simple example would be:
CollectionReference citiesRef = db.collection("cities");
citiesRef.whereArrayContains("regions", "west_coast");
This query returns every city document where the regions field is an array that contains west_coast. If the array has multiple instances of the value you query on, the document is included in the results only once.
Assuming we have a chat application that has a database structure that looks similar to this:
To write to a subcollection in a document, create a reference like this:
DocumentReference messageRef = db
        .collection("rooms").document("roomA")
        .collection("messages").document("message1");
Creating a messages collection and calling addDocument() 1000 times will be expensive for sure, but this is how Firestore works. If you want, you can switch to the Firebase Realtime Database, where the number of writes doesn't matter in the same way. But regarding supported data types in Firestore, you can in fact use an array, because it is supported. In the Firebase Realtime Database you could also use an array, but this is an anti-pattern. One of the many reasons Firebase recommends against using arrays is that it makes the security rules impossible to write.
Cloud Firestore can store arrays, but it does not support querying array members or updating single array elements. However, you can still model this kind of data by leveraging the other capabilities of Cloud Firestore. Here is the documentation where it is explained very well.
You also cannot create a subcollection with 1000 messages, add all of them to the database, and expect it to be considered a single record. It will be considered one write operation for every message, 1000 operations in total. The picture above does not show how to retrieve data; it shows a database structure in which you have something like this:
collection -> document -> subCollection -> document
Here's a variation where the subcollection stores ID values at the collection level, rather than inside a document where the subcollection would be a field holding additional data.
This is useful for a one-to-many ID mapping without having to drill through an additional document:
function fireAddStudentToClassroom(studentUserId, classroomId) {
    var db = firebase.firestore();
    var studentsClassroomRef =
        db.collection('student_class').doc(classroomId)
          .collection('students');
    studentsClassroomRef
        .doc(studentUserId)
        .set({})
        .then(function () {
            console.log('Document Added ');
        })
        .catch(function (error) {
            console.error('Error adding document: ', error);
        });
}
Thanks to @Alex's answer.
This answer is a bit off from the original question, which explicitly asks about adding a collection to a document. However, after searching for a solution to this scenario and not finding any mention in the docs or on SO, this post seems like a reasonable place to share the findings.
Here's my code:
firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).set(wordData)
    .then(function() {
        console.log("Collection added to Firestore!");
        var promises = [];
        promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('AudioSources').doc($scope.accentDialect).set(accentDialectObject));
        promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('FunFacts').doc($scope.longLanguage).set(funFactObject));
        promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('Translations').doc($scope.translationLongLanguage).set(translationObject));
        Promise.all(promises).then(function() {
            console.log("All subcollections were added!");
        })
        .catch(function(error){
            console.log("Error adding subcollections to Firestore: " + error);
        });
    })
    .catch(function(error){
        console.log("Error adding document to Firestore: " + error);
    });
This makes a collection EnglishWords, which has a document 'of'. The document 'of' has three subcollections: AudioSources (recordings of the word in American and British accents), FunFacts, and Translations. The subcollection Translations has one document: Spanish. The Spanish document has three key-value pairs, telling you that 'de' is the Spanish translation of 'of'.
The first line of the code creates the collection EnglishWords. We wait for the promise to resolve with .then, and then we create the three subcollections. Promise.all tells us when all three subcollections are set.
IMHO, I use arrays in Firestore when the entire array is uploaded and downloaded together, i.e., I don't need to access individual elements. For example, an array of the letters of the word 'of' would be ['o', 'f']. The user can ask, "How do I spell 'of'?" The user isn't going to ask, "What's the second letter in 'of'?"
I use collections when I need to access individual elements, a.k.a. documents. With the older Firebase Realtime Database, I had to download arrays and then iterate through them with forEach to get the element I wanted. This was a lot of code, and with a deep data structure and/or large arrays I was downloading tons of data that I didn't need and slowing my app down by running forEach loops on large arrays. Firestore puts the iterators in the database, on their end, so I can request a single element and it sends me just that element, saving bandwidth and making my app run faster. This might not matter for a web app on a broadband connection, but for mobile apps with poor data connections and slow devices this is important.
Here are two pictures of my Firestore:
From the docs:
You do not need to "create" or "delete" collections. After you create the first document in a collection, the collection exists. If you delete all of the documents in a collection, it no longer exists.
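To make the quoted behavior concrete, here is a small sketch with the web SDK (v8 namespaced style, matching the other answers here; the room and message names are just illustrative). Writing the first document is what brings the subcollection into existence:

// There is no "create collection" call: setting the first document under
// rooms/roomA/messages implicitly creates the messages subcollection.
var messageRef = firebase.firestore()
    .collection('rooms').doc('roomA')
    .collection('messages').doc('message1');

messageRef.set({ text: 'hello', sentAt: Date.now() })
    .then(function () {
        console.log('messages subcollection now exists');
    })
    .catch(function (error) {
        console.error('Error writing message: ', error);
    });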
I faced the same issue and solved it with the answer from @Thomas David Kehoe:
db.collection("First collection Name").doc("Id of the document").collection("Nested collection Name").add({
//your data
}).then((data) => {
console.log(data.id);
console.log("Document has added")
}).catch((err) => {
console.log(err)
})
Too late for an answer, but here is what worked for me:
mFirebaseDatabaseReference?.collection("conversations")?.add(Conversation("User1"))
    ?.addOnSuccessListener { documentReference ->
        Log.d(TAG, "DocumentSnapshot written with ID: " + documentReference.id)
        mFirebaseDatabaseReference?.collection("conversations")?.document(documentReference.id)?.collection("messages")?.add(Message(edtMessage?.text.toString()))
    }?.addOnFailureListener { e ->
        Log.w(TAG, "Error adding document", e)
    }
Add a success listener for adding the document and use the Firebase-generated ID for the path.
Use this ID as part of the complete path for the new collection you want to add, i.e.:
dbReference.collection('yourCollectionName').document(firebaseGeneratedID).collection('yourCollectionName').add(yourDocumentPOJO/Object)
Okay, so I recently faced a similar problem given the recent update in the Firebase/Firestore documentation.
Here is a solution that worked for me:
const sendMessage = async () => {
    await setDoc(doc(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME, nanoid()), {
        text: 'this is a sample text',
        createdAt: serverTimestamp(),
        name: currentUser?.firstName + ' ' + currentUser?.lastName,
        photoUrl: currentUser?.photoUrl,
        userId: currentUser?.id,
    });
}
You can find a similar example in the docs
https://firebase.google.com/docs/firestore/data-model#web-version-9_3
If you wish to listen for live updates, you can use a similar method as follows:
const messagesRef = collection(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME)

const liveUpdate = async () => {
    const queryObj = query(messagesRef, orderBy("createdAt"), limit(25));
    onSnapshot(queryObj, (querySnapshot) => {
        const msgArr: any = [];
        querySnapshot.forEach((doc) => {
            msgArr.push({ id: doc.id, ...doc.data() })
        });
        console.log(msgArr);
    });
}
There is no separate method to add a subcollection to a document.
You can just call the collection method itself.
If the collection exists, it will be referenced; otherwise, a new one is created:
dbRef.document("example").collection("subCollection")
I am using the realtime database and I am using transactions to ensure the integrity of my data set. In my example below I am updating currentTime on every update.
export const updateTime = functions.database.ref("/users/{userId}/projects/{projectId}")
    .onUpdate((snapshot) => {
        const beforeData = snapshot.before.val();
        const afterData = snapshot.after.val();
        if (beforeData.currentTime !== afterData.currentTime) {
            return Promise.resolve();
        } else {
            return snapshot.after.ref.update({ currentTime: new Date().getTime() })
                .catch((err) => {
                    console.error(err);
                });
        }
    });
It seems the cloud function is not part of the transaction, but triggers multiple updates in my clients, which I try to avoid.
For example, I watched this starter tutorial which replaces :pizza: with a pizza emoji. In my client I would see :pizza: for one frame before it gets replaced with the emoji. I know, the pizza tutorial is just an example, but I am running into a similar issue. Any advice is highly appreciated!
Cloud Functions don't run as part of the database transaction indeed. They run after the database has been updated, and receive "before" and "after" snapshots of the affected data.
If you want a Cloud Function to serve as an approval process, the idiomatic approach is to have the clients write to a different location (typically called a pending queue) that the function listens to. The function then performs whatever operation it wants, and writes the result to the final location.
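As a rough illustration of that pattern (not the exact code from the tutorial; the /pending and /messages paths and the validation step are assumptions), a function might look like this:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

// Clients write drafts to /pending/{messageId}; the function validates and
// publishes them to /messages/{messageId}, then removes the pending entry.
exports.approveMessage = functions.database.ref('/pending/{messageId}')
    .onCreate((snapshot, context) => {
        const message = snapshot.val();
        // ... validate or transform the message here ...
        return admin.database().ref(`/messages/${context.params.messageId}`)
            .set(message)
            .then(() => snapshot.ref.remove());
    });

This way the client never sees the raw, unprocessed value in the final location, so there is no one-frame flicker like in the :pizza: example.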
I've been scouring the Electron documentation to try and figure out how to persist data in an Electron app. For example, in iOS or OS X, you could use NSUserDefaults to store user settings and preferences. I would like to do something similar. How can I persist data in an Electron app?
NeDB is currently the only embedded persistent database suggested or featured by Electron itself - http://electron.atom.io/community/
It could also be useful for storing user settings if the settings are complex.
Why could NeDB be a better solution in this case?
Embedded persistent or in memory database for Node.js, nw.js, Electron and browsers, 100% JavaScript, no binary dependency. API is a subset of MongoDB's and it's plenty fast. - NeDB
Creating or loading a database:
var Datastore = require('nedb')
, db = new Datastore({ filename: 'path/to/datafile', autoload: true });
// You can issue commands right away
Inserting a document:
var doc = { hello: 'world'
          , n: 5
          , today: new Date()
          , nedbIsAwesome: true
          , notthere: null
          , notToBeSaved: undefined  // Will not be saved
          , fruits: [ 'apple', 'orange', 'pear' ]
          , infos: { name: 'nedb' }
          };

db.insert(doc, function (err, newDoc) {  // Callback is optional
    // newDoc is the newly inserted document, including its _id
    // newDoc has no key called notToBeSaved since its value was undefined
});
Finding documents:
// Finding all inhabited planets in the solar system
db.find({ system: 'solar', inhabited: true }, function (err, docs) {
// docs is an array containing document Earth only
});
The list goes on...
Update - September 2019
As of 2019, this is no longer the valid answer. See the answers by @jviotti and @Tharanga below.
There is an NPM module I wrote called electron-json-storage that is meant to abstract this out and provide a nice and easy interface to the developer.
The module internally reads/writes JSON to/from app.getPath('userData'):
const storage = require('electron-json-storage');

// Write
storage.set('foobar', { foo: 'bar' }).then(function() {
    // Read
    storage.get('foobar').then(function(object) {
        console.log(object.foo);
        // will print "bar"
    });
});
There is a nice module for storing user data in Electron. It's called electron-store.
Installation
$ npm install electron-store
Sample usage (copied from github page)
const Store = require('electron-store');
const store = new Store();
store.set('unicorn', '🦄');
console.log(store.get('unicorn'));
//=> '🦄'
// Use dot-notation to access nested properties
store.set('foo.bar', true);
console.log(store.get('foo'));
//=> {bar: true}
store.delete('unicorn');
console.log(store.get('unicorn'));
//=> undefined
This module has many features and there are many advantages over window.localStorage.
Electron views are built with Chromium, which gives you access to the web-based localStorage API. Good for simple and easy settings storage.
If you need something more powerful or need storage access from the main script, you can use one of the numerous node based storage modules. Personally I like lowdb.
With most node storage modules, you will need to provide a file location. Try:
const { app } = require('electron'); // in older Electron versions: require('app')
app.getPath('userData');
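For instance, a minimal lowdb sketch might look like this (assuming lowdb v1 and its FileSync adapter; the API has changed in later major versions, and the file name and keys are illustrative):

const path = require('path');
const { app } = require('electron');
const low = require('lowdb');
const FileSync = require('lowdb/adapters/FileSync');

// Keep the JSON file inside Electron's per-user data directory
const adapter = new FileSync(path.join(app.getPath('userData'), 'settings.json'));
const db = low(adapter);

db.defaults({ settings: { theme: 'light' } }).write(); // seed defaults once

db.set('settings.theme', 'dark').write();      // persist a preference
console.log(db.get('settings.theme').value()); // -> 'dark'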
You have multiple options other than those mentioned in the other answers.
Option 1 - SQL data
If you want to store data in an SQL database, you can use https://github.com/mapbox/node-sqlite3
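A minimal sketch of what that might look like (the table, columns and file name are illustrative):

const path = require('path');
const { app } = require('electron');
const sqlite3 = require('sqlite3').verbose();

// Keep the database file in Electron's per-user data directory
const db = new sqlite3.Database(path.join(app.getPath('userData'), 'app.db'));

db.serialize(() => {
    db.run('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
    db.run('INSERT INTO notes (body) VALUES (?)', ['hello from electron']);
    db.each('SELECT id, body FROM notes', (err, row) => {
        if (err) return console.error(err);
        console.log(row.id, row.body);
    });
});

db.close();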
Option 2 - Configuration data
Or if you are storing configuration data, you can store it directly in the OS's userData directory.
const electron = require('electron');
const fs = require('fs');
const path = require('path');
const dataPath = electron.app.getPath('userData');
const filePath = path.join(dataPath, 'config.json');
function writeData(key, value) {
    let contents = parseData();
    contents[key] = value;
    fs.writeFileSync(filePath, JSON.stringify(contents));
}

function readData(key) {
    let contents = parseData();
    return contents[key];
}

function parseData() {
    const defaultData = {};
    try {
        return JSON.parse(fs.readFileSync(filePath));
    } catch (error) {
        return defaultData;
    }
}
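Usage is then as simple as (the key here is hypothetical):

writeData('theme', 'dark');
console.log(readData('theme')); // -> 'dark'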
There is a module that provides simple methods to get and set JSON files in this directory, creates subdirectories if needed, and supports callbacks and promises:
https://github.com/ran-y/electron-storage
Readme:
Installation
$ npm install --save electron-storage
usage
const storage = require('electron-storage');
API
storage.get(filePath, cb)

storage.get(filePath, (err, data) => {
    if (err) {
        console.error(err);
    } else {
        console.log(data);
    }
});

storage.get(filePath)

storage.get(filePath)
    .then(data => {
        console.log(data);
    })
    .catch(err => {
        console.error(err);
    });

storage.set(filePath, data, cb)

storage.set(filePath, data, (err) => {
    if (err) {
        console.error(err);
    }
});

storage.set(filePath, data)

storage.set(filePath, data)
    .then(data => {
        console.log(data);
    })
    .catch(err => {
        console.error(err);
    });

storage.isPathExists(path, cb)

storage.isPathExists(path, (itDoes) => {
    if (itDoes) {
        console.log('pathDoesExists !');
    }
});

storage.isPathExists(path)

storage.isPathExists(path)
    .then(itDoes => {
        if (itDoes) {
            console.log('pathDoesExists !');
        }
    });
You can go for IndexedDB, which is most likely suitable for client-side app needs due to:
Its built-in versioning mechanism. Client-side applications often face version fragmentation, as users don't usually update to the new version at the same time, so checking the version of the existing database and upgrading accordingly is a good idea.
It's schemaless, which allows the flexibility of adding more data to the client's storage (which happens pretty often in my experience) without having to upgrade the database to a new version, unless you create new indices.
It supports a wide range of data: basic types as well as blob data (files, images).
All in all it's a good choice. The only caveat is that Chromium may automatically wipe out IndexedDB to reclaim disk space when storage is under pressure if navigator.storage.persist is not set, and IndexedDB can be left in a corrupted state if the host machine crashes.
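As a small sketch of the versioning mechanism mentioned above (the database, store and index names are illustrative), onupgradeneeded only fires when the requested version is higher than the one on disk:

const request = indexedDB.open('app-db', 2);

request.onupgradeneeded = (event) => {
    // Runs only when version 2 is newer than the stored database version
    const db = event.target.result;
    if (!db.objectStoreNames.contains('drafts')) {
        const store = db.createObjectStore('drafts', { keyPath: 'id' });
        store.createIndex('byUpdatedAt', 'updatedAt');
    }
};

request.onsuccess = (event) => {
    const db = event.target.result;
    const tx = db.transaction('drafts', 'readwrite');
    tx.objectStore('drafts').put({ id: 1, updatedAt: Date.now(), body: 'offline draft' });
};

request.onerror = (event) => console.error(event.target.error);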
There are a plethora of ways to persist data in Electron, and choosing the right approach depends essentially on your use case(s). If it's only about saving application settings, you can use simple mechanisms such as flat files or the HTML5 storage APIs; for advanced data requirements, you should opt for larger-scale database solutions such as MySQL or MongoDB (with or without ORMs).
You can check this list of methods/tools to persist data in Electron apps
I am developing a mobile application using PhoneGap. Initially I developed it using WebSQL, but now I am planning to move it to IndexedDB. The problem is that IndexedDB does not have direct support on iOS, so after doing a lot of R&D I found that we can implement it on iOS too using the IndexedDB polyfill:
http://blog.nparashuram.com/2012/10/indexeddb-example-on-cordova-phonegap.html
http://nparashuram.com/IndexedDBShim/
Can someone please help me with how to implement this? There is not enough documentation for it and I cannot figure out any other solution/API except this.
I have tested this on Safari 5.1.7.
Below is my code and the error image:
var request1 = indexedDB.open(dbName, 5);

request1.onsuccess = function (evt) {
    db = request1.result;
    var transaction = db.transaction(["AcceptedOrders"], "readwrite");
    var objectStore = transaction.objectStore("AcceptedOrders");
    for (var i in data) {
        var request = objectStore.add(data[i]);
        request.onsuccess = function (event) {
            // alert("am again inserted")
            // event.target.result == customerData[i].ssn;
        };
    }
};

request1.onerror = function (evt) {
    alert("IndexedDB error: " + evt.target.errorCode);
};
Error Image
One blind guess
Maybe your dbName contains characters that are illegal in WebSQL database names. The polyfill doesn't translate your database names in any way, so if you create a database called my-test, it will try to create a WebSQL database with the name my-test. This name is acceptable for an IndexedDB database, but in WebSQL you'll get in trouble because of the - character. So your database name has to match both the IndexedDB and the WebSQL naming conventions.
... otherwise use the debugger
You could set a breakpoint on your alert(...); line and use the debugger to look inside the evt object. This way you may get either more information about the error itself or more information to share with us.
To do so, enable the Develop menu in the Safari advanced settings, hit F10 and go to Developer > Start debugging JavaScript (something like that, my Safari is in a different language). Now open the "Scripts" tab in the developer window, select your script and set the breakpoint by clicking on the line number. Reload the page and it should stop right in your error callback, where you can inspect the evt object.
If this doesn't help, you could get the non-minified version of the polyfill and try setting some breakpoints around its open function to find the origin of this error.
You could try my open source library https://bitbucket.org/ytkyaw/ydn-db/wiki/Home. It works on iOS and Android.
I want to use Amazon DynamoDB with Rails, but I have not found a way to implement pagination.
I will use AWS::Record::HashModel as ORM.
This ORM supports limits like this:
People.limit(10).each {|person| ... }
But I could not figure out how to implement the following MySQL query in DynamoDB.
SELECT *
FROM `People`
LIMIT 1 , 30
You issue queries using LIMIT. If the subset returned does not contain the full table, a LastEvaluatedKey value is returned. You use this value as the ExclusiveStartKey in the next query. And so on...
From the DynamoDB Developer Guide.
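As a rough sketch of that loop (shown here with the AWS SDK for JavaScript v2 rather than the Ruby ORM from the question; the table name and page size are illustrative):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

// Fetch one page; pass the previous page's LastEvaluatedKey to get the next one
async function getPage(exclusiveStartKey) {
    const params = { TableName: 'People', Limit: 30 };
    if (exclusiveStartKey) params.ExclusiveStartKey = exclusiveStartKey;
    const result = await docClient.scan(params).promise();
    // LastEvaluatedKey is undefined once there are no more pages
    return { items: result.Items, nextKey: result.LastEvaluatedKey };
}

// Usage: first call without a key, then feed nextKey back in for page 2, 3, ...
getPage().then(({ items, nextKey }) => {
    console.log(items.length, 'items; more pages:', Boolean(nextKey));
});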
You can provide 'page-size' in your query to set the result set size.
The response from DynamoDB contains 'LastEvaluatedKey', which indicates the last key as per the page size. If the response doesn't contain 'LastEvaluatedKey', it means there are no results left to fetch.
Use the 'LastEvaluatedKey' as 'ExclusiveStartKey' when fetching the next page.
I hope this helps.
DynamoDB Pagination
Here's a simple copy-paste-run proof of concept (Node.js) for stateless forward/reverse navigation with DynamoDB. In summary, each response includes the navigation history, allowing the user to explicitly and consistently request either the next or previous page (while next/prev params exist):
GET /accounts -> first page
GET /accounts?next=A3r0ijKJ8 -> next page
GET /accounts?prev=R4tY69kUI -> previous page
Considerations:
If your ids are large and/or users might do a lot of navigation, then the potential size of the next/prev params might become too large.
Yes you do have to store the entire reverse path - if you only store the previous page marker (per some other answers) you will only be able to go back one page.
It won't handle changing pageSize midway, consider baking pageSize into the next/prev value.
base64 encode the next/prev values, and you could also encrypt.
Scans are inefficient, while this suited my current requirement it won't suit all!
// demo.js
const mockTable = [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20]

const getPagedItems = (pageSize = 5, cursor = {}) => {
    // Parse cursor
    const keys = cursor.next || cursor.prev || [] // fwd first
    let key = keys[keys.length-1] || null // eg ddb's PK

    // Mock query (mimic dynamodb response)
    const Items = mockTable.slice(parseInt(key) || 0, pageSize+key)
    const LastEvaluatedKey = Items[Items.length-1] < mockTable.length
        ? Items[Items.length-1] : null

    // Build response
    const res = {items:Items}
    if (keys.length > 0) // add reverse nav keys (if any)
        res.prev = keys.slice(0, keys.length-1)
    if (LastEvaluatedKey) // add forward nav keys (if any)
        res.next = [...keys, LastEvaluatedKey]
    return res
}

// Run test ------------------------------------
const runTest = () => {
    const PAGE_SIZE = 6
    let x = {}, i = 0

    // Page to end
    while (i == 0 || x.next) {
        x = getPagedItems(PAGE_SIZE, {next:x.next})
        console.log(`Page ${++i}: `, x.items)
    }

    // Page back to start
    while (x.prev) {
        x = getPagedItems(PAGE_SIZE, {prev:x.prev})
        console.log(`Page ${--i}: `, x.items)
    }
}

runTest()
I faced a similar problem.
The generic pagination approach is to use a "start index" or "start page" and the "page length".
The "ExclusiveStartKey" and "LastEvaluatedKey" based approach is very DynamoDB specific.
I feel this DynamoDB specific implementation of pagination should be hidden from the API client/UI.
Also, in case the application is serverless, using a service like Lambda, it will not be possible to maintain state on the server. The flip side is that the client implementation would become very complex.
I came up with a different approach, which I think is generic (and not specific to DynamoDB):
When the API client specifies the start index, fetch all the keys from the table and store them in an array.
Find the key for the start index from the array, as specified by the client.
Make use of the ExclusiveStartKey and fetch the number of records, as specified in the page length.
If the start index parameter is not present, the above steps are not needed; we don't need to specify the ExclusiveStartKey in the scan operation.
This solution has some drawbacks:
We will need to fetch all the keys when the user needs pagination with a start index.
We will need additional memory to store the IDs and the indexes.
Additional database scan operations (one or multiple) to fetch the keys.
But I feel this will be a very easy approach for the clients that are using our APIs. Backward scanning will work seamlessly. If the user wants to see the "nth" page, this will be possible.
In fact, I faced the same problem and noticed that LastEvaluatedKey and ExclusiveStartKey do not work well, especially when using Scan, so I solved it like this:
GET /?page_no=1&page_size=10 =====> first page
The response will contain the count of records and the first 10 records.
Retry and increase the page number until all records have been fetched.
Code is below
PS: I am using python
first_index = ((page_no-1)*page_size)
second_index = (page_no*page_size)

if (second_index > len(response['Items'])):
    second_index = len(response['Items'])

return {
    'statusCode': 200,
    'count': response['Count'],
    'response': response['Items'][first_index:second_index]
}