Airtable does not return all records via API - airtable

I load Airtable records via their API:
const tableRecords = [];
const base = airtable.base(item.baseId);
base("Dishes")
  .select({})
  .eachPage(
    function page(records, fetchNextPage) {
      tableRecords.push(...records);
      // To fetch the next page of records, call `fetchNextPage`.
      // If there are more records, `page` will get called again.
      // If there are no more records, `done` will get called.
      fetchNextPage();
    },
    function done(err) {
      if (err) {
        console.error(err);
        return;
      }
      console.log("##Done", tableRecords.length);
    }
  );
As a result, I receive 2202 records, but in the table UI I see 2271 records, and when I export to CSV I see the same 2271 as well.
The code is pretty basic; I even removed the view setting to make sure it's not a presentational issue.
Googling did not help (nothing related). Has anyone faced the same issue? Any solution?
NB: I have already compared both lists and found the items I'm missing, but looking at those items I see nothing special about them. So I know which records I'm missing, but not why.

const base = airtable.base(item.baseId);
base("Dishes")
  .select({})
  .all()
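For reference, .all() pages through every record internally and returns a promise that resolves with the full list, so the count can be compared directly against the UI. A minimal sketch, reusing the airtable client and item.baseId from the question:
const base = airtable.base(item.baseId);

base("Dishes")
  .select({})
  .all()
  .then((records) => {
    // All pages have been fetched at this point.
    console.log("##Done", records.length);
  })
  .catch((err) => {
    console.error(err);
  });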

Related

Fovea in-app purchase plugin / several database entries

I have a problem with the in-app purchase Cordova plugin by Fovea.
I am kinda confused.
What I want to do is: when the user chooses a product (a monthly subscription), I do what I need to do, process the payment and all that jazz, and when everything is done, I save an entry in my database to indicate that the user is subscribed (plus some more info).
However, when I use it, I see that I have not one but at least 10 saved entries, for the same user and the same product.
I have no idea why it does that, so I guess something is wrong with my code. As I use the sandbox for testing the iOS side, sometimes the pop-up just doesn't appear (the one where you need to enter your password and confirm your purchase), and yet I have 10-20 entries saved in my database (I first put that bit of code where the product is owned, then in the .finished event).
Can someone help me?
Here is the code
let produit = null;

if (!(window as any).store) {
  alert('Store indispo');
}

this.store.register({
  id: this.valeurEnvoi.app,
  alias: 'abonnement',
  type: store.PAID_SUBSCRIPTION,
});
console.log(this.valeurEnvoi.app);

this.store.refresh();

this.store.ready(() => {
  produit = this.store.get(this.valeurEnvoi.app);
  this.envoiLogs('récupération du produit', 'NO ID');
  if (produit.canPurchase) {
    this.store.order(produit);
  }
});

this.store.refresh();
// this.store.manageSubscriptions();

this.store.when(produit).updated((p) => {
  if (p.owned) {
  } else {
  }
});

this.store.refresh();
console.log(this.store.log);

this.store.error(error => {
  this.testLog = error.message;
  alert('erreur: ' + error.code + ' message: ' + error.message);
});

this.store.when(produit).approved((order) => {
  order.finish();
});

this.store.refresh();

this.store.when(produit).finished((order) => {
  const test = this.store.findInLocalReceipts(produit);
  alert(test.transactionId);
  this.confirmerAchatMobile();
});
Also, if I could get some pointers on how to obtain the transactionId, that would be great! Thank you!
If you see any strange stuff, let me know; it will help.
I took a look at the repo. refresh() appears to be deprecated:
/**
 * @deprecated - use store.initialize(), store.update() or store.restorePurchases()
 */
refresh() {
  throw new Error("use store.initialize() or store.update()");
}
source: https://github.com/j3k0/cordova-plugin-purchase/blob/master/www/store.js
Also, from a comment I infer that transactions are replayed if you call refresh() multiple times; could that cause your issue? "Notice that all previous transactions will be replayed if you call store.refresh() a second time in the lifetime of your app."
source: https://github.com/j3k0/cordova-plugin-purchase/issues/1298
There are quite a few (109) issues on the GitHub repo, so I think you might find your answer there. If not, I'd create an issue there; the maintainers are probably more likely to have an answer for you.
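If you do migrate off refresh(), a rough sketch of the newer flow is below. This is only an illustration based on the deprecation notice above and the plugin's v13-style CdvPurchase API; the product id is a placeholder and the exact names should be checked against the version you actually have installed.
document.addEventListener('deviceready', () => {
  const { store, ProductType, Platform } = CdvPurchase;

  // Register the subscription once (the product id here is a placeholder).
  store.register([{
    id: 'my.monthly.subscription',
    type: ProductType.PAID_SUBSCRIPTION,
    platform: Platform.APPLE_APPSTORE,
  }]);

  // React to transactions through the event chain and finish each one once,
  // instead of saving a database entry every time an "owned" state is replayed.
  store.when()
    .approved(transaction => transaction.verify())
    .verified(receipt => receipt.finish());

  // initialize() replaces the repeated refresh() calls; call update() or
  // restorePurchases() later only when you actually need them.
  store.initialize([Platform.APPLE_APPSTORE]);
});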

How to structure data in Firestore using swift [duplicate]

The documentation does not have any examples on how to add a subcollection to a document. I know how to add a document to a collection and how to add data to a document, but how do I add a collection (subcollection) to a document?
Shouldn't there be some method like this:
dbRef.document("example").addCollection("subCollection")
Edit 13 Jan 2021:
According to the updated documentation regarding array membership, it is now possible to filter data based on array values using the whereArrayContains() method. A simple example would be:
CollectionReference citiesRef = db.collection("cities");
citiesRef.whereArrayContains("regions", "west_coast");
This query returns every city document where the regions field is an array that contains west_coast. If the array has multiple instances of the value you query on, the document is included in the results only once.
Assuming we have a chat application with a database structure similar to this (a rooms collection whose documents each hold a messages subcollection):
To write to a subcollection in a document, use the following code:
DocumentReference messageRef = db
    .collection("rooms").document("roomA")
    .collection("messages").document("message1");
Creating a messages collection and calling addDocument() 1000 times will be expensive for sure, but this is how Firestore works. You can switch to the Firebase Realtime Database if you want, where the number of writes doesn't matter. Regarding supported data types in Firestore, you can in fact use an array, because it is supported. In the Firebase Realtime Database you could also use an array, but there it is an anti-pattern; one of the many reasons Firebase recommends against arrays is that they make the security rules impossible to write.
Cloud Firestore can store arrays, but it does not support querying array members or updating single array elements. However, you can still model this kind of data by leveraging the other capabilities of Cloud Firestore. Here is the documentation where this is explained very well.
You also cannot create a subcollection with 1000 messages, add all of them to the database, and expect it to be considered a single record; it will be counted as one write operation per message, 1000 operations in total. The structure above does not show how to retrieve data; it shows a database structure in which you have something like this:
collection -> document -> subCollection -> document
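For comparison, here is a sketch of the same path in the web SDK (v8-style syntax as used in later answers, with db assumed to be an initialized Firestore instance):
// Web-SDK equivalent of the Java snippet above.
const messageRef = db
  .collection("rooms").doc("roomA")
  .collection("messages").doc("message1");

// Writing the document is what makes the messages subcollection appear;
// each message written this way counts as one write operation.
messageRef.set({ text: "Hello", sentAt: new Date() })
  .then(() => console.log("message1 written"))
  .catch((err) => console.error(err));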
Here's a variation where the subcollection stores ID values at the collection level, rather than within a document where the subcollection would be a field with additional data.
This is useful for a one-to-many ID mapping without having to drill through an additional document:
function fireAddStudentToClassroom(studentUserId, classroomId) {
  var db = firebase.firestore();
  var studentsClassroomRef = db
    .collection('student_class').doc(classroomId)
    .collection('students');

  studentsClassroomRef
    .doc(studentUserId)
    .set({})
    .then(function () {
      console.log('Document Added ');
    })
    .catch(function (error) {
      console.error('Error adding document: ', error);
    });
}
Thanks to @Alex's answer.
This answer is a bit off from the original question, which explicitly asks about adding a collection to a document. However, after searching for a solution to this scenario and not finding any mention in the docs or on SO, this post seems like a reasonable place to share the findings.
Here's my code:
firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).set(wordData)
  .then(function() {
    console.log("Collection added to Firestore!");
    var promises = [];
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('AudioSources').doc($scope.accentDialect).set(accentDialectObject));
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('FunFacts').doc($scope.longLanguage).set(funFactObject));
    promises.push(firebase.firestore().collection($scope.longLanguage + 'Words').doc($scope.word).collection('Translations').doc($scope.translationLongLanguage).set(translationObject));
    Promise.all(promises).then(function() {
      console.log("All subcollections were added!");
    })
    .catch(function(error) {
      console.log("Error adding subcollections to Firestore: " + error);
    });
  })
  .catch(function(error) {
    console.log("Error adding document to Firestore: " + error);
  });
This makes a collection EnglishWords, which has a document 'of'. The document 'of' has three subcollections: AudioSources (recordings of the word in American and British accents), FunFacts, and Translations. The subcollection Translations has one document: Spanish. The Spanish document has three key-value pairs, telling you that 'de' is the Spanish translation of 'of'.
The first line of the code creates the collection EnglishWords. We wait for the promise to resolve with .then, and then we create the three subcollections. Promise.all tells us when all three subcollections are set.
IMHO, I use arrays in Firestore when the entire array is uploaded and downloaded together, i.e., I don't need to access individual elements. For example, an array of the letters of the word 'of' would be ['o', 'f']. The user can ask, "How do I spell 'of'?" The user isn't going to ask, "What's the second letter in 'of'?"
I use collections when I need to access individual elements, a.k.a. documents. With the older Firebase Realtime Database, I had to download arrays and then iterate through them with forEach to get the element I wanted. That was a lot of code, and with a deep data structure and/or large arrays I was downloading tons of data I didn't need and slowing down my app by running forEach loops over large arrays. Firestore puts the iterators in the database, on their end, so I can request a single element and it sends me just that element, saving bandwidth and making my app run faster. This might not matter for a web app on a broadband connection, but for mobile apps with poor data connections and slow devices it is important.
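To illustrate that last point, a single nested document can be requested on its own. A sketch only, reusing the EnglishWords example above with the v8-style web syntax (field names are illustrative):
// Fetch just the Spanish translation of "of" instead of downloading and
// iterating over a whole array of translations.
firebase.firestore()
  .collection("EnglishWords").doc("of")
  .collection("Translations").doc("Spanish")
  .get()
  .then((snapshot) => {
    if (snapshot.exists) {
      console.log(snapshot.data()); // e.g. { translation: "de" }
    }
  })
  .catch((err) => console.error(err));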
From the docs:
You do not need to "create" or "delete" collections. After you create the first document in a collection, the collection exists. If you delete all of the documents in a collection, it no longer exists.
I faced the same issue and solved it with the answer from @Thomas David Kehoe:
db.collection("First collection Name").doc("Id of the document").collection("Nested collection Name").add({
//your data
}).then((data) => {
console.log(data.id);
console.log("Document has added")
}).catch((err) => {
console.log(err)
})
Too late for an answer, but here is what worked for me:
mFirebaseDatabaseReference?.collection("conversations")?.add(Conversation("User1"))
  ?.addOnSuccessListener { documentReference ->
    Log.d(TAG, "DocumentSnapshot written with ID: " + documentReference.id)
    mFirebaseDatabaseReference?.collection("conversations")?.document(documentReference.id)
      ?.collection("messages")?.add(Message(edtMessage?.text.toString()))
  }?.addOnFailureListener { e ->
    Log.w(TAG, "Error adding document", e)
  }
Add a success listener for adding the document and use the Firebase-generated ID for the path.
Use this ID in the complete path of the new collection you want to add, i.e.:
dbReference.collection('yourCollectionName').document(firebaseGeneratedID).collection('yourCollectionName').add(yourDocumentPOJO/Object)
Okay, so I recently faced a similar problem given the recent update to the Firebase/Firestore documentation.
Here is a solution that worked for me:
// v9 modular imports (db, COLLECTION_NAME, projectId, etc. come from the app's own setup)
import { doc, setDoc, serverTimestamp } from "firebase/firestore";
import { nanoid } from "nanoid";

const sendMessage = async () => {
  await setDoc(doc(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME, nanoid()), {
    text: 'this is a sample text',
    createdAt: serverTimestamp(),
    name: currentUser?.firstName + ' ' + currentUser?.lastName,
    photoUrl: currentUser?.photoUrl,
    userId: currentUser?.id,
  });
}
You can find a similar example in the docs
https://firebase.google.com/docs/firestore/data-model#web-version-9_3
If you wish to listen for live updates, you can use a similar method as follows:
import { collection, query, orderBy, limit, onSnapshot } from "firebase/firestore";

const messagesRef = collection(db, COLLECTION_NAME, projectId, SUB_COLLECTION_NAME);

const liveUpdate = async () => {
  const queryObj = query(messagesRef, orderBy("createdAt"), limit(25));
  onSnapshot(queryObj, (querySnapshot) => {
    const msgArr: any = [];
    querySnapshot.forEach((doc) => {
      msgArr.push({ id: doc.id, ...doc.data() });
    });
    console.log(msgArr);
  });
}
There is no separate method to add a subcollection to a document.
You can just call the collection() method itself.
If the collection exists, it will be referenced; otherwise a new one is created.
dbRef.document("example").collection("subCollection")
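One caveat, based on the docs quoted earlier: collection() only returns a reference; the subcollection actually shows up once a first document is written under it. A sketch in v8-style web syntax (the "examples" collection name and the db instance are assumptions standing in for dbRef above):
db.collection("examples").doc("example")
  .collection("subCollection")
  .add({ createdAt: new Date() })
  .then((docRef) => console.log("Subcollection now exists, first doc:", docRef.id))
  .catch((err) => console.error(err));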

How to refresh a previously bound entity every time the user visits the page

I just ran into a problem that I am not sure how to solve.
Background: I've got an app with two views:
1st one to input a number,
2nd one to see the details.
After the view switches to the detail view, I call bindElement() to get my data from the backend:
_onRoutePatternMatched: function(oEvent) {
  // ...
  this.getView().bindElement({
    path: "/EntitySet('" + id + "')"
  });
},
The problem is that the ID is quite often the same; hence, the method calls the backend only if the ID is different from the last call.
So I tried to solve the problem by using the following:
this.getView().getModel().read("/EntitySet('" + id + "')", {
  success: function(oData, response) {
    that.getView().setModel(oData, "");
  }
});
This way, the data is always up to date. But now the binding is a bit different.
Binding with bindElement():
{
  "id": "1234",
  "property1": "abc",
  // ...
}
Binding with setModel() and id = 1234:
{
  "EntitySet('1234')": {
    "id": "1234",
    "property1": "abc",
    // ...
  }
}
For the first way, my binding looked like this:
<ObjectHeader title="{id}">
Now, it would have to look like this:
<ObjectHeader title="{/EntitySet('1234')/id}">
And here I have a problem, because the value of the id (in this case 1234) will always be different, so a hardcoded binding path won't work. I can't bind directly to the ObjectHeader, because I will need some properties from the model later; that is the reason I am binding to the view, so that they all remain available.
My idea was to edit the binding inside the success handler of the read method: I would like to remove the surrounding element. Do you have an idea how to do this? Or even a simpler/better idea to solve my problem?
Edit:
I forgot to mention the refresh method. This would be possible, but where do I have to put it? I don't want to call the backend twice.
Simply call the API myODataModel.invalidateEntry(<key>) before binding the context in order to retrieve the latest data.
// after $metadata is loaded...
const model = this.getOwnerComponent().getModel("odata");
const key = model.createKey(/*...*/); // See https://stackoverflow.com/a/47016070/5846045

model.invalidateEntry(key); // <-- before binding

this.getView().bindElement({
  path: "odata>/" + key,
  // ...
});
From https://embed.plnkr.co/b0bXJK?show=controller/Detail.controller.js,preview
invalidateEntry (API docs):
Invalidate a single entry in the model data.
Mark the selected entry in the model cache as invalid. Next time a context binding or list binding is done, the entry will be detected as invalid and will be refreshed from the server.
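Put together with the routing code from the question, the handler might look roughly like this. A sketch only: it assumes an ODataModel registered under the name "odata" (as in the snippet above) and a route that carries the id as a parameter.
_onRoutePatternMatched: function (oEvent) {
  // Assumption: the route defines an "id" parameter.
  const id = oEvent.getParameter("arguments").id;
  const model = this.getOwnerComponent().getModel("odata");

  model.metadataLoaded().then(() => {
    const key = model.createKey("EntitySet", { id: id });
    model.invalidateEntry(key); // mark the cached entry as stale
    this.getView().bindElement({
      path: "odata>/" + key,    // re-reads from the backend even if the id is unchanged
    });
  });
},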

Parse iOS not retrieving pointer data

I have a Cloud Code function that queries some data and returns it separated into groups. For each record in these groups, I also need to get information from a column which is a pointer. My Cloud Code is:
var now = new Date(); // current time used for grouping below (not shown in the original snippet)
var queryAuctionParticipant = new Parse.Query('AuctionParticipant');
queryAuctionParticipant.equalTo('userId', request.user);
queryAuctionParticipant.include('auctionId');
queryAuctionParticipant.include('auctionId.creatorId');
queryAuctionParticipant.findAll({
  success: function(resultAuctionParticipant) {
    var grouped = {
      upcoming: [],
      previous: [],
      running: [],
    };
    for (var i = 0; i < resultAuctionParticipant.length; i++) {
      var obj = resultAuctionParticipant[i];
      var auctionId = obj.get('auctionId');
      if (obj.get('auctionId').get('startsAt') > now) {
        grouped.upcoming.push(auctionId);
      } else if (obj.get('auctionId').get('endsAt') < now) {
        grouped.previous.push(auctionId);
      } else {
        grouped.running.push(auctionId);
      }
    }
    response.success(grouped);
  },
  error: function(error) {
    response.error(error, error.code);
  }
});
Everything is fine up to here. If I console.log the .get('auctionId').get('creatorId') of any of the records inside the Cloud Code function, all the data is logged perfectly, no problem.
However, in the result of my callFunctionInBackground, the creatorId column is converted to a PFUser instance (correct), but the instance is empty; none of the columns are present.
It seems that Parse's SDK isn't parsing the result from JSON to PFObject/PFUser at a very deep level. This means that if I have pointers inside pointers, it won't retrieve the data, even when using Parse's mechanism for including keys and/or pointers.
Any thoughts?
Thanks!
I had a similar problem some time ago in Cloud Code, where somehow all the fields were emptied upon request.
It turned out to be a bug in Parse's newest JavaScript SDK. Please have a look at your Cloud Code folder: it should contain a global.json file where you can specify the JS SDK version. By default the version states "latest"; change it to "1.4.2" and upload your Cloud Code folder again.
In case the global.json file is missing in your Cloud Code folder, please have a look at the above-mentioned thread, where I described how to create it manually.
Hope this works for you!

nsIProtocolHandler: trouble loading image for html page

I'm building an nsIProtocolHandler implementation in Delphi. (more here)
And it's working already. Data the module builds gets streamed over an nsIInputStream. I've got all the nsIRequest, nsIChannel and nsIHttpChannel methods and properties working.
I've started testing and I run into something strange. I have a page "a.html" with this simple HTML:
<img src="a.png">
Both "xxm://test/a.html" and "xxm://test/a.png" work in Firefox, and give above HTML or the PNG image data.
The problem is with displaying the HTML page, the image doesn't get loaded. When I debug, I see:
NewChannel gets called for a.png, (when Firefox is processing an OnDataAvailable notice on a.html),
NotificationCallbacks is set (I only need to keep a reference, right?)
RequestHeader "Accept" is set to "image/png,image/*;q=0.8,*/*;q=0.5"
but then, the channel object is released (most probably due to a zero reference count)
Looking at other requests, I would expect some other properties to get set (such as LoadFlags or OriginalURI) and AsyncOpen to get called, from where I can start getting the request responded to.
Does anybody recognise this? Am I doing something wrong? Perhaps with LoadFlags or the LoadGroup? I'm not sure when to call AddRequest and RemoveRequest on the LoadGroup, and peeping from nsHttpChannel and nsBaseChannel I'm not sure it's better to call RemoveRequest early or late (before or after OnStartRequest or OnStopRequest)?
Update: Checked on the brand-new Firefox 3.5; still the same.
Update: To try to further isolate the issue, I tried "file://test/a1.html" with <img src="xxm://test/a.png" /> and still only get the above sequence of events. If I'm supposed to add this secondary request to a load group to get AsyncOpen called on it, I have no idea where to get a reference to it.
There's more: I find only one instance of the "Accept" string that gets added to the request headers; it queries for nsIHttpChannelInternal right after creating a new channel, but I don't even get this QueryInterface call through... (I posted it here)
Me again.
I am going to quote the same stuff from nsIChannel::asyncOpen():
If asyncOpen returns successfully, the channel is responsible for keeping itself alive until it has called onStopRequest on aListener or called onChannelRedirect.
If you go back to nsViewSourceChannel.cpp, there's one place where loadGroup->AddRequest is called and two places where loadGroup->RemoveRequest is being called.
nsViewSourceChannel::AsyncOpen(nsIStreamListener *aListener, nsISupports *ctxt)
{
    NS_ENSURE_TRUE(mChannel, NS_ERROR_FAILURE);

    mListener = aListener;

    /*
     * We want to add ourselves to the loadgroup before opening
     * mChannel, since we want to make sure we're in the loadgroup
     * when mChannel finishes and fires OnStopRequest()
     */
    nsCOMPtr<nsILoadGroup> loadGroup;
    mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
    if (loadGroup)
        loadGroup->AddRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this), nsnull);

    nsresult rv = mChannel->AsyncOpen(this, ctxt);

    if (NS_FAILED(rv) && loadGroup)
        loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                 nsnull, rv);

    if (NS_SUCCEEDED(rv)) {
        mOpened = PR_TRUE;
    }

    return rv;
}
and
nsViewSourceChannel::OnStopRequest(nsIRequest *aRequest, nsISupports* aContext,
                                   nsresult aStatus)
{
    NS_ENSURE_TRUE(mListener, NS_ERROR_FAILURE);

    if (mChannel)
    {
        nsCOMPtr<nsILoadGroup> loadGroup;
        mChannel->GetLoadGroup(getter_AddRefs(loadGroup));
        if (loadGroup)
        {
            loadGroup->RemoveRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                     nsnull, aStatus);
        }
    }

    return mListener->OnStopRequest(NS_STATIC_CAST(nsIViewSourceChannel*, this),
                                    aContext, aStatus);
}
Edit:
As I have no clue about how Mozilla works, I have to guess from reading some code. From the channel's point of view, once the original file is loaded, its job is done. If you want to load the secondary items linked in the file, like an image, you have to implement that in the listener. See TestPageLoad.cpp. It implements a crude parser and retrieves child items upon OnDataAvailable:
NS_IMETHODIMP
MyListener::OnDataAvailable(nsIRequest *req, nsISupports *ctxt,
                            nsIInputStream *stream,
                            PRUint32 offset, PRUint32 count)
{
    //printf(">>> OnDataAvailable [count=%u]\n", count);
    nsresult rv = NS_ERROR_FAILURE;
    PRUint32 bytesRead = 0;
    char buf[1024];

    if (ctxt == nsnull) {
        bytesRead = 0;
        rv = stream->ReadSegments(streamParse, &offset, count, &bytesRead);
    } else {
        while (count) {
            PRUint32 amount = PR_MIN(count, sizeof(buf));
            rv = stream->Read(buf, amount, &bytesRead);
            count -= bytesRead;
        }
    }

    if (NS_FAILED(rv)) {
        printf(">>> stream->Read failed with rv=%x\n", rv);
        return rv;
    }

    return NS_OK;
}
The important thing is that it calls streamParse(), which looks at the src attribute of img and script elements and calls auxLoad(), which creates a new channel with a new listener and calls AsyncOpen():
uriList->AppendElement(uri);
rv = NS_NewChannel(getter_AddRefs(chan), uri, nsnull, nsnull, callbacks);
RETURN_IF_FAILED(rv, "NS_NewChannel");
gKeepRunning++;
rv = chan->AsyncOpen(listener, myBool);
RETURN_IF_FAILED(rv, "AsyncOpen");
Since it passes another instance of the MyListener object in there, that listener can also load more child items, ad infinitum, like a Russian-doll situation.
I think I found it (myself); take a close look at this page. Why it doesn't highlight that the UUID has changed over versions isn't clear to me, but it would explain why things fail when (or just prior to) calling QueryInterface on nsIHttpChannelInternal.
With the new(er) UUID, I'm getting better results. As I mentioned in an update to the question, I've posted this on bugzilla.mozilla.org; I'm curious whether and what response I will get there.

Resources