I'm trying to query for an id inside an array field of a single document, but so far I've only found a way to query across the whole collection, which I don't think is the most efficient approach.
This is what I'm thinking about:
theCollectionReference.document("theDocumentId").whereField("fieldName", arrayContains: ["theIdImLookingFor"]).getDocument {
//the code remaining
}
I know that the code above is wrong, but that is the idea I'm trying to implement!
This is my database:
If you want to do anything at all with a single document, simply get() it and examine the contents of the document snapshot to see if it contains what you want. There is no need for a full query just to see if a document contains some value.
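A minimal sketch of that idea with the Firestore JavaScript SDK (the collection, field, and id names are placeholders taken from the question; the same pattern applies in Swift):

theCollectionReference.doc("theDocumentId").get().then((snapshot) => {
    // Read the array field from the snapshot and check it locally.
    const values = snapshot.get("fieldName") || [];
    if (values.includes("theIdImLookingFor")) {
        // the document's array contains the id
    }
});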
I'm developing an application in Quasar/Electron and using Dexie/IndexedDB for my database. I want to find all distinct records in the database that contain both my Event ID and a Dog ID (both key indexed fields). I am able to do this with the following code:
await myDB.runTable
.orderBy('[fk_event+fk_dog]')
.eachUniqueKey((theDuo) => {
this.runsArray.push({eventID: theDuo[0], dogID: theDuo[1]})
})
I'm using a combined key, which is working well. However, I need more of each record than just the keys; I need a few more fields. Is this possible?
I was trying to get records with the unique key function while also using the where function, but that doesn't seem to work.
I need to get all the unique (distinct?) dogs in the table that are in a particular event. And also get their corresponding information. I'm not sure if there is a better, more efficient way to do this? I can always pull out all the records and loop through them to build a custom array, I was just hoping to do this at the table read level. (yeah I'm still in tables/records even though these are collections etc. :p ).
Even the above code gives me all the events, and I can pull out what I need with a filter. I just was thinking it would be faster and more efficient to do it at the read level.
this.enteredRuns = this.runsArray.filter((theEvent) => {
return ( (theEvent.eventID == this.currentEventID) )
})
Try
await myDB.runTable
.orderBy('[fk_event+fk_dog]')
.clone({unique: "unique"})
.toArray()
I know this isn't documented, but it should do the trick of using a unique cursor while still extracting the whole objects and not just the keys. You cannot combine it with where, but you could use .filter. Just be aware that not all records will be scanned, as it will jump over records with the same keys - selecting only the first record visited for each key.
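For instance, a sketch under the question's assumptions (fk_event holds the event id) that combines the unique cursor with .filter to get one full record per dog in the current event:

const uniqueDogRuns = await myDB.runTable
    .orderBy('[fk_event+fk_dog]')
    .filter((run) => run.fk_event === this.currentEventID) // narrow to one event
    .clone({ unique: "unique" }) // one record per unique [fk_event+fk_dog] key
    .toArray();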
I have proven to myself that I can insert text into a Google Docs document using this code:
function appendToDocument() {
let offset = 12;
let updateObject = {
documentId: 'xxxxxxx',
resource: {
requests: [{
"insertText": {
"text": "John Doe",
"location": {
"index": offset,
},
},
}],
},
};
gapi.client.docs.documents.batchUpdate(updateObject).then(function(response) {
appendPre('response = ' + JSON.stringify(response));
}, function(response) {
appendPre('Error: ' + response.result.error.message);
});
}
My next step is to create an entire, complex document using the API. I am stunned by what appears to be the fact that I need to keep track of locations (indices) into the document, like this
new Location().setIndex(25)
I am informing myself of that opinion by reading this: https://developers.google.com/docs/api/how-tos/move-text
The document I am trying to create is very dynamic and very complex, and handing the challenge of keeping track of index values to the API user, rather than the API designer, seems odd.
Is there an approach, or a higher-level API, that allows me to construct a document without this kind of housekeeping?
Unfortunately, the short answer is no, there's no API that lets you bypass the index-tracking required of the base Google Docs API - at least when it comes to building tables.
I recently had to tackle this issue myself - a combination of template updating and document construction - and I basically ended up writing an intermediate API with helper functions to search for and insert by character indices.
For example, one trick I've been using for table creation is to first create a table of a specified size at a given index, and put some text in the first cell. Then I can search the document object for the tableCells element that contains that text, and work back from there to get the table start index.
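In gapi terms, a hedged sketch of that first step (the document id is a placeholder; index 1 assumes you are inserting at the start of the body):

// Step 1: create a table of the desired size at a known location.
gapi.client.docs.documents.batchUpdate({
    documentId: 'xxxxxxx',
    resource: {
        requests: [{
            "insertTable": {
                "rows": 3,
                "columns": 3,
                "location": { "index": 1 },
            },
        }],
    },
});
// Step 2: re-fetch the document with documents.get, locate the first
// cell's content startIndex in the JSON, and insertText a marker there.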
Another trick is that if you know how many specific kinds of objects (like tables) you have in your document, you can parse through the full document object and keep track of table counts, and stop when you get to the one you want to update/delete (you can use this approach for creating too but the target text approach is easier, I find).
From there, with some JSON parsing and trial and error, you can figure out the start index of each cell in a table, and write functions to programmatically find and create/replace/delete content. If there's an easier way to do all this, I haven't found it. (There is one GitHub repo with a Google Docs API wrapper specifically for tables, and it does appear to be active, although I found it after I wrote everything on my own and I haven't used it.)
Here's a bit of code to get you started:
def get_target_table(doc, target_txt):
    """ Given a target string to be matched in the upper left column of a table
    of a Google Docs JSON object, return JSON representing that table. """
    body = doc["body"]["content"]
    for element in body:
        # The last key identifies the element type (e.g. "table", "paragraph").
        el_type = list(element.keys())[-1]
        if el_type == "table":
            header_txt = get_header_cell_text(element['table']).lower().strip()
            if target_txt.lower() in header_txt:
                return element
    return None

def get_header_cell_text(table):
    """ Given a table element in Google Docs API JSON, find the text of
    the first cell in the first row, which should be a column header. """
    return table['tableRows'][0]\
        ['tableCells'][0]\
        ['content'][0]\
        ['paragraph']['elements'][0]\
        ['textRun']['content']
Assuming you've already created a table with the target text in it: now, start by pulling the document JSON object from the API, and then use get_target_table() to find the chunk of JSON related to the table.
doc = build("docs", "v1", credentials=creds).documents().get(documentId=doc_id).execute()
table = get_target_table(doc, "my target")
From there you'll see the nested tableRows and tableCells objects, and the content inside each cell has a startIndex. Construct a matrix of table cell start indices, and then, for populating them, work backwards from the bottom right cell to the upper left, to avoid displacing your stored indices (as suggested in the docs and in one of the comments).
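A sketch of that back-to-front fill, in the same gapi style as the question. cellStartIndices and cellTexts are hypothetical: a flattened matrix of the content start indices found in the fetched JSON, and the strings to insert, in the same order:

function fillTableBackwards(documentId, cellStartIndices, cellTexts) {
    const requests = [];
    // Walk from the bottom-right cell back to the upper-left one, so that
    // each insertion does not displace the still-unused earlier indices.
    for (let i = cellStartIndices.length - 1; i >= 0; i--) {
        requests.push({
            "insertText": {
                "text": cellTexts[i],
                "location": { "index": cellStartIndices[i] },
            },
        });
    }
    return gapi.client.docs.documents.batchUpdate({
        documentId: documentId,
        resource: { requests: requests },
    });
}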
It's definitely a bit of a slog. And styling table cells is a whole 'nother beast, a dizzying maze of JSON options. The interactive JSON constructor tool on the Docs API site is useful for getting the syntax right.
Hope this helps, good luck!
The answer I arrived at: You can create Docs without using their JSON schema.
https://developers.google.com/drive/api/v3/manage-uploads#node.js_1
So, create the document in your format of choice (HTML, DocX, or Markdown - you'd use pandoc to convert MD to another format), and then upload that.
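For example, a rough sketch with the Node.js googleapis client (names are illustrative), uploading HTML and letting Drive convert it to a Google Doc:

const { google } = require('googleapis');

async function createDocFromHtml(auth, html) {
    const drive = google.drive({ version: 'v3', auth });
    // The target mimeType of a Google Doc tells Drive to convert the uploaded HTML.
    const res = await drive.files.create({
        requestBody: {
            name: 'Generated document',
            mimeType: 'application/vnd.google-apps.document',
        },
        media: { mimeType: 'text/html', body: html },
        fields: 'id',
    });
    return res.data.id; // id of the new Google Doc
}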
I have an ExtJS component.
I set an itemId for it, but its id is autogenerated.
Now Ext.getCmp('autogenerated-id') returns my component.
But Ext.ComponentQuery.query('#autogenerated-id') returns an empty array.
But:
Ext.ComponentQuery.query('[id=assets-information-form-1918]') again returns my component. :)
I use ExtJs 6.5.3 classic.
It seems like the itemId config property hides the autogenerated id from Ext.ComponentQuery, so they become mutually exclusive.
I don't need other means of searching, advice to set an id on the component, or suggestions to write a letter to Sencha support or a post on their forum.
I need:
1. A way to force my Ext.ComponentQuery.query('#autogenerated-id') to find the component for which getId() returns 'autogenerated-id'.
2. If that is not possible by design, a link to some documentation describing this behavior, a link to some bug report, or a filename and line number in the ExtJS sources plus a little snippet copied from there.
From the documentation:
Summary: Provides searching of Components within Ext.ComponentManager (globally) or a specific Ext.container.Container on the document with a similar syntax to a CSS selector. Returns Array of matching Components, or empty Array.
Ext.ComponentQuery.query('#itemId') returns an array. Your code above is using the auto-generated id of the component. The # indicates a query based on the component's itemId, not the component's id.
Try
Ext.ComponentQuery.query('[id=assets-information-form-1918]');
which will return an array, as noted in the documentation.
Ext.getCmp()
This is shorthand reference to Ext.ComponentManager#get. Looks up an existing Ext.Component by id.
Therefore it returns the component object.
Ext.ComponentQuery.query('#itemId')[0] would return the first object in the array.
Ext.ComponentQuery is the Sencha-preferred method because it is more powerful: it returns an array, and you can also query items by xtype and other attributes.
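For example (the selectors here are illustrative):

// By attribute, as above - matches the component's id property.
var byId = Ext.ComponentQuery.query('[id=assets-information-form-1918]')[0];
// By xtype combined with an attribute - not possible with Ext.getCmp().
var emailFields = Ext.ComponentQuery.query('textfield[name=email]');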
In my stored procedures I often have to access another document, and currently I do a query, e.g. var query = 'SELECT * from foo f where f.id = "bar"';
I know this will always return 1 result, so is there a way I can access the document directly by id without having to do a query?
You can call a document directly through the REST API with the following URL when using SQL(Core):
https://{databaseaccount}.documents.azure.com/dbs/{db-id}/colls/{coll-id}/docs/{doc-id}
More information about this interface can be found here: Get a Document
Is this what you are looking for?
I know this will always return 1 result, so is there a way I can access the document directly by id without having to do a query?
To my knowledge, there is no method to get a document directly, without a query, inside a stored procedure.
If the document you want to access is fixed, you could pass the whole thing into the stored procedure as a JSON string parameter, avoiding the redundant query.
If the accessed document is flexible, you need to query for it by its id or its _self property.
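If it helps, a minimal sketch of that query inside a stored procedure, using the server-side JavaScript SDK (the id is passed in as a parameter):

function getDocumentById(id) {
    var collection = getContext().getCollection();
    var accepted = collection.queryDocuments(
        collection.getSelfLink(),
        {
            query: 'SELECT * FROM c WHERE c.id = @id',
            parameters: [{ name: '@id', value: id }]
        },
        function (err, docs) {
            if (err) throw err;
            // The id is unique within a partition, so docs has at most one entry.
            getContext().getResponse().setBody(docs.length ? docs[0] : null);
        });
    if (!accepted) throw new Error('The query was not accepted by the server.');
}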
I want to load data from many files. Each file is named with a date, and I need to inject this date into each of the entries fetched from that file.
I know I could do this with a foreach loop before inserting the data into the collection, but I think there should be a better solution.
Content of one file
[{"price":"95,34","isin":"FR0000120073"},{"price":"113,475","isin":"CA13645T1003"}]
The Code I use to move the data into a collection.
$collection = collect(json_decode(File::get($file)));
I tried, for example, the map method, but I don't know how to pass an additional variable to the anonymous function.
The content of my collection should look like this:
[{"price":"95,34","isin":"FR0000120073","date":"2016-06-23"},{"price":"113,475","isin":"CA13645T1003","date":"2016-06-23"}]
Is there any simple solution using collections, or do I have to use a foreach loop?
Maybe this will help. You can pass an outside variable, such as the date parsed from the file name, into the closure with use:
$date = '2016-06-23'; // e.g. parsed from the file name
$collection = collect(json_decode(File::get($file)));
$collection->each(function ($item) use ($date) {
    // First iteration of $item will be {"price":"95,34","isin":"FR0000120073"}
    $item->date = $date; // add the date key/value pair to the entry
});
Since each returns the collection itself and the decoded entries are objects, $collection now matches the desired output above.