Is there any way to update multiple child values at once? - ios

I need to set all the child distances to 0 (see the photo of the Firebase DB below) in a single operation. Is there any way I can do this? The usual Firebase update function generally works for only one userID.

To write a value, the client must specify the complete path to that value in the database. Firebase does not support the equivalent of SQL's update queries.
So you will need to load the data first and then update each child. You can perform those updates in one big batch using multi-location updates. For more on those, see the blog post introducing them and the answer here: Firebase - atomic write of multiple values to multiple locations
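As a rough illustration of what a multi-location update does (this is a plain-Python sketch of the semantics, not the Firebase SDK): the client reads the child keys, builds one map from paths to values, and sends that whole map as a single write.

```python
# Sketch (not the Firebase SDK): simulate a multi-location ("fan-out") update.
# The real client builds one dict of path -> value and sends it in a single
# updateChildValues() / update() call, so all children change together.

def build_fanout_update(children, field="distance", value=0):
    """Build one path->value map that resets `field` for every child."""
    return {f"{key}/{field}": value for key in children}

def apply_update(db, fanout):
    """Apply the whole map at once, standing in for the atomic server write."""
    for path, value in fanout.items():
        node, field = path.split("/")
        db.setdefault(node, {})[field] = value

# Hypothetical data shaped like the question's DB: one distance per user.
db = {"user1": {"distance": 12}, "user2": {"distance": 7}}
update = build_fanout_update(db.keys())
apply_update(db, update)
# every child's distance is now 0, in one write instead of one per user
```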

Related

DD_INSERT in the update strategy transformation

Informatica automatically inserts all the rows into the target, so why do we have to use the Update Strategy transformation (DD_INSERT) to insert the records?
Good question, and you don't have to in the insert-only or update-only case.
The assumption that Informatica automatically inserts all the rows into the target is not always true. Yes, by default it inserts, but there are many cases where we want to only update, only insert, delete, insert-or-update, or reject the data.
The Update Strategy transformation is used to control those scenarios.
DD_INSERT - This option only inserts data.
DD_UPDATE - This updates the data based on the key defined in the target.
DD_REJECT - This rejects the data.
DD_DELETE - This deletes the data based on the key defined in the target.
The session should be set to Data Driven.
So, in your case, you can either use DD_INSERT or set the session properties to insert only.
If you want to insert new data but update old data, you need to use DD_INSERT or DD_UPDATE.
If you want to insert new data but ignore old data, you need to use DD_INSERT or DD_REJECT.
If you want to only update old data, you need to use DD_UPDATE or set the session properties to update only.
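The three cases above boil down to flagging each source row based on whether its key exists in the target. A rough Python sketch of that data-driven flagging (not Informatica code; the row and key names are hypothetical, and the logic mirrors an expression like IIF(ISNULL(lkp_key), DD_INSERT, DD_UPDATE)):

```python
# Sketch of data-driven row flagging, not Informatica itself.
# These numeric values match Informatica's documented constants.
DD_INSERT, DD_UPDATE, DD_DELETE, DD_REJECT = 0, 1, 2, 3

def flag_rows(source_rows, target_keys, mode="insert_or_update"):
    """Flag each row for insert/update/reject by looking up its key in the target."""
    flags = []
    for row in source_rows:
        exists = row["key"] in target_keys
        if mode == "insert_or_update":
            flags.append((row, DD_UPDATE if exists else DD_INSERT))
        elif mode == "insert_or_ignore":
            flags.append((row, DD_REJECT if exists else DD_INSERT))
        elif mode == "update_only":
            flags.append((row, DD_UPDATE if exists else DD_REJECT))
    return flags

rows = [{"key": 1, "v": "a"}, {"key": 2, "v": "b"}]
flags = flag_rows(rows, target_keys={1})
# key 1 exists in the target -> DD_UPDATE; key 2 is new -> DD_INSERT
```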
The Update Strategy transformation is used when you want ready-made logic to decide insert/update/delete for records by comparing source key columns with target key columns. For each row, you can choose how you want to process the data. The examples at the link below, from Informatica, will help you understand this:
https://docs.informatica.com/data-integration/powercenter/10-5/transformation-language-reference/constants/dd_insert/examples.html
You can achieve the same thing by using a Lookup, setting an IUD flag in an Expression transformation, and then routing each flag's records (insert, update, delete) to a different target instance so the target is updated in different ways.
Informatica gives you ready-to-use logic in the form of the Update Strategy transformation, and DD_INSERT is one option of that.
While it is true that insert is the default behavior, it may be changed in the session properties. See the Treat Source Rows As property documentation.
Only when you want to build logic and decide what to do with each particular row should you use the Data Driven property value, and then use DD_INSERT and the other values according to your needs.

"Transactional safety" in influxDB

We have a scenario where we want to frequently change the tag of a (single) measurement value.
Our goal is to create a database for storing prognosis values. It should never lose data, and it should track changes to already-written data, such as updates or overwrites.
Our current plan is to have an additional field "write_ts", which indicates the point in time at which the measurement value was inserted or changed, and a tag "version", which is updated with each change.
Furthermore, version '0' should always contain the latest value.
name: temperature
-----------------
time                  write_ts (val)  current_mA (val)  version (tag)  machine (tag)
2015-10-21T19:28:08Z  1445506564      25                0              injection_molding_1
So let's assume I have an updated prognosis value for this example.
So, I do:
SELECT curr_measurement
INSERT curr_measurement with new tag (version = 1)
DROP curr_measurement
// then
INSERT new_measurement with version = 0
Now my question:
If I lose the connection for whatever reason in between the SELECT, INSERT, DROP:
I would get duplicate records.
(Or if I do SELECT, DROP, INSERT: I lose data.)
Is there any method to prevent that?
Transactions don't exist in InfluxDB
InfluxDB is a time-series database, not a relational database. Its main use case is not one where users are editing old data.
In a relational database that supports transactions, you are protecting yourself against UPDATE and similar operations. Data comes in, existing data gets changed, you need to reliably read these updates.
The main use case in time-series databases is a lot of raw data coming in, followed by some filtering or transforming to other measurements or databases. Picture a one-way data stream. In this scenario, there isn't much need for transactions, because old data isn't getting updated much.
How you can use InfluxDB
In cases like yours, where there is additional data being calculated based on live data, it's common to place this new data in its own measurement rather than as a new field in a "live data" measurement.
As for version tracking and reliably getting updates:
1) Does the version number tell you anything the write_ts number doesn't? Consider not using it, if it's simply a proxy for write_ts. If version only ever increases, it might be duplicating the info given by write_ts, minus the usefulness of knowing when the change was made. If version is expected to decrease from time to time, then it makes sense to keep it.
2) Similarly, if you're keeping old records: does write_ts tell you anything that the time value doesn't?
3) Logging. Do you need to over-write (update) values? Or can you get what you need by adding new lines, increasing write_ts or version as appropriate? The latter is a more "InfluxDB-ish" approach.
4) Reading values. You can read all values as they change with updates. If a client app only needs to know the latest value of something that's being updated (and the time it was updated), querying becomes something like:
SELECT LAST(write_ts), current_mA, machine FROM temperature
You could also try grouping the machine values together:
SELECT LAST(*) FROM temperature GROUP BY machine
So what happens instead of transactions?
In InfluxDB, inserting a point with the same tag keys and timestamp over-writes any existing data with the same field keys, and adds new field keys. So when duplicate entries are written, the last write "wins".
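A rough sketch of that "last write wins" behavior (plain Python simulating the semantics described above, not the actual InfluxDB engine): a point is identified by its measurement, tag set, and timestamp, and a second write to the same identity merges field keys and overwrites duplicates.

```python
# Sketch of InfluxDB's duplicate-point semantics, not the real storage engine.

def write_point(store, measurement, tags, timestamp, fields):
    """A point is keyed by (measurement, tag set, timestamp); writing to the
    same key adds new field keys and over-writes existing ones."""
    key = (measurement, tuple(sorted(tags.items())), timestamp)
    store.setdefault(key, {}).update(fields)  # last write wins per field

store = {}
write_point(store, "temperature", {"version": "0"}, 100, {"current_mA": 25})
write_point(store, "temperature", {"version": "0"}, 100,
            {"current_mA": 30, "write_ts": 1445506564})
# same series + timestamp: current_mA is over-written to 30, write_ts is added,
# and the store still holds a single point
```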
So instead of the traditional SELECT, UPDATE approach, it's more like: SELECT A, calculate on A, then INSERT the results as B, possibly with a new timestamp.
Personally, I've found InfluxDB excellent for its ability to accept streams of data from all directions, and its simple protocol and schema-free storage means that new data sources are almost trivial to add. But if my use case has old data being regularly updated, I use a relational database.
Hope that clears up the differences.

Compare two PCollections for removal

Every day the latest data becomes available in the CloudSQL table, so while writing data into another CloudSQL table I need to compare it with the existing data and perform actions such as removing deleted data, updating existing data, and inserting new data.
Could you please suggest the best way to handle this scenario using a Dataflow pipeline (preferably Java)?
One thing I identified is that using the upsert function in CloudSQL, we could insert/update the records with the help of jdbc.JdbcIO. But I do not know how to identify the collection for removal.
You could read the old and new tables and do a Join followed by a DoFn that compares the two and only outputs changed elements, which can then be written wherever you like.
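The join-and-compare step might look like the following sketch (plain Python rather than Beam, with hypothetical row and key names): group old and new rows by key, then classify each key as an insert, an update, or a delete, emitting only the changed elements downstream.

```python
# Sketch of the join-and-compare logic; in a real pipeline this would be a
# CoGroupByKey-style join followed by a DoFn, but the classification is the same.

def diff_tables(old_rows, new_rows):
    """Classify rows as inserts, updates, or deletes by joining on 'id'."""
    old = {r["id"]: r for r in old_rows}
    new = {r["id"]: r for r in new_rows}
    inserts = [new[k] for k in new.keys() - old.keys()]   # only in new table
    deletes = [old[k] for k in old.keys() - new.keys()]   # only in old table
    updates = [new[k] for k in new.keys() & old.keys()    # in both, but changed
               if new[k] != old[k]]
    return inserts, updates, deletes

old = [{"id": 1, "v": "a"}, {"id": 2, "v": "b"}]
new = [{"id": 2, "v": "B"}, {"id": 3, "v": "c"}]
ins, upd, dele = diff_tables(old, new)
# id 3 is new, id 2 changed, id 1 was removed
```

The deletes output is the "collection for removal" the question asks about; each branch can then be written with its own sink (insert/update via upsert, deletes via a DELETE statement).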

A faster way to get data from Firebase?

I recently uploaded 37,000 strings of data to Firebase (the names of all cities in the USA), only to find that it takes way too long to go through each one with the .childAdded observer and load it into a basic table view.
Is there an alternative? How can I get the data to my app faster? The data shouldn’t change.... so is there an alternative?
There is no way to load the same data faster. Firebase isn't artificially throttling your download speed, so the time it takes to read the 37,000 strings is the time it takes to read the 37,000 strings.
To make your application respond faster to the user, you will have to load less data. And since it's unlikely your user will read all 37,000 strings, a good first option is to only load the data that they will see.
Since you're describing an auto-complete scenario, I'd first look at using a query to only retrieve child nodes that match what they already typed. In Firebase that'd be something like this:
ref.queryOrdered(byChild: "name")
.queryStarting(atValue: "stack")
.queryEnding(atValue: "stack\u{f8ff}")
This code takes (on the server) all nodes under ref, and orders them by name. It then finds the first one starting with stack and returns all child nodes until it finds one not starting with stack anymore.
With this approach the filtering happens on the server, and the client only has to download the data that matches the query.
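The range trick itself is easy to see in a plain-Python sketch (this simulates what the query does, it is not the Firebase SDK): order the names, then take the slice between the prefix and the prefix followed by \uf8ff, a code point that sorts after every ordinary character.

```python
# Sketch of the prefix-range query: because "\uf8ff" sorts after ordinary
# characters, the range [prefix, prefix + "\uf8ff"] contains exactly the
# names that start with the prefix.

def query_prefix(names, prefix):
    start, end = prefix, prefix + "\uf8ff"
    return [n for n in sorted(names) if start <= n <= end]

# Hypothetical city names for illustration.
cities = ["stackville", "stanton", "stackton", "austin"]
matches = query_prefix(cities, "stack")
# only the names beginning with "stack" fall inside the range
```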
This is something easily solvable using Algolia. Algolia can search large data sets without taking much time at all. This way, you query Algolia and never need to look at the Firebase database.
In your Firebase Functions, listen for any new nodes in the place you keep your names of cities, and when that function gets called, add that string to your Algolia index.
You can follow the Algolia docs here: Algolia Docs

Avoiding round-trips when importing data from Excel

I'm using EF 4.1 (Code First). I need to add/update products in a database based on data from an Excel file. As discussed here, one way to achieve this is to use dbContext.Products.ToList() to force loading all products from the database, then use db.Products.Local.FirstOrDefault(...) to check whether a product from Excel exists in the database and proceed accordingly with an insert or update. This is only one round-trip.
Now, my problem is that there are too many products in the database, so it's not possible to load all products into memory. What's the way to achieve this without multiplying round-trips to the database? My understanding is that if I just do a search with db.Products.FirstOrDefault(...) for each Excel product to process, this will perform a round-trip each time, even if I issue the statement for the exact same product several times! What's the purpose of EF caching objects and returning cached values if it goes to the database anyway?
There is actually no way to make this better. EF is not a good solution for this kind of task. You must know whether a product already exists in the database to use the correct operation, so you always need an additional query - you can group multiple products into a single query using .Contains (like SQL's IN), but that solves only the existence-check problem. The worse problem is that each INSERT or UPDATE is executed in a separate round-trip as well, and there is no way around this because EF doesn't support command batching.
Create a stored procedure and pass the product information to it. The stored procedure will perform an insert or update based on the existence of the record in the database.
You can even use more advanced features like table-valued parameters to pass multiple records from Excel to the procedure in a single call, or import the Excel file into a temporary table (for example with SSIS) and process the records directly on the SQL server. Finally, you can use bulk insert to get all records into a special import table and again process them with a single stored procedure call.
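The .Contains grouping mentioned above amounts to chunking the existence checks so that N products cost roughly N / chunk-size round-trips instead of N. A minimal sketch of the batching (plain Python building parameterized SQL; the Products table and Code column are hypothetical names):

```python
# Sketch of batching existence checks with IN (...) instead of one query per
# product. Table/column names are made up for illustration.

def chunked(seq, size):
    """Yield consecutive slices of `seq` with at most `size` elements."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def find_existing(codes, chunk_size=500):
    """Build one parameterized query per chunk of product codes."""
    queries = []
    for chunk in chunked(codes, chunk_size):
        placeholders = ", ".join("?" for _ in chunk)
        queries.append(
            (f"SELECT Code FROM Products WHERE Code IN ({placeholders})", chunk)
        )
    return queries

codes = [f"P{i:04d}" for i in range(1200)]
queries = find_existing(codes)
# 1200 codes with a chunk size of 500 -> 3 round-trips instead of 1200
```

This only reduces the read side; as the answer notes, the per-row INSERT/UPDATE round-trips remain, which is why the stored-procedure or bulk-import routes win for large files.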
