I was looking at the Firebase documentation and saw that you can increment a value in the database with the following line:
Database.database().reference().child("Post").setValue([
"number" : ServerValue.increment(10)
])
The documentation also says that "the increment operation occurs directly on the database server", but I don't really understand what that means. What is the difference between this operation and an operation like:
// We have previously retrieved the value of number which we have stored in a variable
Database.database().reference().child("Post").setValue([
"number" : numberOldValue + 10
])
Instead of fetching the value from the server yourself and performing that little "atomic" action of adding one integer to another, ServerValue.increment lets you simply tell the server by how much to increment the value it already holds. The addition happens on the server side, so you don't need to worry about reading the current value first: even if it changes a millisecond before your request arrives, the increment is applied to the latest value.
Extra info: it is also much faster than a transaction.
Transaction vs. ServerValue
When working with data that could be corrupted by concurrent modifications, such as incremental counters, you can use a transaction operation. A transaction prevents the increment from being incorrect when multiple users star the same post at the same time, or when the client has stale data.
The benefit of ServerValue.increment(10) is that you don't have to worry about grabbing the current value before incrementing it: the server reads the current value and increments it by the amount you send, automatically.
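For illustration, here is a minimal Swift sketch contrasting the two approaches (error handling and completion blocks are omitted; the "Post"/"number" path is taken from the question):

import FirebaseDatabase

let ref = Database.database().reference().child("Post").child("number")

// Transaction: the client reads the current value, applies the change locally,
// and the SDK retries the block if the value changed on the server in the meantime.
ref.runTransactionBlock { currentData -> TransactionResult in
    let current = currentData.value as? Int ?? 0
    currentData.value = current + 10
    return TransactionResult.success(withValue: currentData)
}

// ServerValue.increment: a single write; the addition happens on the server,
// so there is no read, no retry loop, and no race with other clients.
ref.setValue(ServerValue.increment(10))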
Context
I want to create a Power Automate flow that automatically creates a sub-task in Azure DevOps when the Effort of a PBI is set.
When the Effort field goes from blank to a positive value, the task should be added (using the newly set Effort value as the task's Original Estimate and Remaining Work).
I managed to create a flow that does that using the When a work item is updated trigger.
Problem
The flow runs too often (whenever the work item changes, as long as the Effort is > 0).
Question
What would be the best way to ensure this flow runs only once per PBI?
Thoughts
Perhaps check for the presence of child tasks?
Perhaps set a hidden property when adding the task the first time and check that property afterwards?
You could add a trigger condition in the settings of your trigger action. Use the following expression:
@greater(triggerOutputs()?['body/fields/Microsoft_VSTS_Scheduling_Effort'], 0)
Add a Send an HTTP request to Azure DevOps action directly after the trigger. Use the following URI:
YourProjectName/_apis/wit/workItems/@{triggerOutputs()?['body/id']}/updates?api-version=6.0
Add a Filter Array. Use this expression for the From field:
body('Send_an_HTTP_request_to_Azure_DevOps')['value']
In the where of the Filter Array, use the following expression, which you add via the advanced mode:
@greater(length(string(item()?['fields']?['Microsoft.VSTS.Scheduling.Effort']?['newValue'])), 0)
In a condition, check whether the Filter Array returned only the current update, meaning the value of Effort has never been changed before; if so, you can safely create your new task:
length(body('Filter_array'))
is equal to 1
I'm trying to see if there is data in a stream, and I provided the exact stream name as follows:
Select SYSTEM$STREAM_HAS_DATA('STRM_EXACT_STREAM_NAME_GIVEN');
But, I get an error :
SQL compilation error: Invalid value ['STRM_EXACT_STREAM_NAME_GIVEN'] for function 'SYSTEM$STREAM_HAS_DATA', parameter 1: must be a valid stream name
1) Any idea why? How can this error be resolved?
2) Would it hurt to resume a set of tasks (alter task resume;) without knowing whether the corresponding stream has data in it or not? I believe that if there is (delta) data in the stream, the task will load it; if not, the task won't do anything.
3) Any idea how to modify/update a stream that shows up as 'STALE'? Or should simply loading fresh data into the table associated with the stream set it back to 'NOT STALE', i.e. stale = false? And what if loading the associated table does not update the state of the stream? (That is what currently appears to be happening in my case.)
1) It doesn't look like you have a stream by that name. Try running SHOW STREAMS; to see which streams are active in the database/schema you are currently using.
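For example (MY_DB and MY_SCHEMA below are placeholders for your actual database and schema):

SHOW STREAMS;

-- If the stream lives outside your current schema, fully qualify the name:
SELECT SYSTEM$STREAM_HAS_DATA('MY_DB.MY_SCHEMA.STRM_EXACT_STREAM_NAME_GIVEN');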
2) If your task has a WHEN clause that validates against the SYSTEM$STREAM_HAS_DATA result, then resuming the task and letting it run on schedule only hits your global services layer (no warehouse credits are consumed), so there is no harm there.
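For illustration, a task of that shape might look like this (the warehouse, task, and table names are hypothetical):

CREATE OR REPLACE TASK LOAD_FROM_STREAM
  WAREHOUSE = MY_WH
  SCHEDULE = '5 MINUTE'
WHEN
  SYSTEM$STREAM_HAS_DATA('MY_SCHEMA.STRM_EXACT_STREAM_NAME_GIVEN')
AS
  INSERT INTO TARGET_TABLE
  SELECT * FROM MY_SCHEMA.STRM_EXACT_STREAM_NAME_GIVEN;

ALTER TASK LOAD_FROM_STREAM RESUME;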
3) STALE means that the stream's data wasn't consumed by a DML statement for a long time (I think it's 14 days by default, or, if the table's data retention is longer than 14 days, the longer of the two). Loading more data into the stream's source table doesn't help that. Running a DML statement against the stream will, but since the stream is stale, doing so may have bad consequences. Streams are meant to be consumed by frequent DML, so not running DML against a stream for more than 14 days is very uncommon.
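If the stream is already stale, the usual fix is simply to recreate it, which resets its offset (note that this discards any unconsumed change records; the table name below is a placeholder):

CREATE OR REPLACE STREAM STRM_EXACT_STREAM_NAME_GIVEN
  ON TABLE MY_SCHEMA.MY_TABLE;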
I'm creating an app that uses Core Data to store information from a web server. When there's an internet connection, the app will check whether any entries have changed and update them. Now I'm wondering which is the best way to go about it. Each entry in my database has a last-updated timestamp. Which of these two will be more efficient:
Go through all entries and check the timestamp to see which entry needs to be updated.
Delete the whole entity and re-download everything again.
Sorry if this seems like an obvious question and thanks!
I'd say option 1 would be the most efficient, as there is rarely a case where downloading everything (especially in a large database with large amounts of data) is more efficient than downloading only the parts you need.
I recently did something similar.
I solved the problem by assigning a unique ID and a global 'updated' timestamp, and by thinking in terms of 'delta' changes.
To explain better: I have a global 'latest update' value stored in the user preferences, with a default value of 01/01/2010.
This is roughly my JSON service:
response: {
    metadata: { latestUpdate: 2013... etc. }
    entities: { .... }
}
Then, this is what's going on:
pass the 'latest update' to the web service and retrieve a list of changed entities
update the Core Data store
if everything went fine with Core Data, the 'latestUpdate' from the service metadata becomes my new 'latest update' value stored in user preferences
That's it: I retrieve only the changes I need, and of course the web service is structured to deliver a proper list. In other words, a web service backed by a database can deal with this matter quite well and leaves the iPhone to be a 'simple client' only.
But I have to say that for small amounts of data, it is still quite performant (and less bug-prone) to download the whole list on each request.
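To make the flow concrete, here is a rough Swift sketch of that loop; the URL, the JSON shape, and applyToCoreData are hypothetical placeholders for your own service and persistence code:

import Foundation

// Hypothetical: insert/update the managed objects and save the context,
// returning true only if the save succeeded.
func applyToCoreData(_ entities: [[String: Any]]) -> Bool {
    // ... create or update NSManagedObjects here, then save ...
    return true
}

func syncEntities() {
    let defaults = UserDefaults.standard
    // Default "latest update" so that the very first sync pulls everything
    let latestUpdate = defaults.string(forKey: "latestUpdate") ?? "2010-01-01T00:00:00Z"

    var components = URLComponents(string: "https://example.com/api/entities")!
    components.queryItems = [URLQueryItem(name: "since", value: latestUpdate)]

    URLSession.shared.dataTask(with: components.url!) { data, _, error in
        guard let data = data, error == nil,
              let response = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let metadata = response["metadata"] as? [String: Any],
              let newLatest = metadata["latestUpdate"] as? String,
              let entities = response["entities"] as? [[String: Any]]
        else { return }

        // Only advance the stored timestamp once Core Data saved successfully
        if applyToCoreData(entities) {
            defaults.set(newLatest, forKey: "latestUpdate")
        }
    }.resume()
}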
As per our discussion in the comments above, you can model your Core Data entries with version control like this:
CoreDataEntityPerson:
name : String
name_version : int
image : BinaryData
image_version : int
You can now model the server XML in the following way:
<person>
<name>michael</name>
<name_version>1</name_version>
<image>string_converted_imageData</image>
<image_version>1</image_version>
</person>
Now you can follow these steps:
When the response arrives and you parse it, you initially create a new object from the entity and fill in the data directly.
The next time you perform an update on the server, you increase the version count of the changed entry by 1 and store it.
E.g. let's say the name michael is now changed to abraham; the version count of name_version on the server will then be 2.
This updated version count will come back in the response data.
Now, while storing the data in the same object, if you find the version count to be the same, the update of that entry can be skipped; but if you find the version count has changed, that entry needs to be updated.
This way you can efficiently check each entry and perform updates only on the changed entries.
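To illustrate the comparison step in Swift (Person here is a hypothetical NSManagedObject subclass matching the model above, and xmlFields holds the parsed values of one <person> element):

import CoreData

final class Person: NSManagedObject {
    @NSManaged var name: String?
    @NSManaged var name_version: Int32
    @NSManaged var image: Data?
    @NSManaged var image_version: Int32
}

func update(_ person: Person, from xmlFields: [String: String]) {
    // Only touch a field when the server's version count differs from ours
    if let v = xmlFields["name_version"], let serverVersion = Int32(v),
       serverVersion != person.name_version {
        person.name = xmlFields["name"]
        person.name_version = serverVersion
    }
    if let v = xmlFields["image_version"], let serverVersion = Int32(v),
       serverVersion != person.image_version {
        // The server ships the image as a string; decode it back to binary data
        person.image = Data(base64Encoded: xmlFields["image"] ?? "")
        person.image_version = serverVersion
    }
}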
Advice:
The above approach works best when you're dealing with a large amount of data to update.
In the case of simple text entries for an object, simply overwriting the data on all entries is efficient enough, and it also keeps the response data model simple.
I'm having a problem sending (creating) an HL7 message using Mirth.
I want to read data from my patient table in SQL Server 2008 and, using that data,
send a message to my destination connector, a file writer, so that my messages get saved in the file writer's output directory.
So far I'm able to generate the message, but the size of the output file in my destination directory keeps increasing as the channel keeps polling.
Have I done something wrong in the transformer mapping?
UPDATE:
The size of the output file in my destination directory IS increasing (my .txt file starts at 1 KB and grows to 900 KB and beyond). This is happening because the same data is being generated again and again, multiple times. For example, my generated message has one (MSH, PID, PV1, ORM) group per row of data in my database, but the same MSH, PID, PV1 and ORM segments are generated multiple times.
If you are seeing the same data generated in your output directory multiple times, the most likely cause is that you are not doing anything to indicate to your database that a given record has been processed.
For example, if you have 1 record in your database: ["John", "Smith", "12134" ...] on the first poll, you will generate 1 message. If on the second poll you also have a second record ["Fred", "Jones", "98371" ...], you will generate TWO messages - one for John Smith (again) and one for Fred Jones. And so on.
The key is to use the "Run On-Update Statement" of your Database Reader (Source) connector to update the database table you are polling with an indication that a given record has been processed. This ensures that the same record is not processed multiple times.
This requires that your source table have some kind of column to indicate the record has been processed. Mirth will not keep track of this for you - you must do it manually.
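A minimal sketch of that pattern (the table, columns, and processed flag here are all hypothetical):

-- Source (Database Reader) query: select only rows not yet processed
SELECT id, first_name, last_name, mrn
FROM patient
WHERE processed = 0

-- Run On-Update Statement: flag each polled row so the next poll skips it
UPDATE patient
SET processed = 1
WHERE id = ${id}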
You can't have a file reader as a destination, so I assume you mean file writer. You say that "the size of my file in my destination is increasing." Is that a typo? Do you mean NOT increasing?
If it is increasing, then your messages are getting generated and you can view them to start your next round of troubleshooting...
If not, then you should look at the message log in the dashboard to see what is happening on a message-by-message basis - that would be the next place to troubleshoot.
You have to have a way of distinguishing which records to pull from the database, by filtering on some sort of status flag or possibly a timestamp. Then you have to use some sort of On-Update statement to mark those same records as processed.
E.g.
SELECT id, patient, result FROM results WHERE status_flag = 'N'
or
SELECT * FROM results WHERE status_flag = 'N' AND created_date >= '9/25/2012'
Then, in either a transformer step or the On-Update section of your Source, you would do something like:
UPDATE results
SET status_flag = 'Y' WHERE id = $(id)
If you do not do something like this and you have Mirth polling at a certain interval, it will just keep pulling the same records over and over.
Set your connector type to Database Reader in the source.
Set your connector type to File Writer in the destination.
Then you can write your data to a file in a directory you have write access to.
When creating the HL7 template, use the following skeleton in the outbound message template:
MSH|^~\&|||
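For reference, a fuller ORM skeleton might look like the following; the segment fields and the ${...} placeholders are hypothetical and would be populated from the variables you map in your transformer:

MSH|^~\&|MIRTH|MY_FACILITY|||${timestamp}||ORM^O01|${messageId}|P|2.3
PID|||${patientId}||${lastName}^${firstName}
PV1||${patientClass}
ORC|NW|${orderId}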
Thanks
Krishna
First-time user, long-time reader. I have looked thoroughly for an explanation of the problem I'm having via the mighty search engine Google, but alas, I have failed to produce any significant insight.
I need to be able to ensure that a model form is not reloaded with invalid data. Since the model held in memory on the server is first edited directly with the parameters from the web form and THEN checked for validity, invalid model data will ALWAYS be sent back to the form unless I write additional code. This is less than desirable to me. My question is this: how do I ensure this doesn't happen?
What I'm thinking is that I need some mechanism for saving the state of the object before it's modified with the parameters sent from the web form, and then, after a failed validation, restoring the object to its previous, correct and unmodified state.
Help!
Thanks,
Les
The object isn't actually modified in the DB if validation fails, even though the object is in an invalid state in the form... the thinking behind this is that the user wants to see the errors they made so they can correct them.
If you don't want that to be the case, then just read the object back with WhateverObject.find(x) and assign it to the variable the form is referencing; that will 'restore' the object to its previous unmodified state.
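In controller terms, that looks roughly like this (Widget and the action are hypothetical; update_attributes is the Rails 2/3-era API):

def update
  @widget = Widget.find(params[:id])
  unless @widget.update_attributes(params[:widget])
    # Discard the invalid in-memory changes before re-rendering the form
    @widget = Widget.find(params[:id])
    render :edit
  end
end

Note that reloading this way also discards the validation error messages, so keep a copy of them before the reload if you still want to display them.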
To add to what concept47 said, you can also get the value of a particular field using
object.field_was
Have a look at ActiveRecord::Dirty for details (http://ar.rubyonrails.org/classes/ActiveRecord/Dirty.html)
Using that, you could retrieve the original values for just those fields that had validation errors.
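For example, something along these lines (a hedged sketch; @widget and its attributes are hypothetical, and errors.each yields attribute/message pairs in the Rails versions this applies to):

@widget.attributes = params[:widget]
unless @widget.valid?
  @widget.errors.each do |field, _message|
    # field_was returns the value the attribute held before it was changed
    @widget[field] = @widget.send("#{field}_was")
  end
end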