neo4j - Commands within transaction - neo4j

Well, I am writing a server plugin. I want to know which commands should come within a transaction.
For example, is it good practice to begin a transaction at the beginning of a function and end it before returning, or should we do it some other way?
One more thing: are there any limitations on what we can write within a transaction?
Can I write anything in Java within a transaction, like a for loop, while loop, if, else?
Thanks
Amit Aggarwal

If you are using a release < Neo4j 2.0 then only operations that modify the database need to be wrapped in a transaction. In Neo4j 2.0, any operation that accesses the graph needs to be wrapped in a transaction.
You can definitely use loops/branches etc.

Related

Where do I put rails Transactions and how do I execute them

I have a case where I need to create like 10000 entries in a table and after some research I decided to use a transaction to do it.
My problem is I haven't found any documentation or guide that tells me where to put a transaction or how to execute it.
This can be achieved very easily:
ActiveRecord::Base.transaction do
  # ... your code ...
end
The code inside the block will run within a database transaction. If any error occurs during execution, all the changes will be rolled back.
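That rollback behaviour can be sketched with a toy model in plain Ruby. This is not ActiveRecord itself, just a simulation of the all-or-nothing semantics: `ToyTable` and its `transaction` method are made up for illustration.

```ruby
# A toy model of transactional semantics: changes are staged against a
# snapshot and discarded wholesale if the block raises.
class ToyTable
  attr_reader :rows

  def initialize
    @rows = []
  end

  # Run the block; if it raises, restore the pre-transaction snapshot
  # so none of the inserts survive (rollback), then re-raise.
  def transaction
    snapshot = @rows.dup
    yield
  rescue => e
    @rows = snapshot
    raise e
  end

  def insert(row)
    @rows << row
  end
end

table = ToyTable.new
begin
  table.transaction do
    100.times { |i| table.insert(id: i) }
    raise "boom"  # simulate a failure partway through the batch
  end
rescue RuntimeError
end

puts table.rows.size  # => 0, every insert was rolled back
```

A real ActiveRecord transaction does the same thing at the database level: an exception raised inside the block rolls back all statements issued within it.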

neo4jrb how to reset transaction timeout of an open transaction

Currently using neo4j-community-2.1.7
I understand that the facility has been included in this version.
Have been unable to find any reference to it in the ruby docs.
Would appreciate it very much if I may have some direction on how to reset the timeout using neo4jrb.
Regards
Ross
I am unaware of a way to reset the transaction timeout of an open transaction. Maybe someone more familiar with transactions in the Java API can clarify.
If you want to change the transaction timeout length at boot, that's handled in neo4j-server.properties as described at http://neo4j.com/docs/stable/server-configuration.html.
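For reference, the relevant setting in conf/neo4j-server.properties looks like the following. This assumes a Neo4j 2.x server; the property name may differ in other versions, so check the docs for yours.

```
# Timeout, in seconds, after which the server may reap an orphaned transaction
org.neo4j.server.transaction.timeout=60
```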
Within Neo4j-core, if you are using Neo4j-community or Neo4j-enterprise (and therefore Neo4j Embedded), the code suggests that you can specify a config file by giving a third argument to Neo4j::Session.open: a hash of config options. That method, if given :embedded_db as its first argument, will call Neo4j::Embedded#initialize and pass that hash along. If you do something like this:
Neo4j::Session.open(:embedded_db, 'path_to_db', properties_file: 'path_and_filename_to_neo4j-server.properties')
It will eventually use that properties file:
db_service.loadPropertiesFromFile(properties_file) if properties_file
This is not demonstrated in any of the specs, unfortunately, but you can see it in the initialize and start methods at https://github.com/neo4jrb/neo4j-core/blob/230d69371ed6bf39297786155ef4f3b1831dac08/lib/neo4j-embedded/embedded_session.rb.
RE: COMMENT INFO
If you're using :server_db, you don't need to include the neo4j-community gem. It isn't loaded, since it isn't compatible with Neo4j in Server mode.
That's the first time I've seen the link you provided, good to know that's there. We don't expose a way to do that in Neo4j.rb and won't because it would require some threading magic that we can't support. If you want to do it manually, the best I can tell you is that you can get a current transaction ID this way:
tx = Neo4j::Transaction.new
# do stuff and before your long-running query...
tx.resource_data[:commit].split('/')[-2]
That will return the transaction number that you can use in the POST request described in their support doc.
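For illustration, assuming the transactional REST endpoint hands back a commit URL of the usual form (the URL below is a made-up example), the `split('/')[-2]` trick pulls out the transaction number like this:

```ruby
# Hypothetical commit URL as returned by the Neo4j transactional REST endpoint
commit_url = "http://localhost:7474/db/data/transaction/261/commit"

# Split on "/" and take the second-to-last segment: the transaction ID
tx_id = commit_url.split('/')[-2]
puts tx_id  # => "261"
```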
If you'd like help troubleshooting your long-running Cypher query, I'm sure people on SO will help.

Deadlock on concurrent update, but I can see no concurrency

What could trigger a deadlock message on Firebird when there is only a single transaction writing to the DB?
I am building a webapp with a backend written in Delphi 2010 on top of a Firebird 2.1 database. I am getting a concurrent-update error that I cannot make sense of. Maybe someone can help me debug the issue or explain scenarios that may lead to the message.
I am trying an UPDATE to a single field on a single record.
UPDATE USERS SET passwdhash=? WHERE (RECID=?)
The message I am seeing is the standard:
deadlock
update conflicts with concurrent update
concurrent transaction number is 659718
deadlock
Error Code: 16
I understand what it tells me but I do not understand why I am seeing it here as there are no concurrent updates I know of.
Here is what I did to investigate.
I started my application server and checked the result of this query:
SELECT
  A.MON$ATTACHMENT_ID,
  A.MON$USER,
  A.MON$REMOTE_ADDRESS,
  A.MON$REMOTE_PROCESS,
  T.MON$STATE,
  T.MON$TIMESTAMP,
  T.MON$TOP_TRANSACTION,
  T.MON$OLDEST_TRANSACTION,
  T.MON$OLDEST_ACTIVE,
  T.MON$ISOLATION_MODE
FROM MON$ATTACHMENTS A
LEFT OUTER JOIN MON$TRANSACTIONS T
  ON (T.MON$ATTACHMENT_ID = A.MON$ATTACHMENT_ID)
The result indicates a number of connections but only one of them has non-NULLs in the MON$TRANSACTION fields. This connection is the one I am using from IBExperts to query the monitor-tables.
Am I right to think that a connection with no active transaction can be disregarded as not contributing to a deadlock situation?
Next I put a breakpoint on the line submitting the UPDATE-Statement in my application server and executed the request that triggers it. When the breakpoint stopped the application I then reran the Monitor-query above.
This time I could see another transaction active, just as I would expect.
Then I let my appserver execute the UPDATE and reap the error-message as shown above.
What can trigger the deadlock message when there is only one writing transaction? Or are there more and I am misinterpreting the output? Any other suggestions on how to debug this?
Firebird uses MVCC (Multiversion Concurrency Control) for its transaction model. One of the features is that - depending on the transaction isolation - you will only see the last version committed when your transaction started (consistency and concurrency isolation levels), or that were committed when your statement started (read committed). A change to a record will create a new version of the record, which will only become visible to other active transactions when it has been committed (and then only for read committed transactions).
As a basic rule, there can only be one uncommitted version of a record. So attempts by two transactions to update the same record will fail for one of those transactions. For historical reasons these types of errors are grouped under the deadlock error family, even though this is not actually a deadlock in the normal concurrency vernacular.
This rule is actually a bit more restrictive depending on your transaction isolation: for the consistency and concurrency levels there can also be no newer committed version of the record that is not visible to your transaction.
My guess is that for you something like this happened:
1. Transaction 1 started
2. Transaction 2 started with concurrency or consistency isolation
3. Transaction 1 modifies record (new version created)
4. Transaction 1 commits
5. Transaction 2 attempts to modify same record
(Note: steps 1, 3, and 2 could occur in a different order, e.g. 1,3,2 or 2,1,3.)
Step 5 fails, because the new version created in step 3 is not visible to transaction 2. If instead read committed had been used then step 5 would succeed as the new version would be visible to the transaction at that point.
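The visibility rule behind that failure can be sketched with a toy model in Ruby. This is a simulation of snapshot (concurrency) isolation, not Firebird itself: `ToyDatabase` and its methods are invented for illustration. Each transaction records which transactions had committed when it started; updating a record whose latest committed version came from a transaction outside that snapshot fails.

```ruby
# Toy model of snapshot (concurrency) isolation. A transaction's snapshot is
# the set of transactions already committed when it started; a record version
# is visible only if its creating transaction is in that snapshot.
class ToyDatabase
  def initialize
    @committed = []          # transaction ids committed so far
    @record_version = nil    # tx id that wrote the latest committed version
  end

  def begin_tx(id)
    { id: id, snapshot: @committed.dup }
  end

  def commit(tx, record_written: false)
    @committed << tx[:id]
    @record_version = tx[:id] if record_written
  end

  # Updating fails when the latest committed version is invisible to us
  def update_record(tx)
    if @record_version && !tx[:snapshot].include?(@record_version)
      raise "update conflicts with concurrent update"
    end
    :ok
  end
end

db = ToyDatabase.new
t1 = db.begin_tx(659_718)            # step 1
t2 = db.begin_tx(659_719)            # step 2 (concurrency isolation)
db.update_record(t1)                 # step 3: t1 writes a new version
db.commit(t1, record_written: true)  # step 4: t1 commits
begin
  db.update_record(t2)               # step 5: t1 is not in t2's snapshot
rescue => e
  puts e.message  # => "update conflicts with concurrent update"
end
```

A read-committed transaction in this model would simply take a fresh snapshot per statement, which is why step 5 succeeds under that isolation level.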

Why can't bcp execute procedures having temp table(#tempTable)?

Recently I was tasked with creating a SQL Server Job to automate the creation of a CSV file. There was existing code, which was using an assortment of #temp tables.
When I set up the job to execute using BCP calling the existing code (converted into a procedure), I kept getting errors:
SQLState = S0002, NativeError = 208
Error = [Microsoft][SQL Native Client][SQL Server]Invalid object name #xyz
As described in other posts, to resolve the problem lots of people recommend converting all the #temp tables to @table variables.
However, I would like to understand WHY BCP doesn't seem to be able to use #tempTables?
When I execute the same procedure from within SSMS it works though!? Why?
I did do a quick and simple test using global temp tables within a procedure and that seemed to succeed via a job using BCP, so I am assuming it is related to the scope of the #tempTables!?
Thanks in advance for your responses/clarifications.
DTML
You are correct in guessing that it's a scope issue for the #temp tables.
BCP is spawned as a separate process with its own connection, so the tables are no longer in scope for the new process. SSMS, by contrast, executes the procedure on the same session, so the #temp tables are still in scope there.
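The scoping rule can be sketched with a toy model in Ruby (a simulation only, not SQL Server: `ToyServer` and its sessions are invented for illustration). #local tables live in the session that created them; ##global tables are visible to every session, which matches the behaviour observed in the quick test above.

```ruby
# Toy model of SQL Server temp-table scoping: #local tables belong to the
# creating session; ##global tables are shared across all sessions.
class ToyServer
  def initialize
    @global_tables = {}                               # ##tables, shared
    @session_tables = Hash.new { |h, k| h[k] = {} }   # #tables, per session
  end

  def create_table(session, name, rows = [])
    store = name.start_with?("##") ? @global_tables : @session_tables[session]
    store[name] = rows
  end

  def select(session, name)
    store = name.start_with?("##") ? @global_tables : @session_tables[session]
    store.fetch(name) { raise "Invalid object name '#{name}'" }
  end
end

server = ToyServer.new
server.create_table(:ssms, "#xyz", [1, 2, 3])
server.create_table(:ssms, "##xyz", [4, 5, 6])

server.select(:ssms, "#xyz")   # same session: works
server.select(:bcp,  "##xyz")  # global table: visible to the new process
begin
  server.select(:bcp, "#xyz")  # session spawned by BCP: out of scope
rescue => e
  puts e.message  # => "Invalid object name '#xyz'"
end
```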

transaction in activerecord

Folks,
I am fairly new to transactions in activerecord in rails and I have a piece of code, where I do something like:
transaction do
  specimen = Specimen.find_by_doc_id(25)
  specimen.state = "checking"
  specimen.save
  result = Inventory.do_check(specimen)
  if result
    specimen.state = "PASS"
  else
    specimen.state = "FAIL"
  end
  specimen.save
end
My goal in using a transaction is that if I get an exception in Inventory.do_check (it is a client to external web services and does a bunch of HTTP calls and checks), I want specimen.state to roll back to its previous value. I wanted to know if this will work as above. Also, it looks like on my development machine the lock is set on the entire Specimen table; when I try to query that table/model I get a BUSY exception (I am using SQLite). I was thinking that the lock should only be set on that object/record.
Any feedback is much appreciated, as I said I am really new to this so my question may be very naive.
Implementation and locking depend on the DB. I don't use SQLite and I wouldn't be surprised if it locked the entire table in such a case. But reading should still work, so it's probably that it doesn't allow two concurrent operations on a single connection and is waiting for your transaction to finish before allowing any other operation. See, for example, this SO answer: https://stackoverflow.com/a/7154699/2117020.
However, my main point is that you shouldn't hold a transaction open while accessing external services in any case. Whatever the implementation, keeping a transaction open for seconds is not what you want. It looks like in your case all you want is to recover from an exception. Do you simply want to set the state to "FAIL" or "initial" as a result, or does do_check() modify your specimen? If do_check() doesn't modify the specimen, you would be better off doing something like:
specimen = Specimen.find_by_doc_id(25)
specimen.state = "checking"
specimen.save
# or simply specimen.update_attribute(:state, "checking")

begin
  specimen.state = Inventory.do_check(specimen) ? "PASS" : "FAIL"
rescue
  specimen.state = "FAIL" # or "initial" or whatever
end
specimen.save
The locking is going to be highly dependent on your database. You could use a row lock, something like this:
specimen = Specimen.find_by_doc_id(25)
success = true

# with_lock reloads the record and does a SELECT ... FOR UPDATE, which locks
# the row until the block exits (it is wrapped in a transaction)
specimen.with_lock do
  result = Inventory.do_check(specimen)
  if result
    specimen.state = "PASS"
  else
    specimen.state = "FAIL"
  end
  specimen.save!
end
Checking the external site in a transaction is not ideal, but if you use with_lock and your database supports row locks, you should just be locking this single row (it will block reads, so use carefully).
Take a look at the pessimistic locking documentation in active record:
http://ruby-docs.com/docs/ruby_1.9.3-rails_3.2.2/Rails%203.2.2/classes/ActiveRecord/Locking/Pessimistic.html
