When I run Custom Transaction Detail Report in QuickBooks, I can include a column called Trans #. I am not sure what it means. How is it different from Reference Number and Transaction ID?
Trans # and Transaction ID are both internal unique IDs of the transaction, automatically generated by QuickBooks. Neither is something you can edit, or even see when you have the transaction itself open. I'm not sure why QuickBooks has both, other than that I don't think you can see the Transaction ID via the GUI.
Reference number is a user-defined ID for the transaction (check number, "Entry No." on journal entries, etc.). It can be edited after the fact and doesn't have to be unique.
Hope this is what you were looking for.
I have posts and organisations in my database. Posts belongs_to organisation and organisation has_many posts.
I have an existing post_id column in my posts table, which I currently increment manually when I create a new post.
How can I add auto increment to that column scoped to the organisation_id?
Currently I use MySQL as my database, but I plan to switch to PostgreSQL, so the solution should work for both if possible :)
Thanks a lot!
Richard Huxton has the correct answer, and it is thread-safe.
Use a transaction block with SELECT FOR UPDATE inside it. Here is my Rails implementation. Call 'transaction' on an ActiveRecord class to start a transaction block, and call 'lock' on the row you want to lock, essentially blocking all other concurrent access to that row, which is what you want for ensuring unique sequence numbers.
class OrderFactory
  def self.create_with_seq(order_attributes)
    order_attributes.symbolize_keys!
    raise "merchant_id required" unless order_attributes.has_key?(:merchant_id)
    merchant_id = order_attributes[:merchant_id]
    SequentialNumber.transaction do
      # Row lock: concurrent callers block here until this transaction commits
      seq = SequentialNumber.lock.where(merchant_id: merchant_id, type: 'SequentialNumberOrder').first
      seq.number += 1
      seq.save!
      order_attributes[:sb_order_seq] = seq.number
      Order.create(order_attributes)
    end
  end
end
We run sidekiq for background jobs, so I tested this method by creating 1000 background jobs to create orders, using 8 workers with 8 threads each. Without the lock and the transaction block, duplicate sequence numbers occurred as expected. With them, all sequence numbers were unique.
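The effect of the row lock can be illustrated with an in-memory analogue (a Mutex standing in for SELECT FOR UPDATE; in the real code the guarantee comes from the database, not from Ruby):

```ruby
# In-memory analogue of the pattern above: the mutex plays the role of the
# row lock, so each worker sees the latest value and no two workers can
# hand out the same sequence number.
counter = 0
lock    = Mutex.new
issued  = Queue.new

threads = 8.times.map do
  Thread.new do
    125.times do
      lock.synchronize do   # analogue of SequentialNumber.lock
        counter += 1        # analogue of seq.number += 1; seq.save!
        issued << counter
      end
    end
  end
end
threads.each(&:join)

numbers = []
numbers << issued.pop until issued.empty?
puts numbers.uniq.length   # => 1000, no duplicates
```

Remove the `lock.synchronize` block and duplicates appear, just as they did without the database lock.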
OK - I'll be blunt. I can't see the value in this. If you really want it though, this is what you'll have to do.
Firstly, create a table org_max_post (org_id, post_id). Populate it when you add a new organisation (I'd use a database trigger).
Then, when adding a new post you will need to:
BEGIN a transaction
SELECT FOR UPDATE that organisation's row to lock it
Increment the post_id by one, update the row.
Use that value to create your post.
COMMIT the transaction to complete your updates and release locks.
You want all of this to happen within a single transaction of course, with a lock held on the relevant row in org_max_post. You want to make sure that a new post_id gets allocated to one and only one post, and also that if the post fails to commit you don't waste post_ids.
If you want to get clever and reduce the SQL in your application code you can do one of:
Wrap the whole lot above in a custom insert_post() function.
Insert via a view that lacks the post_id and provides it via a rule/trigger.
Add a trigger that overwrites whatever is provided in the post_id column with a correctly updated value.
Deleting a post obviously doesn't affect your org_max_post table, so won't break your numbering.
Prevent any updates to the posts at the database level with a trigger. Check for any changes in the OLD vs NEW post_id and throw an exception if there is one.
Then delete your existing redundant id column in your posts table and use (org_id,post_id) as your primary key. If you're going to this trouble you might as well use it as your pkey.
Oh - and post_num or post_index is probably better than post_id since it's not an identifier.
I've no idea how much of this will play nicely with rails I'm afraid - the last time I looked at it, the database handling was ridiculously primitive.
It's good to know how to implement it, but I would prefer to use a gem myself:
https://github.com/austinylin/sequential (based on sequenced)
https://github.com/djreimer/sequenced
https://github.com/felipediesel/auto_increment
First, I must say this is not a good practice, but I will only focus on a solution for your problem:
You can always get the organisation's posts count by doing on your PostsController:
def create
  post = Post.new(...)
  ...
  post.post_id = Organization.find(organization_id).posts.count + 1
  post.save
  ...
end
You should not alter the database yourself. Let ActiveRecord take care of it.
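Worth noting: the count + 1 read above is racy under concurrency. Two requests that both read the count before either saves will compute the same post_id. A deterministic pure-Ruby sketch of that interleaving:

```ruby
# Why "count + 1" is racy: two workers that both read the count before
# either saves will compute the same post_id. Deterministic interleaving:
posts = [1, 2, 3]                 # existing post_ids for one organisation

worker_a_next = posts.count + 1   # worker A reads the count => 4
worker_b_next = posts.count + 1   # worker B reads before A saves => 4

posts << worker_a_next
posts << worker_b_next

puts posts.last(2).inspect        # => [4, 4]  duplicate ids
```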
I'm looking into an issue where the Refund button isn't available for orders that were placed prior to an upgrade of a client site from 1.3 to 1.7. I'm attempting to create a credit memo from Sales Order > Invoice > Credit Memo.
Drilling into the code and data, it seems that $this->getCardsStorage() is not returning any stored credit cards for order payments made prior to the upgrade. In fact, the additional_information field in the sales_flat_order_payment table is NULL for those orders - I believe that field was created in 1.4 or later.
The thing that seems odd to me is that there would be no backwards compatibility for payment data created prior to 1.4. I've done a decent bit of searching for this problem and the closest thing I can find is where people are having problems with refunds entirely after upgrading. That's not the case for me - refunds appear to be working fine for post-upgrade orders.
If it is the case that there simply is not backwards-compatibility, it would be good to at least see a bug report on it.
I posted this to the magento bug tracker: Bug #28601
That's true, there is a problem with it in 1.4 upgrades.
Transactions were introduced in 1.4, along with the additional_information field. Before that there was a different field, called additional_data, which was also serialized, but in a different format. Look up a payment record from before 1.4 and one from after 1.4 to compare how the data structure changed. Once you see the difference in the data, you can create a script that migrates the old values.
UPDATE
Check the following code:
https://github.com/LokeyCoding/magento-mirror/blob/magento-1.3/app/code/core/Mage/Paygate/Model/Authorizenet.php
During the authorization process, the transaction id is stored in both properties:
cc_trans_id and last_trans_id. When the customer performs a capture, only last_trans_id gets updated.
In 1.3, the method getRefundTransactionId() returned the last_trans_id value.
In 1.7, the same method looks like the following:
https://github.com/LokeyCoding/magento-mirror/blob/magento-1.7/app/code/core/Mage/Paygate/Model/Authorizenet.php
So you see it is completely rewritten!
To make your 1.7 code work with 1.3 transactions, you need to do the following for the old transactions:
If last_trans_id == cc_trans_id and the order has an invoice, create only a capture transaction record in the order_payment_transaction table.
If last_trans_id == cc_trans_id and the order does not have an invoice, create an authorization transaction record.
If last_trans_id !== cc_trans_id, create two records: the first with cc_trans_id as the auth transaction, and the second a child transaction of capture type with last_trans_id.
Once these transaction records exist for the old orders, you will be able to refund them from the admin.
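The three rules above can be condensed into a small decision function. A hedged sketch (the helper name and hash layout are illustrative, not Magento code; the field and type names come from the rules above):

```ruby
# Decide which rows to create in order_payment_transaction for a pre-1.4
# order, following the three rules above. Returns an array of row hashes.
def legacy_transaction_rows(cc_trans_id:, last_trans_id:, has_invoice:)
  if last_trans_id == cc_trans_id
    # Single transaction id: capture if invoiced, otherwise authorization
    type = has_invoice ? "capture" : "authorization"
    [{ txn_id: last_trans_id, type: type, parent_txn_id: nil }]
  else
    # Two ids: an auth transaction plus a child capture transaction
    [
      { txn_id: cc_trans_id,   type: "authorization", parent_txn_id: nil },
      { txn_id: last_trans_id, type: "capture",       parent_txn_id: cc_trans_id }
    ]
  end
end

rows = legacy_transaction_rows(cc_trans_id: "A1", last_trans_id: "C9", has_invoice: true)
puts rows.length  # => 2
```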
What is considered "best practice" or the general rule of thumb for when something should be wrapped in a transaction block?
Is it primarily just when you are going to be performing actions on a collection of things, and you want to rollback if something breaks? Something like:
class User < ActiveRecord::Base
  def mark_all_posts_as_read!
    transaction do
      posts.find_each { |p| p.update_attribute(:read, true) }
    end
  end
end
Are there other scenarios where it would be beneficial to perform things inside a transaction?
I'm not sure that qualifies as a great use for a transaction: generally, I would only use a transaction in a model if the state of one object depended on the state of another object. If either object's state is incorrect, then I don't want either to be committed.
The classic example, of course, is bank accounts. If you're transferring money from one account to another, you don't want to add it to the receiving account, save, and then debit it from the sending account. If any part of that goes wrong then money has just vanished, and you will have some pretty angry customers. Doing both parts in one transaction ensures that if an error occurs, neither will have committed anything to the database.
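A pure-Ruby sketch of that all-or-nothing behaviour (hypothetical in-memory accounts, not ActiveRecord; in a Rails app both saves would go inside a single Account.transaction block):

```ruby
# All-or-nothing transfer: if the credit step raises, the debit is rolled
# back, so money never vanishes. Hypothetical in-memory accounts standing
# in for ActiveRecord models and a database transaction.
def transfer(accounts, from, to, amount)
  snapshot = accounts.dup            # the "transaction" begins
  begin
    raise "insufficient funds" if accounts[from] < amount
    accounts[from] -= amount         # debit the sender
    raise "unknown account" unless accounts.key?(to)
    accounts[to] += amount           # credit the receiver
  rescue => e
    accounts.replace(snapshot)       # "rollback": restore both balances
    raise e
  end
end

accounts = { "alice" => 100, "bob" => 50 }
transfer(accounts, "alice", "bob", 30)   # both sides committed

begin
  transfer(accounts, "alice", "ghost", 10)  # credit side fails mid-way
rescue RuntimeError
end
puts accounts["alice"]               # => 70, the debit was rolled back
```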
The ActiveRecord Transaction Documentation does a surprisingly good job of discussing the how and why of using transactions... and there's always the Wikipedia article if you want more information as well.
I have a Postgres database (9) that I am writing a trigger for. I want the trigger to set the modification time, and user id for a record. In Firebird you have a CONNECTIONID that you can use in a trigger, so you could add a value to a table when you connect to the database (this is a desktop application, so connections are persistent for the lifetime of the app), something like this:
UserId | ConnectionId
---------------------
544 | 3775
and then look up in the trigger that connectionid 3775 belongs to userid 544 and use 544 as the user that modified the record.
Is there anything similar I can use in Postgres?
You could use the process ID. It can be retrieved with:
pg_backend_pid()
With this pid you can also use the pg_stat_activity view to get more information about the current backend, although you should already know everything about it, since it is your own backend.
Or better: just create a sequence, and retrieve one value from it for each connection:
CREATE SEQUENCE connectionids;
And then:
SELECT nextval('connectionids');
in each connection, to retrieve a connection-unique id.
One way is to use the custom_variable_classes configuration option. It appears to be designed to allow the configuration of add-on modules, but can also be used to store arbitrary values in the current database session.
Something along the lines of the following needs to be added to postgresql.conf:
custom_variable_classes = 'local'
When you first connect to the database you can store whatever information you require in the custom class, like so:
SET local.userid = 'foobar';
And later on you can retrieve this value with the current_setting() function:
SELECT current_setting('local.userid');
Adding an entry to a log table might look something like this:
INSERT INTO audit_log VALUES (now(), current_setting('local.userid'), ...)
While it may work for your desktop use case, note that process ID numbers do rollover (32768 is a common upper limit), so using them as a unique key to identify a user can run into problems. If you ever end up with leftover data from a previous session in the table that's tracking user->process mapping, that can collide with newer connections assigned the same process id once it's rolled over. It may be sufficient for your app to just make sure you aggressively clean out old mapping entries, perhaps at startup time given how you've described its operation.
To avoid this problem in general, you need to make a connection key that includes an additional bit of information, such as when the session started:
SELECT procpid,backend_start FROM pg_stat_activity WHERE procpid=pg_backend_pid();
That has to iterate over all of the connections active at the time to compute, so it does add a bit of overhead. It's possible to execute that a bit more efficiently starting in PostgreSQL 8.4:
SELECT procpid,backend_start FROM pg_stat_get_activity(pg_backend_pid());
But that only really matters if you have a large number of connections active at once.
Use current_user if you need the database user (I'm not sure that's what you want by reading your question).
How do you achieve idempotency when incrementing a database column via PUT? (For example, a credits count in a purchase process.)
Send a unique transaction id with every request, store all executed transaction ids, and don't react to requests whose transaction id you have already seen.
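A minimal sketch of that idea (in-memory store for illustration; in practice the seen ids would live in a table with a unique index, checked in the same database transaction as the increment):

```ruby
# Idempotent credit increment: each PUT carries a client-generated
# transaction id; ids that were already applied are ignored on replay.
class CreditAccount
  attr_reader :credits

  def initialize
    @credits = 0
    @applied = {}   # seen transaction ids (a unique-indexed table in practice)
  end

  def add_credits(txn_id, amount)
    return @credits if @applied.key?(txn_id)   # replayed request: no-op
    @applied[txn_id] = true
    @credits += amount
  end
end

account = CreditAccount.new
account.add_credits("txn-42", 5)
account.add_credits("txn-42", 5)   # retry of the same request
puts account.credits               # => 5, not 10
```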