I have a Postgres (version 9) database that I am writing a trigger for. I want the trigger to set the modification time and user id for a record. In Firebird you have a CONNECTIONID that you can use in a trigger, so you could add a value to a table when you connect to the database (this is a desktop application, so connections are persistent for the lifetime of the app), something like this:
UserId | ConnectionId
---------------------
544 | 3775
and then look up in the trigger that connectionid 3775 belongs to userid 544 and use 544 as the user that modified the record.
Is there anything similar I can use in Postgres?
You could use the process id. It can be retrieved with:
pg_backend_pid()
With this pid you can also use the table pg_stat_activity to get more information about the current backend, although you should already know everything, since it is your own backend.
Or better: just create a sequence, and retrieve one value from it for each connection:
CREATE SEQUENCE connectionids;
And then:
SELECT nextval('connectionids');
in each connection, to retrieve an id that is unique to that connection.
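Either id can then feed the lookup table from the question. A minimal sketch of how this could tie together, assuming the audited table has modified_at and modified_by columns (all names here are hypothetical):

-- Mapping table, filled by the app right after it connects, e.g.:
--   INSERT INTO connection_users VALUES (pg_backend_pid(), 544);
CREATE TABLE connection_users (
    connection_id integer PRIMARY KEY,
    user_id       integer NOT NULL
);

-- Trigger function that stamps the modifying user via the mapping
CREATE OR REPLACE FUNCTION stamp_user() RETURNS trigger AS $$
BEGIN
    NEW.modified_at := now();
    SELECT user_id INTO NEW.modified_by
      FROM connection_users
     WHERE connection_id = pg_backend_pid();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_user_trg
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE stamp_user();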
One way is to use the custom_variable_classes configuration option. It appears to be designed to allow the configuration of add-on modules, but can also be used to store arbitrary values in the current database session.
Something along the lines of the following needs to be added to postgresql.conf:
custom_variable_classes = 'local'
When you first connect to the database you can store whatever information you require in the custom class, like so:
SET local.userid = 'foobar';
And later on you can retrieve this value with the current_setting() function:
SELECT current_setting('local.userid');
Adding an entry to a log table might look something like this:
INSERT INTO audit_log VALUES (now(), current_setting('local.userid'), ...)
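And for the original trigger use case, a rough sketch of a trigger that reads the setting directly, with no lookup table needed (table and column names are made up; the app is assumed to have run the SET above right after connecting):

CREATE OR REPLACE FUNCTION stamp_modified() RETURNS trigger AS $$
BEGIN
    NEW.modified_at := now();
    NEW.modified_by := current_setting('local.userid');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER stamp_modified_trg
    BEFORE INSERT OR UPDATE ON some_table
    FOR EACH ROW EXECUTE PROCEDURE stamp_modified();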
While it may work for your desktop use case, note that process id numbers do roll over (32768 is a common upper limit), so using them as a unique key to identify a user can run into problems. If you ever end up with leftover data from a previous session in the table that tracks the user-to-process mapping, it can collide with a newer connection that was assigned the same process id after rollover. It may be sufficient for your app to just aggressively clean out old mapping entries, perhaps at startup time given how you've described its operation.
To avoid this problem in general, you need to make a connection key that includes an additional bit of information, such as when the session started:
SELECT procpid,backend_start FROM pg_stat_activity WHERE procpid=pg_backend_pid();
That has to iterate over all of the connections active at the time to compute, so it does add a bit of overhead. It's possible to execute that a bit more efficiently starting in PostgreSQL 8.4:
SELECT procpid,backend_start FROM pg_stat_get_activity(pg_backend_pid());
But that only really matters if you have a large number of connections active at once.
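So a collision-proof version of the mapping table from the question might key on both values, something like (names hypothetical):

CREATE TABLE connection_users (
    procpid       integer,
    backend_start timestamp with time zone,
    user_id       integer NOT NULL,
    PRIMARY KEY (procpid, backend_start)
);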
Use current_user if you need the database user (I'm not sure that's what you want by reading your question).
I recently had the very unpleasant experience of a PostgreSQL sequence falling out of sync with the table's id column in a Rails app. Until this happened, Rails 'magic' meant that I had never known what a PostgreSQL sequence was; it had always worked automagically. Even more alarmingly, I didn't even know when I had caused this 'out of sync' issue to occur (the errors were only thrown later, when new records were created).
I managed to wade my way through the drama. But now I want to understand all the possible causes of such a severe problem as a PostgreSQL sequence falling out of sync with the id of a Rails table, so I can avoid it in the future.
I caused it by creating new records manually with an explicit id, e.g. User.create(id: 4566, name: "Jo", email: "jo@gmail.com") (importantly, creating the record without specifying the id would have avoided the issue, i.e. User.create(name: "Jo", email: "jo@gmail.com")).
My question: in addition to specifying ids on newly created records, what else does a Rails dev need to know can cause this issue?
A Postgres sequence does not follow transaction semantics; it just returns its last value + 1 every time it is asked. If you open a transaction, create some records, and then roll back, the records you created will not be persisted in the DB, but the sequence increments are not rolled back either; those ids are simply skipped.
It cannot follow transaction semantics because it's supposed to keep track of a "global state counter". If two concurrent transactions were to create a new row in the same table, we want both to have different ids, for example.
My advice would be to not explicitly set ids in the DB. If you need a custom identifier, use another column.
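If it does happen again, a quick way to resync is to point the sequence back at the table's maximum id. A sketch from the Rails console, assuming the PostgreSQL adapter and the default sequence name:

# reset_pk_sequence! is provided by the PostgreSQL adapter
ActiveRecord::Base.connection.reset_pk_sequence!('users')
# or the raw SQL equivalent:
#   SELECT setval('users_id_seq', COALESCE((SELECT MAX(id) FROM users), 1));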
for i, name in ipairs(redis.call('KEYS', 'cache:user_transaction_logs:*:8866666')) do redis.call('DEL', name) end
How can I Optimise this redis query?
We are using Redis as a cache store in Rails. Whenever a user makes a successful transaction, the receiver's and the initiator's transaction histories are expired from Redis.
The query cannot be optimized; it should be replaced in its entirety, because the use of KEYS is discouraged for anything other than debugging purposes in non-production environments.
A preferable approach, instead of trying to fetch the relevant key names ad-hoc, is to manage them in a data structure (e.g. Set or List) and read from it when you perform the deletions.
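A hedged sketch of that shape with redis-rb (key names and helpers are made up for illustration):

require 'redis'

redis = Redis.new

# Whenever a cache entry is written, remember its key in a per-user set
def write_log_cache(redis, user_id, key, value)
  redis.set(key, value)
  redis.sadd("cache_keys:#{user_id}", key)
end

# On a successful transaction, delete exactly the keys the set tracks
def expire_user_cache(redis, user_id)
  keys = redis.smembers("cache_keys:#{user_id}")
  redis.del(*keys) unless keys.empty?
  redis.del("cache_keys:#{user_id}")
end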
You need to change the approach for how you are storing cache entries for your users.
Your keys should look something like cache:user_transaction_logs:{user_id}.
Then you will be able to just delete the entry by its key (user_id).
If you need several cache entries per user_id, use Redis hashes (https://redis.io/commands#hash); then you will again be able to delete all entries per user_id with one DEL command, or a specific entry with HDEL (sketched below).
It is also a good idea to use Redis database numbers (0 is the default; 1-15 are also available) and put separate functionalities on separate database numbers. Then, if you need to wipe the cache of a whole functionality, that can be done with a single FLUSHDB.
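For illustration, the hash layout might look like this in redis-rb (placeholder values; one hash per user, one field per log entry):

require 'redis'

redis    = Redis.new
user_id  = 8866666
entry_id = 'txn-123'

key = "cache:user_transaction_logs:#{user_id}"
redis.hset(key, entry_id, '{"amount": 10}')  # write one entry
redis.hdel(key, entry_id)                    # delete a single entry
redis.del(key)                               # or wipe the user's whole history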
I am building a rails app and the data should be reset every "season" but still kept. In other words, the only data retrieved from any table should be for the current season but if you want to access previous seasons, you can.
We basically need to have multiple instances of the entire database, one for each season.
The client's idea was to export the database at the end of the season and save it, then start fresh. The problem with this is that we can't look at all of the data at once.
The only idea I have is to add a season_id column to every model. But in this scenario, every query would need to have where(season_id: CURRENT_SEASON). Should I just make this a default scope for every model?
Is there a good way to do this?
If you want all the data in a single database, then you'll have to filter it, so you're on the right track. This is totally fine; data gets filtered all the time anyway, so it's not a big deal. What you're describing is also very similar to marking data as archived (anything not in the current season is essentially archived), which is very commonly done and usually accomplished by setting a boolean flag on every record to hide it, or some equivalent method.
You'll probably want a scope or default_scope. The main downside of a default_scope is that you must use .unscoped everywhere you want to access data outside of the current season, whereas not using a default scope means you must specify the scope on every call. Default scopes can also get applied in surprising places from time to time; in my experience I prefer to always be explicit about the scopes I'm using (I therefore never use default_scope), but this is a personal preference.
In terms of how to design the database you can either add the boolean flag for every record that tells whether or not that data is in the current season, or as you noted you can include a season_id that will be checked against the current season ID and filter it that way. Either way, a scope of some sort would be a good way to do it.
If using a simple boolean, then either at the end of the current season or the start of the new season, you would have to go and mark any current season records as no longer current. This may require a rake task or something similar to make this convenient, but adds a small amount of maintenance.
If using a season_id plus a constant in the code to indicate which season is current (perhaps via a config file), it would be easier to mark things as the current season, since no DB updates are required from season to season.
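To make that concrete, here is a sketch of the explicit-scope variant (the model name and config accessor are hypothetical):

class Game < ActiveRecord::Base
  # Callers opt in to the filter instead of needing .unscoped to escape it
  scope :current_season, -> { where(season_id: AppConfig.current_season_id) }
end

Game.current_season.where(team_id: 3)  # this season only
Game.all                               # historical seasons still reachable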
[Disclaimer: I'm not familiar with Ruby so I'll just comment from the database perspective.]
The problem with this is that we can't look at all of the data at once.
If you need to keep the old versions accessible, then you should keep them in the same database.
Designing "versioned" (or "temporal" or "historized") data model is something of a black art - let me know how your model looks like now and I might have some suggestions how to "version" it. Things can get especially complicated when handling connections between versioned objects.
In the meantime, take a look at this post, for an example of one such model (unrelated to your domain, but hopefully providing some ideas).
Alternatively, you could try using a DBMS-specific mechanism such as Oracle's flashback query, but this is obviously not available to everybody and may not be suitable for keeping the permanent history...
In the application I am currently creating in Ruby on Rails, I am trying to do some tests in the Rails console where I have to destroy data in the database, and the database is connected to a server. I am importing an XML file, parsing it, and putting it into a database with scaffolding.
Now what I need: basically I am attempting to destroy the data and replace it with new data every week. But the problem I am getting is that the user id has gone up to 700+ even though there are only 50 records :S because it doesn't reset...
To delete all records I am currently using "whatever.destroy_all", which does the trick.
Any help?
Btw, I am using SQLite.
The ID column in the table is usually set as unique and to increment by 1 for each new record, which is why the ID keeps getting higher each time you destroy and add new data.
The fact that the ID # is getting larger and larger is not an issue at all.
If you really want to start back at 1, I would think you could drop the table and recreate it, but that seems like overkill for a trivial issue.
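If you really do need the counter to restart, here is a sketch for SQLite (assuming the model is User and Rails created the table with AUTOINCREMENT, whose counters live in the sqlite_sequence table):

User.destroy_all
# Clearing the table's row in sqlite_sequence restarts its ids from 1
ActiveRecord::Base.connection.execute(
  "DELETE FROM sqlite_sequence WHERE name = 'users'"
)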
Regarding the connection to the other scaffold, how are you connecting the two and what do they both represent?
Ideally the data population for testing should be done through fixtures (or easy tools such as FactoryGirl, etc.).
The main advantage of having a fixed data set is that you can run your tests in any environment. But as per your requirement you can do something like this:
When you populate the data through Active Record, pass the id parameter as well.
Ex: User.create(:id => 1, :name => "sameera")
This way you can have constant ids, but make sure you increment the id accordingly.
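For comparison, a minimal FactoryGirl setup sidesteps hand-picked ids entirely (the factory definition here is illustrative):

FactoryGirl.define do
  factory :user do
    name  "sameera"
    email "sameera@example.com"
  end
end

user = FactoryGirl.create(:user)  # id assigned by the database as usual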
I have a DateTime LastSeen property that stores in the database when the user was last seen.
One way I have in mind is to update the database when validating the user during login.
Another way is to update the database every 20 minutes, but where do I put this logic in ASP.NET MVC? Do I need to set a lastupdate value in the cookie and check that? Where would I check this cookie other than in the Global.asax file?
How do other systems do it?
Personally, I would take a page out of Google Analytics' book and run this client side. To get there:
a) Set up a handler/action/something that accepts HTTP requests for recording user "seen" activity.
b) Set up an ajax call to (a) to record activity at a reasonable interval from the client.
This will get you a much better answer to the question "what if Bob just opened the site, saw he didn't have any messages, and went on browsing [whatever]?"
I think, as you suggest, updating that value when the user logs on would be simplest.
If your model also has CreatedOn, CreatedBy, ModifiedOn, and ModifiedBy properties, you can also query these values with a join onto the user table to see if the user has been active elsewhere in the app, but this may not perform well, as you'll need a join on every table in your database.