I am just starting with nhibernate-envers so, please, bear with me.
I am trying to manage a typical situation where I have customer and orders.
I would like to have different revisions for each entity.
I was wondering if it is possible to customize the configuration so that my customers have one progressive revision number and my orders have another, each starting from 1.
Thanks.
No. The revision number is a global number for your domain model (like a changeset number for a repository in a VCS).
Related
I'm looking to build an application similar to GitHub, in the sense that it has both Users and Organizations, and both can have many X (in GitHub's case, repositories).
Does anyone know the best way to go about this? Should I put them both in the same database table with a 'type' attribute, or use two different tables?
Thanks for any help!
Edit: The application I have in mind is not meant to host code. It's similar to GitHub only in that it has Users and Organizations, both of which can create X.
This has already been done very well with GitLab. You might consider using GitLab, rather than creating your own application. If you do decide to continue, perhaps you can start with a fork of GitLab, or at least crib any relevant portions from their architecture.
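If you do roll your own, one common Rails pattern for "Users and Organizations can both own things" is a polymorphic association rather than a single table with a type column. A minimal sketch (Repository stands in for your X; none of this is GitLab's actual schema):

    # Sketch only: Users and Organizations both "own" things through a
    # polymorphic association, instead of living in one table with a type column.
    class User < ActiveRecord::Base
      has_many :repositories, as: :owner
    end

    class Organization < ActiveRecord::Base
      has_many :repositories, as: :owner
    end

    class Repository < ActiveRecord::Base
      belongs_to :owner, polymorphic: true  # needs owner_id and owner_type columns
    end

    # Usage:
    #   user.repositories.create(name: "demo")
    #   org.repositories.create(name: "shared-tools")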
I originally wrote my Ruby on Rails application for one client. Now, I am changing it so that it can be used by different clients. My end goal is that some user (not me) can click a button and create a new project, and all the necessary changes (new schema, new tables, handling of code) are generated without anyone needing me to edit a database.yml file or add new schema definitions. I am currently using scoped access: I have a Project model, and the other associated models have a project_id column.
I have looked at other posts regarding multi-tenant applications in Rails. A lot of people seem to suggest creating a different schema in Postgres for each new client. For me, however, giving a new client a different schema is not very useful in terms of the data model: each client will have the same tables, rows, columns, etc.
My vision for each client is that my production database first has a table of the different projects/clients, and each entry in that table links to a set of tables that are structurally the same but hold different data. In other words, a table of tables: the first table maps each client to its own set of identically structured data.
Is the way I explained my vision at all similar to the way that Postgres implements different "schemas"? Does it look like nested tables? Or does Postgres have to query all the information in the database anyway? I do not currently use Postgres, but I would be willing to learn if it fits the design. If you know of database software that works with Rails that fits my needs, please do let me know.
Right now, I am using scopes to accomplish multi-tenancy, but it does not feel scalable or clean. It does, however, make it very easy for a non-technical user to create a new project, provided I give them the fields to fill in. Do you know whether, with the multi-schema Postgres approach, it is possible to have this happen automatically after a user clicks a button? I would also prefer that this be handled by Rails rather than by an external script, if possible (but please do advise either way).
Most importantly, do you recommend any plugins, or should I adopt a different framework for this task? I have found Rails to be limited in some cases of abstraction like the one above, and this is the first time I have run into a Rails scaling issue.
Any advice related to multi-tenant applications or my situation is welcome. Any questions for clarification or additional advice are welcome as well.
Thanks,
--Dave
MSDN has a good introduction to multi-tenant data architecture.
At one end of the spectrum, you have one database per tenant ("shared nothing"). "Shared nothing" makes disaster recovery pretty simple, and has the highest degree of isolation between tenants. But it also has the highest average cost per tenant, and it supports the fewest tenants per server.
At the other end of the spectrum, you store a tenant id number in every row of every shared table ("shared everything"). "Shared everything" makes disaster recovery hard--for a single tenant, you'd have to restore just some rows in every shared table--and it has the lowest degree of isolation. (Badly formed queries can expose private data.) But it has the lowest cost per tenant, and it supports the highest number of tenants per server.
"My vision for each client is that my production database first has a table of the different projects/clients, and each entry in that table links to a set of tables that are structurally the same but hold different data. In other words, a table of tables: the first table maps each client to its own set of identically structured data."
This sounds like you're talking about one schema per tenant. Pay close attention to permissions (SQL GRANT and REVOKE statements, and ALTER DEFAULT PRIVILEGES).
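To make the one-schema-per-tenant idea concrete in Rails terms, here is a minimal sketch, assuming PostgreSQL and a hypothetical tenants table (kept in the public schema) that stores each tenant's schema name:

    # Sketch: schema-per-tenant in PostgreSQL from Rails.
    # Assumes a "tenants" table with a schema_name column; names are illustrative.
    class ApplicationController < ActionController::Base
      around_action :scope_to_tenant_schema

      private

      def scope_to_tenant_schema
        tenant = Tenant.find_by!(subdomain: request.subdomain)
        # From here on, queries only see tables inside this tenant's schema
        # (plus anything shared in public).
        ActiveRecord::Base.connection.schema_search_path = "#{tenant.schema_name}, public"
        yield
      ensure
        ActiveRecord::Base.connection.schema_search_path = "public"
      end
    end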
There are two RailsCasts on multi-tenancy: one using scopes and subdomains, and another to help with handling multiple schemas.
There is also the multitenant gem, which could help with your scopes, and the apartment gem for handling multiple schemas.
There is also a good presentation on multi-tenancy with Rails.
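For the "a user clicks a button and a new project appears" requirement, the apartment gem can create and switch schemas from inside Rails. A rough sketch, with the caveat that the exact API differs between gem versions and that Project here stands in for your tenant model:

    # Sketch using the apartment gem with PostgreSQL schemas; names are illustrative.
    # config/initializers/apartment.rb
    Apartment.configure do |config|
      config.excluded_models = ["Project"]            # Project stays in the shared public schema
      config.tenant_names    = -> { Project.pluck(:subdomain) }
    end

    # When the user clicks "create project":
    class ProjectsController < ApplicationController
      def create
        project = Project.create!(project_params)     # project_params: standard strong parameters
        Apartment::Tenant.create(project.subdomain)   # builds the new schema and loads the tables
        Apartment::Tenant.switch!(project.subdomain)  # subsequent queries hit that schema
        redirect_to root_url(subdomain: project.subdomain)
      end
    end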
Don't forget about default scopes. While creating named scopes the way you are now works, it does feel like it could be done better. I came across this guide by Samuel Kadolph on the issue a few months ago; it looks like it could work well for your situation, with the added benefit of keeping your application free of some PostgreSQL-only features.
Basically, the setup he describes involves adding the concept of tenants to your application and then using it to scope the data at query time in the database.
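As a rough sketch of that idea (not his exact code), each scoped model carries a tenant_id and a default scope reads the current tenant from a per-request setting:

    # Sketch: row-based multi-tenancy via a default scope; names are illustrative.
    class Tenant < ActiveRecord::Base
      # Thread-local "current tenant", set once per request.
      def self.current_id=(id)
        Thread.current[:tenant_id] = id
      end

      def self.current_id
        Thread.current[:tenant_id]
      end
    end

    module TenantScoped
      extend ActiveSupport::Concern

      included do
        belongs_to :tenant
        default_scope { where(tenant_id: Tenant.current_id) }
      end
    end

    class Order < ActiveRecord::Base
      include TenantScoped
    end

    # In ApplicationController, before each action:
    #   Tenant.current_id = Tenant.find_by(subdomain: request.subdomain).id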
OK, the scenario is simple(?). I have, say, Articles, Reviews and Comments: three different models that a user can update. I am wondering what the best design is if I want a user's updated records to become available only to an administrator (and not in the public view), while the old records remain as-is until the admin reviews the changes.
My first thought was that it would be best to use a self-referential association: create a new record on the user's update action that references the old one, so that when the admin finishes the review, the old record gets deleted and the reviewed one gets published. I seriously think, though, that if there are many updates from users, the table holding the data will grow big very quickly. Do you have any other suggestions?
Thanks a lot for the help :)
P.S. I am using the Ruby on Rails framework, if that makes any difference at all.
This kind of reminds me of WordPress's draft system and, to be honest, I think it's probably one of the few ways to do it. That is, have a column such as post_type with values like [draft, published, etc.]. Alternatively, you could have a separate drafts model and a published model, both associated polymorphically with the Article model. I think you are right in your concerns about the table getting too big, so you would do better to choose a strategy of removing old drafts or archiving them in another table, so that they only take up storage space instead of slowing down searches against the live table (also, remember to add indexes for better performance).
Maybe acts_as_state_machine can help as well (http://www.practicalecommerce.com/blogs/post/122-Rails-Acts-As-State-Machine-Plugin).
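A minimal sketch of that status-column idea (the column name and states are just examples, not tied to any particular plugin):

    # Sketch: a status column drives the moderation workflow; values are examples.
    class Article < ActiveRecord::Base
      STATUSES = %w[draft pending_review published].freeze

      validates :status, inclusion: { in: STATUSES }

      scope :published,       -> { where(status: "published") }
      scope :awaiting_review, -> { where(status: "pending_review") }

      def submit_for_review!
        update!(status: "pending_review")
      end

      def publish!
        update!(status: "published")
      end
    end

    # Public pages use Article.published; the admin dashboard uses Article.awaiting_review.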
One thing to keep in mind is that unless you're trying to create a detailed revision history, there are really only two versions of a given article, review or comment that you need to keep around: the current approved public version of the item and the current edited version. So you can probably just store two copies: copy the draft version into the published one and delete the draft when it's approved, or update the draft if the user makes additional changes before it's approved.
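Here is a rough sketch of that two-copy approach, assuming a hypothetical draft_body column alongside the public body column:

    # Sketch: one row per article, with the pending edit kept alongside the live copy.
    class Article < ActiveRecord::Base
      # columns: body (approved, public) and draft_body (pending edit, admin-only)

      def propose_edit(new_body)
        update!(draft_body: new_body)                # the public body stays untouched
      end

      def approve_draft!
        update!(body: draft_body, draft_body: nil)   # promote the draft, then clear it
      end

      def reject_draft!
        update!(draft_body: nil)
      end
    end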
This is my first post so please be patient with me... :)
I am learning Rails at the moment and took on an internal project to help me get proper hands-on practice.
The situation is like this:
We have an existing MS SQL Server 2000 DB with a bunch of customers. Usual stuff.
I don't know who designed it, but there is a huge "Customer" table where all the details are.
Soon we'll be moving customers out to a new company and need to track the movement.
So the application should keep a snapshot of the movement details for a particular customer: whether he was called on the phone, talked to, or contacted by other means; notes of the conversation; whether he agreed to move; etc.
So the original customer data should be pulled from MS SQL, and all the new tracking data should live in a proper new Rails DB.
I was considering a few things:
1. Pulling customer records from the old DB and doing the rest of the work in the new one.
This one is no good, as my research suggests that Rails cannot easily work with two DBs at the same time.
2. Connecting to just the old MS SQL DB and doing all the work there, creating the necessary new tables.
This one seems to be a lot of trouble: the "odbc" adapter gives me errors, and the "sqlserver" adapter does not work with MS SQL 2000.
Plus, I predict lots of trouble working with the existing MS SQL DB.
3. This method, I think, is the most rational.
Dump the customer table from the old MS SQL DB to CSV and import it into the new SQLite DB created for the Rails app.
Please let me know if you think of other methods to solve this problem.
With the third method I still see several problems, with which I would like help if possible.
For instance, Rails expects certain additional fields in each table, so the data import might not work. Or am I mistaken?
Does the third method sound doable to you? Do you see any pitfalls? Suggestions?
Thanks very much.
So unfortunately, Ruby doesn't work great on Windows, and MS SQL is probably the worst choice for a Rails DB. Given that, I would deploy on MySQL or PostgreSQL on Linux, export the data from the old DB, and start totally fresh: write a data import script that shapes things the way Rails likes.

The first step would be to move towards Rails conventions: a Customer model gets a customers table with an auto-incrementing integer primary key called id; all fields use snake_case (all lower case, with underscores separating words); and every table gets datetime audit fields called created_at / updated_at. Definitely a pain (especially if you are new to Rails), but you will have a hellish time if you try to fight convention and are not at least moderately comfortable with the platform.

IMO SQLite is a bad choice for anything other than an app that won't store much data and will be used by only one person.
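A rough sketch of such an import script, written as a rake task; the legacy column names here are made up and would need to match the real Customer table:

    # lib/tasks/import_customers.rake -- sketch only; legacy column names are assumptions.
    require "csv"

    namespace :import do
      desc "Import customers exported from the legacy MS SQL database as CSV"
      task customers: :environment do
        CSV.foreach("db/legacy/customers.csv", headers: true) do |row|
          # Map legacy column names onto Rails-convention attributes.
          Customer.create!(
            name:      row["CustomerName"],
            phone:     row["PhoneNumber"],
            legacy_id: row["CustomerID"]   # keep the old key for cross-referencing
          )
        end
      end
    end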
If all I have is one model (for example Wiki) and I want to save it along with its versions, I could use the acts_as_versioned plugin, which stores the wikis in a "wikis" table and their versions in a "wiki_versions" table. This is plain and simple, even if I want to moderate the latest version before showing it to the public, using a status field with "pending review / published".
What's the best way to handle a Wiki with associations (for example attachments, assets, ...) which also have versions? And how would you moderate it? Do you create a new version of the wiki even though only one of its associations has changed, just to keep the flow going? If so, what about the other associations?
What's the best way to handle it with little db overhead?
Thanks in advance.
I have used both acts_as_versioned and acts_as_audited.
I prefer the latter because it uses a single table. With acts_as_versioned we've had issues where changes to versioned tables require extra migrations, which adds extra complexity to our build and deployment process.
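For reference, a minimal sketch of the acts_as_audited style; the macro names vary between releases (newer versions of the gem just call it audited):

    # Sketch: the audited gem (formerly acts_as_audited) keeps every change in one audits table.
    class Wiki < ActiveRecord::Base
      audited                  # older releases use the acts_as_audited macro instead
      has_associated_audits    # exposes audits of associated records from the wiki
      has_many :attachments
    end

    class Attachment < ActiveRecord::Base
      belongs_to :wiki
      audited associated_with: :wiki   # attachment audits link back to their wiki
    end

    # wiki.audits            -> revisions of the wiki itself
    # wiki.associated_audits -> revisions of its attachments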
Richard Livsey has a nice plugin for this that works with acts_as_versioned.
http://github.com/rlivsey/acts_as_versioned_association/tree/master