Being able to save (multiple) changes without affecting database - structural issue - asp.net-mvc

I'm developing a business web application which will maintain a database with several tables. One of the requirements is that one should be able to "save your work" without affecting the database and then later pick it up and continue working. Multiple "savings" must be supported.
This administration tool will be developed in ASP.NET MVC 4 or Microsoft's LightSwitch; I haven't decided yet.
The problem I have is that I don't know how to solve this structurally. Are there any known techniques for this kind of problem? I need someone to point me in the right direction; I'm stuck here.
EDIT: I'll try to explain further with a scenario:
1. I make a change to one row and save, but the change should only be visible to me (not affect the main database).
2. I realize the change in step 1 is bad and choose to start over by changing the data in the same row; I also make a change to another row. I save these changes (again only for me).
3. Now I have two savings (from steps 1 and 2). I change my mind: the changes made in step 1 are correct, so I open that "savefile" and commit the changes to the main database. I'll then delete the "savefile" from step 2.
Hope that makes the situation more clear.
Thanks

The easiest way that I can think of is to let the database do the work for you.
You could:
- Add some type of "status" column ("committed", "uncommitted", etc.) to the table, and filter any "uncommitted" records out of any grid that displays "real" data. You can then filter the other way in your editing grid so that it only shows "uncommitted" records, or save a user ID alongside the status if you only want to see your own records (see the SQL sketch below).
- Add another table to hold the uncommitted records, rather than "pollute" the actual table with extra columns.
Does that make sense?
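For illustration, a minimal SQL sketch of the first approach - the table, column, and parameter names here are all made up:

-- Hypothetical Orders table gets a status and a draft owner.
ALTER TABLE Orders ADD Status VARCHAR(20) NOT NULL DEFAULT 'committed';
ALTER TABLE Orders ADD EditedBy INT NULL;

-- Grids that display "real" data filter the drafts out:
SELECT * FROM Orders WHERE Status = 'committed';

-- The editing grid shows only the current user's uncommitted rows:
SELECT * FROM Orders WHERE Status = 'uncommitted' AND EditedBy = @UserId;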

If you are really going to build a transactional version control system, then you have a huge job ahead.
Look at some popular tools like SVN and see the level of complication and the features they support.
Rolling back a partial transaction inside a database - especially one with constraints and triggers - will be very difficult. Almost everything would run into an issue somewhere.
You may also consider storing uncommitted transactions outside the database - for example in some local XML structure - then committing the components only once, when desired.
Either way, the interface for finding all the records and determining which ones to do what with will be a challenge - never mind the original application.

Related

Is there a solution to change table names in Cognos framework model automatically?

Currently, there are a lot of old tables in framework model A in the Cognos content store. We need to rename these old tables to new tables (the old tables have the same structure as the new ones, but different names) automatically - not manually, and not through Framework Manager. Does anyone have an idea how to do this? Could you share it? Many thanks.
It would help very much if you clarify what the situation is. Your terminology is confusing.
The Cognos content store is the database which stores everything that appears in the UI of Cognos Analytics. This includes the data source connection definitions, reports, and administration functionality like the memberships of groups and roles.
It is controlled by Cognos and you should very much not touch or alter it.
The objects in a Framework Manager model are not stored in the content store. When you publish a package, the information in the package is written into the content store. Where exactly is not something you should need to know.
My understanding about your question is this:
Some tables in a database used by your FM model have had their names changed. You want to alter the model so that it uses the new tables rather than the old ones.
There is extensive functionality in the FM UI to deal with cases like this, such as the remap-to-new-source functionality.
It should be possible to programmatically alter the model.
The nature of the change of the tables is important.
1. If the renaming only changed the case of the table names (from all upper case, all lower case, or mixed case to all lower case or all upper case) and you are on version 11.1.7, then you can use this utility:
https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_fm_model_update_util.html
The utility does not have provision to specify a subset of tables to have their case changed. It will do the action on all of the data source query subjects. This might be a problem in your case.
2. I have never needed to do this and, consequently, do not know whether it would really work, but you could edit the action logs. This would entail looking through the metadata import actions and the modelling actions which refer to the renamed tables and editing them to refer to the new tables.
You would really need to understand the action logs.
For example, if you happened to have expanded the node in the UI for any of the tables, views, or synonyms when you did the metadata import, the action log will have recorded an import of the explicit list of tables (or views, etc.) in the database. This happens even if you closed the node again. If you didn't expand it, the metadata import as captured in the action log will import every table (or view, etc.) generically. This means that if a table has been added, the action log playback will import it, and if a table has been deleted, the playback will not fail because it could not find the table. Why such a state could ever happen is beyond me, but that is what happens.
Do this on a copy. That should not be necessary to mention but you never know.
You would need to be absolutely sure that no-one decided that they were clever and manually edited the model.xml file. Manual editing does not get captured in the action logs.
You would need to be absolutely sure that no-one decided that they were clever and deleted some or all of the log files in the Cognos project.
You would need to be really really really sure that this is better than remapping. Frankly, the time invested in importing the new tables and remapping your model to use them would probably be less than the time invested in editing and then running the action logs, and then wading through the model to make sure you have not mucked things up.
Also, I don't know if editing the action logs is allowed in the licence so you could end up getting IBM mad at you.
I assume from your vocabulary that you are very new to Cognos...
The appropriate solution depends on how you define "a lot". Your Framework Manager model can be edited manually or by a program you write in C# or Java that leverages the Cognos Framework Manager SDK. Cognos SDKs are not trivial. There is a steep learning curve. In my experience, if "a lot" means less than a few hundred, you should do this manually. An experienced Framework Manager modeler could probably get through a hundred tables in an afternoon.

TFS - Move tasks between Work Items

As a result of bad TFS project management, several tasks have been created under the wrong work item. Now I need to move several tasks to different work items. Is there an easy way to do it?
So far, I have to edit each task, remove the previous link to the parent item and create a new one, but this is taking a lot of my time.
I suspect that the easiest way to do it would be from Excel. Create a tree-based query that shows everything, then move the child records in Excel using a simple cut followed by Insert Cut Cells. Excel will then allow you to publish the new structure in one go.
If you need to move items up to a higher or lower level, place the Title Field in the column representing the level.
See this little video I captured to show how it is done.
MS Project is extremely good at modifying hierarchies of work items. The steps are exactly the same as setting it up in Excel, but Project inherently handles parent/child relationships, giving them a drag-and-drop interaction.
jessehouwing's Excel answer will be easier if you have never worked with Project before.
Update: jessehouwing's comments are correct, especially about the shivers.

Dynamic database connection in a Rails App

I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to, and destroy databases automatically on user demand, but as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in details:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program and a huge amount of program related data (which is used to calculate the state and is necessary to have for audit reasons).
My problem is that for example I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db and after that I have to connect to all the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far, I would like to add a couple of details to make my problem more clear for you:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled and empty tables for it. The tool then processes the logs and inserts the calculated data into different tables in this database.
The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far, because the tool was configured by hand before each run, but now the configuration must be done automatically by my tool.) There is no way of changing the legacy tool: it would be too complicated in the given time frame, and it's a validated tool. So this is the reason I cannot use different tables for different programs - my solution has to work with the database layout the legacy tool expects.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use other framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care if you're using user input to determine the name of the new database for instance, and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea though is something like:
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need for that particular db. If the dbs weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that the model would always connect to that db instead of the main one. If you want to use the same class and connect it to different dbs (one at a time, of course), you would probably call remove_connection before calling establish_connection to the next one.
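To make that concrete, here is a minimal sketch under assumptions of my own (MySQL, a "program_<name>" naming scheme, and a hypothetical connect_to_program_db helper - none of which come from the question itself):

require "active_record"

# Abstract base class: models for the legacy tool's tables inherit from
# this instead of ActiveRecord::Base, so the main app's models keep
# using the main database.
class ProgramRecord < ActiveRecord::Base
  self.abstract_class = true
end

# Hypothetical helper: create a per-program database and point
# ProgramRecord at it. Sanitize program_name before using it here!
def connect_to_program_db(program_name)
  ActiveRecord::Base.connection.create_database("program_#{program_name}")

  ProgramRecord.establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    username: "app_user",
    password: "secret",
    database: "program_#{program_name}"
  )
end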
I hope this points you in the right direction. Good luck!

Version control of text fields in rails application

I'm planning a new app that will handle multiple text fields on many projects for many users; i.e., there will be a lot of text fields to manage.
A key feature will be the ability to "roll back" to view and update to previous versions of each and every text field.
Can anyone give me some advice on how best to handle this?
It seems like there would be a huge amount of data if each and every version of every text field was stored in the same table. But it may be the only way, and there is nothing wrong with storing each version in its entirety?
I thought there might be a smart approach to this though?
I would suggest using a versioning library like paper_trail. It will log all the changes to the fields you tell it to track. If you're really concerned with the amount of data that needs to be stored you might prefer a library like vestal_versions, which only stores the changes you made, not a complete copy of each version.
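For what it's worth, a minimal paper_trail sketch, assuming a hypothetical TextField model with a body column (the exact API varies a little between paper_trail versions):

# Gemfile: gem "paper_trail"
# Then create the versions table:
#   rails generate paper_trail:install && rake db:migrate

class TextField < ActiveRecord::Base
  has_paper_trail   # record every create/update/destroy of this model
end

field = TextField.create!(body: "first draft")
field.update_attributes!(body: "second draft")

field.versions.size            # => 2 (one version for the create, one for the update)
field.versions.last.reify.body # => "first draft" (the state before the last update)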

What's the best practice for handling mostly static data I want to use in all of my environments with Rails?

Let's say, for example, that I'm managing a Rails application that has static content that's relevant in all of my environments but that I still want to be able to modify if needed. Examples: states, questions for a quiz, wine varietals, etc. There are relations between user content and this static data, and I want to be able to modify it live if need be, so it has to be stored in the database.
I've always managed that with migrations, in order to keep my team and all of my environments in sync.
I've had people tell me dogmatically that migrations should only be for structural changes to the database. I see the point.
My counterargument is that this mostly "static" data is essential for the app to function and if I don't keep it up to date automatically (everyone's already trained to run migrations), someone's going to have failures and search around for what the problem is, before they figure out that a new mandatory field has been added to a table and that they need to import something. So I just do it in the migration. This also makes deployments much simpler and safer.
The way I've concretely been doing it is to keep my test fixture files up to date with the good data (which has the side effect of letting me write more realistic tests) and re-import it whenever necessary. I do it with connection.execute "some SQL" rather than with the models, because I've found that Model.reset_column_information plus a bunch of Model.create calls sometimes worked if everyone updated immediately, but would eventually explode in my face when I pushed to prod, say, a few weeks later, because I'd have newer validations on the model that conflicted with the two-week-old migration.
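As a concrete illustration of that pattern - with a made-up states table, since the question doesn't give a schema - such a migration might look like:

class ImportStates < ActiveRecord::Migration
  def up
    # Raw SQL keeps the migration independent of the current model code,
    # so validations added later can't break this old migration.
    execute "DELETE FROM states"
    execute "INSERT INTO states (code, name) VALUES ('CA', 'California')"
    execute "INSERT INTO states (code, name) VALUES ('NY', 'New York')"
  end

  def down
    execute "DELETE FROM states"
  end
end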
Anyway, I think this YAML + SQL process explodes a little less, but I also find it pretty kludgey. I was wondering how people manage that kind of data. Are there other tricks available right in Rails? Are there gems to help manage static data?
In an app I work with, we use a concept we call "DictionaryTerms" that acts as lookup values. Every term has a category that it belongs to. In our case, it's demographic terms, including terms having to do with gender, race, and location (e.g. state), among others.
You can then use the typical CRUD actions to add/remove/edit dictionary terms. If you need to migrate terms between environments, you could write a rake task to export/import the data from one database to another via a CSV file.
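A rough sketch of such a rake task; the DictionaryTerm model and its category/name columns are assumptions for illustration:

require "csv"

namespace :dictionary_terms do
  desc "Export all dictionary terms to a CSV file"
  task export: :environment do
    CSV.open("dictionary_terms.csv", "w") do |csv|
      csv << %w[category name]
      DictionaryTerm.find_each { |t| csv << [t.category, t.name] }
    end
  end

  desc "Import dictionary terms from a CSV file (idempotent)"
  task import: :environment do
    CSV.foreach("dictionary_terms.csv", headers: true) do |row|
      DictionaryTerm.where(category: row["category"], name: row["name"]).first_or_create!
    end
  end
end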
If you don't want to have to import/export, then you might want to host that data separate from the app itself, accessible via something like a JSON request, and have your app pull the terms from that request. That seems like a lot of extra work if your case is a simple one.
