Is there a solution to change table names in Cognos framework model automatically? - sdk

Currently there are a lot of old tables in framework model A in the Cognos content store. We need to rename these old tables to the new tables (the old tables have the same structure as the new tables, but their names are different) automatically, not manually and not through Framework Manager. Does anyone have an idea about this? Could you share it with me? Many thanks.

It would help very much if you clarified what the situation is. Your terminology is confusing.
The Cognos content store is the database which stores everything that appears in the UI of Cognos Analytics. This includes the data source connection definitions, reports, and administration functionality like the memberships of groups and roles.
It is controlled by Cognos and you should very much not touch or alter it.
The objects in a Framework Manager model are not stored in the content store. When you publish a package, the information in the package is written into the content store. Where it is written is not something you should need to know.
My understanding about your question is this:
Some tables in a database which is used in your FM model have had their names changed. You want to alter the model so that the model uses the new tables rather than the old ones.
There is extensive functionality in the FM UI to deal with cases such as this, for example the remap to new source functionality.
It should be possible to programmatically alter the model.
The nature of the change of the tables is important.
1.
If the nature of the renaming is that you have only changed the case of the table names (for example from all upper case, all lower case, or mixed case to all lower case or all upper case) and you are on 11.1.7, then you can use this utility:
https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_fm_model_update_util.html
The utility does not have a provision to specify a subset of tables to have their case changed. It will perform the action on all of the data source query subjects. This might be a problem in your case.
2.
I have never needed to do this and, consequently, do not know if this would really work, but you could edit the action logs. This would entail looking through the metadata import actions and the modelling actions which refer to the tables that were renamed and editing them to refer to the new tables (a rough sketch of scripting that edit is at the end of this answer).
You would really need to understand the action logs.
For example, if you happened to have expanded the node in the UI for any of the tables, views, or synonyms when you did the metadata import, the action log will have decided to do an import of the list of the tables (or views etc.) of the database. This happens even if you closed the node. If you didn't, then the metadata import as captured in the action log will import every table (or view etc.) generically. This means that if a table has been added, the action log playback will import it, and if a table has been deleted, the action log playback will not fail because it could not find the table. Why such a state could ever happen is beyond me, but that is what happens.
Do this on a copy. That should not be necessary to mention but you never know.
You would need to be absolutely sure that no-one decided that they were clever and manually edited the model.xml file. Manual editing does not get captured in the action logs.
You would need to be absolutely sure that no-one decided that they were clever and deleted some or all of the log files in the Cognos project.
You would need to be really really really sure that this is better than remapping. Frankly, the time invested in importing the new tables and remapping your model to use them would probably be less than the time invested in editing and then running the action logs, and then wading through the model to make sure you have not mucked things up.
Also, I don't know if editing the action logs is allowed in the licence so you could end up getting IBM mad at you.
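For what it's worth, the mechanical part of that edit could be scripted rather than done by hand. Here is a rough sketch only: it is not FM SDK code, it assumes the action logs are plain XML files somewhere under the project folder, the paths and table names are placeholders, and whether a blind text substitution is safe depends entirely on how distinctive the old names are and on all the caveats above. Run it only on a copy of the project.

# Naive rename of old table references inside copies of the FM action log XML files.
# Everything here (folder path, table names) is a placeholder; work on a copy, never the original.
renames = {
  'OLD_TABLE_A' => 'NEW_TABLE_A',
  'OLD_TABLE_B' => 'NEW_TABLE_B'
}

project_copy = '/tmp/fm_project_copy'
Dir.glob(File.join(project_copy, '**', '*.xml')).each do |path|
  xml     = File.read(path)
  patched = renames.reduce(xml) { |text, (old_name, new_name)| text.gsub(old_name, new_name) }
  File.write(path, patched) unless patched == xml
end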

I assume from your vocabulary that you are very new to Cognos...
The appropriate solution depends on how you define "a lot". Your Framework Manager model can be edited manually or by a program you write in C# or Java that leverages the Cognos Framework Manager SDK. Cognos SDKs are not trivial. There is a steep learning curve. In my experience, if "a lot" means less than a few hundred, you should do this manually. An experienced Framework Manager modeler could probably get through a hundred tables in an afternoon.

Related

Access 2016 - Easy multiple user database?

Is there an easy way to set up a database that's accessible to several people, where each can do all the things a single user would do?
I'm studying Database 101 and am currently doing a project with four other people. We're having trouble meeting up to work on it, so it would be great if we could each work on it from wherever we are.
When I say "easy way" I mean without having the super-ultra-deluxe-enterprise-edition of software.
Can it be done with a "local" Dropbox folder?
What you can do at zero cost is to have one project master.
Distribute a copy to each member. Each will have to do completely separate tasks, like one for designing a form, one for adjusting a report, one for some code module, one for another code module.
When done, in the evening or whatever you agree upon, you collect the different versions with a list of which objects have been changed or added. Import these into your master, and then distribute this to the members as the current revised working copy.
It takes some discipline, but that's all. And save every master as a zip with a filename that includes the date and time. This way, nothing can get lost.

Dynamic database connection in a Rails App

I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to and destroy databases automatically on user demand, and as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in detail:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program and a huge amount of program related data (which is used to calculate the state and is necessary to have for audit reasons).
My problem is that for example I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db and after that I have to connect to all the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer for you:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled and also empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far because the tool was configured by hand before each run, but now the configuration must be done automatically by my tool.) There is no way of changing the legacy tool, as that would be too complicated in the given time frame; it's also a validated tool. So this is the reason I cannot use different tables for different programs: my solution has to be based on that other tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care if you're using user input to determine the name of the new database for instance, and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea though is something like:
create_sql = <<SQL
CREATE TABLE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
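In case it helps, here is a minimal sketch of creating the per-program database itself. The names are placeholders (params[:program_name] is just an assumed source of user input), and #create_database is the helper on the MySQL adapters in the ActiveRecord versions I'm aware of, so check what your version offers:

# Hypothetical: derive the new database name from user input, then create it.
db_name = "program_#{params[:program_name].to_s.gsub(/[^0-9a-z_]/i, '')}"  # crude whitelist escaping

conn = ActiveRecord::Base.connection
conn.create_database(db_name, charset: 'utf8')   # MySQL adapter helper
# or, equivalently, raw SQL with the identifier quoted:
# conn.execute("CREATE DATABASE #{conn.quote_table_name(db_name)}")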
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need to for that particular db. If the db's weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class, and connect it to different db's (one at a time of course), you would probably remove_connection before calling establish_connection to the next one.
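A minimal sketch of that per-database connection part, assuming MySQL; the credentials, the ProgramRecord/ProgramState class names, the states table and the program_id variable are all placeholders, not anything from your actual schema:

# Abstract base class so the per-program connection doesn't hijack ActiveRecord::Base.
class ProgramRecord < ActiveRecord::Base
  self.abstract_class = true
end

ProgramRecord.establish_connection(
  adapter:  'mysql2',
  host:     'localhost',
  username: 'app_user',                 # placeholder credentials
  password: 'secret',
  database: "program_#{program_id}"     # the database created for this program
)

# Models inheriting from ProgramRecord now read from that program's database.
class ProgramState < ProgramRecord
  self.table_name = 'states'            # assumed table created by the legacy tool's SQL script
end

puts ProgramState.count

ProgramRecord.remove_connection         # disconnect before switching to another program's db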
I hope this points you in the right direction. Good luck!

Combination of MVC 4, Entity Framework, stored procedures - is it the right way?

We are doing a new project. For compatibility with all devices and browsers we have decided to use ASP.NET MVC 4, HTML5 and CSS 3, and we want to use Entity Framework for communicating with the database.
The senior members in the team (manager, DBA - they are also new to MVC 4 and EF) are asking us to write everything in stored procedures when communicating with the database so that maintenance becomes easy.
Is it the correct match if we go like that (MVC 4 + EF + stored procedures)? Will I not get maintainability and performance if I go with Code First reverse engineering (because the database tables are already there, I want to do it that way)? Please reply.
Below is the flow we want to follow; please correct me:
1. As the database is already there, first we will write the stored procedures for communicating with the DB.
2. Create a new MVC 4 project, add an .edmx file (EF) and select the tables and stored procedures.
3. In the MVC controllers or Web API we write the code that consumes the stored procedures.
There is nothing technically wrong with the ASP.NET MVC + EF + stored procedures approach at first sight.
But my experience shows that typically it's huge overkill. The common problem I see is conflicting interests between developers and DBAs. In the worst scenarios all DB-related stuff is controlled by the DBA, so if a developer wants to add or change some feature he needs to wait for the DBA to implement it (or wait for approval, which could also take long).
So I personally see that as a more bureaucratic way of development.
My own perspective is to be more agile in development, and tools like Code First match that. Stored procedures could still play a major role, for example in code/performance optimization, but they are not something to start with.
I agree that using stored procedures in the database is a good approach. Centralizing data validation and calculations in the database ensures data integrity. Client-side validation is important for the user experience but you must also ensure that you test the data validity in the database.
Using Entity Framework, you can generate entities which relate directly to tables in your database, or else you can design entities which use procedures for insert/update/delete operations rather than simple table updates.
In MVC you will use the entities as models to manage your data interactions.
Good luck
This is my personal view; I am sure others might have different ones. Since you are asking this question I am hoping you are open to discussion, otherwise I wouldn't have bothered, as this topic is like a religious debate: lots of people have very strong opinions and are not likely to change them.
Personally I don't think stored procedures are meant for writing business logic. They should be used for writing data access logic. I would only use a stored procedure if I wanted to optimize an expensive query such as a dynamic search, but nothing else. You will get slightly less performance if you have your logic in the domain model, but it's not even noticeable in most situations.
One of the strong arguments for writing business logic in stored procedures is that you can easily change some logic by changing your stored procedure. But should we really go and change the business logic of a deployed application without doing proper testing? What will happen if you accidentally make a mistake? Doing a deployment is not such a big deal now with continuous builds, and I don't think that as a professional developer you should take that risk.
When you decide to write your logic in stored procedures, you give up all the object-oriented concepts and you end up writing the kind of procedural code we wrote maybe 10 years ago. The C# language has come a long way now and you will not be able to use those new language features in the heart of your application, which is the business logic. You also lose the Visual Studio features for refactoring code, the advanced and easy debugging features, etc.
I also don't like the idea of having triggers, as they are not visible in the source code. Imagine someone new in your team trying to add a new feature some time later; if he doesn't know that a trigger exists, he might write some incorrect logic.
If your application contains some complex business logic (and I am sure most applications do), you should have a domain model that contains not just the properties of your entities, but also your logic. Otherwise you will be falling into the anti-pattern called the anemic domain model.
You will not be able to test your business logic by writing unit tests if you have your logic in stored procedures.
You will also not be able to deploy your business logic to multiple servers if it lives in stored procedures, should your site become really successful.
You will also not be using all the powerful capabilities of Entity Framework and LINQ if you have all your logic in your stored procedures. You actually don't need an ORM mapper if that is the approach you are going to take.
This is what I would recommend for your project.
Even though you already have the database, you can still use the Code First approach of Entity Framework. You can download the EF Code First reverse engineer power tool and have the Code First code auto-generated for you. This is going to be a one-off thing, and after that, if you have any more changes, you can make them directly in the database and update the Code First code accordingly. The Fluent API is a bit confusing at first, but you can easily learn it from the generated code.
Do not access your data context from the controller. Have a repository layer that contains all your data access logic, and access the repository from your controller. (This allows you to unit test your code by mocking the repository.) There are lots of video tutorials on how to use the repository pattern on the asp.net site.
Your domain model is going to be the entities that were generated by Entity Framework. Try to have your business logic in those models. It takes a little while to get used to the domain model pattern, but once you get used to it you will start to appreciate its benefits.
Hope this helps.

Being able to save (multiple) changes without affecting database - structural issue

I'm developing a business web application which will maintain a database with several tables. One of the requirements is that one should be able to "save your work" without affecting the database and then later pick it up and continue working. Multiple "savings" must be supported.
This administration tool will be developed in ASP.NET MVC4 or Microsoft's LightSwitch, I haven't decided yet.
The problem I have is that I don't know how to solve this structurally. Are there any known techniques for this problem? I need someone to point me in the right direction; I'm stuck here.
EDIT: I'll try to explain further with a scenario
1. I make a change to one row and save, but the change should only be visible to me (not affect the main database).
2. I realize the change in 1. is bad and choose to start over, changing the data in the same row; I also make a change to another row. I save these changes (but again only for me).
3. Now I have two savings (from steps 1 and 2). I change my mind: the changes made in 1. are correct, so I open that "savefile" and commit the changes to the main database. I'll then delete the "savefile" from step 2.
Hope that makes the situation more clear.
Thanks
The easiest way that I can think of is to let the database do the work for you.
You could:
- Just add some type of a "status" column ("committed", "uncommitted", etc.) to the table, and filter out any "uncommitted" records in any grid that displays "real" data. You can then also filter a different way in your editing grid, so that it only shows you "uncommitted" records; or you could save an ID instead of a status, if you only want to see your own records.
- Add another table to hold the uncommitted records, rather than "pollute" the actual table with extra columns.
Does that make sense?
If you are really going to build a transactional-type version control system, then you have a huge job ahead.
Look at some popular tools like SVN and see the level of complication and the features they support.
Rolling back a partial transaction inside a database - especially one with constraints and triggers - will be very difficult. Almost everything would run into an issue somewhere.
You may also consider storing uncommitted transactions outside the database - like in some local XML structure - then committing the components only once, when desired.
Either way, the interface for finding all the records and determining which ones to do what with will be a challenge - never mind the original application.

Is there some way in Delphi to cache master-detail rows and post both master and detail child rows at the same time

I want to post some child rows in memory, and then conditionally post them, or not post them, to an underlying SQL database, depending on whether or not a parent row is posted. I don't need a full ORM, but maybe just this:
User clicks Add doctor. Add doctor dialog box opens.
Before clicking Ok on Add doctor, within the Add doctor dialog, the user adds one or more patients which persist in memory only.
User clicks Ok in Add doctor window. Now all the patients are stored, plus the new doctor.
If user clicked Cancel on the doctor window, all the doctor and patient info is discarded.
Try, if you like, to mentally imagine how you might do the above using Delphi data-aware controls and TADOQuery or other ADO objects. If there is a non-ADO-specific way to do this, I'm interested in that too; I'm just throwing ADO out there because I happen to be using MS-SQL Server and ADO in my current applications.
So at a previous employer, where I worked for a short time, they had a class called TMasterDetail that was specifically written to add the above to ADO recordsets. It worked sometimes, and other times it failed in some really interesting and difficult-to-fix ways.
Is there anything built into the VCL, or any third party component that has a robust way of doing this technique? If not, is what I'm talking about above requiring an ORM? I thought ORMs were considered "bad" by lots of people, but the above is a pretty natural UI pattern that might occur in a million applications. If I was using a non-ADO non-Delphi-db-dataset style of working, the above wouldn't be a problem in almost any persistence layer I might write, and yet when databases with primary keys that use identity values to link the master and detail rows get into the picture, things get complicated.
Update: Transactions are hardly ideal in this case. (Commit/Rollback is too coarse a mechanism for my purposes.)
You're asking two separate questions:
How do I cache updates?
How can I commit updates to related tables at the same time?
Cached updates can be accomplished a number of different ways. Which one is best depends on your specific situation:
ADO Batch Updates
Since you've already stated that you're using ADO to access the data, this is a reasonable option. You simply need to set the LockType to ltBatchOptimistic and the CursorType to either ctKeySet or ctStatic before opening the dataset. Then call TCustomADODataSet.UpdateBatch when you're ready to commit.
Note: The underlying OLEDB provider must support batch updates to take advantage of this. The provider for SQL Server fully supports this.
I know of no other way to enforce the master/detail relationship when persisting the data than to call UpdateBatch sequentially on both datasets.
Parent.UpdateBatch;
Child.UpdateBatch;
Client Datasets
Data caching is one of the primary reasons for TClientDataset's existence and synchronizing a master/detail relationship isn't difficult at all.
To accomplish this you define the master/detail relationship on two dataset components as usual (in your case ADOQuery or ADOTable). Then create a single provider and connect it to the master dataset. Connect a single TClientDataset to the provider and you're done. TClientDataset interprets the detail dataset as a nested dataset field, which can be accessed and bound to data-aware controls just like any other dataset.
Once this is in place you simply call TClientDataset.ApplyUpdates and the client dataset will take care of ordering the updates for the master/detail data correctly.
ORMs
There is a lot that can be said about ORMs. Too much to fit into an answer on StackOverflow so I'll try to be brief.
ORMs have gotten a bad rap lately. Some pundits have gone so far as to label them an anti-pattern. Personally I think this is a bit unfair. Object-relational mapping is an incredibly difficult problem to solve correctly. ORMs attempt to help by abstracting away a lot of the complexity involved in transferring data between a relational table and an instance of an object. But like with everything else in software development there are no silver bullets and ORMs are no exception.
For a simple data entry application without a lot of business rules an ORM is probably overkill. But as an application becomes more and more complex an ORM starts to look more appealing.
In most cases you'll want to use a third-party ORM rather than rolling your own. Writing a custom ORM that perfectly fits your requirements sounds like a good idea, and it's easy to get started with simple mappings, but you'll soon start running into issues like parent/child relationships, inheritance, caching and cache invalidation (trust me, I know this from experience). Third-party ORMs have already encountered these issues and spent an enormous amount of resources solving them.
With many ORMs you trade code complexity for configuration complexity. Most of them are actively working to reduce the boilerplate configuration by turning to conventions and policies. If you name all your primary keys Id, then rather than having to map each table's Id column to a corresponding Id property for each class, you simply tell the ORM about this convention and it assumes all tables and classes it's aware of follow the convention. You only have to override the convention for specific cases where it doesn't apply. I'm not familiar with all of the ORMs for Delphi so I can't say which support this and which don't.
In any case you'll want to design your application architecture so you can push off the decision of which ORM framework (or for that matter any framework) to use as long as possible.
