I'm working against a DB2 database with a DotNet application. Some parts of the system are built and maintained in AS400 LANSA, other parts are DotNet.
We currently have the issue that we are maintaining a lot of summary tables that need to be updated with values from many different tables. This causes our data to be out of sync all the time, and we need to run scripts each night checking for and correcting errors.
These tables are required since they claim it is impossible to use joins or views in LANSA.
Is this correct? Are there alternatives I can suggest to them to avoid these problems?
If you use LANSA with RDMLX, then you can use SELECT_SQL to incorporate JOINs as needed.
By Views, do you mean logical files? Those get used as needed by the database engine automatically. If you mean a VIEW created off multiple physical files, then these can be replicated in LANSA using pre-determined Join Fields. Have a look at the LANSA documentation about these.
You could use something like this:
#MYSQLST := 'SELECT {write your SQL select statement here}'
Select_Sql Fields({my fields}) Using(#MYSQLST)
Worth reviewing the LANSA documentation on the use of Free Format SELECT_SQL
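As a rough sketch (not taken from the LANSA documentation; the table and field names are invented and the fields are assumed to already exist in your repository), a join that currently feeds a summary table could be read directly like this:
#MYSQLST := 'SELECT O.ORDNUM, C.CUSTNAME, O.ORDVALUE FROM ORDERS O JOIN CUSTOMER C ON O.CUSTNO = C.CUSTNO'
Select_Sql Fields(#ORDNUM #CUSTNAME #ORDVALUE) Using(#MYSQLST)
* Process each joined row here instead of reading a pre-built summary table
Endselect
Because the join is executed by DB2 each time the data is needed, there is no summary table to keep in sync.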
Currently there are a lot of old tables in framework model A in the Cognos content store. We need to rename these old tables to new tables (the old tables have the same structure as the new tables, but different names) automatically, not manually and not through Framework Manager. Does anyone have an idea how to do this? Could you share it with me? Many thanks.
It would help very much if you clarified what the situation is. Your terminology is confusing.
The Cognos content store is the database which stores everything which appears in the UI of Cognos Analytics. This includes the data source connection definitions, reports, and administration functionality like the memberships of groups and roles.
It is controlled by Cognos and you really should not touch or alter it.
The objects in a Framework Manager model are not stored in the content store. When you publish a package, the information in the package is written into the content store. Where it is written is not something you should need to know.
My understanding of your question is this:
Some tables in a database which is used in your FM model have had their names changed. You want to alter the model so that the model uses the new tables rather than the old ones.
There is extensive functionality in the FM UI to deal with cases like this, such as the remap-to-new-source functionality.
It should be possible to programmatically alter the model.
The nature of the change of the tables is important.
1.
If the renaming only changed the case of the table names (for example, from all upper case, all lower case, or mixed case to all lower case or all upper case) and you are on 11.1.7, then you can use this utility:
https://www.ibm.com/support/knowledgecenter/SSEP7J_11.1.0/com.ibm.swg.ba.cognos.ug_fm.doc/c_fm_model_update_util.html
The utility does not have provision to specify a subset of tables to have their case changed. It will do the action on all of the data source query subjects. This might be a problem in your case.
2.
I have never needed to do this and, consequently, do not know if this would really work, but you could edit the action logs. This would entail looking through the metadata import actions and the modelling actions which refer to the tables which were renamed, and editing them to refer to the new tables.
You would really need to understand the action logs.
For example, if you happened to have expanded the node in the UI for any of the tables, views, or synonyms when you did the metadata import, the action log will have decided to do an import of the list of the tables (or views etc.) of the database. This happens even if you closed the node. If you didn't, then the metadata import as captured in the action log will import every table (or view etc.) generically. This means that if a table has been added, the action log playback will import it, and if a table has been deleted, the action log playback will not fail because it could not find the table. Why such a state could ever happen is beyond me, but that is what happens.
Do this on a copy. That should not be necessary to mention but you never know.
You would need to be absolutely sure that no-one decided that they were clever and manually edited the model.xml file. Manual editing does not get captured in the action logs.
You would need to be absolutely sure that no-one decided that they were clever and deleted some or all of the log files in the Cognos project.
You would need to be really really really sure that this is better than remapping. Frankly, the time invested in importing the new tables and remapping your model to use them would probably be less than the time invested in editing and then running the action logs, and then wading through the model to make sure you have not mucked things up.
Also, I don't know if editing the action logs is allowed in the licence so you could end up getting IBM mad at you.
I assume from your vocabulary that you are very new to Cognos...
The appropriate solution depends on how you define "a lot". Your Framework Manager model can be edited manually or by a program you write in C# or Java that leverages the Cognos Framework Manager SDK. Cognos SDKs are not trivial. There is a steep learning curve. In my experience, if "a lot" means less than a few hundred, you should do this manually. An experienced Framework Manager modeler could probably get through a hundred tables in an afternoon.
Programming Language: Delphi 6
SQL Server on the back end.
Problem:
The application used to hit the DB each time we needed something, ending up hitting it more than 2000 times to get certain things, which caused the application to be slow. These DB hits happened for a lot of tables, each having a different structure and a different number of columns. So I'm trying to reduce the number of calls.
We can have about 4000 records at a time from each table.
Proposed Solution:
Let’s get all the data from DB at once and use it when we need it so we don’t have to keep hitting the DB.
How the solution is turning out so far:
This version of Delphi doesn't have a dictionary, so we already have an implementation of a dictionary built on a string list (let us assume that implementation is good).
Solution 1:
Store this in the dictionary that we created, with:
A unique field as the key.
And add the rest of the data as strings in a string list, separated like this:
FieldName1:FieldValue1,FieldName2:FieldValue2,…
Might have to create about 2000 string lists to map data to keys.
I took a look at the following link:
How Should I Implement a Huge but Simple Indexed StringList in Delphi?
Looks like they could move to a different DB, which is not possible for me.
Is this a sane solution?
Solution 2:
Store this in a dictionary whose values are lists.
Each list will hold Delphi records.
Records can't be added to a TList directly, so I took a look at this link:
Delphi TList of records
Solution 3:
Or Given that I’m using TAdoQuery should I use Seek or locate to find my records.
Please advice on the best way to do this?
Requirements:
Need random access to this data.
Insertion of data will happen only once, when we get all the data, per table as we require it.
Need to only read the data, don't have to modify it.
Constantly need to search by primary key.
In addition to changing the application, we have already done good indexing on the DB to take care of things from the DB side. This is more about making things run well from the application side.
This sounds like a perfect use case for TClientDataSet. It's an in-memory dataset that can be indexed, filtered, and searched easily, can hold any information you can retrieve from the database using a SQL statement, and has pretty good performance over a few thousand reasonably sized rows of data. (The link above is to the current documentation, as I don't have one available for the Delphi 6 docs. They should be very similar, although I don't recall which specific version added the ability to directly include MidasLib in your uses clause to eliminate distributing Midas.dll with your app.)
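As a very rough sketch of how that might look (the component names ADOQuery1, DataSetProvider1 and ClientDataSet1, the table name and the key field are placeholders, and DBClient, Provider and ADODB are assumed to be in the uses clause):
// Pull the whole table into memory once, then search it by key.
procedure TForm1.LoadTableIntoMemory;
begin
  ADOQuery1.SQL.Text := 'SELECT * FROM MyTable';
  DataSetProvider1.DataSet := ADOQuery1;
  ClientDataSet1.ProviderName := 'DataSetProvider1';
  ClientDataSet1.Open;                              // fetches all rows into memory
  ClientDataSet1.IndexFieldNames := 'MyKeyField';   // builds an in-memory index on the key
end;
function TForm1.FindByKey(const AKey: string): Boolean;
begin
  // FindKey uses the active index, so repeated primary-key lookups stay fast
  Result := ClientDataSet1.FindKey([AKey]);
end;
At around 4000 rows per table this stays well within what TClientDataSet handles comfortably, and you avoid building the string-parsing layer from Solution 1.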
Cary Jensen wrote a series of articles about it a few years back that you might find useful. The first one can be found in A ClientDataset in Every Database Application - the others in the series are linked from it.
I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to, and destroy databases automatically on user demand, but as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in detail:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In the programs' databases I would like to store the state and basic info of the particular program and a huge amount of program related data (which is used to calculate the state and is necessary to have for audit reasons).
My problem is that for example I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db and after that I have to connect to all the required program databases and get the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer for you:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with pre-filled and empty tables for this tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far, while the tool was configured by hand before each run, but now the configuration must be automated by my tool.) There is no way of changing the legacy tool, as that would be too complicated in the given time frame, and it is also a validated tool. So this is the reason I cannot use different tables for different programs: my solution has to be built around another tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care (and do your own escaping) if, for instance, you're using user input to determine the name of the new database. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea, though, is something like:
create_sql = <<SQL
CREATE DATABASE foo ...
SQL
ActiveRecord::Base.connection.execute(create_sql)
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need to for that particular db. If the db's weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class, and connect it to different db's (one at a time of course), you would probably remove_connection before calling establish_connection to the next one.
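A minimal sketch of that idea (the class name, the database naming convention, and the connection options below are assumptions, not something from your setup):
# Abstract base class whose connection points at a per-program database.
class ProgramRecord < ActiveRecord::Base
  self.abstract_class = true
end
def connect_to_program_db(program_name)
  ProgramRecord.remove_connection
  ProgramRecord.establish_connection(
    adapter:  "mysql2",
    host:     "localhost",
    username: "app_user",
    password: "secret",
    database: "program_#{program_name}"  # hypothetical naming scheme
  )
end
Models for the per-program tables can then inherit from ProgramRecord, while your main models keep inheriting from ActiveRecord::Base and stay connected to the main database.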
I hope this points you in the right direction. Good luck!
I'm trying to figure out the cut-off with respect to when a "text entry" should be stored in the database vs. as a static file. Are there any rules of thumb here? The text entries will be at the most several paragraphs and have links to images and tables (and hyperlinks to other text entries). Some criteria for the text entry:
I'm thinking of using DITA as the content format
The text should be searchable
If the text is revised, a new version will be created
thanks in advance, Chuck
The "rails way" would be using a database.
The solution will be more scalable, and therefore faster and probably easier to develop with (using migrations and so on). Using the file system, you would have to build a lot of functionality on your own that is already implemented for database usage.
You could create a model (e.g. Document) and easily use existing versioning systems like paper_trail. When using an indexed search, you can just use a has_many relation, enabling you to express the dependencies between the models (destroying a record also destroys its search index).
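For example, a minimal sketch assuming the paper_trail gem (the Document model and its columns are made-up names, not something in your app yet):
# app/models/document.rb
class Document < ActiveRecord::Base
  has_paper_trail   # stores a version row every time a document is changed
end
doc = Document.create!(title: "Entry 1", body: "<dita>...</dita>")
doc.update_attributes!(body: "<dita>revised</dita>")
doc.versions.size   # earlier revisions remain available for audit or rollback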
Rather than a cut-off, you could look at what databases provide and ask yourself if those features would be useful. Take Isolation (the I in ACID): if you have any worries that multiple people could be trying to edit an entry at the same time, a database would handle that well while you'd have to handle the locks yourself working with files. Or Atomicity: you might want to update two things at once (e.g. an index page and an entry page) and know they will either both succeed or both fail.
Databases do a number of things beyond ACID, such as taking advantage of multiple datatypes, making querying easier, and allowing for scaling. It's a question worth asking since most databases end up having data stored in a bunch of files on disk. Would you end up writing a mini-database if you used files yourself?
Besides, if you're using Rails you might as well take advantage of its ActiveRecord functionality, and make it possible to use the many plugins that expect a database.
I'd use a database for even a small, single user rails app.
I was wondering what the "correct" way is when using a LINQ to SQL model in Visual Studio.
Should I create a new model for each of my components, say Blog, Users, News and so on, and have different xxxDataContexts with tables and SPROCs added to each of these?
Or should I create one MyDbDataContext and always work against that?
What are the pros/cons? My gut tells me to divide it up into smaller contexts, but it also feels like that could cause problems as the project expands.
What's the deal? Help me Stackoverflow :)
There will always be overhead when creating the data context as the model needs to be built. Depending on the number of tables in your database this might not be a big deal though. If it's only 10 tables or so, the overhead will not be much more than that for a context with, say, 1 table (sorry, I don't have actual stress testing to show the overhead, but, hey, maybe that gives me something to blog about this weekend). When looking at large databases, the overhead might be enough to consider using separate contexts.
The main advantage I would see with using a single data context is that you gain the ability to use JOINs in your LINQ query and have them translated to T-SQL, whereas if you do the join after the arrays of objects are pulled, the performance might be a bit slower. Additionally, keeping track of multiple data contexts might be confusing, and good naming conventions would be needed, so building your own data model with business logic which encapsulates the contexts would be a bit harder. I've done this and it's not fun :)
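For instance, a quick sketch of what that looks like when both tables are mapped in one context (the context and table names here are made up):
using (var db = new MyDbDataContext())
{
    // Both Posts and Users live in the same context, so this join is translated
    // into a single T-SQL statement instead of being done in memory.
    var postsWithAuthors =
        from p in db.Posts
        join u in db.Users on p.UserId equals u.Id
        select new { p.Title, u.Name };
}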
However, if you still feel you want to go that route, then I would recommend putting similar tables (that you might need to join) in the same context. Also, there are some tuts online that recommend using a shared MappingSource when using multiple contexts that use the same source. Information on this can be found here: http://www.albahari.com/nutshell/speedinguplinqtosql.aspx
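If you do end up with several contexts, sharing a MappingSource (as described at the link above) looks roughly like this - the context class and member names here are placeholders:
using System.Data.Linq.Mapping;
public static class DataContextFactory
{
    // Built once and reused, so each new context skips re-reading the attribute mapping.
    private static readonly MappingSource SharedMapping = new AttributeMappingSource();
    public static BlogDataContext CreateBlogContext(string connectionString)
    {
        // Designer-generated contexts provide a (connection, mappingSource) constructor.
        return new BlogDataContext(connectionString, SharedMapping);
    }
}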
Sorry, I know that's not really a black and white answer, but hopefully it helps :)
Addition:
Just wanted to add that I did a small test and ran 20,000 SELECT statements against a small sized table using 2 different data contexts:
DataClasses1DataContext contained mappings to all tables in the db (4 total)
DataClasses2DataContext contained a single mapping for just the one table
Results:
Time to execute 20000 SELECTs using DataClasses1DataContext: 00:00:10.4843750
Time to execute 20000 SELECTs using DataClasses2DataContext: 00:00:10.4218750
As you can see, it's not much of a difference.