In the TFS database Tfs_DefaultCollection there are nearly no keys (maybe none; I haven't searched exhaustively).
I'm able to see all my work items and their current/previous states just fine. However, when I join WorkItemsAre to Tbl_Iterations, none of the IterationIds match at all. How do I look at work items by iteration name/title?
My search code is as follows:
this.Tbl_TeamConfigurationIterations.Dump();
this.Tbl_Iterations.Dump();
var myPersonId = foo; // omitted from sample
var qIteration =
    from wia in WorkItemsAres.Where(x => x.AssignedTo == myPersonId
                                      && x.State != "Closed"
                                      && x.State != "Resolved")
    join iLeft in Tbl_Iterations on wia.IterationID equals iLeft.SequenceId into iL
    from iteration in iL.DefaultIfEmpty()
    select new { iteration.Iteration, wia };
qIteration.Dump(); //.Select(x => new { x.AreaID, x.IterationID, x.Title, x.WorkItemType }).Dump();
For those interested, the solutions (both direct DB and TFS API DLL calls):
Direct DB version
Proper TFS DLL calls
@Maslow,
You can use the xxTree table to get the iteration name (join with the WorkItemsAre table on IterationID = xxTree.Id), but you should be aware that this is an undocumented and unsupported approach. I would strongly recommend using the TFS object model for such things.
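For illustration, a direct-DB query along the lines this answer describes might look like the sketch below. This is unsupported and for exploration only; "xxTree" is the placeholder name used in the answer above, and the iteration-name column is an assumption that may differ by TFS version.

```sql
-- Unsupported sketch: resolve iteration names by joining the work item
-- table to the classification tree table ("xxTree" as named above).
SELECT wi.Title,
       wi.[State],
       t.Name AS IterationName   -- column name is an assumption
FROM   WorkItemsAre wi
JOIN   xxTree t
       ON wi.IterationID = t.Id
WHERE  wi.[State] NOT IN ('Closed', 'Resolved');
```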
Microsoft very strongly recommends against using the transactional database directly (in fact, doing so can put you into an unsupportable state). If you want to query TFS data, it is recommended to do so using the API and client object model (SDK). There is a very rich, supported API for interacting with TFS.
See the docs here: http://msdn.microsoft.com/en-us/library/bb130146.aspx
Here is something that came out of a private mailing list where we (ALM MVPs) were trying to better understand Microsoft's stance on this:
Reading from the [TFS] databases programmatically, or manually, can cause unexpected locking within Microsoft SQL Server which can adversely affect performance. Any read operations against the [TFS] databases that originate from queries, scripts, .dll files (and so on) that are not provided by the Microsoft [TFS] Development Team or by Microsoft [TFS] Support will be considered unsupported if they are identified as a barrier to the resolution of a Microsoft support engagement.
If unsupported read operations are identified as a barrier to the resolution of support engagement, the database will be considered to be in an unsupported state. To return the database to a supported state, all unsupported read activities must stop.
Related
We are using an Informix DB on a Linux operating system. Is there a way we can see the history of queries that have been executed through isql?
The Informix server has a feature for SQL tracing that is enabled with the SQLTRACE onconfig parameter. It can be set to collect various levels of information about executed statements, including the statement text. This is maintained in an in-memory circular buffer, so the information needs to be extracted from the buffer and stored separately if you want to maintain a permanent history.
There is more information on this feature in the Informix Administrator's Guide at https://www.ibm.com/support/knowledgecenter/SSGU8G_12.1.0/com.ibm.admin.doc/ids_admin_1126.htm
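As a sketch (the parameter values are illustrative; check the Administrator's Guide for your version), tracing is switched on in the onconfig file and the in-memory buffer can then be read from the sysmaster database:

```sql
-- After setting, e.g., "SQLTRACE level=LOW,ntraces=2000,size=2,mode=global"
-- in the onconfig file (values illustrative), the circular trace buffer
-- can be queried from sysmaster:
SELECT sql_id, sql_runtime, sql_statement
  FROM sysmaster:syssqltrace;
```

Copying these rows into a permanent table on a schedule would give the lasting history the question asks about.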
Is it a good idea to continuously use External Data Access (EDA) for synchronizing big files (let's say with 10 million records) with an RDBMS? Will EDA also handle incremental updates to the source UniData file and automatically reflect those changes (CREATE, UPDATE and DELETE) in the target RDBMS?
Also, according to the documentation, EDA currently supports MSSQL, Oracle and DB2. Is it possible to configure EDA to work with, for example, PostgreSQL?
I have an idea and I don't know whether it is doable in COBOL or not. I want to use an online VSAM file in an online program. My VSAM file has many records, and I want my online program to detect when a new record is added to the file and do some processing. Is this doable? Please give me some hints.
What you're describing is basically a trigger based on an event. You mentioned COBOL as the language, but in order to achieve what you want you also need to choose a runtime environment: something like CICS, IMS, Db2, WebSphere (Java), MQ, etc.
VSAM itself does not provide a triggering mechanism. An approach that would start to achieve what you want is to create an MQ queue through which the records to be written are processed; the consumer could write the record and take additional action. MQ cuts across all the runtimes listed above and is probably the most reliable.
Another option is to look at using Db2, where you could create a trigger or user-defined function that might achieve what you're looking for. Here is a reference article that describes many methods.
Here is a list of some of the articles in the link mentioned above:
Utilizing Triggers within DB2 by Aleksey Shevchenko
Using Stored Procedures as Communication Mechanism to a Mainframe by Robert Catterall
Workload Manager Implementation and Exploitation
Stored Procedures, UDFs and Triggers-Common Logic or Common Problem?
If you are looking to process records simply written from any source to VSAM there are really no inherent capabilities to achieve that in the Access Method Services where VSAM datasets are defined.
You need to consider your runtime environment, capabilities and goals as you continue your design.
If this is a high-volume application you could consider IBM's "Change Data Capture" product. On every update to a chosen VSAM file it will dump a before and after image of the record into a message queue, which can then be processed by the language and platform of your choice.
Also worth considering: if by "online" you mean a CICS application, then the VSAM file will be exclusively owned by a single CICS region, and all updates will be processed by programs running in that region. You may be able to tweak the application to kick off some post-processing (as simple as adding "EXEC CICS START yourtransaction ..." to the existing program(s)).
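As a hedged sketch of that tweak (the transaction ID and data area names are made up for illustration), the existing program could start an asynchronous post-processing transaction right after its WRITE:

```cobol
      * After the existing WRITE to the VSAM file, kick off an
      * asynchronous post-processing transaction (names are examples).
           EXEC CICS START
                TRANSID('YPST')
                FROM(WS-NEW-RECORD-KEY)
                LENGTH(LENGTH OF WS-NEW-RECORD-KEY)
           END-EXEC
```

The started transaction receives the key via EXEC CICS RETRIEVE, reads the new record, and does whatever processing is needed, without delaying the original update.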
Check out CICS Events. You can set an event for when the VSAM file is written to and action it with a COBOL program. There are several event adapters, you will probably be interested in the one that writes to a TS queue.
What is the difference between ADOTable and ClientDataSet?
Both components are capable of performing batch updates, so why add the extra overhead of two additional components, ClientDataSet and DataSetProvider?
The main difference is that a ClientDataSet can operate without a connection to an external database. You can use it as an in-memory table or load its contents from a file.
In combination with DataSetProvider it is frequently used to overcome limits of unidirectional datasets and as a cache.
A ClientDataSet is an in-memory dataset with a lot of useful additional functionality.
One big advantage compared to Interbase/Firebird tables and queries is that you don't need to keep a transaction alive, e.g. for as long as you display the data in a grid.
Have a look at this article:
A ClientDataSet in Every Database Application
TClientDataset is a generic implementation that works regardless of the underlying DB access library. It can work (through the provider) with any TCustomDataset descendant, be it a dbExpress dataset, a BDE one, an ADO one, or any of the many libraries available for Delphi for direct database access using the native client (e.g. ODAC, Direct Oracle Access, etc.).
It can also work in a multi-tier mode, where the data access dataset and provider are in a remote server application and the TClientDataset is in the client application. This allows for "thin client" deployment that doesn't require database clients or a data access library like ADO to be installed on the client (with recent versions of Delphi the required midas.dll code can be linked into the application; otherwise only midas.dll itself is required).
On top of that, it can be used as an in-memory table able to store data in a local file. It also allows for the "briefcase" model, where a thin client can still work when not connected to the database and then "sync" when a connection becomes available. That was more useful in the past, when wireless access was not common.
As you can see, TClientDataset offers a lot more than TADODataset.
The most important difference I can think of is resolving update conflicts. In fact, TClientDataSet exposes the handy ReconcileErrorForm dialog, which wraps up the process of showing the user the old and new records and allows them to specify what action to take, while with TADOTable for instance, you're basically on your own.
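As a minimal sketch of the in-memory usage mentioned above (the field names and file name are examples, not from any particular application):

```pascal
uses
  Db, DBClient; // TFieldType constants and TClientDataSet

var
  cds: TClientDataSet;
begin
  cds := TClientDataSet.Create(nil);
  try
    // Define an in-memory table: no database connection involved.
    cds.FieldDefs.Add('Id', ftInteger);
    cds.FieldDefs.Add('Name', ftString, 40);
    cds.CreateDataSet;

    cds.AppendRecord([1, 'Alpha']);
    cds.AppendRecord([2, 'Beta']);

    // Briefcase model: persist locally, reload later while offline.
    cds.SaveToFile('local.cds');
    cds.LoadFromFile('local.cds');
  finally
    cds.Free;
  end;
end;
```

When used against a real database, the same component is instead wired to a TDataSetProvider via its ProviderName property, and changes are posted back with ApplyUpdates.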
Question: Does Informix have a construct equivalent to Oracle's "materialized view", or is there a better way to synchronize two tables (not DBs) across a DB link?
I could write a sync myself (was asked to) but that seems like re-inventing the wheel.
Background: Recently we had to split a monolithic Informix 9.30 DB (Valent's MPM), putting one part of the DB on one server and the other part on another, since the combination of AppServer and DB server couldn't handle the load anymore.
In doing this we had to split a user-defined table space (KPI Repository) arranged in a star schema of huge fact tables and well-defined dimension tables.
Unfortunately a telco manager decided to centralize the dimension tables (normalization, no data redundancy, no coding needed) on one machine and make them available as views over a DB-link on the other machine. This is both slow and unstable, as it crashes the DB server every now and then when the view is used in sub-queries (demonstrably); very uncool on a production server.
I may be misreading your requirements, but could you not just use Enterprise Replication to replicate the single table across the DBs?
IDS 9.30 is archaic (four main releases off current). Ideally, it should not still be in service; you should be planning to upgrade to IDS 11.50.
As MrWiggles states, you should be looking at Enterprise Replication (ER); it allows you to control which tables are replicated. ER allows update-anywhere topologies; that is, if you have two systems, you can configure ER so that changes on either system are replicated to the other.
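As an illustrative sketch only (names are invented, the exact flags vary by version, and both servers must already be defined as ER nodes; consult the Enterprise Replication Guide for your release), defining and starting a replicate for a single dimension table looks roughly like:

```
# Hypothetical names: database "kpi", server groups g_serv1/g_serv2.
cdr define replicate --conflict=timestamp dim_repl \
    "kpi@g_serv1:informix.dim_table" "select * from dim_table" \
    "kpi@g_serv2:informix.dim_table" "select * from dim_table"
cdr start replicate dim_repl
```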
Note that IDS 9.40 and 10.00 both introduced a lot of features to make ER much simpler to manage - more reasons (if the fact that IDS 9.30 is out of support is not sufficient) to upgrade.
(IDS does not have MQT - materialized query tables.)