Is there a way to increase the limit on the number of items that can be displayed in a trend (historical) chart, such as tests or bugs? In our setup it is hard-coded by Microsoft to 1000 items. Is there a file on the server where this limit can be changed?
Thanks in advance
Unfortunately, this is by design; the limitation cannot be changed through any supported setting. There is already a UserVoice suggestion you can vote for:
https://visualstudio.uservoice.com/forums/330519-visual-studio-team-services/suggestions/16438078-i-cannot-plot-in-a-chart-widget-a-query-with-more
Generally, we do not recommend making changes directly against the TFS databases.
However, if you insist on changing the limit, you can run the SQL query below against the Tfs_Configuration database (back up the databases first):
Open SSMS and connect to the TFS SQL Server instance.
Navigate to the Tfs_Configuration database -> Right-click -> New Query
Run the query below:
exec prc_SetRegistryValue 1, '#\Service\WorkItemTracking\Settings\MaxTrendChartTimeSliceResultSize\', 5000
Please note that changing this limit may affect TFS performance.
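If you want to confirm the value that actually gets stored, the following is a minimal sketch that reads it back over ODBC. It assumes the pyodbc package, Windows authentication, and that TFS keeps its registry entries in a tbl_RegistryItems table; the server name is a placeholder, and the table and column names may differ between TFS versions.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourTfsSqlServer;DATABASE=Tfs_Configuration;Trusted_Connection=yes"
)
cursor = conn.cursor()

# Look up the trend-chart registry entry before and after running prc_SetRegistryValue.
# tbl_RegistryItems is an assumption; adjust to your TFS version if it differs.
cursor.execute(
    "SELECT ParentPath, ChildItem, RegValue "
    "FROM tbl_RegistryItems "
    "WHERE ChildItem LIKE '%MaxTrendChartTimeSliceResultSize%'"
)
for row in cursor.fetchall():
    print(row.ParentPath, row.ChildItem, row.RegValue)

conn.close()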
I support FileNet applications and generally focus on performance improvement techniques. We often face issues related to query optimization. Generally, we get the queries from the DBA, and these are DB SQL statements fired at the database level. From the application code, however, we pass CE SQL, not DB SQL. I am aware that the CE parses CE SQL into the underlying DB SQL. I am trying to figure out whether, given the DB SQL, I can get the corresponding CE SQL that was fired - for example, a code or script into which I enter the CE SQL and the corresponding DB SQL gets generated. I would appreciate any pointers on this, as I am really stuck.
You need to enable Trace Logging for the DB subsystem. This is done through the Trace Control tab of Domain configuration in ACCE. Then you will be able to see database queries in p8_server_trace.log.
For convenience you might want to enable tracing for the SRCH subsystem as well. Then original and generated queries will go hand in hand.
Detailed info on Trace Logging is available in the FileNet P8 documentation.
The way to capture CE SQL queries is to turn on auditing for the object class you are interested in and select Query Event as the audited event. Every time a query is performed, an event object is created. This object has a property called QueryText, which contains the CE query that was performed. You can use the creation time or other information in the query to match it to your database query.
The query events can be queried using the ACCE or accessed programmatically using the API object com.filenet.api.events.QueryEvent.
Be aware that on a busy system a lot of query events can be generated!
I've read every post with the same or a very similar headline, but I still can't find a proper solution or explanation for my problem.
I'm working with MySQL Workbench 6.3 CE. I have been able to create a database with several tables and to create a connection from Python to write data to it. Still, I had a problem with a VARCHAR field that needed to hold more than 45 characters. When I try to raise its limit, for example to VARCHAR(70), I get error 2013, saying my connection was closed during the query, no matter how many times I try or how high I set the timeout.
I'm using the above version of Workbench on Windows 10, and I'm trying to modify that field from Workbench. After that first attempt, I can't drop a table either, nor can I connect from Python.
What is happening?
OK, apparently what was happening is that I had a lock, and there were a lot of queries stuck in the "Waiting for table metadata lock" state.
I ran the following in the Workbench query console:
SELECT CONCAT('KILL ', id, ';') FROM information_schema.processlist WHERE user = 'root';
That generates a list of KILL statements for all of those processes. I copied that list into a new tab and executed the whole batch, killing the processes en masse. After that it worked again.
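For what it's worth, the same clean-up can be scripted instead of copy-pasting the generated KILL list. This is a minimal sketch assuming the mysql-connector-python package and root credentials; the host and password are placeholders.

import mysql.connector

cnx = mysql.connector.connect(host="localhost", user="root", password="secret")
cur = cnx.cursor()

# Find the connections stuck waiting on the table metadata lock.
cur.execute(
    "SELECT id FROM information_schema.processlist "
    "WHERE state = 'Waiting for table metadata lock'"
)
stuck_ids = [row[0] for row in cur.fetchall()]

for pid in stuck_ids:
    try:
        cur.execute(f"KILL {pid}")
    except mysql.connector.Error as err:
        print(f"Could not kill {pid}: {err}")  # the process may already be gone

cnx.close()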
Can anybody explain to me how I arrived at that situation and what precautions I should take in my Python scripts to avoid it?
Thank you.
This is just a general question, not too technical. We have a use case in which we need to load hundreds of thousands of records into an existing Neo4j database. We cannot afford to take the database offline, because users are accessing it. I know that Neo4j requires an exclusive lock on the database while it's performing batch updates. Is there a way around my problem? I don't want to lock my database while doing updates. I still want my users to access it, even if only for read-only access. Thanks.
Neo4j never requires an exclusive lock on the whole database. It selectively locks only the portions of the graph that are affected by mutating operations. So there are some things you can do to achieve your goal. Are you a Neo4j Enterprise customer?
Option 1: If so, you can run your batch insert on the master node and route users to slaves for reading.
Option 2: Alternatively, you could do a "blue-green" style deployment where you:
take a backup (B) of your existing database (A), then mark the A database read-only
apply your batch inserts onto B, either by starting a separate instance or, even better, by using BatchInserters. That way, you'll insert your hundreds of thousands of records in a few seconds
start the new database B
flip a switch on a load balancer so that users start to be routed to B instead of A
take A down
(Please let me know if you need some tips on how to make a DB read-only.)
Option 3: If you can only afford to run one instance at any one time, then there are techniques you can employ to let your users access the database as usual and still insert large volumes of data. One of them could be using a single-threaded "writer" with a queue that batches write operations. Because one thread only ever writes to the database, you never run into deadlock scenarios and people can happily read from the database. For option 3, I suggest using GraphAware Writer.
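Here is a minimal sketch of the single-writer pattern from option 3, using the official neo4j Python driver rather than GraphAware Writer (which is a Java framework). The URI, credentials, label and Cypher are placeholders, and execute_write is called write_transaction in older driver versions.

import queue
import threading
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))
write_queue = queue.Queue()

def insert_batch(tx, rows):
    tx.run(
        "UNWIND $rows AS row CREATE (:Record {id: row.id, name: row.name})",
        rows=rows,
    )

def writer_loop():
    # Only this thread ever writes, so concurrent writers cannot deadlock each other.
    with driver.session() as session:
        while True:
            batch = write_queue.get()
            if batch is None:  # sentinel value used to shut the writer down
                break
            session.execute_write(insert_batch, batch)
            write_queue.task_done()

threading.Thread(target=writer_loop, daemon=True).start()

# Producers (e.g. your import job) just enqueue batches; readers keep using the DB as usual.
write_queue.put([{"id": 1, "name": "alpha"}, {"id": 2, "name": "beta"}])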
I've assumed you are not trying to insert hundreds of thousands of nodes to a running Neo4j database using Cypher. If you are, I would start there and change it to use Java APIs or the BatchInserter API.
I have started using the TFS Integration Tools to migrate work items from one TFS 2010 project to another team project within the same collection. After some small trial runs and modifications to the field and value mappings, I started a migration of our entire product backlog. Approximately 170,000 change groups were discovered and analysis started. However, during the analysis the connection to the TFS server was lost, so the migration had to be restarted. After the restart, approximately 340,000 change groups were identified (roughly double), without any significant changes having been made to the work items in the backlog.
Has anyone experienced a similar problem, or is anyone aware of settings or changes that can be made in the tool to limit this increase in change groups? The amount of time taken to analyse so many groups is causing the migration to take much longer than initially expected.
After several runs, I found out that the count appears to be a running total, so, logically enough, when I experienced a break in the connection, all change groups had to be re-analysed, causing the "doubling" of change groups.
I'm working on a project in which we have two versions of an MVC app: the live version and the dev version. I've made changes to the dev version and added tables, data, etc.
Is there any way to migrate these changes to the live version without losing all the data (i.e. without simply regenerating the database)?
I've already tried just rebuilding the database, but we lose all the data that was previously stored (since we are essentially deleting the old database and rebuilding it).
tl;dr
How do I migrate my dev version of an MVC app, along with any new tables, to the live version of the MVC app, which is missing those models and tables?
Yes, it is possible to migrate your changes from your dev instance to your production instance; to do so you must create SQL scripts that update your production database with the changes. This may be accomplished by manually writing the scripts or by using tools to generate the scripts for you. However you go about it, you will need scripts to update your database (well, you could perform manual updates via the tooling of your database, but this is not desirable, as you want the updates to occur in a short time window, and you want this to happen reliably and repeatably).
The easiest way I know of to do this is to use tools like SQL Compare (for schema updates) or SQL Data Compare (for data updates). These are from Redgate, but they cost a fair bit of money. They are well worth their price, and most companies I've worked with are happy to pay for licenses; however, you may not want to shell out for them personally. To use these tools, you connect them to the source and destination databases, and they analyze the differences between the databases (schema or data) and produce SQL scripts. These scripts may then be run manually or by the tools themselves.
Ideally, when you work on your application, you should be producing these scripts as you go along. That way when it comes time to update your environments, you may simply run the scripts you have. It is worth taking the time to include this in your build process, so database scripts get included in your builds. You should write your scripts so they are idempotent, meaning that you can run them multiple times and the end result will be the same (the database updated to the desired schema and data).
One way of managing this is to create a DBVersions table in your database. This table records every update script that has been run. For example, you could have a table like the following (this is SQL Server 2008 dialect):
CREATE TABLE [dbo].[DBVersions] (
[CaseID] [int] NOT NULL,
[DateExecutedOn] [datetime] NOT NULL,
CONSTRAINT [PK_DBVersions] PRIMARY KEY CLUSTERED (
[CaseID] ASC
)
) ON [PRIMARY]
CaseID refers to the case (or issue) number of the feature or bug that requires the SQL update. Your build process can check this table to see whether a script has already been run and, if not, run it. This is useful if you cannot write your scripts in a way that allows them to be run more than once. If all your scripts can be run an unbounded number of times, then this table is not strictly necessary, though it can still be useful to avoid running a large number of scripts every time a deployment is done.
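As an illustration of such a build step, here is a minimal sketch of a runner that consults the DBVersions table before applying each script. It assumes the pyodbc package, Windows authentication, and one batch per script file (no GO separators); the server name, database name, script paths and case IDs are all placeholders.

import pyodbc

# Ordered list of (CaseID, script path) pairs maintained alongside the application code.
SCRIPTS = [
    (101, r"scripts\101_add_orders_table.sql"),
    (102, r"scripts\102_backfill_order_status.sql"),
]

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=YourDbServer;DATABASE=YourAppDb;Trusted_Connection=yes",
    autocommit=True,
)
cursor = conn.cursor()

for case_id, path in SCRIPTS:
    cursor.execute("SELECT 1 FROM dbo.DBVersions WHERE CaseID = ?", case_id)
    if cursor.fetchone():
        continue  # this script has already been applied to this environment

    with open(path, encoding="utf-8") as f:
        cursor.execute(f.read())

    cursor.execute(
        "INSERT INTO dbo.DBVersions (CaseID, DateExecutedOn) VALUES (?, GETDATE())",
        case_id,
    )

conn.close()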
Here are links to the Redgate tools. There may be many other tools out there, but I've had a very good experience with these.
http://www.red-gate.com/products/sql-development/sql-compare/
http://www.red-gate.com/products/sql-development/sql-data-compare/
It depends on your deployment strategy, and this is more a workflow that your team needs to embrace. Regenerating the live database from scratch can take a while, depending on how big the database is, and I don't see a need for it in most scenarios.
You only need to separate the database schema object scripts from the data row scripts. The live database version should have its schema objects scripted out and stored in a repository. When developers work on new functionality, they make their changes against the database scripts in the repository; if they need to change data rows, they also check the data row scripts into the repository. On each daily deployment, the live database can be compared against what is checked into the repository and updated to bring it in sync.
On our side, we use tools such as Redgate's SQL Compare and SQL Data Compare to migrate the database from the dev version to our intended target version.