My question is similar to this question, but in my case I am wondering whether two instances of the SAME version of TFS (2013) can share a data layer. I cannot find any documentation stating that it is OK or, alternatively, that it will cause issues.
I am wondering if this would be a viable option for testing the setup of a new server instance, but I do not know what issues two TFS instances connected to the same data layer / database may cause (or if it's even possible).
The immediate answer to your question is, "Yes". However, it's not what you want. You can configure multiple application tiers pointing to the same database; that's how you enable high availability/failover scenarios.
For testing purposes, you should create a second set of infrastructure.
The documentation is here: https://learn.microsoft.com/en-us/azure/devops/server/install/install-2013/config-tfs-advanced?view=tfs-2013 and what you are talking about is the "Server Database Label". It mentions:
In Server Database Label, type a label string, which is then embedded into all three of the default database names.
This technique enables you to use a single instance of SQL Server to host multiple configuration databases.
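To make that concrete (the exact naming pattern here is from memory, so treat it as illustrative; the configuration wizard previews the resulting names): entering a label of Test should give you databases along the lines of

Tfs_TestConfiguration
Tfs_TestWarehouse
Tfs_TestAnalysis

so they can sit alongside an existing instance's unlabeled Tfs_Configuration, Tfs_Warehouse, and Tfs_Analysis on the same SQL Server without colliding.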
As others have mentioned, this might not be a good idea for a test environment, though.
I want to present release data complexity, which is associated with each node (at the epic, user story, etc. level), in Grafana in the form of charts, but Grafana does not support the Neo4j database. Is there any way, direct or indirect, to present Neo4j data in Grafana?
I'm having the same issues and found this question among others. From my research I cannot completely agree with this answer, so I felt I should point some things out here.
Just to clarify: a graph database may seem structurally different from a relational or time series database, but it is possible to build Cypher queries that return graph data as tables with proper columns, just as with any other supported data source. Therefore this sentence of the above-mentioned answer:
So what you want to do is just not possible.
is not absolutely true, I'd say.
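To illustrate, here is a minimal Ruby sketch (the labels, properties, URL, and credentials are all made up; it assumes the transactional HTTP endpoint of Neo4j 2.x/3.x): it runs a Cypher query that returns plain rows and columns, exactly the tabular shape a datasource plugin would need.

require 'net/http'
require 'json'
require 'uri'

# Aggregate graph data into rows and columns, much as SQL would.
cypher = <<CYPHER
MATCH (e:Epic)-[:CONTAINS]->(s:UserStory)
RETURN e.name AS epic, count(s) AS stories, avg(s.complexity) AS avg_complexity
CYPHER

uri = URI('http://localhost:7474/db/data/transaction/commit')
req = Net::HTTP::Post.new(uri, 'Content-Type' => 'application/json')
req.basic_auth('neo4j', 'secret')  # hypothetical credentials
req.body = { statements: [{ statement: cypher }] }.to_json

res    = Net::HTTP.start(uri.hostname, uri.port) { |http| http.request(req) }
result = JSON.parse(res.body)['results'].first
puts result['columns'].inspect                     # => ["epic", "stories", "avg_complexity"]
result['data'].each { |d| puts d['row'].inspect }  # one array per row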
The actual problem is that there is no datasource plugin for Neo4j available at the moment. You would need to implement one on your own, which will be a lot of work (as far as I can see), but I suspect it is possible. For me at least, this would be too much work, so I won't be reading data directly from Neo4j into Grafana.
As a (possibly dirty) workaround in my case, a service will regularly copy relevant portions of the Neo4j graph into a relational database (or a time series database, if the data model is simple enough for that), which Grafana knows how to query (see datasource plugins). This is basically the replication idea also given in the above-mentioned answer. You obviously end up with at least two different database systems and an additional service, which is not exactly great, but at the moment it seems to be the quickest way around the missing datasource plugin. Maybe this is applicable in your case, too.
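In case it helps, a hypothetical Ruby sketch of such a copy job (the EpicStat model, the epic_stats table, and the row shape are all invented; the rows could be the 'row' arrays from the HTTP call sketched above):

# ActiveRecord model for the relational side that Grafana queries.
class EpicStat < ActiveRecord::Base
  # table epic_stats: epic (string), stories (integer), avg_complexity (float)
end

# Full refresh: replace the table contents with the latest Neo4j snapshot.
def replicate_epic_stats(rows)
  EpicStat.transaction do
    EpicStat.delete_all
    rows.each do |epic, stories, avg_complexity|
      EpicStat.create!(epic: epic, stories: stories, avg_complexity: avg_complexity)
    end
  end
end

# Schedule with cron, the whenever gem, Sidekiq, or similar on a fixed interval.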
Using Neo4j's Graphite metrics integration you can actually configure data to be sent to Grafana, and from there build whichever dashboards you like.
Up until recently, Graphite/Grafana wasn't supported, but it is now (in the recent 3.4 series releases), along with Prometheus and other options.
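For reference, the relevant neo4j.conf settings look roughly like this (setting names as I remember them from the 3.x metrics documentation; verify against your version, and note that the metrics subsystem is an Enterprise feature):

metrics.prefix=neo4j-prod
metrics.graphite.enabled=true
metrics.graphite.server=my-graphite-host:2003
metrics.graphite.interval=60s

Grafana then reads these series from Graphite like any other metric.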
Update July 2021
There is a new plugin called Node Graph Panel (currently in beta) that can visualise graph structures in Grafana. A prerequisite for displaying your graph is to make sure that you have an API that exposes two data frames, one for nodes and one for edges, and that you set frame.meta.preferredVisualisationType = 'nodeGraph' on both data frames. See the Data API specification for more information.
So, one option would be to set up an API around your Neo4j instance that returns the nodes and edges according to the specification above. Note that I haven't tried it myself (yet), but it seems like a viable solution for getting Neo4j data into Grafana.
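If it helps, here is a hypothetical Sinatra sketch of such an API. The field layout follows what the Node Graph panel expects as far as I can tell (id/title for nodes, id/source/target for edges), and the hardcoded data stands in for whatever you pull out of Neo4j:

require 'sinatra'
require 'json'

# Nodes data frame: one element per graph node.
get '/api/graph/nodes' do
  content_type :json
  [
    { id: 'epic-1',  title: 'Checkout rewrite' },
    { id: 'story-7', title: 'Add PayPal support' }
  ].to_json
end

# Edges data frame: one element per relationship.
get '/api/graph/edges' do
  content_type :json
  [{ id: 'e1', source: 'epic-1', target: 'story-7' }].to_json
end

You would still need a datasource that maps these responses onto the two data frames (and sets the preferredVisualisationType meta field), e.g. a JSON datasource plugin or a small backend plugin.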
Grafana supports these data sources, but not Neo4j: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch.
So what you want to do is just not possible.
You can replicate your Neo4j data into one of those databases, but the data models are really different (time series vs. graph).
If you just want to have some charts, you can use Apache Zeppelin for that.
I am using TFS 2012 with a SQL Server 2008 R2 database. My company has multiple clients, each using one of three different versions of an application, and each version applies a different version of stored procedure, view, and table database objects (the names are hardcoded with the version number) in a single database.
I am new to this company and to this multi-version approach, all of it deployed to production. Does TFS, or TFS with Microsoft ALM Rangers guidance, support multi-branch production versioning that does not require multiple physical database copies, each with its own version? If so, please also provide a link to some good documentation or literature I can review in greater detail.
I am concerned about the complexity of ultimately merging branches in the future. My preference is to keep a single database and keep the physical object names consistently generic (xxxxx, not xxxx_v1_2 or something like that). I did check other questions on Stack Overflow and other sources, but could not find a clear answer to my question.
Thanks in advance for your help.
What is Team Foundation Server?
Team Foundation Server provides a set of collaborative software development tools that integrate with your existing IDE or editor, thus enabling your cross-functional team to work effectively on software projects of all sizes.
Your question is more related to database architecture:
A) Applying a different version of stored procedure, view, and table database objects (names hardcoded with a version number) in a single database.
B) Holding multiple physical database copies. For a single database, keeping the physical object names consistently generic.
Both of these schemes should be supported with TFS. TFS just needs to be configured with the SQL Server instance holding the database; even configuring TFS with databases spread across different SQL Server instances is supported. As for which one is better, it depends on the actual situation of your company.
I'm quite new to Rails, but in my current assignment I have no choice but to use RoR. My problem is that in my app I would like to create, connect to, and destroy databases automatically on user demand, and as far as I understand it is quite hard to accomplish this with ActiveRecord. It would be nice to hear some advice from more experienced RoR developers on this issue.
The problem in detail:
I have a main database (which I access with ActiveRecord). In this database I store a list of my active programs (and some template data for creating new programs). I would like to create a separate database for each of these programs (when a user creates a new program in my app).
In each program's database I would like to store the state and basic info of that particular program, plus a huge amount of program-related data (which is used to calculate the state and must be kept for audit reasons).
My problem is that, for example, I want a dashboard listing all the active programs and their state data. So first I have to get the list from my main db, and after that I have to connect to all the required program databases and fetch the state data.
My question is what is the best practice to accomplish this? What should I use (ActiveRecord, a particular gem, etc.)?
Hi, thanks for your answers so far. I would like to add a couple of details to make my problem clearer:
First of all, I'm not confusing database and table. In my case there is a tool which processes log files. It's a legacy tool (written in Ruby 1.8.6), and before running it I have to run an SQL script which creates a database with prefilled tables, and also some empty ones, for the tool. The tool then processes the logs and inserts the calculated data into different tables in this database. The catch is that the new system should support running programs in parallel, which means I have to create different databases for different programs. (This was not an issue so far, while the tool was configured by hand before each run, but now the configuration must be done automatically by my tool.) There is no way of changing the legacy tool, as that would be too complicated in the given time frame; it is also a validated tool. So this is the reason I cannot use different tables for different programs: my solution has to be built around another tool.
Summing my task up:
I have to create a complex tool using RoR and Ruby 2.0.0 which:
- creates a specific database for the legacy tool every time a user wants to start a new program
- configures this old tool on a daily basis to process the required logs and insert the calculated data into the appropriate database
- accesses these databases and shows dashboards based on their data
The database I'm using is MySQL.
I cannot use another framework, because the future owner of my tool won't be able to manage/change/update it. So I have to go with RoR, which is quite painful for me right now, and I really hope some of you guys can give me a little guidance.
Ok, this is certainly outside of the typical use case scenario, BUT it is very doable within Rails and ActiveRecord.
First of all, you're going to want to execute some SQL directly, which is fine, but you'll also have to take extra care (for instance, if you're using user input to determine the name of the new database) and do your own escaping. (Or use one of ActiveRecord's lower-level escaping methods that we normally don't worry about.) The basic idea, though, is something like:
create_sql = <<SQL
CREATE TABLE foo (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255)
)
SQL
ActiveRecord::Base.connection.execute(create_sql)
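And for the escaping mentioned above, the connection exposes quoting helpers; a small sketch with a user-supplied name (params and the naming scheme are hypothetical):

# Quote a user-controlled identifier before interpolating it into SQL.
db_name = "program_#{params[:program_id]}"   # hypothetical user input
conn    = ActiveRecord::Base.connection
conn.execute("CREATE DATABASE #{conn.quote_table_name(db_name)}")

quote_table_name backtick-quotes identifiers on MySQL; plain values (as opposed to identifiers) go through conn.quote instead.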
Although now that I look at ActiveRecord::ConnectionAdapters::Mysql2Adapter, there's a #create_database method that might help you.
The next step is actually doing different things in the context of different databases. The key there is ActiveRecord::Base.establish_connection. Using that, and passing in the params for the database you just created, you should be able to do what you need for that particular db. If the dbs weren't being created dynamically, I'd put that line at the top of a standard ActiveRecord model so that that model would always connect to that db instead of the main one. If you want to use the same class and connect it to different dbs (one at a time, of course), you would probably call remove_connection before calling establish_connection for the next one.
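In code, the whole flow might look something like this (the adapter params, class names, and database names are all illustrative):

# 1. Create the program's database (Mysql2 adapter).
ActiveRecord::Base.connection.create_database('program_42_db')

# 2. A dedicated abstract base class bound to that database.
class ProgramRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection(
    adapter:  'mysql2',
    host:     'localhost',
    username: 'app',
    password: 'secret',
    database: 'program_42_db'
  )
end

# 3. Models for the legacy tool's tables inherit the connection.
class ProgramState < ProgramRecord
  self.table_name = 'states'
end

ProgramState.first  # reads from program_42_db, not the main database

# To point the same classes at another program's database later:
# ProgramRecord.remove_connection
# ProgramRecord.establish_connection(adapter: 'mysql2', database: 'program_43_db', ...)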
I hope this points you in the right direction. Good luck!
I want to know the best practices for the creation of features.
Normally the Visual Studio extension creates a feature for each web part.
Is that good practice, or should we create one feature for multiple web parts in one WSP?
I don't know of any best practice, but I can see two ways of looking at it:
When you separate your webparts into several features, you have the possibility to activate/deactivate the different webparts at will. If one webpart has an error, you can just deactivate it. When one webpart fails to compile, you still have the others running smoothly.
The downside is that you "clutter" the SharePoint interface, because you have to manage several features instead of one. That goes for activating/deactivating as well as deploying/retracting.
If you have one feature, it is all of the above, just in reverse. You only have one feature to activate/deactivate, which makes it faster to manage. But if that one feature fails in some way (or any of the webparts within it does), you can only deactivate the whole thing. The same goes for deployment/retraction: when one webpart within your feature fails, you have to retract the whole thing.
Whether development is easier or harder depends on your preference. One might say that it is harder to keep a consistent configuration in one huge feature deploying a multitude of webparts, workflows and master pages (where was the entry for that workflow again? ah yes, in line 1112); on the other hand, you have everything in one place and don't have to search through several features.
I would really leave it up to your personal preference. When you are deploying a solution to a customer, the customer is certainly happier to click/install/deploy the "MyCompany Super Solution Feature" instead of several smaller ones; in the end, you don't install MS Word with several setup.exe's (and then again, you can choose which features of Word to install...).
It basically depends upon your requirements.
By the way, this problem is resolved by the VS 2010 extensions.
After much reading on Ruby on Rails and multiple database connections, it seems that I have found something that not that many folks do, at least not with RoR. I am used to querying many different databases and schemas and pulling back the information, either for a report or for one seamless page, so that a user doesn't have to log on to several different systems. I can create one or two web pages that bring all the systems together.
Is that not a normal occurrence in the web and database driven design?
EDIT: Is this because most all my original code is in classic asp?
I really honestly think that most ORM designers don't take into account that users may want to access more than one database. This seems to be a pretty common limitation in the ORM universe.
Our client website runs across 3 databases, so I do this too. Actually, I'm condensing everything into views off of one central database, which then connects to the others.
I never considered this to be "normal" behavior though. I would guess that most of the time you would be designing for one system and working against that.
EDIT: Just to elaborate, we use Linq to SQL for our data layer and we define the objects against the database views. This way we keep reports and application code working off the same data model. There is some extra work setting up the Linq entities, because you have to manually define primary keys and set up associations... however so far it has definitely proven worthwhile. We tried to do so with Entity Framework, but had a lot of trouble getting the relationships set up appropriately and had to give up. The funny thing is I had thought Entity Framework was supposed to be designed for more advanced scenarios like ours...
It is not uncommon to hit multiple databases during a single part of an application's workflow. However, in every instance that I have done it, this has been performed through several web service calls, which among other things wrap the databases in question.
I have not, to my knowledge, ever had a need to hit multiple databases directly at once and merge results into a single report.
I've seen this kind of architecture in corporate portals, where lots of data is pulled in via different data sources. The whole point of a portal is to bring siloed systems together; users might not want to use lots of systems in isolation (especially if they have to sign into each one). In that sort of scenario it is normal, particularly if it is a large company that has expanded rapidly and has a large number of heterogeneous systems.
In your case, whether this is the right thing to do depends on why you have these separate DBs.
With ORMs it may be a little difficult; however, it can be done. Pull the objects as needed from the various databases, then use them as a composite to create the new object that is actually desired. If you can skip the ORM part of the process, you can query the databases directly and build your object from the results.
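In ActiveRecord terms (since the question is about RoR), that composite idea could look roughly like this; all model, connection, and column names here are invented:

# One model hierarchy bound to a secondary database via a named database.yml entry.
class BillingRecord < ActiveRecord::Base
  self.abstract_class = true
  establish_connection :billing
end

class Invoice  < BillingRecord; end          # lives in the billing database
class Customer < ActiveRecord::Base; end     # lives in the primary database

# A plain composite object assembled from both sources.
CustomerReport = Struct.new(:name, :invoice_total)

def build_report(customer_id)
  customer = Customer.find(customer_id)
  total    = Invoice.where(customer_id: customer_id).sum(:amount)
  CustomerReport.new(customer.name, total)
end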
Pulling data from two databases and compiling a report is not uncommon, but because cross-database queries cannot be optimized by the query engine of either database, OLTP systems typically use a single database, to keep the application performant.
If you are building the system from the ground up, it is not advisable to do it this way. If you are working with a system you didn't design, there is not much choice, and it is not uncommon (that is the difference between "organic" and "planned" growth).
Not counting master and various test instances, I hit nine databases on a regular basis. Yes, I inherited it, and yes, "Classic" ASP figures prominently. Of course, all the "brillant" designers of this mess are long gone. We're replacing it with things more sane as quickly as we safely can.
I would think that if you're building a new system and keep adding databases, by the time you get to two or three it's probably time to re-think your design. OTOH, if you're aggregating data from multiple, disparate systems, then no, it's not that strange. Depending on the timeliness you need, your budget for throwing hardware at the problem, and whether your data is mostly static, this would be a good scenario for a "reporting server" that pulls the data down from the live server periodically.