How is source code stored in SQL Server for TFS 2010? Is it possible to see it by digging into the database?
Versions of checked-in files are indeed stored in the database, which is basically just a simple blob store containing a mix of complete version-controlled files and "deltas" between them.
That is to say that the server will occasionally store the differences between two versions of the files using a binary delta algorithm. For example, for a file $/Project/File.txt, version 1 may be stored intact but version 2 may be stored as the delta from version 1. When a client requests version 2 of $/Project/File.txt, the file may be reassembled from deltas before delivery.
The database is intended to be treated as an opaque data store, and reading it directly is generally not supported. To interact with version control programmatically, use the rich APIs available for communicating with Team Foundation Server, either from .NET or from Java.
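For example, here is a rough sketch of fetching a file through the TFS SDK for Java (the collection URL, credentials and paths are placeholders, and the method names should be checked against the SDK samples; the .NET client API offers the same operations):

    import java.net.URI;

    import com.microsoft.tfs.core.TFSTeamProjectCollection;
    import com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient;
    import com.microsoft.tfs.core.clients.versioncontrol.soapextensions.Item;
    import com.microsoft.tfs.core.httpclient.UsernamePasswordCredentials;

    public class GetFileFromTfs {
        public static void main(String[] args) throws Exception {
            // Placeholder collection URL and credentials.
            TFSTeamProjectCollection tpc = new TFSTeamProjectCollection(
                    new URI("http://tfs:8080/tfs/DefaultCollection"),
                    new UsernamePasswordCredentials("user", "password"));
            try {
                VersionControlClient vc = tpc.getVersionControlClient();

                // Ask the server for the latest version of the file; the server
                // reassembles it from any stored deltas before sending it back.
                Item item = vc.getItem("$/Project/File.txt");
                item.downloadFile(vc, "C:\\temp\\File.txt");
            } finally {
                tpc.close();
            }
        }
    }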
I have a graph working fully with the GDS plugin locally in Neo4j Desktop. I've replicated everything from that graph in my GrapheneDB instance, but I can't use the gds procedures because I get this error:
gds.proc... is unavailable because it is sandboxed and has dependencies outside of the sandbox. Sandboxing is controlled by the dbms.security.procedures.unrestricted setting. Only unrestrict procedures you can trust with access to database internals.
I know to fix this I need to add these two lines to the config/properties file:
dbms.security.procedures.unrestricted=apoc.*,gds.*
dbms.security.procedures.whitelist=apoc.*,gds.*
I just don't know how to do that on GrapheneDB; I've read all the docs I can find.
I've tried adding the GDS plugin by uploading the jar file on its own as a stored procedure, and also as a server extension with a zip file containing both the jar file and the two config lines mentioned above in a neo4j-server.properties file.
When added as a server extension, I can tell Neo4j hasn't found the GDS plugin at all. Am I just missing a location in the properties file? Or am I missing something obvious in the stored procedure upload method?
I'm using the GrapheneDB dev free tier, Neo4j Community Edition 3.5.17 and Graph Data Science 1.1.1.
Thanks
After a couple of weeks back and forth with GrapheneDB support, the config changes have been made. They will be adding support for the GDS plugin as part of their base image soon, but until then you may still need to request that they patch your db for you and add it as a stored procedure.
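Once the patch is applied, a quick way to confirm the plugin is actually registered (a simple check, assuming Neo4j 3.5, where dbms.procedures() is available) is to list the gds procedures:

    CALL dbms.procedures() YIELD name
    WHERE name STARTS WITH 'gds.'
    RETURN name

If that returns no rows, the jar still isn't being picked up.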
Is it possible that MSSCCI makes VFP project loading slow? The project has 1000+ files, and the workspace is on a server. The project takes 120+ seconds to load. Network traffic is high during loading, while CPU and memory show no significant change. How can I optimize project loading?
SOLUTION:
No, it seems that the slow loading is a consequence of using the MSSCCI provider with fairly large source-controlled VFP projects.
We looked into moving from Visual SourceSafe to TFS a few years ago. When the VFP project was integrated with TFS, opening the project took longer than with VSS. There were also other oddities with the integration, such as not being able to see when a file was already checked out by someone else. We ended up abandoning the idea and stuck with VSS. That said, I wouldn't necessarily blame the MSSCCI provider. It probably has more to do with the way VFP queries source control data.
Note that you are not required to use the VFP project integration. You can use a separate source control client to check files in/out. You'll need a process for generating text versions of binary files (SCX, VCX, etc.).
FWIW, opening projects with VSS can also be slow. Upgrading our VSS server made a big difference. You may find the same if you are running TFS on an older/slower server.
I am not using that integration, so I cannot comment on it directly.
A VFP project is merely a table, and a project with 1000+ files would mean roughly 2 MB, which is nothing for today's networking (even if it meant bringing down all that data). Normally it should open instantly, or with a 1-2 second delay at most (assuming you are not on an extremely slow network).
Please provide more details about your environment.
Make sure your TFS and MSSCCI provider are both the latest versions.
Try another client machine to see whether the issue reproduces there.
Create a new workspace to see whether the slowness persists (see the command sketch below).
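For example, a throwaway workspace can be created and mapped from the command line (the workspace name, local path and collection URL below are placeholders):

    tf workspace /new VfpTest /collection:http://tfsserver:8080/tfs/DefaultCollection
    tf workfold /map $/MyVfpProject C:\src\MyVfpProject /workspace:VfpTest
    tf get C:\src\MyVfpProject /recursive

If opening the project from the fresh workspace is fast, the original workspace is the likely culprit.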
I’m a bit confused about the “Character Encoding” in Rational Team Concert, and I’m having trouble with UTF-8 encoded files that are now stored in RTC. (I never had any trouble with these files before.)
The “Character Encoding” shows up in the Eclipse client (at least) here:
File Compare.
Jazz SCM Properties.
The “Character Encoding” is not displayed in the Visual Studio RTC client, at least I could not find it. (Of course, VS has its own ways to display and change encoding of files, but these are independent of RTC.)
I saw several files that are version controlled with MIME type text/plain and that have a different “Character Encoding” for nearly every revision, sometimes switching back and forth between UTF-8 and Cp1252. Usually, only a few lines in a large file are changed.
It seems to me that automatic merge with the Visual Studio client regularly, but not always, gets confused by the encoding and/or byte order marks and changes non-7-bit-ASCII characters. I cannot reproduce this.
I learned several things from a good answer:
Encoding isn’t stored on the server, it is client-only.
scm set property file.encoding sets a user property (and this can even be set to an arbitrary value such as foo). However:
As far as I can see, file.encoding is completely ignored by Visual Studio, although this doc says:
To change the encoding for files that are checked in from the CLI or Rational Team Concert Client for Microsoft Visual Studio IDE, run scm set property [...] Example: scm set property file.encoding UTF-8 path/to/file.
tl;dr: My question is: Is this “Character Encoding” and/or “file.encoding” of any relevance, and if yes, what is it used for?
According to the FAQ, it is used by an RTC client (Eclipse or VS) during the check-in phase.
If the encoding specified there differs from the one actually used in the file you want to check in, there will be an error:
Basically, there is a text file that Jazz attempted to read when checking in the project contents that it could not because the content does not adhere to the encoding rules. The error message should provide you with the name of file that caused the problem.
Within Eclipse, you have a default encoding for text files.
To see what it is, from the menu bar select Window > Preferences > General > Workspace.
If this is not the encoding for most of your text files, you should change it here.
When working within a team you should decide upon a common encoding that you and your team will use. That encoding should also be available on the server (for annotate to work well). You will need to communicate with the rest of your team what the encoding is.
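If you want that choice to travel with the project rather than depend on each developer's workspace setting, Eclipse can also record a project-specific encoding in a small preferences file that can be versioned along with the source. A sketch of what that file typically looks like (the encoding/&lt;project&gt; key is written literally by Eclipse when you set a project default encoding):

    # <project>/.settings/org.eclipse.core.resources.prefs
    eclipse.preferences.version=1
    encoding/<project>=UTF-8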
Lazarus generates three file types for projects - *.lpr, *.lpi and *.lps. The first two are necessary.
Should I keep *.lps files in version control system or should I include *.lps files in global ignore list?
IMO, no, if you are not sharing the projects. According to the FAQ, the lps files are "Lazarus Program Session - Personal data like cursor positions, source editor files, personal build modes. Stored in XML".
This is old, but as I am starting to use hg, I had the same question.
It seems best to NOT store .lps files in version control systems.
References:
http://wiki.freepascal.org/File_extensions
https://github.com/github/gitignore/blob/master/Global/Lazarus.gitignore
(Also wiki.freepascal.org/file_types and forum.lazarus.freepascal.org/index.php?topic=9298.0)
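For reference, a minimal ignore list along the lines of the linked Lazarus.gitignore (the same patterns work in an .hgignore with glob syntax; backup/ and lib/ are Lazarus's default backup and unit output folders, so adjust if your project options differ):

    *.lps
    *.compiled
    backup/
    lib/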
Suppose I have three separate applications called MyPasswordManager, MyToolManager and MyMovieManager. Each of these applications uses a Firebird Embedded database.
If a customer buys all three of my applications, installs them on his/her computer, and has all three running at the same time, what happens?
Will the Firebird DLLs have conflicts? What do you do in this situation?
If you put the Firebird DLLs in the application folder (where the .exe is), there won't be a problem, since this is the first place where your application will look for them.
You just have to make sure that the applications each install to their own folder if you want to use different versions of the DLLs.
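A rough sketch of what each application's folder might look like (the database file name is just an example; the exact set of embedded-engine files is listed in readme_embedded.txt, and for FB 2.5 it is roughly the following):

    C:\Program Files\MyPasswordManager\
        MyPasswordManager.exe
        passwords.fdb
        fbembed.dll
        firebird.msg
        ib_util.dll
        icudt30.dll, icuin30.dll, icuuc30.dll
        intl\fbintl.dll

The other two applications ship the same engine files in their own folders, so each process loads its own private copy.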
Cape, you really ought to read the "readme_embedded.txt" file in the doc directory - it has all the answers you're looking for. Some relevant quotes (for the FB 2.5 version):
2.2. Database access
The database file can be accessed by multiple client programs. The database consistency in this case is guaranteed internally (by the shared lock table).
2.4. Compatibility
You may run any number of applications with the embedded server without any conflicts. Having IB/FB server running is not a problem either.
Have you tested it on your dev machine? I think just putting the apps and the DLLs in separate folders, one for each app, could work. Maybe renaming the DLLs with different names could work too.