The Comparison.Compare process completes without throwing, but an error count of 1 is written into the (ICompareResult) compareResult. I think the error is caused by the license or the products. The same process works properly when I run it on a single-page project. For the complex project, the second file given for comparison (the target) is simply returned as the result.

- single-page project: compareResult returns the comparison of sourcePath against targetPath
- complex project: compareResult returns only the targetPath document

The products used:
GroupDocs.Viewer => **18.8.0.0**
GroupDocs.Comparison => **18.7.1.0**
<!-- language: c# -->
GroupDocs.Comparison.Common.ICompareResult compareResult = null;
GroupDocs.Comparison.Comparer comparer = new GroupDocs.Comparison.Comparer();
compareResult = comparer.Compare(sourcePath, targetPath, objComparisonSettings);
> I think the error is caused by the license or products
Please note that if you are evaluating the API in trial mode (without applying the license) you may face limitations. Documents with more than 3 pages are not supported in trial mode.
However, you can request a temporary license here. Follow the wizard, and at step 5 you can obtain it.
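For completeness, a minimal sketch of applying a license before comparing, assuming the usual GroupDocs pattern of a License class with a SetLicense method (the exact class and namespace in 18.7 may differ, so check it against your assemblies):

<!-- language: c# -->

// Hypothetical sketch: apply the license once at application start-up, before any Compare call,
// so the trial limits (such as the 3-page cap) no longer apply. The class name and the license
// path below are assumptions, not taken from the original post.
GroupDocs.Comparison.License license = new GroupDocs.Comparison.License();
license.SetLicense(@"GroupDocs.Comparison.lic");

GroupDocs.Comparison.Common.ICompareResult compareResult =
    new GroupDocs.Comparison.Comparer().Compare(sourcePath, targetPath, objComparisonSettings);

With a valid license applied, multi-page comparisons should no longer hit the trial limitation described above.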
I reduced the level of comparison
We are using the BuildHttpClient to programmatically create a copy of a build definition, update the variables in memory and then save the updated object as a new definition.
I'm using Microsoft.TeamFoundation.Build2.WebApi.BuildHttpClient 16.141. The TFS version is 2017 Update 3 (REST API 3.x).
This is a similar question to https://serverfault.com/questions/799607/tfs-buildhttpclient-updatedefinition-c-example but I'm trying to stay within using the BuildHttpClient libraries and not go directly to the RestAPIs.
The problem is the Steps list is always null along with other properties even though we have them in the build definition.
**UPDATE**: Posted as an answer below.
After looking at @Daniel Frost's attempt below, we started looking at older versions of the NuGet package. Surprisingly, the supported version 15.131.1 does not support this, but we found that version 15.112.0-preview does.
After rolling back all of our DLLs to match that version, the steps were cloned when saving the new copy of the build.
All of the code examples we used work when you are using this package. We were unable to get Daniel's example working, but the version of the DLL was the issue.
We need to create a GitHub issue and report it to MS.
First Attempt - GetDefinitionAsync:
VssConnection connection = new VssConnection(DefinitionTypesDTO.serverUrl, new VssCredentials());
BuildHttpClient bdClient = connection.GetClient<BuildHttpClient>();
Task<BuildDefinition> resultDef = bdClient.GetDefinitionAsync(DefinitionTypesDTO.teamProjectName, buildID);
resultDef.Wait();
BuildDefinition updatedDefinition = UpdateBuildDefinitionValues(resultDef.Result, dr, defName);
Task<BuildDefinition> updatedTask = bdClient.CreateDefinitionAsync(updatedDefinition, DefinitionTypesDTO.teamProjectName);
The update works on the variables and we can save the updated definition back to TFS but there are not any tasks in the newly created build definition. When we look at the object that is returned from GetDefinitionAsync we see that the Steps list is empty. It looks like GetDefinitionAsync just doesn't get the full object.
Second Attempt - Specific Revision:
int rev = 9;
Task<BuildDefinition> resultDef = bdClient.GetDefinitionAsync(DefinitionTypesDTO.teamProjectName, buildID, revision: rev);
resultDef.Wait();
BuildDefinition updatedDefinition = UpdateBuildDefinitionValues(resultDef.Result, dr, defName);
Based on SteveSims' post, we thought we were not getting the correct revision, so we added the revision to the request. I see the same issue with the correct revision. As in SteveSims' post, I can open the definition URL in a browser and see that the tasks are in the JSON, but the BuildDefinition object is not populated with them.
Third Attempt - GetFullDefinition:
So then I thought I'd try GetFullDefinition; maybe that's what "Full" means. Of course, without any documentation on these libraries, I have no idea.
var task2 = bdClient.GetFullDefinitionsAsync(DefinitionTypesDTO.teamProjectName, "MyBuildDefName","$/","TfsVersionControl");
task2.Wait();
Still no luck, the Steps list is always null even though we have steps in the build definition.
Fourth Attempt - Save As Template:
var task2 = bdClient.GetTemplateAsync(DefinitionTypesDTO.teamProjectName, "1_Batch_Dev");
task2.Wait();
I tried saving the Build Definition off as a template. So in the Web UI I chose "Save as Template", still no steps.
Fifth Attempt - Using the URL as mentioned in SteveSims' post:
Finally I said OK, I'll try the solution SteveSims used: using a WebClient to get the object from the URL.
var client = new WebClient();
client.UseDefaultCredentials = true;
var json = client.DownloadString(LastDefinitionUrl);
//Convert the JSON to an actual builddefinition
BuildDefinition result = JsonConvert.DeserializeObject<BuildDefinition>(json);
This also didn't work: the build definition steps are null. Even when looking at the JSON string (var json) I can see the steps, but the deserialized object is not loaded with them.
I've seen this post, which seems to add the Steps to the base definition. I've tried it, but honestly I'm having trouble understanding how he modified the BuildDefinition object when referencing it via NuGet?
https://dennisdel.com/blog/getting-build-steps-with-visual-studio-team-services-.net-api/
After looking at @Daniel Frost's attempt below, we started looking at older versions of the NuGet package. Surprisingly, the supported version 15.131.1 does not support this, but we found that version 15.112.0-preview does.
After rolling back all of our DLLs to match that version, the steps were cloned when saving the new copy of the build.
All of the code examples above work when you are using this package. We were unable to get Daniel's example working, but we didn't try hard as we had working code.
We need to create a GitHub issue for this.
Found this in my code, which works.
Use this package, not sure if it could have an impact (joke).
...packages\Microsoft.TeamFoundationServer.Client.15.112.1\lib\net45\Microsoft.TeamFoundation.Build2.WebApi.dll
private Microsoft.TeamFoundation.Build.WebApi.BuildDefinition GetBuildDefinition(string projectName, string buildDefinitionName)
{
    var buildDefinitionReferences = _buildHttpClient.GetFullDefinitionsAsync(projectName, "*", null, null, DefinitionQueryOrder.DefinitionNameAscending, top: 1000).Result;
    return buildDefinitionReferences.SingleOrDefault(x => x.Name == buildDefinitionName && x.DefinitionQuality != DefinitionQuality.Draft);
}
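A minimal sketch of how that helper can be combined with CreateDefinitionAsync to save a renamed copy; UpdateBuildDefinitionValues, dr and defName are the question's own members, the "1_Batch_Dev" name is reused from the attempts above, and the rest is an illustrative assumption rather than tested code:

<!-- language: c# -->

// Illustrative only: fetch the full definition, apply your own variable updates,
// rename it and save it back as a new definition. With the 15.112.x package the
// Steps collection survives this round trip, which is what makes the clone work.
BuildDefinition source = GetBuildDefinition(DefinitionTypesDTO.teamProjectName, "1_Batch_Dev");
BuildDefinition updated = UpdateBuildDefinitionValues(source, dr, defName);
updated.Name = defName; // give the clone its own name before saving
BuildDefinition clone = _buildHttpClient.CreateDefinitionAsync(updated, DefinitionTypesDTO.teamProjectName).Result;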
With the newer clients Steps will always be empty. In newer API versions (which are used by the newer clients) the steps have moved to Phases. If you use GetDefinition or GetFullDefinitions and look in
definition.Process.Phases[0].Steps
you'll find them. (GetDefinitions only returns shallow references, so the process won't be included.)
The Steps collection still exists for compatibility reasons (we don't want apps to crash with stuff like MethodNotFoundExceptions) but it won't be populated.
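For example, a minimal sketch of reading the steps from their new location; the DesignerProcess cast is an assumption about how designer (non-YAML) definitions expose their phases in the newer object model, so verify the type name against the client version you reference:

<!-- language: c# -->

// Sketch only: designer (non-YAML) definitions are assumed to carry a DesignerProcess
// whose Phases hold the steps; YAML definitions use a different process type.
// bdClient is the BuildHttpClient from the question above.
List<BuildDefinition> definitions = bdClient.GetFullDefinitionsAsync(DefinitionTypesDTO.teamProjectName, "MyBuildDefName").Result;

foreach (BuildDefinition definition in definitions)
{
    if (definition.Process is DesignerProcess designer)
    {
        foreach (BuildDefinitionStep step in designer.Phases[0].Steps)
        {
            Console.WriteLine(step.DisplayName);
        }
    }
}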
I was having this problem too: although I was able to get the Phases[0] information at runtime, I could not get it at design time. I solved this problem using the dynamic type.
dynamic process = buildDefTemplate.Process;
foreach (BuildDefinitionStep tempStep in process.Phases[0].Steps)
{
    // do some work here
}
Now it is working!
With Microsoft.TeamFoundationServer.Client version 16.170.0, I can get build steps through process.Phases[0].Steps only with process and step declared as dynamic, as @whitecore stated above:
var definitions = buildClient.GetFullDefinitionsAsync(project: project.Name);
foreach (var definition in definitions.Result)
{
    Console.WriteLine(string.Format("\n {0} - {1}:", definition.Id, definition.Name));
    dynamic process = definition.Process;
    foreach (dynamic step in process.Phases[0].Steps)
    {
        Console.WriteLine(step.DisplayName);
    }
}
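Hedging in the same way (dynamic member names are only checked at run time), the traversal can be extended to tweak a step before pushing the definition back, for example to disable one by display name; the step name below is purely illustrative:

<!-- language: c# -->

// Sketch under the same assumptions: BuildDefinitionStep exposes DisplayName and Enabled,
// and edits made through the dynamic view land on the underlying definition object.
foreach (var definition in definitions.Result)
{
    dynamic process = definition.Process;
    foreach (dynamic step in process.Phases[0].Steps)
    {
        if (step.DisplayName == "Copy Files") // illustrative name, not from the original post
        {
            step.Enabled = false;             // assumes steps carry an Enabled flag
        }
    }
    // To persist the change, UpdateDefinitionAsync on the build client is the likely call;
    // check the exact overload for your client version, e.g.:
    // buildClient.UpdateDefinitionAsync(definition).Wait();
}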
I am using SQLProvider from NuGet (https://www.nuget.org/packages/SQLProvider/ v1.1.42) in an F# project to access our MSSQL database.
I am referring to the sample code from here, https://fsprojects.github.io/SQLProvider/core/programmability.html and also the source code tests on GitHub, https://github.com/fsprojects/SQLProvider/blob/master/tests/SqlProvider.Tests/scripts/MySqlTests.fsx.
#r #"....\packages\SQLProvider.1.1.42\lib\net451\FSharp.Data.SqlProvider.dll"
#r #"....\System.Data.Linq.dll"
open System
open FSharp.Data.Sql
open FSharp.Data.Sql.Common
open System.Data.Linq
type SeriesResult = { .. fields .. }
[<Literal>]
let ConnectionString = @"connStr"
type Sql = SqlDataProvider<
ConnectionString = ConnectionString,
DatabaseVendor = Common.DatabaseProviderTypes.MSSQLSERVER>
let db = Sql.GetDataContext()
let test =
[
for f in db.Procedures.MyStoredProcedure.Invoke("param").ResultSet do
yield f.MapTo<SeriesResult>()
]
I need to access the results from the call to MyStoredProcedure, but ResultSet fails with "error FS0039: The field, constructor or member 'ResultSet' is not defined". I also get this for ColumnValues, and on MapTo (presumably because the type is unknown).
Is there an additional library I should be referencing?
I have: FSharp.Core, FSharp.Data, FSharp.Data.SqlProvider, mscorelib, System, System.Core, System.Data, System.Data.Linq, System.Xml.Linq
Thanks!
(wanted to tag with SQLProvider - but can't!)
One possible reason for this is that the intelli-sense thread probably timed out waiting for a response from the SQL provider.
The stored procs and the set of types to carry the ResultSet are computed lazily (when you type the .). This is good in one way, as it means the provider doesn't introspect the entire database on instantiation, pulling in lots of stuff you're probably not going to use. However, it does have the side effect of needing to do a non-trivial amount of work in the . completion on the first request; we cache the result after that. I believe Microsoft have a metric that says any intelli-sense work should complete in 250 ms, but what the actual thread timeout is I'm not sure. With languages like C# and F#, hitting a response target of 250 ms can be a big ask on large solutions, but throw a database into the mix (even a small local database) and this becomes a very hard target to hit.
Quite why it didn't recover and try again until you added the references will only be known to Visual Studio. Usually, however, just closing and re-opening the file is enough; in rare cases, unload the project from the solution and reload it.
I am looking at some old MetaEditor4 / MQL4 code, where a local variable was declared twice:
......
1 int start()
2 {
3 if (1==2)
4 {
5 double myVar = 1;
6 } else
7 {
8 double myVar = 2;
9 }
10 return;
11 }
.......
The compilation process in MetaEditor, version 5.00, build 1601, fails with:
'myVar' - variable already defined in line 8.
If I remove line 8, the compilation succeeds.
My questions are:
1. Is there any option in MetaEditor that tolerates the multiple declaration of a local variable?
2. In previous versions of MetaTrader Terminal 4 / MetaEditor and .MQ4 code: was it possible to declare a local variable more than once in such a situation?
3. The MetaEditor has the version 5.00, build 1601, but the extension of the code is .mq4 and it was installed together with the MetaTrader Terminal software MetaTrader4 ( from FXCM ). Therefore I assume I can still use .MQ4 code with it. Is there any chance to get a pure MQL4 installation from somewhere?
Whenever I install MT4 (from, e.g., an "mt4 download" link), it ends up with the MT5 installer.
Prologue:
The worlds of MQL4 evolve. One may try to circumvent this fact, but finally, at one's own disappointment, attempts to avoid evolution will sooner or later go in vain.
Having been thrown into a need to re-engineer code-base spanning a few man*decades in size, I can tell you many stories about what worked and what did not.
An "Old code" v/s a New-MQL4.56789
If just one thing ought to be taken from this: never try to "circumvent" the New-MQL4, but rather review the code and refactor the "Old code". This is a way safer way to survive (for way longer).
Yes, there are chances (zero warranties, just a few chances left temporarily on the table) that the new compiler version will remain able to generate an executable version of the code, but given that a new set of rules has already come to the city, the game will not last long.
Ad 1 + 2 ) The compiler still tolerates multiple declarations, but not in one scope
If the new version of the compiler has defined that any variable is declared only relative to its scope of validity, a serious programmer ought to take this as a general principle. The code above actually has another problem, nailed right to the scope-of-validity:
2 ...
3 if ( 1 == 2 ) {
4 ...
5 double myVar = 1; // myVar declared & known |since HERE >
6 ... // masking any other,|known HERE :
7 ... // |known HERE :
8 } else // |till HERE . Undef further
9 {
10 ...
11 double myVar = 2; // myVar declared & known |since HERE >
12 // masking any other,|known HERE :
13 ... // |known HERE :
14 } // |till HERE . Undef further
So, if there were any global-scope variable with the same name myVar, it would not be "visible" during the existence of the locally declared variable wearing the same name.
Finally, once code execution has escaped past line 8 or line 14, the locally declared variable double myVar simply ceases to exist, and this behaviour is principally correct. (The "older" compiler releases tolerated a dangerous habit of side effects during years of scope-of-validity spillover(s), so it was high time to clean up the rules to meet a fair level of C/S standards.)
Ad 3 ) The language receives a lot from MQL5, even if it is not used in MQL4
Yes, MetaEditor will correctly compile MQL4 code into the .ex4 code-execution format, no problem there. Even the auto-update process has started to run independently of the MT4 Terminal platform (auto-)updates, so you will quite often see a new Help file arrive and an enforced re-compilation of all your localhost-visible .mq4 assets into the updated .ex4 format, so "Do not panic."
Better never to install a Broker-agnostic MT4; always go to your Broker's Support and get the installation package & help from your Broker. This is a business relation you have signed in a contract, so keep to these strings, as you are going to trade your money on a table they operate under the set Terms & Conditions. Some Brokers have means of platform customisation, so rather benefit from their custom settings that will match their Server-side automation.
It is more a question of the economy of R&D efforts. (One may read a lot about the language components injected from the MQL5 domain in the IDE Editor's MQL4 Help.) It is a natural choice of product design strategy not to double efforts on a dual line. Without doubt, there are many details in which the Help file could be improved and better maintained; the common sense here is to live with the facts and re-learn which newly introduced features remain neutral for the MQL4 code base and which new things may actually help a lot in aspects where the older compilers were short on power.
If one objects that some compiler / platform re-design steps were bad, I would agree on the single-threaded, platform-critical, potentially blocking concentration of executing all the CustomIndicator-s in just one SPoF thread.
But c'est la vie: until the system architects review this SPoF, the platform will remain susceptible to crashes from this feature; the ball is on the other side of the court, and a change will have to be implemented there.
The code might be run in 'strict' or non-strict mode. Strict means that a variable must be declared within its scope; non-strict allows all the mess that you have now. So put #property strict at the beginning of the file.
Open a demo account somewhere and install MT4 there. A demo can be valid for 30 days only with registration via a broker's web site, or be unlimited with a demo opened from within MT4 (example: Alpari).
I'm just learning F#, and setting up a FAKE build harness for a hello-world-like application. (Though the phrase "Hell world" does occasionally come to mind... :-) I'm using a Mac and emacs (generally trying to avoid GUI IDEs by preference).
After a bit of fiddling about with documentation, here's how I'm invoking the F# compiler via FAKE:
let buildDir = @"./build-app/" // Where application build products go
Target "CompileApp" (fun _ -> // Compile application source code
!! #"src/app/**/*.fs" // Look for F# source files
|> Seq.toList // Convert FileIncludes to string list
|> Fsc (fun p -> // which is what the Fsc task wants
{p with //
FscTarget = Exe //
Platform = AnyCpu //
Output = (buildDir + "hello-fsharp.exe") }) // *** Writing to . instead of buildDir?
) //
That uses !! to make a FileIncludes of all the sources in the usual way, then uses Seq.toList to change that to a string list of filenames, which is then handed off to the Fsc task. Simple enough, and it even seems to work:
...
Starting Target: CompileApp (==> SetVersions)
FSC with args:[|"-o"; "./build-app/hello-fsharp.exe"; "--target:exe"; "--platform:anycpu";
"/Users/sgr/Documents/laboratory/hello-fsharp/src/app/hello-fsharp.fs"|]
Finished Target: CompileApp
...
However, despite what the console output above says, the actual build products go to the top-level directory, not the build directory. The message above looks like the -o argument is being passed to the compiler with an appropriate filename, but the executable gets put in . instead of ./build-app/.
So, 2 questions:
1. Is this a reasonable way to be invoking the F# compiler in a FAKE build harness?
2. What am I misunderstanding that is causing the build products to go to the wrong place?
This, or a very similar problem, was reported in FAKE issue #521 and seems to have been fixed in FAKE pull request #601, which see.
Explanation of the Problem
As is apparently well-known to everyone but me, the F# compiler as implemented in FSharp.Compiler.Service has a practice of skipping its first argument. See FSharp.Compiler.Service/tests/service/FscTests.fs around line 127, where we see the following nicely informative comment:
// fsc parser skips the first argument by default;
// perhaps this shouldn't happen in library code.
Whether it should or should not happen, it's what does happen. Since the -o came first in the arguments generated by FscHelper, it was dutifully ignored (along with its argument, apparently). Thus the assembly went to the default place, not the place specified.
Solutions
The temporary workaround was to specify --out:destinationFile in the OtherParams field of the FscParams setter in addition to the Output field; the latter is the sacrificial lamb to be ignored while the former gets the job done.
The longer term solution is to fix the arguments generated by FscHelper to have an extra throwaway argument at the front; then these 2 problems will annihilate in a puff of greasy black smoke. (It's kind of balletic in its beauty, when you think about it.) This is exactly what was just merged into master by @forki23:
// Always prepend "fsc.exe" since fsc compiler skips the first argument
let optsArr = Array.append [|"fsc.exe"|] optsArr
So that solution should be in the newest version of FAKE (3.11.0).
The answers to my 2 questions are thus:
1. Yes, this appears to be a reasonable way to invoke the F# compiler.
2. I didn't misunderstand anything; it was just a bug and a fix is in the pipeline.
More to the point: the actual misunderstanding was that I should have checked the FAKE issues and pull requests to see if anybody else had reported this sort of thing, and that's what I'll do next time.
One of my users at a large university (with, I imagine, the aggressive security settings that university IT departments generally have on their computers) is getting an empty string returned by Windows XP for CSIDL_COMMON_APPDATA or CSIDL_PERSONAL. (I'm not sure which of these is returning the empty string, because I haven't yet examined his computer to see how he's installed the software, but I'm pretty sure it's the COMMON_APPDATA...)
Has anyone encountered this or have suggestions on how to deal with this?
Here's the Delphi code I'm using to retrieve the value:
Function GetSpecialFolder( FolderID: Integer): String;
var
  PIDL: PItemIDList;
  Path: array[0..MAX_PATH] of Char;
begin
  SHGetSpecialFolderLocation(Application.Handle, FolderID, PIDL);
  SHGetPathFromIDList(PIDL, Path);
  Result := Path;
end; { GetSpecialFolder }
ShowMessage(GetSpecialFolder(CSIDL_COMMON_APPDATA)); <--- This is an empty string
Edit:
Figuring out this API made me feel like I was chasing my tail - I went in circles trying to find the right call. This method and others similar to it are said to be deprecated by Microsoft (as well as by an earlier poster to this question (@TLama?) who subsequently deleted the post). But it seems like most of us, including me, regularly and safely ignore that status.
In my searches, I found a good answer here on SO from some time ago, including sample code for the non-deprecated way of doing this: what causes this error 'Unable to write to application file.ini'.
If you want to find out why an API call is failing you need to check the return values. That's what is missing in this code.
You need to treat each function on its own merits. Read the documentation on MSDN. In the case of SHGetSpecialFolderLocation, the return value is an HRESULT. For SHGetPathFromIDList you get back a BOOL. If that is FALSE then the call failed.
The likely culprit here is SHGetSpecialFolderLocation, the code that receives the CSIDL, but you must check for errors whenever you call Windows API functions.
Taking a look at the documentation for CSIDL we see this:
> CSIDL_COMMON_APPDATA
>
> Version 5.0. The file system directory that contains application data for all users. A typical path is C:\Documents and Settings\All Users\Application Data. This folder is used for application data that is not user specific. For example, an application can store a spell-check dictionary, a database of clip art, or a log file in the CSIDL_COMMON_APPDATA folder. This information will not roam and is available to anyone using the computer.
If the machine has a shell version lower than 5.0, then this CSIDL value is not supported. That's the only documented failure mode for this CSIDL value. I don't think that applies to your situation, so you'll just have to see what the HRESULT status code has to say.