How to set the timeout limit for the ServiceModel Metadata Utility Tool (Svcutil.exe)

As described in this post:
http://msdn.microsoft.com/en-us/library/aa347733%28v=vs.110%29.aspx
Timeout
The tool has a 5 minute timeout when retrieving metadata. This timeout only applies to retrieving metadata over the network. It does not apply to any processing of that metadata.
I always get a timeout error when I try to generate my web service client file.
HTTP GET Error
URI: http://localhost/TheProject/Services.svc?wsdl
There was an error downloading 'http://localhost/TheProject/Services.svc?wsdl'.
The operation has timed out
If you would like more help, type "svcutil /?"
So I usually run the generation command five or more times in the VS command prompt before the files are finally generated successfully.
I am looking for a way to set the timeout limit, so that I don't have to run my command five or more times.
Please help.
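The documentation does not mention any switch for changing the five-minute limit, so the usual workaround is to take the network out of the equation: save the WSDL to a local file and point svcutil.exe at that file, since the timeout only applies to retrieving metadata over the network. A minimal sketch, assuming curl is available and using an illustrative output file name:

curl -o Services.wsdl "http://localhost/TheProject/Services.svc?wsdl"
svcutil Services.wsdl /out:ServiceProxy.cs

If the service splits its metadata across several documents (imported WSDLs or XSDs), download each of them and pass them all to svcutil together.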

Related

FluentFTP: error uploading to only one remote server

I'm testing a new FTP service to replace our old one, which was written against the wininet.dll API.
The application connects to more than 100 remote servers, and all of them work with the old service.
With the new one, written in C# using FluentFTP, the upload fails on only one remote server (download works, so it is not a connection issue).
I've contacted the infrastructure team responsible for that server; they said that all transferred files are moved out of the destination folder immediately, as soon as the transfer finishes.
I suspect this is what causes the error below: the upload sends an MDTM command to keep the original file timestamp, but by then the transferred file no longer exists.
Is that correct? Is there an option I can set so the upload just puts the file at the destination and does nothing else?
I've tried verifyOptions = FtpVerify.None, but the error persists.
# UploadFile("c:\temp\testfile.txt\testfile.txt", "/envio/testfile.txt", Overwrite, False, None)
# FileExists("/envio/testfile.txt")
Command: SIZE /envio/testfile.txt
Response: 550 Requested action not taken. File doesnot exist
Command: MDTM /envio/testfile.txt
Response: 213 19700101000000
# DeleteFile("/envio/testfile.txt")
Command: DELE /envio/testfile.txt
Response: 550 Requested action not taken. File unavailable (e.g., file not found, no access).
Thanks
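If this reading of the log is right, the SIZE/MDTM exchange is the FileExists check that the Overwrite mode performs, not the verification that verifyOptions controls, which would explain why FtpVerify.None did not help. A sketch, assuming a reasonably recent FluentFTP version where the FtpRemoteExists.NoCheck mode is available (host and credentials are placeholders):

using FluentFTP;

var client = new FtpClient("ftp.example.com", "user", "password");
client.Connect();

// NoCheck uploads blindly: no SIZE/MDTM existence probing around the
// transfer, which avoids touching a file the server may already have
// moved out of the destination folder.
client.UploadFile(
    @"c:\temp\testfile.txt",
    "/envio/testfile.txt",
    FtpRemoteExists.NoCheck,
    false,            // do not create the remote directory
    FtpVerify.None);  // no post-upload verification either

client.Disconnect();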

Why do Jest tests SOMETIMES fail on CircleCI?

I have Jest tests that run against a dockerized Neo4j database, and sometimes they fail on CircleCI. The error message for all 25+ of them is:
thrown: "Exceeded timeout of 5000 ms for a hook.
#*******api: Use jest.setTimeout(newTimeout) to increase the timeout value, if this is a long-running test."
Since they fail only sometimes, roughly once in 25 runs, I am wondering whether jest.setTimeout will solve the issue. I was able to make them fail locally by setting jest.setTimeout(10), but I am not sure how to debug this further, or whether something other than the small default timeout (5000 ms) could be the cause. I would understand if one in 25 runs failed, or if other suites also failed, but it is a single file with all of its tests failing, and it is always the same file, never any other.
Additional information: locally, that single file runs in under 1000 ms when connected to the staging database, which is huge compared to the dockerized one that holds only a few files at the time of the run.
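If the hooks are genuinely slow on CI, the timeout can also be raised globally in a setup file instead of per test. A minimal sketch, assuming a jest.setup.js registered through the setupFilesAfterEach option (the file name is illustrative):

// jest.setup.js -- registered in jest.config.js via:
//   setupFilesAfterEach: ['./jest.setup.js']
// Raises the default 5000 ms hook/test timeout for slow CI containers.
jest.setTimeout(30000);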
For anyone who sees this: I was able to solve it by adding the --maxWorkers=2 flag to the test command in my CircleCI config. See here for details: https://support.circleci.com/hc/en-us/articles/360005442714-Your-test-tools-are-smart-and-that-s-a-problem-Learn-about-when-optimization-goes-wrong-
Naman's answer is perfect! I couldn't believe it, but it really solved my problem. Just to be extra clear on how to do it:
I changed the test script in my package.json from jest to jest --maxWorkers=2. Then I pushed, and it did solve my error.
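For reference, the relevant part of package.json after that change looks like this:

{
  "scripts": {
    "test": "jest --maxWorkers=2"
  }
}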

Jenkins errors out on large file size

I am new to Jenkins. I am trying to build a repo for production on the server, but when I click 'Build Now', all files succeed except one large file, which fails with:
FTP: Caught exception [IOException caught while copying.] Sleeping for [10,000]ms before trying again
The file is a 137 MB mp4 file.
I updated the plugins for both Publish over FTP and Publish over SSH, but the problem persists.
Help, please.
There is a default timeout setting in the SSH plugin. You can set it as long as it takes to complete your task, or simply set it to 0 (zero) to avoid any timeout; with 0, it will basically keep the connection open.
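For the Publish Over FTP/SSH plugins this is a global, per-server setting; as a rough pointer (menu labels may vary between plugin versions), it lives under:

Manage Jenkins -> Configure System -> Publish over FTP (or SSH) -> server entry -> Advanced -> Timeout (ms)

Set it generously for a 137 MB transfer, or to 0 to disable the timeout entirely.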

InfluxDB influx-enterprise.key.json no such file or directory

I am trying to install InfluxDB Enterprise Edition using this documentation: https://docs.influxdata.com/enterprise/v1.2/production_installation/. The requirements suggest using either a license-key or a license-path; I went with the license key.
In Step 2, after installing, configuring, and starting the data nodes, I try to join them to the cluster. But executing influxd-ctl add-data enterprise-data-01:8088 gives me the error:
add-data: operation exited with error: open /tmp/influx-enterprise.key.json: no such file or directory
although I configured it to use the license key itself rather than the license-key JSON file.
I also have the JSON file, so I tried it with license-path as well, but I still get the same error. Has anybody else encountered this issue?
EDIT
The issue has been resolved: I had to restart the data nodes after changing the configuration to use license-path (facepalm). I ran into this problem because I had previously entered an old license key.
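For anyone hitting the same thing: the license goes in the [enterprise] section of each node's configuration file, and the node has to be restarted before the change takes effect. A sketch with a placeholder path:

[enterprise]
  # Use one of the two, not both:
  # license-key = "<your-license-key>"
  license-path = "/etc/influxdb/influx-enterprise.key.json"

Then restart the data node (e.g. sudo systemctl restart influxdb) and re-run influxd-ctl add-data enterprise-data-01:8088.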

Grails 3 - Gradle: Binary file gets corrupted during build on Heroku

I am trying to use the Google REST API from a Heroku instance. I am having problems with my certificate file, although everything works as expected locally.
The certificate is a PKCS 12 certificate, and the exception I get is:
java.io.IOException: DerInputStream.getLength(): lengthTag=111, too big.
I finally found the source of this problem: somewhere along the way the certificate file is modified. Locally it is 1732 bytes, but on the Heroku instance it is 3024 bytes. I have no idea when this occurs; I build with the same command locally (./gradlew stage) and execute the resulting jar with the same command.
The file is stored in grails-app/conf; I don't know a better place to put it. I am reading it using this.getClass().getClassLoader().getResourceAsStream(...).
I found that similar problems can occur when using Maven with resource filtering, but I haven't found any signs of Grails or Gradle doing the same kind of resource filtering.
Does anyone have any clue what this could be?
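One way to rule Gradle's resource processing in or out is to exclude the certificate from processResources and copy it byte-for-byte instead. A sketch for build.gradle, assuming the file is grails-app/conf/google-api.p12 (the name is illustrative) and that some plugin is applying filtering:

// Keep the binary certificate out of any filtering/token expansion.
processResources {
    exclude '**/google-api.p12'
}

// Copy it verbatim onto the classpath instead.
task copyCertificate(type: Copy) {
    from 'grails-app/conf'
    include '**/google-api.p12'
    into "${buildDir}/resources/main"
}
processResources.finalizedBy copyCertificate

If the sizes still differ after that, another suspect is line-ending conversion during checkout or push; marking the file as binary in .gitattributes (google-api.p12 binary) rules that out.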
