Receiving intermittent warning - "Unable to delete temporary files" - google-cloud-dataflow

We've noticed that we're getting the following warnings when running some of our CDF jobs.
Is this anything to be concerned about?
2015-04-27T22:42:16.442Z: S09: (49e39a5770173e26): Unable to delete
temporary files
gs://[removed]/cdf-mapped/dax-tmp-2015-04-27_15_36_45-15805476788472573847-S09-1-81f2ef2477e4a2bc/NetworkActiveViews_232503_20150427#DAX.csv$
Causes: (49e39a5770173187): Unable to delete directory:
gs://[removed]/cdf-mapped/dax-tmp-2015-04-27_15_36_45-15805476788472573847-S09-1-81f2ef2477e4a2bc.

This is a harmless error message and it should not impact the pipeline.

Related

Duplicate key error during Umbraco installation

I'm installing Umbraco V7 and get this error during installation
Error during installation
The database failed to install. ERROR: The database configuration failed
with the following message: The CREATE UNIQUE INDEX statement terminated because
a duplicate key was found for the object name 'dbo.cmsPropertyType' and the
index name 'IX_cmsPropertyTypeUniqueID'.
The duplicate key value is (00000000-0000-0000-0000-000000000000).
The statement has been terminated. Please check log file for additional
information (can be found in '/App_Data/Logs/UmbracoTraceLog.txt')
Any idea where I can get a full DB script for Umbraco, or a full DB backup?
Since it is a brand new install, I would try wiping your DB and starting the install over. It sounds like something got corrupted once, and now it is perpetually breaking.

Grails 3 - Gradle: Binary file gets corrupted during build on Heroku

I am trying to use the Google Rest API from a Heroku instance. I am having problems with my certificate file, but everything works as expected locally.
The certificate is a PKCS 12 certificate, and the exception I get is:
java.io.IOException: DerInputStream.getLength(): lengthTag=111, too big.
I finally found the source of this problem: somewhere along the way the certificate file is modified. Locally it is 1,732 bytes, but on the Heroku instance it is 3,024 bytes. I have no idea when this occurs, since I build with the same command locally (./gradlew stage) and execute the resulting jar with the same command.
The file is stored in grails-app/conf; I don't know of a better place to put it. I am reading it using this.getClass().getClassLoader().getResourceAsStream(...).
I found that similar problems can occur when using Maven with resource filtering, but I haven't found any sign of Grails or Gradle doing the same kind of resource filtering.
Does anyone have any clues about what this can be?
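For what it's worth, the roughly 1.75x size growth (1,732 → 3,024 bytes) is consistent with the binary file being re-encoded as text somewhere in the build. That is only a guess at the cause, but the effect is easy to reproduce with standard tools: treating raw bytes as Latin-1 text and writing them back out as UTF-8 turns every byte above 0x7F into two bytes.

```shell
# Illustration only: re-encoding arbitrary bytes as text inflates the file --
# the same class of damage a resource-filtering step can do to a .p12 file.
printf '\303\251\000\377' > sample.bin        # 4 raw bytes, not valid text
wc -c < sample.bin                            # 4
iconv -f latin1 -t utf-8 sample.bin | wc -c   # 7: each byte >0x7F became two
```

If that is what is happening here, keeping the certificate out of any filtered/templated copy step so it is copied byte-for-byte should preserve the original 1,732 bytes.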

ant fresh_install build failed

I'm installing DSpace on Windows 7, and everything was working fine until I got this error message:
I tried doing what the linked answer ("build failed .. creation was not successful for an unknown reason") suggested, but still nothing changed. Does anyone know how to fix it?
Check dspace.install.dir in build.properties; it seems to have a bad value.
Change the value to the location where you want to install, and use the "/" character in the directory path, for example C:/Program Files/Dspace.
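For example, the relevant line in build.properties would look like the following (the path itself is just an illustration). The forward slashes matter because Java treats a backslash in a .properties file as an escape character, so "C:\dspace" would be read as "C:dspace":

```properties
# Use forward slashes: backslashes are escape characters in .properties files.
dspace.install.dir = C:/Program Files/Dspace
```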

Jenkins - Publish Over CIFS Plugin error

I'm using the Publish Over CIFS Plugin and I continually get an error, even though the copy succeeds. What I'm trying to do is copy the entire contents of a build results directory, with all its assets, to the remote host. However, I get an error message that I can't explain, and the online help is failing me.
On the Transfers Section I have only 1 block and this is the setup
Source files: build/123.456/**
Remove prefix: build/
Remote directory: builds/this_release/Latest/
Below are the error messages I get.
CIFS: Connecting from host [my-host]
CIFS: Connecting with configuration [to-host] ...
CIFS: Disconnecting configuration [to-host] ...
ERROR: Exception when publishing, exception message [A Transfer Set must contain Source files - if you really want to include everything, set Source files to **/ or **\]
Build step 'Send build artifacts to a windows share' changed build result to UNSTABLE
What I don't understand is that the files under 'build/123.456/', including sub-directories, get copied as I wanted, yet I still get an error. Any suggestions on how to correct that? I've tried removing the '**' and it still works, but I still get the error.
Actually, I've found the reason for my error: I had a second, empty Transfer Set defined on my job, with no fields filled in. That empty set was the cause of the error message.

Xcode server bot failing test action because "Too many open files in system."

The error I'm seeing is as follows:
Test target PrototypeTests encountered an error (The operation couldn’t be completed. Too many open files in system. Too many open files in system)
Test target Prototype Integration Tests encountered an error (The operation couldn’t be completed. Too many open files in system. Too many open files in system)
I am able to run the analyze and archive actions with no problems but enabling the test action causes the above errors. I've even tried this with empty tests and the problem still persists.
The output of sudo launchctl limit maxfiles on my server is:
maxfiles 256 unlimited
Please let me know if I can provide any more information.
You need to increase your ulimit. You should add the line:
ulimit -n 4096
in your ~/.profile or similar.
The reason you have to add this line to your shell's startup file is that running ulimit -n 4096 in a terminal only changes the limit for the current shell session.
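Concretely (4096 is just the example value from this answer, not a magic number):

```shell
# Raises the soft open-file limit for this shell and its children only;
# new terminal sessions revert unless the line is in ~/.profile.
ulimit -n 4096
ulimit -n    # now reports 4096
```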
I received this same message while trying to compile with low RAM, low disk space, and many apps and files open on my desktop. Closing most of them and emptying the trash resolved the issue.
