Where are travis logs dumped - travis-ci

I am trying to test an R package on Travis for Linux compatibility. Since it fails, I need to access the logs. I added this to .travis.yml:
after_failure:
- "cat /home/travis/build/xxxx/yyy/yyy.Rcheck/00check.log"
- ./travis-tool.sh dump_logs
This prints the log in the build output, but the actual file it refers to is
/home/travis/build/xxxxx/yyy/yyy.Rcheck/00check.log
Can someone tell me how to retrieve the file?
marco

You can use our artifacts support to upload the log to an S3 bucket: http://docs.travis-ci.com/user/uploading-artifacts/
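A minimal sketch of what that could look like in .travis.yml, assuming the bucket and AWS credentials are supplied through repository environment variables (e.g. ARTIFACTS_BUCKET, ARTIFACTS_KEY, ARTIFACTS_SECRET); see the linked docs for the exact keys:
addons:
  artifacts:
    paths:
      # upload the R CMD check log left on the build VM
      - /home/travis/build/xxxx/yyy/yyy.Rcheck/00check.log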

Related

.env file flagged as not being an object when deploying through DevOps CD pipeline

I have just created a .env file to separate my environment variables from my main docker-compose file. I can run this compose file on my local machine with no errors or issues, but when I try to run it through my CD pipeline I get the following error:
[error]Top level object in 'C:\BuildAgent_work\r38\a\"Myproject Name"\drop\ .env' needs to be an object not 'class 'str'.
I first thought this was because I had set up my build/CI process wrong but I have played around with it and have had no luck.
I have also done some research online to find others with the same problem, but none of it relates to DevOps in any way, so it has been unhelpful.
I am not sure how to reproduce this problem, but if anyone knows, I can try to provide some of my code if needed.
Edit:
Here is a snippet of my .env file. Check comment below for my thoughts
ContainerInfrastructure_Version=6.7.93-beta.1
ContainerInfrastructureCore_Version=6.7.41-beta.1
AuthenticationWebService_Version=6.7.52-beta.1
CRM_Version=6.7.52-beta.1
Expected result:
Deploys successfully
What I'm getting during the docker-compose task:
[error]Top level object in 'C:\BuildAgent_work\r38\a\Goldpine.ReleaseManagement\drop.env' needs to be an object not 'class 'str'.
Ok so I figured it out. I'm not sure how to explain this briefly but I'll do my best.
So the problem was within DevOps itself, not my code. It turns out a .env file only gets picked up if you run the docker-compose command from within the directory where the docker-compose.yml file exists.
When it went into DevOps, the command was not being run from within the downloaded artefact directory; instead, the compose file was referenced by a path passed to the -f flag.
So, long story short: if you use a .env file, you need to set the working directory within the CD pipeline to your artefact folder for it to be able to see the .env file correctly.
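A minimal sketch of the behaviour described above (paths are hypothetical):
# docker-compose only looks for .env in the directory it is invoked from:
cd /agent/_work/r38/a/drop                                    # artefact folder containing .env and docker-compose.yml
docker-compose -f docker-compose.yml up -d                    # .env is found, variables are substituted

# invoked from elsewhere, pointing at the file with -f alone:
docker-compose -f /agent/_work/r38/a/drop/docker-compose.yml up -d   # .env is not picked up
# (newer docker-compose versions also offer --project-directory / --env-file to point at it explicitly)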
I hope this is clear enough; if not, just let me know and I'll try to change it accordingly :)

Deploying code on lambda failed using serverless

I was trying to deploy code to Lambda using serverless deploy and got the error below. I tried multiple solutions available online, but none of them worked.
Error -
Serverless: Packaging service...
Serverless Error ---------------------------------------
The specified bucket does not exist
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 8.12.0
Serverless Version: 1.31.0
When you deploy your Serverless application, it uses the service attribute (defined in your serverless.yaml) as a unique identifier of your application in CloudFormation.
That said, you may get a conflict if you change the name of the bucket without removing the stack. For example:
You deploy your application with the bucket called myBucket.
The CloudFormation stack will be created with this info.
You change this name to myBucketPlus and try to deploy.
Serverless will try to clean up myBucketPlus from the last deploy before pushing the new one.
But wait! myBucketPlus does not exist.
As you did not describe what exactly you did, I tried to give an example but it could be something else.
Also, you could try removing the stack and deploying again.
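For example, a rough sketch of that approach (stage and region are just examples; double-check them before removing anything):
# tear down the existing CloudFormation stack for this service/stage
serverless remove --stage dev --region us-east-1
# then deploy from scratch so the deployment bucket is recreated
serverless deploy --stage dev --region us-east-1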
The best way to resolve this issue is:
Execute the command below to see the Lambda information, which also provides the S3 bucket name, region, endpoint info, etc.; you only need the bucket name and region in this case.
sls info -v
Create the bucket in the intended region.
Done.
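Roughly, those steps might look like this, assuming the AWS CLI is configured for the same account (the angle-bracket values are placeholders taken from the sls info output):
sls info -v                      # the verbose output includes the deployment bucket name and region
# create the missing bucket in that region
aws s3 mb s3://<bucket-name-from-sls-info> --region <region-from-sls-info>
sls deploy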

Writing file out of bazel test [duplicate]

I'm running E2E tests using a bazel test target (the Scala flavour of java_test).
In Maven I used to dump logs to a target/logs folder created during the test run, and then if something failed I could look in that folder and find the logs.
In Bazel, what path can I put in my test logging configuration so that it is writable and conveniently available when the test finishes or fails?
I know that java.io.tmpdir is writable, but it gets deleted immediately after the test finishes.
So digging through the Bazel docs I found this:
https://docs.bazel.build/versions/master/test-encyclopedia.html#initial-conditions
It seems I can read the environment variable TEST_UNDECLARED_OUTPUTS_DIR and it will give me a writable path. Anything I write there gets zipped and saved under ./bazel-out/darwin-fastbuild/testlogs/<package-name>/<target-name>/test.outputs/outputs.zip
Pretty cool!
You can set a writable path for bazel test by setting the environment variable TEST_TMPDIR="<directory>".
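A small shell sketch of how this might look, with a hypothetical //e2e:my_e2e_test target:
# inside the test, write anything worth keeping under $TEST_UNDECLARED_OUTPUTS_DIR,
# e.g. $TEST_UNDECLARED_OUTPUTS_DIR/logs/e2e.log, then run the test as usual:
bazel test //e2e:my_e2e_test
# afterwards the files are zipped under the bazel-testlogs convenience symlink:
unzip -l bazel-testlogs/e2e/my_e2e_test/test.outputs/outputs.zip
# TEST_TMPDIR can also be redirected somewhere persistent via a flag:
bazel test //e2e:my_e2e_test --test_tmpdir=/tmp/my_e2e_tmp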

Google Cloud Storage: Output path does not exist or is not writeable

I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations and change the Pipeline Arguments tab to select BlockingDataflowPipelineRunner, after creating a bucket and setting my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm if this is a known issue with using dataflow pipeline and google buckets?
Thanks!
I had a similar issue with GCS bucket permissions, though I certainly had write permissions and I could upload files into the bucket.
What solved the problem for me was acquiring roles/dataflow.admin permission for the project I was submitting the pipeline to.
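If it helps, granting that role looks roughly like this (project ID and member are placeholders):
# grant the Dataflow admin role to the account submitting the pipeline
gcloud projects add-iam-policy-binding my-project-id \
    --member="user:you@example.com" \
    --role="roles/dataflow.admin"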
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue via the gsutil command-line tool too. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to attempt to list the contents of the bucket. Then try to upload something via the gsutil cp command. This will perhaps produce enough information to root-cause the issue you are facing.
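Concretely, something along these lines (the probe file is arbitrary):
# can the active credentials see the bucket at all?
gsutil ls gs://my-cloud-dataflow-bucket
# can they write to it?
echo probe > /tmp/probe.txt
gsutil cp /tmp/probe.txt gs://my-cloud-dataflow-bucket/probe.txt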
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app.
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
I got the same error. I fixed it by setting GOOGLE_APPLICATION_CREDENTIALS to the key file with write permissions, in ~/.bash_profile on Mac.
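For instance, a line like the following in ~/.bash_profile (the key path is hypothetical):
# service-account key for an account that can write to the staging bucket
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/dataflow-service-account.json"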
I realised I needed to use a specific acl command via gsutil. Setting my account to have owner permissions did not do the job. Instead using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!

Grails 3 - Gradle: Binary file gets corrupted during build on Heroku

I am trying to use the Google Rest API from a Heroku instance. I am having problems with my certificate file, but everything works as expected locally.
The certificate is a PKCS 12 certificate, and the exception I get is:
java.io.IOException: DerInputStream.getLength(): lengthTag=111, too big.
I finally found the source of this problem. Somewhere along the way the certificate file is modified: locally it is 1732 bytes, but on the Heroku instance it is 3024 bytes. However, I have no idea when this occurs. I build with the same command locally (./gradlew stage) and execute the resulting jar with the same command.
The file is stored in grails-app/conf; I don't know of any better place to put it. I am reading it using this.getClass().getClassLoader().getResourceAsStream(...)
I found similar problems can occur when using Maven with resource filtering. But I haven't found any signs of Grails or Gradle doing the same kind of resource filtering.
Does anyone have any clues about what this can be?
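A rough way to narrow down where the change happens could be to compare the certificate as packaged locally with what ends up on the dyno (jar name, certificate name and app name below are hypothetical):
# size of the certificate inside the locally built jar
unzip -l build/libs/myapp-0.1.jar | grep my-cert.p12
# the same check on the Heroku dyno, assuming unzip is available there
heroku run --app my-app "unzip -l build/libs/myapp-0.1.jar | grep my-cert.p12"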
