Get screenshot of failed tests from Travis CI - travis-ci

Locally, I know how to download the failed tests' screenshots:
scp -P 2222 vagrant@127.0.0.1:/tmp/features_article_feature_817.png ~/Downloads/.
How do we download the screenshots from Travis CI?

For people who get here via Google, there is an alternative approach.
You can run a (failing) job/build in debug mode, which gives you access to an interactive session via SSH. See the Travis docs for more information on how to do this.
Once in your interactive environment, you can run your build phases and find info on failing specs in your tmp folder.
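As a rough sketch (based on the Travis "debug mode" docs; <api-token> and <job-id> are placeholders, and you may need api.travis-ci.org instead of .com depending on where your repository lives), the debug build is triggered through the API:
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Accept: application/json" \
  -H "Travis-API-Version: 3" \
  -H "Authorization: token <api-token>" \
  -d '{"quiet": true}' \
  https://api.travis-ci.com/job/<job-id>/debug
The job log then prints the ssh command you can use to connect to the interactive session.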

You can't really SSH into Travis CI. What you can do is upload your build artifacts (like screenshots) to Amazon S3. Here's an example config that would result in uploading all PNG files found in the /tmp directory:
# .travis.yml
addons:
  artifacts: true
  paths:
  - $(ls /tmp/*.png | tr "\n" ":")
You'll also have to configure some Amazon-specific environment variables:
ARTIFACTS_KEY=(AWS access key id)
ARTIFACTS_SECRET=(AWS secret access key)
ARTIFACTS_BUCKET=(S3 bucket name)
Environment variables can be encrypted and securely defined in your .travis.yml with the travis tool.
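For example, a sketch using the travis CLI (run inside the repository; the values in angle brackets are placeholders):
travis encrypt ARTIFACTS_KEY="<AWS access key id>" --add env.global
travis encrypt ARTIFACTS_SECRET="<AWS secret access key>" --add env.global
travis encrypt ARTIFACTS_BUCKET="<S3 bucket name>" --add env.global
Each command appends an encrypted entry to the env.global section of your .travis.yml.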
Read more about the Amazon S3 uploader and secure variables in the Travis CI docs:
https://docs.travis-ci.com/user/uploading-artifacts/
https://docs.travis-ci.com/user/environment-variables/#Defining-encrypted-variables-in-.travis.yml

There's a bit of an error in the YAML here: paths should be indented under artifacts. The .travis.yml file would have:
# .travis.yml
addons:
  artifacts:
    paths:
    - $(ls /tmp/*.png | tr "\n" ":")

Related

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct; by testing with invalid paths passed to the --dockerfile argument, it is clear to me that this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue which was then resolved when reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the Image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0 but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times before, forgetting that the variable was set to protected and thus would only be exported to protected branches.
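If you want to rule this out, a minimal throwaway job along these lines (my own sketch, not from the Kaniko docs) can confirm whether the variables are actually exported on the branch you are building:
check_registry_vars:
  stage: .pre
  script:
    - echo "CI_REGISTRY=${CI_REGISTRY}"
    - if [ -z "${CI_REGISTRY_USER}" ]; then echo "CI_REGISTRY_USER is empty (protected variable on an unprotected branch?)"; fi
    - if [ -z "${CI_REGISTRY_PASSWORD}" ]; then echo "CI_REGISTRY_PASSWORD is empty"; fi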
Hey, I got it working, but it was quite a hassle to figure out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would want to replace everything with variables, but I was too lazy to do that until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
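For reference, the same job with the hard-coded values replaced by variables might look roughly like this (untested sketch; GIT_USERNAME and GIT_PASSWORD are assumed to be custom variables defined in the project's CI/CD settings, and the registry path comes from GitLab's predefined CI_REGISTRY / CI_REGISTRY_IMAGE variables):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    # GIT_USERNAME / GIT_PASSWORD: hypothetical unprotected CI/CD variables holding the Git credentials
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${GIT_USERNAME}\",\"password\":\"${GIT_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA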

Can't find SSH keys settings under travis project settings

My CI project depends on another private repo, so I followed the documentation and uploaded the private key using:
➜ travis sshkey --upload ~/.ssh/id_travis_rsa --pro
Updating ssh key for Jeff-Tian/uni-sso with key from /Users/tianjef/.ssh/id_travis_rsa
Current SSH key: key for clone k8s-config
Finger print: 65:25:66:26:4d:5d:9f:ac:25:ba:ea:be:c4:d5:e3:5f
From the above I double-checked the fingerprint and compared it to the GitHub SSH keys:
They match.
However, the Travis build still fails with:
(https://travis-ci.com/github/Jeff-Tian/uni-sso/builds/161350192)
$ git clone git@github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config
Cloning into '/home/travis/k8s-config'...
Warning: Permanently added the RSA host key for IP address '140.82.114.4' to the list of known hosts.
Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
The command "git clone git#github.com:Jeff-Tian/k8s-config.git ${HOME}/k8s-config" failed and exited with 128 during .
I then checked the Travis settings but couldn't find the SSH keys settings pane.
Where did this go wrong? Is it a Travis CI bug?
It seems the SSH keys config is only available for private repos.
The issue here is that the main repo is public, but when deploying it, a private repo needs to be downloaded. This scenario is not covered by the official documentation.
The workaround is to clone the private repo via HTTPS instead of SSH, so there is no need to upload the SSH keys.
Set up GH_TOKEN in the repository settings, write that token to a .netrc file, and then cloning the private repo over HTTPS works:
.travis.yml:
- echo -e "machine github.com\n login $GH_TOKEN" > ~/.netrc
- git clone https://github.com/Jeff-Tian/k8s-config.git ${HOME}/k8s-config
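For context, a sketch of how those lines might sit in the file (placing them in before_install and the chmod are my additions, not from the original config; GH_TOKEN is assumed to be defined as a secure variable in the repository settings):
# .travis.yml
before_install:
  - echo -e "machine github.com\n login $GH_TOKEN" > ~/.netrc
  - chmod 600 ~/.netrc
  - git clone https://github.com/Jeff-Tian/k8s-config.git ${HOME}/k8s-config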

Trigger step in Bitbucket pipelines

I have a CI pipeline in Bitbucket which is building, testing and deploying an application.
The thing is that after the deploy I want to run Selenium tests.
The Selenium tests are in another Bitbucket repository and have their own pipeline.
Is there a trigger step in the Bitbucket pipeline to trigger a pipeline when a previous one has finished?
I do not want to do a fake push to the test repository to trigger those tests.
The most "correct" way I can think of doing this is to use the Bitbucket REST API to manually trigger a pipeline on the other repository, after your deployment completes.
There are several examples of how to create a pipeline here: https://developer.atlassian.com/bitbucket/api/2/reference/resource/repositories/%7Bworkspace%7D/%7Brepo_slug%7D/pipelines/#post
Copying the first example, here is how to trigger a pipeline for the latest commit on master:
$ curl -X POST -is -u username:password \
  -H 'Content-Type: application/json' \
  https://api.bitbucket.org/2.0/repositories/jeroendr/meat-demo2/pipelines/ \
  -d '
  {
    "target": {
      "ref_type": "branch",
      "type": "pipeline_ref_target",
      "ref_name": "master"
    }
  }'
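Wired into the deploying repository's own bitbucket-pipelines.yml, it might look roughly like this (sketch only; TRIGGER_USER and TRIGGER_APP_PASSWORD are assumed to be secured repository variables, and <workspace>/<selenium-repo> are placeholders for the test repository):
- step:
    name: Trigger Selenium test pipeline
    script:
      - >
        curl -X POST -is -u $TRIGGER_USER:$TRIGGER_APP_PASSWORD
        -H 'Content-Type: application/json'
        https://api.bitbucket.org/2.0/repositories/<workspace>/<selenium-repo>/pipelines/
        -d '{"target": {"ref_type": "branch", "type": "pipeline_ref_target", "ref_name": "master"}}'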
According to their official documentation there is no "easy way" to do that, because jobs are isolated within the scope of one repository. Yet you can achieve your task in the following way:
create a Docker image with the minimum required setup for executing your tests inside it
upload it to Docker Hub (or some other registry if you have one)
use the Docker image in the last step of your pipeline, after deploy, to execute the tests (see the sketch below)
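A minimal sketch of that last step (the image name and the test command are placeholders for whatever you bake into your test image):
- step:
    name: Run Selenium tests
    image: your-dockerhub-user/selenium-tests:latest
    script:
      - npm test   # or whatever command runs the suite inside the image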
Try out the official Bitbucket pipeline trigger pipe: https://bitbucket.org/product/features/pipelines/integrations?p=atlassian/trigger-pipeline
You can run it in a step after deploy:
script:
  - pipe: atlassian/trigger-pipeline:4.1.7
    variables:
      BITBUCKET_USERNAME: $BITBUCKET_USERNAME
      BITBUCKET_APP_PASSWORD: $BITBUCKET_APP_PASSWORD
      REPOSITORY: 'your-awesome-repo'
      ACCOUNT: 'teams-in-space'
@BigGinDaHouse I did something more or less like you say.
My step is built on top of a Docker image with headless Chrome, npm and Git.
I followed the steps below:
I set the private key for the remote repo as a base64-encoded variable in the original repo (see the documentation). The public key is set in the remote repo under the SSH Access option in the Bitbucket menu.
In the pipeline step I decode it and write it to a file. I also change its permissions to 400.
I add this key to the SSH agent inside the Docker image with ssh-add.
Then I am able to do a git clone followed by npm install and npm test.
NOTE: entry.sh is there because I am starting the headless browser.
- step:
    image: kimy82/headless-selenium-npm-git
    script:
      - echo $key_in_env_variable_in_bitbucket | base64 --decode > priv_key
      - chmod 400 ./priv_key
      - eval `ssh-agent -s`
      - ssh-agent $(ssh-add priv_key; git clone git@bitbucket.org:project.git)
      - cd project
      - nohup bash /usr/bin/entry.sh >> out.log &
      - npm install
      - npm test
Top answers (this and this) are correct, they work.
Just adding that we found out (after a LOT of trial and error) that the user executing the pipeline must have WRITE permissions on the repo where the pipeline is invoked (even though their app password permissions were set to "WRITE" for repos and pipelines...).
Also, this works for executing pipelines in Bitbucket Cloud or on-premise, through local runners.
(Answering, as I lack the reputation to comment.)

Jenkins Pipeline gcloud problems in docker

I'm trying to set up a Jenkins pipeline according to this article, but using Google Container Registry to push the Docker images to instead.
The problem: the part that fails is this Jenkinsfile stage block:
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file ./service-account-creds.json'
    }
}
The Error:
Please verify that you have permissions to write to the parent directory.)
ERROR: (gcloud.components.update) Could not create directory [/.config/gcloud]: Permission denied.
I can't run any gcloud command, as the error above is what I get every time.
I tried creating the /.config directory manually by logging into the AWS instance and opening up the folder's permissions to everyone, but that didn't help either.
I also can't find anywhere how to properly set up Google Cloud for a Jenkins pipeline using Docker.
Any suggestions are greatly appreciated :)
It looks like it's trying to write data directly into your root file system directory.
The .config directory for gcloud would normally be in the following locations for username and/or root user:
/home/yourusername/.config/gcloud
/root/.config/gcloud
It looks like, for some reason, Jenkins thinks the parent directory should be in /.
I would try checking where your Cloud SDK config directories are on the machine you are running this on (and for the user the script runs as):
$ sudo find / -iname "gcloud"
And look for location similars to those printed above.
Could it be that the Cloud SDK is installed in a non-standard location on the machine?
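One thing worth trying (an untested sketch, not a confirmed fix): give gcloud a writable config location by setting HOME and/or CLOUDSDK_CONFIG when entering the container, for example:
// Sketch: point gcloud at a writable directory inside the Jenkins workspace
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside("-e HOME=${env.WORKSPACE} -e CLOUDSDK_CONFIG=${env.WORKSPACE}/.config/gcloud") {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        // note: activate the same file the key was written to
        sh 'gcloud auth activate-service-account --key-file ./gcp-key.json'
    }
}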

Bitbucket Pipeline how to setup ssh agent to deploy on a remote server

Here is the workflow I want to achieve:
commit code
Bitbucket Pipelines runs tests on my public Docker image
Bitbucket Pipelines executes an Ansible script to deploy, using my public Docker image
The first two steps work fine, but here is the problem:
How/where should I store my private keys to allow Ansible to SSH to my remote server via ssh-agent?
I am a bit reluctant to store the private key in the Pipelines environment settings, since everyone else with admin access to the repo can see it.
There is a similar question asked here, but the answer suggests setting up the keys in Docker and using a private repo, which is a bit different from my case.
You can now set up SSH keys under the pipeline settings, so you do not need to use environment variables and copy keys to certain locations in the container. The private key is not shown at all.
Under
Settings -> Pipelines -> SSH keys
You would then need to get the public key onto the remote host, e.g. into the production user's authorized_keys file, and add the host's fingerprint under Known hosts on the same settings page.
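For example (sketch; user@your-server and pipelines_key.pub are placeholders for your host and the public key copied from Settings -> Pipelines -> SSH keys):
# run from a machine that can already reach the server
ssh user@your-server 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys' < pipelines_key.pub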
I have set up a similar process using Pipelines environment variables; there is a checkbox to secure the value, so you don't need to worry about others viewing it.
The setup is pretty easy:
Base64-encode a private key and store it in an environment variable in Bitbucket
Commit a my_known_hosts file to your codebase that includes the public SSH key of the remote host.
Then in your bitbucket-pipelines.yml file set up the known_hosts file and key:
- mkdir -p ~/.ssh
- cat my_known_hosts >> ~/.ssh/known_hosts
- (umask 077 ; echo $MY_SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
Full documentation is available here https://confluence.atlassian.com/bitbucket/access-remote-hosts-via-ssh-847452940.html
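Putting it together, a deploy step that then runs Ansible with that key might look roughly like this (sketch only; inventory.ini and deploy.yml are hypothetical files in the repo, MY_SSH_KEY is the base64-encoded variable from step 1, and the step's image is assumed to have Python/pip available):
- step:
    name: Deploy
    script:
      - mkdir -p ~/.ssh
      - cat my_known_hosts >> ~/.ssh/known_hosts
      - (umask 077 ; echo $MY_SSH_KEY | base64 --decode > ~/.ssh/id_rsa)
      - pip install ansible
      - ansible-playbook -i inventory.ini deploy.yml --private-key ~/.ssh/id_rsa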
