Cannot connect to directory in bitbucket pipeline

I've read that you can specify a directory in your yml configuration so that the build does not run in the root but rather in /sites/mywebsitename.
The config up to the error:
image: php:7.4
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            - cd sites
The error:
No such file or directory
Any comment or advice is highly appreciated.

If your bitbucket-pipelines.yml is exactly as you shared and you posted it complete, there's a missing line: a "pipe" needs to be listed there, under "script". As a reference, please visit this thread:
https://community.atlassian.com/t5/Bitbucket-questions/Pipeline-fails-with-No-such-file-or-directory/qaq-p/1069160
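For illustration, a minimal sketch of a complete step: the ls check and the atlassian/ftp-deploy pipe (its version and variables included) are placeholders for this example, so substitute whatever pipe you actually deploy with:
image: php:7.4
pipelines:
  branches:
    master:
      - step:
          name: Deploy to production
          deployment: production
          script:
            # "cd sites" only works if that directory exists in the clone;
            # list the workspace first to verify
            - ls -la
            - cd sites
            # A deployment pipe must follow; this one and its variables
            # are placeholders, substitute your real pipe
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: /sites/mywebsitename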
I hope it is useful for you.

Related

travis-ci: how to move or rename a file

Before deployment (or after, but that is harder since we are deploying to S3), we need to rename staging.robots.txt to robots.txt (overwriting the default robots.txt) for the staging deployment only, so that we can block crawling on our staging server (but allow it on production).
Any idea if this is possible?
On the Travis documentation site, there is no info on the before_deploy stage, and we can't see any feature for renaming files. With Jenkins, I would simply put cp xxx yyy or similar in the build script, as I know my Jenkins is running on Ubuntu, but we don't know the equivalent Travis command for the .travis.yml file.
== UPDATE ==
Having done more research, it might be possible to do this through a script, e.g. commit move.sh into your repo, then call it. As you can choose which OS the build is done on (e.g. Linux), you can write the script for that platform. However, it's not clear at what point you can call this script in the .yml file.
You can simply write a script to invoke in your .travis.yml file for deployment. See the documentation.
Here's the example copied from these docs:
deploy:
  provider: script
  script: scripts/deploy.sh
  on:
    tags: true
    branch: master
With the above deploy config, the script (scripts/deploy.sh) is invoked when a tag is pushed on the master branch.
Other than that, you can simply put the command in the before_install section, like this:
before_install:
  - mv abc.txt xyz.txt
You used the cp command, but you are talking about renaming, not copying, so I've used the mv command to rename the file.
If you want to do something at the end, you can add an after_success section as well.
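Since the rename should only happen for the staging deployment, one option is to make the step branch-conditional. A minimal sketch, assuming a "staging" branch drives the staging deploy (TRAVIS_BRANCH is a standard Travis environment variable):
before_deploy:
  # Overwrite robots.txt only when deploying staging
  # (the "staging" branch name is an assumption)
  - if [ "$TRAVIS_BRANCH" = "staging" ]; then mv staging.robots.txt robots.txt; fi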
Hope that helps!

Share gitlab-ci.yml between projects

We are thinking of moving our CI from Jenkins to GitLab. We have several projects that share the same build workflow. Right now we use a shared library where the pipelines are defined, and the Jenkinsfile inside each project only calls a method from the shared library that defines the actual pipeline. So changes only have to be made at a single point, affecting several projects.
I am wondering if the same is possible with GitLab CI? As far as I have found out, it is not possible to define the gitlab-ci.yml outside the repository. Is there another way to define a pipeline and share this config with several projects to simplify maintenance?
GitLab 11.7 introduces new include methods, such as include:file:
https://docs.gitlab.com/ee/ci/yaml/#includefile
include:
  - project: 'my-group/my-project'
    ref: master
    file: '/templates/.gitlab-ci-template.yml'
This will allow you to create a new project on the same GitLab instance which contains a shared .gitlab-ci.yml.
First, let me start by saying: thank you for asking this question! It triggered me to search for a solution (again) after often wondering if this was even possible myself. We also have 20-30 projects that are quite identical and have .gitlab-ci.yml files of about 400-500 LOC that each have to be changed if one thing changes.
So I found a working solution:
Inspired by the Auto DevOps .gitlab-ci.yml template Gitlab itself created, and where they use one template job to define all functions used and call every before_script to load them, I came up with the following setup.
Multiple project repos (project-1, project-2) requiring a shared set of CI jobs / functions
Functions script containing all shared functions in separate repo
Files
So, using a shared CI jobs script (functions.sh):
#!/bin/bash

function list_files {
  ls -lah
}

function current_job_info {
  echo "Running job $CI_JOB_ID on runner $CI_RUNNER_ID ($CI_RUNNER_DESCRIPTION) for pipeline $CI_PIPELINE_ID"
}
A common and generic .gitlab-ci.yml:
image: ubuntu:latest

before_script:
  # Install curl
  - apt-get update -qqq && apt-get install -qqqy curl
  # Get shared functions script
  - curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/master/functions.sh
  # Set permissions
  - chmod +x functions.sh
  # Run script and load functions
  - . ./functions.sh

job1:
  script:
    - current_job_info
    - list_files
You could copy-paste your file from project-1 to project-2 and it would be using the same shared Gitlab CI functions.
These examples are pretty verbose for demonstration purposes; optimize them any way you like.
Lessons learned
So, after applying the construction above on a large scale (40+ projects), I want to share some lessons learned so you don't have to find out the hard way:
Version (tag/release) your shared CI functions script, because changing one thing can now make all pipelines fail (see the sketch after this list).
Using different Docker images can break the requirement of bash to load the functions (e.g. I use some Alpine-based images for CLI-tool-based jobs that only have sh by default).
Use project-based CI/CD secret variables to personalize build jobs for projects, like environment URLs etc.
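For the first point, a minimal sketch of what pinning looks like, reusing the curl line from above (the v1.0.0 tag is a hypothetical release of the shared repo):
before_script:
  # Fetch the functions from a tagged release instead of master,
  # so changes to the shared script cannot break every pipeline at once
  - curl -s -o functions.sh https://gitlab.com/giix/demo-shared-ci-functions/raw/v1.0.0/functions.sh
  - chmod +x functions.sh
  - . ./functions.sh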
Since GitLab 12.6, it's possible to define an external .gitlab-ci.yml file.
To customize the path:
Go to the project's Settings > CI / CD.
Expand the General pipelines section.
Provide a value in the Custom CI configuration path field.
Click Save changes.
...
If the CI configuration will be hosted on an external site, the URL link must end with .yml:
http://example.com/generate/ci/config.yml
If the CI configuration will be hosted in a different project within GitLab, the path must be relative to the root directory in the other project, with the group and project name added to the end:
.gitlab-ci.yml#mygroup/another-project
my/path/.my-custom-file.yml#mygroup/another-project
Use include feature, (available from GitLab 10.6):
https://docs.gitlab.com/ee/ci/yaml/#include
So, I always wanted to post what I came up with:
Right now we use a mixed approach of @stefan-van-gastel's idea of a shared CI library and the relatively new include feature of GitLab 11.7. We are very satisfied with this approach, as we can now manage our build pipelines for 40+ repositories in a single repository.
I have created a repository called ci_shared_library containing
a shell script for every single build job containing the execution logic for the step.
a pipeline.yml file containing the whole pipeline config. In the before script we load the ci_shared_library to /tmp/shared to be able to execute the scripts.
stages:
  - test
  - build
  - deploy
  - validate

services:
  - docker:dind

before_script:
  # Clear existing shared library
  - rm -rf /tmp/shared
  # Get shared library
  - git clone https://oauth2:${GITLAB_TOKEN}@${SHARED_LIBRARY} /tmp/shared
  - cd /tmp/shared && git checkout master && cd $CI_PROJECT_DIR
  # Set permissions
  - chmod -R +x /tmp/shared
  # Open access to registry
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

test:
  stage: test
  script:
    - /tmp/shared/test.sh

build:
  stage: build
  script:
    - /tmp/shared/build.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/target/RPMS/x86_64/*.rpm
    expire_in: 3h
  only:
    - develop
    - /release\/.*/

deploy:
  stage: deploy
  script:
    - /tmp/shared/deploy.sh
  artifacts:
    paths:
      - $CI_PROJECT_DIR/tmp/*
    expire_in: 12h
  only:
    - develop
    - /release\/.*/

validate:
  stage: validate
  script:
    - /tmp/shared/validate.sh
  only:
    - develop
    - /release\/.*/
Every project that wants to use this pipeline config has to have a .gitlab-ci.yml. In this file, the only thing to do is to import the shared pipeline.yml file from the ci_shared_library repo.
# .gitlab-ci.yml
include:
  - project: 'ci_shared_library'
    ref: master
    file: 'pipeline.yml'
With this approach, really everything related to the pipeline lives in one single repository and is reusable. We have the whole pipeline template in one file, but I think it would even be possible to split this up so that every single job sits in its own yml file. That way it would be more flexible, and one could create default jobs that can be merged together differently for projects that have similar jobs but where not every project needs all jobs...
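As a sketch of that split (the jobs/*.yml file names are hypothetical), each consuming project could cherry-pick the shared job files it needs; note that before GitLab 13.6, project and ref have to be repeated for every file:
# .gitlab-ci.yml in a consuming project
include:
  - project: 'ci_shared_library'
    ref: master
    file: 'jobs/test.yml'
  - project: 'ci_shared_library'
    ref: master
    file: 'jobs/build.yml'
  - project: 'ci_shared_library'
    ref: master
    file: 'jobs/deploy.yml'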
With GitLab 13.5 (October 2020), the include feature is even more useful:
Validate expanded GitLab CI/CD configuration with the API
Writing and debugging complex pipelines is not a trivial task. You can use the include keyword to help reduce the length of your pipeline configuration files.
However, if you wanted to validate your entire pipeline via the API previously, you had to validate each included configuration file separately which was complicated and time consuming.
Now you have the ability to validate a fully-expanded version of your pipeline configuration through the API, with all the include configuration included.
Debugging large configurations is now easier and more efficient.
See Documentation and Issue.
And:
See GitLab 13.6 (November 2020)
Include multiple CI/CD configuration files as a list
Previously, when adding multiple files to your CI/CD configuration using the include:file syntax, you had to specify the project and ref for each file. In this release, you now have the ability to specify the project, ref, and provide a list of files all at once. This prevents you from having to repeat yourself and makes your pipeline configuration less verbose.
See Documentation and Issue.
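With that 13.6 syntax, the per-job include sketch above collapses to a single entry (same hypothetical file names):
include:
  - project: 'ci_shared_library'
    ref: master
    file:
      - 'jobs/test.yml'
      - 'jobs/build.yml'
      - 'jobs/deploy.yml'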
You could look into the concept of Dynamic Child pipeline.
It has evolved with GitLab 13.2 (July 2020):
Dynamically generate Child Pipeline configurations with Jsonnet
We released Dynamic Child Pipelines back in GitLab 12.9, which allow you to generate an entire .gitlab-ci.yml file at runtime.
This is a great solution for monorepos, for example, when you want runtime behavior to be even more dynamic.
We’ve now made it even easier to create CI/CD YAML at runtime by including a project template that demonstrates how to use Jsonnet to generate the YAML.
Jsonnet is a data templating language that provides functions, variables, loops, and conditionals that allow for fully parameterized YAML configuration.
See documentation and issue.
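As a rough sketch of the mechanics (generate-ci.sh is a hypothetical generator script; it could equally be Jsonnet as described above): a job in the parent pipeline writes the YAML as an artifact, and a trigger job runs it as a child pipeline:
generate-config:
  stage: build
  script:
    # Produce a pipeline definition at runtime
    - ./generate-ci.sh > generated-ci.yml
  artifacts:
    paths:
      - generated-ci.yml

run-child-pipeline:
  stage: deploy
  trigger:
    include:
      - artifact: generated-ci.yml
        job: generate-config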

How to cache the entire directory (platformio dependencies) in bitbucket pipelines?

I am running a CI pipeline to build firmware for the ESP8266 using PlatformIO and Bitbucket Pipelines. My code builds successfully, and now I want to cache the directory that contains the PlatformIO libraries (.piolibdeps). Here are the contents of my platformio.ini file:
[env:nodemcuv2]
platform = espressif8266
board = nodemcuv2
framework = arduino
upload_port = 192.168.1.108
lib_deps =
    ESPAsyncTCP@1.1.0
    OneWire
    Time
    FauxmoESP
    Blynk
    DallasTemperature
    ArduinoJson
    Adafruit NeoPixel
How do I cache this directory in Bitbucket Pipelines? Please see below the contents of my bitbucket-pipelines.yml file; with this, it is not caching the defined directory. What's wrong here?
image: eclipse/platformio

pipelines:
  branches:
    develop:
      - step:
          name: Build Project
          caches: # caches the dependencies directory defined below
            - directories
          script: # Modify the commands below to build your repository.
            - pio ci --project-conf=./Code/UrbanAquarium.Firmware/platformio.ini ./Code/UrbanAquarium.Firmware/src
            - pwd

definitions:
  caches:
    directories: ./Code/UrbanAquarium.Firmware/.piolibdeps
And here is my folder structure.
In case you're still looking for an answer: I think you got it almost right, but you probably need to specify a custom --build-dir (so that you can point your cache definition at the same path) as well as --keep-build-dir (see https://docs.platformio.org/en/latest/userguide/cmd_ci.html). Also, I'm not sure why you specified a ./Code/UrbanAquarium.Firmware/ prefix.
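A sketch of that idea (the ./.pio-build path and the piobuild cache name are assumptions for this example):
image: eclipse/platformio
pipelines:
  default:
    - step:
        caches:
          - piobuild
        script:
          # Keep the build directory between runs so the cache can pick it up
          - pio ci src --project-conf=platformio.ini --build-dir=./.pio-build --keep-build-dir
definitions:
  caches:
    piobuild: ./.pio-build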
That said, I tried the above and it quickly became ugly. For now I'll only cache ~/.platformio, as well as the default pip cache:
image: python:2.7.16

pipelines:
  default:
    - step:
        caches:
          - pip
          - pio
        script:
          - pip install -U platformio
          - platformio update
          - platformio ci src/ --project-conf=platformio.ini

definitions:
  caches:
    pio: ~/.platformio

Travis cannot find my .travis.yaml file

Whenever I push a new commit, Travis CI fails my build with this message at the top of every log:
WARNING: We were unable to find a .travis.yml file. This may not be
what you want. Build will be run with default settings.
Using worker:
worker-linux-docker-71483f98.prod.travis-ci.org:travis-linux-6
Could not find .travis.yml, using standard configuration.
However, I definitely have a .travis.yaml file in the root of my repository. Here are its contents:
$ cat .travis.yaml
language:
  node_js
node_js:
  stable
script:
  node_modules/grunt-cli/bin/grunt
Some people seem to have encountered similar issues because they renamed their repositories, but I have never changed the name of this repository. Others say it just fixed itself after a couple hours, but it has been 5 days for me and nothing has changed.
Nothing in the Travis CI documentation seems to indicate that I need to do anything more than sync my repos, activate the repo I want CI for, and include a .travis.yaml file in the repo. Am I missing something?
You are using the wrong extension for your YAML file.
It needs to be .travis.yml not .travis.yaml.
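In other words, renaming the file fixes it, for example:
$ git mv .travis.yaml .travis.yml
$ git commit -m "Rename Travis config to .travis.yml"
$ git push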

Where is the /dist directory after a Travis build?

I am trying to deploy an AngularJS app onto a Divshot hosting through Travis CI.
This app contains a /dist directory which is:
where the result of the Grunt build goes (as it does locally)
.gitignored, therefore not pushed (Travis has to rebuild it)
set as the Divshot app's root directory
Travis installs deps and runs the build nicely, thanks to this .travis.yml file:
language: node_js
node_js:
  - '0.10'
install:
  - "npm install"
  - "gem install compass"
  - "bower install"
script:
  - "grunt build"
deploy:
  provider: divshot
  environment:
    master: development
  api_key:
    secure: ...
  skin_cleanup: true
But when it comes to the deployment, Travis says:
Error: Your app's root directory does not exists.
It's actually a message from divshot-cli, because the /dist dir does not exist. I get the exact same message when I do a divshot push locally after having removed the /dist dir.
Here is a build which cannot deploy: https://travis-ci.org/damrem/anm-client/builds/35582994
How come the /dist dir does not exist on the Travis VM after install & building run OK?
Notes:
I tried to replace the script step with a before_deploy step, but the problem remains (build #13).
If I push a pre-existing /dist folder with an index.html in it, it deploys nicely (https://travis-ci.org/damrem/anm-client/builds/35583904)
It looks like this might be caused by a typo: at the bottom you have skin_cleanup: true when it should be skip_cleanup: true. If skip_cleanup isn't turned on, Travis will "reset" the code to the exact state of the git checkout before running the deploy.
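With that one-word fix, the deploy section becomes (the secure key stays elided, as in the question):
deploy:
  provider: divshot
  environment:
    master: development
  api_key:
    secure: ...
  skip_cleanup: true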
