Why doesn't bitbucket-pipelines create the cache?

Here is the resulting tree on the server after my script runs:
> pwd
/opt/atlassian/pipelines/agent/build
> tree -d
.
├── android-sdk-linux
│   ├── build-tools
│   │   └── 28.0.3
...
├── app
│   ├── build
...
└── readme
8005 directories
Here is my script from https://opatry.net/2017/11/06/bitbucket-pipelines-for-android/:
ci_install.sh
#!/usr/bin/env bash
set -eu
cur_dir=$(cd "$(dirname "$0")"; pwd)
origin_dir=$(cd "${cur_dir}/.."; pwd)
app_dir="${origin_dir}/android"
output_dir="${origin_dir}/artifacts"
default_android_sdk_zip_version="3859397"
android_sdk_zip_version=${1:-${default_android_sdk_zip_version}}
case $(uname -s) in
    Linux)
        os="linux"
        ;;
    Darwin)
        os="darwin"
        ;;
    CYGWIN*|MINGW*)
        os="windows"
        ;;
    *)
        echo "!! Unsupported OS $(uname -s)"
        exit 1
        ;;
esac
export ANDROID_HOME="${origin_dir}/android-sdk-${os}"
if [ ! -f "${ANDROID_HOME}/tools/bin/sdkmanager" ]; then
    # Download and unzip Android sdk
    echo "Downloading Android SDK '${android_sdk_zip_version}' for '${os}'"
    wget "https://dl.google.com/android/repository/sdk-tools-${os}-${android_sdk_zip_version}.zip"
    unzip "sdk-tools-${os}-${android_sdk_zip_version}.zip" -d "${ANDROID_HOME}"
    rm "sdk-tools-${os}-${android_sdk_zip_version}.zip"
fi
# Add Android binaries to PATH
export PATH="${ANDROID_HOME}/tools:${ANDROID_HOME}/tools/bin:${ANDROID_HOME}/platform-tools:${PATH}"
# Accept all licenses (source: http://stackoverflow.com/questions/38096225/automatically-accept-all-sdk-licences)
echo "Auto Accepting licenses"
mkdir -p "$ANDROID_HOME/licenses"
echo -e "\n8933bad161af4178b1185d1a37fbf41ea5269c55" > "${ANDROID_HOME}/licenses/android-sdk-license"
echo -e "\n84831b9409646a918e30573bab4c9c91346d8abd" > "${ANDROID_HOME}/licenses/android-sdk-preview-license"
# Update android sdk
echo "Downloading packages described by ${cur_dir}/package_file.txt"
cat "${cur_dir}/package_file.txt"
( sleep 5 && while true; do sleep 1; echo y; done ) | sdkmanager --package_file="${cur_dir}/package_file.txt"
package_file.txt
platform-tools
build-tools;26.0.2
platforms;android-26
bitbucket-pipelines.yml:
image: java:8
pipelines:
  branches:
    master:
      - step:
          caches:
            - gradle
            - android-sdk
          script:
            - bash ./build/ci_install.sh
            - ANDROID_HOME=$PWD/android-sdk-linux bash ./build/android.sh
definitions:
  caches:
    android-sdk: android-sdk-linux
    gradle: gradle
The resulting build log:
Build teardown
You already have a 'gradle' cache so we won't create it again
Assembling contents of new cache 'android-sdk'
But in Pipelines -> Caches -> Dependency caches, the cache for android-sdk is not displayed.
And on the next run:
Cache "android-sdk": Downloading
Cache "android-sdk": Not found

Everything works fine now. My ci_install.sh file was at /MyProject/utils/pipelines/ci_install.sh, so android-sdk-linux was created in the /opt/atlassian/pipelines/agent/build/utils/android-sdk-linux folder, while the cache definition android-sdk-linux is resolved relative to the build root, so the cache assembled from a directory that did not exist there.
So I moved ci_install.sh to /MyProject/pipelines/ci_install.sh, and now android-sdk-linux is created in the /opt/atlassian/pipelines/agent/build/android-sdk-linux folder, right where the cache definition expects it.
I also removed the utils folder from my project.
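Alternatively, the cache definition itself could point at the subfolder, since relative cache paths in bitbucket-pipelines.yml resolve against the clone directory. A minimal sketch, assuming the script had stayed under utils/ so the SDK landed in utils/android-sdk-linux:
definitions:
  caches:
    android-sdk: utils/android-sdk-linux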

Related

Docker shows me a COPY error, how to fix it?

I'm using this container to set up X11 in Gitpod.
ARG base
FROM ${base}
# Dazzle does not rebuild a layer until one of its lines are changed. Increase this counter to rebuild this layer.
ENV TRIGGER_REBUILD=1
# Install Xvfb, JavaFX-helpers and Openbox window manager
RUN sudo install-packages xvfb x11vnc xterm openjfx libopenjfx-java openbox
# Overwrite this env variable to use a different window manager
ENV WINDOW_MANAGER="openbox"
USER root
# Change the default number of virtual desktops from 4 to 1 (footgun)
RUN sed -ri "s/<number>4<\/number>/<number>1<\/number>/" /etc/xdg/openbox/rc.xml
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
    && git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
# Add VNC startup script
COPY start-vnc-session.sh /usr/bin/
RUN chmod +x /usr/bin/start-vnc-session.sh
USER gitpod
# This is a bit of a hack. At the moment we have no means of starting background
# tasks from a Dockerfile. This workaround checks, on each bashrc eval, if the X
# server is running on screen 0, and if not starts Xvfb, x11vnc and novnc.
RUN echo "export DISPLAY=:0" >> /home/gitpod/.bashrc.d/300-vnc
RUN echo "[ ! -e /tmp/.X0-lock ] && (/usr/bin/start-vnc-session.sh &> /tmp/display-\${DISPLAY}.log)" >> /home/gitpod/.bashrc.d/300-vnc
USER root
### checks ###
# no root-owned files in the home directory
RUN notOwnedFile=$(find . -not "(" -user gitpod -and -group gitpod ")" -print -quit) \
    && { [ -z "$notOwnedFile" ] \
        || { echo "Error: not all files/dirs in $HOME are owned by 'gitpod' user & group"; exit 1; } }
USER gitpod
This is where it gets sketchy:
# Install novnc
RUN git clone --depth 1 https://github.com/novnc/noVNC.git /opt/novnc \
    && git clone --depth 1 https://github.com/novnc/websockify /opt/novnc/utils/websockify
COPY novnc-index.html /opt/novnc/index.html
I get this output, please help!
COPY failed: file not found in build context or excluded by .dockerignore: stat novnc-index.html: file does not exist
My Dockerfile is in /src and I'm building in /src. I tried rebuilding with the --no-cache flag and with export DOCKER_BUILDKIT=1, but I'm still stuck with this problem.
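The error message names the two possible causes itself: either novnc-index.html is not inside the build context, or a .dockerignore rule excludes it. A quick sanity check for both, assuming the context directory is /src (the image tag below is made up for illustration):
# The file must exist inside the build context
ls -la /src/novnc-index.html
# No .dockerignore rule may match it (no output here is good)
grep -i novnc /src/.dockerignore 2>/dev/null
# The context directory is the last argument to docker build
docker build -t x11-vnc-test /src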

Yocto project: copy prebuilt files to target filesystem

I've cross-compiled OpenCV 3.4, and it is running well on the board. The project is managed through Yocto, so I wrote this opencv-gl.bb file to copy the prebuilt OpenCV files to the target filesystem. But after I flashed the image to the development board, I got nothing. It seems the copy command has never been executed. Where am I wrong?
SUMMARY = "Install opencv 3.4.14 libraries"
LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""
SRC_URI = "\
    file://etc \
    file://usr \
"
S = "${WORKDIR}"
# prebuilt libraries don't need the following steps
do_configure[noexec] = "1"
do_compile[noexec] = "1"
do_package_qa[noexec] = "1"
do_install[nostamp] = "1"
do_install() {
    install -d ${D}/usr/local/bin
    cp -rf ${S}/usr/bin/* ${D}/usr/local/bin/
    install -d ${D}/usr/local/lib
    cp -rf ${S}/usr/lib/* ${D}/usr/local/lib/
    install -d ${D}/usr/local/include
    cp -rf ${S}/usr/include/* ${D}/usr/local/include/
    install -d ${D}/usr/local/share
    cp -rf ${S}/usr/share/* ${D}/usr/local/share/
}
# let the build system extend the FILESPATH search path
FILESEXTRAPATHS_prepend := "${THISDIR}/prebuilts:"
FILES_${PN} += " \
    /usr/local/bin/* \
    /usr/local/lib/* \
    /usr/local/include/* \
    /usr/local/share/* \
"
# INSANE_SKIP_${PN} += "installed-vs-shipped"
The file structure is as follows:
wb@ubuntu:~/Yocto/meta-semidrive/recipes-test/opencv-gl$ tree -L 3
.
├── opencv-gl.bb
└── prebuilts
    ├── etc
    │   └── ld.so.conf
    ├── LICENSE
    └── usr
        ├── bin
        ├── include
        ├── lib
        └── share
7 directories, 3 files
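One thing worth checking, as an assumption about the likely cause rather than something stated above: even when do_install runs and packages the files, they only reach the target filesystem if the resulting package is actually selected for the image, e.g. via IMAGE_INSTALL. A minimal sketch using the same old-style override syntax as the recipe (the package name opencv-gl comes from the .bb file name):
# conf/local.conf (or the image recipe); the leading space matters
IMAGE_INSTALL_append = " opencv-gl"
If the package is already pulled in some other way, inspect the recipe's work directory (tmp/work/.../opencv-gl/.../image) to confirm do_install really produced the files.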

How can I add an executable to my path in a CircleCI job?

I am downloading and unzipping binaryen in a run step.
- run: wget -c https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz -O - | tar -xz -C /tmp/
I am then updating the path in $BASH_ENV.
- run: echo "export PATH=/tmp/binaryen-version_101/bin/wasm-opt:\${PATH}" >> $BASH_ENV
However, I still get "command not found" for wasm-opt.
How can I install the downloaded wasm-opt binary such that another run step can use it?
The main issue is that the PATH variable should contain a list of directories, but you added the binary itself to the PATH instead of the directory it resides in.
So, for example, instead of /tmp/binaryen-version_101/bin/wasm-opt you want /tmp/binaryen-version_101/bin/. Also, CircleCI sources $BASH_ENV at the start of each subsequent step, so after you add a directory to the PATH you won't be able to run its binaries until the next step.
Here's an example config I made:
version: 2.1
workflows:
  main:
    jobs:
      - build
jobs:
  build:
    docker:
      - image: cimg/base:stable
    steps:
      - checkout
      - run: curl -sSL "https://github.com/WebAssembly/binaryen/releases/download/version_101/binaryen-version_101-x86_64-linux.tar.gz" | tar -xz -C /tmp/
      - run: echo 'export PATH=/tmp/binaryen-version_101/bin/:${PATH}' >> $BASH_ENV
      - run: wasm-opt

Bitbucket Android pipeline always fails on gradlew

I am trying to use Bitbucket Pipelines for my Android project.
Here is my bitbucket-pipelines.yml:
image: javiersantos/android-ci:latest
pipelines:
  default:
    - step:
        script:
          - export GRADLE_USER_HOME=`pwd`/.gradle
          - chmod +x ./gradlew
          - ./gradlew assembleDebug
When I run my pipeline, I get this error:
+ chmod +x ./gradlew
chmod: cannot access './gradlew': No such file or directory
What did I miss in my pipeline configuration?
You need to cd into the folder that contains your gradlew file if it is not in the root folder of your Git repository. You can check where your gradlew file is on the Docker build machine by checking the $BITBUCKET_CLONE_DIR environment variable, or by listing the files:
pipelines:
  default:
    - step:
        script:
          - echo "The current folder is: $PWD"
          - echo "The git repo root folder is: $BITBUCKET_CLONE_DIR"
          - echo "The files in the git root directory are: $(ls -la)"
          - echo "The folders in the git root directory are: $(echo */)"
          - echo "The gradlew file is at location: $( find . -name "gradlew" -type f -print0 | xargs )"
          - echo "Setting current working directory to subfolder"
          - cd MyProject # This should contain your 'gradlew' file
          - chmod +x ./gradlew
          - ./gradlew assembleDebug

Run SonarQube scanner with GitLab CI

I am trying to put together a CI environment for a .NET application using the following stack (just the relevant parts):
Debian + Mono
Docker
GitLab CI
gitlab-multi-runner (as a Docker container)
SonarQube + PostgreSQL
I've used docker-compose to create the containers for SonarQube and PostgreSQL; both are running and working. Sadly, I am stuck on executing the SonarQube analysis for the build run by the GitLab runner, and all the examples I found use Maven. I've tried sonar-scanner as well, with no luck so far.
Here are the contents of my .gitlab-ci.yml:
image: mono:latest
cache:
  paths:
    - ./src/T_GitLabCi/packages/
stages:
  - build
.shared: &restriction
  only:
    - master
  tags:
    - docker
build:
  <<: *restriction
  stage: build
  script:
    - nuget restore ./src/T_GitLabCi
    - MONO_IOMAP=case xbuild /t:Build /p:Configuration="Release" /p:Platform="Any CPU" ./src/T_GitLabCi/T_GitLabCi.sln
    - mono ./tools/NUnitConsoleRunner/nunit3-console.exe ./src/T_GitLabCi/T_GitLabCi.sln --work=./src/T_GitLabCi/test --config=Release
    - << EXECUTE SONAR ANALYSIS >>
I am definitely missing something here. Could somebody point me the right direction?
I have projects written in PHP, but that shouldn't matter. Here's what I did:
I enabled a private registry hosted on my GitLab installation.
In this registry I have a "sonar-scanner" image built from this Dockerfile (it's based on one of the images available on Docker Hub):
FROM java:alpine
ENV SONAR_SCANNER_VERSION 2.8
RUN apk add --no-cache wget && \
    wget https://sonarsource.bintray.com/Distribution/sonar-scanner-cli/sonar-scanner-${SONAR_SCANNER_VERSION}.zip && \
    unzip sonar-scanner-${SONAR_SCANNER_VERSION} && \
    cd /usr/bin && ln -s /sonar-scanner-${SONAR_SCANNER_VERSION}/bin/sonar-scanner sonar-scanner && \
    apk del wget
COPY files/sonar-scanner-run.sh /usr/bin
and here's the files/sonar-scanner-run.sh file:
#!/bin/sh
URL="<YOUR SONARQUBE URL>"
USER="<SONARQUBE USER THAT CAN ACCESS THE PROJECTS>"
PASSWORD="<USER PASSWORD>"
if [ -z "$SONAR_PROJECT_KEY" ]; then
    echo "Undefined \"projectKey\"" && exit 1
else
    COMMAND="sonar-scanner -Dsonar.host.url=\"$URL\" -Dsonar.login=\"$USER\" -Dsonar.password=\"$PASSWORD\" -Dsonar.projectKey=\"$SONAR_PROJECT_KEY\""
    if [ ! -z "$SONAR_PROJECT_VERSION" ]; then
        COMMAND="$COMMAND -Dsonar.projectVersion=\"$SONAR_PROJECT_VERSION\""
    fi
    if [ ! -z "$SONAR_PROJECT_NAME" ]; then
        COMMAND="$COMMAND -Dsonar.projectName=\"$SONAR_PROJECT_NAME\""
    fi
    if [ ! -z "$CI_BUILD_REF" ]; then
        COMMAND="$COMMAND -Dsonar.gitlab.commit_sha=\"$CI_BUILD_REF\""
    fi
    if [ ! -z "$CI_BUILD_REF_NAME" ]; then
        COMMAND="$COMMAND -Dsonar.gitlab.ref_name=\"$CI_BUILD_REF_NAME\""
    fi
    if [ ! -z "$SONAR_BRANCH" ]; then
        COMMAND="$COMMAND -Dsonar.branch=\"$SONAR_BRANCH\""
    fi
    if [ ! -z "$SONAR_ANALYSIS_MODE" ]; then
        COMMAND="$COMMAND -Dsonar.analysis.mode=\"$SONAR_ANALYSIS_MODE\""
        # the spaces around '=' matter: without them the test is always true
        if [ "$SONAR_ANALYSIS_MODE" = "preview" ]; then
            COMMAND="$COMMAND -Dsonar.issuesReport.console.enable=true"
        fi
    fi
    eval $COMMAND
fi
Now in my project in .gitlab-ci.yml I have something like this:
SonarQube:
image: <PATH TO YOUR IMAGE ON YOUR REGISTRY>
variables:
SONAR_PROJECT_KEY: "<YOUR PROJECT KEY>"
SONAR_PROJECT_NAME: "$CI_PROJECT_NAME"
SONAR_PROJECT_VERSION: "$CI_BUILD_ID"
script:
- /usr/bin/sonar-scanner-run.sh
That's pretty much all. The above example of .gitlab-ci.yml is simplified, since I'm using different builds for master and for other branches (like when: manual; sketched below), and I use this plugin to get feedback in GitLab: https://gitlab.talanlabs.com/gabriel-allaigre/sonar-gitlab-plugin
Feel free to ask if you have any questions. It took me some time to put this all together the way I want it :) Actually I'm still fine-tuning it.
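A minimal sketch of that master/branch split, reusing the registry image and the SONAR_ANALYSIS_MODE variable that sonar-scanner-run.sh already handles (the job names and the preview split are illustrative, not the exact setup from the answer):
SonarQube (master):
  image: <PATH TO YOUR IMAGE ON YOUR REGISTRY>
  variables:
    SONAR_PROJECT_KEY: "<YOUR PROJECT KEY>"
  script:
    - /usr/bin/sonar-scanner-run.sh
  only:
    - master

SonarQube (preview):
  image: <PATH TO YOUR IMAGE ON YOUR REGISTRY>
  variables:
    SONAR_PROJECT_KEY: "<YOUR PROJECT KEY>"
    SONAR_ANALYSIS_MODE: "preview"
  script:
    - /usr/bin/sonar-scanner-run.sh
  except:
    - master
  when: manual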
You need to install sonar-scanner first. There is a port of sonar-scanner for almost any recent language; for example with npm you don't have to use the Java executable directly.
I only had to do this:
npm install --save sonar-scanner
Then I needed to add this to my package.json:
"scripts": {
"sonar-scanner": "node_modules/sonar-scanner/bin/sonar-scanner"
}
This is my job in .gitlab-ci.yml:
job_testmaster:
  stage: test
  script:
    - PACKAGE_VERSION=$(node -p "require('./package.json').version")
    - echo sonar.projectVersion=${PACKAGE_VERSION} >> sonar-project.properties
    - npm run build
    - npm run sonar-scanner -- -Dsonar.login=${SONAR_LOGIN}
  only:
    - master
  tags:
    - docker
With this, I am able to start the Sonar analysis, but I am not able to use the quality gates afterwards.
Hope this helps.
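A hedged note on that last point: newer SonarQube servers (8.1 and later, an assumption about your server version; the flag does not exist on older ones) can make the scanner wait for the quality gate result and fail the job when the gate fails:
# requires SonarQube 8.1+ (assumption); on older servers the flag is ignored or rejected
npm run sonar-scanner -- -Dsonar.login=${SONAR_LOGIN} -Dsonar.qualitygate.wait=true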
