I’ve been trying to set up a CI environment on Travis for my balena builds. I’ve managed to install balena-cli in Travis’ environment, but I cannot seem to build with a QEMU environment. I’m getting this log with the --debug flag:
[debug] new argv=[/home/travis/.nvm/versions/node/v12.21.0/bin/node,/home/travis/build/vivitek/deep-thought/node_modules/.bin/balena,build,--deviceType,raspberrypi3-64,--arch,aarch64,--emulated] length=8
[Debug] Parsing input...
[Debug] Loading project...
[Debug] Resolving project...
[Debug] docker-compose.yml file found at "/home/travis/build/vivitek/deep-thought"
[Debug] Creating project...
[Info] Building for aarch64/raspberrypi3-64
[Build] Building services...
[Build] dhcp Preparing...
[Build] rabbitmq Preparing...
[Build] hotspot Preparing...
[Build] pcap Preparing...
[Build] Built 4 services in 0 seconds
[Error] Build failed.
No such file or directory: /home/travis/.balena/bin
Error: ENOENT: no such file or directory, mkdir '/home/travis/.balena/bin'
For further help or support, visit:
https://www.balena.io/docs/reference/balena-cli/#support-faq-and-troubleshooting
The .travis.yml is the following:
sudo: true
language: node_js
node_js:
  - "12"
branches:
  only:
    - develop
    - master
    - ROUT-44-continuous-integration
git:
  submodules: false
cache:
  directories:
    - node_modules
before_script:
  - npm i -g balena-cli
jobs:
  include:
    - stage: "build rpi4"
      name: "Building on raspberry pi 4"
      script: ./build_rpi4.sh
    - stage: "build rpi3"
      name: "Building on raspberry pi 3"
      script: ./build_rpi3.sh
and the script build_rpi4.sh is the following:
#!/usr/bin/env sh
echo -e "Building containers in emulated containers"
balena build --deviceType raspberrypi3-64 --arch aarch64 --emulated --debug
build_rpi3.sh looks mostly the same, only the flags change.
Anyone know what might be wrong?
The balena CLI will cache downloaded assets (like QEMU for --emulated) in $HOME/.balena by default, and it looks like the HOME directory doesn't exist in Travis-CI.
You can change the balena CLI data directory by setting BALENARC_DATA_DIRECTORY in your environment first. So set it to an absolute path in your Travis workspace and I assume it will work.
https://github.com/balena-io/balena-cli/blob/master/TROUBLESHOOTING.md#how-do-i-make-the-balena-cli-persist-data-in-another-directory
I am new to using Bitbucket Pipelines. I have an issue with deploying my dist files to an FTP server. This error occurs when I try to deploy the project: "mirror: Access failed: /opt/atlassian/pipelines/agent/build/dist/*: No such file or directory".
This is my bitbucket-pipelines.yml file:
# Template NodeJS build
# This template allows you to validate your NodeJS code.
# The workflow allows running tests and code linting on the default branch.
image: node:16

pipelines:
  branches:
    master:
      - step:
          name: Install dependencies
          caches:
            - node
          script:
            - npm install
          artifacts:
            - node_modules/** # Save modules for next steps
      - step:
          name: Build project
          caches:
            - node
          script:
            - npm run build
          artifacts:
            - dist/** # Save build for next steps
      - step:
          name: Deploy to Production
          trigger: manual
          deployment: Production
          script:
            - pipe: atlassian/ftp-deploy:0.3.7
              variables:
                USER: $FTP_USERNAME
                PASSWORD: $FTP_PASSWORD
                SERVER: $FTP_HOST
                REMOTE_PATH: '/var/www/*******/booking.crt-minds.ru/'
                LOCAL_PATH: 'dist/*'
                EXTRA_ARGS: "--exclude=.bitbucket/ --exclude=.git/ --exclude=bitbucket-pipelines.yml --exclude=.gitignore" # Ignore these
I have tried deleting LOCAL_PATH from the yml to see what happened. But first of all, I do not understand whether my pipeline has access to the FTP server at all; how can I check that? Then I need to understand how to replace the dist folder's files on the FTP server. Maybe my bitbucket-pipelines.yml file is configured incorrectly?
Judging from the pipe's documentation:
LOCAL_PATH: Optional path to local directory to upload. Default ${BITBUCKET_CLONE_DIR}.
I bet it is interpreting the value you passed not as a glob pattern but literally, as a folder named dist/*.
Try dropping that /*:
- step:
    script:
      - pipe: atlassian/ftp-deploy:0.3.7
        variables:
          USER: $FTP_USERNAME
          PASSWORD: $FTP_PASSWORD
          SERVER: $FTP_HOST
          REMOTE_PATH: /var/www/site
          LOCAL_PATH: dist
I'm trying to lint Dockerfiles using hadolint in GitLab CI with this snippet from my .gitlab-ci.yml file:
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
  artifacts:
    name: "$CI_JOB_NAME artifacts from $CI_PROJECT_NAME on $CI_COMMIT_REF_SLUG"
    expire_in: 1 day
    when: always
    reports:
      codequality:
        - "reports/*"
    paths:
      - "reports/*"
This used to work perfectly fine, but one week ago (without any change on my part) my pipeline started to fail every time with ERROR: Job failed: exit code 1.
Full log output from job:
Running with gitlab-runner 14.0.0-rc1 (19d2d239)
on docker-auto-scale 72989761
feature flags: FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE:true
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:14
Using Docker executor with image hadolint/hadolint:latest-debian ...
Pulling docker image hadolint/hadolint:latest-debian ...
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint#sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
Preparing environment 00:01
Running on runner-72989761-project-26715289-concurrent-0 via runner-72989761-srm-1624273099-5f23871c...
Getting source from Git repository 00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/sommerfeld.sebastian/docker-vagrant/.git/
Created fresh repository.
Checking out f664890e as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint#sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
$ mkdir -p reports
$ hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
Uploading artifacts for failed job 00:03
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "codequality" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I have no idea why my build breaks all of a sudden. I'm using image: docker:stable as the image for my whole .gitlab-ci.yml file.
Anyone got an idea?
To conclude this question: the issue was an unexpected change in behavior, probably caused by an update of the hadolint image used here.
The job was in fact failing because the linter decided it should. For anyone wanting the job to succeed anyway, here is a little trick:
hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json || true
The given command forces the exit code to be zero (success) no matter what happens.
Another option, as @Sebastian Sommerfeld pointed out, is to use allow_failure: true, which essentially allows the script to fail and marks it accordingly in the pipeline overview. The only drawback to this approach is that script execution is interrupted at the point of failure and no further commands are executed.
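For reference, the allow_failure variant applied to the job above would look roughly like this (a sketch reusing the original job definition):
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  allow_failure: true   # linter findings fail the job but not the pipeline
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json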
Trying to get Ansible set up to learn about it, so this could be a very simple mistake, but I can't find the answer anywhere. When I run ansible-playbook, it simply skips the task, with the following output:
ansible-playbook -i hosts simple-devops-image.yml --check
PLAY [all] ***********************************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************************************************************
[WARNING]: Platform linux on host 127.0.0.1 is using the discovered Python interpreter at /usr/bin/python, but future installation of another Python interpreter could change this. See https://docs.ansible.com/ansible/2.9/reference_appendices/interpreter_discovery.html for more
information.
ok: [127.0.0.1]
TASK [build docker image using war file] *****************************************************************************************************************************************************************************************************************************************************
skipping: [127.0.0.1]
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************************************************************
127.0.0.1 : ok=1 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
My .yml playbook file:
---
- hosts: all
  become: yes
  tasks:
    - name: build docker image using war file
      command: docker build -t simple-devops-image .
      args:
        chdir: /usr/local/src
My hosts file:
[localhost]
127.0.0.1 ansible_connection=local
The command module is skipped when executing in check mode. Remove --check from the ansible-playbook command to build the Docker image.
Here is a note from the doc:
Check mode is supported when passing creates or removes. If running in check mode and either of these are specified, the module will check for the existence of the file and report the correct changed status. If these are not supplied, the task will be skipped.
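So if check-mode support is wanted, the task could declare creates (a sketch; the marker file below is hypothetical and only serves the illustration):
- name: build docker image using war file
  command: docker build -t simple-devops-image .
  args:
    chdir: /usr/local/src
    creates: /usr/local/src/.image-built   # hypothetical marker; check mode reports changed based on whether it exists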
I'm trying to deploy my web app using FTP and GitLab continuous integration. The files all get uploaded and the site works fine, but I keep getting the following error when the GitLab runner is almost done.
My .gitlab-ci.yml file:
stages:
  - build
  - test
  - deploy

build:
  stage: build
  tags:
    - shell
  script:
    - echo "Building"

test:
  stage: test
  tags:
    - shell
  script: echo "Running tests"

frontend-deploy:
  stage: deploy
  tags:
    - debian
  allow_failure: true
  environment:
    name: devallei
    url: https://devallei.azurewebsites.net/
  only:
    - master
  script:
    - echo "Deploy to staging server"
    - apt-get update -qq
    - apt-get install -y -qq lftp
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot"

backend-deploy:
  stage: deploy
  tags:
    - shell
  allow_failure: true
  only:
    - master
  script:
    - echo "Deploy spring boot application"
I expect the runner to go through and pass the job, but it gives me the following error:
---- Connecting data socket to (23.99.220.117) port 10033
---- Data connection established
---> ALLO 4329977
<--- 200 ALLO command successful.
---> STOR vendor.3b66c6ecdd8766cbd8b1.js.map
<--- 125 Data connection already open; Transfer starting.
---- Closing data socket
<--- 226 Transfer complete.
---> QUIT
gnutls_record_recv: The TLS connection was non-properly terminated. Assuming
EOF.
<--- 221 Goodbye.
---- Closing control socket
ERROR: Job failed: exit code 1
I don't know the reason for the "gnutls_record_recv: The TLS connection was non-properly terminated. Assuming EOF." error, but it makes your lftp command return a non-zero exit code, and that makes GitLab think your job failed. The best thing would be to fix it.
If you think everything works fine and want to prevent the lftp command from failing the job, add || true to the end of the lftp command. But be aware that your job then won't fail even if a real error happens.
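Applied to the deploy script from the question, that would be the same lftp command with || true appended:
    - lftp -c "set ftp:ssl-allow yes; set ssl:verify-certificate false; debug; open -u devallei\FTPAccesHoussem,Devallei2019 ftps://waws-prod-dm1-131.ftp.azurewebsites.windows.net/site/wwwroot; mirror -Rev ./frontend/dist /site/wwwroot" || true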
Why does Digital Ocean kill the Docker process?
cache:
  untracked: true
  key: "$CI_BUILD_REF_NAME"
  paths:
    - .yarn
    - node_modules/
    - client/semantic/

before_script:
  - yarn config set cache-folder .yarn
  - yarn install

stages:
  - build

Compile:
  stage: build
  script:
    - npm run build:prod
  artifacts:
    paths:
      - dist/
  cache:
    untracked: true
    key: "$CI_BUILD_REF_NAME"
    paths:
      - dist/
After 2 minutes 34 seconds:
[4/4] Building fresh packages...
Killed
ERROR: Job failed: exit code 1
Why was it killed?
I have a local environment with the same Linux distribution, Docker, and GitLab runner, and there it works.
Usually the Killed message comes from the Linux OOM (Out Of Memory) killer. I'm betting that if you check the dmesg output you will find an OOM message about the process being killed because not enough memory was available. In this scenario you'll need to give your system more memory (or, in Digital Ocean's case, there may not be any swap space, so you could start by creating some).
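To confirm, you could check the kernel log on the droplet and, if needed, add a swap file (a sketch; the 2G size is illustrative):
# Look for OOM-killer messages in the kernel log
dmesg | grep -i -E "killed process|out of memory"

# Create and enable a 2 GB swap file (size is illustrative)
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile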