gitlab-ci - CHROME_BIN still missing in env

In my Karma config file, I've set process.env.CHROME_BIN = require('puppeteer').executablePath(); and I've installed puppeteer with npm. The relevant part of the config looks like this:
{
  ...
  browsers: ['ChromeHeadlessCustom'],
  customLaunchers: {
    ChromeHeadlessCustom: {
      base: 'ChromeHeadless',
      flags: [
        '--no-sandbox',
        '--disable-setuid-sandbox',
        '--disable-gpu',
        '--js-flags=--max-old-space-size=8196',
      ]
    }
  }
},
First, dependencies are installed and then cached. After that, linting and unit tests run in parallel, but only on merge_requests. Here is my .gitlab-ci.yml:
image: mhart/alpine-node:14

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_VERIFY: 1
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"

services:
  - name: docker:dind
    command: ["--mtu=1300"]

stages:
  - dependencies
  - test

node_modules:
  stage: dependencies
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
    policy: push
  when: manual
  script:
    - apk add nss
    - npm ci --cache .npm --prefer-offline --also=dev
  only:
    - merge_requests
  allow_failure: false

unit-test:
  stage: test
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run test:ci
  only:
    - merge_requests
  allow_failure: false

eslint:
  stage: test
  cache:
    key: $CI_COMMIT_REF_SLUG
    paths:
      - node_modules/
    policy: pull
  script:
    - npm run lint
  only:
    - merge_requests
  allow_failure: false
I'm still getting the "Please set env variable CHROME_BIN" error.
Could it be that the node_modules are not carried over correctly between jobs?
Should I export CHROME_BIN instead?
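For reference, this is roughly what that might look like (just a sketch; it assumes puppeteer's bundled Chromium is available in node_modules once the cache has been restored):

unit-test:
  stage: test
  script:
    # hypothetical alternative: resolve the Chromium path from the installed puppeteer package
    - export CHROME_BIN=$(node -e "console.log(require('puppeteer').executablePath())")
    - npm run test:ci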
Do not hesitate to suggest things I could write/do better in my ci config file.
Thanks for your help!

Related

Is it possible to execute folder-specific and commit-specific steps with the bitbucket-pipelines.yml file in Bitbucket?

I have one common project including both the frontend and the backend; sometimes the backend and sometimes the frontend gets new commits, but my pipeline YAML runs for both of them and deploys both to the server even if they have no changes. In other words, if I add one line of code to the frontend, the pipeline deploys the backend too. Here is my bitbucket-pipelines.yml:
# This is an example Starter pipeline configuration
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
In this example the frontend is not deployed, but I will activate it later. What I need is to execute a step according to which folder/project received the commit, e.g. if there is a commit under mybackend then only deploy the backend, and likewise for the frontend. Is it possible to execute a step only for a specific folder?
Yes, this is achievable by using the condition keyword:
This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.
Your end result should look similar to this:
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
          condition:
            changesets:
              includePaths:
                - "myfrontend/**"
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
See here for more details.

Drone Pipeline: Drone Cache mount path for Maven repository cannot be resolved

I'm new to Drone pipelines and interested in using them in my current project for CI/CD.
My project tech stack is as follows:
Java
Spring Boot
Maven
I have created a sample Drone pipeline, but I'm not able to cache the Maven dependencies that are downloaded and stored in the .m2 folder.
It always fails saying the mount path is not available or not found (screenshot: "Drone mount path issue").
I'm not sure of the path to provide here. Can someone help me understand which mount path I need to provide so that all the dependencies under .m2 are cached?
Adding the pipeline information below:
kind: pipeline
type: docker
name: config-server

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: build
    image: maven:3.8.3-openjdk-17
    pull: if-not-exists
    environment:
      M2_HOME: /usr/share/maven
      MAVEN_CONFIG: /root/.m2
    commands:
      - mvn clean install -DskipTests=true -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./target
        - /root/.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache
Thanks in advance..
Resolved the issue. The key change was keeping the Maven repository inside the build workspace: the build now runs with -Dmaven.repo.local=.m2/repository and the cache steps mount the workspace-relative path ./.m2/repository instead of /root/.m2/repository. Please find the working Drone pipeline below.
kind: pipeline
type: docker
name: data-importer

steps:
  - name: restore-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      restore: true
      ttl: 1
      cache_key: "volume"
      archive_format: "gzip"
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

  - name: maven-build
    image: maven:3.8.6-amazoncorretto-11
    pull: if-not-exists
    commands:
      - mvn clean install -DskipTests=true -Dmaven.repo.local=.m2/repository -B -V
    volumes:
      - name: cache
        path: /tmp/cache

  - name: rebuild-cache
    image: meltwater/drone-cache
    pull: if-not-exists
    settings:
      backend: "filesystem"
      rebuild: true
      cache_key: "volume"
      archive_format: "gzip"
      ttl: 1
      mount:
        - ./.m2/repository
    volumes:
      - name: cache
        path: /tmp/cache

trigger:
  branch:
    - main
    - feature/*
  event:
    - push

volumes:
  - name: cache
    host:
      path: /var/lib/cache

CircleCI: Can we use multiple workflows for multiple types?

I'm new to CircleCI. I want to provision my infrastructure via Terraform, and after that I also want to trigger my build, deploy, and push commands for the AWS side. But the config does not allow me to use plan_approve_apply and build-and-deploy together in one workflow. I also tried creating multiple workflows (like the example below), one for each, but that didn't work either. How can I call both in a single CircleCI config file?
My CircleCI config YAML file:
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@8.1.0
  aws-ecs: circleci/aws-ecs@2.2.1
jobs:
  init-plan:
    working_directory: /tmp/project
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - checkout
      - run:
          name: terraform init & plan
          command: |
            terraform init
            terraform plan
      - persist_to_workspace:
          root: .
          paths:
            - .
  apply:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: terraform
          command: |
            terraform apply
      - persist_to_workspace:
          root: .
          paths:
            - .
  destroy:
    docker:
      - image: docker.mirror.hashicorp.services/hashicorp/terraform:light
    steps:
      - attach_workspace:
          at: .
      - run:
          name: destroy
          command: |
            terraform destroy
      - persist_to_workspace:
          root: .
          paths:
            - .
workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
workflows: # didn't work
  build-and-deploy:
    jobs:
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"
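For what it's worth, a YAML file can only contain one top-level workflows: key, which is likely why the second block above is ignored or rejected; both workflows would instead be declared side by side under that single key. A rough sketch, reusing the jobs and orb parameters already defined above:

workflows:
  version: 2
  plan_approve_apply:
    jobs:
      - init-plan
      - apply:
          requires:
            - init-plan
      - hold-destroy:
          type: approval
          requires:
            - apply
      - destroy:
          requires:
            - hold-destroy
  build-and-deploy:
    jobs:
      # same parameters as in the original build-and-deploy block
      - aws-ecr/build_and_push_image:
          account-url: "${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com"
          repo: "${AWS_RESOURCE_NAME_PREFIX}"
          region: ${AWS_DEFAULT_REGION}
          tag: "${CIRCLE_SHA1}"
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build_and_push_image
          aws-region: ${AWS_DEFAULT_REGION}
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,image-and-tag=${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${AWS_RESOURCE_NAME_PREFIX}:${CIRCLE_SHA1}"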

How to configure Gitlab Runner to connect to Artifactory?

I am trying to set up a GitLab Runner to connect to Artifactory and pull images. My YAML file to set up the runner looks like below:
gitlabUrl: https://gitlab.bayer.com/
runnerRegistrationToken: r*******-
rbac:
  create: false
  serviceAccountName: iz-sai-s
serviceAccount.name: iz-sai-s
runners:
  privileged: true
resources:
  limits:
    memory: 32000Mi
    cpu: 4000m
  requests:
    memory: 32000Mi
    cpu: 2000m
What changes are needed to configure my runner properly so it connects to the Artifactory URL and pulls images from there?
This is an example where my runner runs as a Docker container, using an image that has the Artifactory CLI configured in it, so in your case your runner should have the JFrog CLI configured. Next, it needs an API key to access Artifactory, which you generate in Artifactory and store in GitLab as a CI/CD variable (the exact path is your repo > Settings > CI/CD > Variables).
First it authenticates, then it uploads:
publish_job:
  stage: publish_artifact
  image: xxxxxplan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxxx.com/artifactory
    REPO_NAME: my-rep
    ARTIFACT_NAME: my-artifact
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-0.0.1-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
Mark the answer as accepted if it fulfils your requirement.
Also, make sure to use indentation in your question; it is missing.
Edit 1: Adding the whole .gitlab-ci.yml
stages:
  - build_unittest
  - static_code_review
  - publish_artifact

image: maven:3.6.1-jdk-8-alpine

cache:
  paths:
    - .m2/repository
    - target/

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

build_unittest_job:
  stage: build_unittest
  script: 'mvn clean install'
  tags:
    - my-docker
  artifacts:
    paths:
      - target/*.jar
    expire_in: 20 minutes
  when: manual

code_review_job:
  stage: static_code_review
  variables:
    SONARQUBE_BASE_URL: https://xxxxxx.com
  script:
    - mvn sonar:sonar -Dsonar.projectKey=xxxxxx -Dsonar.host.url=https://xxxxx -Dsonar.login=xxxxx
  tags:
    - my-docker
  cache:
    paths:
      - /root/.sonar/cache
      - target/
      - .m2/repository
  when: manual

publish_job:
  stage: publish_artifact
  image: plan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxx/artifactory
    REPO_NAME: maven
    ARTIFACT_NAME: myart
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
  tags:
    - my-docker
  when: manual

GitLab CI pipeline can't find the path for gulp.js

I'm a tad new to this. I've been writing a pipeline that's just for training purposes, and I've encountered probably 15 different errors, but the one I'm stuck on now is really ruining all my fun since I can't get around it.
This is my code:
stages:
  - lint-css
  - lint-js
  - unit-test

image: git.chaosgroup.com:4567/philipa.hristova/test33004__half/dind_node:latest

lint css:
  stage: lint-css
  before_script:
  cache:
    untracked: true
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js lint-css

lint js:
  stage: lint-js
  cache:
    untracked: true
    policy: pull
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js lint-js

run unit test:
  stage: unit-test
  cache:
    untracked: true
  tags:
    - docker
  only:
    - web
  script:
    - ./node_modules/gulp/bin/gulp.js test
The Docker image I am using is one I made on top of a docker:dind image, adding Node.js, npm and gulp. The error I get is this:
