I am trying to set up a GitLab Runner to connect to Artifactory and pull images. My YAML file to set up the Runner looks like this:
gitlabUrl: https://gitlab.bayer.com/
runnerRegistrationToken: r*******-
rbac:
  create: false
  serviceAccountName: iz-sai-s
  serviceAccount.name: iz-sai-s
runners:
  privileged: true
  resources:
    limits:
      memory: 32000Mi
      cpu: 4000m
    requests:
      memory: 32000Mi
      cpu: 2000m
What changes are needed to configure my runner properly so that it connects to the Artifactory URL and pulls images from there?
This is an example where my runner runs as a Docker container, using an image that has the Artifactory CLI configured in it, so in your case your runner should have the JFrog CLI configured as well. Next, it needs an API key to access Artifactory, which you generate in Artifactory and store in GitLab as a CI/CD variable; the exact path is your repo → Settings → CI/CD → Variables.
First it authenticates, then it uploads:
publish_job:
  stage: publish_artifact
  image: xxxxxplan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxxx.com/artifactory
    REPO_NAME: my-rep
    ARTIFACT_NAME: my-artifact
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-0.0.1-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
Mark the answer as accepted if it fulfils your requirement. Also, please use indentation in your question; it is currently missing.
Edit 1: Adding the whole gitlab_ci.yml:
stages:
  - build_unittest
  - static_code_review
  - publish_artifact

image: maven:3.6.1-jdk-8-alpine

cache:
  paths:
    - .m2/repository
    - target/

variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"

build_unittest_job:
  stage: build_unittest
  script: 'mvn clean install'
  tags:
    - my-docker
  artifacts:
    paths:
      - target/*.jar
    expire_in: 20 minutes
  when: manual

code_review_job:
  stage: static_code_review
  variables:
    SONARQUBE_BASE_URL: https://xxxxxx.com
  script:
    - mvn sonar:sonar -Dsonar.projectKey=xxxxxx -Dsonar.host.url=https://xxxxx -Dsonar.login=xxxxx
  tags:
    - my-docker
  cache:
    paths:
      - /root/.sonar/cache
      - target/
      - .m2/repository
  when: manual

publish_job:
  stage: publish_artifact
  image: plan/jfrog-cli
  variables:
    ARTIFACTORY_BASE_URL: https://xxxx/artifactory
    REPO_NAME: maven
    ARTIFACT_NAME: myart
  script:
    - jfrog rt c --url="$ARTIFACTORY_BASE_URL"/ --apikey="$ARTIFACTORY_KEY"
    - jfrog rt u "target/demo-SNAPSHOT.jar" "$REPO_NAME"/"$ARTIFACT_NAME"_"$CI_PIPELINE_ID.jar" --recursive=false
  tags:
    - my-docker
  when: manual
I have one common project including both frontend and backend. Sometimes the backend and sometimes the frontend gets new commits, but my pipeline YAML runs for both of them and deploys both to the server even if nothing changed. In other words, if I add one line of code to the frontend, the pipeline deploys the backend too. Here is my bitbucket-pipelines.yml:
# This is an example Starter pipeline configuration
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
In this example the frontend is not deployed, but I will activate it. What I need is to execute a step depending on which folder/project received the commit, e.g. if there is a commit under mybackend, then only deploy the backend, and likewise for the frontend. Is it possible to execute a step only for a specific folder?
Yes, this is achievable by using the condition keyword:
This allows steps to be executed only when a condition or rule is satisfied. Currently, the only condition supported is changesets. Use changesets to execute a step only if one of the modified files matches the expression in includePaths.
Your end result should look similar to this:
pipelines:
  branches:
    master:
      - step:
          name: 'Frontend Build'
          image: node:16.4.2
          script:
            - cd myfrontend
            - npm install
          condition:
            changesets:
              includePaths:
                - "myfrontend/**"
      - step:
          name: 'Backend Build and Package'
          image: maven:3.8.3-openjdk-17
          script:
            - cd myfolder
            - mvn clean package
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
          artifacts:
            - mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
      - step:
          name: 'Deploy artifacts to Droplet'
          deployment: production
          script:
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts/target/'
                LOCAL_PATH: mybackend/target/mybackend-0.0.1-SNAPSHOT.jar
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/artifacts'
                LOCAL_PATH: mybackend/Dockerfile
            - pipe: atlassian/scp-deploy:0.3.3
              variables:
                USER: root
                SERVER: 138.138.138.138
                REMOTE_PATH: '/root/automation-temp-folder'
                LOCAL_PATH: mybackend/README.MD
          condition:
            changesets:
              includePaths:
                - "myfolder/**"
See here for more details.
ALM used: Bitbucket Cloud
CI system used: Bitbucket Cloud
Languages of the repository: Angular (Other (for JS, TS, Go, Python, PHP, …))
Error observed
ERROR: Error during SonarScanner execution
ERROR: Not authorized. Please check the property sonar.login or SONAR_TOKEN env variable
Steps to reproduce
SONAR_TOKEN is already generated and added to my environment variables.
Bitbucket.yaml
image: 'node:12.22'

clone:
  depth: full  # SonarCloud scanner needs the full history to assign issues properly

definitions:
  caches:
    sonar: ~/.sonar/cache  # Caching SonarCloud artifacts will speed up your build
  steps:
    - step: &build-test-sonarcloud
        name: Build, test and analyze on SonarCloud
        caches:
          - sonar
        script:
          - pipe: sonarsource/sonarcloud-scan:1.2.1
            variables:
              EXTRA_ARGS: '-Dsonar.host.url=https://sonarcloud.io -Dsonar.login=${SONAR_TOKEN}'
    - step: &check-quality-gate-sonarcloud
        name: Check the Quality Gate on SonarCloud
        script:
          - pipe: sonarsource/sonarcloud-quality-gate:0.1.4

pipelines:
  branches:
Potential workaround
No idea.
If you have already installed the SonarCloud app in your workspace environment, there is no need to pass the Sonar URL again; the integration process handles the URL part. Also, you should add your Sonar token variable to the workspace or repository environment. After that, you should log in to your SonarCloud organization account and bind your repo to SonarCloud so it can be evaluated. Here is my SonarCloud setup.
The bitbucket-pipelines.yml file:
image:
  name: <base image>

clone:
  # SonarCloud scanner needs the full history to assign issues properly
  depth: full

definitions:
  caches:
    # Caching SonarCloud artifacts will speed up your build
    sonar: ~/.sonar/cache

pipelines:
  pull-requests:
    '**':
      - step:
          name: "Code Quality and Security on PR"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  branches:
    master:
      - step:
          name: "Code Quality and Security on master"
          script:
            - pipe: sonarsource/sonarcloud-scan:1.2.1
              variables:
                SONAR_TOKEN: '$SONAR_CLOUD_TOKEN'
                SONAR_SCANNER_OPTS: -Xmx512m
                DEBUG: "true"
  tags:
    '*.*.*-beta*':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: beta
          clone:
            enabled: false
          script:
            - <deploy script>
    '*.*.*-prod':
      - step:
          name: "Image Build & Push"
          services:
            - docker
          caches:
            - docker
          clone:
            depth: 1
          script:
            - <build script>
      - step:
          name: "Deploy"
          deployment: prod
          clone:
            enabled: false
          script:
            - <deploy script>
The sonar-project.properties file:
sonar.organization=<sonar cloud organization name>
sonar.projectKey=<project key>
sonar.projectName=<project name>
sonar.sources=<sonar evaluation path>
sonar.language=<repo language>
sonar.sourceEncoding=UTF-8
I wrote a pipeline in the Bitbucket environment, but I would like the pipeline to be triggered only when a user runs it, not automatically on push or commit. Here is the code:
pipelines:
  branches:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker

definitions:
  services:
    docker:
      memory: 7128
Actually, I use the [skip ci] commit-message tip to avoid it, but if another team member pushes or commits any change, the pipeline will run. How else can I avoid this, please?
If you put the definition under the custom property, it stops listening to branches and only acts when a user triggers it. Use this:
pipelines:
  custom:
    new_ui_apk:
      - step:
          name: Build apk
          size: 2x
          script:
            - JAVA_OPTS="-Xmx2048m -XX:MaxPermSize=2048m -XX:+HeapDumpOnOutOfMemoryError -Dfile.encoding=UTF-8"
            - docker build -t app-release:1.0.0 .
          services:
            - docker

definitions:
  services:
    docker:
      memory: 7128
The answer above is not ideal; you only need to add trigger: manual:
- step:
    image: XXX
    name: XXXX
    deployment: XXXX
    trigger: manual
    script:
      - whatever....
An option to run it will then be shown inside the pipeline options.
I am trying to build a CI/CD pipeline in GitLab. The goal is to build a Docker image from a Dockerfile, run tests on the running container, push the image to DockerHub, then deploy it to a Kubernetes cluster. This is what I currently have in my gitlab-ci.yml:
variables:
  DOCKER_TLS_CERTDIR: "/certs"
  DOCKER_DRIVER: overlay2
  CONTAINER_IMAGE: ${DOCKER_USER}/my_app

services:
  - docker:19.03.12-dind

build:
  image: docker:19.03.12
  stage: build
  script:
    - echo ${DOCKER_PASSWORD} | docker login --username ${DOCKER_USER} --password-stdin
    - docker pull ${CONTAINER_IMAGE}:latest || true
    - docker build --cache-from ${CONTAINER_IMAGE}:latest --tag ${CONTAINER_IMAGE}:$CI_COMMIT_SHA --tag ${CONTAINER_IMAGE}:latest .
    - docker push ${CONTAINER_IMAGE}:$CI_COMMIT_SHA
    - docker push ${CONTAINER_IMAGE}:latest

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - kubectl get pods -A  # <- Won't work until I pass a kubeconfig file with cluster details
I have a few main questions:
How can I deploy this image? I know I need to pass a kubeconfig file to bitnami/kubectl, but I am not sure how to do that with GitLab CI/CD.
Can I pass the built image to a test stage before pushing it to DockerHub?
---
stages:
  - test app
  - build
  - test
  - deploy

test app:
  stage: test app
  image: node:latest
  script:
    - git clone (path to code)
    - npm install
    - npm run lint
    - npm audit fix
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: anchore:latest  # one you have built yourself, or use another testing suite
  script:
    - anchore-cli image add user/image:v1
    - anchore-cli image wait user/image:v1
    - anchore-cli image content user/image:v1
    - anchore-cli image vuln user/image:v1 all
    - anchore-cli evaluate check user/image:v1 > result.txt
    - if [ $(grep -ci "fail" result.txt) -ge 1 ]; then exit 1; fi
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  image:
    name: kubectl:latest  # build your own image that has kubectl installed
    entrypoint: [""]
  stage: deploy
  tags:
    - privileged
  # Optional: manual gate
  when: manual
  dependencies:
    - build image
  script:
    - kubectl config set-cluster k8s --server="$CLUSTER_ADDRESS"
    - kubectl config set clusters.k8s.certificate-authority-data $CA_AUTH_DATA
    - kubectl config set-credentials gitlab-service-account --token=$K8S_TOKEN
    - kubectl config set-context default --cluster=k8s --user=gitlab-service-account --namespace=my-service
    - kubectl config use-context default
    - kubectl set image $K8S_DEPLOYMENT_NAME $CI_PROJECT_NAME=$IMAGE_TAG
    - kubectl rollout restart $K8S_DEPLOYMENT_NAME
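Instead of assembling the kubectl config from individual variables as above, another way to answer the kubeconfig question directly is a file-type CI/CD variable. A minimal sketch, assuming you create a file-type variable named KUBECONFIG (Settings → CI/CD → Variables) holding your cluster's kubeconfig; GitLab writes its contents to a temporary file and exports the variable as that file's path, which kubectl reads automatically:

deploy:
  image:
    name: bitnami/kubectl:1.16.15
    entrypoint: [""]
  stage: deploy
  script:
    # KUBECONFIG already points at the file GitLab created from the
    # file-type variable, so no kubectl config commands are needed here
    - kubectl get pods -A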
1. Have variables passed in for the cluster address, cert data, and token, so you can target other clusters: pre-prod, prod, QA...
2. You can't test an image that isn't in a registry, as the testing suite needs to pull the image from somewhere. You should have a cleanup script running to clean up old images in your registry anyway, so the initial push should go to a test location, like: docker push untrusted/image:v1
You should also have before and after scripts: before_script calls docker login and after_script calls docker logout, as sketched below.
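A minimal sketch of that login/logout pattern, reusing the DOCKER_USER and DOCKER_PASSWORD variables from the question:

before_script:
  # Authenticate before each job's script runs
  - echo "$DOCKER_PASSWORD" | docker login --username "$DOCKER_USER" --password-stdin
after_script:
  # Drop the credentials afterwards, even if the job fails
  - docker logout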
I do not have an answer for deploying to Kubernetes, but I do recommend publishing a test/construction image to DockerHub while working on a merge request/development branch. Then only deploy the latest image when you merge the branch to master.
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:test .
    - docker push your_image:test
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:test
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
---
stages:
  - build
  - test
  - deploy

build image:
  stage: build
  script:
    - docker build -t your_image:$CI_COMMIT_REF_NAME .
    - docker push your_image:$CI_COMMIT_REF_NAME
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

test image:
  stage: test
  image: your_image:$CI_COMMIT_REF_NAME
  script:
    - commands to test image
  rules:
    - if: '$CI_COMMIT_REF_NAME != "master"'

deploy image:
  stage: deploy
  script:
    - docker build -t your_image:latest .
    - docker push your_image:latest
    # Extract the source branch name from the merge commit title,
    # e.g. "Merge branch 'feature-x' into 'master'"
    - export BRANCH=${CI_COMMIT_TITLE#*\'}; export BRANCH=${BRANCH%\' into*}
    # Remove the now-merged branch's image tag
    - docker image rm your_image:$BRANCH
  rules:
    - if: '$CI_COMMIT_REF_NAME == "master"'
I have been struggling for the last few days to migrate from CircleCI 1.0 to 2.0, and while the build process is done, deployment is still a big issue. The CircleCI documentation is not really much help. Here is a config.yml similar to what I have:
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
The issue is in the deploy job. I have to specify the docker: image key, but I want to reuse the environment from the build job, where all the required stuff is already installed. Surely I could just install it all again in the deploy job, but having multiple deploy jobs would lead to code duplication, which is something I do not want.
You probably want to persist to the workspace in your build job and attach it in your deploy job; you won't need to use '- checkout' after that. See https://circleci.com/docs/2.0/configuration-reference/#persist_to_workspace
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install required stuff
          command: [...]
      - run:
          name: Build
          command: docker build -t project .
      - persist_to_workspace:
          root: ./
          paths:
            - ./
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - attach_workspace:
          at: ./
      - run:
          name: Deploy
          command: |
            bash scripts/deploy/deploy.sh
            docker tag project [...]
            docker push [...]
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: develop
If you label the image built by the build job, you can then reference it in the deploy job: https://docs.docker.com/compose/compose-file/#labels
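A minimal sketch of that labeling idea (the label key and value here are assumed, not from the original): build the image with a label in the build job, then locate it by that label instead of a hard-coded tag.

- run:
    name: Build with label
    command: docker build --label com.example.project=myproject -t project .
- run:
    name: Locate the labeled image
    command: docker images --filter "label=com.example.project=myproject" --format "{{.Repository}}:{{.Tag}}"

Note that with setup_remote_docker each CircleCI job gets its own Docker engine, so an image found this way is only visible within the same job unless you push it to a registry or persist it (e.g. docker save into the workspace).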