Push build information from Jenkins to Artifactory

For scanning purposes, I need to push images from Jenkins to Artifactory with a build ID so that they are conceptually grouped.
I'm using the Jenkins Artifactory plugin, and this is what I have achieved so far:
rtDockerPush(
    serverId: "artifactory-dev",
    image: "ip-address:port/repo/service:1.0",
    targetRepo: "test",
    buildName: "mytest",
    buildNumber: "6"
)
But I encounter this error:
INFO: Pushing image:
Executing command: /bin/sh -c git log --pretty=format:%s -1
Apr 12, 2022 11:40:42 AM org.jfrog.build.extractor.packageManager.PackageManagerLogger error
SEVERE: null
java.io.FileNotFoundException
I already configured the server on both the Jenkins and Artifactory sides. Is there anything else that I'm missing?
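For reference, grouping pushed images under a build is normally a two-step flow with the plugin's declarative steps: rtDockerPush collects the build info, and rtPublishBuildInfo publishes it as a separate step. A minimal sketch reusing the names from the snippet above (this illustrates the intended flow, not a confirmed fix for the FileNotFoundException):

rtDockerPush(
    serverId: "artifactory-dev",
    image: "ip-address:port/repo/service:1.0",
    targetRepo: "test",
    buildName: "mytest",
    buildNumber: "6"
)
// Publish the collected build info so the build record appears in Artifactory.
rtPublishBuildInfo(
    serverId: "artifactory-dev",
    buildName: "mytest",
    buildNumber: "6"
)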

Related

Access (clone) a Bitbucket repository from another Bitbucket repo's pipeline via SSH

I have a Flutter web project in Bitbucket and I am building a CI/CD pipeline for it. The problem is that the project depends on a package that lives in another Bitbucket repository, and I have not been able to find a way to configure the private SSH key in Bitbucket so that the build can access that project via Git. It gives me the following error:
Downloading Web SDK... 2,828ms
Downloading CanvasKit... 569ms
Running "flutter pub get" in build...
Git error. Command: `git clone --mirror ssh://git@bitbucket.org/... /root/.pub-cache/git/cache/barest-playground-47e65fcf6973f19ceed46038aa27a70e7bc4d47b`
stdout:
stderr: Cloning into bare repository '/root/.pub-cache/git/cache/'...
Warning: Permanently added the RSA host key for IP address '18.205.93.0' to the list of known hosts.
git@bitbucket.org: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
My pipeline
image: cirrusci/flutter

pipelines:
  branches:
    develop:
      - step:
          name: Build
          caches:
            - node
          size: 2x
          script:
            - ./run.sh dev
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: dev
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'true'
    master:
      - step:
          name: Build
          size: 2x
          script:
            - ./run.sh prod
          artifacts:
            - build/**
      - step:
          name: Deploy to Firebase
          deployment: prod
          script:
            - pipe: atlassian/firebase-deploy:1.1.0
              variables:
                FIREBASE_TOKEN: $FIREBASE_TOKEN
                PROJECT_ID: $PROJECTID
                MESSAGE: Deploying in $PROJECTID
                EXTRA_ARGS: --only hosting
                DEBUG: 'false'
Thanks in advance.
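One common workaround is to register an SSH key pair under Repository settings > SSH keys, which Bitbucket injects into the pipeline container automatically, or to install a key manually in the step. A minimal sketch of the manual variant, assuming the private key is stored base64-encoded in a secured repository variable named GIT_SSH_KEY (a name chosen here purely for illustration):

- step:
    name: Build
    script:
      # GIT_SSH_KEY is a hypothetical secured, base64-encoded repo variable
      - mkdir -p ~/.ssh
      - echo "$GIT_SSH_KEY" | base64 -d > ~/.ssh/id_rsa
      - chmod 600 ~/.ssh/id_rsa
      - ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
      - ./run.sh dev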

GitLab CI job works fine but always crashes with exit code 1

I'm trying to lint Dockerfiles using hadolint in GitLab CI with this snippet from my .gitlab-ci.yml file:
lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
  artifacts:
    name: "$CI_JOB_NAME artifacts from $CI_PROJECT_NAME on $CI_COMMIT_REF_SLUG"
    expire_in: 1 day
    when: always
    reports:
      codequality:
        - "reports/*"
    paths:
      - "reports/*"
This used to work perfectly fine, but one week ago (without any change on my part) my pipeline started to crash every time with ERROR: Job failed: exit code 1.
Full log output from job:
Running with gitlab-runner 14.0.0-rc1 (19d2d239)
on docker-auto-scale 72989761
feature flags: FF_SKIP_DOCKER_MACHINE_PROVISION_ON_CREATION_FAILURE:true
Resolving secrets 00:00
Preparing the "docker+machine" executor 00:14
Using Docker executor with image hadolint/hadolint:latest-debian ...
Pulling docker image hadolint/hadolint:latest-debian ...
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
Preparing environment 00:01
Running on runner-72989761-project-26715289-concurrent-0 via runner-72989761-srm-1624273099-5f23871c...
Getting source from Git repository 00:02
$ eval "$CI_PRE_CLONE_SCRIPT"
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/sommerfeld.sebastian/docker-vagrant/.git/
Created fresh repository.
Checking out f664890e as master...
Skipping Git submodules setup
Executing "step_script" stage of the job script 00:01
Using docker image sha256:7caf5ee484b575ecd32219eb6f2a7a114180c41f4d8671c1f8e8d579b53d9f18 for hadolint/hadolint:latest-debian with digest hadolint/hadolint@sha256:2c06786c0d389715dae465c9556582ed6b1c38e1312b9a6926e7916dc4a9c89e ...
$ mkdir -p reports
$ hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json
Uploading artifacts for failed job 00:03
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "archive" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Uploading artifacts...
reports/*: found 1 matching files and directories
Uploading artifacts as "codequality" to coordinator... ok id=1363188460 responseStatus=201 Created token=vNM5xQ1Z
Cleaning up file based variables 00:01
ERROR: Job failed: exit code 1
I have no idea why my build breaks all of a sudden. I'm using image: docker:stable as the image for my whole .gitlab-ci.yml file.
Anyone got an idea?
To conclude this question: the issue was an unexpected change in behavior, probably caused by an update of the hadolint image used here.
The job was in fact failing because the linter decided it should. For anyone wanting the job to succeed anyway, here is a little trick:
hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json || true
The given command forces the exit code to zero (success) no matter what happens.
Another option, as @Sebastian Sommerfeld pointed out, is to use allow_failure: true, which essentially allows the script to fail and marks the job accordingly in the pipeline overview. The only drawback of this approach is that script execution is interrupted at the point of failure and no further commands are executed.
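A minimal sketch of that option applied to the job above:

lint-dockerfile:
  image: hadolint/hadolint:latest-debian
  stage: verify
  allow_failure: true  # the job may fail; the pipeline continues with a warning
  script:
    - mkdir -p reports
    - hadolint -f gitlab_codeclimate Dockerfile > reports/hadolint-$(md5sum Dockerfile | cut -d" " -f1).json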

GitLab CI trying to run PowerShell when not asked to

I am trying to build a web application with GitLab CI.
I created a runner with this configuration:
name = "REDACTED"
url = "REDACTED"
token = REDACTED
executor = "docker-windows"
[runners.custom_build_dir]
[runners.cache]
[runners.cache.s3]
[runners.cache.gcs]
[runners.cache.azure]
[runners.docker]
tls_verify = false
image = "mcr.microsoft.com/powershell"
privileged = false
disable_entrypoint_overwrite = false
oom_kill_disable = false
disable_cache = false
volumes = ["c:\\cache"]
shm_size = 0
Then my .gitlab-ci.yml looks like this:
image: microsoft/dotnet:latest

stages:
  - build
  - test

before_script:
  - "dotnet restore"

node_build:
  stage: build
  only:
    - master
  script:
    - "echo Stage - Build started"
    - "cd ./WebApplication"
    - dir
    - dotnet build

node_test:
  stage: test
  only:
    - master
  script:
    - "echo Stage - Test started"
    - "cd ./WebApplication"
    - dir
    - dotnet build
When the pipeline is run, the output looks like this:
Running with gitlab-runner 13.11.0 (7f7a4bb0)
on REDACTED REDACTED
Preparing the "docker-windows" executor
Using Docker executor with image microsoft/dotnet:latest ...
Pulling docker image microsoft/dotnet:latest ...
Using docker image sha256:34f6f2295334d34567c67059f7c28836c79e014d0c4fadf54de3978798640003 for microsoft/dotnet:latest with digest microsoft/dotnet@sha256:61d86fc52893087df54b0579fcd9c33e144a4b3d34c543a94e6a6b376c74285d ...
Preparing environment
Running on REDACTED via
REDACTED ...
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository in C:/builds/REDACTED /c-sharp-ci-test/.git/
Checking out bbb22919 as master...
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
Skipping Git submodules setup
Executing "step_script" stage of the job script
Using docker image sha256:34f6f2295334d34567c67059f7c28836c79e014d0c4fadf54de3978798640003 for microsoft/dotnet:latest with digest microsoft/dotnet@sha256:61d86fc52893087df54b0579fcd9c33e144a4b3d34c543a94e6a6b376c74285d ...
Cleaning up file based variables
ERROR: Job failed (system failure): Error response from daemon: container e144f05bdd00b4e744554345666afbc008ee2437c7d56bf4a98fbd949a88b1b2 encountered an error during hcsshim::System::CreateProcess: failure in a Windows system call: The system cannot find the file specified. (0x2)
[Event Detail: Provider: 00000000-0000-0000-0000-000000000000]
[Event Detail: Provider: 00000000-0000-0000-0000-000000000000]
[Event Detail: onecore\vm\compute\management\orchestration\vmhostedcontainer\processmanagement.cpp(173)\vmcomputeagent.exe!00007FF7D970B1D7: (caller: 00007FF7D96BE70B) Exception(6) tid(37c) 80070002 The system cannot find the file specified.
CallContext:[\Bridge_ProcessMessage\VmHostedContainer_ExecuteProcess]
Provider: 00000000-0000-0000-0000-000000000000] extra info: {"CommandLine":"powershell -NoProfile -NoLogo -InputFormat text -OutputFormat text -NonInteractive -ExecutionPolicy Bypass -Command -","User":"ContainerUser","WorkingDirectory":"C:\\","Environment"
When I look into the log, it says it tried to run the step_script stage of the job, which I never specified, and it tries to run PowerShell. Why is that happening and how can I get rid of it? I suppose dotnet:latest does not have PowerShell in it, as it is not needed for building.
First, it is always best to use a fixed tag instead of the shifting "latest": from one build to the next, "latest" can reference a new image version.
Second, try a specific dotnet image like mcr.microsoft.com/dotnet/core/sdk:3. instead of microsoft/dotnet:xxx; note, though, that those images are likely to use PowerShell, as seen in their Dockerfile.
Try one of the .NET samples outside of GitLab to see if you can make it work manually, then include it in your gitlab-ci.yml.
Note: from gitlab-org/gitlab-runner issue 26418, step_script would be equivalent to build_script.
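As a sketch of the first point, pin the image in .gitlab-ci.yml to a fixed tag; the tag below is only an example, not a recommendation for this particular runner:

# pinned tag instead of the shifting :latest (example tag)
image: mcr.microsoft.com/dotnet/sdk:6.0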

Artifactory Plugin Proxy Results in /v1/_ping: Bad Gateway

Why do I get /v1/_ping: Bad Gateway errors when I follow the instructions for using the Artifactory plugin with Docker?
jenkins 2.60.3 with Artifactory Plugin 2.12.2
Enable Build-Info proxy for Docker images on port 9999
jenkins /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt added to $JAVA_HOME/jre/lib/security/cacerts on jenkins master and slave
jfrog nginx self sign cert added to $JAVA_HOME/jre/lib/security/cacerts on jenkins master and slave
access to jenkins:9999 open between hosts
/etc/systemd/system/docker.service.d/http-proxy.conf has contained the following, with no difference in the test results:
[Service]
Environment="HTTP_PROXY=http://jenkins:9999/"
[Service]
Environment="HTTPS_PROXY=https://jenkins:9999/"
Local docker test (docker login 127.0.0.1:9999) results in
Error response from daemon: Login: Bad Request to URI: /v1/users/ (Code: 400; Headers: map[Content-Length:[30] Content-Type:[text/html; chars...
Jenkins test results in com.github.dockerjava.api.exception.BadRequestException: Bad Request to URI: /images/artifactory:<port>/hello-world:latest/json
Errors in Jenkins log
SEVERE: (DISCONNECTED) [id: ..., L:0.0.0.0/0.0.0.0:... ! R:artifactory/...:5000]:
Caught an exception on ProxyToServerConnection
io.netty.handler.codec.DecoderException:
javax.net.ssl.SSLHandshakeException: General SSLEngine problem
...
Caused by: sun.security.validator.ValidatorException: PKIX path building
failed: sun.security.provider.certpath.SunCertPathBuilderException:
unable to find valid certification path to requested target
My virtual repo, and its remote and local repos, work when I don't use the Jenkins proxy, but according to the plugin docs I need the Jenkins proxy to get the build info required for CI/CD promotion.
Adding the certs to cacerts is somewhat less effective if Jenkins doesn't use that cert file. I'm unsure whether adding a cert to a store requires a restart in Jenkins, but it does seem to be the case for Tomcat, so that's probably just how Jenkins works.
Configure the Jenkins instance to use a private keystore (see the CloudBees doc on keystores); a keytool sketch follows this list.
Copy $JENKINS_HOME/secrets/jfrog/certs/jfrog.proxy.crt to /etc/docker/certs.d/<host>:<port>/ca.crt
restart docker
Restart jenkins
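As mentioned above, a sketch of importing the proxy cert into a keystore with keytool; the keystore path, password, and alias are placeholders:

keytool -importcert -trustcacerts \
  -keystore /path/to/jenkins/keystore.jks \
  -storepass changeit \
  -alias jfrog-proxy \
  -file /var/lib/jenkins/secrets/jfrog/certs/jfrog.proxy.crt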
test proxy via command line while tailing jenkins log - PASS
docker rmi artifactory:5000/hello-world:latest
docker pull artifactory:5000/hello-world:latest
This should use HTTP_PROXY from /etc/systemd/system/docker.service.d/http-proxy.conf and go to the Jenkins proxy, which then goes to the actual Artifactory host. The required keys should be found in the store, so the SSL handshake will succeed and the v2 API will be used. If not, you'll see errors in jenkins.log.
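To check the handshake independently of Docker, one can inspect the certificate chain the proxy presents (a sketch):

# show the certificate chain served on the build-info proxy port
openssl s_client -connect jenkins:9999 -showcerts </dev/null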
test helloworld on node via shell
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
sh "uname -a "
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-shelltest"
stage('login') {
sh "docker login ${registry} -u ${ARTIFACTORY_USER} -p ${ARTIFACTORY_PASSWORD}"
}
stage('pull and tag') {
sh "docker pull hello-world"
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
sh "docker push ${tag}"
}
}
}
test helloworld on node via artifactory plugin
node("docker-experiments") {
withCredentials([usernamePassword(
credentialsId: 'artifactory.jenkins.user',
passwordVariable: 'ARTIFACTORY_PASSWORD',
usernameVariable: 'ARTIFACTORY_USER')]) {
def server = Artifactory.server "artifactory01"
def artDocker = Artifactory.docker(username: ARTIFACTORY_USER,
password: ARTIFACTORY_PASSWORD)
def registry="artifactory:5000"
def tag="${registry}/hello-world:${BUILD_NUMBER}-artifactoryTest"
def dockerInfo
stage('pull and tag') {
sh "docker tag hello-world:latest ${tag}"
}
stage('push') {
dockerInfo = artDocker.push "${tag}", "docker-local"
}
stage('publish') {
server.publishBuildInfo(dockerInfo)
}
}
}

Getting a "Cannot get CSRF" error when trying to install a Jenkins plugin using Ansible

I am using Ansible to install Jenkins on CentOS.
The installation works fine, but when it comes to the task of installing a plugin, I get the following error:
fatal: [jenkins]: FAILED! => {"changed": false, "details": "Request failed: <urlopen error [Errno 111] Connection refused>", "failed": true, "msg": "Cannot get CSRF"}
The code is as follows:
- name: Install jenkins
  rpm_key:
    state: present
    key: https://pkg.jenkins.io/redhat-stable/jenkins.io.key

- name: Add Repository for jenkins
  yum_repository:
    name: jenkins
    description: Repo needed for automatic installation of Jenkins
    baseurl: http://pkg.jenkins.io/redhat-stable
    gpgcheck: yes
    gpgkey: https://pkg.jenkins.io/redhat-stable/jenkins.io.key

# Pre requisite: add key and repo
- name: Install jenkins
  yum:
    name: jenkins
    state: present

# Start/Stop jenkins
- name: Start jenkins service
  service:
    name: jenkins
    state: started

# Start jenkins on startup
- name: Start jenkins on boot
  shell: chkconfig jenkins on

- name: Install build-pipeline
  jenkins_plugin:
    name: build-pipeline-plugin
  notify:
    - "restart jenkins-service"
You don't seem to wait between starting up Jenkins and trying to install the plugin. The jenkins_plugin module requires a running and working Jenkins installation, so you should wait between Start jenkins service and Install build-pipeline:
- name: Wait for Jenkins to start up
  uri:
    url: http://localhost:8080
    status_code: 200
    timeout: 5
  register: jenkins_service_status
  # Keep trying for 5 mins in 5 sec intervals
  retries: 60
  delay: 5
  until: >
    'status' in jenkins_service_status and
    jenkins_service_status['status'] == 200
In order to skip the startup wizard, I found this (through googling, of course):
- name: Jenkins Skip startUp for MI
  lineinfile:
    dest=/etc/sysconfig/jenkins
    regexp='^JENKINS_JAVA_OPTIONS='
    line='JENKINS_JAVA_OPTIONS="-Djava.awt.headless=true -Djenkins.install.runSetupWizard=false"'
  register: result_skip_startup_wizard
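Since the result is registered, a follow-up task (my assumption, not part of the original answer) can restart Jenkins only when the line actually changed, so the new options take effect:

# assumption: restart so the new JENKINS_JAVA_OPTIONS take effect
- name: Restart jenkins to apply the wizard skip
  service:
    name: jenkins
    state: restarted
  when: result_skip_startup_wizard is changed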
