Prisma randomly not finding Docker image in Jenkins - docker

We have two stages: one to build the Docker image and another to scan it with the Prisma plugin.
Build image:
stage('Build Docker image preproduction') {
    steps {
        script {
            dockerImage = docker.build("${env.docker_image_name}")
        }
    }
}
stage('Prisma Cloud Scan') {
    steps {
        prismaCloudScanImage dockerAddress: "$DOCKER_HOST", image: "${env.docker_image_name}:latest", logLevel: 'debug', resultsFile: 'prisma-cloud-scan-results.json'
    }
}
This works fine most of the time, but occasionally (roughly 1 run in 20) the job fails and we get this error:
[PRISMACLOUD] Scanning images remotely on default-5mn8k
[PRISMACLOUD] Waiting for scanner to complete
[PRISMACLOUD] /home/jenkins/agent/workspace/ild_chore_add-prisma-to-pipeline/twistcli6275500796561372150 images scan otherimagename:1234 --docker-address tcp://localhost:2375 --min-scan-time 1611048549280 --ci --publish --details --address https://XXXXXXXXXprisma_host_hereXXXXX --ci-results-file prisma-cloud-scan-results.json
[ild_chore_add-prisma-to-pipeline] $ /home/jenkins/agent/workspace/ild_chore_add-prisma-to-pipeline/twistcli6275500796561372150 images scan otherimagename:1234 --docker-address tcp://localhost:2375 --min-scan-time 1611048549280 --ci --publish --details --address https://XXXXXXXXXprisma_host_hereXXXX --ci-results-file prisma-cloud-scan-results.json
[PRISMACLOUD] failed to find image otherimagename:1234
[PRISMACLOUD] Scanner failed to run properly. Status: 1
Before this message we can see in the console that the image is already present on the Docker host:
+ docker build -t otherimagename:1234 .
Sending build context to Docker daemon 20.54MB
Step 1/2 : FROM nginx:stable
---> b9e1dc12387a
Step 2/2 : COPY docs /usr/share/nginx/html
---> Using cache
---> 09787d1a562e
Successfully built 09787d1a562e
Successfully tagged otherimagename:1234
Can you help me figure out what's going on? We also added a sleep between the two steps, but we are still facing the issue.
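One way to rule out a race between the build and the scan is to poll the daemon until the freshly built tag is actually visible, instead of using a fixed sleep. A sketch (the stage name and retry counts are made up; it assumes the shell step talks to the same Docker daemon the scanner uses):

```groovy
stage('Wait for image') {
    steps {
        // Poll `docker image inspect` until the tag is visible to the daemon;
        // give up after ~30s so a genuinely missing image still fails the build.
        sh '''
            for i in $(seq 1 10); do
                docker image inspect "${docker_image_name}:latest" >/dev/null 2>&1 && exit 0
                sleep 3
            done
            echo "image ${docker_image_name}:latest never became visible" >&2
            exit 1
        '''
    }
}
```

If the scan still fails intermittently even after the tag is confirmed present, the scanner is probably talking to a different daemon than the build did; compare the `$DOCKER_HOST` seen by both stages (the log above shows the scanner using `tcp://localhost:2375`).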

Thanks, #EFOE, this hint about the Docker config helped. I ran into the same problem via Jenkins. The scans were running on EC2 Jenkins agents, both Windows and Linux for the respective images. While there were no issues with the Linux image scans, the Windows scans failed to find the Docker images.
I debugged the Docker daemon logs on the Windows EC2 agents and found that the images were actually accessible locally on those agents, but the Prisma plugin was unable to access the Docker API for the image details.
Since my agent never had any browsers installed (IE was broken), once I installed Chrome, the Prisma plugin was able to access the Docker images and perform the scans. So basically my agent needed a client to access the Docker API.
There were no issues with Prisma accessing the twistcli binaries; only the Prisma Jenkins plugin on Windows was affected.
Hopefully this will help if someone runs into similar issues.

Related

Jenkins build Docker container on remote host with dockerfile

I'm quite new to Jenkins and have spent two whole days now twisting my head (and Google and Stack Overflow) around how to get a docker container built on a remote host (from the Jenkins host's perspective).
My setup:
Docker runs on a MacOS machine (aka the "remote host")
Jenkins runs as docker container on this machine
Bitbucket Cloud runs at Atlassian
PyCharm is my development tool - running on the MacOS machine
Everything works fine so far. Now, I want Jenkins to build a docker container (on the "remote host") containing my python demo.
I'm using a dockerfile in my project:
FROM python:3
WORKDIR /usr/src/app
COPY . .
CMD ["test.py"]
ENTRYPOINT ["python3"]
I'm trying to build a Jenkinsfile that I expect to do two things:
Pull the repo
Build the docker image with the help of the dockerfile on the "remote host"
The Docker plugin is installed and configured via the Jenkins configuration.
Docker remote host is set up in "Cloud" setup in Jenkins - connection works (with the help of socat running as docker container)
The Docker Host is set to the remote host IP and port 2376
I'm using a jenkins pipeline project.
The most promising thread about using remote hosts is of course https://www.jenkins.io/doc/book/pipeline/docker/#using-a-remote-docker-server
But using docker.withServer('tcp://192.168.178.26:2376') (in my case, locally, no credentials because not reachable from outside), I had no luck at all.
Most common error message: hudson.remoting.ProxyException: groovy.lang.MissingMethodException: No signature of method: org.jenkinsci.plugins.docker.workflow.Docker.withServer() is applicable for argument types: (java.lang.String, java.lang.String) values: [tcp://192.168.178.26:2376]
If I try to let Jenkins build it inside its own container with its own docker, it tells me /var/jenkins_home/workspace/dockerbuild#tmp/durable-6e12255b/script.sh: 1: /var/jenkins_home/workspace/dockerbuild#tmp/durable-6e12255b/script.sh: docker: not found
Strange, as I thought, docker was installed. But I want to build at remote host anyway.
In my eyes the most promising Jenkinsfile is the following - but to be honest, I am totally lost at the moment and really need some help:
node {
    checkout scm
    docker.withServer('tcp://192.168.178.26:2376')
    def customImage = docker.build("my-image:${env.BUILD_ID}")
    customImage.inside {
        sh 'make test'
    }
}
I appreciate any hint and am grateful for your help.
Best regards
Muhackl
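For what it's worth, `docker.withServer` expects a closure; calling it as a bare statement (as in the snippet above) is exactly what produces the `MissingMethodException`. A minimal corrected sketch, using the address from the question and no credentials:

```groovy
node {
    checkout scm
    // withServer scopes all docker.* calls inside the closure to the remote daemon
    docker.withServer('tcp://192.168.178.26:2376') {
        def customImage = docker.build("my-image:${env.BUILD_ID}")
        customImage.inside {
            sh 'make test'
        }
    }
}
```

One caveat: `inside` mounts the workspace into the container, which can behave unexpectedly when the daemon is remote and the workspace path does not exist on that host.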

GitLab CI/CD Docker-In-Docker Failing with Custom DIND Service

I've had CI/CD set up in our private GitLab instance for a while now to build some packages, create docker images from them, and push them to our internal registry. The configuration looks like this:
stages:
  - build

services:
  - docker:18.09.4-dind

image: localregistry/utilities/tools:1.0.5

build:
  stage: build
  script:
    - mvn install
    - docker build -t localregistry/proj/image:$VERSION .
    - docker push localregistry/proj/image:$VERSION
  tags:
    - docker
This has worked quite well up until today, when we started getting hit with rate-limiting errors from Docker. We are a large company, so this isn't entirely unexpected, but it prompted me to look at locally caching some of the Docker images that we use frequently. As a quick test, I pulled the docker:18.09.4-dind image, retagged it, pushed it to our local registry, and changed the line in the CI/CD configuration to:
services:
  - localregistry/utilities/docker:18.09.4-dind
To my surprise, when running the CI/CD job, the image appeared to start up fine but I started having docker problems:
$ docker build -t localregistry/proj/image:$VERSION .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
The next hour or so was spent examining the runner and the various docker environments that get executed there, trying to figure out what the difference could be from simply retagging the DIND image, but I couldn't figure anything out. The only difference I could find was that DOCKER_HOST=tcp://docker:2375 was set in the environment when using docker:18.09.4-dind, but not when using localregistry/utilities/docker:18.09.4-dind - although setting it explicitly didn't help, triggering this message:
error during connect: Get http://docker:2375/v1.39/containers/json?all=1: dial tcp: lookup docker on 151.124.118.131:53: no such host
During that time the rate limit was lifted and I was able to switch back to the normally tagged version, but I can't see a reason why a locally tagged version wouldn't work; any ideas as to why this is?
I guess your whole problem can be solved by using an alias for your new docker dind image. Just replace the services section with the following:
services:
  - name: localregistry/utilities/docker:18.09.4-dind
    alias: docker
This causes your docker daemon (dind) service to be accessible under the name docker, which is the default hostname for docker daemon.
See also extended docker configuration options in GitLab CI for more details.
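Putting the answer together, the relevant fragment of the .gitlab-ci.yml would look roughly like this (setting DOCKER_HOST explicitly is optional once the alias resolves; it is shown only to make the endpoint visible):

```yaml
services:
  - name: localregistry/utilities/docker:18.09.4-dind
    alias: docker

variables:
  # With the alias in place, the hostname "docker" resolves again,
  # matching the daemon address the docker client expects by default.
  DOCKER_HOST: tcp://docker:2375
```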

Cassandra/Scylla on docker without internet in linux server

I have installed Docker on a RedHat/CentOS server. The Docker services are running fine, but how can I install or build a Cassandra/Scylla image on Docker? My server is not connected to the internet, so while building or running the Cassandra/Scylla image I get the error "Unable to find image" with a timeout exception.
Can anyone help with how to build a Cassandra/Scylla Docker image without internet access?
Thanks.
Once you download the image, though, it is very simple to take it offline and load it into an offline system: use docker save to export the image as a file, and docker load to import the image back into Docker.
The problem does not seem to be related to Apache Cassandra or Scylla.
You do need access to Docker hub to download the relevant image for the first time, for example when running docker run hello-world
Once you solve that, you can move on to running Apache Cassandra or Scylla, for example with
docker run --name some-scylla -d scylladb/scylla
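A sketch of the save/load workflow mentioned above (the archive filename and transfer destination are placeholders):

```shell
# On a machine that does have internet access: pull and export the image
docker pull scylladb/scylla
docker save -o scylla.tar scylladb/scylla

# Transfer the archive to the offline server, e.g.:
# scp scylla.tar user@offline-host:/tmp/

# On the offline server: import the image and run it
docker load -i /tmp/scylla.tar
docker run --name some-scylla -d scylladb/scylla
```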

jenkins pipeline docker build on docker agent

I've got a jenkins declarative pipeline build that runs gradle and uses a gradle plugin to create a docker image. I'm also using a dockerfile agent directive, so the entire thing runs inside a docker container. This was working great with jenkins itself installed in docker (I know, that's a lot of docker). I had jenkins installed in a docker container on docker for mac, with -v /var/run/docker.sock:/var/run/docker.sock (DooD) per https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/. With this setup, the pipeline docker agent ran fine, and the docker build command within the pipeline docker agent ran fine as well. I assumed jenkins also mounted the docker socket on its inner docker container.
Now I'm trying to run this on jenkins installed on an ec2 instance with docker installed properly. The jenkins user has the docker group as its primary group. The jenkins user is able to run "docker run hello-world" successfully. My pipeline build starts the docker agent container (based on the gradle image with various things added) but when gradle attempts to run the docker build command, I get the following:
* What went wrong:
Execution failed for task ':docker'.
> Docker execution failed
Command line [docker build -t config-server:latest /var/lib/****/workspace/nfig-server_feature_****-HRUNPR3ZFDVG23XNVY6SFE4P36MRY2PZAHVTIOZE2CO5EVMTGCGA/build/docker] returned:
Cannot connect to the Docker daemon. Is the docker daemon running on this host?
Is it possible to build docker images inside a docker agent using declarative pipeline?
Yes, it is.
The problem is not with Jenkins' declarative pipeline, but with how you're setting up and running things.
From the error above, it looks like there's a missing permission that needs to be granted.
Maybe if you share what your configuration looks like and how you're running things, more people can help.
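As a point of comparison, one setup that commonly works is mounting the host's Docker socket into the agent container, so that `docker build` inside the agent talks to the host daemon (docker-outside-of-docker), mirroring the docker-for-mac setup from the question. A sketch of a declarative pipeline doing this; the `args` line is the key part, and it assumes the agent image contains a Docker CLI (the stage name and gradle task are placeholders):

```groovy
pipeline {
    agent {
        dockerfile {
            // Share the host daemon's socket with the agent container,
            // the same -v flag used in the DooD setup described above.
            args '-v /var/run/docker.sock:/var/run/docker.sock'
        }
    }
    stages {
        stage('Build') {
            steps {
                // The gradle docker task (or a plain `docker build`) now
                // reaches the host daemon through the mounted socket.
                sh './gradlew docker'
            }
        }
    }
}
```

Note that the socket's group must also be accessible to the user inside the agent container; that mismatch is a frequent cause of "Cannot connect to the Docker daemon" even when the jenkins user on the host is in the docker group.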

Jenkins docker setup

I am using Jenkins to make builds of a project, but now my client wants the builds to run inside a Docker image. I have installed Docker on the server and it is running on 172.0.0.1:PORT. I have installed the Docker plugin and assigned this TCP URL as the Docker URL. I have also created an image with the name jenkins-1.
In the project configuration I use the build environment "Build with Docker Container" and provide the image name, then in Build I add an Execute Shell step and run the build.
But it gives the error:
Pull Docker image jenkins-1 from repository ...
$ docker pull jenkins-1
Failed to pull Docker image jenkins-1
FATAL: Failed to pull Docker image jenkins-1
java.io.IOException: Failed to pull Docker image jenkins-1
	at com.cloudbees.jenkins.plugins.docker_build_env.PullDockerImageSelector.prepareDockerImage(PullDockerImageSelector.java:34)
	at com.cloudbees.jenkins.plugins.docker_build_env.DockerBuildWrapper.setUp(DockerBuildWrapper.java:169)
	at hudson.model.Build$BuildExecution.doRun(Build.java:156)
	at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:534)
	at hudson.model.Run.execute(Run.java:1720)
	at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
	at hudson.model.ResourceController.execute(ResourceController.java:98)
	at hudson.model.Executor.run(Executor.java:404)
Finished: FAILURE
I just ran into the same issue. There is a 'Verbose' check-box in the build environment configuration (after selecting the 'Advanced...' link) that expands on the error details:
CloudBees plug-in Verbose option
In my case I had run out of space downloading the build Docker images. Expanding the EC2 volume resolved the issue.
But there is ongoing trouble with space, as Docker does not auto-clean images, and I ended up adding a manual cleanup step to the build:
docker volume ls -qf dangling=true | xargs -r docker volume rm
Complete build script:
https://bitbucket.org/vk-smith/dotnetcore-api/src/master/ci-build.sh?fileviewer=file-view-default
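Beyond the dangling-volume cleanup above, Docker can also prune unused images; a sketch of a more thorough cleanup step (the -f flag skips the confirmation prompt, so be careful on shared build hosts):

```shell
# Remove dangling build volumes, as in the step above
docker volume ls -qf dangling=true | xargs -r docker volume rm

# Remove dangling (untagged) images left behind by repeated builds
docker image prune -f

# Or reclaim all unused data (stopped containers, networks, images) at once:
# docker system prune -af
```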
