GCP Container Scanning not triggered by pushing to Artifact Registry - docker

Even though the Container Scanning API is enabled and Vulnerability scanning is enabled for the Artifact Registry, newly pushed images are not being scanned.
On-demand scan results also don't show up in the Artifact Registry UI.
Is there something else I need to enable?

I believe your containers are being pushed to Google Container Registry (GCR), as evidenced by the "Container Registry host" being eu.gcr.io in your first screenshot. The results should be available on the GCR page and won't show up on the Artifact Registry page.
On-demand scans are not accessible through the UI (Artifact Registry or GCR pages). You can only access the results through the gcloud CLI: https://cloud.google.com/container-analysis/docs/on-demand-scanning-howto#retrieve
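For reference, here is roughly the flow from that guide (the image path is a placeholder; adjust the project, region and flags to your setup):
# Run an on-demand scan of an image already in the registry and capture the scan name
SCAN=$(gcloud artifacts docker images scan eu.gcr.io/PROJECT-ID/IMAGE:TAG --remote --format='value(response.scan)')
# Retrieve the vulnerability findings for that scan
gcloud artifacts docker images list-vulnerabilities "$SCAN"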

Related

Google Artifact Registry and Jenkins: prevent deploy containers with high or critical vulnerabilities

I have a Jenkins Pipeline that builds a container image and pushes it to Google Artifact Registry successfully. I have another job that takes the image tag and deploys it into the K8s cluster, but for security reasons I need to include in my pipeline a step that reviews the vulnerabilities from the Artifact Registry scan and prevents the deployment if there are high or critical vulnerabilities. What would be the best way to accomplish this?
I solved it using the gcloud SDK:
https://cloud.google.com/sdk/gcloud/reference/artifacts/docker/images/describe
Just used:
gcloud artifacts docker images describe IMAGE --show-package-vulnerability
Note that the service account used in Jenkins needs the appropriate permissions.
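For illustration, a minimal sketch of such a gate step, assuming jq is available on the agent; the image path is a placeholder and the jq path is an assumption about the JSON layout of the describe output, so adjust it to what you actually get back:
# Hypothetical gate: block the deployment when HIGH or CRITICAL findings are present
IMAGE="europe-docker.pkg.dev/PROJECT/REPO/app:TAG"   # placeholder
SEVS=$(gcloud artifacts docker images describe "$IMAGE" --show-package-vulnerability --format=json \
  | jq -r '[.. | .effectiveSeverity? // empty] | unique | join(",")')
case "$SEVS" in
  *CRITICAL*|*HIGH*) echo "Blocking deployment, severities found: $SEVS"; exit 1 ;;
esac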

remote Docker Repository behaviour in Artifactory

I have a remote docker repository configured in Artifactory (to docker hub). To test it I've created docker image A and pushed it to docker hub.
The image is user-name/image:latest.
Now I can pull it from artifactory using artifactory-url/docker/user-name/image:latest.
Now I've updated image A to image B and pushed it to docker hub. When I remove my local images and pull this image again from Artifactory I still get the image A (so it seems the cache is used). When I set the following setting to zero (Metadata Retrieval Cache Period) I'll pull the updated image B.
All fine. Now I increase the Metadata Retrieval Cache Period setting again. I've now deleted the image from Docker Hub and try to pull it again through Artifactory. This fails, although I was hoping it would just be served from the Artifactory cache.
I can also not pull it using the cache directly: docker pull artifactory-url/docker-cache/user-name/image:latest.
Is there a way to use a docker image from artifactory which is deleted in the remote repository?
The first part you describe is OK and it's the expected behavior. The second part is also expected behavior, and I will explain why: when you use a virtual repository as your Artifactory Docker registry, it always searches for artifacts in the local repositories first, then in the remote-cache, and only then in the remote itself. However, if Artifactory finds the package in the local or remote-cache repositories, it also checks the remote for newer versions. As a result, cached images that have been deleted from the remote itself cannot be downloaded from the remote-cache in Artifactory, since Artifactory receives a 404 error from the remote repository. You can fix this by moving the image to the local repository, after which you will be able to pull it.
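If it helps, here is a hedged sketch of that workaround using the Artifactory move REST API; the repository keys, credentials and image path are placeholders for your own setup:
# Move the cached image folder from the remote-cache repository into a local repository
curl -u admin:password -X POST \
  "https://artifactory-url/artifactory/api/move/docker-remote-cache/user-name/image/latest?to=/docker-local/user-name/image/latest"
# Afterwards the image should again be pullable through the virtual repository
docker pull artifactory-url/docker/user-name/image:latest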

Problem with pulling docker images from gcr in kubernetes plugin / jenkins

I have a GKE cluster with a running Jenkins master. I am trying to start a build. I am using a pipeline with a slave configured by the Kubernetes plugin (pod templates). I have a custom image for my Jenkins slave published in GCR (private access). I have added credentials (Google service account) for my GCR to Jenkins. Nevertheless Jenkins/Kubernetes is failing to start up a slave because the image can't be pulled from GCR. When I use public images (jnlp) there is no issue.
But when I try to use the image from gcr, kubernetes says:
Failed to pull image "eu.gcr.io/<project-id>/<image name>:<tag>": rpc error: code = Unknown desc = Error response from daemon: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Although the pod is running in the same project as the gcr.
I would expect Jenkins to start the slave even if I use an image from GCR.
Even if the pod is running in a cluster in the same project, it is not authenticated by default.
You stated that you've already set up the Service Account, and I assume there's a furnished key on the Jenkins server.
If you're using the Google OAuth Credentials Plugin you can then also use the Google Container Registry Auth Plugin to authenticate to a private GCR repository and pull the image.
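As an alternative to the plugin route, a common pattern is to turn the furnished key into a Kubernetes imagePullSecret and reference it in the pod template; a sketch with placeholder names (key.json, the namespace and the secret name):
# Create a pull secret from the service-account key; _json_key is the fixed username GCR expects
kubectl create secret docker-registry gcr-pull-secret \
  --docker-server=https://eu.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=not-used@example.com \
  --namespace=jenkins
# Then set "gcr-pull-secret" as the ImagePullSecrets value of the Kubernetes plugin pod template
# (or attach it to the namespace's default service account) so the slave image can be pulled.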

Automatically pull docker image container

I have a linux bare-metal server with docker installed.
I work on an asp.net core project on my computer.
My source code is pushed on github.
Each time I commit and push something, GitHub triggers a webhook on my Docker Hub account.
Docker Hub builds a new image which contains my ASP.NET Core application binaries. (Docker Hub also runs the tests.)
This image works fine when I pull it manually on my server.
My question is: how can I do this automatically? Is there a way for my server to "detect" that Docker Hub contains a new version of the image and run something to pull this image and fire database migrations automatically?
Thanks
If you have a public IP that external services such as Docker Hub can reach, then you can use Docker Hub Webhooks:
You can create a webhook and set the URL at which Docker Hub can reach your service; when an image is pushed, Docker Hub will POST some JSON data to the URL you provided (the Docker Hub docs show an example payload), and your own endpoint can then receive that data and do whatever you like with it.
And if you use Jenkins, there are lots of plugins that help you do similar things: see Triggering Docker pipelines with Jenkins, and also Polling Docker Registries for Image Changes.
If you don't have a public IP that Docker Hub can reach, then I guess you have to poll Docker Hub to see if a new image is there...
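For the polling case, a rough sketch of what the cron job could look like; the image, container name and migration command are placeholders, and it assumes a single-platform image plus a Docker version where docker manifest inspect is available:
# Compare the digest on Docker Hub with the locally pulled digest and redeploy on change
REMOTE=$(docker manifest inspect --verbose myuser/myapp:latest | jq -r '.Descriptor.digest')
LOCAL=$(docker image inspect myuser/myapp:latest --format '{{index .RepoDigests 0}}' | cut -d@ -f2)
if [ "$REMOTE" != "$LOCAL" ]; then
  docker pull myuser/myapp:latest
  docker rm -f myapp 2>/dev/null || true
  docker run -d --name myapp -p 80:80 myuser/myapp:latest
  # fire the database migrations here, e.g. docker exec myapp dotnet ef database update
fi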

Docker trigger jenkins job when image is pushed

I am trying to trigger a Jenkins job (trigger builds remotely) on a Docker image build, but all I am getting on Docker Hub is the following:
HISTORY
ID Status Date & Time
7345... ! ERROR 10/12/17 10:03
Reason (I assume): Docker is not authenticated to post to the jenkins url.
Question: How can I trigger the job automatically when an image gets pushed to docker hub?
Pull and run the Watchtower Docker image to poll any third-party public Docker image on Docker Hub or Quay that you need (typically as a base image of your own containers). Here's how. "Polling" here does not mean crudely pulling the whole image every 5 minutes or so: we are periodically monitoring the image for changes, downloading only the checksum (SHA digest) most of the time (when there are no changes relative to the locally cached image).
Install the Build Token Root Plugin on your Jenkins server and set it up to receive Slack-formatted notifications secured with a token to trigger builds remotely or, safer, locally (those triggers will be coming from the Watchtower container, not Slack). Here's how.
Set up Watchtower to post Slack messages to your Jenkins endpoint upon every change in the image(s) (tags) that you want; a minimal sketch follows these steps. Here's how.
Optionally, if your scale is so large that you could end up overloading the entire Docker Hub with a flood of HTTP GET requests (should the time triggers go wrong and turn into a tight loop), make sure to build in some safety checks on top of Watchtower to "watch the watchman".
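A sketch of the Watchtower side, assuming the containrrr/watchtower image and its Slack notification settings; the container name, Jenkins URL, job name and token are placeholders (the URL format comes from the Build Token Root Plugin):
# Watch the named container's image every 5 minutes and post a Slack-formatted
# notification to the Jenkins build-token endpoint whenever the image changes
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_NOTIFICATIONS=slack \
  -e WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL="https://jenkins.example.com/buildByToken/build?job=my-job&token=MY_TOKEN" \
  containrrr/watchtower --interval 300 my-app-container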
You can try the following plugin, which claims to do what you're looking for: https://wiki.jenkins.io/display/JENKINS/CloudBees+Docker+Hub+Notification
You can configure a webhook in Docker Hub which will trigger the Jenkins build.
Docker Hub webhooks targeting your Jenkins server endpoint require making periodic copies of the image to another repo that you own [see my other answer with Docker Hub -> Watchtower -> Jenkins integration through Slack notifications].
More details
You need to set up a cron job with periodic polling (docker pull) of the source repo to [docker] pull its 'latest' tag, and if a change is detected, re-tag it as your own and [docker] push it to a repo you own (e.g. a "clone" of the source Docker Hub repo) where you have set up a webhook targeting your Jenkins build endpoint.
Then and only then (in a repo you own) will Jenkins plugins such as Docker Hub Notification Trigger work for you.
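A hedged sketch of that periodic copy (the upstream and mirror repo names are placeholders; the webhook on the mirror repo is configured separately in Docker Hub):
# Pull the upstream tag and, if its image ID changed, re-tag and push it to your own repo
SRC=upstream-org/upstream-image:latest
DST=myuser/upstream-mirror:latest
OLD=$(docker image inspect --format '{{.Id}}' "$SRC" 2>/dev/null || true)
docker pull "$SRC"
NEW=$(docker image inspect --format '{{.Id}}' "$SRC")
if [ "$OLD" != "$NEW" ]; then
  docker tag "$SRC" "$DST"
  docker push "$DST"   # this push fires the Docker Hub webhook that triggers Jenkins
fi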
Polling for Dockerfile / release changes
As an alternative to polling the registry for image changes (which need not generate much network traffic thanks to the local cache of Docker images), you can also poll the source Dockerfile on GitHub using wget. For instance, the Dockerfiles of the official Docker Hub images are here. When the GitHub repo makes releases, you can get push notifications for them using the GitHub Watch > Releases Only feature, provided they have CI Docker builds. Docker images will usually be available with a delay after code releases, even with complete automation, so image polling is more reliable.
Other projects
There was also a proposal for a 2019 Google Summer of Code project called Polling Docker Registries for Image Changes that tried to solve this problem for Jenkins users (incl. apparently Google), but sadly it was not taken up by participants.
Run a cron job with a periodic docker search to list all tags in the docker image of interest (here's the script). Note that this script requires the substitution of the jannis/jq image with an existing image (e.g. docker run --rm -i imega/jq).
Save resulting tags list to a file, and monitor it for changes (e.g. with inotifywait).
Fire a POST request with curl at your Jenkins server's endpoint using the Generic Webhook Trigger plugin (a rough sketch of these steps follows the cautions below).
Cautions:
for efficiency reasons this tags listing script should be limited to a few (say, 3) top pages or simple repos with a few tags,
image tag monitoring relies on tags being updated correctly (automatically) after each image change, rather than being stuck in the past, as with some Ubuntu tags (e.g. trusty-20190515 was updated a few days ago, in late November, without a change to its mid-May date tag).
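If it helps, a rough sketch of steps 1-3 above, using a plain file comparison in the cron job instead of inotifywait; the repo, file paths, Jenkins URL and token are placeholders, and the Docker Hub tags endpoint is an assumption:
# List the repo's tags, compare with the previous run, and trigger Jenkins on change
REPO=library/alpine   # placeholder
curl -s "https://hub.docker.com/v2/repositories/${REPO}/tags/?page_size=100" \
  | docker run --rm -i imega/jq -r '.results[].name' | sort > /tmp/tags.new
if ! cmp -s /tmp/tags.new /tmp/tags.old 2>/dev/null; then
  mv /tmp/tags.new /tmp/tags.old
  # Generic Webhook Trigger plugin endpoint
  curl -s -X POST "https://jenkins.example.com/generic-webhook-trigger/invoke?token=TOKEN"
fi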
