Retrieve image digest value from Docker task inside Azure Pipelines

I am using Azure Pipelines (with YAML format) to build Dockerfile and push the image to Azure Container Registry.
Here is part of the YAML definition:
- task: Docker@2
  displayName: Build Dockerfile
  inputs:
    command: 'build'
    containerRegistry: 'containerRegistry'
    repository: '$(imageRepository)'
    Dockerfile: 'src/Api/Dockerfile'
    buildContext: '.'
    tags: '$(imageTag)'
- task: Docker@2
  displayName: Push image
  inputs:
    command: push
    containerRegistry: 'containerRegistry'
    repository: '$(imageRepository)'
    tags: '$(imageTag)'
So my question is: is there a way to retrieve the digest value from the Docker push task, so I can use it in subsequent tasks?
It seems that in older versions of the Docker task this was possible via a task parameter called imageDigestFile; I am referring to Docker@0.
Unfortunately that now looks deprecated and I can't find a way to do it using the latest version.
Thanks!
Best regards,
Nikolay

Unfortunately now that looks deprecated and I can't find a way to do it using the latest version.
This is a known issue with the latest version of the Docker push task:
How to use output of DockerV2 task
That is because the product team tried to limit the number of inputs to the task to simplify it in DockerV2, so support for image digest files was not provided. But the image digest is written to an output variable called DockerOutput.
The source code here.
The product team will work with the people involved in the design of this task to see how to support this.
To work around the issue, we could use an older version of the Docker task, or parse the DockerOutput variable and extract the image digest from it.
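As a sketch of the parsing approach: the exact contents of DockerOutput depend on the task version, but a docker push normally ends with a `<tag>: digest: sha256:... size: ...` line, so the digest can be pulled out with sed and re-published as a pipeline variable for later tasks (the sample output below is hypothetical):

```shell
# Hypothetical captured push output, e.g. the value of the task's DockerOutput variable;
# the "digest: sha256:... size: ..." line is what docker push normally prints.
push_output='v1.0.0: digest: sha256:6c3c624b58dbbcd3c0dd82b4c53f04194d1247c6eebdaab7c610cf7d66709b3b size: 528'

# Extract just the sha256 digest from that line
digest=$(printf '%s\n' "$push_output" | sed -n 's/.*digest: \(sha256:[0-9a-f]*\).*/\1/p')

# Expose it to subsequent pipeline tasks as $(IMAGE_DIGEST) via a logging command
echo "##vso[task.setvariable variable=IMAGE_DIGEST]$digest"
```

Run inside a script step after the push task, subsequent tasks could then reference the digest as `$(IMAGE_DIGEST)`.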

Related

In Azure DevOps, my ECR push image task fails after a few tries with no insight

- task: Docker@2
  displayName: Build an image
  inputs:
    command: build
    repository: weather-update-project
    dockerfile: '**/Dockerfile'
    buildContext: '$(Build.SourcesDirectory)'
    tags: 'latest'
- task: ECRPushImage@1
  inputs:
    awsCredentials: 'weather'
    regionName: us-west-2
    imageSource: 'imagename'
    sourceImageName: 'weather-update-project'
    sourceImageTag: 'latest'
    pushTag: 'latest'
    repositoryName: 'weather-update-project'
I'm building an image and then trying to push that image to ECR. When it gets to the ECR push image task, it tries to push a few times and then gives me the error "The process '/usr/bin/docker' failed with exit code 1", and that's it. There's no other information in my logs regarding the error like there normally is. What could be happening? My ECR is public and all of my credentials are correct. Here's my YAML code for the Docker build and ECRPushImage tasks in Azure DevOps.
My Repository name that contains my dockerfile is 'weather-update-project' and my ECR repository also has the name 'weather-update-project'
Can you please validate which agent this is running on, and whether Docker is available there?
Is the image being created properly?
While executing the ECRPushImage task, at the beginning it should show at least a configuration log like the one below; if not, then the problem is related to Docker on that agent.
Configuring credentials for task
...configuring AWS credentials from service endpoint 'xxxxxxxxxxxx'
...endpoint defines standard access/secret key credentials
Configuring region for task
...configured to use region us-east-1, defined in tas
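One quick way to check both points is a diagnostic script step just before the push (a hypothetical addition, not part of the original pipeline; it fails fast if Docker is missing and shows whether the image actually exists on the agent):

```yaml
- script: |
    docker --version
    docker image ls weather-update-project
  displayName: Verify Docker is present and the image was built
```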

How to implement a multistage Docker build with GitHub Actions?

Problem
I have a multistage (two-stage) Docker build for my container, let's name it cont, that I want to automate via GitHub Actions. The first stage/Docker image of the build process seldom changes and takes very long to build; let's call it cont-build. I want to reduce build duration by not building cont-build every time I build the whole project.
When running that build locally, I have the cont-build image easily available through my local Docker instance. I struggle to transfer this simple availability to GitHub Actions.
I checked the Docker and GitHub docs, but was unable to find a way of implementing this. It is so simple on a local machine, so I thought it cannot be that hard on GitHub Actions...
Approach
To persist the cont-build image, there seem to be different approaches:
Use some sort of GitHub cache. I am not sure how long images are cached for.
Pull the image from DockerHub, which in the case of long build times may be much faster than rebuilding.
The second one seems more straightforward and less complex to me. So my approach was to publish cont-build to DockerHub and pull cont-build in the GitHub Action every time I want to build cont.
I tried using uses: docker://${{ secrets.DOCKERHUB_USERNAME }}/cont-build, but do not know where to place it.
Question
Where/how do I pull the cont-build image that is required by the Dockerfile-cont "Build and push" step in the workflow below? Also, if my approach is bad, what is the general approach to multi-stage builds where one stage of the build never or seldom changes, especially taking into account the fact that GitHub caches might be deleted after a while?
I realise that I can use something like FROM mydockerID/cont-build:latest in Dockerfile-cont, but that does not seem to be the solution that leverages the whole GitHub workflow environment. It would also mean that I have to enter my Docker ID in clear text as opposed to using a GitHub secret.
name: CI for cont
on: workflow_dispatch
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to DockerHub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push
        id: docker_build
        uses: docker/build-push-action@v2
        with:
          context: ./Code/
          file: ./Code/Dockerfile-cont
          push: true
          tags: ${{ secrets.DOCKERHUB_USERNAME }}/cont:latest
      - name: Image digest
        run: echo ${{ steps.docker_build.outputs.digest }}
The problem with multi-stage builds is that if you want caching to work you need:
Access to the intermediate stages as well, as part of the rebuild.
To use --cache-from to refer to the previous images, including intermediate stages.
If you think about how rebuilds work: if you are missing the intermediate stages, the builder will go "huh, I guess I don't have that in cache" and rebuild; it can't tell whether the final stage needs to be rebuilt until it has gone through all previous steps.
So you need to do the following song and dance, assuming two stages, "build" and runtime:
Pull "yourimage:latest" and "yourimage:build".
Build and tag each stage, e.g. "yourimage:build" and "yourimage:latest", with --cache-from=yourimage:build --cache-from=yourimage:latest.
Push both of those images.
You can see specific details and more extended explanation, and example solution, at https://pythonspeed.com/articles/faster-multi-stage-builds/
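Concretely, that pull/build/push dance might look like this in a CI script (a sketch, assuming a registry image named yourimage and a stage named build in the Dockerfile; the `|| true` lets the very first run proceed with an empty cache):

```shell
# Warm the local cache from the registry (tolerate a cold start)
docker pull yourimage:build || true
docker pull yourimage:latest || true

# Rebuild the intermediate stage, reusing its previous layers
docker build --target build \
  --cache-from=yourimage:build \
  -t yourimage:build .

# Rebuild the final stage, reusing both caches
docker build \
  --cache-from=yourimage:build \
  --cache-from=yourimage:latest \
  -t yourimage:latest .

# Push both so the next run can pull them as cache
docker push yourimage:build
docker push yourimage:latest
```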

How to deploy an image to EC2 using Docker and CircleCI (CI/CD)

I am trying to implement CI/CD. So far what I have is CI using CircleCI and the circleci/aws-ecr@6.15.0 orb, so when I push to master the image is created and pushed to ECR. That works perfectly, but what I want to implement is the deployment.
I think that I am not searching well on Google, but I cannot find any tutorial or explanation of how to do it.
I read that the circleci/aws-ecs@01.4.0 orb seems to do the job, but I really don't know how to proceed.
This is my YAML file:
version: 2.1
orbs:
  node: circleci/node@4.1.0
  aws-ecr: circleci/aws-ecr@6.15.0
  aws-ecs: circleci/aws-ecs@01.4.0
workflows:
  build_and_push_image:
    jobs:
      - aws-ecr/build-and-push-image:
          repo: node.js-api
          tag: "latest,v0.1.${CIRCLE_BUILD_NUM}"
          dockerfile: "Dockerfile"
          path: .
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: "${AWS_RESOURCE_NAME_PREFIX}-service"
          cluster-name: "${AWS_RESOURCE_NAME_PREFIX}-cluster"
          container-image-name-updates: "container=${AWS_RESOURCE_NAME_PREFIX}-service,tag=${CIRCLE_SHA1}"
Error:
An error occurred (ClientException) when calling the
DescribeTaskDefinition operation: Unable to describe task definition.
What I want to achieve is an automated deploy. I chose CircleCI because I thought it would also be a good option with my Bitbucket repo, but if there is another way to achieve this I am totally open to suggestions.
Thanks!!

GitLabCI Kaniko on shared runner "error checking push permissions -- make sure you entered the correct tag name"

This similar question is not applicable because I am not using Kubernetes or my own registered runner.
I am attempting to build a Ruby-based image in my GitLabCI pipeline in order to have my gems pre-installed for use by subsequent pipeline stages. In order to build this image, I am attempting to use Kaniko in a job that runs in the .pre stage.
build_custom_dockerfile:
  stage: .pre
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  variables:
    IMAGE_TAG: ${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}
  script:
    - echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context ${CI_PROJECT_DIR} --dockerfile ${CI_PROJECT_DIR}/dockerfiles/custom/Dockerfile --destination \
      ${CI_REGISTRY_IMAGE}:${IMAGE_TAG}
This is of course based on the official GitLabCI Kaniko documentation.
However, when I run my pipeline, this job returns an error with the following message:
error checking push permissions -- make sure you entered the correct tag name, and that you are authenticated correctly, and try again: getting tag for destination: registries must be valid RFC 3986 URI authorities: registry.gitlab.com
The Dockerfile path is correct and through testing with invalid Dockerfile paths to the --dockerfile argument, it is clear to me this is not the source of the issue.
As far as I can tell, I am using the correct pipeline environment variables for authentication and following the documentation for using Kaniko verbatim. I am running my pipeline jobs with GitLab's shared runners.
According to this issue comment from May, others were experiencing a similar issue, which was then resolved by reverting to the debug-v0.16.0 Kaniko image. Likewise, I changed the image name line to name: gcr.io/kaniko-project/executor:debug-v0.16.0, but this resulted in the same error message.
Finally, I tried creating a generic user to access the registry, using a deployment key as indicated here. Via the GitLabCI environment variables project settings interface, I added two variables corresponding to the username and key, and substituted these variables in my pipeline script. This resulted in the same error message.
I tried several variations on this approach, including renaming these custom variables to "CI_REGISTRY_USER" and "CI_REGISTRY_PASSWORD" (the predefined variables). I also made sure neither of these variables was marked as "protected". None of this solved the problem.
I have also tried running the tutorial script verbatim (without custom image tag), and this too results in the same error message.
Has anyone had any recent success in using Kaniko to build Docker images in their GitLabCI pipelines? It appears others are experiencing similar problems but as far as I can tell, no solutions have been put forward and I am not certain whether the issue is on my end. Please let me know if any additional information would be useful to diagnose potential problem sources. Thanks all!
I ran into this issue many times before, forgetting that the variable was set to protected and thus would only be exported to protected branches.
Hey, I got it working, but it was quite a hassle to find out.
The credentials I had to use were my Git username and password, not the registry user/password!
Here is what my gitlab-ci.yml looks like (of course you would need to replace everything with variables, but I was too lazy to do it until now):
build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  tags:
    - k8s
  script:
    - echo "{\"auths\":{\"registry.mydomain.de/myusername/mytag\":{\"username\":\"myGitusername\",\"password\":\"myGitpassword\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination registry.mydomain.de/myusername/mytag:$CI_COMMIT_SHORT_SHA
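Whichever credentials end up working, the escaped-quote echo line is easy to get subtly wrong. A quick local sanity check (using hypothetical values in place of the CI variables) is to generate the file the same way and confirm it parses as JSON:

```shell
# Hypothetical values standing in for the GitLab CI variables
CI_REGISTRY="registry.example.com"
CI_REGISTRY_USER="myuser"
CI_REGISTRY_PASSWORD="mypassword"

# Build config.json exactly as the pipeline script does
echo "{\"auths\":{\"${CI_REGISTRY}\":{\"username\":\"${CI_REGISTRY_USER}\",\"password\":\"${CI_REGISTRY_PASSWORD}\"}}}" > config.json

# If the escaping is wrong, this fails with a parse error
python3 -m json.tool config.json
```

Note that the auths key should be the registry host Kaniko authenticates against; stray whitespace or a broken quote here surfaces later as a confusing push-permission error.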

Can I run a job under another job's steps in CircleCI 2.0?

Is it possible to have one job run in the context of another job? I have some jobs that share some steps, and I don't want to repeat these steps in the different jobs.
push-production-image:
  docker:
    - image: google/cloud-sdk:latest
  working_directory: ~/app
  steps:
    - setup-gcp-docker
    - run: docker push [image]
No, you cannot. However, YAML itself has a way to solve this problem, with what are called YAML anchors and aliases.
Here's a blog post I wrote on how to do specifically this: https://circleci.com/blog/circleci-hacks-reuse-yaml-in-your-circleci-config-with-yaml/
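As a sketch of that technique, the shared steps can be defined once under an anchor and reused via an alias (the staging job and the `references` top-level key are hypothetical; CircleCI has historically tolerated extra top-level keys used this way):

```yaml
references:
  gcp-push-steps: &gcp-push-steps
    - setup-gcp-docker
    - run: docker push [image]

version: 2
jobs:
  push-staging-image:
    docker:
      - image: google/cloud-sdk:latest
    working_directory: ~/app
    steps: *gcp-push-steps
  push-production-image:
    docker:
      - image: google/cloud-sdk:latest
    working_directory: ~/app
    steps: *gcp-push-steps
```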
