CircleCI: passing a Docker image between workflow jobs

Is it possible to pass a Docker image built in an earlier job to a later job in a CircleCI workflow? For example:
jobs:
  build:
    steps:
      - checkout
      # build image
  deploy:
    steps:
      # deploy the image built earlier
I can't see how I can access the image in the deploy job without rebuilding it.

Each job can run on a different host, so to share the image you would need to push it to a registry from the job that builds it.
To reference the same image that was pushed, you'll need an identifier that is known ahead of time. A good example of this is the CIRCLE_SHA1 environment variable. You can use this variable as the image tag:
jobs:
  build:
    machine: true
    steps:
      ...
      - run: |
          docker build -t repo/app:$CIRCLE_SHA1 .
          docker push repo/app:$CIRCLE_SHA1
  test:
    docker:
      - image: repo/app:$CIRCLE_SHA1
    steps:
      ...
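If repo/app is a private repository, the test job also has to authenticate against the registry before it can pull that image. A minimal sketch, assuming the credentials are stored in DOCKERHUB_USER and DOCKERHUB_PASS project environment variables (names chosen here for illustration):

test:
  docker:
    - image: repo/app:$CIRCLE_SHA1
      auth:
        username: $DOCKERHUB_USER
        password: $DOCKERHUB_PASS
  steps:
    ...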

I believe you can achieve this by persisting the image to a workspace and then attaching the workspace when you want to deploy it. See CircleCI's workspace documentation here: https://circleci.com/docs/workspaces
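A minimal sketch of that approach, assuming a machine executor and an image name of repo/app chosen purely for illustration: the build job writes the image to a tar archive with docker save and persists it to the workspace, and the deploy job attaches the workspace and restores the image with docker load.

jobs:
  build:
    machine: true
    steps:
      - checkout
      - run: |
          docker build -t repo/app:latest .
          docker save -o app.tar repo/app:latest
      - persist_to_workspace:
          root: .
          paths:
            - app.tar
  deploy:
    machine: true
    steps:
      - attach_workspace:
          at: .
      - run: |
          docker load -i app.tar
          # repo/app:latest is now available to deploy

Note that saving and loading large images through the workspace can be slow, so the registry approach above is usually preferable for big images.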

Related

Run GitHub workflow on Docker image with a Dockerfile?

I would like to run my CI on a Docker image. How should I write my .github/workflows/main.yml?
name: CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
jobs:
  build:
    name: build
    runs:
      using: 'docker'
      image: '.devcontainer/Dockerfile'
    steps:
      - uses: actions/checkout@v2
      - name: Build
        run: make
I get the error:
The workflow is not valid. .github/workflows/main.yml
(Line: 11, Col: 5): Unexpected value 'runs'
I managed to make it work but with an ugly workaround:
build:
  name: Build Project
  runs-on: ubuntu-latest
  steps:
    - name: Checkout code
      uses: actions/checkout@v1
    - name: Build docker images
      run: >
        docker build . -t foobar
        -f .devcontainer/Dockerfile
    - name: Build exam
      run: >
        docker run -v
        $GITHUB_WORKSPACE:/srv
        -w /srv foobar make
Side question: where can I find the documentation about this? All I found is how to write actions.
If you want to use a container to run your actions, you can use something like this:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://{host}/{image}:{tag}
    steps:
      ...
Here is an example.
If you want more details about the jobs.<job_id>.container and its sub-fields, you can check the official documentation.
Note that you can also use docker images at the step level: Example.
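A minimal sketch of the step-level variant, using a public image chosen purely for illustration; for a docker:// step, with.entrypoint and with.args control what runs inside the container:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run make inside a container
        uses: docker://docker.io/library/gcc:12
        with:
          entrypoint: make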
I am reposting my answer to another question here, so that it can be found more easily when Googling.
The best solution is to build, publish and re-use a Docker image based on your Dockerfile.
I would advise creating a custom build-and-publish-docker.yml workflow following the GitHub documentation: Publishing Docker images.
Assuming your repository is public, you should be able to automatically upload your image to ghcr.io without any required configuration. As an alternative, it's also possible to publish the image to Docker Hub.
Once your image is built and published (based on the on: event of the workflow created above, which can also be triggered manually), you just need to update your main.yml workflow so it uses the custom Docker image. Again, there is a pretty good documentation page about the container option: Running jobs in a container.
As an example, I'm sharing what I used in a personal repository:
Dockerfile: the Docker image to be built on CI
docker.yml: the workflow that builds the Docker image
lint.yml: the workflow that uses the built Docker image
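A minimal sketch of the two pieces, with the image path and file names chosen purely for illustration (replace OWNER/REPO with your own namespace):

# docker.yml: build the image from the Dockerfile and push it to ghcr.io
name: Publish Docker image
on:
  push:
    branches: [ master ]
  workflow_dispatch:
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v2
      - uses: docker/login-action@v2
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          docker build -t ghcr.io/OWNER/REPO/ci:latest -f .devcontainer/Dockerfile .
          docker push ghcr.io/OWNER/REPO/ci:latest

# main.yml: run the CI job inside the published image
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: ghcr.io/OWNER/REPO/ci:latest
    steps:
      - uses: actions/checkout@v2
      - run: make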

GitLab CI: old Docker images?

Our project uses a multi-stage CI setup where the first stage checks for modification of files like package-lock.json and Gemfile.lock, compiles all these dependencies, and then pushes them to the GitLab container registry.
Using --cache-from in the Docker build, based on the current mainline branch, this is quite fast, and the Docker layering mechanism helps to avoid repeating steps.
Subsequent stages and jobs then use the Docker image pushed in the first stage as their image:.
Abbreviated configuration for readability:
stages:
  - create_builder_image
  - test

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    # do stuff in the context of the image built in the first stage
Unfortunately, when working on longer-running feature branches, we now have a situation where the image used in the second stage is sometimes outdated: the runner does not pull the latest version from the registry before starting the job, which makes subsequent jobs complain about missing dependencies.
Is there anything I can do to force it to always pull the latest image for each job?
As already written in the comments, I would not use $CI_COMMIT_REF_SLUG for tagging, simply because it is not guaranteed that all pipelines will run in the same order, and that alone can create issues like the one you are currently experiencing.
I recommend using $CI_COMMIT_SHA instead, as it is bound to the commit that triggered the pipeline. I would also rely on previous builds for caching; here is a short outline of my approach:
stages:
  - create_builder_image
  - test
  - deploy

Create Builder Image:
  stage: create_builder_image
  script:
    # reuse the branch image or the latest image as a cache source, if either exists
    - '{ docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG && export DOCKER_CACHE_TAG=$CI_COMMIT_REF_SLUG; } || { docker pull $GITLAB_IMAGE/builder:latest && export DOCKER_CACHE_TAG=latest; } || true'
    - docker build --cache-from $GITLAB_IMAGE/builder:$DOCKER_CACHE_TAG ...
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_SHA

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
  stage: test
  script:
    # do stuff in the context of the image built in the first stage

Push image:
  # push the image under the branch ref as well, since at this point it is known
  # to be working and can then be used as a cache source by other pipelines
  image: docker:20
  stage: deploy
  variables:
    GIT_STRATEGY: none
  script:
    - docker pull $GITLAB_IMAGE/builder:$CI_COMMIT_SHA
    - docker tag $GITLAB_IMAGE/builder:$CI_COMMIT_SHA $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
    - docker push $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
I know this might add additional build steps, but this way you can ensure that you always have the image that belongs to the pipeline. You can still use Docker's caching and layering, and as a bonus, the branch-ref tag is only pushed if the tests pass.
Furthermore, you can also add a step before building the image to figure out whether you need a new image at all, as sketched below.
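One way to do that, sketched here as an assumption rather than something from the original answer, is to use rules:changes so the build job only runs when the files baked into the builder image have changed. Note that if the build job is skipped, later jobs must reference a tag that already exists (for example the branch tag) instead of $CI_COMMIT_SHA:

Create Builder Image:
  stage: create_builder_image
  rules:
    - changes:
        - Dockerfile
        - package-lock.json
        - Gemfile.lock
  script:
    - ...  # build and push as above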

Configure bitbucket-pipelines.yml to use a DockerFile from the repository to build the image when running a pipeline

I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to make the image referenced in bitbucket-pipelines.yml be built from the DockerFile in the repository?
I have been able to upload the Docker image to my Docker Hub account and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, or whether doing it that way would increase the build time.
Thanks in advance!
If you want to build your image during the pipeline, you need the same steps as if you were building the image on your own machine:
Build the image: docker build -t $APP_NAME:$VERSION .
Push it to your registry (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME:$VERSION .
        - docker push $APP_NAME:$VERSION
Keep in mind that every step in your pipeline runs in a Docker container, and that allows you to do whatever you want. The docker service gives you an out-of-the-box Docker client. Once the image is pushed you can use it in another step; you just need to specify that image for the step, for example as sketched below.
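A minimal sketch of such a follow-up step, assuming the image was pushed to a private Docker Hub repository and reusing the placeholders and variables from the question:

- step:
    name: Run Tests
    image:
      name: <dockerhubname>/<dockername>:latest
      username: $DOCKER_HUB_USERNAME
      password: $DOCKER_HUB_PASSWORD
    script:
      - ./configure.sh
      - ./compile.sh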

Keeping Docker builds in GitLab CI with docker-compose

I have a repository that includes three parts: frontend, admin and server. Each contains its own Dockerfile.
After building the images I wanted to add a test for admin. My tests run through, but they take a lot of time because each stage pulls the base image and builds everything from scratch (about 8 minutes per stage). This is my .gitlab-ci.yml:
image: tmaier/docker-compose

services:
  - docker:dind

stages:
  - build
  - test

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin
I am not quite sure if I need to push/pull images between stages or whether I should do that with artifacts, cache, or something else. As I understood it, I only need to push/pull if I want to deploy my images to another server. I also added docker-compose push, which runs through, but GitLab doesn't show me any images in my registry.
I have been researching this a lot, but most example code I found only deals with a single Docker container and doesn't make use of docker-compose.
Any ideas? :)
GitLab currently has no way to share Docker images between stages as artifacts; there has been an outstanding feature request for this for three years.
You'll need to push the Docker image to a registry and pull it in the later stages that need it (or do everything related to the image in one stage).
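A minimal sketch of that push/pull pattern for this setup, assuming the images in docker-compose.yml are named under $CI_REGISTRY_IMAGE so that push and pull target the GitLab registry:

build:
  stage: build
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose build
    - docker-compose push

test:admin:
  stage: test
  script:
    - docker login -u $CI_DEPLOY_USER -p $CI_DEPLOY_PASSWORD $CI_REGISTRY
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml pull admin
    - docker-compose -f docker-compose.yml -f docker-compose.test.yml up admin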
Mark, could you show the files docker-compose.yml and docker-compose.test.yml?
Maybe you are pushing and pulling different images. By the way, try placing docker login in a before_script section so that it applies to all jobs.

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a Docker image and then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the Docker image that was created.
Here's my pipeline script (some of it simplified):
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know these commands work, but only if I run them all in a single step. For whatever reason, step 4 doesn't have access to the Docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where you start the build, so they are not saved as artifacts and are not available in the next step.
Even if they were (you could pack and unpack the image with docker save and docker load), you would probably run up against the size limits for artifacts, not to mention the time it takes to pack and unpack.
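For completeness, a minimal sketch of that save/load route (keeping the size and time caveats above in mind), reusing the tags from the question:

- step:
    name: 3. Build Docker Image
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
      - docker save -o image.tar $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
    artifacts:
      - image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      - docker load -i image.tar
      # tag and push to ECR as before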
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 and 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your single step would consist of building your project's Dockerfile and uploading the result to AWS. This also lowers your dependency on Bitbucket Pipelines, as the build logic then lives in the Dockerfile and can be run anywhere.
The Docker image is not being passed from step 3 to step 4 because images are not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
