Bitbucket Pipelines - steps - docker can't find image

I'm building my pipeline to create a Docker image, then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it what artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I collapse everything into a single step. For whatever reason, step 4 doesn't have access to the Docker image created in step 3. Any help is appreciated!

Your Docker images are not stored in the folder where you start the build, so they are not saved to the artifacts and are not available in the next step.
Even if they were (you could pack/unpack an image with docker save), you would probably run into the size limits for artifacts, not to mention the time it takes to pack and unpack.
I guess you'd be better off if you created a Dockerfile in your project yourself and combined steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your one step would then consist of building your project's Dockerfile and uploading the image to AWS. This also lowers your dependency on Bitbucket Pipelines; a sketch of that layout follows.
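A minimal sketch of that layout, assuming a build image that already ships the AWS CLI (amazon/aws-cli is one public option; the exact base image is an assumption, and the project's own Dockerfile would take over the dotnet installation):

image: amazon/aws-cli:latest   # any build image that already contains the AWS CLI
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          name: Build and push
          script:
            - export DOCKER_PROJECT_NAME=projectname
            # the project's Dockerfile now installs dotnet and the other build dependencies
            - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER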

The Docker image is not being passed from step 3 to step 4 as the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER

Related

A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the StackOverflow question below but the context in that question is pushing the image in the following step. It's not about using the image which was built for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. Whilst it will also persist across steps, the primary use case is for the data to be reused on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword, as in the short example below.
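For instance, a minimal illustration of carrying a file between steps with artifacts (the file and directory names here are just examples, not taken from the question):

pipelines:
  default:
    - step:
        name: Produce a file
        script:
          - mkdir -p build && echo "hello" > build/output.txt
        artifacts:
          - build/**
    - step:
        name: Consume the file
        script:
          - cat build/output.txt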
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you run a docker command in your script. Your second step does not run any docker commands, so it doesn't need the docker service; see the trimmed step after this paragraph.
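With both clarifications applied, the second step from the question could be trimmed to roughly the following (a sketch; getting the foo/bar image into that step is exactly what the solutions below address):

- step:
    name: Build
    image: foo/bar
    script:
      - echo "Hello, World"
      - composer --version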
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines will then pull it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
- step:
    name: Build
    image:
      name: foo/bar
      username: $DOCKERHUB_USERNAME
      password: $DOCKERHUB_PASSWORD
      email: $DOCKERHUB_EMAIL
    script:
      - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
      - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to another.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies for the image. It can take quite a while to execute and may therefore not be the best solution.

Configure bitbucket-pipeline.yml to use a DockerFile from repository to build image when running a pipeline

I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way I can point the image in bitbucket-pipelines.yml to the DockerFile from the repository?
I have been able to upload the docker image on my docker hub and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, and whether doing it like this will increase the build time.
Thanks in advance!
If you want to build your image during your pipeline run, you need the same steps as if the image were built on your machine:
Build your image: docker build -t $APP_NAME .
Push it to your repo (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME .
        - docker push $APP_NAME:$VERSION
Bear in mind that every step in your pipeline runs in a Docker container, and that allows you to do whatever you want. The docker service gives you an out-of-the-box Docker client. Then, after the push, you can use the image in another step; you just need to specify it as the image for that step, roughly as in the sketch below.
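As a rough sketch, reusing the &build anchor from above (assuming it is defined in the same file; the image name is a placeholder for whatever name:tag you pushed):

pipelines:
  default:
    - step: *build
    - step:
        name: Use the pushed image
        image: myorg/myapp:latest   # replace with the name:tag pushed in the previous step
        script:
          - echo "Running inside the image built and pushed earlier"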

Docker with BitBucket

I'm trying to make automatic publishing using docker + bitbucket pipelines; unfortunately, I have a problem. I read the pipelines deploy instructions on Docker Hub, and I created the following template:
# This is a sample build configuration for Docker.
# Check our guides at https://confluence.atlassian.com/x/O1toN for more examples.
# Only use spaces to indent your .yml configuration.
# -----
# You can specify a custom docker image from Docker Hub as your build environment.
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script: # Modify the commands below to build your repository.
          # Set $DOCKER_HUB_USERNAME and $DOCKER_HUB_PASSWORD as environment variables in repository settings
          - export IMAGE_NAME=paweltest/tester:$BITBUCKET_COMMIT
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t paweltest/tester .
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push paweltest/tester:tagname
I have completed the data, but after doing the push, I get the following error when the build starts:
unable to prepare context: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
What do I want to achieve? After pushing changes to the repository, I'd like an image to be automatically built and sent to Docker Hub, and preferably to the target server where the application runs.
I've looked for a solution and tried different combinations. For now, I have about 200 commits with Failed status and no further ideas.
Bitbucket Pipelines is a CI/CD service: you can build your applications and deploy resources to production or test server instances. You can build and deploy Docker images too - it shouldn't be a problem unless you do something wrong...
All scripts defined in the bitbucket-pipelines.yml file run in a container created from the indicated image (atlassian/default-image:2 in your case).
You should have a Dockerfile in the project; from this file you can build and publish a Docker image.
I created a simple repository without a Dockerfile and started a build:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /opt/atlassian/pipelines/agent/build/Dockerfile: no such file or directory
I need a Dockerfile in my project to build an image (at the same level as the bitbucket-pipelines.yml file):
FROM node:latest
WORKDIR /src/
EXPOSE 4000
In the next step I created a public DockerHub repository:
I also changed your bitbucket-pipelines.yml file (you forgot to mark the new image with a tag):
image: atlassian/default-image:2
pipelines:
  default:
    - step:
        services:
          - docker
        script:
          # build the Docker image (this will use the Dockerfile in the root of the repo)
          - docker build -t appngpl/stackoverflow-question-56065689 .
          # add new image tag
          - docker tag appngpl/stackoverflow-question-56065689 appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
          # authenticate with the Docker Hub registry
          - docker login --username $DOCKER_HUB_USERNAME --password $DOCKER_HUB_PASSWORD
          # push the new Docker image to the Docker registry
          - docker push appngpl/stackoverflow-question-56065689:$BITBUCKET_COMMIT
Result:
Everything works fine :)
Bitbucket repository: https://bitbucket.org/krzysztof-raciniewski/stackoverflow-question-56065689
Docker Hub image repository: https://hub.docker.com/r/appngpl/stackoverflow-question-56065689

gitlab runner using wrong docker image for build container

I set up a gitlab-ci-multi-runner on my VM.
In the build process I provide a docker container with all the dependencies.
But when I run it, the runner uses a different (wrong) docker image for its build container.
The messages look like this:
Running with gitlab-ci-multi-runner 9.2.1 (f0xxxx4) on runnerVM (f5xxxxf0)
Using Docker executor with image docker.com/xxx/xxx/docker-build:stable ...
Using docker image sha256:fe32xxx...xxxa63c for predefined container...
Pulling docker image docker.com/xxx/xxx/docker-build:stable ...
Using docker image docker.com/xxx/xxx/docker-build:stable ID=sha256:9608xxx...xxxdf09 for build container...
Can someone tell me why the runner uses a different docker image for the build container?
Why is it not taking the predefined container (because that's the right one...)?
Here you can see my gitlab-ci.yml:
image: docker.com/xxx/xxx/docker-build:stable
before_script:
  - echo "Before script"
after_script:
  - echo "After Script"
stages:
  - build
  - test
  - deploy
build_release:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Release
  only:
    - master
  tags:
    - tag1
build_debug:
  stage: build
  script:
    - sudo make all BUILD_TYPE=Debug
  only:
    - develop
    - runner-test
  tags:
    - tag1
    - tag2
In your .gitlab-ci.yml you are referencing the complete URL to your container; it should however be in the format group/container, e.g. library/nginx.
Optionally, you may use a specific version, e.g. library/nginx:1.13.9.
For more information, see: https://docs.gitlab.com/ce/ci/docker/using_docker_images.html
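For illustration only, the shortened reference described above would look roughly like this (the group and image names are placeholders, not taken from the question):

image: xxx/docker-build:stable
# optionally pinned to a specific version:
# image: xxx/docker-build:1.2.3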
I assume you are using the docker executor. The gitlab-ci-runner therefore creates a new image, gitlab/gitlab-runner-helper, which isolates the build steps from your VM's docker environment. This image is the predefined container.
The stages themselves are performed inside containers of the images you specify for the job, or the image you specify globally for all jobs. This container is the build container.
The build container should be made from the image you specify with image at the top of your .gitlab-ci.yml. You can verify it by doing
$ docker image ls | grep -E '(fe32|9608)'
on your VM. It shows you the image names and tags of your predefined and build containers.

How to rebuild docker image on push before CI script jobs

I want to generate a Dockerfile in a GitLab CI script and build it, then use this newly generated image in build jobs. How can I do this? I tried to use a global before_script, but it already starts in the default container. I need to do this outside of any container.
before_script runs before every job, so it's not what you want. But you can have a first job do the image build and take advantage of the fact that each job can use a different Docker image. The build of the image is covered in the manual.
Option A (uhm... sort of OK)
Have 2 runners, one with a shell executor (tagged shell) and one with a Docker executor (tagged docker). You would then have a first stage with a job dedicated to building the docker image. It would use the shell runner.
image_build:
  stage: image_build
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
  tags:
    - shell
The second job would then run using the runner with docker executor and use this created image:
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
  tags:
    - docker
The problem with this is that the runner would need to be part of the docker group, which has security implications.
Option B (better)
The second option does the same but needs only one runner using the Docker executor. The Docker image is built within a running container (the gitlab/dind:latest image), i.e. a "docker in docker" solution.
stages:
  - image_build
  - test
image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - # create dockerfile
    - # run docker build
    - # push image to a registry
job_1:
  stage: test
  image: [image you created]
  script:
    - # your tasks
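For reference, a hedged sketch of what the placeholder script in image_build might expand to (the registry URL, credential variables, and image name are all assumptions, not part of the original answer):

image_build:
  stage: image_build
  image: gitlab/dind:latest
  script:
    - echo "FROM alpine:3.19" > Dockerfile   # or generate the Dockerfile however you need
    - docker build -t registry.example.com/group/my-image:latest .
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.example.com
    - docker push registry.example.com/group/my-image:latest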
