Bitbucket pipeline use locally built image from previous step

If I'd like to build a Docker image in one pipeline step, then use it in a following step - how would I do that?
e.g.:
default:
  - step:
      name: Build
      image:
      script:
        - docker build -t imagename:local .
        - docker images
  - step:
      name: Deploy
      image:
      script:
        - docker images
In this example, the image shows up in the first step, but not the second

Each step runs in its own container with its own Docker daemon, so an image built in one step is not automatically available in the next. You would use docker save/load in conjunction with Bitbucket artifacts.
Example:
- step:
    name: Build docker image
    script:
      - docker build -t "repo/imagename" .
      - docker save --output tmp-image.docker repo/imagename
    artifacts:
      - tmp-image.docker
- step:
    name: Deploy to Test
    deployment: test
    script:
      - docker load --input ./tmp-image.docker
      - docker images
Source: Link
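Note that both steps run docker commands, so each also needs the Docker daemon available to it. A minimal sketch of the two ways to enable that in bitbucket-pipelines.yml (not shown in the original answer):

options:
  docker: true   # enables the Docker service for every step in the file

or per step:

- step:
    name: Build docker image
    services:
      - docker   # enables the Docker service for this step only
    script:
      - docker build -t "repo/imagename" .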

Related

Recover docker image after a gitlab-ci run

Let's say I build a docker image and then run some CI build like this:
stages:
  - create_builder_image
  - test
  - package

Create Builder Image:
  stage: create_builder_image
  script:
    - export DOCKER_BRANCH_TAG=$CI_COMMIT_REF_SLUG
    # do stuff to build the image, using cache to speed it up
    - docker push $GITLAB_IMAGE/builder:$DOCKER_BRANCH_TAG

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    # build some stuff in the image
Then I want to push the resulting image, with the built artifacts inside:
docker-package:
  stage: package
  script:
    - docker commit ?
    - docker push dockerhub:latest
That may not be possible at all.
Similar to In Gitlab CI/CD, how to commit and publish the docker container that is running our stages
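One possible direction (a sketch, not from the thread; the build command and output path are hypothetical) is to skip docker commit entirely: have the test job save its outputs as GitLab artifacts, then bake them into a fresh image in a package stage.

Run Tests:
  image: $GITLAB_IMAGE/builder:$CI_COMMIT_REF_SLUG
  stage: test
  script:
    - make build            # hypothetical: build some stuff in the image
  artifacts:
    paths:
      - build/              # hypothetical output directory

docker-package:
  stage: package
  image: docker:latest
  services:
    - docker:dind
  script:
    # assumes a Dockerfile that COPYs build/ into the final image
    - docker build -t myuser/myimage:latest .
    - docker push myuser/myimage:latest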

A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build an image from a Dockerfile within the repository and use it as the image for the next step(s).
I've tried the Bitbucket Pipelines configuration below, but in the "Build" step the image built in the previous step doesn't seem to be available in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the StackOverflow question below, but that question is about pushing the image in a following step. It's not about using the built image as the image for the step itself.
Bitbucket pipeline use locally built image from previous step
There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. While it will also persist across steps, the primary use case is for the data to be reused on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword.
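For example, a minimal sketch with a hypothetical file name:

- step:
    name: Produce
    script:
      - echo "data" > output.txt
    artifacts:
      - output.txt
- step:
    name: Consume
    script:
      - cat output.txt   # available here because it was declared as an artifact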
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you run a docker command in your script. Your second step runs no docker commands, so it doesn't need the docker service.
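For illustration, a minimal sketch with hypothetical step names:

- step:
    name: Needs the daemon
    services:
      - docker           # docker commands in the script require this
    script:
      - docker version
- step:
    name: No docker commands
    script:
      - composer --version   # no docker service needed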
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Solution 2 - Using two steps with DockerHub
This example keeps the two-step approach. In this scenario, you push your foo/bar image to a public repository on Docker Hub. Pipelines then pulls it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
- step:
    name: Build
    image:
      name: foo/bar
      username: $DOCKERHUB_USERNAME
      password: $DOCKERHUB_PASSWORD
      email: $DOCKERHUB_EMAIL
    script:
      - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
      - composer --version
To expand on phod's answer: if you really want two steps, you can transfer the image from one step to the other.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies of the image. It can take quite a while to execute, and may therefore not be the best solution.

Publishing image with docker from gitlab ci

I am trying to create my war artifact with Gradle and push it to my remote image repo. But the problem is that I am getting:
COPY failed: stat /var/lib/docker/tmp/docker-builder756634785/build/libs/myartifact.war: no such file or directory.
So it cannot reach my artifact. How can I point it to the correct location?
# gitlab-ci.yaml
stages:
  - build

variables:
  GRADLE_OPTS: "-Dorg.gradle.daemon=false -Dorg.gradle.caching=true"

build:
  image: gradle:alpine
  stage: build
  script:
    - ./gradlew clean build -i

docker_build:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  script:
    - docker build --pull -t myrepo.io/myimage:latest .
    - docker login myrepo.io -u username -p pass
    - docker push myrepo.io/myimage:latest
You need to export the artifact generated in the build job; after that you will be able to download it in the docker_build job (using dependencies).
This doc has a lot of examples of how to handle it: https://docs.gitlab.com/ee/ci/yaml/#artifacts
And look at this example: https://docs.gitlab.com/ee/ci/yaml/#dependencies
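A minimal sketch of how that could look here, assuming a separate package stage so that docker_build runs after build (jobs in the same stage run in parallel):

stages:
  - build
  - package

build:
  image: gradle:alpine
  stage: build
  script:
    - ./gradlew clean build -i
  artifacts:
    paths:
      - build/libs/        # exports the war so later jobs can use it

docker_build:
  image: docker:latest
  stage: package
  services:
    - docker:dind
  dependencies:
    - build                # downloads the build job's artifacts into the workspace
  script:
    - docker build --pull -t myrepo.io/myimage:latest .
    - docker login myrepo.io -u username -p pass
    - docker push myrepo.io/myimage:latest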

Gitlab CI - Build Docker Image With Shared Runner (cannot connect to Docker Daemon)

I am currently using GitLab shared runners to build and deploy my project (at least I'm trying to!).
I have the gitlab-ci.yml below :
image: java:8-jdk

stages:
  - build
  - package

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - docker info

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches

build:
  stage: build
  script:
    - ./gradlew build
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week
  only:
    - master

docker-build:
  image: docker:stable
  services:
    - docker:dind
  stage: package
  script:
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project

after_script:
  - echo "End CI"
First, the build stage works great, but there is a problem with the second stage when I try to build and push my Docker image. I get this log:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
It seems that GitLab is using a shared runner that can't build a Docker image, but I don't know how I can change that. I cannot change the configuration of my runner, because I'm using shared runners. I also tried adding some tags to my second stage, hoping that a more suitable runner would pick up the job, but I'm still getting this error.
Thank you for your help.
I believe you need to set DOCKER_HOST to connect to the DinD running in another container:
docker-build:
  image: docker:stable
  services:
    - docker:dind
  stage: package
  script:
    - export DOCKER_HOST=tcp://docker:2375/
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project
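A side note beyond the original answer: on docker:dind 19.03 and later, TLS is enabled by default and the daemon listens on port 2376, so plain tcp://docker:2375/ may still fail. A common fix is to disable the automatic TLS setup; a sketch:

docker-build:
  image: docker:stable
  services:
    - docker:dind
  stage: package
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_TLS_CERTDIR: ""   # disables the default TLS setup on newer dind images
  script:
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project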
If your shared runner's executor is of type docker, you may try this setup:
stages:
  - build
  - package

before_script:
  - export GRADLE_USER_HOME=`pwd`/.gradle
  - docker info

cache:
  paths:
    - .gradle/wrapper
    - .gradle/caches

build:
  image: java:8-jdk
  stage: build
  script:
    - ./gradlew build
  artifacts:
    paths:
      - build/libs/*.jar
    expire_in: 1 week
  only:
    - master

docker-build:
  stage: package
  script:
    - docker build -t registry.gitlab.com/my-project .
    - docker push registry.gitlab.com/my-project

after_script:
  - echo "End CI"
We have faced the same problem in our org as well. There is a long-standing issue with the docker-in-docker area for GitLab, which can be tracked in issues #3612, #2408 and #2890.
We found that in our case binding the Docker socket was more suitable for our use case than docker-in-docker, so we used the solution described on the official page.
I know this has already been answered, but it may help someone who has a similar use case :)

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a Docker image, then push it to AWS. I have it broken into steps, and in Bitbucket you have to tell it which artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the Docker image that was created.
Here's my pipeline script (some of it simplified):
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I remove all the steps. For whatever reason, step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your Docker images are not stored in the folder where you start the build, so they are not saved to the artefacts and are not available in the next step.
Even if they were (you could pack/unpack them through docker save), you would probably run up against the size limits for artefacts, not to mention the time it takes to pack/unpack.
I guess you'd be better off if you created a Dockerfile in your project yourself and combined steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses Docker as a service, and your one step would then consist of building your project's Dockerfile and uploading to AWS. This also lowers your dependency on Bitbucket Pipelines, as most of the build logic would then live in the Dockerfile rather than in the pipeline.
The Docker image is not being passed from step 3 to step 4 because the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step, as follows:
image: atlassian/default-image:latest

options:
  docker: true

pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
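If you'd rather keep the build and push steps separate, the docker save/load approach from the first answer above should also work here; a sketch, with a hypothetical tarball name:

- step:
    name: 3. Build Docker Image
    script:
      - export DOCKER_PROJECT_NAME=projectname
      - docker build -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
      - docker save --output image.tar $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
    artifacts:
      - image.tar
- step:
    name: 4. Push Docker Image to AWS
    script:
      - docker load --input image.tar
      - export DOCKER_PROJECT_NAME=projectname
      - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
      - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER

As the earlier answer notes, saving and loading the full image can be slow and may run into artifact size limits.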
