A locally built Docker image within a Bitbucket Pipeline

What I need is a way to build a Dockerfile within the repository as an image and use this as the image for the next step(s).
I've tried the Bitbucket Pipeline configuration below but in the "Build" step it doesn't seem to have the image (which was built in the previous step) in its cache.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
          services:
            - docker
          caches:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World"
            - composer --version
          services:
            - docker
          caches:
            - docker
I've tried the answer on the Stack Overflow question below, but the context in that question is pushing the image in a following step; it's not about using the built image as the image for the step itself.
Bitbucket pipeline use locally built image from previous step

There are a few conceptual mistakes in your current pipeline. Let me first run through those before giving you some possible solutions.
Clarifications
Caching
Bitbucket Pipelines uses the cache keyword to persist data across multiple pipelines. Whilst it will also persist across steps, the primary use-case is for the data to be reused on separate builds. The cache takes 7 days to expire, and thus will not be updated with new data during those 7 days. You can manually delete the cache on the main Pipelines page. If you want to carry data across steps in the same pipeline, you should use the artifacts keyword.
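For example, a minimal sketch of carrying a file from one step to the next with artifacts (the file name here is just an illustration):
- step:
    name: Produce
    script:
      - echo "data" > build-output.txt
    artifacts:
      - build-output.txt
- step:
    name: Consume
    script:
      - cat build-output.txt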
Docker service
You only need the docker service when you want a Docker daemon available to your build, most commonly when you need to run a docker command in your script. Your second step runs no docker commands, so it doesn't need the docker service.
Solution 1 - Combine the steps
Combine the steps, and run composer within the created image by using the docker run command.
pipelines:
  branches:
    main:
      - step:
          name: Docker image and build
          script:
            - docker build -t foo/bar .docker/composer
            # Replace <destination> with the working directory of the foo/bar image.
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
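For instance, if the image's working directory were /app (purely an assumption; check the WORKDIR in your Dockerfile), the run command would become:
- docker run -v $BITBUCKET_CLONE_DIR:/app foo/bar composer --version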
Solution 2 - Using two steps with DockerHub
This example keeps the two step approach. In this scenario, you will push your foo/bar image to a public repository in Dockerhub. Pipelines will then pull it to use in the subsequent step.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker login -u $DOCKERHUB_USER -p $DOCKERHUB_PASSWORD
            - docker push foo/bar
          services:
            - docker
      - step:
          name: Build
          image: foo/bar
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version
If you'd like to use a private repository instead, you can replace the second step with:
...
      - step:
          name: Build
          image:
            name: foo/bar
            username: $DOCKERHUB_USERNAME
            password: $DOCKERHUB_PASSWORD
            email: $DOCKERHUB_EMAIL
          script:
            - echo "Hello, World. I'm running inside of the previously pushed foo/bar container"
            - composer --version

To expand on phod's answer: if you really want two steps, you can transfer the image from one step to the other.
pipelines:
  branches:
    main:
      - step:
          name: Docker Image(s)
          script:
            - docker build -t foo/bar .docker/composer
            - docker image save foo/bar -o foobar.tar.gz
          services:
            - docker
          caches:
            - docker
          artifacts:
            - foobar.tar.gz
      - step:
          name: Build
          script:
            - docker image load -i foobar.tar.gz
            - docker run -v $BITBUCKET_CLONE_DIR:<destination> foo/bar composer --version
          services:
            - docker
Note that this will upload all the layers and dependencies for the image. It can take quite a while to execute and may therefore not be the best solution.
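A note on the file name: docker image save -o writes an uncompressed tar even if you name it .tar.gz. If artifact size is a concern, you can compress explicitly; docker load accepts gzipped archives, so the load command can stay the same. A sketch:
- docker image save foo/bar | gzip > foobar.tar.gz
- docker image load -i foobar.tar.gz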

Related

GitLab CI run docker container of other Repository

I am generally still relatively new to the GitLab CI topic and unfortunately I cannot test this myself yet, so this is more of a theoretical attempt.
I want to start a Docker container from one of my other projects in Gitlab in the CI pipeline of my main project.
This container (I'll call it the Mock-Container from here on) is created and published in the GitLab CI pipeline of the corresponding project and contains various mocked services.
In the project in which I want to run the Mock-Container, it should be able to start that container in the GitLab CI.
I know it is possible to use a build of the project in a different stage in the same pipeline, like here for example:
variables:
  DOCKER_HOST: tcp://docker:2376
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
Is it possible, for example, when the $CI_REGISTRY_IMAGE used in the CONTAINER_*_IMAGE variables is something like:
registry.gitlab.com/foo/bar/mainproject
to add a variable here like:
MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest
so that I could, for example, use it in the services list of the test stage:
build:
  stage: build
  image: quay.io/podman/stable
  script:
    - podman login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY --log-level=debug
    - podman build --format docker --pull -t $CONTAINER_TEST_IMAGE .
    - podman push $CONTAINER_TEST_IMAGE

test:
  stage: test
  image:
    name: postman/newman
    entrypoint: [ "" ]
  services:
    - name: $CONTAINER_TEST_IMAGE
      alias: main-project
    - name: $MOCK_CONTAINER_IMAGE
      alias: mock-container
  ...
Is this possible, or is there a better way to achieve this?
If you're asking whether you can set a variable in the .gitlab-ci.yml file with the registry URL of the other container, like this:
variables:
  MOCK_CONTAINER_IMAGE: registry.gitlab.com/foo/bar/mockproject:latest
then yes, you can, and you can use the variable in any stage of the file as you please. If you want to pull that image from the registry, you can do that in a stage as well.
Check this reference for more info: https://docs.gitlab.com/ee/ci/yaml/#variables
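For instance, a minimal sketch of pulling and starting the mock image in a job of its own. This assumes the job token has pull access to the mockproject registry; the job name and the docker:latest image are just illustrations:
pull-mock:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$MOCK_CONTAINER_IMAGE"
    - docker run -d --name mock-container "$MOCK_CONTAINER_IMAGE"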

Configure bitbucket-pipeline.yml to use a DockerFile from repository to build image when running a pipeline

I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to reference the Docker image in bitbucket-pipelines.yml so that it is built from the DockerFile in the repository?
I have been able to upload the docker image on my docker hub and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, nor whether doing it like this will increase the build time.
Thanks in advance!
If you want to build your image during the pipeline, you need the same steps as if the image were built on your machine:
Build your image: docker build -t $APP_NAME .
Push it to your repo (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME .
        - docker push $APP_NAME:$VERSION
Keep in mind that every step in your pipeline runs in a Docker container, which allows you to do whatever you want. The docker service gives you an out-of-the-box Docker client. Once the image is pushed, you can use it in another step; you just need to specify it as the image for that step.
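For example, once the &build step above has pushed the image (assuming $APP_NAME resolves to a fully qualified Docker Hub name such as the <dockerhubname>/<dockername> placeholder from the question), a later step could run inside it directly. A sketch; configure.sh and compile.sh are the scripts from the repository listing:
- step:
    name: Build project
    image: <dockerhubname>/<dockername>:latest
    script:
      - ./configure.sh
      - ./compile.sh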

How can I deploy a dockerized Node app to a DigitalOcean server using Bitbucket Pipelines?

I've got a NodeJS project in a Bitbucket repo, and I am struggling to understand how to use Bitbucket Pipelines to get it from there onto my DigitalOcean server, where it can be served on the web.
So far I've got this
image: node:10.15.3
pipelines:
  default:
    - parallel:
        - step:
            name: Build
            caches:
              - node
            script:
              - npm run build
So now the app is built and should be saved as a single file, server.js, in a theoretical /dist directory.
How do I now dockerize this file and then upload it to my DigitalOcean server?
I can't find any examples for something like this.
I did find a Docker template in the Bitbucket Pipelines editor, but it only somewhat describes creating a Docker image, and not at all how to actually deploy it to a DigitalOcean server (or anywhere):
- step:
    name: Build and Test
    script:
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker build . --file Dockerfile --tag ${IMAGE_NAME}
      - docker save ${IMAGE_NAME} --output "${IMAGE_NAME}.tar"
    services:
      - docker
    caches:
      - docker
    artifacts:
      - "*.tar"
- step:
    name: Deploy to Production
    deployment: Production
    script:
      - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker load --input "${IMAGE_NAME}.tar"
      - VERSION="prod-0.1.${BITBUCKET_BUILD_NUMBER}"
      - IMAGE=${DOCKERHUB_NAMESPACE}/${IMAGE_NAME}
      - docker tag "${IMAGE_NAME}" "${IMAGE}:${VERSION}"
      - docker push "${IMAGE}:${VERSION}"
    services:
      - docker
You would have to SSH into your DigitalOcean VPS and then do some steps there:
Pull the current code
Build the Dockerfile
Deploy the built image
An example could look like this:
Create some script like "deployment.sh" in your repository root:
# Update the checkout on the server
cd <path_to_local_repo>
git pull origin master
# Replace the running container with a freshly built image
docker container stop <container_name>
docker container rm <container_name>
docker image build -t <image_name> .
docker container run -itd --name <container_name> <image_name>
and then add the following into your pipeline:
# ...
- step:
deployment: staging
script:
- cat ./deployment.sh | ssh <ssh_user>#<ssh_host>
You have to add your SSH key for your repository on your server, though. Check out the following link on how to do this: https://confluence.atlassian.com/display/BITTEMP/Use+SSH+keys+in+Bitbucket+Pipelines
Here is a similar question, but using PHP: Using BitBucket Pipelines to Deploy onto VPS via SSH Access
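Alternatively, since the pipeline template above already pushes a versioned image to Docker Hub, the script run over SSH could pull that image instead of rebuilding on the server. A sketch with hypothetical placeholders; the exact tag would need to be passed in or templated:
docker pull <dockerhub_namespace>/<image_name>:<version>
docker container stop <container_name> || true
docker container rm <container_name> || true
docker container run -d --name <container_name> -p 80:3000 <dockerhub_namespace>/<image_name>:<version>  # port mapping is only an example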

Bitbucket Pipelines - steps - docker - can't find image

I'm building my pipeline to create a Docker image, then push it to AWS. I have it broken into steps, and in Bitbucket, you have to tell it what artifacts to share between them. I have a feeling this is a simple bug, but I just cannot figure it out.
It's failing at 'docker tag' in step 4 with:
docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Error response from daemon: No such image: projectname:v.11
Basically it cannot find the docker image created...
Here's my pipeline script (some of it simplified)
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          name: 1. Install dotnet
          script:
            # Do things
      - step:
          name: 2. Install AWS CLI
          script:
            # Do some more things
      - step:
          name: 3. Build Docker Image
          script:
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
          artifacts:
            - ./**
      - step:
          name: 4. Push Docker Image to AWS
          script:
            # Tag and push my docker image to ECR
            - export DOCKER_PROJECT_NAME=projectname
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
Now, I know this script works, but only if I remove all the steps. For whatever reason, step 4 doesn't have access to the docker image created in step 3. Any help is appreciated!
Your docker images are not stored in the folder where you start the build, so they are not saved to the artefacts, and not available in the next step.
Even if they were (you could pack/unpack them with docker save), you would probably run up against the size limits for artefacts, not to mention the time it takes to pack/unpack.
I guess you'd be better off creating a Dockerfile in your project yourself and combining steps 1 & 2 there. Your Bitbucket pipeline could then be based on a Docker image that already contains the AWS CLI and uses docker as a service, and your single step would consist of building your project's Dockerfile and uploading it to AWS. This also lowers your dependency on Bitbucket Pipelines.
The Docker image is not being passed from step 3 to step 4 as the Docker image is not stored in the build directory.
The simplest solution would be to combine all four of your steps into a single step as follows:
image: atlassian/default-image:latest
options:
  docker: true
pipelines:
  branches:
    dev:
      - step:
          script:
            # Install dependencies
            - ./install-dot-net
            - ./install-aws-cli
            # Build the Docker image
            - export DOCKER_PROJECT_NAME=projectname
            - docker build -t $DOCKER_PROJECT_NAME:latest -t $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER .
            # Tag and push the Docker image to ECR
            - docker tag $DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
            - docker push $AWS_REGISTRY_URL/$DOCKER_PROJECT_NAME:v.$BITBUCKET_BUILD_NUMBER
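One thing the snippet above leaves implicit is authentication to ECR before the push. With AWS CLI v2, and assuming the AWS credentials and region are configured as repository variables, a login line such as the following would typically precede the docker push; treat it as a sketch rather than part of the original answer:
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_REGISTRY_URL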

Use docker without registry for gitlab-ci

My school has a personal gitlab setup, but it doesn't have a registry setup for docker images.
What I want to do is run my pipeline with Docker, so that I can build, test, etc. in a Docker environment.
Right now I am trying random stuff because I don't know what I am doing. This is what I have now:
Gitlab-ci:
image: docker:latest
services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
My secret variables on GitLab: (screenshot not included)
My error message in the pipeline: (screenshot not included)
Something else I tried uses a GitLab repo. This uses the Docker image for ROS correctly, but in my application I also use OpenCV, so I want to add more to the Docker image. If I knew how to do that in the example below, that would also be an option. On top of this, in the example below I can't run tests.
Gitlab-ci:
image: ros:kinetic-ros-core

stages:
  - build

variables:
  ROS_PACKAGES_TO_INSTALL: ""
  USE_ROSDEP: "true"

cache:
  paths:
    - ccache/

before_script:
  - git clone https://gitlab.com/VictorLamoine/ros_gitlab_ci.git
  - source ros_gitlab_ci/gitlab-ci.bash

catkin_make:
  stage: build
  script:
    - catkin_make

catkin_build:
  stage: build
  script:
    - catkin build --summarize --no-status --force-color
As I said, I have tried many things; this is just the latest thing I have tried. How can I run my runners and gitlab-ci with Docker without a GitLab registry?
Just use it without a registry.
You only need to insert this into the GitLab Runner config file:
pull_policy = "if-not-present"
That's enough. Then remove commands like:
docker push ...
docker pull ...
Or append "|| true" to the push/pull commands if you want to keep them just in case, like this:
docker pull ... || true;
which lets the job continue even if the command fails.
Just don't forget pull_policy = "if-not-present", which allows you to run a Docker image without pulling and pushing.
Since the image is built locally when it is missing, this works.
example:
[[runners]]
  name = "Runner name"
  url = ...
  ...
  executor = "docker"
  [runners.docker]
    image = ...
    pull_policy = "if-not-present"
    ...
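Whether or not you use the runner setting above, a single job can also build and exercise an image entirely locally with docker:dind, without pushing it anywhere. A minimal sketch; the image tag and test command are assumptions:
build-and-test:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t myapp:ci .
    - docker run --rm myapp:ci ./run_tests.sh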
Alternatively, you can change these secret variables to point to the Docker Hub registry instead. You have to create an account at https://hub.docker.com/ and then use those details to configure the GitLab secret variables.
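Concretely, the pipeline from the question could then be pointed at Docker Hub. A sketch with hypothetical variable names (DOCKERHUB_USER, DOCKERHUB_PASSWORD) configured as GitLab secret variables, keeping the docker:latest image and docker:dind service from the original file:
before_script:
  - docker login -u "$DOCKERHUB_USER" -p "$DOCKERHUB_PASSWORD"

build:
  stage: build
  script:
    - docker build --pull -t "$DOCKERHUB_USER/myproject:$CI_COMMIT_REF_SLUG" .
    - docker push "$DOCKERHUB_USER/myproject:$CI_COMMIT_REF_SLUG"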
