I want to cache the /var/lib/docker folder in my GitLab pipeline, which I believe will keep Docker from rebuilding everything on the second run. It's a Dockerfile I build myself.
build_linux:
  tags:
    - linuxvm
  stage: build
  cache:
    - key: ?
      paths:
        - /var/lib/docker
What should I put as a key, though? I believe it shouldn't be a file, so I'm a little bit lost.
You can try key: $CI_COMMIT_REF_SLUG, which is your branch name.
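Plugged into the job from the question, it could look like the sketch below. Note that GitLab runner caches are normally restricted to paths inside the project directory, so caching /var/lib/docker directly may not work as-is; treat this purely as an illustration of the key.

build_linux:
  tags:
    - linuxvm
  stage: build
  cache:
    # one cache per branch; $CI_COMMIT_REF_SLUG is a URL-safe version of the branch name
    - key: $CI_COMMIT_REF_SLUG
      paths:
        - /var/lib/docker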
I am new to creating pipelines on Bitbucket to automate building a specific branch after a merge.
The project is written in C++ and has the following structure:
PROJECT FOLDER
- .devcontainer/
  - devcontainer.json
- bin/
- doc/
- lib/
- src/
  - CMakeLists.txt
  - ...
- CMakeLists.txt
- clean.sh
- compile.sh
- configure.sh
- DockerFile
- bitbucket-pipelines.yml
We created a DockerFile with all the settings required to build the project. Is there any way to reference the Docker image in bitbucket-pipelines.yml so that it is built from the DockerFile in the repository?
I have been able to upload the Docker image to my Docker Hub account and use it with my credentials by defining:
image:
  name: <dockerhubname>/<dockername>
  username: $DOCKER_HUB_USERNAME
  password: $DOCKER_HUB_PASSWORD
  email: $DOCKER_HUB_EMAIL
but I am not sure how to make Bitbucket take the DockerFile from the repository and use it to build the image, or whether building it this way will increase the build time.
Thanks in advance!
If you want to build your image during your pipeline, you need the same steps as if the image were built on your machine:
Build your image: docker build -t $APP_NAME:$VERSION .
Push it to your repo (e.g. Docker Hub): docker push $APP_NAME:$VERSION
You can do something like this:
steps:
  - step: &build
      name: Build Docker Image
      services:
        - docker
      script:
        - docker build -t $APP_NAME:$VERSION .
        - docker push $APP_NAME:$VERSION
Keep in mind that every step in your pipeline runs in a Docker container, which lets you do whatever you want. The docker service gives you an out-of-the-box Docker client. Then, once the image is pushed, you can use it in another step; you just need to specify the image for that step.
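A rough sketch of what such a later step could look like; the image reference and the scripts are placeholders taken from the question, not a verified setup:

  - step:
      name: Build in Custom Image
      # run this step inside the image that the previous step built and pushed
      image: <dockerhubname>/<dockername>
      script:
        - ./configure.sh
        - ./compile.sh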
I have successfully created an artifact and have proven to myself that it's available to the next job (the job where I need it). But I actually need to use it inside a container that I'm building, and I don't know how to do this.
Here's what I have so far:
stages:
  - build
  - deploy

job_that_creates:
  image: node:10.19
  stage: build
  script:
    - npm install
    - make
    - make source-package
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - node_modules/
  artifacts:
    when:
    paths:
      - my.tar.bz2
    expire_in: 2 days

job_that_consumes:
  stage: deploy
  script:
    - ls -lah
The output from the "ls" command shows me the tar file.
But ultimately, I need to do something like this in the job_that_consumes:
job_that_consumes:
  stage: deploy
  image: custom_image
  script:
    - ls -lah
    # - somehow extract the zip to a specific location
I've been trying to Google, but so far I haven't picked the right keywords. At the moment I'm looking at how to copy files into a container.
EDIT 1
For now, what I'm testing is copying the tar to the volume on the host for the runner, and then from there copying to the container.
But the reason I don't like this is I feel like I'm marrying the container to the host … and I'd have to be sure that I create runners on all my hosts the exact same way.
Is there a better way?
If I understand your question correctly, you want to have the artefact inside the Docker container you deploy in the second job?
That is not easily possible, because the job runs in a so-called GitLab runner. The runner is not persistent, so when the job ends, all data is lost. That is why we have artefacts.
Now the good thing:
Artefacts are available from outside of GitLab for 4 weeks by default (you can keep them longer or shorter), which means you can reach and download them with any application you like (e.g. curl).
Here is the gitlab documentation to this feature:
https://docs.gitlab.com/ee/ci/pipelines/job_artifacts.html#downloading-the-latest-artifacts
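For example, something along these lines (a sketch only; the project ID, ref and job name are placeholders, and the exact URL format is described in the linked documentation):

# download the latest artifacts of a given job on a given ref via the GitLab API
curl --header "PRIVATE-TOKEN: <your_access_token>" \
     --output artifacts.zip \
     "https://gitlab.example.com/api/v4/projects/<project-id>/jobs/artifacts/<ref>/download?job=<job_name>"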
Copying that tar, as you describe in your edit, is not a bad thing by design.
I don't see why that should be a worse solution than downloading it via curl later; it is just a narrower solution, because it only works while your runner is alive, whereas the download method works for weeks afterwards.
I have a Node.js project which I run as a Docker container in different environments (local, stage, production) and therefore configure via .env files. As generally advised, I don't store the .env files in my remote repository, which is GitLab. My production and stage systems run as Kubernetes clusters.
What I want to achieve is an automated build via GitLab CI for different environments (e.g. stage) depending on the commit branch (named stage as well), meaning when I push to origin/stage I want a Docker image to be built for my stage environment with the corresponding .env file in it.
On my local machine it's pretty simple: since I have all the different .env files in the root folder of my app, I just use this in my Dockerfile
COPY .env-stage ./.env
and everything is fine.
Since I don't store the .env files in my remote repo, this approach doesn't work, so I used GitLab CI variables and created a variable named DOTENV_STAGE of type 'file' with the contents of my local .env-stage file.
Now my problem is: how do I get that content as a .env file inside the Docker image that GitLab is going to build, given that the file is not in my repo but only exists as a variable?
I tried using cp (see below, also in the before_script section) to simply copy the file to a .env file during the build process, but that obviously doesn't work.
My current build stage looks like this:
image: docker:git
services:
  - docker:dind

build stage:
  only:
    - stage
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - cp $DOTENV_STAGE .env
    - docker pull $GITLAB_IMAGE_PATH-$CI_COMMIT_BRANCH || true
    - docker build --cache-from $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH --file=Dockerfile-$CI_COMMIT_BRANCH -t $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH:$CI_COMMIT_SHORT_SHA .
    - docker push $GITLAB_IMAGE_PATH/$CI_COMMIT_BRANCH
This results in
Step 12/14 : COPY .env ./.env
COPY failed: stat /var/lib/docker/tmp/docker-builder513570233/.env: no such file or directory
I also tried cp $DOTENV_STAGE .env as well as cp $DOTENV_STAGE $CI_BUILDS_DIR/.env and cp $DOTENV_STAGE $CI_PROJECT_DIR/.env but none of them worked.
So the part I actually don't know is: Where do I have to put the file in order to make it available to docker during build?
Thanks
You should avoid copying the .env file into the container altogether. Rather, feed it from outside at runtime. Docker Compose has a dedicated property for that: env_file.
web:
  env_file:
    - .env
You can store the contents of the .env file itself in a masked variable in the GitLab CI backend, then dump it to a .env file on the runner and feed it to the Docker Compose pipeline.
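A minimal sketch of that idea (the variable name DOTENV_STAGE and the job layout are illustrative; if the variable is of type 'file', it expands to a file path, so cp "$DOTENV_STAGE" .env would be used instead of echo):

deploy:
  stage: deploy
  script:
    # write the variable contents to a .env file on the runner
    - echo "$DOTENV_STAGE" > .env
    # docker-compose picks it up through the env_file entry shown above
    - docker-compose up -d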
After some more research I stumbled upon a support-forum entry on gitlab.com which exactly described my situation (unfortunately it has since been deleted), and it was solved by the same approach I was trying to use, namely this:
...
script:
  - cp $DOTENV_STAGE $CI_PROJECT_DIR/.env
...
in my .gitlab-ci.yml
The part I was actually missing was adjusting my .dockerignore file accordingly (removing .env from it) and then removing the line
COPY .env ./.env
from my Dockerfile
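Put together, and assuming the Dockerfile copies the whole build context (this is an assumption; the original Dockerfile is not shown, and the base image and command below are placeholders), the pieces look roughly like this:

# Dockerfile (sketch): no dedicated COPY .env line anymore
FROM node:10
WORKDIR /app
# .env was removed from .dockerignore, so the file created by
# `cp $DOTENV_STAGE $CI_PROJECT_DIR/.env` is part of the build context
COPY . .
CMD ["node", "index.js"]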
An alternative approach I thought about after joyarjo's answer could be to use a Kubernetes ConfigMap, but I haven't tried it yet.
While I can successfully write a .gitlab-ci.yml on gitlab.com that passes my artifact to public/,
image: ruby:2.3

pages:
  script:
    - bundle install
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
I would like .gitlab-ci.yml (a container (#1) based on a Docker image (#2)) to instead use a Dockerfile to build a different image (#3) that creates the artifact and, again, passes the artifact from #3 to container #1's public/ so it is publicly accessible on the web.
# Dockerfile
FROM ruby:2.3
...
The rationale for this is that I might share the repo with a colleague who understands Dockerfiles but is not interested in using GitLab or .gitlab-ci.yml. So I want .gitlab-ci.yml to use the Dockerfile for my purposes, and my colleague can use the Dockerfile in whatever way is most comfortable for them; the Dockerfile is the shared, reproducible way of making the artifact.
It seems I can build Docker images in GitLab CI/CD, but I'm not sure I can do so on gitlab.com rather than self-hosted GitLab. That would allow me to push the image to a registry and use the image from the registry in a later stage (including the artifact), although I'm not sure how I would extract the artifact from it.
docker build -t myimagename . does not work in GitLab CI/CD on gitlab.com (unless I add some services to the .gitlab-ci.yml?). Even if it did, how would I extract the artifact from the myimagename container into the GitLab CI/CD "container"?
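Purely as an illustration of the mechanics (not something the question confirms works on gitlab.com shared runners; the path inside the image and the service setup are assumptions), building with the docker:dind service and then extracting the artifact from the built image could look roughly like this:

pages:
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t myimagename .
    # create a stopped container from the image and copy the built site out of it
    - docker create --name extract myimagename
    - docker cp extract:/srv/jekyll/public ./public
    - docker rm extract
  artifacts:
    paths:
      - public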
I am using an external Docker image from Docker Hub.
In each step the Docker image is pulled from Docker Hub again and again. Yes, this is the desired workflow.
My question is: can we cache this image so that it isn't pulled from Docker Hub in every step? This Docker image is not going to change frequently, as it only has Node and Meteor preinstalled.
So, is it possible to cache the Docker image?
Original bitbucket-pipelines.yml:
image: tasktrain/node-meteor-mup

pipelines:
  branches:
    '{develop}':
      - step:
          name: "Client: Install Dependencies"
          caches:
            - node
          script:
            - npm install
            - npm run setup-meteor-client-bundle
          artifacts:
            - node_modules/**
      - step:
          name: "Client: Build for Staging"
          script:
            - npm run build-browser:stag
          artifacts:
            - dist/**
      - step:
          name: "Client: Deploy to Staging"
          deployment: staging
          script:
            - pipe: atlassian/aws-s3-deploy:0.2.2
              variables:
                AWS_ACCESS_KEY_ID: $AWS_ACCESS_KEY_ID
                AWS_SECRET_ACCESS_KEY: $AWS_SECRET_ACCESS_KEY
                AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
                S3_BUCKET: $S3_STAGING_BUCKET_NAME
                LOCAL_PATH: 'dist'
                ACL: "public-read"
                DELETE_FLAG: "true"
                EXTRA_ARGS: "--follow-symlinks --quiet"
      - step:
          name: "Server: Build and Deploy to Staging"
          script:
            - cd server
            - mup setup --config=.deploy/mup-settings.stag.js
            - mup deploy --config=.deploy/mup-settings.stag.js --settings=meteor-settings.stag.json
As the OP said in comments to the other answer, defining a Docker cache doesn't work for the build image itself
image: tasktrain/node-meteor-mup
which is always downloaded for each step before the step scripts are executed in that image. As far as I know, the Docker cache
services:
  - docker
caches:
  - docker
only works for images pulled or built in a step.
However, Bitbucket Pipelines has recently started caching public build images internally, according to this blog post:
Public image caching – Behind the scenes, Pipelines has recently started caching public Docker images, resulting in a noticeable boost to startup time to all builds running on our infrastructure.
There is also an open feature request to also cache private build images.
It is indeed possible to cache dependencies, and docker is one of the pre-defined caches of Bitbucket Pipelines:
pipelines:
  default:
    - step:
        services:
          - docker
        caches:
          - docker
        script:
          - docker pull my-own-repository:5000/my-image