Docker & gitlab-ci: Build application in Dockerfile but also create artifact - docker

I have a Dockerfile which builds my web application and then moves the built application to an nginx folder such that I only have to start the docker image locally and then access my application via localhost (I left out any details because for the moment I don't think they are necessary).
Now the problem is that I would also like to create an artifact in the gitlab-ci pipeline with the same Dockerfile. This artifact basically is the built application which is then processed later on.
How can I "copy" the application folder from inside the Dockerimage to the gitlab-ci environment?
Edit: I found the following to be a solution:
script:
  - docker container create --name dummy ${IMAGE}
  - docker cp dummy:/usr/share/nginx/html web
  - docker rm -f dummy
artifacts:
  paths:
    - web
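For context, a complete job around that snippet could look roughly like the sketch below (the job name, stage layout and the ${IMAGE} tag are illustrative, and it assumes a runner with Docker available, e.g. via the docker:dind service):
build-and-extract:
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker build -t ${IMAGE} .
    # create a stopped container from the image and copy the built app out of it
    - docker container create --name dummy ${IMAGE}
    - docker cp dummy:/usr/share/nginx/html web
    - docker rm -f dummy
  artifacts:
    paths:
      - web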


Related

start docker container from within self hosted bitbucket pipeline (dind)

I work on a Spring Boot based project and use a local machine as a test environment, deploying the project there as a Docker container.
I am in the middle of creating a Bitbucket pipeline that automates everything between building and deploying. For this pipeline I make use of a self-hosted runner (Docker) that runs on the same machine and Docker instance where I plan to deploy my project.
I managed to successfully build the project (Maven and Docker) and upload the Docker image to my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since the step itself runs in a container, it was not running the command against the top-level (host) Docker.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give that container access to this file, unless there is a better solution.
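(As an aside, the general mechanism referred to here is mounting the host's docker.sock into a container, which lets a docker CLI inside that container talk to the host daemon. A generic illustration only, not the exact runner invocation:)
# mount the host's docker.sock so `docker` inside the container talks to the host daemon
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker:stable docker ps   # lists the containers running on the host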
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker

How can I deploy a dockerized Node app to a DigitalOcean server using Bitbucket Pipelines?

I've got a NodeJS project in a Bitbucket repo, and I am struggling to understand how to use Bitbucket Pipelines to get it from there onto my DigitalOcean server, where it can be served on the web.
So far I've got this
image: node:10.15.3

pipelines:
  default:
    - parallel:
        - step:
            name: Build
            caches:
              - node
            script:
              - npm run build
So now the app gets built and should be saved as a single file, server.js, in a theoretical /dist directory.
How do I now dockerize this file and then upload it to my DigitalOcean server?
I can't find any examples for something like this.
I did find a Docker template in the Bitbucket Pipelines editor, but it only somewhat describes creating a Docker image, and not at all how to actually deploy it to a DigitalOcean server (or anywhere):
- step:
    name: Build and Test
    script:
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker build . --file Dockerfile --tag ${IMAGE_NAME}
      - docker save ${IMAGE_NAME} --output "${IMAGE_NAME}.tar"
    services:
      - docker
    caches:
      - docker
    artifacts:
      - "*.tar"
- step:
    name: Deploy to Production
    deployment: Production
    script:
      - echo ${DOCKERHUB_PASSWORD} | docker login --username "$DOCKERHUB_USERNAME" --password-stdin
      - IMAGE_NAME=$BITBUCKET_REPO_SLUG
      - docker load --input "${IMAGE_NAME}.tar"
      - VERSION="prod-0.1.${BITBUCKET_BUILD_NUMBER}"
      - IMAGE=${DOCKERHUB_NAMESPACE}/${IMAGE_NAME}
      - docker tag "${IMAGE_NAME}" "${IMAGE}:${VERSION}"
      - docker push "${IMAGE}:${VERSION}"
    services:
      - docker
You would have to SSH into your DigitalOcean VPS and then do some steps there:
Pull the current code
Build the Docker image from the Dockerfile
Run the new container
An example could look like this:
Create some script like "deployment.sh" in your repository root:
cd <path_to_local_repo>
git pull origin master
docker container stop <container_name>
docker container rm <container_name>
docker image build -t <image_name> .
docker container run -itd --name <container_name> <image_name>
and then add the following into your pipeline:
# ...
- step:
    deployment: staging
    script:
      - cat ./deployment.sh | ssh <ssh_user>@<ssh_host>
You have to add your SSH key for your repository on your server, though. Check out the following link on how to do this: https://confluence.atlassian.com/display/BITTEMP/Use+SSH+keys+in+Bitbucket+Pipelines
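A minimal sketch of that step, assuming you generated a key pair for Pipelines and copied the public key (the file name here is just a placeholder) to the server:
# on the DigitalOcean server, as the user the pipeline will ssh in as:
# append the Pipelines public key so the pipeline can ssh in non-interactively
cat bitbucket_pipelines_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys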
Here is a similar question, but using PHP: Using BitBucket Pipelines to Deploy onto VPS via SSH Access
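For the "how do I dockerize this file" part of the question, a minimal Dockerfile sketch, assuming the build really emits a single dist/server.js and the app listens on port 3000 (both are assumptions):
# illustrative Dockerfile for the built app
FROM node:10.15.3
WORKDIR /app
COPY dist/server.js .
EXPOSE 3000
CMD ["node", "server.js"]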

Docker Update Code in Volume with Gitlab CI / CD

I am learning Docker and I just encountered a problem I cannot solve.
I want to update the source code on my Docker Swarm nodes when I make changes and push them. I just have an index.php which echoes "Hello World" and shows phpinfo(). I am using data volumes since they are recommended for production (bind mounts for development).
My problem is: how do I update source code while using volumes? What is the best practice for this scenario?
Currently, when I push changes to my index.php to GitLab, my gitlab-runner rebuilds the Docker image and updates my swarm service.
This works when I change the PHP version in my Dockerfile, but changes to index.php are not reflected.
My example Dockerfile looks like this. I just copy index.php to /var/www/html in the container and that's it.
When I deploy my swarm stack the first time, everything works.
FROM php:7.4.5-apache
# copy files
COPY src/index.php /var/www/html/
# apache settings
RUN echo 'ServerName localhost' >> /etc/apache2/apache2.conf
My gitlab-ci.yml looks like this
build docker image:
  stage: build
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
  tags:
    - build-image

deploy docker image:
  stage: deploy
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker service update --with-registry-auth --image $CI_REGISTRY_IMAGE:latest $SWARM_SERVICE_NAME -d
  tags:
    - deploy-stack
Docker images generally contain an application's source code and the dependencies required to run it. Volumes are used for persistent data that needs to be preserved across changes to the underlying application. Imagine a database: if you upgraded from somedb:1.2.3 to somedb:1.2.4, you'd need to replace the database application binary (in the image) but would need to preserve the actual database contents (in a volume).
Especially in a clustered environment, don't try storing your application code in volumes. If you delete the part of your deployment setup that attempts this, then when containers redeploy with an updated image, they'll see the updated code.
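A minimal sketch of what that means in a stack file (names are illustrative): the service only references the image, with no volume mounted over /var/www/html, so a docker service update to a new image also updates the code:
# docker-stack.yml (illustrative)
version: "3.7"
services:
  web:
    image: registry.gitlab.example.com/group/project:latest  # code is baked into the image
    ports:
      - "80:80"
    # intentionally no volume over /var/www/html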

Integrating docker with gitlab-ci - how does the docker image get built and used?

I was previously using the shell executor for my GitLab runner to build my project. So far I have set up a pipeline that runs whatever commands I have put in the gitlab-ci.yml file seen below:
gitlab-ci.yml using shell runner
before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Now I want to switch to a Docker-based runner. I have reconfigured the runner to use a Docker executor, and I specified the image in my new gitlab-ci.yml file seen below. I followed the gitlab-ci Docker tutorial, and this is where it left off, so I'm not entirely sure where to go from here:
gitlab-ci.yml using docker runner
image: node:8.10.0

before_script:
  - npm install
  - npm install --save @angular/material @angular/cdk

cache:
  paths:
    - node_modules/

stages:
  - dev
  - staging
  - production

build_dev:
  stage: dev
  script:
    - rm ./package-lock.json
    - npm run build
    - ./node_modules/@angular/cli/bin/ng test --browsers PhantomJS --watch false
Questions:
With my current gitlab-ci.yml file, how does this build a docker image/does it even build one? If it does, what does that mean? Currently the pipeline passed, but I have no idea if it did in a docker image or not (am I supposed to be able to tell?).
Also, let's say the docker image was created, ran the tests, and the pipeline passed; it should push the code to a new repository (not included in yml file yet). From what I gathered, the image isn't being pushed, it's just the code, right? So what do I do with this created docker image?
How does the Dockerfile get used? I see no link between the gitlab-ci.yml file and Dockerfile.
Do I need to surround all commands in the gitlab-ci.yml file in docker run <commands> or docker exec <commands>? Without including one of these 2 commands, it seems like it would just run on the server and not in a docker image.
I've seen people specify an image in both the gitlab-ci.yml file and Dockerfile. I have an angular project, and I specified an image of image: node:8.10.0. In the Dockerfile, should I specify the same image? I've seen some projects where they are completely different and I'm wondering what the use of both images are/if picking one image over another will severely impact my builds.
You have to take a different approach to building your app if you want to fully dockerize it. Move the Angular build steps into a Dockerfile and put Docker operations into your .gitlab-ci instead of Angular commands, like here:
stages:
  - build
  # - release
  # - deploy

.build_template: &build_definition
  stage: build
  image: docker:17.06
  services:
    - docker:17.06-dind
  script:
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $CONTAINER_RELEASE_IMAGE || true
    - docker build --cache-from $CONTAINER_RELEASE_IMAGE -t $CONTAINER_IMAGE -f $DOCKERFILE ./
    - docker push $CONTAINER_IMAGE

build_app_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/app:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/app:latest
    DOCKERFILE: ./Dockerfile.app

build_nginx_job:
  <<: *build_definition
  variables:
    CONTAINER_IMAGE: $CI_REGISTRY_IMAGE/nginx:$CI_COMMIT_REF_SLUG
    CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE/nginx:latest
    DOCKERFILE: ./Dockerfile
You can set up a few build jobs - for production, development, staging etc.
Right next to your .gitlab-ci.yaml you can put Dockerfile and Dockerfile.app - Dockerfile.app is for building your Angular app:
FROM node:10.5.0-stretch
RUN mkdir -p /usr/src/app
RUN mkdir -p /usr/src/remote
WORKDIR /usr/src/app
COPY . .
# do your commands here
Now, with your app built, it can be served via a web server - which one is your choice, and each choice comes with its own configuration; I can't even scratch the surface here. That would be implemented in the Dockerfile - we usually use Nginx in our company.
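As a minimal sketch of such an Nginx Dockerfile, assuming the Angular build output lands in dist/ (adjust the path to your project):
FROM nginx:alpine
# replace the default content with the built app
RUN rm -rf /usr/share/nginx/html/*
COPY dist/ /usr/share/nginx/html/
# nginx's default command serves /usr/share/nginx/html on port 80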
From here on it's about releasing your images and deploying them. I've only specified how to build them in docker as it seems this is what the question is about.
If you want to deploy your image and run it somewhere - choose a provider - AWS, Heroku, your own infrastructure - have it your way, but this is far too much to cover in a single answer, so I'll leave it for another question once you specify where you'd like to deploy your newly built images and how you'd like to serve them. In our company, we orchestrate things with Rancher, but there are multiple awesome and competing options on the market.
Edit for a custom registry
The above .gitlab-ci configuration works with GitLab's "internal" registry only. In case you want to use your own registry, change the values accordingly:
# previous configs
script:
  - docker login -u mysecretlogin -p mysecretpasswd registry.local.com
# further configs
from -u gitlab-ci-token to your login in the registry,
from $CI_JOB_TOKEN to your password
from $CI_REGISTRY to your registry address
Those values should be stored in Gitlab's CI secret variables and referenced via env variables so that they are not saved in the repository.
Finally, your script might look like below in case you decided to protect these values. Refer to Gitlab's official docs on how to add secret CI variables - super easy task.
# previous configs
script:
  - docker login -u $registrylogin -p $registrypasswd $registryaddress
# further configs

How to push multiple images needed for docker-compose to GitLab registry in GitLab CI?

I recently got into CI/CD, and a good starting point for me was GitLab, since they provide an easy interface for it. I got familiar with what pipelines and stages are, but I have run into a somewhat contradictory situation with GitLab CI running on Docker.
My app runs on Docker Compose, which makes it easy to build & run containers. Each service in the Docker Compose file creates a single Docker container, except the php-fpm one, which is able to scale horizontally, so I can scale it later.
I will use that Docker Compose for production, I am currently using it in development and I want to use it too in CI/CD pipelines.
However the .gitlab-ci.yml provides support for only one image, so I have to build it and push it to either their GitLab Registry or Docker Hub in order to pull it later in the CI/CD process.
How can I build my Docker Compose's service as a single image in order to push it to the Registry/Docker so I can pull it in the CI/CD?
My project contains a docker folder and a docker-compose.yml. In the docker folder, each service has its own separate directory (php-fpm, nginx, mysql, etc.) and each one (prepare yourself) contains a Dockerfile with build details, especially the php-fpm one (deps and libs are strong with this one)
Each service in the docker-compose.yml has a build context in each of their own folder.
If I was unclear, I can provide additional info.
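For reference, the layout described above maps roughly to a compose file like this (service names and paths are illustrative):
# docker-compose.yml (illustrative)
version: "3.7"
services:
  php-fpm:
    build:
      context: ./docker/php-fpm
  nginx:
    build:
      context: ./docker/nginx
  mysql:
    build:
      context: ./docker/mysql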
However the .gitlab-ci.yml provides support for only one image
This is not true. From the official documentation:
Your image will be named after the following scheme:
<registry URL>/<namespace>/<project>/<image>
GitLab supports up to three levels of image repository names.
The following image tags are valid examples:
registry.example.com/group/project:some-tag
registry.example.com/group/project/image:latest
registry.example.com/group/project/my/image:rc1
So the solution to your problem is simple - just build individual images and push them to the GitLab container registry under different image names.
If you would like an example, my pipelines are set up like this:
.template: &build_template
image: docker:stable
services:
- docker:dind
before_script:
- docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
script:
- docker pull $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest || true
- if [ -z ${CI_COMMIT_TAG+x} ];
then docker build
--cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
--file $DOCKERFILE_NAME
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
else docker build
--cache-from $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
--file $DOCKERFILE_NAME
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
--tag $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest . ;
fi
- docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA
- if [ -z ${CI_COMMIT_TAG+x} ]; then
docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_TAG;
fi
- docker push $CI_REGISTRY_IMAGE/$IMAGE_NAME:latest
build:image1:
<<: *build_template
variables:
IMAGE_NAME: image1
DOCKERFILE_NAME: Dockerfile.1
build:image2:
<<: *build_template
variables:
IMAGE_NAME: image2
DOCKERFILE_NAME: Dockerfile.2
And you should be able to pull the same image using $CI_REGISTRY_IMAGE/$IMAGE_NAME:$CI_COMMIT_SHA in later pipeline jobs or your compose file (provided that the variables are passed to where you run your compose file).
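As a sketch, assuming the jobs above pushed image1 and image2 and the CI variables are available in the environment where compose runs, a compose file could reference them like this:
# docker-compose.ci.yml (illustrative; relies on CI_REGISTRY_IMAGE and CI_COMMIT_SHA being set)
version: "3.7"
services:
  image1:
    image: ${CI_REGISTRY_IMAGE}/image1:${CI_COMMIT_SHA}
  image2:
    image: ${CI_REGISTRY_IMAGE}/image2:${CI_COMMIT_SHA}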
You don't need dind to run a docker-compose stack. You can run multiple docker-compose up commands.
acceptance_testing:
  stage: test
  before_script:
    - docker-compose -p $CI_JOB_ID up -d
  script:
    # note: `exec` needs the service name before the command to run
    - docker-compose -p $CI_JOB_ID exec -T <your-service> /run/your/test/suite.sh
  after_script:
    - docker-compose -p $CI_JOB_ID down -v --remove-orphans || true
I think you are searching for something like this:
# .gitlab-ci.yml
image: docker

services:
  - docker:dind

build:
  script:
    - apk add --no-cache py-pip
    - pip install docker-compose
    - docker-compose up -d
Also good to know:
In Docker, what's the difference between a container and an image?
Building Docker images with GitLab CI/CD
I have a Drupal project which consists of two images: one for the Drupal source code and another for the MySQL database.
I tagged them:
docker build -t registry.mysite.net/drupal/blog/blog_db:v1.3 mysql/db
docker build -t registry.mysite.net/drupal/blog/blog_drupal:v1.3 src/drupal
Where registry.mysite.net is the URL of the GitLab site's registry, which can be found under the Container Registry settings.
drupal is the group name,
blog is the project name,
blog_db is the image name for the database, mysql/db is the location of its Dockerfile, and likewise for the other image.
And then to push it to gitlab use:
docker push registry.mysite.net/drupal/blog/blog_db:v1.3
docker push registry.mysite.net/drupal/blog/blog_drupal:v1.3
Hope this might help someone.

Resources