I have a Node.js application that I need to deploy to an existing Kubernetes cluster.
The cluster is set up with kops on AWS.
I have created a .gitlab-ci.yml file for building Docker images, so whenever a change is pushed to either the master or develop branch, it builds the Docker image.
I have already followed the steps described here to add an existing cluster.
Now I have to roll out an update to the existing Kubernetes cluster.
# This file is a template, and might need editing before it works on your project.
docker-build-master:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master

docker-build-dev:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
    - docker push "$CI_REGISTRY_IMAGE:dev"
  only:
    - develop
For now, I am using a shared runner.
How can I integrate a Kubernetes deployment step after the image is built with GitLab CI/CD, so that it deploys to AWS (the cluster is created with kops)?
For the registry I am using GitLab's Container Registry, not Docker Hub.
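One way to wire this up (a minimal sketch, not taken from the original post) is to add a deploy stage that runs kubectl against the kops cluster, with the kubeconfig passed in through a CI/CD variable. Here KUBE_CONFIG is a hypothetical file-type variable holding the kubeconfig exported from kops, and the Deployment name scheduler is an assumption; if the GitLab cluster integration you already set up injects a KUBECONFIG into jobs (the certificate-based integration does this for jobs that define an environment), the export line may be unnecessary.

deploy-prod:
  stage: deploy
  image: roffe/kubectl            # any image that ships kubectl would do
  before_script:
    # KUBE_CONFIG is a hypothetical file-type CI/CD variable containing the
    # kubeconfig exported from kops; adjust the name to whatever you use.
    - export KUBECONFIG="$KUBE_CONFIG"
  script:
    - kubectl apply -f scheduler-deployment.yaml
    # assumes the Deployment in the manifest is named "scheduler"
    - kubectl rollout status deployment/scheduler
  only:
    - master

Note that because the image tag is fixed (:prod), kubectl apply on an unchanged manifest will not restart the pods by itself; tagging images with something unique like $CI_COMMIT_SHORT_SHA, or running kubectl rollout restart, is the usual way to force a rollout.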
Update
I changed the configuration and am now doing the following:
stages:
  - docker-build
  - deploy

docker-build-master:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master

deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl apply -f scheduler-deployment.yaml
  only:
    - master

docker-build-dev:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:dev" .
    - docker push "$CI_REGISTRY_IMAGE:dev"
  only:
    - develop
But now I am getting the error below:
roffe/kubectl with digest roffe/kubectl@sha256:ba13f8ffc55c83a7ca98a6e1337689fad8a5df418cb160fa1a741c80f42979bf ...
$ kubectl apply -f scheduler-deployment.yaml
error: the path "scheduler-deployment.yaml" does not exist
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
The file scheduler-deployment.yaml does exist in the root directory of the repository.
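A hedged guess, not from the original post: this error usually means the manifest is not visible from the job's working directory when kubectl runs, for example because the repository was not checked out in that job or the job ran with a different GIT_STRATEGY. A debugging variant of the deploy job, under that assumption:

deploy-prod:
  stage: deploy
  image: roffe/kubectl
  variables:
    GIT_STRATEGY: clone                  # force a fresh checkout in this job
  script:
    - ls -la "$CI_PROJECT_DIR"           # debug: confirm the manifest is actually present
    - kubectl apply -f "$CI_PROJECT_DIR/scheduler-deployment.yaml"
  only:
    - master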
Related
I am using GitLab CI to run my builds and publish to the Docker registry on every commit to the master branch. Below is the build file, named gitlab-ci.yml. How can I cache the image layers inside my registry so that, if there is no change in the pom.xml, the build will use the cached version of the Docker image?
docker-build-master:
  # Official docker image.
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
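A common approach, not shown in the question, is to pull the previously pushed image and pass it to docker build with --cache-from so unchanged layers are reused. A sketch, assuming the image is tagged latest; for a Maven project the Dockerfile also needs to copy pom.xml and resolve dependencies before copying the sources, otherwise the dependency layer is invalidated on every commit anyway:

docker-build-master:
  image: docker:latest
  stage: build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    # Pull the previous image so its layers can be reused; ignore the failure on the first run.
    - docker pull "$CI_REGISTRY_IMAGE:latest" || true
    - docker build --cache-from "$CI_REGISTRY_IMAGE:latest" -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
  only:
    - master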
I have been using the standard docker-in-docker approach to build custom Docker images, and I was wondering if there is a way to modify the .gitlab-ci.yml to also build tags.
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker info | grep Registr
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
Personally, I think the new features in Docker Hub make this really easy to do on GitHub, but my existing repo is in GitLab. Any examples would be amazing, thanks.
You are able to build tags just by specifying tags in your conditionals.
For example:
only:
  - tags
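Under the same docker:dind setup as the template above, a tag-triggered job can then use the predefined $CI_COMMIT_TAG variable to name the image after the Git tag; a minimal sketch:

build-tag:
  stage: build
  script:
    # $CI_COMMIT_TAG holds the Git tag name in tag pipelines
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_TAG"
  only:
    - tags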
I have a Python-based repository and I am trying to set up GitLab CI to build a Docker image from a Dockerfile and push the image to GitLab's registry.
Before building and deploying the Docker image to the registry, I want to run my unit tests with Python. Here is my current gitlab-ci.yml file that only does testing:
image: python:3.7-slim

before_script:
  - pip3 install -r requirements.txt

test:
  variables:
    DJANGO_SECRET_KEY: some-key-here
  script:
    - python manage.py test

build:
  # DO NOT KNOW HOW TO DO IT
I checked some templates from GitLab's website and found one for Docker:
# This file is a template, and might need editing before it works on your project.
# Official docker image.
image: docker:latest

services:
  - docker:dind

before_script:
  - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master

build:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master
However, neither of these works for me, because I need Python for testing and Docker for building the image. Is there a way to do this with GitLab CI without creating a custom Docker image that has both Python and Docker installed?
I found out that I can create multiple jobs, each with its own image:
stages:
  - test
  - build

test:
  stage: test
  image: python:3.7-slim
  variables:
    DJANGO_SECRET_KEY: key
  before_script:
    - pip3 install -r requirements.txt
  script:
    - python manage.py test
  only:
    - master

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" .
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
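If you also want images for branches other than master from the same pipeline, the GitLab template shown earlier already suggests the shape of it; a sketch that tags branch builds with the predefined $CI_COMMIT_REF_SLUG variable:

build-branch:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"
  except:
    - master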
I am running a pipeline in GitLab, where I am using gitlab-ci.yaml to create a Docker image and push it to Google Container Registry, but I am unable to run the commands. Here is my GitLab configuration:
image: docker:latest

services:
  - docker:dind

variables:
  SPRING_PROFILES_ACTIVE: gitlab-ci

stages:
  - build
  - imagecreation
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B"
  artifacts:
    paths:
      - target/*.war

docker-build:
  image: google/cloud-sdk
  stage: imagecreation
  script:
    - docker build -t gcr.io/project-test-to/counter .
    - gcloud docker -- push gcr.io/project-test-to/counter

k8s-deploy:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config unset container/use_client_certificate
    - gcloud container clusters get-credentials gitlab --zone us-central1-a --project project-test-to
    - kubectl apply -f deployment.yaml
Here is the error that I am getting:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: exit code 1

How can I run the docker commands in this image (google/cloud-sdk)?
You need to add the Docker-in-Docker service, then bind your job to that service with DOCKER_HOST. Here is my configuration:
job:publish:api:
  image: google/cloud-sdk:latest
  stage: publish
  when: on_success
  services:
    - docker:dind
  before_script:
    - echo $GCLOUD_SERVICE_KEY > ${HOME}/gcloud-service-key.json
    - gcloud auth activate-service-account --key-file ${HOME}/gcloud-service-key.json
    - gcloud auth configure-docker
  script:
    - docker build --compress -t ${GCLOUD_IMAGE_CI_FULLNAME} .
    - docker push ${GCLOUD_IMAGE_CI_FULLNAME}
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
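A note on the variables: docker:dind 19.03 and later enables TLS by default, which is why DOCKER_TLS_CERTDIR is cleared here so plain tcp://docker:2375 keeps working. If you would rather keep TLS on, a hedged variant (it assumes the runner is configured to share the /certs/client volume between the job and the dind service, and that the client variables are set manually because google/cloud-sdk does not set them for you):

  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "/certs/client"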
Since the gcloud docker command simply passes commands to docker, and the docker image google/cloud-sdk doesn't seem to ship with docker installed, you might need to mount your host socket into the container like this:
docker-build:
  image: google/cloud-sdk
  stage: imagecreation
  script:
    - docker build -t gcr.io/project-test-to/counter .
    - gcloud docker -- push gcr.io/project-test-to/counter
  volume:
    - "/var/run/docker.sock:/var/run/docker.sock"
Please also keep in mind that the gcloud docker command is deprecated.
Following up on the original answer: the gcloud docker command is now deprecated, as mentioned. To push an image to Container Registry, you just need to run the following:
$ docker push
I would also suggest setting up the docker command so that you can run it as a non-root user, by following the steps mentioned in this documentation.
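Putting the previous points together for the pipeline in the question, a hedged sketch of the imagecreation job that drops gcloud docker and pushes with plain docker after gcloud auth configure-docker; it still relies on the docker:dind service and DOCKER_HOST approach from the answer above, and reuses the GOOGLE_KEY variable and image name from the question:

docker-build:
  image: google/cloud-sdk
  stage: imagecreation
  services:
    - docker:dind
  variables:
    DOCKER_HOST: tcp://docker:2375
    DOCKER_TLS_CERTDIR: ""
  before_script:
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    # registers gcloud as a Docker credential helper for gcr.io
    - gcloud auth configure-docker
  script:
    - docker build -t gcr.io/project-test-to/counter .
    - docker push gcr.io/project-test-to/counter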
If it is CircleCI you are using to build the image, you can use setup_remote_docker as described in:
https://circleci.com/docs/2.0/configuration-reference/#setup_remote_docker
https://circleci.com/blog/docker-what-you-should-know/
Then the config.yml looks something like:
create_app_docker:
  docker:
    - image: google/cloud-sdk:latest
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Build docker with application
        command: |
          docker --version
          docker build -t my-app ./
          docker images
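To actually push the image from that CircleCI job, one more step under steps is needed; a sketch that mirrors the GitLab example's service-account login (GCLOUD_SERVICE_KEY) and uses a hypothetical gcr.io/project-test-to/my-app target, neither of which comes from the original CircleCI answer:

    - run:
        name: Push docker image
        command: |
          # GCLOUD_SERVICE_KEY and the gcr.io target below are assumptions
          echo "$GCLOUD_SERVICE_KEY" > gcloud-service-key.json
          gcloud auth activate-service-account --key-file gcloud-service-key.json
          gcloud auth configure-docker --quiet
          docker tag my-app gcr.io/project-test-to/my-app
          docker push gcr.io/project-test-to/my-app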
I'm trying to tag auto-built Docker images on my private registry within GitLab CI, but the 'release' job fails with:
Error response from daemon: No such image: dev.skibapro.de:5050/dransfeld/dockerci-test:v0.4
This is my .gitlab-ci.yml; the build and test jobs run without errors, and dockerci-test:v0.4 is present in my registry after the pipeline has run.
image: docker:stable

variables:
  DOCKER_DRIVER: overlay2
  IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_NAME

services:
  - docker:dind

stages:
  - build
  - test
  - release

before_script:
  - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

build:
  only:
    - tags
  stage: build
  script:
    - docker build -t $IMAGE_TAG -f docker/Dockerfile .
    - docker push $IMAGE_TAG

test:
  only:
    - tags
  stage: test
  script:
    - docker run $IMAGE_TAG /usr/local/bin/test.sh

release:
  only:
    - tags
  stage: release
  script:
    - docker tag $IMAGE_TAG "$CI_REGISTRY_IMAGE:latest"
This is the error I'm getting in the job log:
$ docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Login Succeeded
$ docker tag $IMAGE_TAG "$CI_REGISTRY_IMAGE:latest"
Error response from daemon: No such image: dev.skibapro.de:5050/dransfeld/dockerci-test:v0.4
ERROR: Job failed: exit code 1
I don't know if the image just isn't present yet when the 'release' stage runs, or if I'm asking Docker to do something it can't... I want the latest tag to be applied only after the test stage has finished successfully.
Although Docker seems to support tagging images in remote registries (Add remote tag to a docker image), GitLab needs to pull the image from the remote registry first. From GitLab's blog (https://about.gitlab.com/2016/05/23/gitlab-container-registry/):
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
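Applied to the configuration in the question, the release job would pull the image first and presumably also push the new latest tag (the original job was missing the push as well); something like:

release:
  only:
    - tags
  stage: release
  script:
    - docker pull $IMAGE_TAG
    - docker tag $IMAGE_TAG "$CI_REGISTRY_IMAGE:latest"
    - docker push "$CI_REGISTRY_IMAGE:latest"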