/bin/bash: line 117: kubectl: command not found gitlab-ci - docker

I am not able to use the kubectl command inside my gitlab-ci.yml file.
I have already gone through the steps in the documentation for adding an existing cluster.
Nowhere does it mention how I can actually use kubectl.
I tried the configuration below.
stages:
  - docker-build
  - deploy

docker-build-master:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master

deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl apply -f scheduler-deployment.yaml
  only:
    - master
But I am getting the error below:
Executing "step_script" stage of the job script
00:01
Using docker image sha256:c8d24d490701efec4c8d544978b3e4ecc4855475a221b002a8f9e5e473398805 for roffe/kubectl with digest roffe/kubectl@sha256:ba13f8ffc55c83a7ca98a6e1337689fad8a5df418cb160fa1a741c80f42979bf ...
$ kubectl apply -f scheduler-deployment.yaml
error: unable to recognize "scheduler-deployment.yaml": Get http://localhost:8080/api?timeout=32s: dial tcp [::1]:8080: connect: connection refused
Cleaning up file based variables
00:00
ERROR: Job failed: exit code 1
Clearly it is not able to connect to the cluster, or maybe it is trying to connect to a cluster inside the roffe/kubectl image container itself.
When I remove the image, I get this error:
/bin/bash: line 117: kubectl: command not found
I have gone through the whole doc and couldn't find a single example or reference that explains this part.
Please suggest how I can deploy to the existing k8s cluster.
Update
I went through this doc and I am using the defined variables in my gitlab-ci.yml to update the kubectl context.
But it still doesn't work.
deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - echo $HOME
    - echo $KUBECONFIG
    - echo $KUBE_URL
    - mkdir -p $HOME/.kube
    - echo -n $KUBECONFIG | base64 -d > $HOME/.kube/config
    - kubectl get pods
  only:
    - master
The error that I get:
$ echo $HOME
/root
$ echo $KUBECONFIG
$ echo $KUBE_URL
$ mkdir -p $HOME/.kube
$ echo -n $KUBECONFIG | base64 -d > $HOME/.kube/config
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1

To automate deployment to an existing cluster, you need to follow the steps below:
1. Add your cluster to the GitLab project
Follow this doc and add your existing cluster.
2. Build your project and push the image to a registry
stages:
  - docker-build
  - deploy

docker-build-master:
  image: docker:latest
  stage: docker-build
  services:
    - docker:dind
  before_script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" $CI_REGISTRY
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:prod" .
    - docker push "$CI_REGISTRY_IMAGE:prod"
  only:
    - master
3. Apply your deployment YAML to the cluster
To use kubectl inside gitlab-ci.yml, you need an image that has kubectl; the one you have used will work.
But the kubectl inside that container has no idea about the context of the cluster that you added earlier.
This is where GitLab's environment variables come into play:
The default environment scope is *, which means all jobs, regardless of their environment, use that cluster. Each scope can be used only by a single cluster in a project, and a validation error occurs if otherwise. Also, jobs that don’t have an environment keyword set can’t access any cluster. (see here)
So, when you added your cluster, it went by default into the * scope and will be passed to every job, provided the job defines some environment.
Also, when you add a cluster, GitLab creates environment variables for that cluster by default (see here).
The important thing to notice is that it also adds an environment variable named KUBECONFIG.
In order to access your Kubernetes cluster, kubectl uses a configuration file. The default kubectl configuration file is located at ~/.kube/config and is referred to as the kubeconfig file.
kubeconfig files organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command uses these files to find the information it needs to choose a cluster and communicate with it.
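For illustration, a minimal kubeconfig has roughly this shape (every name, URL and token below is a placeholder, not something GitLab generates for you):
apiVersion: v1
kind: Config
clusters:
- name: my-cluster                          # placeholder cluster name
  cluster:
    server: https://my-cluster.example.com  # placeholder API server URL
users:
- name: gitlab-deploy                       # placeholder user entry
  user:
    token: <service-account-token>          # placeholder credential
contexts:
- name: my-context
  context:
    cluster: my-cluster
    user: gitlab-deploy
    namespace: default
current-context: my-context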
The loading order follows these rules:
If the --kubeconfig flag is set, then only the given file is loaded. The flag may only be set once and no merging takes place.
If the $KUBECONFIG environment variable is set, then it is parsed as a list of filesystem paths according to the normal path delimiting rules for your system.
Otherwise, the ${HOME}/.kube/config file is used and no merging takes place.
So, the kubectl command can use the KUBECONFIG variable to set the context (see here).
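As a quick sanity check (a hypothetical debug job, not part of the original answer), you can print what kubectl actually sees before applying anything. Note that it still needs the environment keyword, otherwise KUBECONFIG stays empty, as in the question's update:
debug-kubeconfig:
  stage: deploy
  image: roffe/kubectl
  script:
    - echo "$KUBECONFIG"              # path of the config file GitLab generated for this job
    - kubectl config get-contexts     # lists the context injected by the cluster integration
    - kubectl cluster-info            # fails fast if the job cannot reach the API server
  environment:
    name: production
  only:
    - master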
So, your deployment job will look like the one below:
deploy-prod:
  stage: deploy
  image: roffe/kubectl
  script:
    - kubectl get pods
    - kubectl get all
    - kubectl get namespaces
    - kubectl apply -f scheduler-deployment.yaml
  environment:
    name: production
    kubernetes:
      namespace: default
  only:
    - master
You can also set a namespace for the job with environment.kubernetes.namespace.
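For instance (a sketch; the namespace name below is a placeholder):
  environment:
    name: production
    kubernetes:
      namespace: my-app-namespace   # placeholder; overrides the default namespace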

This image (roffe/kubectl) doesn't have the kubectl package, so you can add the kubectl package and the configuration needed to connect to your Kubernetes cluster.
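If you go down that route instead of relying on the GitLab cluster integration, a rough sketch of configuring kubectl by hand could look like the job below; K8S_SERVER and K8S_TOKEN are CI/CD variables you would have to define yourself (the same pattern appears in one of the related questions further down):
deploy-prod:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: [""]                  # override the image's kubectl entrypoint so the job shell can run
  before_script:
    # build a kubectl context by hand from your own CI/CD variables
    - kubectl config set-cluster deploy-cluster --server="$K8S_SERVER" --insecure-skip-tls-verify
    - kubectl config set-credentials gitlab --token="$K8S_TOKEN"
    - kubectl config set-context deploy-cluster --cluster=deploy-cluster --user=gitlab
    - kubectl config use-context deploy-cluster
  script:
    - kubectl apply -f scheduler-deployment.yaml
  only:
    - master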

Related

start docker container from within self hosted bitbucket pipeline (dind)

I work on a spring-boot based project and use a local machine as test environment to deploy it as a docker container.
I am in the middle of creating a bitbucket pipeline that automates everything between building and deploying. For this pipeline I make use of a self hosted runner (docker) that also runs on the same machine and docker instance where I plan to deploy my project.
I managed to successfully build the project (mvn and docker), and load the docker image into my GCP container registry.
My final deployment step (docker run xxx, see the yml script below) was also successful, but since it runs in a container itself, the script was not executed against the top-level Docker daemon.
As far as I understand, the runner itself has access to the host Docker because docker.sock is mounted, but for each step another container is created which does not have access to docker.sock, right? So basically I need to know how to give access to this file, unless there is a better solution.
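A quick way to verify that assumption is a throwaway diagnostic step (hypothetical, for debugging only; the fallbacks just print a message if Docker is unreachable):
- step:
    name: Inspect Docker access
    runs-on:
      - 'self.hosted'
      - 'linux'
    script:
      - ls -l /var/run/docker.sock || echo "docker.sock is not mounted in this step container"
      - docker version || echo "no Docker daemon reachable from this step"
    services:
      - docker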
Here is the shortened pipeline definition:
image: maven:3.8.7-openjdk-18

definitions:
  services:
    docker:
      image: docker:dind

pipelines:
  default:
    # build only for feature branches or so
  branches:
    test:
      # build, docker and upload steps
      - step:
          name: Deploy
          deployment: test
          image: google/cloud-sdk:alpine
          runs-on:
            - 'self.hosted'
            - 'linux'
          caches:
            - docker
          script:
            - IMAGE_NAME=$BITBUCKET_REPO_SLUG
            - VERSION="${BITBUCKET_BUILD_NUMBER}"
            - DOCKER_IMAGE="${DOCKER_REGISTRY}/${IMAGE_NAME}:${VERSION}"
            # Authenticating with the service account key file
            - echo $GCLOUD_API_KEYFILE > ./gcloud-api-key.json
            - gcloud auth activate-service-account --key-file gcloud-api-key.json
            - gcloud config set project $GCLOUD_PROJECT
            # Login with docker and stop old container (if exists) and run new one
            - cat ./gcloud-api-key.json | docker login -u _json_key --password-stdin https://eu.gcr.io
            - docker ps -q --filter "name=${IMAGE_NAME}" | xargs -r docker stop
            - docker run -d -p 82:8080 -p 5005:5005 --name ${IMAGE_NAME} --rm ${DOCKER_IMAGE}
          services:
            - docker

Which GitLab executor should I choose so that I can use many Docker images in a pipeline?

I have this pipeline to execute:
stages:
  - build-gitlab
  - deploy-uat

build:
  image: node:14-alpine
  stage: build-gitlab
  services:
    - docker
  before_script:
    - docker login $CI_REGISTRY_URL -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD
  script:
    - docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA .
    - docker tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA $CI_FRONTEND_REGISTRY_URL
    - docker push $CI_FRONTEND_REGISTRY_URL

deploy:
  image:
    name: bitnami/kubectl:latest
  stage: deploy-uat
  before_script:
    - kubectl config set-cluster deploy-cluster --server="$K8S_SERVER" --insecure-skip-tls-verify
    - kubectl config set-credentials gitlab --token=$(echo $K8S_TOKEN | base64 -d)
    - kubectl config set-context deploy-cluster --cluster=deploy-cluster --namespace=ns-frontend-dev --user=gitlab
    - kubectl config use-context deploy-cluster
  script:
    - envsubst < deploy.tmpl > deploy.yaml
    - kubectl apply -f deploy.yaml
Initially I defined a runner for my GitLab with the shell executor. Docker is installed on my runner, which is why the build stage executed successfully. But since I would like to use multiple Docker images, as you can see in my gitlab-ci.yaml file, the shell executor is not the appropriate one.
I saw this documentation about GitLab executors, but it is not explicit enough.
I registered a new runner with the docker executor, then I got this result:
Preparing the "docker" executor
Using Docker executor with image node:14-alpine ...
Starting service docker:latest ...
Pulling docker image docker:latest ...
Using docker image sha256:0f8d12a73562adf6588be88e37974abd42168017f375a1e160ba08a7ee3ffaa9 for docker:latest with digest docker@sha256:75026b00c823579421c1850c00def301a6126b3f3f684594e51114c997f76467 ...
Waiting for services to be up and running (timeout 30 seconds)...
*** WARNING: Service runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0 probably didn't start properly.
Health check error:
service "runner-jdn9pn3z-project-33-concurrent-0-0e760484a3d3cab3-docker-0-wait-for-service" health check: exit code 1
Health check container logs:
2023-01-18T15:50:31.037166246Z FATAL: No HOST or PORT found
and the deploy part did not succeed. Which is the right executor to choose between:
docker, shell, ssh, kubernetes, custom, parallels, virtualbox, docker+machine, docker-ssh+machine, instance, docker-ssh
And how do I use it?

DinD gitlab-runner: Warning service runner-xxx-project-xx-concurrent-x-docker-x probably didn't start properly

I tested a gitlab-runner on a virtual machine and it worked perfectly. I followed this tutorial, at the part Use docker-in-docker executor:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
When I register a runner with exactly the same configuration on my dev server, the runner is called when there is a commit, but I get a lot of errors:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
ContainerStart: Error response from daemon: Cannot link to a non running container: /runner-XXX-project-XX-concurrent-X-docker-X AS /runner-XXX-project-XX-concurrent-X-docker-X-wait-for-service/service (executor_docker.go:1337:1s)
DEPRECATION: this GitLab server doesn't support refspecs, gitlab-runner 12.0 will no longer work with this version of GitLab
$ docker info
error during connect: Get http://docker:2375/v1.39/info: dial tcp: lookup docker on MY.DNS.IP:53: no such host
ERROR: Job failed: exit code 1
I believe all these errors are due to the first warning. I tried to:
- Add a second DNS with the 8.8.8.8 IP to my machine: same error.
- Add privileged = true manually in /etc/gitlab-runner/config.toml (see the config.toml sketch after this list): same error, so it is not due to the privileged = true parameter.
- Replace tcp://docker:2375 with tcp://localhost:2375: then docker info can't find a docker daemon on the machine.
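For reference, this is roughly what that config.toml looks like with the docker executor (all values are placeholders for my real ones):
concurrent = 1
check_interval = 0
[[runners]]
  name = "my-dev-runner"                 # placeholder
  url = "https://gitlab.example.com/"
  token = "xxxxxxxx"
  executor = "docker"
  [runners.docker]
    image = "docker:stable"
    privileged = true                    # needed for the docker:dind service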
gitlab-ci.yml content :
image: docker:stable
stages :
- build
variables:
DOCKER_HOST: tcp://docker:2375/
DOCKER_DRIVER: overlay2
services:
- docker:dind
before_script:
- docker info
build-folder1:
stage: build
script:
- docker build -t image1 folder1/
- docker run --name docker1 -p 3001:5000 -d image1
only:
refs:
- dev
changes:
- folder1/**/*
build-folder2:
stage: build
script:
- docker build -t image2 folder2/
- docker run --name docker2 -p 3000:3000 -d image2
only:
refs:
- dev
changes:
- folder2/**/*
If folder1 of the dev branch is modified, we build and run the docker1 container.
If folder2 of the dev branch is modified, we build and run the docker2 container.
docker version on the dev server:
docker -v
Docker version 17.03.0-ce, build 3a232c8
gitlab-runner version on the dev server:
gitlab-runner -v
Version: 11.10.1
I will try to provide an answer for you, as I came to fix this same problem when trying to run DinD.
This message:
*** WARNING: Service runner-XXX-project-XX-concurrent-X-docker-X probably didn't start properly.
means that either you have not properly configured your runner, or it is not linked by the gitlab-ci.yml file. You should be able to check the ID of the runner used on the log page in GitLab.
To start with, verify that you entered the gitlab-runner register command correctly, with the proper registration token.
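A registration command along those lines would look roughly like this (URL, token and tag are placeholders):
gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "YOUR_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --docker-privileged \
  --tag-list "build_docker" \
  --description "dind-runner"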
Second, since you are setting up a specific runner manually, verify that you have given it some unique tag (e.g. build_docker), and reference that tag from your gitlab-ci.yml file. For example:
...
build-folder1:
  stage: build
  script:
    - docker build -t image1 folder1/
    - docker run --name docker1 -p 3001:5000 -d image1
  tags:
    - build_docker
...
That way it should work.

Docker container does not run on an EC2 instance that is part of an EC2 cluster

I am trying to automate the deployment process of my project. The environment looks like this:
- we use GitLab to store our code
- we execute a CI/CD pipeline within GitLab to build a Docker image and store it in an Amazon repository
- once the build stage is completed, Docker has to run (in the deployment stage) the latest image on the first of two instances and, after successful execution, scale the containers to the second instance.
This is how the .gitlab-ci.yml file looks:
image: docker:latest
services:
  - docker:dind

stages:
  - build
  - deploy

variables:
  DOCKER_DRIVER: overlay2

testBuild:
  stage: build
  script:
    - docker login -u AWS -p <password> <link to Amazons' repo>
    - docker build -t <repo/image:latest> app/
    - docker push <repo/image:latest>

testDeploy:
  stage: deploy
  variables:
    AWS_DEFAULT_REGION: "us-east-2"
    AWS_ACCESS_KEY_ID: "access key"
    AWS_SECRET_ACCESS_KEY: "ssecretAK"
    AWS_CLUSTER: "testCluster"
    AWS_SIZE: "2"
  before_script:
    - apk add --update curl
    - curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
    - chmod +x /usr/local/bin/ecs-cli
  script:
    - docker login -u AWS -p <password> <repo_link>
    - docker run --rm --name <name-ofcontainer> -p 80:8000 -i <repo/image:latest>
    - ecs-cli configure --region $AWS_DEFAULT_REGION --access-key $AWS_ACCESS_KEY_ID --secret-key $AWS_SECRET_ACCESS_KEY --cluster $AWS_CLUSTER
    - ecs-cli scale --capability-iam --size $AWS_SIZE
  only:
    - development
Now, when the script is successfully executed, I SSH into the instances and enter docker ps -a: it does not list a running container, and it does not find the image with docker images either.
If I enter the commands manually on one of the instances, the website is available.
My question is: how do I make the container available?
EDIT 1:
We use a shared runner, if that is what you are asking. The reason we use docker:dind is that when we do not use it, the following error occurs and we cannot go further:
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

GitLab CI runner can't connect to unix:///var/run/docker.sock in kubernetes

GitLab is running in a Kubernetes cluster. The runner can't build a Docker image with the build artifacts. I've already tried several approaches to fix this, but no luck. Here are some config snippets:
.gitlab-ci.yml
image: docker:latest
services:
  - docker:dind

variables:
  DOCKER_DRIVER: overlay

stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-build:
  stage: package
  script:
    - docker build -t gitlab.my.com/group/app .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
    - docker push gitlab.my.com/group/app
config.toml
concurrent = 1
check_interval = 0

[[runners]]
  name = "app"
  url = "https://gitlab.my.com/ci"
  token = "xxxxxxxx"
  executor = "kubernetes"
  [runners.kubernetes]
    privileged = true
    disable_cache = true
Package stage log:
running with gitlab-ci-multi-runner 1.11.1 (a67a225)
on app runner (6265c5)
Using Kubernetes namespace: default
Using Kubernetes executor with image docker:latest ...
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Waiting for pod default/runner-6265c5-project-4-concurrent-0h9lg9 to be running, status is Pending
Running on runner-6265c5-project-4-concurrent-0h9lg9 via gitlab-runner-3748496643-k31tf...
Cloning repository...
Cloning into '/group/app'...
Checking out 10d5a680 as master...
Skipping Git submodules setup
Downloading artifacts for maven-build (61)...
Downloading artifacts from coordinator... ok id=61 responseStatus=200 OK token=ciihgfd3W
$ docker build -t gitlab.my.com/group/app .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
ERROR: Job failed: error executing remote command: command terminated with non-zero exit code: Error executing in Docker Container: 1
What am I doing wrong?
You don't need to use this:
DOCKER_DRIVER: overlay
because it seems like overlay isn't supported here, so the svc-0 container is unable to start with it:
$ kubectl logs -f `kubectl get pod |awk '/^runner/{print $1}'` -c svc-0
time="2017-03-20T11:19:01.954769661Z" level=warning msg="[!] DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING [!]"
time="2017-03-20T11:19:01.955720778Z" level=info msg="libcontainerd: new containerd process, pid: 20"
time="2017-03-20T11:19:02.958659668Z" level=error msg="'overlay' not found as a supported filesystem on this host. Please ensure kernel is new enough and has overlay support loaded."
Also, add export DOCKER_HOST="tcp://localhost:2375" to the docker-build job:
docker-build:
  stage: package
  script:
    - export DOCKER_HOST="tcp://localhost:2375"
    - docker build -t gitlab.my.com/group/app .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
    - docker push gitlab.my.com/group/app
When using Kubernetes, you have to adjust your build image to connect with the Docker engine.
Add to your build image:
DOCKER_HOST=tcp://localhost:2375
Quote from the docs:
Running the docker:dind also known as the docker-in-docker image is also
possible but sadly needs the containers to be run in privileged mode.
If you're willing to take that risk other problems will arise that might not
seem as straight forward at first glance. Because the docker daemon is started
as a service usually in your .gitlab-ci.yaml it will be run as a separate
container in your pod. Basically containers in pods only share volumes assigned
to them and an IP address by wich they can reach each other using localhost.
/var/run/docker.sock is not shared by the docker:dind container and the docker
binary tries to use it by default. To overwrite this and make the client use tcp
to contact the docker daemon in the other container be sure to include
DOCKER_HOST=tcp://localhost:2375 in your environment variables of the build container.
Gitlab-CI on Kubernetes
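Equivalently (a sketch, not from the quoted post), the same setting can be declared once as a job variable instead of exporting it in every script line:
docker-build:
  stage: package
  variables:
    DOCKER_HOST: tcp://localhost:2375   # point the docker CLI at the dind container in the same pod
  script:
    - docker build -t gitlab.my.com/group/app .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN gitlab.my.com/group/app
    - docker push gitlab.my.com/group/app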
Based on @Yarik's comment, what worked for me was
- export DOCKER_HOST=$DOCKER_PORT
No other answers worked.
I had the same problem, and I could not get the above workarounds to work for me (I did not try the volumes trick mentioned by @fkpwolf).
GitLab now has an alternative solution using kaniko, which did work for me.
In that case, the .gitlab-ci.yaml could then be something like this:
stages:
  - build
  - package
  - deploy

maven-build:
  image: maven:3-jdk-8
  stage: build
  script: "mvn package -B --settings settings.xml"
  artifacts:
    paths:
      - target/*.jar

docker-kaniko-build:
  stage: package
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"gitlab.my.com\":{\"username\":\"gitlab-ci-token\",\"password\":\"$CI_BUILD_TOKEN\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination gitlab.my.com/group/app
The GitLab docs mention that:
kaniko solves two problems with using the docker-in-docker build method:
Docker-in-docker requires privileged mode in order to function, which is a significant security concern.
Docker-in-docker generally incurs a performance penalty and can be quite slow.
See: https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
