In my company we use GitLab for source control and also use GitLab's Docker registry.
Each repository contains a Dockerfile for building an image and Kubernetes YAML files for defining the Pod/Service/Deployment/etc. of that project.
The problem is that in the YAML files the image references the GitLab registry by its URL.
(Example from the Redis repository)
I don't like this for two reasons:
It would make switching to another registry provider really hard, since you'd have to go over your entire code base and change all the Kubernetes YAML files.
If a developer wants to test their app via minikube, they have to make sure the image is stored and up to date in the registry. In our company, pushing to the registry is done as part of the CI pipeline.
Is there a way to avoid storing the registry URL in the repository?
Logging in to the Docker registry beforehand doesn't solve it; you still have to provide the full image name with the URL.
You can use an environment variable together with a small shell script.
Here is a working example.
1) Create an environment variable
export IMAGE_URL=node:7-alpine
2) Create a one-line shell script. Its purpose is to replace the $IMAGE_URL placeholder in the YAML file with the actual value of your environment variable (note that the current value is baked into the script at the moment you create it).
echo sed 's/\$IMAGE_URL'"/$IMAGE_URL/g" > image_url.sh
3) Create a sample YAML template, for example mypod.yaml.tmpl
cat > mypod.yaml.tmpl << 'EOL'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: $IMAGE_URL
    # Just spin & wait forever
    command: [ "/bin/ash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
EOL
4) Run kubectl apply
cat mypod.yaml.tmpl | sh image_url.sh | kubectl apply -f -
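If the envsubst utility from GNU gettext happens to be available on your machine, you can skip the hand-rolled sed script and substitute the variable directly; this is only a sketch assuming the same IMAGE_URL variable and template file as above:

export IMAGE_URL=node:7-alpine
envsubst '$IMAGE_URL' < mypod.yaml.tmpl | kubectl apply -f -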
I have a Flask web application running as a Docker image that is deployed to a Kubernetes pod running on GKE. There are a few environment variables necessary for the application which are included in the docker-compose.yaml like so:
...
services:
  my-app:
    build:
      ...
    environment:
      VAR_1: foo
      VAR_2: bar
...
I want to keep these environment variables in the docker-compose.yaml so I can run the application locally if necessary. However, when I go to deploy this using a Kubernetes deployment, these variables are missing from the pod and it throws an error. The only way I have found to resolve this is to add the following to my deployment.yaml:
containers:
  - name: my-app
    ...
    env:
      - name: VAR_1
        value: foo
      - name: VAR_2
        value: bar
...
Is there a way to migrate the values of these environment variables directly from the Docker container image into the Kubernetes pod?
I have searched the Kubernetes and Docker documentation and Google, and the only solutions I can find say to just include the environment variables in the deployment.yaml; but I'd like to retain them in the docker-compose.yaml so I can run the container locally. I couldn't find anything that explains how Docker container environment variables and Kubernetes environment variables interact.
Kompose can translate Docker Compose files into Kubernetes resources:
https://kubernetes.io/docs/tasks/configure-pod-container/translate-compose-kubernetes/
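A typical invocation looks roughly like this; the generated file names depend on your service names (my-app-deployment.yaml is simply what it would likely produce for the my-app service above):

kompose convert -f docker-compose.yaml
kubectl apply -f my-app-deployment.yaml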
Let us assume the docker-compose file and Kubernetes work the same way:
both take a ready-to-use image and schedule a new container or pod based on it.
The image accepts a set of environment variables; docker-compose passes those variables one way and Kubernetes another (it's a matter of syntax).
So you can use the same image with Compose and with Kubernetes, but the syntax for passing the environment variables will differ.
If you want them to persist regardless of the deployment tool, you can always hardcode those environment variables in the image itself, in other words in the Dockerfile that you used to build the image.
I don't recommend this approach, of course, and it might not work for you if you are using pre-built official images, but below is an example of a Dockerfile with the env included.
FROM alpine:latest
# this is how you hardcode it
ENV VAR_1 foo
COPY helloworld.sh .
RUN chmod +x /helloworld.sh
CMD ["/helloworld.sh"]
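To confirm the value really is baked into the image no matter which orchestrator runs it, you can build and inspect it locally (the env-demo tag is just for illustration):

docker build -t env-demo .
docker run --rm env-demo env | grep VAR_1   # prints VAR_1=foo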
If you want to move toward managing this in a better way, you can use an .env file with your docker-compose so you can update all the variables in one place, especially when your compose file has several apps that share the same variables.
app1:
  image: ACRHOST/app1:latest
  env_file:
    - .env
And on the Kubernetes side, you can create a ConfigMap, link your pods to that ConfigMap, and then only update the values in the ConfigMap.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
kubectl create configmap <map-name> <data-source>
Also note that you can populate the ConfigMap directly from the .env file that you use with Docker; check the link above.
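For example, a minimal sketch (the ConfigMap name my-app-env is just an illustration):

kubectl create configmap my-app-env --from-env-file=.env

and then reference all of its keys from the pod spec with envFrom:

containers:
  - name: app1
    image: ACRHOST/app1:latest
    envFrom:
      - configMapRef:
          name: my-app-env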
The docker-compose.yml file and the Kubernetes YAML file serve similar purposes; both explain how to create a container from a Docker image. The Compose file is only read when you're running docker-compose commands, though; the configuration there isn't read when deploying to Kubernetes and doesn't make any permanent changes to the image.
If something needs to be set as an environment variable but really is independent of any particular deployment system, set it as an ENV in your image's Dockerfile.
ENV VAR_1=foo
ENV VAR_2=bar
# and don't mention either variable in either Compose or Kubernetes config
If you can't specify it this way (e.g., database host names and credentials) then you need to include it in both files as you've shown. Note that some of the configuration might be very different; a password might come from a host environment variable in Compose but from a Kubernetes Secret in Kubernetes.
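For instance, on the Kubernetes side a database password would typically be injected from a Secret rather than written inline (the Secret name db-credentials and the key password here are purely illustrative):

env:
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-credentials
        key: password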
I'm executing CI jobs with a gitlab-ci runner that is configured with the Kubernetes executor and actually runs on OpenShift. I want to be able to build Docker images from Dockerfiles, with the following constraints:
The runner (OpenShift pod) runs as a user with a high and random UID (234131111111, for example).
The runner pod is not privileged.
I don't have cluster admin permissions or the ability to reconfigure the runner.
So obviously DinD cannot work, since it requires special Docker device configuration. Podman, kaniko, buildah, BuildKit and makisu don't work for a random non-root user without any volumes.
Any suggestions?
DinD (Docker-in-Docker) does work in OpenShift 4 GitLab runners... I just made it work, and it was... a fight! Fact is, the solution is extremely brittle to any version change elsewhere. I just tried, for example, swapping docker:20.10.16 for docker:latest or docker:stable, and that breaks.
Here is the configuration in which it does work for me:
OpenShift 4.12
the RedHat certified GitLab Runner Operator installed via the OpenShift Cluster web console / OperatorHub; it features gitlab-runner v 14.2.0
docker:20.10.16 & docker:20.10.16-dind
Reference docs:
GitLab Runner Operator installation guide: https://cloud.redhat.com/blog/installing-the-gitlab-runner-the-openshift-way
Runner configuration details: https://docs.gitlab.com/runner/install/operator.html and https://docs.gitlab.com/runner/configuration/configuring_runner_operator.html
and this key one about matching pipeline and runner settings: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html which is the one to follow very precisely for your settings in the GitLab .gitlab-ci.yml pipeline definition AND the runner configuration config.toml file.
Installation steps:
follow docs 1 and 2 referenced above to install the GitLab Runner Operator in OpenShift, but do not yet instantiate a Runner from the operator
on your GitLab server, copy the runner registration token for a group-wide or project-wide runner registration
elsewhere, in a terminal session where the oc CLI is installed, log in to the OpenShift cluster via the 'oc' CLI so that you have the cluster:admin or system:admin role
create an OpenShift secret like:
vi gitlab-runner-secret.yml
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
  namespace: openshift-operators
type: Opaque
stringData:
  runner-registration-token: myRegistrationTokenHere
oc apply -f gitlab-runner-secret.yml
create a custom configuration map; note that the OpenShift operator will merge the supplied content into the config.toml generated by the GitLab Runner Operator itself; therefore, we only provide the fields we want to complement (we cannot even override an existing field value). Note too that the executor is preset to "kubernetes" by the operator. For a detailed understanding, see the docs above.
vi gitlab-runner-config-map.toml
[[runners]]
  [runners.kubernetes]
    host = ""
    tls_verify = false
    image = "alpine"
    privileged = true
    [[runners.kubernetes.volumes.empty_dir]]
      name = "docker-certs"
      mount_path = "/certs/client"
      medium = "Memory"
oc create configmap gitlab-runner-config-map --from-file config.toml=gitlab-runner-config-map.toml
create a Runner to be deployed by the operator (adjust the URL)
vi gitlab-runner.yml
apiVersion: apps.gitlab.com/v1beta2
kind: Runner
metadata:
  name: gitlab-runner
  namespace: openshift-operators
spec:
  gitlabUrl: https://gitlab.example.com/
  buildImage: alpine
  token: gitlab-runner-secret
  tags: openshift, docker
  config: gitlab-runner-config-map
oc apply -f gitlab-runner.yml
you should then see the runner just created via the OpenShift console (Installed Operators > GitLab Runner > GitLab Runner tab), followed by the automatic creation of a pod (see Workloads). You may even enter a terminal session on the pod and type, for instance, gitlab-runner list to see the location of the config.toml file. You should also see the runner listed at the group or project level on the GitLab server console. Of course, firewalls between your OpenShift cluster and your GitLab server may ruin your endeavors at this point...
the rest of the trick takes place in your .gitlab-ci.yml file, e.g. (extract showing only one job at some stage). For a detailed understanding, see doc 3 above. The variable MY_ARTEFACT points to a sub-directory of the relevant git project/repo that contains a Dockerfile you have already built successfully in your IDE, for instance; and REPO_PATH holds a common prefix string including a Docker Hub repository path and some extra name piece. Adjust all that to your convenience, BUT don't edit any of the first three variables defined under this job and do not change the docker[dind] version; it would break everything.
my_job_name:
  stage: my_stage_name
  tags:
    - openshift # to run on specific runner
    - docker
  image: docker:20.10.16
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
    DOCKER_TLS_VERIFY: 1
    DOCKER_CERT_PATH: "$DOCKER_TLS_CERTDIR/client"
    REPO_TAG: ${REPO_PATH}-${MY_ARTEFACT}:${IMAGE_TAG}
  services:
    - docker:20.10.16-dind
  before_script:
    - sleep 10 && docker info # give time for starting the service and confirm good setup in logs
    - echo $DOKER_HUB_PWD | docker login -u $DOKER_HUB_USER --password-stdin
  script:
    - docker build -t $REPO_TAG ./$MY_ARTEFACT
    - docker push $REPO_TAG
There you are; trigger the GitLab pipeline...
If you misconfigured anything, you'll get the usual error message "is the docker daemon running?" after a complaint about failing to access "/var/run/docker.sock" or failing to connect to "tcp://localhost:2375". And no, port 2376 is not a typo but the exact value to use in the DOCKER_HOST variable above.
So far so good? ... not yet!
Security settings:
Well, you may now see your Docker builds starting (meaning DinD is OK), and then failing or being blocked for security reasons.
Although we set 'privileged = true' in the runner config map:
Docker comes with a nasty (and built-in) behavior: by default it runs as 'root' in every container it builds, and for building containers.
On the other hand, OpenShift is built with strict security in mind and prevents any pod from running as root.
So we have to change the security settings to let those runners execute in privileged mode, which is why it is important to restrict these permissions to a namespace, here 'openshift-operators', and to the specific account 'gitlab-runner-sa'.
oc adm policy add-scc-to-user privileged -z gitlab-runner-sa -n openshift-operators
The above will create a RoleBinding that you may remove or change as required. 'gitlab-runner-sa' is the service account used by the GitLab Runner Operator to instantiate runner pods, '-z' targets the permission settings at a service account (not a regular user account), and '-n' references the specific namespace we use here.
So you can now build images... but you may still be defeated when importing those images into an OpenShift project and trying to run the resulting pods. There are two constraints to anticipate:
OpenShift will block any image that requires running as 'root', i.e. in privileged mode (the default in docker run and docker compose up). So please ensure that all the images you build with Docker-in-Docker can run as a non-root user, declared with the Dockerfile directive USER <username> (see the sketch at the end of this answer).
... but the above may not be sufficient! Indeed, by default OpenShift generates a random user ID to launch the container and ignores the one set in the Dockerfile with USER. To effectively allow the container to switch to the defined user, you have to bind the service account that runs your pods to the "anyuid" Security Context Constraint. This is easy to achieve via a role binding, or with this oc CLI command:
oc adm policy add-scc-to-user anyuid -n myProjectName -z default
where -z denotes a service account in the -n namespace.
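As an illustration of the USER requirement above, a minimal Dockerfile sketch (the base image, script name and UID are arbitrary choices; OpenShift may still substitute its own random UID in the root group unless anyuid is granted):

FROM alpine:latest
COPY app.sh /app/app.sh
# give the root group the same rights as the owner, since OpenShift may run the container with an arbitrary UID in group 0
RUN chmod -R g=u /app
# declare a non-root user; a numeric UID lets the platform verify it is non-root
USER 1001
CMD ["/app/app.sh"]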
I have this Dockerfile:
FROM rabbitmq:3.7.12-management
CMD . /files/envinfo && echo $RABBITMQ_DEFAULT_USER && rabbitmq-server
In the envinfo I have this content
export RABBITMQ_DEFAULT_USER='anothername'
When the container starts up, the echo of RABBITMQ_DEFAULT_USER really does print anothername. But when the service starts, it doesn't see it.
If I instead set the environment variable from the Kubernetes file, it works as it should.
You can see the rabbitmq image I extend here.
https://github.com/docker-library/rabbitmq/blob/35b41e318d9d9272126f681be74bcbfd9712d71b/3.8/ubuntu/Dockerfile
I have another process that fetches the file and puts it in /files/envinfo to make it available to this Docker image when it starts, so I can't use environment settings from Kubernetes.
Looking forward to hearing some suggestions =)
I agree with @code-gorilla: use Kubernetes environment variables. But another way to do it is to source the environment variables before the entrypoint. Note that the exec form of ENTRYPOINT does not go through a shell, so the sourcing has to be wrapped in one:
ENTRYPOINT ["/bin/sh", "-c", ". /files/envinfo && exec docker-entrypoint.sh rabbitmq-server"]
Overriding CMD only changes the arguments passed to the ENTRYPOINT; that's probably why it doesn't work for you.
You can try to debug it further:
1) Connect to the container and check whether the env variable is set within it
docker exec -it <container-id> /bin/bash
Then in your container:
echo $RABBITMQ_DEFAULT_USER
2) Use the Kubernetes environment variable configuration instead of script execution in CMD
See the docs.
apiVersion: v1
kind: Pod
metadata:
  name: envar-demo
  labels:
    purpose: demonstrate-envars
spec:
  containers:
    - name: envar-demo-container
      image: gcr.io/google-samples/node-hello:1.0
      env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
        - name: DEMO_FAREWELL
          value: "Such a sweet sorrow"
Note: I suspect that an environment variable set within the CMD command is not available in all shells of the container, e.g. when you open a new bash within it. This is what the Kubernetes config takes care of.
I'm using Kubernetes for a production environment (I'm new to these kinds of configurations). This is an example of one of my deployment files (with changes):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myProd
  labels:
    app: thisIsMyProd
spec:
  replicas: 3
  selector:
    matchLabels:
      app: thisIsMyProd
  template:
    metadata:
      labels:
        app: thisIsMyProd
    spec:
      containers:
        - name: myProd
          image: DockerUserName/MyProdProject # <==== Latest
          ports:
            - containerPort: 80
Now, I wanted to make it work with Travis CI, so I made something similar to this:
sudo: required
services:
  - docker
env:
  global:
    - LAST_COMMIT_SHA=$(git rev-parse HEAD)
    - SERVICE_NAME=myProd
    - DOCKER_FILE_PATH=.
    - DOCKER_CONTEXT=.
addons:
  apt:
    packages:
      - sshpass
before_script:
  - docker build -t $SERVICE_NAME:latest -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
script:
  # Mocking run test cases
deploy:
  - provider: script
    script: bash ./deployment/deploy-production.sh
    on:
      branch: master
And finally here is the deploy-production.sh script:
#!/usr/bin/env bash
# Log in to the docker CLI
echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_USERNAME" --password-stdin
# Build images
docker build -t $DOCKER_USERNAME/$SERVICE_NAME:latest -t $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA -f $DOCKER_FILE_PATH $DOCKER_CONTEXT
# Take those images and push them to docker hub
docker push $DOCKER_USERNAME/$SERVICE_NAME:latest
docker push $DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA
# Run deployment script in deployment machine
export SSHPASS=$DEPLOYMENT_HOST_PASSWORD
ssh-keyscan -H $DEPLOYMENT_HOST >> ~/.ssh/known_hosts
# Run Kubectl commands
kubectl apply -f someFolder
kubectl set image ... # instead of the `...` the rest of the command that sets the image with the SHA on the deployments
Now here are my questions:
When Travis finishes its work, the deploy-production.sh script will run on merges to the master branch. Now I have a concern about the kubectl step: for the first deployment, when we apply the deployment it will pull the images from Docker Hub and try to run them, and after that the set image command will run, changing the image of these deployments. Will this make the deployment happen twice?
When I deployed a second time, I noticed the deployment used an old version of the latest image because it found it locally. After searching I found imagePullPolicy and set it to Always. But imagine I didn't use that imagePullPolicy attribute, what would really happen in this case? I know the first apply command would run containers with old-version code, but wouldn't running set image fix that? To clarify my question: does Kubernetes use some random way to select the pods that are going to go down, or does it track the order in which the commands ran, so it knows that the set image pods should remain and the apply pods are the ones that need to be terminated?
Isn't pulling every time harmful? Would it be better to make the deployment image somehow not use latest, to avoid that hassle?
Thanks
If the image tag is the same in both apply and set image, then only the apply action re-deploys the Deployment (in which case you do not need the set image command). If they refer to different image tags, then yes, the deployment will run twice.
If you use the latest tag, applying a manifest that uses the latest tag with no modification WILL NOT re-deploy the Deployment. You need to introduce a modification to the manifest file in order to force Kubernetes to re-deploy. In my case, I use the date command to generate a TIMESTAMP variable that is passed in the env spec of the pod container, which my container does not use in any way, just to force a re-deploy of the Deployment. Or you can use kubectl rollout restart deployment/name if you are using Kubernetes 1.15 or later.
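One way to do something equivalent from the command line, using the Deployment name from the question (the variable name is arbitrary and unused by the app; changing its value modifies the pod template, so Kubernetes rolls out new pods):

kubectl set env deployment/myProd FORCE_REDEPLOY="$(date +%s)"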
Other than wasted bandwidth, or if you are being charged by how many times you pull a Docker image (poor you), there is no harm in an additional image pull just to be sure you are using the latest image version. Even if you use a specific image tag with version numbers like 1.10.112-rc5, there will be cases where you or your fellow developers forget to update the version number when pushing a modified image. IMHO, imagePullPolicy: Always should be the default rather than something you have to set explicitly.
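To sidestep the latest-tag ambiguity altogether, the deploy script from the question could pin the Deployment to the commit SHA it just pushed (deployment and container names are taken from the question's manifest; adjust them to yours):

kubectl set image deployment/myProd myProd=$DOCKER_USERNAME/$SERVICE_NAME:$LAST_COMMIT_SHA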
I am trying to implement a CI/CD pipeline for my Spring Boot microservice deployment. Here I have some sample microservices. While exploring Kubernetes, I came across pods, services, replica sets/controllers, statefulsets, etc., and I understood those Kubernetes terminologies properly. I am planning to use Docker Hub as my image registry.
My Requirement
When a commit is made to my SVN code repository, Jenkins needs to pull the code from the Subversion repository, build the project, create a Docker image and push it to Docker Hub, as mentioned earlier. After that it needs to deploy to my test environment by pulling from Docker Hub via Jenkins.
My Confusion
When I am creating services and pods, how can I define the Docker image path within the pod/service/statefulset, since it is pulled from Docker Hub for deployment?
Can I directly add kubectl commands within a scheduled Jenkins pipeline job? How can I use the kubectl command for the Kubernetes deployment?
Jenkins can do anything you can do, given that the tools are installed and accessible. So an easy solution is to install docker and kubectl on Jenkins and provide it with the correct kubeconfig so it can access the cluster. If your host can use kubectl, you can have a look at the $HOME/.kube/config file.
So in your job you can just use kubectl like you do from your host.
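A rough sketch of that setup; the paths and the jenkins user name are assumptions, so adjust them to your installation:

# copy a working kubeconfig to the user Jenkins runs as (often 'jenkins')
sudo mkdir -p /var/lib/jenkins/.kube
sudo cp $HOME/.kube/config /var/lib/jenkins/.kube/config
sudo chown -R jenkins:jenkins /var/lib/jenkins/.kube
# sanity check: Jenkins should now be able to reach the cluster
sudo -u jenkins kubectl get pods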
Regarding the images from Docker Hub:
Docker Hub is the default Docker registry for Docker anyway, so normally there is no need to change anything in your cluster unless you want to use your own privately hosted registry. If you are running your cluster at a cloud provider, I would use their Docker registries because they are better integrated.
So this part of a deployment will pull nginx from Docker Hub; no need to specify anything special for it:
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
So ensure Jenkins can do the following things from the command line:
build Docker images
Push Docker Images (make sure you called docker login on Jenkins)
Access your cluster via kubectl get pods
So an easy pipeline simply needs to do these steps (a rough shell sketch follows the list):
trigger on SVN change
checkout code
create a unique version (which could be the build number, SVN revision, or date)
Build / Test
Build Docker Image
tag Docker Image with unique version
push Docker Image
change the image line in the Kubernetes deployment.yaml to the newly built version (if you are using Jenkins Pipeline you can use readYaml and writeYaml to achieve this)
call kubectl apply -f deployment.yaml
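A rough shell sketch of those steps (the image, registry and file names are placeholders; in a real Jenkins job each line would sit in its own pipeline stage):

VERSION=${BUILD_NUMBER}-$(date +%Y%m%d)        # any unique version scheme works
docker build -t myrepo/myservice:$VERSION .
docker push myrepo/myservice:$VERSION
# point the manifest at the freshly pushed tag, then apply it
sed -i "s|image: myrepo/myservice:.*|image: myrepo/myservice:$VERSION|" deployment.yaml
kubectl apply -f deployment.yaml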
Depending on your build system and the languages used, there are useful tools that can help with building and pushing the Docker image and ensuring a unique tag. For example, for Java and Maven you can use Maven CI Friendly Versions with any Maven Docker plugin, or Jib.
To create a deployment you need to create a YAML file.
In the YAML file, the row:
image: oronboni/serviceb
points to the image, which in this case lives on Docker Hub:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serviceb
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serviceb
  template:
    metadata:
      labels:
        app: serviceb
    spec:
      containers:
        - name: serviceb
          image: oronboni/serviceb
          ports:
            - containerPort: 5002
I strongly suggest that you watch the Kubernetes deployment webinar at the link below:
https://m.youtube.com/watch?v=_vHTaIJm9uY
Good luck.