I'm currently working on a project that uses Kubernetes pods to deploy applications on a local machine. To avoid putting traffic on the development server's database, I am migrating that process to a local development environment. For this I am using Liquibase, so that DB changelogs are applied automatically once the pods are initialized.
I already have an mssql pod running on my cluster, which hosts the target database I want to update.
This is my Dockerfile, which is based on the Liquibase image; the credentials are the same as for my mssql pod.
FROM liquibase/liquibase
ENV URL=jdbc:sqlserver://localhost:1433;database=PlayerDB
ENV USERNAME=sa
ENV PASSWORD=pl#y123
ENV CHANGELOGFILE=changelog.xml
CMD ["sh", "-c", "docker-entrypoint.sh --url=${URL} --username=${USERNAME} --password=${PASSWORD} --classpath=/liquibase/changelog --changeLogFile=${CHANGELOGFILE} update"]
This is my changelog, wrapped in a ConfigMap, which contains the SQL statements I want to run against the database inside the mssql pod:
change-map.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: player-changelog-v1
data:
  changelog.sql: |-
    --liquibase formatted sql
    --changeset play:1
    --Database: PlayerDB
    CREATE TABLE test_table (test_id INT, test_column VARCHAR, PRIMARY KEY (test_id));
Finally, I have created a Job that is intended to update the database by creating the new table:
apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v1
spec:
  template:
    spec:
      containers:
      - name: liquibase
        image: play/liquibase
        env:
        - name: URL
          value: jdbc:sqlserver://localhost:1433;database=PlayerDB
        - name: USERNAME
          value: sa
        - name: PASSWORD
          value: pl#y123
        - name: CHANGELOGFILE
          value: changelog.sql
        volumeMounts:
        - name: config-vol
          mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
      - name: config-vol
        configMap:
          name: player-changelog-v1
The pod runs and then terminates on its own, but the table is not being created.
If your logs show "Starting Liquibase" / "Liquibase Community 3.8.0 by Datical" and then "Errors: The option --url is required. The option --changeLogFile is required.", then why don't you define those parameters in the command as well? They are mandatory Liquibase parameters for running Liquibase updates.
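For example (a sketch only, assuming the stock liquibase/liquibase image, whose entrypoint forwards container args to the Liquibase CLI; the Job name and the mssql-service host are placeholders for whatever Service exposes your mssql pod), the required options could be passed explicitly as container args so they no longer depend on the image's ENV defaults:
apiVersion: batch/v1
kind: Job
metadata:
  name: liquibase-job-v2 # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: liquibase
        image: liquibase/liquibase
        args: # appended to the image entrypoint, so --url and --changeLogFile are always set
        - --url=jdbc:sqlserver://mssql-service:1433;database=PlayerDB # placeholder Service name for the mssql pod
        - --username=sa
        - --password=pl#y123
        - --classpath=/liquibase/changelog
        - --changeLogFile=changelog.sql
        - update
        volumeMounts:
        - name: config-vol
          mountPath: /liquibase/changelog
      restartPolicy: Never
      volumes:
      - name: config-vol
        configMap:
          name: player-changelog-v1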
Assume there is a backend application with a private key stored in a .env file.
For the project file structure:
|-App files
|-Dockerfile
|-.env
If I run the Docker image locally, the application can be reached normally by using a valid public key in the API request. However, if I deploy the container to an AKS cluster using the same Docker image, the application fails.
I am wondering how a container in an AKS cluster handles the .env file. What should I do to solve this problem?
Moving this out of comments for better visibility.
First and most important: Docker is not the same as Kubernetes. What works on Docker won't necessarily work directly on Kubernetes. Docker is a container runtime, while Kubernetes is a container orchestration tool which sits on top of a container runtime (not always Docker these days; containerd is used as well).
There are many resources on the internet which describe the key differences, for example this one from the Microsoft docs.
First, ConfigMaps and Secrets should be created:
Creating and managing configmaps and creating and managing secrets
There are different types of secrets which can be created.
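For the .env scenario above, a minimal Secret sketch could look like this (the name, key and value are placeholders, not taken from the question):
apiVersion: v1
kind: Secret
metadata:
  name: backend-env # hypothetical name
type: Opaque
stringData: # stringData accepts plain text; Kubernetes stores it base64-encoded
  PRIVATE_KEY: "<paste the private key value from the .env file here>"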
Use configmaps/secrets as environment variables.
Referring to ConfigMaps and Secrets as environment variables looks like this (ConfigMaps and Secrets have the same structure here):
apiVersion: v1
kind: Pod
metadata:
  name: pod-example
spec:
  containers:
  - ...
    env:
    - name: ADMIN_PASS
      valueFrom:
        secretKeyRef: # a secretKeyRef is used here because this is sensitive data
          key: admin
          name: admin-password
    - name: MYSQL_DB_STRING
      valueFrom:
        configMapKeyRef: # this is not sensitive data, so a ConfigMap can be used
          key: db_config
          name: connection-string
...
Use ConfigMaps/Secrets as volumes (they will be presented as files).
Below is an example of Secrets used as files mounted in a specific directory:
apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  containers:
  - ...
    volumeMounts:
    - name: secrets-files
      mountPath: "/mnt/secret.file1" # "secret.file1" file will be created in "/mnt" directory
      subPath: secret.file1
  volumes:
  - name: secrets-files
    secret:
      secretName: my-secret # name of the Secret
There's a good article which explains and shows use cases of Secrets as well as their limitations, e.g. the size is limited to 1MiB.
Background
I have set up some self-hosted Azure DevOps build agents inside my AKS cluster. This is the documentation: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/docker?view=azure-devops
The agents have been successfully created, I can see them in my Agent Pools and target them from my pipeline.
One of the first things my pipeline does is build and push some Docker images. This is a problem inside a self-hosted agent. The documentation includes the warning and link below:
In order to use Docker from within a Docker container, you bind-mount the Docker socket.
If you're sure you want to do this, see the bind mount documentation on Docker.com.
Files
Kubernetes Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azdevops-deployment
  labels:
    app: azdevops-agent
spec:
  replicas: 1 # here is the configuration for the actual agent always running
  selector:
    matchLabels:
      app: azdevops-agent
  template:
    metadata:
      labels:
        app: azdevops-agent
    spec:
      containers:
      - name: kubepodcreation
        image: AKRTestcase.azurecr.io/kubepodcreation:5306
        env:
        - name: AZP_URL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_URL
        - name: AZP_TOKEN
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_TOKEN
        - name: AZP_POOL
          valueFrom:
            secretKeyRef:
              name: azdevops
              key: AZP_POOL
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
Error
Attempting to run the pipeline gives me the following error:
##[error]Unhandled: Unable to locate executable file: 'docker'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
Questions
Is it possible (and safe) to build and push Docker images from an Azure DevOps build agent running in a Docker container?
How can I modify the Kubernetes deployment file to bind-mount the Docker socket?
Any help will be greatly appreciated.
I tried to set variables in Azure Pipelines' Release, which a Command Task in the Release can use to replace variable values in the Kubernetes .yaml file used for Docker.
It works fine for me, but I need to prepare several Command Tasks to replace the variables one by one.
For example, I set the variables TESTING1_ (value: Test1), TESTING2_ (value: Test2) and TESTING3_ (value: Test3) in the Pipelines' Release. Then I used a Command Task only to replace TESTING1_ with $(TESTING1_) in the Kubernetes .yaml file. Below is the original environment setting in the .yaml file:
spec:
  containers:
  - name: devops
    env:
    - name: TESTING1
      value: TESTING1_
    - name: TESTING2
      value: $(TESTING2_)
After running the Pipelines' Release, the results printed from Node.js were:
console.log(process.env.TESTING1); --> Test1
console.log(process.env.TESTING2); --> $(TESTING2_)
console.log(process.env.TESTING3); --> undefined
I think you should use ConfigMaps for that (and update the values in the ConfigMaps when needed). You shouldn't be updating containers directly; this gives you flexibility and easier management. Example:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: SPECIAL_LEVEL_KEY
      valueFrom:
        configMapKeyRef:
          name: special-config
          key: special.how
Then, if some value changes, you update the ConfigMap and all the pods that reference it get the new value (note that pods consuming the ConfigMap as environment variables have to be restarted to pick up the change).
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-using-configmap-data
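For reference, a minimal sketch of the ConfigMap that the Pod above expects could be (the value is only illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
data:
  special.how: very # this value ends up in the SPECIAL_LEVEL_KEY environment variable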
I have been using kubernetes for a while now.
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.0+2831379", GitCommit:"283137936a498aed572ee22af6774b6fb6e9fd94", GitTreeState:"not a git tree", BuildDate:"2016-07-05T15:40:25Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.0", GitCommit:"5cb86ee022267586db386f62781338b0483733b3", GitTreeState:"clean", BuildDate:"", GoVersion:"", Compiler:"", Platform:""}
I usually set an Ingress, Service and Replication Controller for each project.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: portifolio
  name: portifolio-ingress
spec:
  rules:
  - host: www.cescoferraro.xyz
    http:
      paths:
      - path: /
        backend:
          serviceName: portifolio
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  selector:
    name: portifolio
  ports:
  - name: web
    port: 80
    targetPort: 80
    protocol: TCP
---
apiVersion: v1
kind: ReplicationController
metadata:
  namespace: portifolio
  name: portifolio
  labels:
    name: portifolio
spec:
  replicas: 1
  selector:
    name: portifolio
  template:
    metadata:
      namespace: portifolio
      labels:
        name: portifolio
    spec:
      containers:
      - image: cescoferraro/portifolio:latest
        imagePullPolicy: Always
        name: portifolio
        env:
        - name: KUBERNETES
          value: "true"
        - name: BRANCH
          value: "production"
My "problem" is that for deploying my app I usually do:
kubectl delete -f kubernetes.yaml
kubectl create -f kubernetes.yaml
I wish I could use a single command to deploy, whether my app is already up or not. Rolling updates do not work when I use the same image (I think that's a bug in my Kubernetes server version), but they also do not work when the app has never been deployed at all.
I have read about Deployments, and I wonder how they would help me.
Goals
1. Deploy if app is brand new
2. Replace existing pods with new ones using a new image from docker registry.
I don't think keeping all resources inside one single manifest helps you with what you want to achieve, since your Service, Ingress and ReplicationController are not likely to change simultaneously.
If all you want to do is roll out new pods, I would recommend you to replace your ReplicationController with a Deployment. Manifests have almost the exact same syntax so it's easy to migrate from standard RCs, and you could perform a server-side rolling update with a single kubectl replace -f manifest.yml.
Please note that even with a Deployment resource you can't trigger a redeployment if nothing changed in your manifest; kubectl replace would simply do nothing. Therefore you could, for example, increment or change a tag inside your manifest in order to force the deployment when needed (e.g. revision: 003), as sketched below.
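As a rough illustration (the label name and value are just examples), such a revision marker could live in the Deployment's pod template labels, so that changing it modifies the template and rolls the pods:
spec:
  template:
    metadata:
      labels:
        name: portifolio
        revision: "003" # bump this value to force new pods on the next replace/apply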
As already written in the previous answer, it is recommended to use a Deployment instead of a ReplicationController for this.
Using imagePullPolicy: Always will only ensure that Kubernetes does a docker pull before starting new PODs. It does not force recreation of PODs when nothing in the Deployment resource changes.
I would suggest adding two things to your solution:
Add a label to the Deployment with the value CURRENT_DATE as a placeholder value
Add a simple shell script to your project which replaces the placeholder with the current date+time and then uses kubectl to apply the resources.
Example Bash script
#!/usr/bin/env bash
# date +%s (seconds since the epoch) keeps the substituted value a valid label value (no spaces)
sed "s/CURRENT_DATE/$(date +%s)/" kubernetes.yaml | kubectl apply -f -
Then use this script for redeployment instead of calling kubectl by yourself.
This is only meant as a very simple example. When it comes to creating/applying/patching resources in Kubernetes, things tend to get more and more complicated by time. If this happens, consider using some more advanced templating solutions, e.g. by using Python and Jinja2.
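To make the placeholder idea concrete (the label name here is just an example), kubernetes.yaml could carry the placeholder in the pod template labels, where the script above substitutes it before applying:
spec:
  template:
    metadata:
      labels:
        name: portifolio
        deployed-at: CURRENT_DATE # replaced with the current timestamp by the script above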
You could use a deployment for this. Create it the first time, and after that you only need to do kubectl set image deploy/my-app app=user/image:tag --record and you're good to go.
Doing that, you can also do cool things like kubectl rollout undo deploy/my-app or get history and status.
You might consider using Argo.
Argo is an open-source workflow engine for Kubernetes. It lets you define complex, microservices-based application deployments using YAML in the source repo and automatically redeploys the app when the YAML changes (e.g. on every commit to the production branch).
How can I inject code/files directly into a container in Kubernetes on Google Cloud Engine, similar to the way that you can mount host files / directories with Docker, e.g.
docker run -d --name nginx -p 443:443 -v "/nginx.ssl.conf:/etc/nginx/conf.d/default.conf"
Thanks
It is possible to use ConfigMaps to achieve that goal:
The following example mounts a mariadb configuration file into a mariadb POD:
ConfigMap
apiVersion: v1
data:
  charset.cnf: |
    [client]
    # Default is Latin1, if you need UTF-8 set this (also in server section)
    default-character-set = utf8
    [mysqld]
    #
    # * Character sets
    #
    # Default is Latin1, if you need UTF-8 set all this (also in client section)
    #
    character-set-server = utf8
    collation-server = utf8_unicode_ci
kind: ConfigMap
metadata:
  name: mariadb-configmap
MariaDB deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
        version: 10.1.16
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.1.16
        ports:
        - containerPort: 3306
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mariadb
              key: rootpassword
        volumeMounts:
        - name: mariadb-data
          mountPath: /var/lib/mysql
        - name: mariadb-config-file
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: mariadb-data
        hostPath:
          path: /var/lib/data/mariadb
      - name: mariadb-config-file
        configMap:
          name: mariadb-configmap
It is also possible to use the subPath feature, which is available in Kubernetes since version 1.3, as stated here.
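For the nginx case from the question, a subPath mount of a single file from a ConfigMap might look roughly like this fragment of a pod spec (the ConfigMap name and key are assumptions, not from the question):
containers:
- name: nginx
  image: nginx
  volumeMounts:
  - name: nginx-config
    mountPath: /etc/nginx/conf.d/default.conf # only this file is replaced, not the whole directory
    subPath: default.conf
volumes:
- name: nginx-config
  configMap:
    name: nginx-configmap # hypothetical ConfigMap with a default.conf key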
I'm not sure you can do that exactly. Kubernetes does things quite differently than docker, and isn't really ideal for interacting with the 'host' you are probably used to with docker.
A few alternative possibilities come to mind. First, and probably least ideal but closest to what you are asking, would be to add the file after the container is running, either by adding commands or args to the pod spec, or using kubectl exec and echo'ing the contents into the file. Second would be to create a volume where that file already exists, e.g. create a GCE or EBS disk, add that file, and then mount the file location (read-only) in the container's spec. Third, would be to create a new docker image where that file or other code already exists.
For the first option, kubectl exec is only suitable for one-off jobs; it isn't very scalable or repeatable. Any creation/fetching at runtime adds that much overhead to the container's start time, so I normally go with the third option: building a new Docker image whenever the file or code changes. The more you change it, the more you'll probably want a CI system (like Drone) to help automate the process.
Add a comment if I should expand any of these options with more details.
Kubernetes allows you to mount volumes into your pod. One such volume type is hostPath (link) which allows you to mount a directory from the host into the pod.
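A rough hostPath equivalent of the docker run example from the question could look like this sketch (it only works if /nginx.ssl.conf actually exists on the node where the pod is scheduled):
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 443
    volumeMounts:
    - name: nginx-ssl-conf
      mountPath: /etc/nginx/conf.d/default.conf
  volumes:
  - name: nginx-ssl-conf
    hostPath:
      path: /nginx.ssl.conf # read from the node's filesystem, not from your workstation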