The business requirement is as follows:
Stop running container
Modify the environment (e.g., change the value of the DEBUG_LEVEL environment variable)
Start container
This is easily achievable using the Docker CLI:
docker create/docker stop/docker start
How can this be done using Kubernetes?
Additional info:
We are migrating from Cloud Foundry to Kubernetes. In CF, you deploy an application, stop it, set an environment variable, and start it again. We need the same functionality.
For those who are not familiar with CF applications: an application is like a Docker container running a single (micro)service.
Typically, you would run your application as a Deployment or as a StatefulSet. In this case, just change the value of the environment variable in the template and reapply the Deployment (or StatefulSet). Kubernetes will do the rest for you.
Refer to the Kubernetes documentation for details.
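For example, a minimal sketch of such a Deployment (the name my-app, the image my-app:1.0, and the variable value are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        env:
        - name: DEBUG_LEVEL
          value: "debug"

Edit the value of DEBUG_LEVEL and run kubectl apply -f deployment.yaml again; the Deployment controller rolls out new Pods with the new value.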
Let's say you are creating a pod/deployment/statefulset using the following command.
kubectl apply -f blueprint.yaml
blueprint.yaml is the YAML file which contains the blueprint of your pod/deployment/statefulset object.
Method 1 - If you specify the environment variables in the YAML file
Then you can change blueprint.yaml to modify the value of the environment variable.
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Then execute the same command again to apply the changes.
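For example, the relevant fragment of blueprint.yaml could look like this (the container name, image, and DEBUG_LEVEL variable are placeholders):

      containers:
      - name: my-app
        image: my-app:1.0
        env:
        - name: DEBUG_LEVEL
          value: "info"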
Method 2 - If you specify the environment variables in the Dockerfile
You should build your docker image with a new tag. Then change the docker image tag in the blueprint.yaml file and execute the same command again to apply the changes.
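A minimal sketch of that flow (the registry, image name, and tag are placeholders):

docker build -t registry.example.com/my-app:v2 .
docker push registry.example.com/my-app:v2
# update blueprint.yaml to use image: registry.example.com/my-app:v2, then:
kubectl apply -f blueprint.yaml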
Method 3
You can also delete and create the pod/deployment/statefulset again.
kubectl delete -f blueprint.yaml
kubectl apply -f blueprint.yaml
There is also another possibility:
Define container environment variables using configmap data
You can let Kubernetes react to ConfigMap changes. By default, changing a ConfigMap does not trigger a restart of the Pods unless the Pod spec itself changes. Here is an article that describes how to achieve it by annotating the Pod template with a SHA-256 hash generated from the ConfigMap.
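A rough sketch of that pattern (all names are illustrative): keep the variables in a ConfigMap, load them with envFrom, and record a hash of the ConfigMap contents as a Pod template annotation, so that updating the ConfigMap also changes the Pod spec and triggers a rollout:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-env
data:
  DEBUG_LEVEL: "debug"

And in the Deployment's Pod template:

    metadata:
      annotations:
        checksum/config: "<sha256 of the ConfigMap, computed by your CI or templating tool>"
    spec:
      containers:
      - name: my-app
        image: my-app:1.0
        envFrom:
        - configMapRef:
            name: app-env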
Related
I'm a beginner at NiFi setup. I'm planning to start a NiFi cluster on Kubernetes. In a normal installation, I saw that we can change the NiFi configuration in the file 'nifi.properties'. But when it comes to the Docker image, I also saw that we can change it by using environment variables. In most cases, the properties mentioned in the nifi.properties file can be easily converted into their equivalent environment variables.
Eg:
nifi.web.http.host <=> NIFI_WEB_HTTP_HOST
But in some cases, the environment variable is different. Eg:
nifi.zookeeper.connect.string != NIFI_ZK_CONNECT_STRING
Where do we get the full list of NiFi environment variables for the Docker image? Any help, like links or directions, is very much appreciated.
You need to look into the documentation (or the source code) of the NiFi Docker images you are using. For example agturley/nifi and apache/nifi.
When you enter the Docker container you can see secure.sh and start.sh under the path /opt/nifi/scripts. These are the scripts that perform all the prop_replace substitutions.
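For example, you can list those substitutions directly from a running container (the container and Pod names are placeholders):

docker exec -it nifi grep prop_replace /opt/nifi/scripts/start.sh
# or on Kubernetes:
kubectl exec -it nifi-0 -- grep prop_replace /opt/nifi/scripts/start.sh

Each prop_replace call pairs a nifi.properties key with the environment variable that feeds it.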
I have a simple docker-compose.yml which builds 4 containers. The containers run on EC2.
The docker-compose.yml changes roughly twice a day on the master branch, and on each change we need to deploy the new containers to production.
This is what I'm doing:
docker-compose down --rmi all
git pull origin master
docker-compose up --build -d
I'm removing the images to avoid conflicts, so that when I start the service I have fresh images.
This process takes around one minute.
What is the best practice to spin up docker-compose? Any suggestions to improve this?
You can do the set of commands you show natively in Docker, without using git or another source-control tool as part of the deployment process.
Whenever you have a change to your source tree, build a new Docker image and push it to a Docker repository. This can be Docker Hub, or if you're on AWS already, Amazon ECR. Each build should have a unique image tag, such as a source control commit ID or a time stamp. You can set up a continuous-integration tool to do all of this for you automatically.
Once you have this, your docker-compose.yml file needs to be updated with the version number to deploy. If you only have a single image you're deploying, you can straightforwardly use Compose variable substitution to fill it in:
image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:${TAG:-latest}
If you have multiple images you can set multiple environment variables or produce an updated docker-compose.yml file with the values filled in, but you will need to know all of the image versions together at deployment time.
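For example, if the images are always built and versioned together, the same variable can be reused across services (the second repository name is illustrative):

services:
  web:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:${TAG:-latest}
  api:
    image: 123456789012.dkr.ecr.us-east-1.amazonaws.com/api:${TAG:-latest}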
Now when you go to deploy it you only need to run
TAG=20200317.0412 docker-compose up -d
to set the environment variable and trigger Compose. Compose will see that the image you're trying to run for that container is different from what's already running, pull the updated image, and replace the container for you. You don't need to manually remove the old containers or stop the entire stack.
If git is part of your workflow now, it's probably because you're mounting application code into your container. You will also need to delete any volumes: that overwrite the content in the image. Also make sure you make this change in your CI system (so you're testing the actual image you're deploying to production) and in development (similarly).
This particular task becomes slightly easier with a cluster-management system like Kubernetes (or Amazon EKS), though it brings many other complexities elsewhere. In Kubernetes you need to send an updated Deployment spec to the Kubernetes API server, but you can do this without direct ssh access to the target system and only needing to know the specific version of the one image you're updating, and with multiple replicas you can get a zero-downtime upgrade. Both using a Docker repository and using a unique image tag per build are basically required in this setup: images are the only way code gets into the cluster, and changing the image tag string is what triggers code to be redeployed.
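For instance, once the new image is pushed, the Kubernetes-side update can be as small as (the Deployment and container names are hypothetical):

kubectl set image deployment/myapp app=123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:20200317.0412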
I am deploying some apps in Kubernetes, and my apps use a config management tool called Apollo. This tool needs the app's running environment (develop/test/production/...) to be defined in one of these ways: 1. Java args, 2. application.properties, 3. /etc/settings/data.properties. Now that I am running the apps in Kubernetes, the question is: how do I define the running environment variable?
1. If I choose Java args, I would have to keep scripts like start-develop-env.sh / start-test-env.sh / start-pro-env.sh.
2. If I choose application.properties, I would have to keep application-develop.properties / application-test.properties / ...
3. If I choose /etc/settings/data.properties, it is impossible to log in to every Docker container to define the config file for each environment.
What is the best way to solve the problem? If I write it in the Kubernetes Deployment YAML, my apps cannot read it (defining the variable for a whole batch of Pods in one place would be better).
You can implement #2 and #3 using a configmap. You can define the properties file as a configmap, and mount that into the containers, either as application.properties or data.properties. The relevant section in k8s docs is:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/
Using java args might be more involved. You can define a script as you said, and run that script to setup the environment for the container. You can store that script as a ConfigMap as well. Or, you can define individual environment variables in your deployment yaml, define a ConfigMap containing properties, and populate those environment variables from the configmap. The above section also describes how to setup environment variables from a configmap.
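A rough sketch of option #3 with a ConfigMap mounted as a file (the names and property keys are illustrative):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-settings
data:
  data.properties: |
    # properties your app expects; the keys here are placeholders
    env=TEST

And in the Pod spec:

      containers:
      - name: my-app
        image: my-app:1.0
        volumeMounts:
        - name: settings
          mountPath: /etc/settings
      volumes:
      - name: settings
        configMap:
          name: app-settings

You keep one such ConfigMap per environment and reference it from the Deployment, so there is no need to log in to individual containers.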
I would like to be able to test my Docker application locally before sending it to the cluster. I want to use minikube for this. Meanwhile, instead of having multiple kube config files which would define env variables for the cloud environment and for my local machine, I would like to override some of the env variables when running locally. I can see that you can do something like that with docker-compose:
docker-compose -f docker-compose.yml -f docker-compose.e2e.yml up
The second file would only have the overriding values. Yes, there are two files but I find it clean.
Is there a way to do something similar with Kubernetes/minikube? Or even something better?
I think you are asking how to pass different environment values into your Pods depending upon which environment they are deployed to. One pattern to achieve this is to deploy with helm. Then you use templated versions of your kubernetes descriptors for deployment. You also have a values.yaml file that contains values to be injected into the descriptors. You can switch and overlay values.yaml files at the time of install to control which values are injected for a given installation.
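A minimal sketch of that pattern (the chart layout, value name, and file names are hypothetical). In templates/deployment.yaml the variable is templated:

        env:
        - name: APP_ENV
          value: {{ .Values.appEnv | quote }}

values-local.yaml then contains something like appEnv: local, and you choose the values file at install time:

helm install my-app ./chart -f values-local.yaml
helm install my-app ./chart -f values-cloud.yaml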
If you are asking how to switch whether a kubectl command runs against your local or cloud cluster without having to keep swapping your kubeconfig file, then you can add both contexts to your kubeconfig and use kubectl config use-context to switch between them, as @Ijaz Khan suggests.
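For example (the context names are illustrative):

kubectl config get-contexts
kubectl config use-context minikube
kubectl config use-context my-cloud-cluster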
I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting having hard-coded variables to the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
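For reference, a rough sketch of such a hook in the container spec (the script path is the same hypothetical one used below):

    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "./scripts/build.sh"]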
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
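For completeness, the relevant fragment of the pod spec looked roughly like this (the FOO/BAR values are placeholders for the in-cluster service addresses):

    containers:
    - name: my-foo-app
      image: residentmario/my_foo_app:latest
      env:
      - name: FOO
        value: "foo-api:9000"
      - name: BAR
        value: "bar-api:3000"
      command: ["/bin/sh"]
      args: ["./scripts/build.sh"]

build.sh re-runs the envify/browserify build using FOO and BAR and then starts serving the application.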