I need to update certificates that currently live in Docker containers running as Kubernetes pods. The three pods containing these certificates are named 'app', 'celery' and 'celery beat'.
When I run
kubectl exec -it app -- sh
and then ls
I can see that the old certificates are there. I have new certificates on my VM's filesystem and need to get them into the running pods so the application starts working again. I tried rebuilding the Docker images used to create the running containers (using the existing docker compose file), but that didn't seem to work. I think the filesystem in the containers was originally mounted using Docker volumes; that was presumably done locally, whereas the project is now on a remote Linux VM. What would be the natural way to get the new certs into the running pods while leaving everything else the same?
I can kubectl cp the new certs in; the issue with that is that when the pods get recreated, they revert to the old certificates.
Any help would be much appreciated.
Check your deployment file, in the volumes section, for any mention of a ConfigMap, Secret, PV or PVC with a name like "certs" (we normally use names like this). If it exists and it is a Secret or ConfigMap, you just need to update that resource directly. If it is a PV or PVC, you'll need to update its contents via the CLI, for example, and I suggest you switch to a Secret.
Command to check your deployment resource: kubectl get deploy <DEPLOY NAME> -o yaml (if you don't use a Deployment, change it to the right resource kind).
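If it does turn out to be a Secret, a minimal sketch of refreshing it in place, assuming the Secret is named certs and the new files sit on the VM under /home/user/certs/ (both assumptions), would be:

# regenerate the Secret from the new cert files and apply it over the old one
kubectl create secret generic certs \
  --from-file=tls.crt=/home/user/certs/tls.crt \
  --from-file=tls.key=/home/user/certs/tls.key \
  --dry-run=client -o yaml | kubectl apply -f -

# restart the workload so it picks up the change immediately
kubectl rollout restart deploy/app   # repeat for the celery and celery-beat deployments

Secrets mounted as volumes are refreshed in running pods after a short delay anyway (unless mounted with subPath), but the restart makes it immediate; the deployment name here is a guess based on the pod name.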
Also, you can open a shell in your pod and run df -hT; this will list the filesystems and their mount points.
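For example, against the 'app' pod from the question:

kubectl exec -it app -- df -hT   # look for the mount point where the certs live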
In the worst-case scenario, where the certs were added during the container build, you can work around it with the steps below (this is not best practice; the best practice is to build a new image). A rough sketch of these steps follows the list.
Edit the container image, remove the certs, push with a new tag (don't overwrite the old one).
Create a secret with the new certs.
Mount this secret at the same path, using the same file names.
Change the image version in the deployment.
You can use kubectl edit deploy <DEPLOY NAME> to edit your resource.
To edit your container image, use docker commit: https://docs.docker.com/engine/reference/commandline/commit/
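A rough sketch of those steps (the container ID, registry/image names, tag, and cert path are all placeholders, not taken from your setup):

# remove the baked-in certs inside the running container, then snapshot it under a NEW tag
docker exec -it <CONTAINER_ID> rm /app/certs/tls.crt /app/certs/tls.key
docker commit <CONTAINER_ID> myregistry.example.com/app:v2-no-certs
docker push myregistry.example.com/app:v2-no-certs

# create a secret holding the new certs
kubectl create secret generic certs --from-file=/path/to/new/certs/

# point the deployment at the new tag and mount the "certs" secret at the old path and file names
kubectl edit deploy <DEPLOY NAME>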
Related
The newer docker compose (vs docker-compose) allows you to set secrets in the build section. This is nice because if you inject secrets at runtime, the file is readable by anyone who can get into the container, at /run/secrets/<my_secret>.
Unfortunately, it appears that it's only possible to pass the secrets in via either the environment or a file. Doing it via the environment doesn't seem like a great idea, because someone on the box could read /proc/<pid>/environ while the image is being built and snag the secrets. Doing it via a file on disk isn't good either, because then the secret is stored on disk unencrypted.
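For reference, the mechanism I'm describing looks roughly like this under BuildKit (the secret id, file name, and image tag are placeholders):

# In the Dockerfile, the secret is mounted only for the duration of one RUN step
# and is never written into an image layer:
#   RUN --mount=type=secret,id=my_secret \
#       cat /run/secrets/my_secret    # e.g. feed it to a tool that authenticates to a private repo

# On the host, the secret still has to come from a file (or the environment):
printf '%s' "$MY_SECRET" > ./my_secret.txt
DOCKER_BUILDKIT=1 docker build --secret id=my_secret,src=./my_secret.txt -t my_image .
rm ./my_secret.txt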
It seems like the best way to do this would be with something like
docker swarm init
read -sp "Enter your secret: "; printf '%s' "$REPLY" | docker secret create my_secret -
docker compose build --no-cache
docker swarm leave --force
Alas, it appears that Docker can't read from the swarm for build time secrets for some unknown reason.
What is the best way to do this? This seems like a slight oversight, along the lines of docker secret create not having a way to prompt for the value, forcing you to resort to hacks like the above to keep the secret out of your bash history.
UPDATE: This is for Swarm/remote Docker systems, not targeted at local build-time secrets. (I realised you were asking primarily about those and just mentioned Swarm in the second part of the question. I believe this still holds good advice for some, so I'll leave the answer undeleted.)
Docker Swarm can only read runtime secrets that you create with the docker secret create command, and they must already exist on the cluster when you deploy the stack. We had been in the same situation before. We solved the "issue" using Docker contexts. You can create an SSH-based Docker context which points to a manager (we just use the first one). Then, on your LOCAL device (we use Windows as the base platform and WSL2/a Linux VM for the UNIX part), you can simply run docker commands with the inline --context property. More on contexts in the official docs. For instance: docker --context production secret create .... And so on.
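A minimal sketch of that context setup (the context name, SSH user, and manager hostname are examples):

# create an SSH-based context pointing at the first swarm manager
docker context create production --docker "host=ssh://deploy@swarm-manager-1"

# create the secret directly on the remote cluster from your local machine
printf '%s' "$MY_SECRET" | docker --context production secret create my_secret -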
I have a running Docker container A and I want to create a pod with container A.
Is it possible?
If it isn't, can I hold a container in the "created" state in Kubernetes?
I also tried setting containerID in the pod.yaml file to the ID of the running container, and tried changing the containerID with kubectl edit on an already running pod, but neither succeeded.
Running a container directly and running it inside a pod are two different things.
If you want to run container A in a pod, follow the steps below (a sketch of them follows the list):
1. Create a Docker image from container A and push it to a Docker registry.
2. Create a deployment.yaml file for Kubernetes and reference that image's registry URL and tag in the image field.
3. Deploy the pod using kubectl apply -f deployment.yaml.
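A minimal sketch of those three steps; the registry URL, image name, and labels are placeholders:

# 1. snapshot container A as an image and push it
docker commit <CONTAINER_A_ID> myregistry.example.com/container-a:v1
docker push myregistry.example.com/container-a:v1

# 2. reference that image in a Deployment manifest
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: container-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: container-a
  template:
    metadata:
      labels:
        app: container-a
    spec:
      containers:
      - name: container-a
        image: myregistry.example.com/container-a:v1
EOF

# 3. deploy it
kubectl apply -f deployment.yaml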
There's no way to "import" a pre-existing Docker container into a Kubernetes pod. Kubernetes always manages the entire lifecycle of a container, including deciding which host to run it on.
If your workflow involves doing some manual setup in between docker create and docker start, you should try to automate this; Kubernetes has nothing equivalent and in fact sometimes it will work against you. If a node gets destroyed (either because an administrator drained it or because its hard disk crashed or something else) Kubernetes will try to relocate every pod that was there, which means containers will get destroyed and recreated somewhere else with no notice to you. If you use a deployment to manage your pods (and you should) you will routinely have multiple copies of a pod, and there you'd have to do your manual setup on all of them.
In short: plan on containers being destroyed and recreated regularly and without your intervention. Move as much setup as you can into your container's entrypoint, or if really necessary, an init container that runs in the pod. Don't expect to be able to manually set up a pod before it runs. Follow this approach in pure-Docker space, too: a single container on its own shouldn't be especially valuable and you should be able to docker rm && docker run a new copy of it without any particular problems.
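For illustration, a minimal sketch of the init-container pattern mentioned above; the images, the setup command, and the shared path are made up:

cat > pod-with-init.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: setup                # runs to completion before the main container starts
    image: busybox:1.36
    command: ["sh", "-c", "echo 'one-time setup result' > /work/ready"]
    volumeMounts:
    - name: work
      mountPath: /work
  containers:
  - name: app
    image: myregistry.example.com/app:v1
    volumeMounts:
    - name: work
      mountPath: /work
  volumes:
  - name: work
    emptyDir: {}
EOF
kubectl apply -f pod-with-init.yaml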
I am trying to set up AKS, where I have used an Azure disk to mount the source code of the application. When I run the kubectl describe pods command, it shows as mounted, but I don't know how to copy the code onto it.
I got some recommendations to use the kubectl cp command, but my pod name changes every time I deploy, so please let me know what I should do.
You'd need to copy the files to the disk directly (not to the pod); you can use your pod or a worker node to do that. You can use kubectl cp to copy files to the pod and then move them to the mounted disk like you normally would, or you can SSH to the worker node, copy the files over SSH to the node, and put them on the mounted disk.
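A small sketch of the kubectl cp route when the pod name keeps changing; the label selector, source directory, and mount path are assumptions about your deployment:

# resolve the current pod name by label instead of hard-coding it
POD=$(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')

# copy into the pod, then move the files onto the mounted disk
kubectl cp ./src "$POD":/tmp/src
kubectl exec "$POD" -- sh -c 'cp -r /tmp/src/. /mnt/azuredisk/'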
OS: Amazon Linux (hosted on AWS)
Docker version: 17.x
Tools: Ansible, Docker
Our developers use Ansible to spin up individual AWS spot environments that get populated with Docker images built on their local machines, pushed into a Docker registry created on the AWS spot machine, then pulled down and run.
When the devs do this locally on their Macbooks, ansible will orchestrate building the code with sbt, spin up an AWS spot instance, run a docker registry, push the image into the docker registry, command the instance to pull down the image and run it, run a testsuite, etc.
To make things better and easier for non-devs to be able to run individual test environments, we put the ansible script behind Jenkins and use their username to let ansible create a domain name in Route53 that points to their temporary spot instance environment.
This all works great without the registry -- i.e. using JFrog Artifactory to have these dynamic envs just pull pre-built images. It lets QA team members spin up any version of the env they want. But now to allow it to build code and push, I need to have an insecure registry and that is where things fell apart...
Since any user can run this, the Route53 domain name is dynamic. That means I cannot just hardcode the --insecure-registry entry in daemon.json. I have tried to find a way to set a wildcard registry, but it didn't seem to work for me. Also, since this is a shared build server (the one running the Ansible commands), I don't want to keep adding entries and restarting Docker, because other things might be running.
So, to summarize the questions:
Is there a way to use a wildcard for the insecure-registry entry?
How can I get Docker to recognize an insecure-registry entry without restarting the Docker daemon?
So far I've found this solution to satisfy my needs, though I'm not 100% happy with it yet and will keep working on it. It doesn't handle the first question about a wildcard, but it does seem to answer the second question about reloading without a restart.
The first problem was that I was editing the wrong file. Docker doesn't respect /etc/sysconfig/docker, nor does it respect $HOME/.docker/daemon.json. The only file that works for me on Amazon Linux is /etc/docker/daemon.json, so I edited it manually, then tested a reload and verified with docker info. I'll keep working on programmatically inserting entries as needed, but the manual test works:
sudo vim /etc/docker/daemon.json
sudo systemctl reload docker.service
docker info
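For reference, what I ended up with looks something like this; the CIDR range is an example that assumes the spot-instance registries live in a 10.x private range (insecure-registries takes hostnames, IPs, or CIDR blocks, but as far as I can tell no hostname wildcards):

# overwrites any existing daemon.json, so merge by hand if you already have other settings
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "insecure-registries": ["10.0.0.0/8"]
}
EOF

sudo systemctl reload docker.service       # dockerd re-reads insecure-registries on reload (SIGHUP)
docker info | grep -A 5 'Insecure Registries'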
I have an image to which I need to add a dependency. I therefore tried to change the image while it was running in a container and create a new image from it.
I followed this article and ran the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the container's shell, I installed the dependency using "pip install NAME_OF_DEPENDENCY".
Then I exited the container's shell and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command to use with kubectl on Google Kubernetes Engine.
How should I do the commit in Google Kubernetes Engine?
As of Kubernetes 1.8, there is no way to hot-fix images directly, for example by committing a new image from a running container. If you change or add something via exec, it only persists for as long as that container keeps running. This is not best practice in the Kubernetes ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your needs and requirements. After that, you can push that image to a registry (public or private) and deploy it with a Kubernetes manifest file.
Solution to your issue
Create a Dockerfile for your image.
Build the image using the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the k8s cluster (see the sketch below).
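A rough sketch of that flow for the example in the question; the project ID, dependency name, and new tag are placeholders, and deployment/my-app follows the kubectl run command above:

# bake the dependency into the image instead of installing it by hand in a running container
cat > Dockerfile <<'EOF'
# start from the existing image
FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_DEPENDENCY
EOF

docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2

# roll the deployment (or update the tag in your manifest and kubectl apply it)
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2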
Now, if you want to change or modify something, you just need to change the Dockerfile and repeat the remaining steps.
As you know, containers are short-lived and do not persist changed behaviour (modified configuration, file-system changes). Therefore, it's better to introduce new behaviour or modifications in the Dockerfile.
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS or Azure; it needs to behave consistently on each cloud provider.