How can I create a directory named after the current hostname in AKS?
What are the possible ways to achieve this?
I tried using $HOSTNAME, but it does not work from the Dockerfile. It does work when we execute the command manually inside the AKS pod.
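For context, a Dockerfile RUN instruction executes at image build time, before the pod (and therefore its hostname) exists, whereas a command run in the pod sees the runtime $HOSTNAME. A minimal sketch of the runtime approach, assuming a hypothetical /data path and a throwaway busybox image:

apiVersion: v1
kind: Pod
metadata:
  name: hostname-dir-demo
spec:
  containers:
  - name: app
    image: busybox
    # The shell expands $HOSTNAME when the container starts, so the directory
    # is named after the pod's actual hostname rather than anything known at build time.
    command: ["sh", "-c", "mkdir -p /data/$HOSTNAME && sleep 3600"]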
I need to update certificates that are currently in Docker containers running via Kubernetes pods. The three pods containing these certificates are named 'app', 'celery' and 'celery beat'.
When I run
kubectl exec -it app -- sh
and then ls
I can see that the old certificates are there. I have new certificates on my VM filesystem and need to get these into the running pods so the program starts to work again. I tried rebuilding the docker images used to create the running containers (using the existing docker compose file), but that didn't seem to work. I think the filesystem in the containers was initially mounted using docker volumes. That presumably was done locally whereas now the project is on a remote Linux VM. What would be the natural way to get the new certs into the running pods leaving everything else the same?
I can kubectl cp the new certs in; the issue with that is that when the pods get recreated, they revert to the old certificates.
Any help would be much appreciated.
Check the volumes section of your deployment file for any mention of a ConfigMap, Secret, PV, or PVC, most likely with a name like "certs" (we normally use names like this). If it exists and it is a Secret or ConfigMap, you just need to update that resource directly. If it is a PV or PVC, you'll need to update it via the CLI, for example, and I suggest you switch to a Secret.
Command to check your deployment resource: kubectl get deploy <DEPLOY NAME> -o yaml (if you don't use a Deployment, change it to the right resource kind).
Also, you can open a shell in your pod and run df -hT; this will probably show your drives and mount points.
In the worst-case scenario, where the certs were added during the container build, you can solve it as follows (this is not best practice; the best practice is to build a new image):
Edit the container image, remove the certs, and push it with a new tag (don't overwrite the old one).
Create a Secret with the new certs.
Mount this Secret at the same path, using the same file names.
Change the image version in the deployment.
You can use kubectl edit deploy <DEPLOY NAME> to edit your resource.
To edit your container image, use docker commit: https://docs.docker.com/engine/reference/commandline/commit/
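A minimal sketch of the Secret-based approach; the file names and the /etc/certs mount path are placeholders:

kubectl create secret generic certs --from-file=tls.crt=./new/tls.crt --from-file=tls.key=./new/tls.key

and in the deployment's pod spec (again with placeholder names):

volumes:
- name: certs
  secret:
    secretName: certs
containers:
- name: app
  volumeMounts:
  - name: certs
    mountPath: /etc/certs
    readOnly: true

The next time the certs change, you only need to update the Secret and restart the pods; the image itself stays untouched.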
I'm going to move what I was running on EC2 over to ECS.
On traditional EC2, the -v /home/ubuntu:/data option allowed the volume to be mounted.
First, I added a volume through "Add volume" in the task definition and proceeded with the mount as before.
However, this did not produce the expected result.
So I have some questions.
First, on Ubuntu the path is /home/ubuntu, but I'm not sure how the path is configured on ECS Fargate.
Second, I am wondering whether adding :/data at the end of the container path is the right approach.
(Screenshots: the defined volume, the volume setting for the existing EC2 written in JSON, and the mount points in ECS.)
With Fargate you would need to use an EFS volume for this. You don't have access to host volumes with Fargate.
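For reference, a minimal sketch of the relevant task-definition fragment; the file system ID, volume name, and container path are placeholders, and the rest of the task definition (family, image, CPU/memory, plus the EFS mount targets and security groups) is omitted:

{
  "volumes": [
    {
      "name": "data",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "rootDirectory": "/"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app",
      "mountPoints": [
        {
          "sourceVolume": "data",
          "containerPath": "/data"
        }
      ]
    }
  ]
}

The :/data part of the old docker -v syntax corresponds to the containerPath field of the mount point here; there is no host path to pick because Fargate gives you no access to the host.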
I'm trying to use two Docker containers. One contains the Jenkins application and the other is an Nginx server. I am building a React application with Jenkins and I would like to copy my dist files into the Nginx container. How can I do that?
I tried to do something like this: (screenshot)
One of the easiest ways to achieve this is using a volume. Create an "nginx_data" volume that is mounted at the Nginx container's www folder and, at the same time, mount the same volume inside the Jenkins container. Then you can simply copy files to that volume location inside the Jenkins container and they will automatically be visible inside the Nginx container. See the Docker documentation on volumes for more.
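A minimal docker-compose sketch of that idea; the Jenkins-side path /var/jenkins_dist is an arbitrary placeholder, while /usr/share/nginx/html is the default Nginx content directory:

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:lts
    volumes:
      # Jenkins copies the built dist files into this shared volume
      - nginx_data:/var/jenkins_dist
  nginx:
    image: nginx:alpine
    volumes:
      # Nginx serves whatever lands in the same volume
      - nginx_data:/usr/share/nginx/html
volumes:
  nginx_data:

After a successful build, copying the dist output to /var/jenkins_dist inside the Jenkins container makes it immediately available to the Nginx container.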
I have a Jenkins instance set up using Google's Jenkins on Kubernetes solution. I have not changed any of the settings of the Kubernetes Pod.
When I trigger a new job I am successfully able to get everything up and running until the point of my tests.
My tests use docker-compose. First I make sure to install docker (1.5-1+b1) and docker-compose (1.8.0-2) on the instance (I know I can optimize this by using an image that already includes these, but I am still at the proof-of-concept stage).
When I run the docker-compose up command, everything works and the services start their initialization scripts. However, the mounts are empty. I have verified that the files exist on the Jenkins slave, and that the mount is created inside the Docker service when I run docker-compose, yet it is empty.
Some information:
In order to get around file permissions, I am using /tmp as the Jenkins workspace. I am using SCM to pull my files (successfully), and in the docker-compose file I specify version: '2' and absolute mount paths. The volumes section of the service that fails looks like this:
volumes:
- /tmp/automation:/opt/automation
I changed the command that is run in the service to ls /opt/automation and the result is an empty directory.
What am I missing? I just want to mount a directory into my docker-compose service. This works perfectly from Windows, Ubuntu, and CentOS devices. Why won't it work on the Kubernetes instance?
I found the reason it fails:
A Docker container in a Docker container uses the parent HOST's Docker daemon, and hence any volumes that are mounted in the "docker-in-docker" case are still referenced from the HOST, not from the container.
Therefore, the actual path mounted from the Jenkins container "does not exist" on the HOST. Because of this, a new, empty directory is created in the "docker-in-docker" container. The same thing applies when a directory is mounted into a new Docker container inside a container.
So it seems it is impossible to mount something from the outer Docker container into the inner one, and another solution must be found.
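A quick way to see the behaviour described above (the /tmp/automation path is the one from the question, the rest is illustrative):

# Run from inside the Jenkins slave container, which talks to the HOST's Docker daemon.
# The bind-mount source is resolved on the HOST, so if /tmp/automation exists only
# inside the Jenkins container, Docker creates an empty directory on the host
# and the mount shows up empty.
docker run --rm -v /tmp/automation:/opt/automation busybox ls -la /opt/automation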
I have an image to which I need to add a dependency. Therefore I have tried to change the image while it is running in a container and to create a new image from it.
I have followed this article and then run the following commands:
kubectl run my-app --image=gcr.io/my-project-id/my-app-image:v1 --port 8080
kubectl get pods
kubectl exec -it my-app-container-id -- /bin/bash
Then, in the container's shell, I installed the dependency using "pip install NAME_OF_DEPENDENCY".
Then I exited the container's shell and, as explained in the article, I should commit the change using this command:
sudo docker commit CONTAINER_ID nginx-template
But I cannot find the corresponding command for Google Kubernetes Engine with kubectl.
How should I do the commit in Google Container Engine?
As of K8s version 1.8, there is no way to make hot-fix changes directly to images, for example by committing a new image from a running container. If you change or add something by using exec, it will only stay while the container is running. It's not best practice in the K8s ecosystem.
The recommended way is to use a Dockerfile and customise the image according to your needs and requirements. After that, you can push the image to a registry (public or private) and deploy it with a K8s manifest file.
Solution to your issue
Create a Dockerfile for your image.
Build the image using the Dockerfile.
Push the image to the registry.
Write the deployment manifest file as well as the service manifest file.
Apply the manifest files to the K8s cluster.
Now, if you want to change or modify something, you just need to change the Dockerfile and repeat the remaining steps.
As you know, containers are short-lived creatures that do not persist changed behaviour (modified configuration, changes to the file system). Therefore, it's better to introduce new behaviour or modifications in the Dockerfile.
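A minimal sketch of those steps for the pip dependency from the question; the v2 tag and the assumption that both the deployment and the container are named my-app are mine, and the dependency name is a placeholder:

Dockerfile:

FROM gcr.io/my-project-id/my-app-image:v1
RUN pip install NAME_OF_DEPENDENCY

Build, push, and point the deployment at the new tag:

docker build -t gcr.io/my-project-id/my-app-image:v2 .
docker push gcr.io/my-project-id/my-app-image:v2
kubectl set image deployment/my-app my-app=gcr.io/my-project-id/my-app-image:v2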
Kubernetes Mantra
Kubernetes is a cloud-native product, which means it does not matter whether you are using Google Cloud, AWS, or Azure; it needs to have consistent behaviour on each cloud provider.