I want to use Kubernetes as my default development environment. For that, I set up a cluster locally with Docker as explained in the official docs, and I pushed my example to a GitHub repository.
My setup steps, after having a Kubernetes cluster running, were:
* cd cluster_config/app && docker build --tag=k8s_php_dev . && cd ../..
* kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.rc.yml
* kubectl -s http://127.0.0.1:8080 create -f cluster_config/app/app.services.yml
My issue arises because I want to map a local directory as a volume inside my app pod, so that the files in it are shared dynamically between my local host and the pod; that way I can develop and change the files locally and have the service pick up the changes.
I use a volume with a hostPath. The pod, replication controller and service are created successfully, but the pod does not share the directory; the files are not even present at the mountPath.
What am I doing wrong?
Thanks
The issue was in the volume definition: the hostPath.path property should hold the absolute path of the directory to mount.
Example:
hostPath:
  path: /home/bitgandtter/Documents/development/php/k8s_devel_env
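For reference, here is a minimal sketch of how such a hostPath volume might be wired into the pod template of the replication controller (the container name and mountPath are illustrative assumptions, not taken from the original files):
spec:
  containers:
    - name: app                     # illustrative container name
      image: k8s_php_dev            # the locally built image from the steps above
      volumeMounts:
        - name: devel-env           # must match the volume name below
          mountPath: /var/www/html  # path inside the container; adjust to your app
  volumes:
    - name: devel-env
      hostPath:
        path: /home/bitgandtter/Documents/development/php/k8s_devel_env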
I have several services in my docker-compose file that pull images to create containers on ACI (Azure Container Instances).
Everything works fine on my local machine when I mount different directories and subfolders into the Docker containers:
volumes:
  - folder/sub_folder/sub/folder:/etc/nginx/certs
But spinning up instances on ACI requires the azure_file driver, which I use, and with it I am not able to mount subfolders of the file share to a path.
I created a volume in the compose file:
volumes:
  data-volume:
    driver: azure_file
    driver_opts:
      share_name: acishare
      storage_account_name: storageaccount
      storage_account_key: /run/secrets/storage_account_key.txt
and I have tried this for a container:
services:
  app:
    volumes:
      - data-volume:/etc/nginx/
The above works fine but mounts the home directory of the file share, which is understandable since no directory was specified.
I did some research and saw that on AKS one could specify the directory of the file share as part of the share name. I tried this with a slash separator, but I got an error message saying the file share doesn't exist:
volumes:
  data-volume:
    driver: azure_file
    driver_opts:
      share_name: acishare/sub_directory/sub_directory
      storage_account_name: storageaccount
      storage_account_key: /run/secrets/storage_account_key.txt
I have also tried adding the path to the volume mapping, but this doesn't work either:
volumes:
  - data-volume/sub_directory/sub_directory:/etc/nginx/
What is the correct way to mount different subfolders of an Azure File Share into an ACI container?
PS: My codebase is on GitHub and I am using a workflow to upload-batch the files to the Azure file share, because I need to copy (mount) a subfolder into the wwwroot directory of the container.
The repo also has some configuration files that need to be mounted directly into the container. These files are also not in the root folder but inside different subfolders.
If there is a better alternative for handling situations like this, I don't mind. I have tried using blob storage but couldn't come up with a way to make it work.
Maybe you could try this:
volumeMounts:
  - mountPath: /main/path
    subPath: data
    name: whatevername
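For context, that is Kubernetes volumeMounts syntax; a rough sketch of how it might sit next to an Azure File volume definition in a pod spec could look like this (the secret name, image and subPath value are assumptions, not from the question):
containers:
  - name: app
    image: nginx                              # illustrative image
    volumeMounts:
      - name: data-volume
        mountPath: /etc/nginx/certs
        subPath: sub_directory/sub_directory  # subfolder inside the share
volumes:
  - name: data-volume
    azureFile:
      secretName: azure-storage-secret        # assumed secret holding the account name and key
      shareName: acishare
      readOnly: false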
Check the following link; I hope it helps.
azure subdirs
Cheers!
I tried to reproduce this, but I was unable to mount a single file or folder from an Azure File Share to an Azure Container Instance.
Deploy container and mount volume
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name hellofiles \
--image mcr.microsoft.com/azuredocs/aci-hellofiles \
--dns-name-label aci-demo \
--ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
The storage key used above was retrieved beforehand with:
STORAGE_KEY=$(az storage account keys list --resource-group $ACI_PERS_RESOURCE_GROUP --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" --output tsv)
echo $STORAGE_KEY
One more thing to notice: once you mount the file share as a volume into the container, any files you update in the file share are updated in the container as well.
This may be the reason we cannot mount a specific file or folder as a volume in the container.
There are some limitations to this as well:
• You can only mount Azure Files shares to Linux containers. Review more about the differences in feature support for Linux and Windows container groups in the overview.
• You can only mount the whole share and not the subfolders within it.
• Azure file share volume mounts require the Linux container to run as root.
• Azure File share volume mounts are limited to CIFS support.
• Share cannot be mounted as read-only.
• You can mount multiple volumes but not with Azure CLI and would have to use ARM templates instead.
Reference: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
https://www.c-sharpcorner.com/article/mounting-azure-file-share-as-volumes-in-azure-containers-step-by-step-demo/
We have mounted a folder on a Linux machine into our Docker container application using docker-compose:
volumes:
  - /mnt/share:/mnt/share
/mnt/share is itself a mounted folder on the machine (not a real local folder; it comes from our file server). If for some reason that mount is lost and then remounted, the application running in the Docker container no longer has access to the mounted folder until the container is restarted.
You might want to use a volume driver instead of bind-mounting a local filesystem.
See Share data among machines
Without knowing more about your environment it is impossible to give a more detailed answer. It would be helpful to know if your container runs in an AWS data center and whether you use nfsv3, nfsv4 or cifs for mounting.
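For example, if the file server is reached over NFS, a named volume backed by Docker's local driver with NFS options might look like this (the server address and export path are placeholders):
volumes:
  share-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=fileserver.example.com,nfsvers=4,rw"  # placeholder server address
      device: ":/export/share"                       # placeholder export path
services:
  app:
    volumes:
      - share-data:/mnt/share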
The following solution helped me to continue.
I wrote a script to check whether the folder exists.
The script is then called as a command in the docker-compose file.
version: "3"
services:
  flowable-task-handler:
    build: flowable-task-handler
    ports:
      - "8085:8085"
    command: bash -c "/wait_for_file_mount.sh /mnt/share/fileshares/ && java -jar /app.jar"
wait_for_file_mount.sh
#!/bin/sh
# Used to check whether the mount folder is ready for flowable to use
mountedfolder="$1"
until [ -d "$mountedfolder" ]; do
  sleep 2
  echo "error: Mounted folder not found : $mountedfolder"
done
It's a Spring Boot application. I have removed the ENTRYPOINT from the Dockerfile, and the application is started using the command in docker-compose (java -jar /app.jar).
Defining the mount propagation as ":shared" should fix this:
-v /autofs:/autofs:shared \
I'm not sure about docker-compose since I don't really use it, but you can define a Docker volume with mount propagation and put that into your compose file.
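In compose, the long volume syntax does expose a bind propagation option, so a rough, untested equivalent of the -v flag above might look like this:
services:
  app:
    volumes:
      - type: bind
        source: /autofs
        target: /autofs
        bind:
          propagation: shared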
I am trying to change my existing deployment logic and switch to Kubernetes. (My server is on GCP, and until now I have used docker-compose to run it.) So I decided to start by using kompose and generating services/deployments from my existing docker-compose file. After running
kompose --file docker-compose.yml convert
I got warnings indicating: Volume mount on the host "mypath" isn't supported - ignoring path on the host
After a little research I decided to use the command below to "fix" the issue
kompose convert --volumes hostPath
What this command achieved is that it replaced the persistent volume claims generated by the first command with the code below.
volumeMounts:
  - mountPath: /path
    name: certbot-hostpath0
  - mountPath: /somepath
    name: certbot-hostpath1
  - mountPath: /someotherpath
    name: certbot-hostpath2
volumes:
  - hostPath:
      path: /path/certbot
    name: certbot-hostpath0
  - hostPath:
      path: /path/cert_challenge
    name: certbot-hostpath1
  - hostPath:
      path: /path/certs
    name: certbot-hostpath2
But since I am working on my local machine,
kubectl apply -f <output file>
results in: The connection to the server localhost:8080 was refused - did you specify the right host or port?
I didn't want to connect my local environment to GCP just to generate the necessary files; is this a must? Or can I move this into my startup-gcp script instead?
I feel like I am going in the right direction, but I need confirmation that I am not messing something up.
1) I have only one Compute Engine (VM instance) and lots of data in my prod DB. How do I (and do I need to) make sure I don't lose any data in the DB while doing this?
2) In startup-gcp, after doing everything else (pruning Docker images etc.), I had a docker run command that makes use of docker/compose 1.13.0 up -d. How should I change it to switch to Kubernetes?
3) Should I change anything in nginx.conf, as it references two different services in my docker-compose file? (I don't think I should, since the same services also exist in the Kubernetes-generated YAMLs.)
You should consider using Persistent Volume Claims (PVCs). If your cluster is managed, in most cases it can automatically create the PersistentVolumes for you.
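As a rough idea, a claim for one of those certbot paths could look something like this (the name, access mode and size are illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: certbot-data        # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi          # adjust to your needs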
One way to create the Persistent Volume Claims corresponding to your docker-compose files is to use Move2Kube (https://github.com/konveyor/move2kube). You can download the release, place it in your PATH, and run:
move2kube translate -s <path to your docker compose files>
It will then interactively allow you to configure the PVCs.
If you have a specific cluster you are targeting, you can get the storage classes supported by that cluster using the command below, in a terminal where you have set that Kubernetes cluster as the context for kubectl.
move2kube collect
Once you run collect, you will have an m2k_collect folder, which you can then place in the folder where your docker-compose files are. When you run move2kube translate, it will automatically ask whether to target that specific cluster and also give you the option to choose a storage class from it.
1) I have only one Compute Engine (VM instance) and lots of data in my prod DB. How do I (and do I need to) make sure I don't lose any data in the DB while doing this?
Once the PVC is provisioned, you can copy the data into it by using the kubectl cp command against a pod where the PVC is mounted.
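For example (the namespace, pod name and paths are placeholders):
kubectl cp ./prod-db-backup my-namespace/my-pod:/data/prod-db-backup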
2) In startup-gcp, after doing everything else (pruning Docker images etc.), I had a docker run command that makes use of docker/compose 1.13.0 up -d. How should I change it to switch to Kubernetes?
You can potentially change it to use a Helm chart. Move2Kube, during the interactive session, can help you create a Helm chart too. Once you have the Helm chart, all you have to do is run helm upgrade -i <release-name> <chart-path>.
3) Should I change anything in nginx.conf, as it references two different services in my docker-compose file? (I don't think I should, since the same services also exist in the Kubernetes-generated YAMLs.)
If the service names are the same, in most cases it should work.
I'm currently migrating a legacy server to Kubernetes, and I found that kubectl and the dashboard only show the latest log file, not the older rotated ones. In order to access the old files, I have to SSH to the node machine and search for them.
In addition to being a hassle, my team wants to restrict access to the node machines themselves, because they will be running pods from many different teams and unrestricted access could be a security issue.
So my question is: can I configure Kubernetes (or a Docker image) so that these old (rotated) log files are stored in some directory accessible from inside the pod itself?
Of course, in a pinch, I could probably just execute something like run_server.sh | tee /var/log/my-own.log when the pod starts... but then, to do it correctly, I'll have to add the whole logfile rotation functionality, basically duplicating what Kubernetes is already doing.
There are a couple of ways to do this, depending on the scenario. If you are just interested in the logs of the same pod from before its last restart, you can use the --previous flag:
kubectl logs -f <pod-name-xyz> --previous
But since, in your case, you are interested in log files beyond one rotation, here is how you can do it: add a sidecar container next to your application container:
volumeMounts:
  - name: varlog
    mountPath: /tmp/logs
- name: log-helper
  image: busybox
  args: [/bin/sh, -c, 'tail -n+1 -f /tmp/logs/*.log']
  volumeMounts:
    - name: varlog
      mountPath: /tmp/logs
volumes:
  - name: varlog
    hostPath:
      path: /var/log
This mounts the host's /var/log directory, which holds all the log files, to /tmp/logs inside the containers, and the tail command ensures that the content of all files is streamed. Now you can run:
kubectl logs <pod-name-abc> -c log-helper
This solution does away with SSH access, but it still needs kubectl access and an extra sidecar container. I still think this is not an ideal solution, and you should consider one of the options from the cluster-level logging architecture documentation of Kubernetes, such as 1 or 2.
Is there a way to create a directory on the local file system via the YAML file if it does not exist?
I am currently mounting a dir from my local file system inside the container, and it works. But if the dir does not exist on the file system, the container launch fails because the dir cannot be mounted. How can I make this as seamless as possible and embed the dir-creation logic in the swarm YAML file?
As far as I know, docker-compose doesn't permit this; you probably have to do it by hand.
But you could also use an automation tool like Puppet or Ansible to handle such a step: deploy your application, create the appropriate directories and set up your servers.
Here is how your tasks could look in an Ansible playbook that deploys a simple app and creates a directory to mount your containers' volumes on, for instance:
- name: copy docker content
  copy:
    src: /path/to/app_src
    dest: /path/to/app_on_server

- name: create directory for volume
  file:
    name: /path/to/mountpoint
    state: directory

- name: start containers
  shell: docker-compose up -d --build
  args:
    chdir: /path/to/app_on_server
(Note that this snippet is here to provide a general idea of the concept, you'd probably have to set up become directives, permissions, ownership, software installation and many other steps very specific to your application)
The cleanest way would be to take the Dockerfile, for example from the official Nginx image, and add an additional RUN mkdir /my/folder to it.
Afterwards you build your own Docker image for Nginx via docker build . and you have a clean image which contains what you need, based on the official source.
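A minimal sketch of such a Dockerfile (the base image tag and folder path are just examples):
FROM nginx:stable
# create the directory inside the image so the mount point always exists
RUN mkdir -p /my/folder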