We intend to deploy a trained model in production. Since we cannot keep it in the code base, we need to upload it to the cloud and reference it at runtime.
We are using Kubernetes, and I'm relatively new to it. Below is my stepwise understanding of how to solve this:
Build a persistent volume containing my trained model (around 30 MB).
Mount the persistent volume into a pod with a single container.
Keep this pod running and refer to the model from a Python script inside the pod (sketched below).
I tried following the persistent volume documentation with no luck. I also tried to copy the model to the PV via "kubectl cp", with no success.
Any idea how to resolve this? Any help would be appreciated.
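For reference, once the PV is mounted into the pod, the Python side is just reading a regular file from the mount path. A minimal sketch, assuming the model is stored as a pickle at /mnt/models/model.pkl (both the path and the format are placeholders):

import pickle
from pathlib import Path

# Assumed mount path of the persistent volume inside the container
MODEL_PATH = Path("/mnt/models/model.pkl")

def load_model():
    # To the script, the mounted PV is just an ordinary directory
    with MODEL_PATH.open("rb") as f:
        return pickle.load(f)

model = load_model()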
I am new to KubeFlow and trying to port/adapt an existing solution to run in KubeFlow Pipelines. The issue I am solving now is that the existing solution shared data via a mounted volume. I know this is not best practice for components exchanging data in KubeFlow; however, this is a temporary proof of concept and I have no other choice.
I am facing issues with accessing an existing volume from the pipeline. I am basically running the code from the KubeFlow documentation here, but pointing to an existing K8s Volume:
from kfp import dsl

def volume_op_dag():
    vop = dsl.VolumeOp(
        name="shared-cache",
        resource_name="shared-cache",
        size="5Gi",
        modes=dsl.VOLUME_MODE_RWO,
    )
The Volume shared-cache exists:
However, when I run the pipeline, a new volume is created:
What am I doing wrong? I obviously don't want to create a new volume every time I run the pipeline but instead mount an existing one.
Edit: Adding KubeFlow versions:
kfp (1.8.13)
kfp-pipeline-spec (0.1.16)
kfp-server-api (1.8.3)
Have a look at the function kfp.onprem.mount_pvc. You can find values for the arguments pvc_name and volume_name via the console command
kubectl -n <your-namespace> get pvc.
The way you use it is by writing the component as if the volume is already mounted and following the example from the doc when binding it in the pipeline:
train = train_op(...)
train.apply(mount_pvc('claim-name', 'pipeline', '/mnt/pipeline'))
Also note that both the volume and the pipeline must be in the same namespace.
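Putting it together for the shared-cache case, a minimal sketch might look like this (the claim name shared-cache, the volume name pvc-shared-cache, the image, and the mount path are assumptions; take the real values from the kubectl get pvc output above):

from kfp import dsl
from kfp.onprem import mount_pvc

@dsl.pipeline(name="shared-cache-pipeline")
def shared_cache_pipeline():
    # Write the component as if /mnt/cache is already mounted
    task = dsl.ContainerOp(
        name="use-cache",
        image="python:3.9",          # placeholder image
        command=["ls", "/mnt/cache"],
    )
    # Mount the existing PVC instead of creating a new one with VolumeOp
    task.apply(mount_pvc(pvc_name="shared-cache",
                         volume_name="pvc-shared-cache",
                         volume_mount_path="/mnt/cache"))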
I am searching for a tutorial or a good reference on how to perform Docker container live migration in Kubernetes between two hosts (embedded devices, arm64 architecture).
As far as I have searched on the internet, I could not find complete documentation about it. I am a newbie, and it would be really helpful if someone could provide good reference materials so that I can improve myself.
Posting this as a community wiki, feel free to edit and expand.
As @David Maze said, in terms of containers and pods it's not really a live migration. Usually pods are managed by deployments, which have replica sets that control the pods' state: they are created and kept at the requested amount. Any change to the number of pods (e.g. you delete one) or to the image used will trigger pod recreation.
This can also be used to schedule pods on different nodes when, for instance, you need to perform maintenance on a node or remove/add one.
As for your question in the comments, it's not necessarily the same volume, and I suppose there can be a short downtime.
Sharing volumes between on-premises Kubernetes clusters (cloud may differ) is not a built-in feature. You may want to look at an NFS server deployed in your network:
Mounting external NFS share to pods
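For illustration, a minimal sketch of mounting an external NFS share into a pod with the official Kubernetes Python client; the server address, export path, image, and names are placeholders:

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod

# Placeholder NFS server and export path
nfs_volume = client.V1Volume(
    name="shared-nfs",
    nfs=client.V1NFSVolumeSource(server="10.0.0.10", path="/exports/shared"),
)

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="nfs-consumer"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="busybox",
            command=["sleep", "3600"],
            volume_mounts=[client.V1VolumeMount(name="shared-nfs", mount_path="/mnt/shared")],
        )],
        volumes=[nfs_volume],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)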
Perhaps a silly question that makes no sense:
In a Kubernetes deployment (or Minikube), when a pod container crashes, I would like to analyze the file system at that moment. That way, I could see core dumps or any other useful information.
I know that I could mount a volume or PVC to get core dumps from a host-defined core pattern location, and I could also get logs by means of an rsyslog sidecar or some other way, but I would still like to do "post-mortem" analysis if possible. I assume that Kubernetes should provide (but I don't know how, hence my question) some mechanism to do these forensic tasks, making life easier for all of us, because in a production system we may need to analyze killed/exited containers.
I tried playing directly with docker run without the --rm option, but I couldn't get anything useful from inspecting the stopped container to recover information or recreate the file system as it was in the last moment the container was alive.
Thank you very much!
When a pod container crashes, I would like to analyze the file system at that moment.
Pods (containers) natively use non-persistent storage.
When a container exits/terminates, so does the container's storage.
A pod (container) can be connected to external storage. This allows persistent data to be stored (for example, you can configure the volume mount as the path for core dumps). Since this external storage is not removed when a container is stopped/killed, it gives you more flexibility to analyze the file system. You can back this storage with commonly used file systems such as NFS.
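As a sketch of that idea, using the official Kubernetes Python client: the pod below mounts a pre-created PVC at an assumed core-dump location /var/coredumps (the PVC name coredump-pvc, the image, and the path are placeholders; the node's kernel.core_pattern would have to write dumps under that path):

from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="app-with-coredumps"),
    spec=client.V1PodSpec(
        containers=[client.V1Container(
            name="app",
            image="my-app:latest",  # placeholder image
            volume_mounts=[client.V1VolumeMount(
                name="coredump-storage",
                mount_path="/var/coredumps",  # assumed core dump location
            )],
        )],
        volumes=[client.V1Volume(
            name="coredump-storage",
            persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                claim_name="coredump-pvc",  # assumed pre-created PVC
            ),
        )],
    ),
)
client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)

Because the dumps land on the PVC, they survive the container being killed and can be inspected later from a debug pod that mounts the same claim.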
I have defined a job in my Kubernetes cluster which is supposed to create a folder with some data. Now I would like to share this folder with all other pods, because they need to use this data.
Currently, the other pods do not run until the above-mentioned job is finished.
So I am thinking about volumes. Let's say the result of the job is a mounted folder, which is accessible from other pods once the job is finished.
The other pods in the cluster need only an environment variable: the path to this mounted folder.
Could you please explain how I could define this?
PS: I know this is not a very good use case; however, I have a legacy monolith application with lots of dependencies.
I'm assuming that the folder you are referring to is a single folder on some disk that can be mounted by multiple clients.
Check here, or in your volume plugin's documentation, whether the access mode you are requesting is supported.
Create the persistent volume claim that your pods will use; there is no need for the matching volume to exist yet. You can use label/expression matching to make sure that this PVC will only be satisfied by the persistent volume you will be creating in your job (see the sketch after these steps).
In your job, add a final task that creates the persistent volume that satisfies the PVC.
Create your pods, adding the PVC as a volume. I don't think pod presets are needed; plus, they are alpha, not enabled by default, and not widely used, but depending on your case you might want to take a look at them.
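A minimal sketch of that PVC with the Python Kubernetes client; the claim name job-output, the label content=job-output, and the ReadOnlyMany access mode are assumptions for illustration:

from kubernetes import client, config

config.load_kube_config()

# PVC that will only bind to a PV labeled content=job-output,
# which the job's final task is expected to create.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="job-output"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadOnlyMany"],
        resources=client.V1ResourceRequirements(requests={"storage": "1Gi"}),
        selector=client.V1LabelSelector(match_labels={"content": "job-output"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

The consuming pods then mount this PVC and receive the path through an environment variable, e.g. DATA_DIR=/mnt/job-output.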
At the moment I am trying to figure out a good setup for my application in Amazon ECS.
My application needs a config file. I want to have a container hold my config file, so that when I want to change something I don't need to redeploy my application.
I can't find any best-practice method for this. What I found out is that ECS tasks just do a docker run, and you can't do a docker create.
Does anyone have an idea how I can manage the config files for my applications?
Most likely, using Docker for this is overkill. How complex is the data? If it's simple key-value pairs, I would use DynamoDB and get rid of the file completely. Another option would be using EFS for the file, or attaching/detaching an EBS volume.
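For the DynamoDB route, a minimal sketch with boto3; the table name app-config and the key schema (a single string attribute name) are assumptions:

import boto3

# Assumed table: partition key "name" (string), attribute "value"
table = boto3.resource("dynamodb").Table("app-config")

def get_config(name, default=None):
    # Fetch a single configuration value at runtime instead of reading a file
    item = table.get_item(Key={"name": name}).get("Item")
    return item["value"] if item else default

db_host = get_config("db_host", default="localhost")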
You should not do that: it makes things fragile, and you're not guaranteed to be able to access the file from all containers across a cluster (or you end up keeping it on all instances, which wastes resources). Why not package it with the container as-is, or package as much as possible and provide environment variables to fill in the gaps? If you really want to go this route, I highly suggest something like S3.
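If you do go the S3 route, a minimal sketch of pulling the config at container startup with boto3; the bucket my-app-config, the key config/app.json, and the local path /etc/app/app.json are placeholders:

import boto3

def fetch_config(bucket="my-app-config", key="config/app.json", dest="/etc/app/app.json"):
    # Download the latest config at startup so a change only needs a
    # container restart (or a periodic re-fetch), not a redeploy.
    boto3.client("s3").download_file(bucket, key, dest)
    return dest

if __name__ == "__main__":
    fetch_config()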