Using gcloud, I want to create an instance template of type n1-standard-1 with an attached GPU and a docker container. This can be done through the console but I want to do it from the command line.
It is possible to create an instance template with GPU using gcloud alpha compute instance-templates create and the --accelerator option.
It is also possible to create an instance template with a container using gcloud alpha compute instance-templates create-with-container but in this case the --accelerator option is not recognized.
... but it seems it is not possible to both specify the container image and request a GPU, or am I missing something? Any work-around besides creating the template manually using the console?
It is possible to create an instance with accelerators and a container with gcloud by creating the instance with accelerators and then using gcloud beta compute instances update-container to set the container. However, it is not currently possible to create an instance template with accelerators and a container with gcloud.
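For example, a rough sketch of that work-around (the instance name, zone, GPU type and container image are placeholders; a Container-Optimized OS image is assumed so the instance can run containers):
gcloud compute instances create my-gpu-vm \
    --machine-type=n1-standard-1 \
    --accelerator=type=nvidia-tesla-k80,count=1 \
    --maintenance-policy=TERMINATE \
    --image-family=cos-stable --image-project=cos-cloud \
    --zone=us-central1-a
gcloud beta compute instances update-container my-gpu-vm \
    --zone=us-central1-a \
    --container-image=gcr.io/my-project/my-image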
You can file a feature request for that functionality at:
https://issuetracker.google.com/issues/new?component=187143&template=0
I am new to KubeFlow and trying to port / adapt an existing solution to run in KubeFlow pipelines. The issue I am solving now is that the existing solution shares data via a mounted volume. I know this is not the best practice for components exchanging data in KubeFlow; however, this is a temporary proof of concept and I have no other choice.
I am facing issues with accessing an existing Volume from the pipeline. I am basically running the code from the KubeFlow documentation here, but pointing to an existing K8s Volume:
def volume_op_dag():
    vop = dsl.VolumeOp(
        name="shared-cache",
        resource_name="shared-cache",
        size="5Gi",
        modes=dsl.VOLUME_MODE_RWO
    )
The Volume shared-cache exists.
However, when I run the pipeline, a new volume is created.
What am I doing wrong? I obviously don't want to create a new volume every time I run the pipeline but instead mount an existing one.
Edit: Adding KubeFlow versions:
kfp (1.8.13)
kfp-pipeline-spec (0.1.16)
kfp-server-api (1.8.3)
Have a look at the function kfp.onprem.mount_pvc. You can find values for the arguments pvc_name and volume_name via the console command
kubectl -n <your-namespace> get pvc.
The way to use it is to write the component as if the volume were already mounted, and to follow the example from the docs when binding it in the pipeline:
train = train_op(...)
train.apply(mount_pvc('claim-name', 'pipeline', '/mnt/pipeline'))
Also note that both the volume and the pipeline must be in the same namespace.
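A minimal sketch of a pipeline that mounts the existing PVC, assuming the claim and the volume are both named shared-cache (check the real names with the kubectl command above); the container step here is just a placeholder:
from kfp import dsl
from kfp.onprem import mount_pvc

@dsl.pipeline(name="shared-cache-pipeline")
def shared_cache_pipeline():
    # Placeholder step; replace with your own component
    train = dsl.ContainerOp(
        name="train",
        image="python:3.9",
        command=["sh", "-c", "ls /mnt/shared"],
    )
    # pvc_name and volume_name come from `kubectl get pvc`
    train.apply(mount_pvc(pvc_name="shared-cache",
                          volume_name="shared-cache",
                          volume_mount_path="/mnt/shared"))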
According to Creating and configuring instances and my own testing, the Google Container-Optimized OS launches the specified container on instance startup.
However, I'd like to execute my own startup script which would include running the container. Is there any way to prevent this default behaviour of automatically running the container on startup?
Specifying a custom startup script for the instance doesn't seem to prevent the default behaviour.
You can create a COS instance and specify either a cloud-init config or a startup script.
Then use gcloud compute instances create (rather than gcloud compute instances create-with-container) and --metadata-from-file or --metadata=startup-script= respectively.
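A rough sketch of both variants (the instance name, zone and file names are placeholders):
# With a plain startup script
gcloud compute instances create my-cos-vm \
    --image-family=cos-stable --image-project=cos-cloud \
    --zone=us-central1-a \
    --metadata-from-file=startup-script=startup.sh
# Or with cloud-init (COS reads it from the user-data metadata key)
gcloud compute instances create my-cos-vm \
    --image-family=cos-stable --image-project=cos-cloud \
    --zone=us-central1-a \
    --metadata-from-file=user-data=cloud-init.yaml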
Is there a way to convert a Dockerfile to an EC2 instance (for example)?
I.e., a script that interprets the Dockerfile, installs all the correct versions of the dependencies, and performs any other deployment operations on a bare-metal EC2 instance.
I do not mean running the Docker image inside Docker, but deploying it directly on the instance.
I do not think you can do this with the help of tools, but you can do it with the help of the Dockerfile itself.
First, choose the OS for your EC2 instance based on the base image used in the Dockerfile, which you can find at the start of the Dockerfile, for example FROM ubuntu; so choose Ubuntu for your EC2 machine. The rest of the commands will be the same as the ones you perform in the Dockerfile.
But we also want Docker-like behaviour, meaning we create the setup once and run it on different EC2 machines in different regions. For this, launch and prepare one instance, test it accordingly, and then create an AWS AMI from that EC2 instance. You can now treat this AWS AMI like a Docker image.
Amazon Machine Image (AMI)
An Amazon Machine Image (AMI) provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you need multiple instances with the same configuration. You can use different AMIs to launch instances when you need instances with different configurations.
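As a rough sketch, once the prepared instance is tested you could bake and reuse the AMI with the AWS CLI (the instance ID, AMI ID and names below are placeholders):
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "my-baked-ami" \
    --description "Image baked from a manually prepared EC2 instance"
# Later, launch new instances from that AMI in the same region
aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro --count 1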
The second option is to put the complete script in the user-data section; you can consider this the ENTRYPOINT of Docker, where we want to prepare things at run time.
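A minimal sketch of that approach, assuming a hypothetical Dockerfile that installs nginx (the package names and paths are made up for illustration):
#!/bin/bash
# EC2 user-data runs as root on first boot; mirror the Dockerfile steps here
apt-get update -y
apt-get install -y nginx              # equivalent of: RUN apt-get install -y nginx
# equivalent of COPY: pull the application from S3 or a git repo instead
# aws s3 cp s3://my-bucket/app.tar.gz /opt/app.tar.gz
systemctl enable --now nginx          # equivalent of the CMD that starts the service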
I work on a VM on Google Cloud for my machine learning work.
In order to avoid installing all the libraries and modules from scratch every time I create a new VM on GCP or wherever, I want to save the VM that I created on Google Cloud, e.g. on GitHub as a Docker image, so that next time I can just load and run it as a Docker image and have my VM ready for work.
Is this a straightforward task? Any ideas on how to do that, please?
When you create a Compute Engine instance, it is built from an artifact called an "image". Google provides some OS images from which you can build. If you then modify these images by (for example) installing packages or performing other configuration, you can then create a new custom image based upon your current VM state.
The recipe for this task is fully documented within the Compute Engine documentation here:
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images
Once you have created a custom image, you can instantiate new VM instances from these custom images.
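As a rough sketch (the VM, zone and image names are placeholders; by default the boot disk has the same name as the instance):
# Stop the VM (recommended) and create a custom image from its boot disk
gcloud compute instances stop my-ml-vm --zone=us-central1-a
gcloud compute images create my-ml-image \
    --source-disk=my-ml-vm --source-disk-zone=us-central1-a
# Launch a new VM with everything pre-installed
gcloud compute instances create my-new-ml-vm \
    --image=my-ml-image --zone=us-central1-a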
The business requirement is the following:
Stop running container
Modify environment (Ex.Change value of DEBUG_LEVEL environment variable)
Start container
This is easily achievable using the Docker CLI:
docker create/docker stop/docker start
How to do it using kubernetes?
Additional info:
We are migrating from Cloud Foundry to Kubernetes. In CF, you deploy an application, stop it, set an environment variable, and start it again. The same functionality is needed.
For those who are not familiar with CF applications: a CF application is like a Docker container with a single running (micro)service.
Typically, you would run your application as a Deployment or as a StatefulSet. In this case, just change the value of the environment variable in the template and reapply the Deployment (or StatefulSet). Kubernetes will do the rest for you.
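For example (the Deployment and variable names are placeholders), you can either edit the env value in the manifest and re-apply it, or change it directly; both trigger a rolling restart of the Pods:
kubectl apply -f deployment.yaml
# or
kubectl set env deployment/my-app DEBUG_LEVEL=DEBUG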
Let's say you are creating a pod/deployment/statefulset using the following command.
kubectl apply -f blueprint.yaml
blueprint.yaml is the YAML file which contains the blueprint of your pod/deployment/statefulset object.
Method 1 - If you specify the environment variables in the YAML file
Then you can change blueprint.yaml to modify the value of the environment variable, as described here:
https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/
Then execute the same command again to apply the changes.
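For illustration (the container name, image and value are placeholders), the relevant part of blueprint.yaml would look like this:
spec:
  containers:
  - name: my-app
    image: my-app:1.0
    env:
    - name: DEBUG_LEVEL
      value: "DEBUG"   # change this value and re-apply the file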
Method 2 - If you specify the environment variables in the Dockerfile
You should build your Docker image with a new tag, then change the image tag in the blueprint.yaml file and execute the same command again to apply the changes.
Method 3
You can also delete and create the pod/deployment/statefulset again.
kubectl delete -f blueprint.yaml
kubectl apply -f blueprint.yaml
There is also another possibility:
Define container environment variables using configmap data
Let Kubernetes react to ConfigMap changes. A ConfigMap change does not trigger a restart of the Pods by default, unless you change the Pod spec somehow. Here is an article that describes how to achieve it using a SHA-256 hash generated from the ConfigMap.
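A minimal sketch of that idea (the namespace, ConfigMap and Deployment names are placeholders): stamp a hash of the ConfigMap onto the Pod template, so a changed hash rolls the Pods:
# Hash the current ConfigMap contents
HASH=$(kubectl -n my-ns get configmap app-config -o yaml | sha256sum | cut -d' ' -f1)
# Patching a Pod template annotation changes the Pod spec and triggers a rolling restart
kubectl -n my-ns patch deployment my-app -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"checksum/config\":\"$HASH\"}}}}}"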