I work on a VM on Google Cloud for my machine learning work.
To avoid installing all the libraries and modules from scratch every time I create a new VM on GCP (or elsewhere), I want to save the VM I created on Google Cloud as a Docker image and store it on GitHub. That way, next time I could just pull and run the image and have my VM ready for work.
Is this a straightforward task? Any ideas on how to do that, please?
When you create a Compute Engine instance, it is built from an artifact called an "image". Google provides some OS images from which you can build. If you then modify these images by (for example) installing packages or performing other configuration, you can then create a new custom image based upon your current VM state.
The recipe for this task is fully documented within the Compute Engine documentation here:
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images
Once you have created a custom image, you can instantiate new VM instances from these custom images.
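As a hedged sketch of that flow with the gcloud CLI, assuming your modified VM's boot disk is named my-vm in zone us-central1-a (all names here are placeholders):

```shell
# Create a custom image from the VM's boot disk
# (stop the VM first, or add --force).
gcloud compute images create my-ml-image \
    --source-disk=my-vm \
    --source-disk-zone=us-central1-a

# Later, start a fresh VM from that custom image.
gcloud compute instances create my-new-vm \
    --image=my-ml-image \
    --zone=us-central1-a
```

Note this produces a Compute Engine image, not a Docker image; it lives in your GCP project rather than on GitHub, but it achieves the stated goal of not reinstalling everything on each new VM.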
I would like to create a VMware image based on a Linux ISO file. The whole thing should take place within a build pipeline, using Docker images.
My approach is to look for a suitable Docker image containing a free ESXi server and use it as the basis in the build pipeline. Unfortunately, I cannot find such an image on Docker Hub.
Is this approach not possible?
I would have expected that several people had done this before me and that I could use an existing Docker image accordingly.
I have WebSphere Application Server 8.5.5.14 hosting my ERP. I want to dockerize the application and deploy it to a Kubernetes cluster. Can anyone provide information on how to create an image from my existing WAS 8.5.5.14 installation?
In theory you could do this by creating a tarball of the filesystem and importing it into Docker to make an image, via something like:
cat WAS.tar | docker import - appImage
but there are a number of issues you'll need to avoid. For example, if you have resources (JDBC drivers, resource adapters, etc.), the tarball will need to include all of them. You'll also need to expose all of the ports required by your application and its administration. A better approach, and a best practice, is to start with an IBM-supported image of traditional WAS and build your system on top of it.
There are detailed instructions to do this at https://github.com/WASdev/ci.docker.websphere-traditional#docker-hub-image
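A hedged sketch of that best-practice route, based on the image published by that repository (the tag, application name, and driver paths below are illustrative placeholders; check the linked instructions for the exact conventions):

```shell
# Pull the IBM-supported traditional WAS base image (tag is illustrative).
docker pull ibmcom/websphere-traditional:latest

# Write a Dockerfile that layers your application and its resources
# (EAR file and JDBC driver names are placeholders) on top of it.
cat > Dockerfile <<'EOF'
FROM ibmcom/websphere-traditional:latest
COPY --chown=was:root myapp.ear /work/apps/myapp.ear
COPY --chown=was:root db2jcc4.jar /work/db2jcc4.jar
EOF

docker build -t my-erp:latest .
```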
F Rowe's answer is good; if you follow their advice of using the official images, you will be using WebSphere v9.0 in the container. You can use this tool, which can help you figure out whether there are any changes you need to make to your application to get it working in the container. It also generates some of the wsadmin scripts to configure the server in the image.
I have GKE, and I need to use a customised Ubuntu image for the GKE nodes. I am planning to enable autoscaling, so I need TLS certificates installed on each node so that it trusts my private Docker registry. This is possible manually for existing nodes, but when autoscaling spins up new nodes, image pulls will fail because Docker cannot trust the private registry hosted on-premises.
I have created a customised Ubuntu image and uploaded it to GCP. I tried to create a GKE cluster with the nodes' OS image set to the image I created.
Do you know how to create a GKE cluster with a customised Ubuntu image? Has anyone dealt with a situation like this?
Node pools in GKE are based on GCE instance templates and can't be modified. That means you aren't allowed to set metadata such as startup scripts or base them on custom images.
However, an alternative approach might be deploying a privileged DaemonSet that manipulates the underlying OS settings and resources.
It is important to mention that granting privileges to resources in Kubernetes must be done carefully.
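A hedged sketch of that DaemonSet approach, assuming the registry's CA certificate has been published in a ConfigMap named registry-ca and that the nodes run Docker, which trusts per-registry certificates under /etc/docker/certs.d (all names, hostnames, and paths here are illustrative):

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: registry-ca-installer
spec:
  selector:
    matchLabels:
      app: registry-ca-installer
  template:
    metadata:
      labels:
        app: registry-ca-installer
    spec:
      containers:
      - name: installer
        image: busybox
        securityContext:
          privileged: true
        # Copy the CA cert into Docker's per-registry trust directory on
        # the host, then idle so the pod keeps running on every node.
        command: ["/bin/sh", "-c"]
        args:
        - mkdir -p /host-certs/registry.example.com &&
          cp /certs/ca.crt /host-certs/registry.example.com/ca.crt &&
          while true; do sleep 3600; done
        volumeMounts:
        - name: ca
          mountPath: /certs
        - name: host-certs
          mountPath: /host-certs
      volumes:
      - name: ca
        configMap:
          name: registry-ca
      - name: host-certs
        hostPath:
          path: /etc/docker/certs.d
          type: DirectoryOrCreate
EOF
```

Because it is a DaemonSet, every node the autoscaler adds gets the certificate installed automatically.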
You can add a custom node pool where the image is Ubuntu, and be sure to set the special GCE instance metadata key startup-script to hold your customisation.
But my advice is to instead put the URL of a shell script stored in a bucket in the same project into startup-script-url; GCE will download the script every time a new node is created and execute it at startup as root.
https://cloud.google.com/compute/docs/startupscript#cloud-storage
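A hedged sketch of that setup with the gcloud CLI (cluster, pool, zone, and bucket names are placeholders; note that the other answer reports GKE may restrict some metadata keys, so verify this against current GKE behaviour):

```shell
# Upload the node customisation script
# (e.g. one that installs the registry CA).
gsutil cp ./node-setup.sh gs://my-project-bucket/node-setup.sh

# Create an Ubuntu node pool whose nodes fetch and
# run that script at boot, as root.
gcloud container node-pools create ubuntu-pool \
    --cluster=my-cluster \
    --zone=us-central1-a \
    --image-type=UBUNTU_CONTAINERD \
    --metadata=startup-script-url=gs://my-project-bucket/node-setup.sh
```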
I have an AWS EC2 account where I am running a couple of web apps on nginx. I don't know much about Docker, except that it is a container technology that takes a snapshot of a filesystem. Now, for some reason, I am forced to switch accounts, and I have opened a new AWS EC2 account. Can I use Docker to set up a container on my old virtual system, create an image from it, and deploy it on my new system? That would save me the headache of installing many components and configuring nginx and all the applications on the new system. Can I do that? If so, how?
According to Docker best practices and its CaaS model, images are not supposed to "virtualize" a whole lot of services; quite the contrary. Docker does not aim to take a snapshot of the system (it uses filesystem overlays to create images, but these are not snapshots).
So basically, if your (still unclear) question is "Can I virtualize my whole system into one image?", the answer is no.
What you can do is use one image per service (you'll find everything you need on Docker Hub) to keep your new system clean.
Another solution would be to list all the installed Linux packages on your old system, install them on the new one, and copy over the configuration files.
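On a Debian/Ubuntu system, that package-list approach might look like the following sketch (hostnames and paths are illustrative, and it assumes both systems run the same distribution release):

```shell
# On the old system: record the installed package selections.
dpkg --get-selections > packages.txt

# Copy the list and the relevant configuration to the new system.
scp -r packages.txt /etc/nginx user@new-host:~/migration/

# On the new system: replay the selections and install everything.
sudo dpkg --set-selections < ~/migration/packages.txt
sudo apt-get dselect-upgrade -y
```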
I have a Docker image (in Ubuntu 14.04 environment) that I want to upload to Google Compute Engine and run as a Compute Engine (not App Engine) instance.
There is a presentation (by Google's Marc Cohen) about how to do this but it leaves out key steps (on page 34) about how to convert the Docker image to raw tar.gz format.
Can someone tell me the exact steps to
convert Docker image to correct format
upload to google storage
create google compute engine image
start google compute engine instance
If you are not bound to your Ubuntu image, you could just use the ready-made VMs with Docker support (Debian Wheezy) and drop your containers in.
For more info on using Docker on GCE, see:
Container-optimized Google Compute Engine images
Containers on Google Cloud Platform
The documentation on packaging has comprehensive steps to do all what you have enumerated.
There are some specific requirements that your install must meet in order for it to be compatible with GCE; it's a long list of kernel compatibility flags, disk types, NTP settings, etc., so copy-pasting it here would not be prudent, as this information is likely to change as GCE is updated by Google.
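For the upload/create/start steps (2-4 in the question), a hedged sketch with the gsutil/gcloud CLIs, assuming you have already produced a bootable raw disk file that meets the GCE requirements mentioned above (bucket, image, and instance names are placeholders):

```shell
# Package the raw disk; GCE expects a gzipped tar containing a file
# named exactly "disk.raw", in oldgnu tar format.
tar --format=oldgnu -Sczf image.tar.gz disk.raw

# Upload the archive to Cloud Storage.
gsutil cp image.tar.gz gs://my-bucket/image.tar.gz

# Register it as a Compute Engine image.
gcloud compute images create my-docker-host \
    --source-uri=gs://my-bucket/image.tar.gz

# Boot an instance from it.
gcloud compute instances create my-instance \
    --image=my-docker-host \
    --zone=us-central1-a
```

The hard part remains step 1, converting the Docker image's filesystem into that bootable disk.raw, which is what the packaging documentation referenced above covers.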