I am having a problem and need help: how do I add a custom VM image to an AKS cluster as a node pool backed by a VM scale set, using Terraform code in Azure?
AKS uses an Ubuntu 16.04 LTS image as the host OS, and there is no option to select an alternate operating system. See more details here.
If you want to use your own custom image, you can use aks-engine instead and apply the custom image there.
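For illustration, here is a minimal sketch of the relevant part of an aks-engine API model, assuming the imageReference field aks-engine supports for agent pool profiles; the pool, image, and resource group names here are hypothetical:

{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes"
    },
    "agentPoolProfiles": [
      {
        "name": "custompool",
        "count": 3,
        "vmSize": "Standard_D2_v3",
        "availabilityProfile": "VirtualMachineScaleSets",
        "imageReference": {
          "name": "my-custom-image",
          "resourceGroup": "my-image-rg"
        }
      }
    ]
  }
}

Running aks-engine generate against such a model produces ARM templates that you can then deploy.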
I have a Kubernetes cluster that I have stood up with Terraform in GCP. Now I want to deploy/run my Docker image on it. From the GCP console I would do this by going to the Workloads section of the Kubernetes Engine portion of the console and then selecting "Deploy a containerized application". However, I want to do this with Terraform, and I am having difficulty working out how to do it and finding good reference examples. Any examples of how to do this would be appreciated.
Thank you!
You need to do two things:
For managing workloads on Kubernetes, you can use this Kubectl Terraform provider.
For custom images that are present in a third-party registry, you'll need to create a Kubernetes secret of type Docker and then reference it in your manifests via the imagePullSecrets attribute. Check out this example.
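As a concrete illustration, here is a minimal Terraform sketch of both steps using the official kubernetes provider (an alternative to the Kubectl provider linked above); the registry address, credentials, and image name are hypothetical:

provider "kubernetes" {
  config_path = "~/.kube/config"
}

variable "registry_user" {
  type = string
}

variable "registry_password" {
  type      = string
  sensitive = true
}

# Docker-type secret holding the credentials for the third-party registry
# (registry.example.com is a hypothetical address).
resource "kubernetes_secret" "regcred" {
  metadata {
    name = "regcred"
  }

  type = "kubernetes.io/dockerconfigjson"

  data = {
    ".dockerconfigjson" = jsonencode({
      auths = {
        "registry.example.com" = {
          username = var.registry_user
          password = var.registry_password
          auth     = base64encode("${var.registry_user}:${var.registry_password}")
        }
      }
    })
  }
}

# A Deployment that pulls the private image using that secret.
resource "kubernetes_deployment" "app" {
  metadata {
    name = "my-app"
  }

  spec {
    replicas = 2

    selector {
      match_labels = {
        app = "my-app"
      }
    }

    template {
      metadata {
        labels = {
          app = "my-app"
        }
      }

      spec {
        image_pull_secrets {
          name = kubernetes_secret.regcred.metadata[0].name
        }

        container {
          name  = "my-app"
          image = "registry.example.com/my-app:1.0"
        }
      }
    }
  }
}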
I would like to create a VMware image based on a Linux ISO file. The whole thing should take place within a build pipeline, using Docker images.
My approach is to look for a suitable Docker image with a free ESXi server and use it as the basis in the build pipeline. Unfortunately, I cannot find such an image on Docker Hub.
Isn't this approach possible?
I would have expected that several people would have done this before me and that I could use an existing Docker image accordingly.
I work on a VM on Google Cloud for my machine learning work.
To avoid installing all the libraries and modules from scratch every time I create a new VM on GCP or elsewhere, I want to save the VM that I created on Google Cloud and store it on GitHub as a Docker image, so that next time I can just pull it, run it as a Docker container, and have my VM ready for work.
Is this a straightforward task? Any ideas on how to do that, please?
When you create a Compute Engine instance, it is built from an artifact called an "image". Google provides some OS images from which you can build. If you then modify these images by (for example) installing packages or performing other configuration, you can then create a new custom image based upon your current VM state.
The recipe for this task is fully documented within the Compute Engine documentation here:
https://cloud.google.com/compute/docs/images/create-delete-deprecate-private-images
Once you have created a custom image, you can instantiate new VM instances from these custom images.
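As a quick sketch of the flow those docs describe, using hypothetical instance and image names:

# Stop the source VM so its boot disk is in a consistent state.
gcloud compute instances stop ml-workbench --zone=us-central1-a

# Create a reusable custom image from the VM's boot disk.
gcloud compute images create ml-base-image \
    --source-disk=ml-workbench \
    --source-disk-zone=us-central1-a \
    --family=ml-base

# Boot a fresh VM from the custom image, with all packages already in place.
gcloud compute instances create ml-worker-1 \
    --image=ml-base-image \
    --zone=us-central1-a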
I have GKE and I need to use a customised Ubuntu image for the GKE nodes. I am planning to enable autoscaling, so I need TLS certificates installed on every node so that each node trusts my private Docker registry. This is possible to do manually on existing nodes, but when I enable autoscaling, the cluster will spin up new nodes, and image pulls on them will fail because Docker cannot trust the private registry hosted on my premises.
I have created a customised Ubuntu image and uploaded it to GCP. I then tried to create a GKE cluster with the nodes' OS image set to the image I created.
Do you know how to create a GKE cluster with a customised Ubuntu image? Has anyone dealt with a situation like this?
Node pools in GKE are based on GCE instance templates that can't be modified. That means you aren't allowed to set metadata such as startup scripts or base the nodes on custom images.
However, an alternative approach might be deploying a privileged DaemonSet that manipulates the underlying OS settings and resources.
It is important to mention that granting privileges to resources in Kubernetes must be done carefully.
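For example, here is a minimal sketch of such a privileged DaemonSet (all names and paths are hypothetical), in this case dropping a registry CA certificate onto every node so that Docker trusts an on-premises registry:

# node-setup.yaml: a privileged DaemonSet sketch; names and paths are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: node-setup
  template:
    metadata:
      labels:
        app: node-setup
    spec:
      containers:
        - name: node-setup
          image: ubuntu:20.04
          securityContext:
            privileged: true
          # Copy the CA cert to the host path Docker checks for this registry,
          # then sleep so the pod keeps running (and re-runs on new nodes).
          command: ["/bin/bash", "-c"]
          args:
            - cp /certs/registry-ca.crt /host-certs/ca.crt && sleep infinity
          volumeMounts:
            - name: host-certs
              mountPath: /host-certs
            - name: registry-ca
              mountPath: /certs
      volumes:
        # Docker trusts a registry whose CA lives under
        # /etc/docker/certs.d/<registry:port>/ca.crt on the host.
        - name: host-certs
          hostPath:
            path: /etc/docker/certs.d/registry.example.com:5000
            type: DirectoryOrCreate
        - name: registry-ca
          configMap:
            name: registry-ca   # created beforehand from the CA file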
You can add a custom node pool whose image is Ubuntu, set the special GCE instance metadata key startup-script on it, and put your customization there.
My advice, though, is to instead use startup-script-url and point it at a shell script stored in a bucket in the same project; GCE will download the script every time a new node is created and execute it on startup as root.
https://cloud.google.com/compute/docs/startupscript#cloud-storage
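For the private-registry scenario in the question, such a bucket-hosted script might look like the following sketch (the bucket, registry address, and file names are hypothetical):

#!/bin/bash
# Sketch of gs://my-bucket/node-init.sh: make Docker on the node trust an
# on-premises private registry by installing the registry's CA certificate.
set -euo pipefail

REGISTRY="registry.example.com:5000"        # hypothetical registry address
CERT_DIR="/etc/docker/certs.d/${REGISTRY}"

mkdir -p "${CERT_DIR}"
gsutil cp gs://my-bucket/registry-ca.crt "${CERT_DIR}/ca.crt"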
How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy an application in an LXD container to use a bash script that deploys it? How do you execute such a bash script inside the container by running a command on the host?
Are there any tools/methods of doing this in a similar way to Docker recipes?
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
You can also build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this from before Ansible had LXD support (Ansible 2.2). I prefer to use the ssh connection instead of the lxd connection when I connect to the containers to deploy services/code; the containers come with a profile where I have set up my SSH public key (to have direct SSH connection by keys, no passwords).
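A minimal sketch of such a playbook, assuming the lxd_container module that shipped with Ansible 2.2 (the container name, image alias, and nginx example are hypothetical, and the second play assumes an inventory entry named after the container):

# Build the container on the LXD host, then connect to it and deploy a service.
- hosts: localhost
  connection: local
  tasks:
    - name: Launch the web container from an Ubuntu image
      lxd_container:
        name: web
        state: started
        source:
          type: image
          mode: pull
          server: https://images.linuxcontainers.org
          protocol: simplestreams
          alias: ubuntu/20.04
        profiles: ["default"]
        wait_for_ipv4_addresses: true

# Second play: connect via the lxd plugin (ssh works the same way once a
# key-bearing profile is applied) and deploy the service.
- hosts: web
  connection: lxd
  tasks:
    - name: Install the web service
      apt:
        name: nginx
        state: present
        update_cache: true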
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build LXD image templates, including Apache, Tomcat, and HAProxy.
Scripts to demonstrate custom application image builds, such as Apache hosting key/value content, and HAProxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network.
Code to configure HAProxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing, driven by where it previously deployed applications and mapped ports.
At the higher level, it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, hooking them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades, using the layer 7 proxy's ability to gracefully bleed off old connections while accepting new connections at the new version.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once, and to reuse common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we can avoid duplicate code in any location. I have strived to keep image or template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how the other images would be mapped.
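The layering itself can be done with plain lxc commands; here is a minimal sketch of the first two layers (the aliases and the hardening step are hypothetical):

# Layer 1: an updated Ubuntu image.
lxc launch ubuntu:20.04 build-base
lxc exec build-base -- bash -c "apt-get update && apt-get -y upgrade"
lxc stop build-base
lxc publish build-base --alias ubuntu-updated

# Layer 2: a hardened image built from layer 1. Later layers (Apache, then
# the application image) are built the same way, each from the previous alias.
lxc launch ubuntu-updated build-hardened
lxc exec build-hardened -- bash -c "apt-get -y install fail2ban"
lxc stop build-hardened
lxc publish build-hardened --alias ubuntu-hardened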
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here for constructing a template:
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat and friends).
Using local_action in Ansible with the lxd connection still runs the action inside the container when using stat (but not with include_vars or the lookup function for files).
Using lots of debug messages in Ansible is helpful for working out which environment Ansible is actually operating in.
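A minimal sketch of that setup in Packer's HCL template format, assuming Packer's lxd builder and pushing the lxd connection through the Ansible provisioner's extra_arguments (the image, output alias, and playbook names are hypothetical):

# template.pkr.hcl: build an LXD image template and provision it with Ansible.
source "lxd" "ubuntu" {
  image        = "ubuntu:20.04"
  output_image = "my-template"
}

build {
  sources = ["source.lxd.ubuntu"]

  provisioner "ansible" {
    playbook_file = "./playbook.yml"
    # Run the plays over the lxd connection plugin instead of SSH,
    # as described in the notes above.
    extra_arguments = [
      "-e", "ansible_connection=lxd",
    ]
  }
}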
I'm surprised no one here has mentioned Canonical's own tool for managing LXD.
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge):
snap install juju --classic
juju bootstrap localhost
From there you can learn about Juju models and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu