I have a full-stack application running in a Minikube cluster (I'm using Docker as the Minikube driver), and it's working perfectly. Unfortunately, I'm still new to Kubernetes: how can I run this same cluster on a new machine? Right now I'm loading local Docker images of my app services into Minikube.
I came up with this method:

1. Push all of my app images to Docker Hub
2. Export all my services and deployments as YAML files (after modifying the image pull source)
3. Install Minikube on the new machine
4. Create a new cluster
5. Apply all the declaration files
6. Expose my frontend service
7. Access the app on localhost:someport
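For concreteness, a minimal sketch of that workflow (image names, the manifest directory, and the service name are placeholders):

    # On the current machine: push the images to Docker Hub
    docker push myuser/frontend:1.0
    docker push myuser/backend:1.0

    # On the new machine:
    minikube start --driver=docker
    kubectl apply -f k8s/               # directory holding the exported YAML files
    minikube service frontend --url     # prints the localhost:someport URL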
Any suggestions for an easier way to achieve this?
I am working on a web application with all the infrastructure based on Kubernetes. In my local environment, I am using Skaffold.
I have two computers (a Desktop and a Laptop) with 8 GB of RAM each. Starting minikube (VirtualBox driver) and skaffold dev freezes the Desktop.
So I decided to use the Laptop for coding and the Desktop for running minikube and everything related.
I successfully managed to set up kubeconfig on the laptop to have a context with the minikube server.
The issue is with Skaffold.
When I run skaffold dev, it fails because the Desktop's minikube doesn't see the images built by Skaffold on my laptop: kubectl get po returns ImagePullBackOff.
That is because Skaffold uses the local Docker daemon to build the images.
The question is: how do I make Skaffold use the Docker installation on my Desktop?
I changed the Docker context on my laptop so that it points to the Desktop, but it's still not working: Skaffold keeps using the default Docker context installed on my laptop.
How can I make the images built by Skaffold available on my Desktop?
Is it possible for Skaffold to use a remote Docker context? If yes, how?
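For reference, here's roughly what I tried on the laptop (user and hostname are placeholders):

    # Create a Docker context pointing at the Desktop's daemon over SSH
    docker context create desktop-docker --docker "host=ssh://user@desktop"
    docker context use desktop-docker
    skaffold dev    # still builds against the laptop's default daemon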
Minikube uses its own Docker installation to power its cluster. This daemon runs in Minikube's VM (or container, if you're using the Docker driver) and is completely independent of the host's Docker daemon (your Desktop). You can access Minikube's daemon by setting the environment variables returned by minikube docker-env.
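For example, running minikube docker-env on the machine that hosts Minikube prints the exports to apply (exact values vary by setup):

    minikube docker-env
    # export DOCKER_TLS_VERIFY="1"
    # export DOCKER_HOST="tcp://192.168.49.2:2376"
    # export DOCKER_CERT_PATH="/home/user/.minikube/certs"
    # export MINIKUBE_ACTIVE_DOCKERD="minikube"

    # Apply them to the current shell:
    eval $(minikube docker-env)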
I am fairly new to Docker and Kubernetes. Going through the Kubernetes docs, I read:
When using a single VM for Kubernetes, it’s useful to reuse Minikube’s built-in Docker daemon. Reusing the built-in daemon means you don’t have to build a Docker registry on your host machine and push the image into it. Instead, you can build inside the same Docker daemon as Minikube, which speeds up local experiments.
So, my understanding is that there are two Docker instances running on my local machine: one on macOS and the other in the VM.
Suppose I created an image using the Docker instance on my macOS, and then I want to use it on Kubernetes.
Question 1: Do I strictly need to create a local registry and then pull the image from within the Kubernetes cluster?
It further says,
To work with the Docker daemon on your Mac/Linux host, use the docker-env command in your shell: eval $(minikube docker-env)
Running this creates a few environment variables in the current shell.
Question 2: Will this be able to pull images that I built with the Docker instance on my macOS, without creating the local registry?
Minikube runs a single-node Kubernetes cluster inside a Virtual Machine (VM) on your laptop.
So it will create a k8s setup in a VM running on your macOS.
eval $(minikube docker-env)
Running this command on your macOS points your Docker CLI at Minikube's Docker daemon, so the docker commands you run from macOS execute against the VM.
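A quick way to see the switch take effect (output will vary):

    eval $(minikube docker-env)
    docker ps                          # now lists the containers inside the Minikube VM
    eval $(minikube docker-env -u)     # undo: point back at the host's Docker daemon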
Question 1: Do I strictly need to create a local registry and then pull the image from within the Kubernetes cluster?
No, you don't need to explicitly create a local registry, as everything runs in a single VM in Minikube.
Question 2: Will this be able to pull images that I built with the Docker instance on my macOS, without creating the local registry?
By switching the Docker env context on your host machine, the images you build land directly in Minikube's Docker daemon, so you don't need to create a registry for that. Remember that your macOS is not part of your k8s cluster; the cluster runs entirely in the single VM created by Minikube.
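Putting it together, a minimal sketch (image and pod names are placeholders):

    eval $(minikube docker-env)     # point the docker CLI at Minikube's daemon
    docker build -t my-app:dev .    # the image is built directly inside the VM
    kubectl run my-app --image=my-app:dev --image-pull-policy=Never   # no pull needed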
I have an application which consists of 2 Docker containers. Both are small and need to interact with each other quite often through a REST API.
How can I deploy both of them to a single Virtual Machine in Google Cloud?
Usually, when creating a virtual machine, I get to choose a container image to deploy ("Deploy a container image to this VM instance").
I can specify one of my images and get it running in the VM. Can I specify multiple images?
You cannot deploy multiple containers per VM.
Please consider these limitations when deploying containers on VMs:

1. You can only deploy one container for each VM instance. Consider Google Kubernetes Engine if you need to deploy multiple containers per VM instance.
2. You can only deploy containers from a public repository or from a private repository at Container Registry. Other private repositories are currently not supported.
3. You can't map a VM instance's ports to the container's ports (Docker's -p option).
4. You can only use Container-Optimized OS images with this deployment method. You can only use this feature through the Google Cloud Platform Console or the gcloud command-line tool, not the API.
You can use docker-compose to deploy multi-container applications.
To achieve this on Google Cloud, you'll need:
- SSH access to the VM
- Docker and docker-compose installed on the VM
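A sketch of those steps (VM name, zone, and package names are placeholders; Debian/Ubuntu assumed):

    # SSH into the VM
    gcloud compute ssh my-vm --zone=us-central1-a

    # On the VM: install Docker and docker-compose
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose

    # Start both containers defined in docker-compose.yml
    sudo docker-compose up -d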
I am currently trying to port a service across to ASP.NET 1.0 and get it up and running in a local Kubernetes cluster, or even a single node (Kubernetes master and one minion). I have successfully managed the first part and had my service running in Kestrel using Docker, within both a Boot2Docker VM and CentOS 7. I am now trying to get my container up and running in Kubernetes. I have been trawling Google for a guide to doing this, and everywhere I turn it seems a rather convoluted task. Has anyone else achieved this, and do you have any useful guides/links?
You are on the right path, just a few additional steps:
1. Package your app into a Docker image: use the aspnet base image and add your code (https://hub.docker.com/r/microsoft/aspnet/)
2. Push your image up to a Docker repo
3. Deploy that image to your cluster
The basic rule of thumb is: just get your app dockerized, then you can run it in k8s.
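A minimal sketch of those steps (image name, repo, and port are placeholders):

    docker build -t myrepo/my-service:1.0 .     # package the app into an image
    docker push myrepo/my-service:1.0           # push it to a docker repo
    kubectl create deployment my-service --image=myrepo/my-service:1.0
    kubectl expose deployment my-service --type=NodePort --port=5000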
I started a Kubernetes master and minion on my local machine using Vagrant. I can create a JSON file for my Kubernetes pod where I can start several public containers.
However, one Docker image is a local one, built on top of java:8-jdk and configured with a Dockerfile.
How can I reference this local Docker image in the Kubernetes pod JSON so Kubernetes can run it?
In other words, does Kubernetes support docker build ;)
After you build the docker image, you can "side-load" it into your locally available images by running docker load -i /path/to/image.tar. Once you've done this, Kubernetes will be able to load the image without reaching out to an external hub.
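A minimal sketch, assuming the image was built on another machine (image name and tar path are placeholders):

    docker save -o my-service.tar my-service:latest   # export on the build machine
    docker load -i my-service.tar                     # side-load on the Kubernetes node

    # In the pod JSON, reference the image and disable pulling:
    #   "image": "my-service:latest",
    #   "imagePullPolicy": "Never"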