Run a web page similar to kubernetes dashboard - docker

I want to run a web page similar to the Kubernetes Dashboard. The page takes input from the user and generates a small file, but I want the page to be served without my having to set up a server. Kubernetes deploys a pod and brings up the web page, and I want to do the same. If Kubernetes is itself using a server, how is it doing so (is the server downloaded along with the OS in the pod, or how does Kubernetes handle it)?
Overview: I want to know how the Kubernetes Dashboard gets deployed. Is it using a server? If so, how does that server get installed in the Kubernetes pod? If not, how does it bring up the UI?

Actually, Kubernetes plays the role of an orchestrator: it provides ways to build communication channels between containers in the cluster, and it uses Docker by default as the container runtime.
Containers are the run-time environment for images, and images consist of an OS layer plus application binaries (a good explanation can be found here). To build your own image you have two main options: create it from an existing image on Docker Hub, or compose it from a Dockerfile. To store the customized image, you can push it to a Docker Hub repository or keep it in a private, isolated repo by deploying a Registry server.
When the image is ready and you plan to run the application in a Kubernetes cluster, that's a good time to create your first microservice. Although there are tons of materials about Kubernetes cluster and runtime engine architecture out there, I will focus on the application deployment lifecycle.
Deployment is the main mechanism that defines how Pods should be run within a cluster and provides the configuration for the application's run-time workflow.
Service describes how a particular Pod communicates with other resources within the cluster, providing the endpoint IP address and port where your application will respond.
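To make this concrete, here is a minimal sketch of the two objects, assuming a hypothetical image my-dashboard:1.0 that bundles a web server with the UI files (names and ports are placeholders, not the actual Dashboard manifests):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dashboard
  template:
    metadata:
      labels:
        app: my-dashboard
    spec:
      containers:
      - name: ui
        image: my-dashboard:1.0    # hypothetical image: web server + static UI files
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-dashboard
spec:
  selector:
    app: my-dashboard
  ports:
  - port: 80           # port the Service listens on inside the cluster
    targetPort: 8080   # port the container actually serves

This also answers the original question: the UI is served by an ordinary web server baked into the container image, not installed into the pod's OS afterwards.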
In the usual Kubernetes Dashboard scenario, the access method, kubectl proxy, exposes the application by proxying between the host and the Kubernetes API. This is more for testing purposes and is not secure, in comparison with the NodePort Service type, which is a more convenient way to make an application accessible from outside the cluster, as described in this Stack thread.
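For example (a sketch; the exact proxy path depends on the namespace and Service name of the Dashboard version you deploy):

kubectl proxy
# any in-cluster Service is then reachable through the API server, e.g.:
# http://localhost:8001/api/v1/namespaces/<namespace>/services/<service-name>/proxy/

With a NodePort Service you would instead set type: NodePort in the Service spec and browse to <node-ip>:<allocated-port>.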
I encourage you to keep learning from the official Kubernetes documentation.

Related

Kubernetes - from Minikube to production

I have created a simple PHP API application that works with a MySQL database to store data. I have been experimenting with Kubernetes on my Windows 10 machine through Minikube.
I have just about got my head round the ideas involved, yet I'm not sure how to implement this properly. So far I have used Kompose to create a set of YAML files from an existing docker-compose file. This has been only half successful.
To get my application code into a pod hosting PHP, I have been using hostPath to share from my local machine: I mount into the Minikube VM and share from there. I was having trouble sharing by other means. The application code is hosted in a GitHub repo.
My questions are:
Is mounting my application code into a pod (assuming this is similar to what happens in Docker) the correct way to do this? I'm not clear exactly what information is held in an image retrieved from Docker Hub, although I have read up on containers isolating the build environment from your machine.
How does this approach translate into a production environment hosted on a cloud? I see there are various storage types. I had, for example, wanted to try deploying on AWS just to see how this would work in practice.
I'm really looking for guidance on going from the tutorials found on the web, working on my machine, to something that could be hosted on the cloud for a customer. This might scale up to a more microservices-style architecture over time.
The approach you are describing is mostly for development setups, where you mount your code into the container as a volume so you don't have to rebuild every time your code changes. This is typically done with a docker-compose file.
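For illustration, such a development-only mount in a docker-compose file might look like this (the paths are made up; php:7.2-apache is just an example base image):

version: "3"
services:
  php:
    image: php:7.2-apache
    volumes:
      - ./src:/var/www/html   # mount local code so edits show up without a rebuild
    ports:
      - "8080:80"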
For production setups, you want the Docker image to work on its own and to mount volumes only for data you want to persist; databases are the core example. For this, EKS is deeply integrated with the AWS infrastructure and will create EBS volumes on demand. You don't need to provision any volume or even think about it in most cases (unless you need multiple read-write volumes for scaling).
For a PHP application you really should not persist any data in the pod, because it will create problems when you need to scale the application. A good approach for managing files that need to persist is S3 (AWS Simple Storage Service).
So generally speaking, you need one Deployment per application, a Service to access the pods of that application, and then an Ingress object to route traffic from the internet to them.
Your application's Docker image is really the core: you build it with your code inside. Make sure to pass configuration using environment variables or a configuration file so the application can connect to the database.
Now for Kubernetes: for each component (e.g. the PHP application, MySQL) you will most likely create a Deployment manifest that points to the Docker image and adds some configuration environment variables.
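For example, such a Deployment could wire in the database settings like this (a sketch with made-up names; in practice the password would live in a Secret object, as shown):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: php-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: php-app
  template:
    metadata:
      labels:
        app: php-app
    spec:
      containers:
      - name: php-app
        image: myrepo/php-app:1.0        # hypothetical image with your code baked in
        env:
        - name: DB_HOST
          value: mysql                   # DNS name of the MySQL Service
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-credentials    # hypothetical Secret holding the password
              key: password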
For production, you will need a persistent volume; on AWS you can simply use EBS-backed volumes.
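For example, a PersistentVolumeClaim for the MySQL data might look like the sketch below; on EKS the default StorageClass typically provisions an EBS volume on demand, and the MySQL pod references the claim by name in its volumes section (names are placeholders):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data
spec:
  accessModes:
  - ReadWriteOnce     # an EBS volume can be mounted read-write by a single node
  resources:
    requests:
      storage: 10Gi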
To get traffic from the Internet to your PHP application, you will need to add one or more k8s components:
A k8s Service manifest that exposes your PHP Deployment's pods at a stable address. If you only have one or very few services, you can use the LoadBalancer type, which on a cloud like AWS will create an ALB/ELB (you might need to add an annotation to your Service).
An Ingress, which is just a reverse proxy (Contour, nginx, Traefik). In a cloud environment it will map to an ALB/ELB. The advantage of this is that you can have a single ALB for all your services, i.e. save money. You can also configure routing paths and TLS termination in one place.
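As a sketch, the Service plus Ingress pair for the PHP app could look like this (the hostname is a placeholder, and the Ingress apiVersion depends on your cluster version):

apiVersion: v1
kind: Service
metadata:
  name: php-app
spec:
  selector:
    app: php-app      # matches the pods of the php-app Deployment
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-app
spec:
  rules:
  - host: app.example.com       # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: php-app
            port:
              number: 80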

Not able to connect to a container (created via REST API) in Kubernetes

I am creating a Docker container (using docker run) in a Kubernetes environment by invoking a REST API.
I have mounted the docker.sock of the host machine, and I am building an image and running it from the REST API.
Now I need to connect to this container from another container, which was actually started by kubectl from a deployment.yml file.
But when I use kubectl describe pod <pod name>, my container created using the REST API is not there. So where is this container running, and how can I connect to it from another container?
Are you running the container in the same namespace as the one used in deployment.yml? One way to check would be to run:
kubectl get pods --all-namespaces
If you are not able to find the docker container there, then I would suggest performing the steps below:
docker ps -a (to verify the container is actually running)
Ensure that there are no permission errors while mounting docker.sock
If there are permission errors, escalate privileges to the appropriate level
To answer the second question: connecting two containers should be possible by referencing the cluster DNS name in the format below:
"<servicename>.<namespacename>.svc.cluster.local"
I would also ask you to detail the steps, code, and errors (if there are any) so I can better answer the question.
You probably shouldn't be directly accessing the Docker API from anywhere in Kubernetes. Kubernetes will be totally unaware of anything you manually docker run (or equivalent), and, as you note, normal administrative calls like kubectl get pods won't see it; the CPU and memory used by the container won't be accounted for by the node, which could cause the node to become over-utilized. The Kubernetes network environment is also pretty complicated, and unless you know the details of your specific CNI provider, it will be hard to make your container accessible at all, much less from a pod running on a different node.
A process running in a pod can access the Kubernetes API directly, though. That page notes that all of the official client libraries are aware of the conventions this uses. This means that you should be able to directly create a Job that launches your target pod, and a Service that connects to it, and get the normal Kubernetes features around this. (For example, servicename.namespacename.svc.cluster.local is a valid DNS name that reaches any Pod connected to the Service.)
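A minimal Job manifest, which your in-pod process could create through the Kubernetes API instead of talking to Docker directly, might look like this sketch (the image and names are placeholders for whatever you were docker-running):

apiVersion: batch/v1
kind: Job
metadata:
  name: build-task-123        # e.g. one Job per incoming REST request
spec:
  backoffLimit: 2             # retry a failed pod up to twice
  template:
    spec:
      restartPolicy: Never    # Job pods must use Never or OnFailure
      containers:
      - name: worker
        image: myrepo/worker:1.0    # placeholder for your dynamically built image
        args: ["--input", "task-123"]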
You should also consider whether you actually need this sort of interface. For many applications, it will work just as well to deploy some sort of message-queue system (e.g., RabbitMQ) and then launch a pool of workers that connect to it. You can control the size of the worker pool using a Deployment. This is easier to develop, since it avoids a hard dependency on Kubernetes, and easier to manage, since it prevents a flood of dynamic jobs from overwhelming your cluster.

Container Orchestration for provisioning single containers based on user action

I'm pretty new to Docker orchestration and managing a fleet of containers. I want to build an app that gives the user a container when they run a command. What are the best tool and the best way to accomplish this?
I plan on having a pool of CoreOS servers to run the containers on, and I imagine the scheduler having an API that I can simply call to create the container.
Most of what I have seen with Nomad, Kubernetes, Docker Swarm, etc. is how to provision clusters of containers that all do the same thing. I want to be able to create a single container based on a user's command and then communicate with an API on that container. Does anyone have experience with this?
I'd look at Kubernetes plus the Jobs API (for short-lived containers) or Deployments (for long-lived ones).
I'm not sure exactly what you mean by command, but I'll assume it's some sort of dev environment triggered by a CLI, say make-dev.
The user triggers make-dev, which sends a webhook to your app sitting in front of the Jobs API, ideally doing rate-limiting and/or auth.
Your app takes the command, sanity-checks it, then fires off a Job/Deployment request plus an Ingress rule and a Service (a sketch of the Ingress rule follows below).
Kubernetes schedules it across your fleet of machines.
Your app waits for the pod to start, then returns the address of the API with a unique identifier (the same one used in the Ingress rule), like devclusters.com/foobar123.
The user now accesses their service at that address. Internally, Kubernetes uses the Ingress and Service to route the requests to the pod.
This should scale well, and if your different environments use the same base container image, they should start really fast.
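The per-user Ingress rule mentioned above could be as simple as this sketch (the host and the foobar123 identifier are the hypothetical values from the example, and the Service name is assumed to be generated alongside it):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dev-foobar123
spec:
  rules:
  - host: devclusters.com
    http:
      paths:
      - path: /foobar123            # the unique identifier returned to the user
        pathType: Prefix
        backend:
          service:
            name: dev-foobar123     # Service created along with the Job/Deployment
            port:
              number: 80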
Plug: If you want an easy CoreOS + Kubernetes cluster plus a UI try https://coreos.com/tectonic
I plan on having a pool of CoreOS servers to run the containers on and I'm imagining the scheduler to have an API that I can just call to create the container
Kubernetes comes with a RESTful API that you can use to directly create pods (the unit of work in Kubernetes, which contains one or more containers) within your cluster.
The command-line utility kubectl also interacts with the cluster in exactly the same way, via the API. There are client libraries written in Go, Java, and Python at the moment, with others on the way, to help communicate with the cluster.
If you later want a higher-level abstraction to manage pods, update them, and manage their lifetimes, look at one of the controllers (ReplicaSet, ReplicationController, Deployment, StatefulSet).
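As a sketch of the raw API usage: kubectl proxy gives you an authenticated local endpoint, and you can POST a pod manifest straight at it (the manifest and names below are placeholders):

# pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: user-task-1
spec:
  containers:
  - name: main
    image: nginx:alpine    # placeholder for the container handed to the user

kubectl proxy --port=8001 &
curl -X POST http://localhost:8001/api/v1/namespaces/default/pods \
  -H "Content-Type: application/yaml" \
  --data-binary @pod.yaml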

What is the difference between kubernetes and GKE?

I know that GKE is driven by Kubernetes underneath, but I still don't get which part is taken care of by GKE and which by k8s in the layering. The main purpose of both, as it appears to me, is to manage containers in a cluster. Basically, I am looking for a simple explanation with an example.
GKE is managed/hosted Kubernetes (i.e. it is managed for you, so you can concentrate on running your pods/containers/applications).
Kubernetes handles:
Running pods, scheduling them on nodes, and guaranteeing the number of replicas per the ReplicationController settings (i.e. relaunching pods if they fail, relocating them if a node fails)
Services: proxying traffic to the right pod, wherever it is located.
Jobs
In addition, there are several 'add-ons' to Kubernetes, some of which are part of what makes GKE:
DNS (you can't really live without it, even though it's an add-on)
Metrics monitoring: with InfluxDB and Grafana
Dashboard
None of these work out of the box; although they are fairly easy to set up, you need to maintain them.
There is no real 'logging' add-on, but there are various projects to do this (using Logspout, Logstash, Elasticsearch, etc.).
In short, Kubernetes does the orchestration; the rest are services that run on top of it.
GKE brings you all these components out of the box, and you don't have to maintain them. They're set up for you, and they're more integrated with the Google portal.
One important thing that everyone needs is the LoadBalancer part:
- Since Pods are ephemeral containers that can be rescheduled anywhere and at any time, they are not static, so ingress traffic needs to be managed separately.
This can be done within Kubernetes by using a DaemonSet to pin a Pod to specific nodes and using a hostPort for that Pod to bind to the node's IP.
Obviously this lacks fault tolerance, so you could use multiple nodes and do DNS round-robin load balancing.
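A sketch of that DaemonSet approach, e.g. running an edge proxy bound to port 80 on each selected node (names and image are illustrative; add a nodeSelector to restrict it to specific nodes):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
          hostPort: 80     # binds to the node's IP, so traffic can hit the node directly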
GKE takes care of all this too with external Load Balancing.
(On AWS it's similar, with an ELB/ALB taking care of load balancing in front of Kubernetes.)
GKE (Google Container Engine) is just a container platform that Kubernetes manages; it is not a Kubernetes-like system with "differences".
As mentioned in "Docker and Kubernetes and AppC" (May 2015; this can change):
Docker is currently the only supported runtime in GKE (Google Container Engine) our commercial containers product, and in GAE (Google App Engine), our Platform-as-a-Service product.
You can see Kubernetes used on GKE in this example: "Spinning Up Your First Kubernetes Cluster on GKE" from Rimantas Mocevicius.
The gcloud API still issues Kubernetes commands behind the scenes.
GKE organizes its platform through the Kubernetes master:
Every container cluster has a single master endpoint, which is managed by Container Engine.
The master provides a unified view into the cluster and, through its publicly-accessible endpoint, is the doorway for interacting with the cluster.
The managed master also runs the Kubernetes API server, which services REST requests, schedules pod creation and deletion on worker nodes, and synchronizes pod information (such as open ports and location) with service information.
In short, without getting into technical details,
GKE is managed Kubernetes, similar to how Google's Cloud Composer is managed Apache Airflow and Cloud Dataflow is managed Apache Beam.
So, some of Google Cloud Platform's services (GKE, Cloud Composer, Cloud Dataflow) are managed implementations of various open source technologies (Kubernetes, Airflow, Beam).

Container delivery on amazon ecs

I’m using Amazon ECS to auto deploy my containers on uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API container together with nginx in the same container?
And do the same thing with the other front-end clients?
Or do I have to write a big task definition to bring together all my containers (db, nginx, php, api, clients)? That would mean redeploying my whole infrastructure on each push to uat/prod :(
I'm very confused.
I would avoid including too much in a single container. Try to distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put in the essential pieces for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are, the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx: you can point your ECS cluster at an ELB and not have to deal with that piece at all.
Do not use ECS; it's too crude. I was using it as the platform for our staging/production environments and had odd problems during deployments: sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option: the Docker Cloud service. It's a new tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a Node. Select the Amazon EC2 instance type and the storage, security group, and other parameters. The new instance will contain the installed Docker software and a managing container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options like deployment strategy; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1. (A sketch follows these steps.)
Define ELB configurations in AWS for your new instances.
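A minimal Stackfile sketch from memory (the service name and image are made up, and the exact option names should be checked against the reference linked above):

api:
  image: myuser/api:latest                 # image pushed to Docker Hub or your registry
  ports:
    - "80:8080"
  target_num_containers: 2                 # how many containers to run for this service
  deployment_strategy: high_availability   # spread the containers across nodes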
P.S. I'm not a member of Docker team and I like other AWS services :).
Here are my two cents on the topic. The question is not really related to ECS; it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and the API container. This will allow the nginx process to talk to the API container without the API container exposing its ports to the host.
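On Kubernetes, the same co-location looks like the sketch below: both containers share the pod's network namespace, so nginx reaches the API over localhost without the API port being exposed on the host (the images and port are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
  - name: nginx
    image: nginx:alpine        # proxies requests to the API over localhost
    ports:
    - containerPort: 80
  - name: api
    image: myuser/api:latest   # placeholder; listens on localhost:9000, not exposed to the host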
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container runs all the time and dynamically restart one if its process/container goes down.
Separating the containers allows these platforms to monitor each process separately. When you combine the two into one container, the Docker container can run with only one of the processes in the foreground, so you lose the advantage of auto-healing for the other process.
Also, moving from nginx to an ELB is not a straightforward swap; you may have redirections and other things configured on nginx that are not available on the ELB (as of this writing).
If you also need the ELB, there is no harm in forwarding requests from the ELB to the nginx port.
