Access Google Cloud TPUs in a self-managed k8s cluster (Docker)

Is it possible to access Google Cloud TPU resources in a self-managed k8s cluster (not GKE)? Is there a plugin of any sort for accessing TPU resources from within Docker containers?

Cloud TPU support is built into GKE: a custom resource was defined, with separate control logic to handle that resource. This code is built into GKE, so if you wanted to self-manage a k8s cluster, you'd probably have to write it yourself. Personally, my recommendation would be to use TPUs through GKE, as they're best supported that way.
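For reference, on GKE that custom resource surfaces as an extended resource in the pod spec. A sketch of what a TPU-requesting pod looked like there (the resource name, annotation, and image are taken from GKE's documentation of the time and may have changed; a self-managed cluster would need its own controller to satisfy such a request, which is the control logic referred to above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: tpu-job
  annotations:
    # GKE used an annotation like this to select the TensorFlow version for the TPU.
    tf-version.cloud-tpus.google.com: "1.13"
spec:
  restartPolicy: Never
  containers:
  - name: trainer
    image: gcr.io/my-project/my-trainer:latest  # hypothetical training image
    resources:
      limits:
        # The Cloud TPU extended resource satisfied by GKE's control logic.
        cloud-tpus.google.com/v2: 8
```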

Related

Which is the better way to install Jenkins: Docker or Kubernetes?

I am new to DevOps and want to install Jenkins. Of all the installation options provided in the official documentation, which one should I use? I have narrowed it down to Docker or Kubernetes. The parameters for my decision are below:
Portability - can be installed on any major OS or cloud provider.
Minimal changes to move to production.
Kubernetes is a container orchestrator that may use Docker as its container runtime. So, they are quite different things—essentially, different levels of abstraction.
You could theoretically run an application at both of these abstraction levels. Here's a comparison:
Docker
You can run an application as a Docker container on any machine that has Docker installed (i.e. any OS or cloud provider instance that supports Docker). However, you would need to implement any production-relevant operational features, such as health checks, replication, and load balancing, yourself.
Kubernetes
Running an application on Kubernetes requires a Kubernetes cluster. You can run a Kubernetes cluster either on-premises, in the cloud, or use a managed Kubernetes service (such as Amazon EKS, Google GKE, or Azure AKS). The big advantage of Kubernetes is that it provides all the production-relevant features mentioned above (health checks, replication, load balancing, etc.) as part of the platform. So, you don't need to implement them yourself but just use the primitives that Kubernetes provides to you.
Regarding your two requirements, Kubernetes provides both of them, while using Docker alone does not provide easy production-readiness (requirement 2). So, if you're opting for production stability, setting up a Kubernetes cluster is certainly worth the effort.
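To illustrate those primitives, here is a minimal (illustrative, not production-ready) Kubernetes Deployment for Jenkins that declares replication and a health check instead of implementing them:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins
spec:
  replicas: 1                # Kubernetes keeps this many pods running
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts
        ports:
        - containerPort: 8080
        livenessProbe:       # health check handled by the platform
          httpGet:
            path: /login
            port: 8080
          initialDelaySeconds: 60
```

With plain Docker, the equivalent restart-on-failure and health-check behavior would have to be scripted around the container yourself.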

Does docker-compose have something similar to service accounts and kubernetes-client library?

By creating service accounts for pods, it is possible to access the Kubernetes APIs of the whole cluster from any pod. Kubernetes client libraries implemented in different languages make it possible to have a pod in the cluster that serves this purpose.
Does docker-compose have something similar to this? My requirement is to control the life cycle (create, list, scale, destroy, restart, etc.) of all the services defined in a compose file. As far as I've searched, no such feature is available for Compose.
Or does docker-swarm provide any such features?
Docker provides an API which can be used to interact with the daemon. In fact, that is exactly what docker-compose uses to achieve its functionality.
Docker does not provide fine-grained access control like Kubernetes does, though. But you can mount the Docker socket into a container and make use of the API. A good example of that is Portainer, which provides a web-based UI for Docker.
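A sketch of that socket-mounting approach in a compose file (ports and image tag are illustrative; note that anything with access to the socket effectively has root on the host, which is exactly the coarse-grained access control mentioned above):

```yaml
version: "3"
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "9000:9000"
    volumes:
      # Mounting the daemon socket gives this container full Docker API access,
      # letting it create, list, restart, and remove other containers.
      - /var/run/docker.sock:/var/run/docker.sock
```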

What is the simplest reasonable Kubernetes setup?

I'm interested in getting started with Kubernetes, but my needs are simple and it does not look simple. I have a number of containerized applications that I deploy to container servers. I use nginx as a reverse proxy to expose these applications.
As far as I can tell, Kubernetes is meant to simplify management of setups like this. But I'm not sure the setup investment is worth it, given that I only realistically need one instance of each app running.
What is the simplest reasonable Kubernetes setup that I can deploy a few containerized applications to?
EDIT: If I start using Kubernetes, it will be using only on-site servers. The applications in question are ones I’ve developed for my employer, who requires that everything stays on-site.
On a developer's machine, you should use minikube.
On Azure / Google / Amazon, etc., you should use a managed Kubernetes service.
On-prem, you should deploy Kubernetes with your own setup:
3.1. https://github.com/kelseyhightower/kubernetes-the-hard-way
3.2. with kubeadm
3.3. with Ansible scripts like Kubespray
If you choose the kubeadm installation, you should also use kubeadm when upgrading the cluster. The best way to deploy on-prem is using kubeadm, Kubespray, or automating it with Pivotal's BOSH scripts.
As you want to get started with Kubernetes, I assume you want to set it up for local development; minikube is the best candidate for this purpose. You can also take a look at the interactive tutorials on the official Kubernetes website; I find them very helpful.
Take a look at this opinionated cluster setup, k8s-snowflake, and deploy it somewhere like Azure or Google Compute.
It's a great exercise to figure out how Kubernetes clusters work at a low level, but when you're ready to go to production, take a look at Google's Container Engine or AWS's Elastic Container Service. These ease the management of clusters immensely and expose all the other benefits of the cloud platform to your Kubernetes workloads.
Well, according to the previous answers, you should start with minikube on your machine.
Regarding further dev/test/staging/prod deployment, it depends. There are a couple of solutions:
use clusters provided by Google, Azure, or AWS (AWS EKS is not ready yet - https://aws.amazon.com/eks/)
the hard way - set up your own cluster on EC2 machines or similar
use tools like Rancher - some additional abstraction over k8s for an easy start; from version 2.0, k8s will be the default orchestration mechanism for Rancher
Update 31-01-2018:
regarding the hard way - there are, of course, some tools which help with that approach, like Helm
The Kubernetes docs provide an excellent page to choose between the different options to setup kubernetes. The page is found under Picking the Right Solution.
If you want a simple way for setting a kubernetes cluster on premise, check kubeadm.

What is the difference between kubernetes and GKE?

I know that GKE is driven by Kubernetes underneath. But what I still don't get is which part is handled by GKE and which by k8s in the layering. The main purpose of both, as it appears to me, is to manage containers in a cluster. Basically, I'm looking for a simpler explanation with an example.
GKE is managed/hosted Kubernetes (i.e. it is managed for you, so you can concentrate on running your applications in pods/containers).
Kubernetes does handle:
Running pods, scheduling them on nodes, and guaranteeing the number of replicas per the ReplicationController settings (i.e. relaunching pods if they fail, relocating them if a node fails)
Services: proxy traffic to the right pod wherever it is located.
Jobs
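For example, the Service abstraction from the list above is just a small declaration (names and ports here are illustrative); Kubernetes then proxies traffic to whichever nodes the matching pods land on:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app        # traffic is proxied to pods carrying this label
  ports:
  - port: 80           # port the Service exposes inside the cluster
    targetPort: 8080   # port the pod's container actually listens on
```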
In addition, there are several 'add-ons' to Kubernetes, some of which are part of what makes GKE:
DNS (you can't really live without it, even though it's an add-on)
Metrics monitoring: with InfluxDB and Grafana
Dashboard
None of these are available out of the box; although they are fairly easy to set up, you need to maintain them.
There is no real 'logging' add-on, but there are various projects to do this (using Logspout, Logstash, Elasticsearch, etc.)
In short Kubernetes does the orchestration, the rest are services that would run on top of Kubernetes.
GKE brings you all these components out of the box, and you don't have to maintain them. They're set up for you, and they're more 'integrated' with the Google portal.
One important thing that everyone needs is the LoadBalancer part:
- Since pods are ephemeral containers that can be rescheduled anywhere at any time, they are not static, so ingress traffic needs to be managed separately.
This can be done within Kubernetes by using a DaemonSet to fix a Pod on a specific node, and use a hostPort for that Pod to bind to the node's IP.
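A minimal sketch of that DaemonSet-plus-hostPort pattern (image, names, and the node label are hypothetical):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-proxy
spec:
  selector:
    matchLabels:
      app: edge-proxy
  template:
    metadata:
      labels:
        app: edge-proxy
    spec:
      nodeSelector:
        role: edge         # pins the pods to specifically labeled nodes
      containers:
      - name: proxy
        image: nginx:stable
        ports:
        - containerPort: 80
          hostPort: 80     # binds directly to each node's IP on port 80
```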
Obviously this lacks fault tolerance, so you could run multiple such pods and do DNS round-robin load balancing across the nodes.
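The round-robin idea itself is simple; a toy Python sketch of what a DNS round-robin record effectively does with a set of node IPs (the addresses are made up):

```python
from itertools import cycle

# Hypothetical node IPs sitting behind a single DNS name.
node_ips = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# cycle() rotates through the list forever, like successive round-robin DNS answers.
picker = cycle(node_ips)

first_six = [next(picker) for _ in range(6)]
print(first_six)
# Each node is hit in turn, then the rotation wraps around.
```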
GKE takes care of all this too with external Load Balancing.
(On AWS, it's similar, with ALB taking care of load balancing in Kubernetes)
GKE (Google Container Engine) is simply a container platform managed by Kubernetes. It is not a Kubernetes-like system with "differences".
As mentioned in "Docker and Kubernetes and AppC " (May 2015, that can change):
Docker is currently the only supported runtime in GKE (Google Container Engine) our commercial containers product, and in GAE (Google App Engine), our Platform-as-a-Service product.
You can see Kubernetes used on GKE in this example: "Spinning Up Your First Kubernetes Cluster on GKE" from Rimantas Mocevicius.
The gcloud CLI still issues Kubernetes commands behind the scenes.
GKE will organize its platform through Kubernetes master
Every container cluster has a single master endpoint, which is managed by Container Engine.
The master provides a unified view into the cluster and, through its publicly-accessible endpoint, is the doorway for interacting with the cluster.
The managed master also runs the Kubernetes API server, which services REST requests, schedules pod creation and deletion on worker nodes, and synchronizes pod information (such as open ports and location) with service information.
In short, without getting into technical details,
GKE is managed Kubernetes, similar to how Google's Cloud Composer is managed Apache Airflow and Cloud Dataflow is managed Apache Beam.
So, some of Google Cloud Platform's services (GKE, Cloud Composer, Cloud Dataflow) are managed implementations of various open source technologies (Kubernetes, Airflow, Beam).

What is the difference between Docker Swarm and Kubernetes/Mesosphere?

From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they differ. Is Docker Swarm analogous to the POSIX API in the Docker world, while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Disclosure: I'm a lead engineer on Kubernetes
Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google, built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running, and managing distributed systems.
Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
Swarm is a very simple add-on to Docker and currently does not provide all the features of Kubernetes. It is hard to predict how the ecosystem of these tools will play out; it's possible that Kubernetes will make use of Swarm.