GCE persistent disk, Kubernetes, and data persistence

I'm having quite a bit of fun with the gcePersistentDisk in the context of pods inside of Kubernetes.
Currently I'm attempting to get my rethinkdb cluster to work well with a gcePersistentDisk mounted volume in order to facilitate backups, data recovery, data integrity, etc. This is proving a bit more difficult than I originally anticipated. So, I have a few questions:
1: Should I even be attempting to use the gcePersistentDisk for this use case? Or should I be using persistentVolumes, and using the file system/persistentVolumes on my host kubelets in order to persist the data, and only using gcePersistentDisk when I'm doing a backup?
2: [EDIT: FIGURED OUT]
3: Pretty sure this is just a bug, but if you attempt to scale up a pod with a gcePersistentDisk mounted as a volume, it does not throw the usual error:
'The ReplicationController "rethinkdb" is invalid:spec.template.spec.volumes.GCEPersistentDisk.ReadOnly: invalid value 'false': ReadOnly must be true for replicated pods > 1, as GCE PD can only be mounted on multiple machines if it is read-only.'
Instead it just hangs on the command line, and the kubelet's logs show it looping forever.
4: Am I going completely in the wrong direction for solving this issue? And if so, how do I persist the DB data from my pods?

Unfortunately I don't know anything about rethinkdb, but it's very reasonable to use a gcePersistentDisk to store the data. That way if the Kubernetes node running your pod dies, the pod can be restarted on another node (assuming you have more than one node in your Kubernetes cluster) and continue to access the data when it comes back up. I don't think there's any reason you need to use persistent volumes here; straight-up GCEPersistentDisk as the VolumeSource should be fine.
I'm not sure why you're losing your data when you scale the RC down to 0 and back up to 1. My understanding is that the PD should be re-mounted.
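For illustration, here is a minimal sketch of a ReplicationController that mounts the PD directly as a volume source. The disk name and mount path are placeholders; the PD must already exist in the same GCE zone as the nodes, and replicas must stay at 1 while the disk is mounted read-write:

apiVersion: v1
kind: ReplicationController
metadata:
  name: rethinkdb
spec:
  replicas: 1                 # must stay at 1 while the PD is mounted read-write
  template:
    metadata:
      labels:
        app: rethinkdb
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb
        volumeMounts:
        - name: rethinkdb-data
          mountPath: /data            # RethinkDB's default data directory
      volumes:
      - name: rethinkdb-data
        gcePersistentDisk:
          pdName: my-rethinkdb-disk   # placeholder: a pre-created GCE PD in the same zone
          fsType: ext4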

Related

How do I know when I should use a stateless pod or a stateful one?

I am somewhat new to Kubernetes and Docker, and I have been studying the concepts of statelessness and statefulness. I understand that stateless microservices don't store data on the host, whereas stateful microservices require some kind of storage on the host that serves the requests. But if it were up to me, I would always use a stateful one. Why should I ever use a stateless pod? What is the advantage of statelessness?
For a typical Kubernetes Pod, it will be managed by a higher-level controller like a Deployment. You might set the Deployment to have replicas: 3 so that if one of them fails the other two can pick up the load. On an update the existing Pods will get deleted and recreated. If there's heavy load, you can set up a HorizontalPodAutoscaler to increase that replica count for you, which will create more pods when needed.
All of this is really straightforward if your containers are stateless, and there are no consequences to kubectl delete pod.
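As a rough sketch of that kind of stateless setup (the names and image below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-stateless-api            # placeholder name
spec:
  replicas: 3                       # any replica can serve any request
  selector:
    matchLabels:
      app: my-stateless-api
  template:
    metadata:
      labels:
        app: my-stateless-api
    spec:
      containers:
      - name: api
        image: example.com/my-api:1.0   # placeholder image
        ports:
        - containerPort: 8080

Any one of these pods can be deleted at will: the Deployment simply creates a replacement, and a HorizontalPodAutoscaler can raise or lower the replica count without any special handling.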
The problem with a stateful pod is, well, the state. Kubernetes gives you some choices on where to store data, but most of them can only be used on one pod at a time; if you have multiple replicas then each generally needs its own local storage, and the application needs to know how to reconcile the multiple copies of it. (Or, if you can set up something like an NFS server, the application needs to know how to handle concurrent writes.) Operationally, you need to know how to back up and restore all of the individual little volumes that are getting created along the way.
A standard approach is to minimize the number of places where state is stored, and have stateless applications reach that state over the network. The state doesn't even need to be in the cluster: if your application is running in AWS, you could have containers that principally store data in RDS-hosted relational databases and Amazon's S3 object store but keep nothing locally, and you can then use the normal backup and management approaches for those out-of-cluster stores.

What purpose do ephemeral volumes serve in Kubernetes?

I'm starting to learn Kubernetes recently and I've noticed that among the various tutorials online there's almost no mention of Volumes. Tutorials cover Pods, ReplicaSets, Deployments, and Services - but they usually end there with some example microservice app built using a combination of those four. When it comes to databases they simply deploy a pod with the "mongo" image, give it a name and a service so that other pods can see it, and leave it at that. There's no discussion of how the data is written to disk.
Because of this I'm left to assume that with no additional configuration, containers are allowed to write files to disk. I don't believe this implies files are persistent across container restarts, but if I wrote a simple NodeJS application like so:
const fs = require("fs");
fs.writeFileSync("test.txt", "blah");
const value = fs.readFileSync("test.txt", "utf8");
console.log(value);
I suspect this would properly output "blah" and not crash due to an inability to write to disk. (Note that I haven't tested this because, as I'm still learning Kubernetes, I haven't gotten to the point where I know how to put my own custom images in my cluster yet; I've only loaded images already on Docker Hub so far.)
When reading up on Kubernetes Volumes, however, I came upon the Ephemeral Volume -- a volume that:
get[s] created and deleted along with the Pod
The existence of ephemeral volumes leads me to one of two conclusions:
Containers can't write to disk without being granted permission (via a Volume), and so every tutorial I've seen so far is bunk because mongo will crash when you try to store data
Ephemeral volumes make no sense because you can already write to disk without them, so what purpose do they serve?
So what's up with these things? Why would someone create an ephemeral volume?
Container processes can always write to the container-local filesystem (Unix permissions permitting); but any content that goes there will be lost as soon as the pod is deleted. Pods can be deleted fairly routinely (if you need to upgrade the image, for example) or outside your control (if the node it was on is terminated).
In the documentation, the types of ephemeral volumes highlight two major things:
emptyDir volumes, which are generally used to share content between containers in a single pod (and more specifically to publish data from an init container to the main container; there is a sketch of this below); and
injecting data from a configMap, the downward API, or another data source that might be totally artificial
In both of these cases the data "acts like a volume": you specify where it comes from, and where it gets mounted, and it hides any content that was in the underlying image. The underlying storage happens to not be persistent if a pod is deleted and recreated, unlike persistent volumes.
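Here is a minimal sketch of the first case, an init container publishing data to the main container through an emptyDir volume (the images, paths, and generated content are just for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  initContainers:
  - name: fetch-content
    image: busybox
    command: ['sh', '-c', 'echo "generated at startup" > /work/index.html']
    volumeMounts:
    - name: shared
      mountPath: /work
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: shared
      mountPath: /usr/share/nginx/html   # nginx serves what the init container wrote
  volumes:
  - name: shared
    emptyDir: {}                         # lives exactly as long as the pod does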
Generally prepackaged versions of databases (like Helm charts) will include a persistent volume claim (or create one per replica in a stateful set), so that data does get persisted even if the pod gets destroyed.
So what's up with these things? Why would someone create an ephemeral volume?
Ephemeral volumes are more of a conceptual thing. The need for the concept comes from microservices and orchestration, and it is also guided by the twelve-factor app methodology. But why?
One major use case: when you are deploying a number of microservices (and their replicas) as containers across multiple machines in a cluster, you don't want a container to be reliant on its own storage. If containers rely on their own storage, shutting them down and starting new ones affects the way your app behaves, and this is something everyone wants to avoid. Everyone wants to be able to start and stop containers whenever they want, because that allows easy scaling, updates, etc.
When you actually need a service with persistent data (like a DB), you need a special type of configuration, especially if you are running on a cluster of machines. If you are running on one machine, you could use a mounted volume just to be sure that your data will persist even after the container is stopped. But if you just want to load balance across hundreds of stateless API services, ephemeral containers are what you actually want.

What is a Kubernetes StatefulSet?

I'm trying to understand what a StatefulSet is, but I can't quite grasp it. I only know that it is similar to a ReplicaSet and that copies of the container will be created. I have also understood that it is appropriate for databases, but the rest I could not understand. Can anybody explain it, maybe with an example or a use case?
Thanks for your help.
As the name says: in Kubernetes, if you are running a stateful application, you should use a StatefulSet instead of a Deployment.
StatefulSets are used for applications that store data, keep sessions, and otherwise handle state.
For example, a StatefulSet can be useful for Elasticsearch or Redis.
Yes, it is similar to a ReplicaSet, but with some extra features, like ensuring that only one pod is created at a time and allowing a templated volume config so each pod gets its own volumes (unlike an RS, where the pod template results in every pod being identical). These features are aimed specifically at stateful applications like databases, though the specifics depend on the database and are out of scope for an SO question.
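A rough sketch of that templated volume config, assuming a Redis StatefulSet where volumeClaimTemplates gives every replica its own PersistentVolumeClaim (the names, image, and sizes are placeholders, and a matching headless Service is assumed to exist):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis            # assumed headless Service giving each pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:6
        volumeMounts:
        - name: data
          mountPath: /data
  volumeClaimTemplates:         # one PVC per pod: data-redis-0, data-redis-1, data-redis-2
  - metadata:
      name: data
    spec:
      accessModes: ['ReadWriteOnce']
      resources:
        requests:
          storage: 1Gi

Here pod redis-0 gets claim data-redis-0, redis-1 gets data-redis-1, and so on; when a pod is deleted or rescheduled, it re-attaches its own claim rather than sharing one volume across all replicas.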

Kubernetes Volume definition - explanation

I am trying to become more familiar with the Kubernetes orchestration tool, and I am facing a conceptual issue with volumes.
From my understanding, a volume allocates space on a drive in order to persist data, and this volume can be mounted on a pod. This is OK so far.
But what will happen in the scenario below:
We have 3 pods, and each of them has a mounted volume to which we persist some data. At some point we don't need 3 pods anymore and we kill one of them. What about its volume and its data? Will this data be lost, or should we somehow transfer it to another volume?
Sorry if this is badly worded, but I am trying to understand.
Thanks in advance!
A Volume is a way to describe a place where data can be stored. It does not have to be on a local drive, and it does not have to be network block storage. A whole bunch of volume implementations are available, ranging from emptyDir and hostPath, via iSCSI and EBS, all the way to NFS or GlusterFS. A volume is a place where you define a piece of a more or less POSIX-compliant filesystem.
What happens to it when its pod is gone mostly depends on what you are using. For example, an EBS volume can be scrapped, but an NFS share may stay exactly as it was.
There is even more: you can have Persistent Volume Claims, Volume Claim Templates, and Persistent Volumes, which all build upon the Volume concept itself to provide useful abstractions.
I strongly encourage you to read about and play with all of them to get a better understanding of how storage can be managed in Kubernetes.
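As a small example of how those abstractions fit together: a PersistentVolumeClaim requests storage, and a pod mounts it by name, so the claim and its data outlive any individual pod that uses it. The storage size and image are placeholders, and the cluster's default StorageClass is assumed:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ['ReadWriteOnce']
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: nginx                # placeholder image
    volumeMounts:
    - name: data
      mountPath: /var/lib/app
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: app-data       # deleting the pod leaves the PVC and its data intact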

Handling multiple persistent volumes for a RethinkDB Docker Swarm

I'm currently using RethinkDB across cloud servers by manually joining each server at setup. I'm interested in moving over to a Swarm approach to make scaling and failover easier. The current approach is cumbersome to scale.
In the current manual approach, I simply create a local folder on each server for RDB and mount as a volume to store its data. However, using a Swarm means that I'd need to handle volumes more dynamically. Each container will need a distinct volume to keep data separate in case of errors.
Any recommendations on how to handle this scenario? A lot of the tutorials I've seen so far mention Flocker to manage persistent storage, but I can't see that being handled dynamically.
Currently I am struggling with a situation like this. I've created a temporary fix with GlusterFS.
What you do is install GlusterFS on all the Docker nodes and mount the folders. This way the data exists on all the nodes. But this is less than ideal if you have a lot of writes: it can be slow because of the way Gluster replicates your data to prevent data loss. It is solid, but I have some issues with the speed.
In your case I would suggest looking into Flocker. Flocker is a volume plugin that migrates your data when a container moves to another host. I haven't had any experience with it, but in my case the concept of Flocker is useless: I need my data in multiple containers on multiple hosts (read-only), which is where Gluster came into play.
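To make the GlusterFS approach concrete, here is a sketch of a Swarm stack file that bind-mounts a directory from the Gluster mount, which is assumed to exist at /mnt/gluster on every node:

version: '3.7'
services:
  rethinkdb:
    image: rethinkdb
    volumes:
      # /mnt/gluster is assumed to be the GlusterFS mount point on every Swarm node,
      # so the same data is visible wherever the task gets scheduled
      - /mnt/gluster/rethinkdb:/data
    deploy:
      replicas: 1

With more than one replica you would still want to give each task its own subdirectory, since each RethinkDB instance needs its own data directory.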
