I have set up a Kubernetes cluster with three nodes. All nodes are Linux CentOS machines.
I need a persistent volume to store data, and I am trying to achieve this.
I was following this tutorial, but it only covers a one-node cluster.
https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Since my cluster consists of three nodes, I cannot use a local path, so the previous tutorial did not work for me.
I need a network path, and using NFS seems a reasonable solution to me. (If there is a good alternative, I would like to hear about it.)
Using an NFS network mount involves two steps.
First, creating the shared directory on a network path (the NFS export).
Second, defining this network path as a persistent volume in Kubernetes and using it.
The second step is pretty straightforward. It is explained in the Kubernetes documentation, and there is even a sample YAML file.
documentation:https://kubernetes.io/docs/concepts/storage/persistent-volumes/#persistent-volumes
example: https://github.com/kubernetes/examples/blob/master/staging/volumes/nfs/nfs-pv.yaml
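For reference, a minimal PersistentVolume sketch along the lines of that example looks like this (the server address and path below are just placeholders, not my real values):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv            # hypothetical name
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany       # NFS allows many nodes to mount the same share
  nfs:
    server: 10.0.0.10     # placeholder: NFS server IP or DNS name
    path: /var/nfs/general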
The first part also seems straightforward. It is explained in the following document:
https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-16-04#step-5-%E2%80%94-creating-the-mount-points-on-the-client
The format of /etc/exports is:
directory_to_share client(share_option1,...,share_optionN)
For example, /etc/exports might contain:
/var/nfs/general 203.0.113.256(rw,sync,no_subtree_check)
/home 203.0.113.256(rw,sync,no_root_squash,no_subtree_check)
But when you export a path over NFS, you have to do some configuration and give clients certain rights. Basically, you need the client IP addresses.
With Kubernetes we use abstractions such as Pods, and we don't want to deal with real machines and their IP addresses. So the problem starts here.
So, I don't want to give the node IPs to the NFS server (they might change in the first place). There should be a better solution so that all Pods (on any node) are able to connect to the NFS network path.
Even allowing all IPs without restriction, or allowing an IP range, might solve the issue. I would like to hear if there is such a solution, but I would also like to hear what the best practice is: how does everybody else use an NFS network path from Kubernetes?
I could not find any solution yet. If you have solved a similar problem, please let me know how you solved it. Any documentation on this issue would be good too. Thanks in advance!
You asked for best practices, and from what I've found I think the best option would be white-listing the IP addresses. Since you do not want to do that, there are also some workaround answers posted on SO by people who had similar issues with dynamic client IPs and NFS. You can find a link to a deployment using GlusterFS in the posted answers. If you want NFS with a dynamic IP (because it can change), you can use DNS names instead of IP addresses. If you need dynamic provisioning, use GlusterFS.
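For the white-listing option, note that /etc/exports also accepts a whole subnet instead of a single client IP, so you would not have to list every node; a sketch with a placeholder range:

/var/nfs/general 192.168.1.0/24(rw,sync,no_subtree_check)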
I will add some information about volumes, as you asked. It might shed some light on the topic and help with the issue.
Since Pods are ephemeral, you need to move the volume outside the Pod - thereby making it independent of the Pods - so that the volume persists its state in case of Pod failure. Kubernetes supports several types of Volumes.
You could use NFS (more on NFS in the previous link) - after the Pod is removed the volume gets unmounted, but it still exists. This alone is also not desirable in your situation, as the user needs to know the file system type and other details about the volume they want to connect to. When you go to the NFS YAML examples in the documentation, you will see that their kind is defined as PersistentVolumeClaim.
This method is based on creating a series of abstractions that allow a Node to connect to the PersistentVolume while the user does not need any backend details; in addition, your node can connect to many providers.
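For completeness, a minimal PersistentVolumeClaim sketch of the kind used in those examples (the name and size are placeholders); the Pod then references the claim instead of the NFS details:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-claim          # hypothetical name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi         # must fit within the PersistentVolume above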
Related
I have 3 replicas of the same application running in Kubernetes, and the application exposes an endpoint. Hitting the endpoint sets a boolean variable to true, which is used in my application. But the issue is that when hitting the endpoint, the variable is updated only in one of the replicas. How can I make the change in all replicas just by hitting one endpoint?
You need to store your data in a shared database of some kind, not locally in memory. If all you need is a temporary flag, Redis would be a popular choice, or for more durable stuff Postgres is the gold standard. But there's a wide and wonderful world of databases out there; explore which ones match your use case.
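As a rough sketch of that approach (the names and image tag are mine, not from your setup), all replicas would talk to a single Redis instance through a Service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6   # any recent Redis image
          ports:
            - containerPort: 6379
---
apiVersion: v1
kind: Service
metadata:
  name: redis              # your app replicas reach it as redis:6379
spec:
  selector:
    app: redis
  ports:
    - port: 6379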
Seems like you're trying to solve an issue with your app using Kubernetes.
Let me elaborate:
If you have several pods behind a service, you can't access all of them using a single request. This has been proposed here, but in my opinion it isn't best practice.
If you need to share data between your apps, you can have them communicate with each other using a cluster service.
You probably assume you can share data using Kubernetes volumes, such as gcePersistentDisk or some other sort of volume, but then again, volumes were not meant to solve such problems.
In conclusion, the way I would solve such an issue is by having the pods communicate the changes with each other. Kubernetes can't solve this for you.
EDIT:
Another approach could be having shared storage (for example, a single pod running MongoDB), but I would say that it's a bit of overkill for your issue. Also, in order to communicate with this pod you would probably need in-cluster communication anyway.
I have a question about best practices for designing Deployments and/or StatefulSets for stateful applications like WordPress and co.
The current idea I had was to make a fully dynamic image for one specific CMS, with the idea that I can mount the project data into it, like themes, files, etc.
In the case of WordPress that would be wp-content/themes. Or is that the wrong way? Is it better to build the image with the right data already in it and not worry about it at deployment time, because you already have everything?
What are your experiences with stateful apps, and how did you solve those "problems"?
thanks for answers :)
I don't think WordPress is really stateful in this sense, and it should be deployed as a regular Deployment.
A StatefulSet is typically for things like databases that need storage. As an example, Cassandra would typically be a StatefulSet with mounted volume claims. When one instance dies, a new one is brought up with the same name, IP address and volume as the old one. After a short while it should be part of the cluster again.
With Deployments you do not get stable names or IP addresses, and you cannot use volumeClaimTemplates to give each replica its own volume (although a Deployment can still mount a shared PersistentVolumeClaim).
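A rough sketch of what that looks like for a StatefulSet (names, image and sizes are placeholders): each replica gets its own claim from the template, tied to the Pod's stable name.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra              # hypothetical name
spec:
  serviceName: cassandra       # headless Service that provides the stable names
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
          volumeMounts:
            - name: data
              mountPath: /var/lib/cassandra
  volumeClaimTemplates:        # one PVC per replica: data-cassandra-0, data-cassandra-1, ...
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi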
Everything you need just to run the app (e.g. wp-content/themes) would be nice to put into the image.
Everything that will change (the stateful part) you can store in a PVC.
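As a sketch of that split (image tag, paths and claim name are assumptions on my part): the image holds the code and themes, while a claim mounted at the changing part holds the state.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
        - name: wordpress
          image: wordpress:5
          volumeMounts:
            - name: wp-uploads
              mountPath: /var/www/html/wp-content/uploads   # the changing, stateful part
      volumes:
        - name: wp-uploads
          persistentVolumeClaim:
            claimName: wp-uploads-claim                     # hypothetical PVC created beforehand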
I am trying to become more familiar with the Kubernetes orchestration tool, and I have run into a conceptual issue regarding volumes.
From my understanding, a volume allocates space on a drive in order to persist data, and this volume can be mounted on a pod. This is fine so far.
But what will happen in the scenario below:
We have 3 pods, and each of them has a mounted volume in which we persist some data. At some point we don't need 3 pods anymore and we kill one of them. What about its volume and its data? Will the data be lost, or should we transfer it somehow to another volume?
Sorry for the rough description, but I am trying to understand.
Thanks in advance!
A Volume is a way to describe a place where data can be stored. It does not have to be on a local drive, and it does not have to be network block storage. A whole bunch of volume implementations are available, ranging from emptyDir and hostPath, via iSCSI and EBS, all the way to NFS or GlusterFS. A volume is a place where you define a piece of a more or less POSIX-compliant filesystem.
What happens to it when its pod is gone is mostly up to what you are using. For example, an EBS volume can be scrapped, but an NFS share may stay exactly as it was.
There is even more, as you can have Persistent Volume Claims, Volume Claim Templates and Persistent Volumes, which all build upon the Volume concept itself to provide useful abstractions.
I strongly encourage you to read about and play with all of them to get a better understanding of how storage can be managed in Kubernetes.
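As a small illustration of the difference (names are placeholders): an emptyDir lives and dies with the Pod, while a persistentVolumeClaim points at storage that outlives it.

apiVersion: v1
kind: Pod
metadata:
  name: volume-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch  # gone when the Pod is deleted
        - name: data
          mountPath: /data     # survives the Pod
  volumes:
    - name: scratch
      emptyDir: {}
    - name: data
      persistentVolumeClaim:
        claimName: my-claim    # hypothetical existing PVC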
I'm totally new to Docker and started doing some tutorials yesterday. I want to build a small test application consisting of several different services (replicated and so on) that interact with each other, and I encountered a problem regarding 'service discovery'. I started with the get-started tutorials on docker.com, and at the moment I'm not really sure what the best practice is in the Docker world for letting the different containers in a network get to know each other...
As this is a rather vague problem description, let me try to make it more precise. I want to use a few independent services (e.g. with stuff like Postgres, MongoDB, Redis and RabbitMQ...) together with a set of worker nodes to which work is assigned by a dedicated master node. Since it seems quite convenient, I wanted to use a docker-compose.yml file to define all my services and deploy them as a stack.
Moreover, I created a custom network and since it seems not to be possible to attach a stacked service to a bridge network, I created an attachable overlay network.
To finally get to the point: even though the services are deployed correctly, their actual container names are random, and without using some kind of service registry I'm not able to resolve their addresses.
A simple solution would be to use single containers with fixed container names; however, this does not seem to be a best-practice solution (even though it is actually just Docker-based DNS keyed on container names rather than domain names). Another problem is the randomly generated container names that contain underscores, which makes them invalid addresses that cannot be resolved...
best regards
Have you looked at something like Kubernetes? To quote from the home page:
It groups containers that make up an application into logical units for easy management and discovery.
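To give a feel for the discovery part (names here are placeholders, not from your stack): a Service gives a group of Pods a stable DNS name, so other containers reach them by that name regardless of container names.

apiVersion: v1
kind: Service
metadata:
  name: postgres             # other Pods resolve this name via cluster DNS, e.g. postgres:5432
spec:
  selector:
    app: postgres            # matches the labels on the Postgres Pods
  ports:
    - port: 5432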
Can we share a common/single named volume across multiple hosts in Docker Engine swarm mode? What's the easiest way to do it?
If you have an NFS server set up, you can use an NFS folder as a volume from Docker Compose like this:
volumes:
  grafana:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.xxx.xx,rw
      device: ":/PathOnServer"
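A service in the same compose file would then mount it by name, for example (assuming Grafana's usual data directory):

services:
  grafana:
    image: grafana/grafana
    volumes:
      - grafana:/var/lib/grafana   # the NFS-backed named volume defined above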
In the grand scheme of things
The other answers are definitely correct. If you feel like you're still missing something or are coming to the conclusion that things might never really improve in this space, then you might want to reconsider the use of the typical POSIX-like hierarchical filesystem abstraction. Not all applications really need it (I might go as far as to say that few do). Maybe yours doesn't either.
In defense of filesystems
It is still very common in many circles, but usually these people know their remote/distributed filesystems very well and know how to set them up and leverage them properly (and they might be very good systems too, though often not with existing Docker volume drivers). Sometimes it's also in part because they're simply forced to (codebases that can't or shouldn't be rewritten to support other storage backends). Using, configuring or even writing arbitrary Docker volume drivers would be a secondary concern only.
Alternatives
If you have the option however, then evaluate other persistence solutions for your applications. Many implementations won't use POSIX filesystem interfaces but network interfaces instead, which pose no particular infrastructure-level difficulties in clusters such as Docker Swarm.
Solutions managed by third-parties (e.g. cloud providers)
Should you succeed in removing all dependencies on filesystems for persistent and shared data (it's still fine for transient local state), then you might claim to have fully "stateless" applications. Of course there is almost always state persisted somewhere still, but the idea is that you don't handle it yourself. Many cloud providers (if that's where you're hosting things) will offer fully managed solutions for handling persistent state such that you don't have to care about it at all. If you're going this route, do consider managed services that use APIs compatible with implementations that you can use locally for testing (for example by running a Docker container based on an image for that implementation that is provided by a third party or that you can maintain yourself).
DIY solutions
If you do want to manage persistent state yourself within a Docker Swarm cluster, then the filesystem abstraction is often inevitable (and you'd probably have more difficulties targeting block devices directly anyway). You'll want to play with node and service constraints to ensure the requirements of whatever you use to persist data are fulfilled. For certain things like a central DBMS server it could be easy ("always run the task on that specific node only"), for others it could be way more involved.
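For the "always run the task on that specific node only" case, a compose (v3) deploy section with a placement constraint is enough of a sketch (the hostname and image are placeholders):

services:
  db:
    image: postgres:12
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.hostname == storage-node-1   # hypothetical node that holds the data
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data: {}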
The task of setting up, scaling and monitoring such a setup is definitely not trivial, which is why many application developers are happy to let somebody else (e.g. cloud providers) do it. It's still a very cool space to explore however, though given you had to ask that question it's likely not something you should focus on if you're on a deadline.
Conclusion
As always, use the right abstraction for the job, and pause to think about what your strengths are and where to spend your resources.
Out of the box, Docker does not support this by itself. You must use additional components: either a Docker plugin which would provide you with a new type of driver for your volumes, or a sync tool running directly on your FS which will sync the data for you.
From my point of view, the easiest solution is rsync, or more precisely lsyncd, the daemon version of rsync. But I have never tried it for Docker volumes, so I can't tell whether it handles them fine.
Other solutions are offered by Infinit.sh. It basically does the same thing as lsyncd: it's a one-way sync, so if your Docker containers are RW on their volumes it won't match your expectations. I tried this solution, and it works pretty well for RO operations - but not in production, as it's still an alpha version. Infinit is also on the way to providing a Docker driver, but it hasn't been released yet, so I didn't even try it. Too risky.
Other solutions I found, but was unable to install (and therefore to try), are Flocker and GlusterFS. Both are designed to create FS volumes based on several HDDs from several machines, but none of their repositories were working these past weeks.
Sorry for giving you only weak solutions, but I'm facing the same problem and haven't yet found a perfect solution.
Cheers,
Olivier