I have a data processing application which is updated on a regular basis. This application has a bunch of dependencies which are also updated every now and then. However, different versions of the software (+dependencies) might produce different results (this is expected). The application runs on a remote computer and is accessed through a Web page. Every time a user uses the Web page to do some processing, they also choose which version of the software they want to use.
Now I am trying to decide on the best way of keeping track of different software (+dependencies) versions. The simplest way, of course, is to just compile and install each version of my software and its dependencies in a different folder and then, based on the request the user sends, select the appropriate folder. However, this sounds very clunky to me. So I thought I could use Docker to keep track of the different software versions. Do you think that is a good idea? If yes, what is the most appropriate thing to do every time I have a new version of the software (and/or dependencies): 1) Create a new container from scratch with the new version (and end up having multiple containers), or 2) Update the existing container and commit the changes? (I suppose I can access the older commits of the container, right?)
PS: Keep in mind that the reason I looked into Docker, and not a simple virtual machine solution, is that the application I am running is high-performance GPU-based software.
Docker is a reasonable choice. Your image repository would contain all of the app versions you wish to publish. Note that you will only realize storage savings if you organize the resulting app filesystem into layers, with the lower layers being the ones least likely to change between versions. This will keep the storage requirements to a minimum.
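For instance, a Dockerfile structured so that the slow-changing pieces come first might look like the sketch below (the base image, package names and paths are placeholders; adapt them to your actual GPU stack). Every published version then becomes a tag in your repository, e.g. docker build -t myrepo/myapp:2.1 ., and versions that share the same dependency layers only store those layers once.

```dockerfile
# Sketch only: base image and packages are placeholders for your real GPU stack
FROM nvidia/cuda:11.8.0-runtime-ubuntu22.04

# System-level dependencies: change rarely, so keep them in the lowest layers
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3 python3-pip && \
    rm -rf /var/lib/apt/lists/*

# Application dependencies: change occasionally
COPY requirements.txt /opt/app/
RUN pip3 install --no-cache-dir -r /opt/app/requirements.txt

# The application itself: changes with every release, so it goes last
COPY ./app /opt/app
CMD ["python3", "/opt/app/main.py"]
```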
Then you have to decide how you will process each job. A robust (but complex) solution would be to have one or more API containers which take in processing jobs from your users and dole them out to worker containers (one or more per release version). This would provide the lowest response latency and be non-blocking. You can look at different service discovery models to see how your "worker" containers can register with your "manager" containers. This is probably more than you'd like to bite off, but consider using a good key-value store (another container!) like etcd, or a 3rd-party service discovery tool like ZooKeeper/Eureka/Consul.
A much simpler model would have a single API container, with one of each release container created but not started. The API container would start, direct work to, and then stop the appropriate release container. You would incur the startup latency, but this is the least resource-intensive option... and the easiest to manage. It is, however, a blocking operation.
Somewhere in the middle, but less user-friendly, is to have each release container running but listening on a different host port (the app always sees the same port). The user would connect to the port which is servicing the desired release of the app. You'd have to provide some sort of index to make this useful.
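As a rough illustration of that last model (image name, tags and ports are placeholders; the --gpus flag assumes Docker 19.03+ with the NVIDIA container toolkit installed):

```sh
# One long-running container per published release, each on its own host port
docker run -d --name myapp-1.0 --gpus all -p 8081:8080 myrepo/myapp:1.0
docker run -d --name myapp-1.1 --gpus all -p 8082:8080 myrepo/myapp:1.1
docker run -d --name myapp-2.0 --gpus all -p 8083:8080 myrepo/myapp:2.0
```

Your web front end (or a small reverse proxy) would then map the version the user picked to the corresponding port.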
I recently finished a project where I created an app consisting of several Docker containers. The purpose of the app was to collect some data, save it to a database, and also allow user interaction over a simple web GUI. The app was hosted on four different Raspberry Pis, and it was possible to collect data from all physical machines through an API. Furthermore, you could do some simple machine learning tasks, like detecting anomalies in the sensor data of the Pis.
Now I'm trying to take the next step and use Kubernetes for some load balancing and remote updates. My main goal is to remotely update all Raspberry Pis from my master node, which, in theory, would be a very handy feature. I also want to share the resources of the Pis within the cluster for calculations.
I read a lot about Kubernetes, Minikube, K3s, Kind and all the different approaches to setting up a Kubernetes cluster, but I feel like I am missing "a last puzzle piece".
So from what I understood, I need an approach which allows me to set up a local (because all machines are sitting on my desk; no cloud needed) multi-node cluster. My master node would ideally be my laptop, running Ubuntu in a virtual machine. My Raspberry Pis would be my worker nodes. If I want to update my cluster, I can use the Kubernetes remote update functionality.
So my question out of this would be: Does it make sense to use several Raspberry Pis as nodes in a Kubernetes cluster and to manage them from one master node (laptop), and do you have any suggestions on how to achieve this setup?
I usually don't like questions that don't contain any specific code or a concrete problem, but I feel like a simple hint could accelerate my project noticeably. If this is the wrong place, please feel free to delete this question.
Best regards
You didn't mention which Raspberry Pi models you are using, but I assume you are not using Pi Zeros.
My main goal is to remotely update all Raspberry Pis from my master node.
Assuming that by this you mean updating your applications running in the Kubernetes that is installed on the Pis, then keep reading. Otherwise ignore everything I wrote; what you probably need is Ansible or another similar provisioning/configuration-management/application-deployment tool.
Now, answering your questions:
Does it make sense to use several Raspberry Pis as nodes in a Kubernetes cluster
Yes; this is why people created k3s, so that such a setup is possible using fewer resources.
and to manage them from one master node (laptop)
Assuming you will be using it for learning purposes, then why not. It is possible, but just be aware that when the master node goes down (e.g. when you turn off your laptop), the whole cluster goes down (or at least API-server communication, so you won't be able to change the cluster's state). Also make sure you are using a bridged network interface for your VM so that it is visible on your local network as a standalone host.
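If the VM runs under VirtualBox, for example, switching its first adapter to bridged mode might look like this (the VM name and host interface are placeholders, and the VM must be powered off when you change it):

```sh
VBoxManage modifyvm "k8s-master" --nic1 bridged --bridgeadapter1 wlan0
```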
and do you have any suggestions on how to achieve this setup?
Installing k3s on all nodes would be the easiest option in your case. There are plenty of resources on the internet explaining how to achieve it.
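The quick start boils down to roughly the following (always check the official k3s docs for the current syntax; <master-ip> and <token> are placeholders):

```sh
# On the master node (the Ubuntu VM):
curl -sfL https://get.k3s.io | sh -

# Read the join token on the master:
sudo cat /var/lib/rancher/k3s/server/node-token

# On each Raspberry Pi, join the cluster as an agent/worker:
curl -sfL https://get.k3s.io | K3S_URL=https://<master-ip>:6443 K3S_TOKEN=<token> sh -
```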
One last thing I would like to explain is updates.
Speaking of Kubernetes updates: Kubernetes doesn't update itself automatically; you need to update it explicitly. A new k8s version is released every three months, and it sometimes "breaks" things so that backward compatibility is not possible (so always read the changelog before updating anything, because a rollback may not be possible unless you backed up the etcd cluster earlier).
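For reference, on a kubeadm-style cluster such a backup is typically an etcdctl snapshot; the certificate paths below are the kubeadm defaults and will differ on other setups (k3s, when run with embedded etcd, has its own k3s etcd-snapshot save subcommand instead):

```sh
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  snapshot save /backup/etcd-snapshot.db
```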
Speaking of updating applications: to run your app, all you do is send YAML files describing your application to k8s and it handles the rest. So if you want to update your app, just change the container image tag to the newer version and k8s will handle the rollout. Read more about update strategies (e.g. rolling updates) in the Kubernetes documentation.
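A minimal sketch of such an update, assuming a Deployment and container both hypothetically named sensor-collector:

```sh
# Point the Deployment at the new image tag; k8s performs a rolling update
kubectl set image deployment/sensor-collector sensor-collector=myrepo/sensor-collector:v1.1.0

# Watch the rollout, and undo it if something goes wrong
kubectl rollout status deployment/sensor-collector
kubectl rollout undo deployment/sensor-collector
```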
Our company is developing an application which runs in 3 separate Kubernetes clusters in different versions (production, staging, testing).
We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mailserver.
So basically we have 3 different environments with different versions of our application. And we have some shared services that just need to run and we do not care much about them:
Monitoring: We need to install InfluxDB and Grafana. In every cluster there's a pre-installed Heapster that needs to send data to our tools.
Logging: We haven't decided yet.
Mailserver (https://github.com/tomav/docker-mailserver)
Independent services: Sentry, GitLab
I am not sure where to run these external shared services. I found these options:
1. Inside each cluster
We need to install the tools 3 times for the 3 environments.
Cons:
We don't have one central point to analyze our systems.
If the whole cluster is down, we cannot look at anything.
Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional kubernetes-cluster.
Cons:
Cost for an additional cluster
It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).
3. Use an additional root server
We run Docker containers on an old-school root server.
Cons:
Feels contradictory to use a root server instead of cutting-edge k8s.
Single point of failure.
We need to manage the Docker containers manually (or attach the machine to Rancher).
I tried to google the problem but couldn't find anything on the topic. Can anyone give me a hint or some links on this topic?
Or is it simply not a relevant problem that a cluster might go down?
To me, the second option sounds less evil, but I cannot yet estimate whether it's hard to transfer data from one cluster to another.
The important questions are:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Thanks for your hints,
Marius
That is a very complex and philosophical topic, but I will give you my view on it and some facts to support it.
I think the best way is the second one - Create an additional cluster, and that's why:
You need a point which is accessible from any of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments without affecting your current workloads.
Yes, you need to pay a bit more. However, you need resources to run your shared applications anyway, and the overhead of the Kubernetes infrastructure itself is not high in comparison with the applications.
With a separate cluster, you can set up a real HA solution, which you might not need for the staging and development clusters, so you will not pay for it multiple times.
Technically, it is also OK. You can use Heapster to collect data from multiple clusters; almost any logging solution can also work with multiple clusters. All other applications can be just run on the separate cluster, and that's all you need to do with them.
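As a sketch of the Heapster side (assuming you keep the pre-installed Heapster and only repoint its sink): Heapster takes its InfluxDB target as a flag, so each cluster's Heapster deployment could push to the InfluxDB running in the shared cluster. The URL below is a placeholder, and you would still need network/firewall access between the clusters.

```yaml
# Fragment of a Heapster Deployment spec (not a complete manifest)
containers:
- name: heapster
  image: k8s.gcr.io/heapster-amd64:v1.5.4
  command:
  - /heapster
  - --source=kubernetes:https://kubernetes.default
  - --sink=influxdb:http://influxdb.shared-cluster.example.com:8086
```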
Now, about your questions:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
No, it is not a problem with a separate cluster.
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
I think, yes. At least I did it several times, and I know some other projects with similar architecture.
Is it (easily) possible to send metrics and logs from one kubernetes-cluster to another (we are running kubernetes in OpenTelekomCloud which is basically OpenStack)?
Yes, nothing complex there. Usually, it does not depend on the platform.
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.
Call it a way to obfuscate the source code, but also offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", with the main advantage that the developer can use these container(s) for local development too, but with no access to the underlying code within the containers.
My first thought is that whoever controls the Docker host controls everything, down to root access. So no, it isn't possible. But I am new to Docker and am not aware of all of its possibilities.
Is this idea in any way possible?
An obfuscation-only solution would not be enough, as "Encrypted and secure docker containers" details.
You would need full control of the host your containers run on in order to prevent any "poking". And that is not the case in your scenario, where the developer does have access to the host (i.e. his/her local development machine) where said container would run.
What is sometimes done is to have some piece of "core" code run in a remote location (a remote server, a USB device), so that this external code can on the one hand do some client authentication, but also, more importantly, run some core business logic, in order to guarantee that the externally located code has to be executed for the work to get done. If it were only a check, and not actual core code, a cracker could just override it and avoid calling it on the client side. But if the code is actually required to run and it isn't, the software won't be able to finish its processing. Of course there is an overhead to all of this, both in complexity and probably in computation time, but that's one way you could deploy something that will unfailingly be required to contact your server/external device.
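A minimal sketch of that pattern, from inside the shipped container: the container holds only a thin client, and the "core" transformation happens on a server you control. The endpoint, token and file names here are hypothetical.

```sh
# Hypothetical: the step that actually matters runs remotely, so the
# container alone can never complete the processing.
curl -fsS https://core.example.com/v1/transform \
     -H "Authorization: Bearer ${LICENSE_TOKEN}" \
     --data-binary @intermediate_result.json \
     -o final_result.json
```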
Regards,
Eduardo
One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones from new images holding the new code. This essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, put like that it sounds like asking for trouble, but...
My question would be the following: has anyone tried, with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the dos, don'ts and caveats?
The story
Imagine a system to handle mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will call it "the session" going forward). Those servers have an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, the Policy Decision Function). The two systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF, and it drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call), the gateway notifies the PDF about it, and the PDF subtracts a specific amount of money from the user's account. When the user's account is empty, the PDF notifies the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node, and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections to the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to upgrade from one version of the software to another in the case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software. We test it properly and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted, and the node is running the new version of the software. Most importantly, the TCP connection with the gateway server has not been dropped. So customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. All is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's release handling are orthogonal technologies. You can use either or both; it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches predictably, and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can exercise only one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to be available during the upgrade (unlike the example PDF system), you might just opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker, if you need quick and reliable on-the-fly updates and Docker would only be used for the initial deployment. I hope that helps.
I'm currently tinkering with a scenario for using CoreOS. It's probably not the first-class use case, but I'd like to get a pointer on whether it's valid. As I'm really at the beginning of getting a grip on CoreOS, I hope that my "use case" is not totally off.
Imagine a multi-tenant application where every tenant should get its own runtime environment. Let's take a web app running on Node.js with PostgreSQL for data storage as a given. Each tenant environment would be running on CoreOS in its respective containers. Data persistence is left out for now. For me it's currently more about the general feasibility.
So why CoreOS?
Currently I'm trying to stick with the idea of separate environments per tenant. To optimise the density of DB and web server instances per hardware host, I thought CoreOS might be the right choice instead of "classic" virtualisation.
Another reason is that a lot of tenants might not need more than a single, smallish DB instance and a single, smallish web server, while other tenants may need constantly scaled-out deployments, and still others might need a temporary scale-out during burst times. CoreOS sounds like a good fit here as well.
On the other side, there must be a scalable messaging infrastructure (RabbitMQ) behind it all that will handle a lot of messages. This infrastructure will be used by all tenants and ideally needs to be dynamically scalable. There will probably be a "to be scaled" Elasticsearch infrastructure as well. Viewed through my current "CoreOS for everything" goggles, this seems a good fit too.
Assuming this whole scenario is generally valid, I currently cannot see how it would be possible to route the traffic for a generally available web site to the different tenant containers.
Imagine the app is running at app.greatthing.tld. A user can log in and should be presented the app served for their tenant. Is this something socketplane and/or flannel are there to solve? Or what would a solution look like to get the tenant served by the right containers? I think it's kind of a general issue, but at least in the context of a CoreOS containerized environment I cannot see how to deal with it at all.
CoreOS takes care of scheduling your containers in the cluster with its own tools (fleet/fleetctl, etcd, systemd), and persistent storage can follow a container when it is rescheduled to a different host using Flocker (experimental). They also provide their own load-balancing options.
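To make the scheduling part concrete: with fleet, each tenant service is just a systemd unit submitted to the cluster. A hypothetical unit for one tenant's web container might look like the sketch below (tenant name, image, ports and machine metadata are placeholders); you would submit it with fleetctl start acme-web.service and fleet picks a machine to run it on.

```ini
[Unit]
Description=Web app for tenant acme
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container from a previous run, then start a fresh one
ExecStartPre=-/usr/bin/docker rm -f acme-web
ExecStart=/usr/bin/docker run --rm --name acme-web -p 10080:3000 myrepo/webapp:latest
ExecStop=/usr/bin/docker stop acme-web

[X-Fleet]
# Example constraint: only schedule on machines tagged as workers
MachineMetadata=role=worker
```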