A few questions in this regard.
First, which CNI plugins are officially supported in Azure? As far as I understand, kubenet and azure-cni can be used, as they are options in AKS/acs-engine. Is this a correct statement?
Second, which other CNIs, like Calico or Flannel, can be used? Which can be used safely, and which cannot?
At the moment, azure-cni and kubenet are the only officially supported CNIs. acs-engine supports a wider variety of CNI plugins, but those are not officially supported on Azure.
I am working on a project where a requirement just came in to create a pod for an MTA/SMTP server within a Kubernetes cluster, such that it can be accessed through service discovery like other services.
I didn't find anything concrete that I could follow to set this up on a Kubernetes cluster. My question is: if there is a way to do this, how? Also, is it a good idea to run it as a pod?
I will appreciate any help.
Thanks
You sure can. It's more a matter of opinion and really depends on how you divide your resources across containers, VMs, or bare-metal machines.
Some might argue that running something like Postfix is more efficient in Kubernetes, because the CPU/memory resources will mostly be used when Postfix is actually sending/receiving (a more efficient way of processing the mail queues). There are a few resources that you may be able to follow, for example:
https://www.tauceti.blog/post/run-postfix-in-kubernetes/
https://blog.mi.hdm-stuttgart.de/index.php/2019/08/26/creating-an-email-server-environment-on-kubernetes/
Postfix Helm chart: https://hub.helm.sh/charts/halkeye/postfix
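If it helps to see the shape of it, here is a minimal sketch of such a deployment; the image name, labels, and port are illustrative assumptions, not taken from any of the resources above:

```yaml
# Minimal sketch: Postfix running in-cluster as a Deployment.
# "example/postfix" is a placeholder image, not a real published one.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postfix
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postfix
  template:
    metadata:
      labels:
        app: postfix
    spec:
      containers:
        - name: postfix
          image: example/postfix:latest   # placeholder; see the links above
          ports:
            - containerPort: 587          # SMTP submission port
```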
It's actually relatively simple to deploy a Postfix MTA relay on Kubernetes (aka a Postfix null client).
There's:
an image available on Docker Hub,
a chart on Artifact Hub, and
source on GitHub.
(Disclaimer: I am the author of the chart and image. There are other alternatives, listed on that GitHub page.)
The service can then be simply accessed via smtp-server:587 or a similar resolution. The biggest issue you're going to face is configuring the outside services properly (such as SPF, DNS, and registering your IP block with Microsoft) to avoid having your email land in spam.
Most of it is explained nicely in the README.
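For reference, the smtp-server:587 address comes from an ordinary in-cluster Service; a minimal sketch (names are illustrative, not from the chart):

```yaml
# Exposes the Postfix pods inside the cluster under a stable DNS name.
apiVersion: v1
kind: Service
metadata:
  name: smtp-server
spec:
  selector:
    app: postfix          # must match the labels on your Postfix pods
  ports:
    - name: submission
      port: 587
      targetPort: 587
# Pods in the same namespace can reach it at smtp-server:587; elsewhere
# in the cluster, at smtp-server.<namespace>.svc.cluster.local:587.
```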
I'm evaluating the use of Spring Cloud Data Flow. I'm wondering why it supports Kafka and RabbitMQ but does not support JMS. Is there a technical reason, or is it just a matter of contributing and adding JMS support?
There is a variety of JMS-spec implementations from different vendors. In fact, there are implementations for IBM MQ, Solace, and ActiveMQ.
As for the support: since JMS is a spec and there is a variety of vendor-specific investment in the enterprise, we (Spring) didn't want to ship binaries that involve vendor-specific licensing terms, so we opened it up for the partners to support them instead. For example, Solace built a supported version of the Solace PubSub+ implementation, and that's hosted in their GitHub, too.
Google Pub/Sub and Azure Event Hubs are the other binder implementations, and they are supported and maintained directly by those vendors. More details here.
Lastly, from the SCDF point of view, if the Spring Cloud Stream applications are bundled with a particular binder implementation, nothing extra is required from SCDF. The SCDF server orchestrates the deployment of the Spring Cloud Stream applications on the targeted platform.
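To make "bundled with a particular binder implementation" concrete: the binder is picked up from the application's classpath, and the app only configures its bindings. A minimal sketch, assuming the Kafka binder and illustrative destination names:

```yaml
# application.yml for a Spring Cloud Stream app. The binder itself comes
# from the classpath (e.g. spring-cloud-stream-binder-kafka, or a
# vendor-supported JMS binder); SCDF needs nothing extra either way.
spring:
  cloud:
    stream:
      bindings:
        output:
          destination: orders          # topic/queue name on the broker (illustrative)
      kafka:
        binder:
          brokers: localhost:9092      # broker address (assumption)
```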
As far as I know, Google's Kubernetes is based on Google's Borg; however, it seems that Borg is larger than Kubernetes. My understanding is that Borg is a large system containing a sub-system similar to Kubernetes and its own container technology similar to Docker.
So, I would like to know:
1) In terms of container cluster management, what's the key difference between Borg (the sub-system inside) and Kubernetes?
2) In terms of container technology, what's the key difference between Borg (the sub-system inside) and Docker?
I have no 'inside' knowledge of Borg, so this answer is based only on what Google themselves have published here. For much greater detail, you should look into that paper. Section 8 makes specific reference to Kubernetes and is the basis of this answer (along with Kubernetes' own docs):
1) Key differences:
Borg groups work by 'job'; Kubernetes adds 'labels' for greater flexibility (see the sketch after this list).
Borg uses an IP-per-machine design; Kubernetes uses a network-per-machine and IP-per-Pod design to allow late-binding of ports (letting developers choose ports, not the infrastructure).
Borg's API seems to be extensive and rich, but with a steep learning curve; Kubernetes APIs are presumably simpler. At least, for someone who hasn't worked with Borg, the Kubernetes API seems pretty clean and understandable.
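To make the 'labels' point concrete, here is a minimal sketch of how a Kubernetes pod can be grouped along several dimensions at once (all names are illustrative):

```yaml
# Unlike Borg's single 'job' grouping, a pod can carry several labels
# and be selected by any combination of them (e.g. app=web,track=stable).
apiVersion: v1
kind: Pod
metadata:
  name: web-1
  labels:
    app: web          # what the pod runs
    tier: frontend    # architectural tier
    track: stable     # release track
spec:
  containers:
    - name: web
      image: nginx    # placeholder workload
```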
2) Borg seems to use LMCTFY as its container technology. Kubernetes allows the use of Docker or rkt.
Some other obvious differences are that Borg is not open source and not available for use outside of Google, while Kubernetes is both of those things. Borg has been in production use for more than 10 years, while Kubernetes just hit v1.0 in July 2015.
Hope this helps. Check out that Borg paper; it is worth the time to read the whole thing.
We are creating a new version of a payment gateway processor and we want to use Docker containers with Kubernetes, but we are worried about whether Kubernetes and Docker containers meet the PCI DSS requirements.
We can't find anything clear in the PCI DSS specifications.
Re-iterating Tim's comment above: as far as I know, nobody has implemented a fully PCI-compliant Kubernetes install yet (they might have done so and not told us). I don't know of anything specific to Docker or Kubernetes that would prevent you from getting your deployment certified.
PCI-DSS compliance can be achieved with third-party solutions.
(Disclaimer - I'm an employee of Twistlock, which offers a PCI-DSS solution; if you're interested, please check the following link - https://info.twistlock.com/guide-to-pci-compliance-for-containers)
I've implemented, and got PCI-DSS Level 1 certified, a K8s cluster serving as the Cardholder Data Environment for the company I work for.
Don't get intimidated by the requirements; there's always a way to make them "not applicable" or to meet them with some elbow grease.
The basics you need to meet to make it compliant are:
use COS (Container-Optimized OS) so you can skip all the node-hardening hassle.
use the --enable-master-authorized-networks flag (beta, although I haven't had any problems with it yet).
manage the network CIDRs yourself, as you need to provide address ranges that don't change for the documentation and show how only those are authorized to access the cluster.
implement a NAT gateway cluster, pass all the K8s traffic through it, and set up an outgoing IDS/IPS solution on those systems; I used Suricata. (It's silly, I know.)
whitelist all outgoing traffic IPs for any API you're calling from your apps, and deny everything else (see the NetworkPolicy sketch below).
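For the egress-whitelisting item, a minimal sketch of the idea as a Kubernetes NetworkPolicy (CIDR, namespace, and names are illustrative, not from my actual setup):

```yaml
# Default-deny egress for every pod in the namespace, except the
# explicitly whitelisted external API endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: payments
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32   # illustrative external API you must reach
      ports:
        - protocol: TCP
          port: 443
# Anything not matched above is denied. In practice you would typically
# also allow DNS egress to the cluster's DNS service.
```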
PS: I know it sounds like BS, but you've got to do it if you want to pass compliance.
PPS: remember to harden the NAT gateways; I used Ansible with the STIG playbook.
These were the trickiest parts, everything else was cumbersome but manageable. Glad to help.
See this article. While the author is referring to "public IaaS", it seems that one could substitute "private Kubernetes".
I am trying to learn both Dart and GCE. I already created a server on GCE, but I don't know how to install Dart, since I can only use Linux commands on the Debian server.
This is mostly about Dart on AppEngine:
You should be able to find all the information here: https://www.dartlang.org/cloud/
I did it using the instructions from this page and the linked pages at the bottom.
The discussions here, https://groups.google.com/a/dartlang.org/forum/#!forum/cloud, provide some useful bits too.
Dart on Compute Engine:
Here is a blog post that covers it pretty well, http://financecoding.github.io/blog/2013/09/30/getting-started-with-dart-on-compute-engine/, but some things have changed since it was written.
There are Docker images (https://github.com/dart-lang/dart_docker) ready to execute Dart scripts.
Just search for information on how to use Docker images with GCE and you should be fine (there should already be a lot available).
Please ask again if you encounter concrete problems.
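For instance, once you have a cluster to run containers on, a smoke test could look like the following; the image name is an assumption based on the dart_docker repository, so check it for the current image names and tags:

```yaml
# Runs the Dart VM once, prints its version, and exits.
apiVersion: v1
kind: Pod
metadata:
  name: dart-version-check
spec:
  restartPolicy: Never
  containers:
    - name: dart
      image: google/dart              # assumed image from dart_docker
      command: ["dart", "--version"]  # smoke test: print the VM version
```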
Dart on AppEngine runs as a Managed VM. Managed VMs work differently from 'real' AppEngine VMs that run natively supported languages like Java, Python, and Go. Managed VMs are in fact Compute Engine instances, but managed by AppEngine: they are launched and shut down depending on the load (and depending on some basic configuration settings in app.yaml, and also on payment settings), while Compute Engine instances are basically always on. You have to manage yourself when instances should be added or removed depending on the load. Kubernetes is a handy tool that makes this easier, but you still have to actually manage your instances. Apart from that, there is not much difference between Managed VMs and Compute Engine instances. One difference from native AppEngine is that you can add any libraries and also binaries to Managed VMs, as on Compute Engine.
There are pricing differences, but I don't know the details about this myself yet.