I am working on a project where a requirement just came in to create a pod for an MTA/SMTP server within a Kubernetes cluster, such that it can be reached through service discovery like other services.
I didn't find anything concrete that I could follow to set this up on a Kubernetes cluster. My question is: if there's a way to do this, then how? Also, is it a good idea to set it up as a pod?
I will appreciate any help.
Thanks
You sure can. It's more a matter of opinion and really depends on how you divide your resources into containers, VMs, or bare-metal machines.
Some might argue that running something like Postfix is more efficient in Kubernetes because the CPU/memory resources will mostly be used when Postfix is actually sending or receiving (a more efficient way of processing the mail queues). There are a few resources that you may be able to follow. For example:
https://www.tauceti.blog/post/run-postfix-in-kubernetes/
https://blog.mi.hdm-stuttgart.de/index.php/2019/08/26/creating-an-email-server-environment-on-kubernetes/
Postfix Helm chart: https://hub.helm.sh/charts/halkeye/postfix
It's actually relatively simple to deploy a Postfix MTA relay (aka a Postfix null client) on Kubernetes.
There's:
an image available on Docker Hub,
a chart on Artifact Hub and
source on GitHub.
(Disclaimer: I am the author of the chart and image. There are other alternatives, listed on the aforementioned GitHub page.)
The service can then be accessed simply via smtp-server:587 or a similar address resolved through cluster DNS. The biggest issue you're going to face is configuring the outside services properly (SPF and DNS records, registering your IP block with Microsoft, and so on) to avoid having your email end up in spam.
Most of it is explained nicely in the README.
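In case it helps to picture what the chart produces, here is a minimal, hand-written sketch of a relay Deployment plus Service; the image name and the RELAYHOST variable are placeholders I made up for illustration, so take the real values from the chart and README above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: smtp-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: smtp-server
  template:
    metadata:
      labels:
        app: smtp-server
    spec:
      containers:
        - name: postfix
          image: example/postfix-relay:latest   # placeholder image; use one of the images linked above
          ports:
            - containerPort: 587                # submission port exposed by the relay
          env:
            - name: RELAYHOST                   # hypothetical variable naming the upstream smarthost
              value: "smtp.example.com:587"
---
apiVersion: v1
kind: Service
metadata:
  name: smtp-server
spec:
  selector:
    app: smtp-server
  ports:
    - port: 587
      targetPort: 587

With a Service named smtp-server, any pod in the same namespace reaches the relay at smtp-server:587 through ordinary cluster DNS.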
I'm trying to create an assignment for students that contains the following:
A Docker image with issues that have to be scanned and remedied (using an open-source scanner in Kubernetes).
(Maybe) A sample attack scenario that can exploit those vulnerabilities.
The problem arises when I try to find, or create, a suitable vulnerable image. I cannot find a database of security issues at all. I really bent over backwards thinking of a suitable search phrase for Google, but everything leads merely to blog posts about how to scan an image.
I expected a database that might contain multiple security issues and what causes them. I'd also expect some way to discern which are the most popular ones.
Do you have the source I require?
Maybe you can just offer me 3-4 common security issues that are good to know and educational for a first brush with Docker? (And how to create those issues?)
The whole situation would probably have been easier if I were an expert in the field, but this task is itself my assignment as a student. (As students, we design assignments for each other.)
It looks like you are looking for container security hardening and Kubernetes security options.
You can use tools like the following (a sketch of the kind of misconfiguration they flag comes after the list):
kubesec - Security risk analysis for Kubernetes resources
checkov - Prevent cloud misconfigurations and find vulnerabilities during build-time in infrastructure as code, container images and open-source packages
Trivy - vulnerabilities, misconfigurations, secrets, SBOM in containers, Kubernetes, code repositories, clouds and more
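To make the assignment concrete, here is a sketch (everything named here is made up for illustration) of a deliberately misconfigured pod spec with a handful of classic issues that the tools above will flag:

apiVersion: v1
kind: Pod
metadata:
  name: vulnerable-demo
spec:
  hostNetwork: true                # issue 1: pod shares the node's network namespace
  containers:
    - name: app
      image: nginx:1.16            # issue 2: outdated image with known CVEs
      securityContext:
        privileged: true           # issue 3: full access to the host's devices
        runAsUser: 0               # issue 4: process runs as root inside the container
      volumeMounts:
        - name: host-root
          mountPath: /host         # issue 5: host filesystem mounted into the pod
  volumes:
    - name: host-root
      hostPath:
        path: /

Privileged mode, running as root, mounting the host filesystem, host networking, and an outdated base image are exactly the kinds of findings students can discover with a scanner and then remediate one by one.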
If you are looking for questions you can set, here is a CKS (Certified Kubernetes Security Specialist) exam-style question:
There are a number of pods/containers running in the "spectacle" namespace.
Identify and delete the pods which have CRITICAL vulnerabilities.
For this, the open-source tool Trivy comes into the picture; it scans the image that you will be using in your Kubernetes deployment or in Docker:
trivy image --severity CRITICAL nginx:1.16 (the image running in the container)
Here is a list of questions you can build a lab out of: https://github.com/moabukar/CKS-Exercises-Certified-Kubernetes-Security-Specialist/tree/main/7-mock-exam-questions
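If you'd rather have the scan run inside the cluster than from a workstation, one possible variation (my own sketch, not part of the linked exercises) is a one-off Job wrapping the same Trivy command; aquasec/trivy is the upstream image and the Job name is arbitrary:

apiVersion: batch/v1
kind: Job
metadata:
  name: scan-nginx
spec:
  backoffLimit: 0                  # don't retry the scan on a non-zero exit
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trivy
          image: aquasec/trivy:latest
          args: ["image", "--severity", "CRITICAL", "nginx:1.16"]

kubectl logs job/scan-nginx then shows the CRITICAL findings for that image.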
I'm starting with Kubernetes (through GKE) and I want to set up GitLab Review Apps.
My use case is quite simple. I've read tons of articles, but I could not find clear explanations and best practices on the way to do it. This is the reason why I'm asking here.
Here is what I want to achieve:
I have a PHP application, based on Symfony 4, versioned on my GitLab CE instance (self-hosted)
I have set up my Kubernetes cluster on GKE and connected it to GitLab
I want, on each merge request, to deploy a new environment on my cluster where I am able to test the application and the new feature (this is the principle of Review Apps).
As far as I've read, I've only found simple implementations of it. What I want to do is deploy to a new LAMP (or LEMP) environment to test my new feature.
The thing that I don't understand is how to proceed with deploying my application.
I'm able to write Dockerfiles, store them in my GitLab registry, etc.
In my use case, what is the best way to proceed?
In my application repository, do I have to store a Dockerfile which includes all my LAMP configuration (a complete image with my whole LAMP setup)? I don't like this approach; it seems strange to me.
Or do I have to store different custom images (for Apache, MySQL, PHP-FPM, Redis) in my registry, then pull and deploy them on GKE during the review stage in my .gitlab-ci.yml file?
I'm a little bit stuck on that and I can't share code because it's more about the way to handle everything.
If you have any explanations to help me, it would be great!
I can, of course, explain a little bit more if needed!
Thanks a lot for your help.
Happy new year!
I'm looking for a PaaS solution and lately I've been looking into Pivotal Cloud Foundry and OpenShift Origin, trying to compare them.
And honestly, it feels like both offer pretty much the same functionality, to the point where I don't really see anything that is truly specific to one of them.
They go about things differently, but the functionality remains the same.
The biggest difference seems to be the runtimes these technologies use: OpenShift uses Kubernetes, while PCF uses Diego, and PCF also has its own container solution.
So here comes my question. What are the differences?
Is there a killer feature in one of them that I'm missing?
Why should I consider using Rocket instead of Docker in our development pipeline? We would like to use Docker to create testable containers, but now there is Rocket, which claims to do the same. If we want to start with containerization, should we seriously consider Rocket, given that it still seems pretty new?
There is not much information about Rocket, so I'm not clear where it stands now in 2015.
UPDATE: from https://coreos.com/blog/app-container-and-the-open-container-project/
As we participate in OCP, our primary goals are as follows:
Users should be able to package their application once and have it work with any container runtime (like Docker, rkt, Kurma, or Jetpack)
The standard should fulfill the requirements of the most rigorous security and production environments
The standard should be vendor neutral and developed in the open
Rocket is officially dead: https://github.com/rkt/rkt/issues/4024
After the acquisition by Red Hat, the new owner is concentrating its efforts on https://podman.io/
podman provides rootless containers, something that Docker strove to achieve for a long time (according to the comment below, they finally managed it).
As with most competing products, both have their advantages and disadvantages.
Docker Hub offers a public registry where Docker images can be pushed and pulled with ease.
There is also now a free registry offered by GitLab! It's really good.
A core issue at the moment is security. Docker now scans its images for security flaws and reports on the security status of each image.
With Rocket, image signatures are cross-checked against the signature of the publisher to see if they have been tampered with. This affords a degree of confidence.
For a fuller discussion on security see https://bobcares.com/blog/docker-vs-rkt-rocket/
With regard to standards, it seems that the OCI (Open Container Initiative) has been adopted by the big players and will pave the way forward for containerisation standardisation.
We are creating a new version of a payment gateway processor and we want to use Docker containers with Kubernetes, but we are worried about whether Kubernetes and Docker containers can meet the PCI DSS requirements.
We can't find anything clear about this in the PCI DSS specifications.
Reiterating Tim's comment above: as far as I know, nobody has implemented a fully PCI-compliant Kubernetes install yet (they might have done so and not told us). I don't know of anything specific to Docker or Kubernetes that would prevent you from getting your deployment certified.
PCI-DSS compliance can be helped along by third-party solutions.
(Disclaimer: I'm an employee of Twistlock, which has brought a PCI-DSS solution for containers to market; if you're interested in it, please check the following link - https://info.twistlock.com/guide-to-pci-compliance-for-containers)
I've implemented, and gotten PCI-DSS Level 1 certified, a K8s cluster serving as a Cardholder Data Environment for the company I work for.
Don't get intimidated by the requirements; there's always a way to make them "not applicable" or to meet them with some elbow grease.
The basics you need to meet to make it compliant are:
use COS (Container-Optimized OS) so you can skip all the node-hardening hassle.
use the --enable-master-authorized-networks flag (still beta, although I haven't had any problems with it yet).
manage the network CIDRs yourself, as you need to provide address ranges that don't change for the documentation, and show that only those are authorized to access the cluster.
you must implement a NAT gateway cluster, pass all the K8s traffic through it, and set up a silly outgoing IDS/IPS solution on those systems; I used Suricata. (It's silly, I know.)
you need to whitelist the outgoing traffic IPs for any API you eventually call from your apps, and deny everything else (one way to express this is sketched below).
PS: I know it sounds like BS, but you've got to do it if you want to pass compliance.
PPS: remember to harden the NAT gateways; I used Ansible with the STIG playbook.
These were the trickiest parts, everything else was cumbersome but manageable. Glad to help.
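To illustrate the egress whitelisting point: in my case the enforcement happened at the NAT gateways, but a NetworkPolicy along these lines is one way to express the same "allow only these IPs, deny everything else" intent inside the cluster (the namespace, CIDR, and port below are placeholders for your own values):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: restrict-egress
  namespace: cde                      # placeholder name for the cardholder data environment namespace
spec:
  podSelector: {}                     # applies to every pod in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 203.0.113.10/32     # placeholder: a whitelisted external API
      ports:
        - protocol: TCP
          port: 443
    - ports:                          # still allow DNS lookups
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53

Because no other egress rule matches, everything not listed is denied for the selected pods.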
See this article. While the author refers to "public IaaS", it seems that one could substitute "private Kubernetes".