At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS).
Quoted from here:
Docker Enterprise 3.0’s Docker Kubernetes Service (DKS) integrates
Kubernetes container orchestration from the developer desktop to the
production server.
...It also provides an automated way to install and configure
Kubernetes applications across hybrid and multi-cloud deployments.
Other capabilities include security, access control, and lifecycle
management.
And from here:
The Docker platform includes a secure and fully-conformant Kubernetes
environment for developers and operators of all skill levels,
providing out-of-the-box integrations for common enterprise
requirements while still enabling complete flexibility for expert
users.
After some searching and research, I haven't succeeded in fully understanding the different solutions and features that DKS has to offer. So, my question is:
What does DKS have to offer regarding topics like security, networking, access management, etc.?
I'll start with what I've discovered so far as an entry point for the discussion, hoping that others will share their own understanding and experience, and maybe provide some references and examples.
This is very basic, but I'll share what I found so far, starting with the product page as my entry point for research.
Security
Secure Kubernetes cluster with TLS authentication and encryption.
Integrated security for the application lifecycle with Docker Content Trust.
Integration with validated and certified 3rd-party tools (monitoring, logging, storage, networking, etc.).
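For the Docker Content Trust part, the general mechanism is standard Docker: signing is toggled per shell via an environment variable. A minimal sketch (the registry and image names are placeholders):

```sh
# Enable Docker Content Trust for this shell; pushes are then signed
# and pulls are verified against the publisher's Notary keys.
export DOCKER_CONTENT_TRUST=1

# Push signs the tag (prompting for key passphrases on first use);
# pull refuses unsigned or tampered tags. Names below are examples.
docker push registry.example.com/myteam/myapp:1.0
docker pull registry.example.com/myteam/myapp:1.0
```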
Access control
Restricting visibility for different user groups and operating multi-tenant environments - I found only this: restricting services to worker nodes.
Advanced Access Controls Docker Enterprise includes integrated RBAC that works with corporate LDAP, Active Directory, PKI certificates and/or SAML 2.0 identity provider solutions - I found only this: Configure native Kubernetes role-based access control.
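As far as I understand, the "native Kubernetes role-based access control" part is plain Kubernetes RBAC. A minimal sketch of what that looks like (all names are placeholders), granting a group read-only access to pods in a single namespace:

```sh
# Create a namespace for one tenant/team.
kubectl create namespace team-a

# A Role that can only read pods in that namespace.
kubectl create role pod-reader \
  --verb=get --verb=list --verb=watch \
  --resource=pods \
  --namespace=team-a

# Bind the role to a group (e.g. one synced from LDAP/AD/SAML).
kubectl create rolebinding dev-team-pod-reader \
  --role=pod-reader \
  --group=dev-team \
  --namespace=team-a
```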
Networking
Found only this, which is related to the installation of CNI plugins.
I think DKS offers much more regarding integration with 3rd-party networking solutions - quoted from the product page:
Out-of-the-box Networking Docker Enterprise includes Project Calico by
Tigera as the “batteries included” Kubernetes CNI plug-in for a highly
scalable networking and routing solution. Get access to overlay
(IPIP), no overlay, and hybrid data-plane networking models in
addition to native Kubernetes ingress controllers for load balancing.
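To make the Calico part concrete: with a policy-capable CNI plug-in like Calico installed, standard Kubernetes NetworkPolicy objects are actually enforced. A hedged sketch (namespace and labels are made up for illustration):

```sh
# Only pods labeled app=frontend may reach pods labeled app=api
# in the team-a namespace; all other ingress to them is dropped.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
EOF
```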
Related
I have been struggling while trying to build a Hyperledger Fabric 2.0 network with organizations spread across multiple hosts. The official documentation explains how to deploy two organizations (org1 and org2) using Docker, and how to use the configtxlator tool to add new orgs and peers.
The issue is that in all the documentation examples, the organizations run on the same Docker Engine host, which misses the whole point of distributed systems. Recently I found this blog post that echoes everything I am struggling with:
https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
In this post, the author recommends using docker-swarm to create an overlay network that creates a distributed network among multiple Docker daemon hosts.
However, this post is from 2018, and I am wondering if this is still the best solution available, or if Kubernetes would nowadays be the go-to choice for creating this overlay network?
PS: the network I am building is for academic and research purposes only, related to my PhD studies.
Yes, you can use docker-swarm to deploy the network. docker-swarm is quite easy compared to Kubernetes. Since you mentioned that it is for academic and research purposes only, docker-swarm is fine.
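For reference, a rough sketch of the overlay setup the blog post describes (host IPs, the join token, and container names are placeholders; a Fabric peer would additionally need its usual environment variables and crypto material):

```sh
# On the first host: initialize the swarm; this prints a join token.
docker swarm init --advertise-addr 10.0.0.1

# On each additional host: join the swarm with that token.
docker swarm join --token <token-from-init> 10.0.0.1:2377

# On a manager: create an *attachable* overlay network so plain
# "docker run" containers (not just swarm services) can join it.
docker network create --driver overlay --attachable fabric-net

# Containers started on any swarm host can now reach each other
# by name over the overlay network.
docker run -d --network fabric-net --name peer0.org1.example.com \
  hyperledger/fabric-peer:2.0
```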
Or, if you want to deploy production-grade Hyperledger Fabric, you can use the open-source tool BAF (Blockchain Automation Framework), an automation framework for rapidly and consistently deploying production-ready DLT platforms to cloud infrastructure.
We are building a web application that connects to a database and does data visualization. It will probably have around 300 users.
We will deploy it with docker.
To increase security, we want to use an openLDAP server that stores user credentials for us. The rationale is, that it is a tried and tested piece of software that is more secure than anything we would code ourselves and we would not have to bother with hashing algorithms, salts, etc. Also, we could assign roles directly in LDAP.
We are thinking about the following architecture (we have to use one single server) - a rough sketch of how the containers could be wired together follows the list:
- One docker container running the web app
- One docker container running the database
- One docker container running the openLDAP server
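A minimal sketch of that wiring (image names and credentials are placeholders): one user-defined bridge network so the three containers can reach each other by name, with only the web app published to the outside.

```sh
# Private network shared by the three containers.
docker network create appnet

# Nothing except the web app exposes a port on the host.
docker run -d --name ldap   --network appnet osixia/openldap:latest
docker run -d --name db     --network appnet \
  -e POSTGRES_PASSWORD='change-me' postgres:latest
docker run -d --name webapp --network appnet -p 443:8443 myorg/webapp:latest
```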
My question:
- is OpenLDAP (or LDAP in general) suitable for that, or is there another solution that would encapsulate authentication in a tried and tested package (given that LDAP is primarily built for high concurrent loads, which we do not expect)?
- Would using docker, and hence encapsulating the service, increase security in general (assuming proper implementation)?
Thanks a lot!
Yes, OpenLDAP - and LDAP in general - is suitable for username/password authentication, and you get standard password hashing and password policy enforcement in the same package.
Most of these LDAP features are standardized at IETF, so that you can expect the same from all good LDAP server products, including and especially OpenLDAP.
Main references:
Authentication: RFC 4513. Also OpenLDAP-specific info;
Password storage, password hashing: RFC 2307 and draft-stroeder-hashed-userpassword-values. Also OpenLDAP-specific info;
Password policies: draft-behera-ldap-password-policy. Also OpenLDAP-specific info.
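To make that concrete, a couple of quick checks with the standard OpenLDAP client tools (the DN, host, and password are placeholders):

```sh
# Generate a salted-SHA hash suitable for a userPassword attribute:
slappasswd -h '{SSHA}' -s 'S3cret!'

# Verify that a user can authenticate with a simple bind;
# prints the bound identity on success.
ldapwhoami -H ldap://localhost -x \
  -D 'uid=alice,ou=people,dc=example,dc=org' -w 'S3cret!'
```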
Using Docker or other kinds of containers (e.g. LXC) is generally a good thing from a security standpoint, as it provides a form of isolation of a container (and therefore the applications running inside it) from the other containers and from the host by default. Yet it very much depends on your configuration and environment: there are many ways to loosen container isolation (e.g. enabling certain capabilities, mounting shared volumes, etc.). The Docker daemon in particular must be properly secured, since it is the one process that has and needs general privileged access to do all its powerful deeds.
Docker security can be further improved by combining Docker's native security features with kernel security features, e.g. SELinux, AppArmor, grsec, etc. More info on Docker Security.
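As a hedged example of what "proper implementation" can mean at run time (the image name is a placeholder), you can tighten a container well beyond the defaults:

```sh
# Drop all Linux capabilities, forbid privilege escalation,
# and mount the root filesystem read-only (with a writable /tmp).
docker run -d \
  --cap-drop ALL \
  --security-opt no-new-privileges \
  --read-only \
  --tmpfs /tmp \
  myorg/webapp:latest
```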
Spring Cloud or Kubernetes for service governance?
I want to use Spring Cloud to build microservices, but once many microservices are running in production, there will be many operational problems, such as gray (canary) releases, monitoring, version rollback, and so on. What technology should I use to manage the microservices?
Spring Cloud and Kubernetes are both excellent environments for developing and running microservices, but they are very different in nature and address different concerns. Spring Cloud covers logging, monitoring, and service discovery, but not scaling and high availability, which are very important for a microservice architecture.
Spring Cloud has a rich set of Java libraries for runtime concerns like client-side service discovery, load balancing, configuration updates, and metrics. Kubernetes, by contrast, is not limited to the Java platform and doesn't need any language-specific libraries for service discovery, load balancing, metrics, or scheduled jobs. For some areas, like logging, both rely on third-party tools such as the ELK stack.
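A small illustration of Kubernetes' language-agnostic approach (the image and names are placeholders): service discovery and load balancing work over plain DNS, so no client library is needed in any language:

```sh
# Deploy and expose an app; other pods can now reach it simply
# as http://api (or api.<namespace>.svc.cluster.local).
kubectl create deployment api --image=myorg/api:latest
kubectl expose deployment api --port=80 --target-port=8080

# Scale it out; requests to the service are spread across replicas.
kubectl scale deployment api --replicas=3
```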
Spring Cloud
A rich set of libraries helps developers integrate different services easily.
Limited to Java only.
Scaling and high availability are not achieved unless taken care of by an orchestrator.
Kubernetes
An open-source container management platform that helps create different environments, such as dev, test, and demo.
Allows you to provision resource constraints, scaling, and high availability.
It all depends on the use case. Hope this helps.
References :
https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/
http://blog.christianposta.com/microservices/netflix-oss-or-kubernetes-how-about-both/
As an ISV we have an enterprise solution that extends our existing software for our big customers; they must install and configure an Azure Service Fabric cluster on-premises or even in Azure. Our software works mostly with stateless services and only a couple of stateful ones. It is also multi-tenant, so we can run the software ourselves in a cloud environment.
But we also have a third way of using it: we need to ship our software to non-enterprise customers that run our other software on-premises. This is an issue, since Service Fabric requires multiple machines that those small customers do not have and certainly do not want to have. Sometimes there is a single user of the software, running it all on a single laptop.
I see several possible solutions, or rather options:
1. Rewrite the software.
Maintaining the same code base somehow, hosting it as a Windows service or something similar, e.g. with Topshelf, which makes it relatively easy to host OWIN/Katana-based programs.
Pros
No Service Fabric cluster
Easier installation, for example a windows service
Cons
No stateful services
Multiple visual studio solutions
Developers have to think about way of hosting and Service Fabric being available or not
No reliability and scalability
2. Host on a Single node cluster
Install a cluster as a single node on one machine as the production environment, knowing that reliability and scalability are lost - but that is also the case with option 1.
Pros
One visual studio solution
Only one codebase, requiring no modifications to the code, which is easy for developers
Cons
Not supported by Azure Service Fabric for production
No reliability and scalability
3. Ship a cluster inside a single docker container
I don't know much about Docker, but perhaps it is easy to ship a pre-configured Service Fabric cluster?
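If it helps the discussion: Microsoft has published a single-machine "onebox" development image, so a sketch of option 3 could look like the following (the image name and ports are assumptions on my part, and it is dev-only, not supported for production):

```sh
# 19080 = Service Fabric Explorer, 19000 = client endpoint.
# Image name is an assumption and may have changed.
docker run -d --name sf-onebox \
  -p 19080:19080 -p 19000:19000 \
  microsoft/service-fabric-onebox
```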
What do you guys (and girls) think? I would love option two or three, but some of our developers even think option 1 is the better one, which I doubt.
Some related links I found:
Option 2: Azure Service Fabric Single Virtual Machine
Option 3: https://github.com/Azure/service-fabric-issues/issues/409
You could investigate using a single physical server to run 3 to 5 virtual machines, and run your cluster on those. You won't have true high availability, but you can still enjoy many SF features (stateful services, rolling upgrades, replication). No need to rewrite any software.
Istio, "an open platform to connect, manage, and secure microservices," looks very interesting, but it currently supports only Kubernetes. I couldn't find a roadmap or any mention of future support for other container management platforms, specifically Docker Swarm.
The project's github site does state the following explicitly:
Istio currently only supports the Kubernetes platform, although we
plan support for additional platforms such as Cloud Foundry, and Mesos
in the near future.
I don't know about the plans for Docker Swarm; however, I believe it would probably figure in them.
The roadmap at https://istio.io/docs/reference/release-roadmap.html shows that VM support is planned for 0.2
You can see that work is happening in the Cloud Foundry world when you see issues such as this.
The Docker team recently indicated they are very interested in looking at Istio and Docker Swarm integration, so stay tuned - this may happen in the next few quarters, before you know it :)