We are building a web application that connects to a database and does data visualization. It will probably have around 300 users.
We will deploy it with docker.
To increase security, we want to use an OpenLDAP server that stores user credentials for us. The rationale is that it is a tried-and-tested piece of software that is more secure than anything we would code ourselves, and we would not have to bother with hashing algorithms, salts, etc. Also, we could assign roles directly in LDAP.
We are thinking about the following architecture, sketched below (we have to use one single server):
- One docker container with web app
- One docker container running the database
- One docker container running the openLDAP server
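For illustration, a minimal docker-compose.yml for that three-container layout could look roughly like the sketch below. Everything concrete in it - the image names (including osixia/openldap), ports, environment variables and volume names - is an assumption added for illustration, not a vetted configuration:

    version: "3.8"
    services:
      webapp:
        build: ./webapp                # your own web app (placeholder)
        ports:
          - "443:8443"                 # only the web app is published to the outside
        depends_on: [db, ldap]
      db:
        image: postgres:15             # example database, not published on the host
        environment:
          POSTGRES_PASSWORD: change-me
        volumes:
          - db-data:/var/lib/postgresql/data
      ldap:
        image: osixia/openldap:1.5.0   # a commonly used OpenLDAP image (assumption)
        environment:
          LDAP_ORGANISATION: "Example Org"
          LDAP_DOMAIN: "example.org"
        volumes:
          - ldap-data:/var/lib/ldap
          - ldap-config:/etc/ldap/slapd.d
    volumes:
      db-data:
      ldap-data:
      ldap-config:

The main point of the sketch is that only the web app publishes a port; the database and LDAP containers stay reachable only on the internal compose network.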
My question:
- Is OpenLDAP (or LDAP in general) suitable for that, or is there another solution that would encapsulate authentication in a tried and tested package (given that LDAP is primarily built for high concurrent loads, which we do not expect)?
- Would using docker, and hence encapsulating the service, increase security in general (assuming proper implementation)?
Thanks a lot!
Yes, OpenLDAP - and LDAP in general - is suitable for username/password authentication, and you get standard password hashing and password policy enforcement in the same package.
Most of these LDAP features are standardized by the IETF, so you can expect the same from all good LDAP server products, including and especially OpenLDAP.
Main references:
- Authentication: RFC 4513. Also OpenLDAP-specific info;
- Password storage and password hashing: RFC 2307 and draft-stroeder-hashed-userpassword-values. Also OpenLDAP-specific info;
- Password policies: draft-behera-ldap-password-policy. Also OpenLDAP-specific info.
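As a small, hedged illustration of the hashing and authentication parts, the standard OpenLDAP command-line tools can be used like this (the DN below is a made-up example):

    # Generate a salted SHA-1 hash suitable for storing in the userPassword attribute
    slappasswd -h {SSHA}

    # Test a simple bind (i.e. username/password authentication) against the server
    ldapwhoami -x -H ldap://localhost -D "uid=alice,ou=users,dc=example,dc=org" -W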
Using Docker or other kinds of containers (e.g. LXC) is generally a good thing from a security standpoint, as it provides a form of isolation of each container (and therefore of the applications running inside it) from the others and from the host by default. Yet it very much depends on your configuration and environment: there are many ways to loosen container isolation (e.g. enabling certain capabilities, mounting shared volumes, etc.). The Docker daemon in particular must be properly secured, since it is the one process that has, and needs, general privileged access to do its work. Docker security can be further improved by combining Docker's native security features with kernel security features, e.g. SELinux, AppArmor, grsec, etc. More info on Docker Security.
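To make the "it depends on your configuration" point concrete, here is a hedged sketch of the kind of hardening options that can be passed when starting a container; the image name is a placeholder and the right set of flags depends on what the application actually needs:

    # A hardening sketch: read-only root filesystem, all capabilities dropped,
    # no privilege escalation, and basic resource limits. "my-webapp:latest" is a placeholder.
    docker run -d \
      --read-only \
      --cap-drop=ALL \
      --security-opt no-new-privileges \
      --pids-limit 200 \
      --memory 512m \
      my-webapp:latest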
Related
I am posting this question due to lack of experience, and I need professional suggestions. The questions on SO are mainly about how to deploy or host multiple websites using Docker on a single web host. This can be done, but is it ideal for moderate-traffic websites?
I deploy Docker-based containers on my local machine for development. A software container has a copy of the primary application, as well as all dependencies: libraries, languages, frameworks, and everything else.
It becomes easy for me to simply migrate the “docker-compose.yml” or “Dockerfile” to any remote web server. All the software and dependencies get installed and will run just like on my local machine.
(Say) I have a VPS and I want to host multiple websites using Docker. The only thing that I need to configure is the port, so that the domains can be mapped to port 80. For this I have to use an extra NGINX for routing.
But a VPS can be used to host multiple websites without the need for containerisation. So, is there any special benefit to running Docker on web servers like AWS, Google, Hostgator, etc., or is Docker best or ideal only for development on a local machine, and not to be deployed on web servers for hosting?
The main benefits of docker for simple web hosting are imo the following:
- isolation: each website/service might have different dependency requirements (one might require PHP 5, another PHP 7 and another Node.js).
- separation of concerns: if you split your setup into multiple containers, you can easily upgrade or replace one part of it. (Just consider a setup with two websites, which each need a Postgres database. If each website has its own db container, you won't have any issue bumping the Postgres version of one of the websites without affecting the other; a compose sketch after this list illustrates this database-per-site layout.)
- reproducibility: you can build the Docker image once, test it on acceptance, promote the exact same image to staging and later to production. You'll also be able to have the same environment locally as on your server.
- environment and settings: each of your services might depend on a different environment (for example SMTP settings or a database connection). With containers you can easily supply each container its specific environment variables.
- security: one can argue about this one, as containers themselves won't do much for you in terms of security. However, due to easier dependency upgrades, separated networking, etc., most people will end up with a setup which is more secure. (Just think about the db containers again here: these can share a network with your app/website container, and there is no need to expose the port locally.)
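As promised above, a rough compose sketch of the "one database container per website" idea; the image names and versions are illustrative only:

    version: "3.8"
    services:
      site1:
        image: site1-app:latest        # placeholder application image
        networks: [site1-net]
      site1-db:
        image: postgres:13             # site1 can stay on an older version
        networks: [site1-net]          # no "ports:" entry, so the db is never published
      site2:
        image: site2-app:latest        # placeholder application image
        networks: [site2-net]
      site2-db:
        image: postgres:15             # site2 can be upgraded independently
        networks: [site2-net]
    networks:
      site1-net:
      site2-net: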
Note that you should be careful with Docker's port mapping. It uses iptables and will override the settings of most firewalls (like ufw) by default. There is a repo with information on how to avoid this here: https://github.com/chaifeng/ufw-docker
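One simple mitigation (besides the linked ufw-docker approach) is to publish ports only on the loopback interface, so Docker's iptables rules never expose them to the outside; for example:

    # Published only on the loopback interface: reachable from the host, but not from outside
    docker run -d -p 127.0.0.1:8080:80 nginx:alpine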
Also, there are quite a few projects which automate the routing of requests to the applications (in this case containers) and make it very enjoyable and easy. They usually integrate a proper way to do SSL termination as well. I would strongly recommend looking into traefik if you set up a webserver with multiple containers which should all be accessible on ports 80 and 443.
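A minimal sketch of what such a traefik setup could look like in a compose file; the domain names, certificate resolver name and application image are assumptions, and the labels follow the Traefik v2 syntax:

    version: "3.8"
    services:
      traefik:
        image: traefik:v2.10
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.web.address=:80
          - --entrypoints.websecure.address=:443
          - --certificatesresolvers.le.acme.email=admin@example.com
          - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
          - --certificatesresolvers.le.acme.httpchallenge=true
          - --certificatesresolvers.le.acme.httpchallenge.entrypoint=web
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro
          - letsencrypt:/letsencrypt
      site1:
        image: site1-app:latest        # placeholder application image
        labels:
          - traefik.enable=true
          - traefik.http.routers.site1.rule=Host(`site1.example.com`)
          - traefik.http.routers.site1.entrypoints=websecure
          - traefik.http.routers.site1.tls.certresolver=le
    volumes:
      letsencrypt: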
I have a running Keycloak 8 Docker container, but whenever I restart it, all non-offline sessions disappear. As a result, all users are disconnected whenever I update Keycloak.
Causes:
I've read this thread here and understood why access tokens aren't persisted (mainly a performance issue).
As a solution I wanted to use clusters (as recommended here), and I understood that the core part is simply managing Infinispan well.
Ideas:
I first wanted to store the Infinispan data outside the Docker container (in a volume), and then searched for where JBoss saves Infinispan data inside the container, but I didn't find anything.
Secondly, I thought about an SPI to manage user sessions externally, but it seems not to be the right solution, as Infinispan already does a good job.
Setting up a cluster, helped by this article about Cross-Datacenter support in Keycloak and this other one about Keycloak Cross Data Center Setup in AWS, seemed to be a good starting point, but I'm still using Docker and I'm not sure whether it's a good idea for me to build Docker images from those tutorials.
Any more ideas would be welcome :)
I've now tried using a Docker cluster a second time, this time using Docker Swarm, with the info from here:
The PING discovery protocol is used by default in udp stack (which is used by default in standalone-ha.xml). Since the Keycloak image runs in clustered mode by default, all you need to do is to run it:
docker run jboss/keycloak
If you run two instances of it locally, you will notice that they form a cluster.
I deployed, quite simply, 3 instances of Keycloak in clustered mode with an external database (Postgres) using docker stack, and it worked well.
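A sketch of what such a stack file could look like; the environment variable names are the ones documented for the jboss/keycloak image as far as I know, but the values and the overall file are assumptions, not a tested reference setup:

    version: "3.8"
    services:
      postgres:
        image: postgres:12
        environment:
          POSTGRES_DB: keycloak
          POSTGRES_USER: keycloak
          POSTGRES_PASSWORD: change-me
        volumes:
          - pgdata:/var/lib/postgresql/data
      keycloak:
        image: jboss/keycloak:8.0.1
        environment:
          DB_VENDOR: postgres
          DB_ADDR: postgres
          DB_DATABASE: keycloak
          DB_USER: keycloak
          DB_PASSWORD: change-me
          KEYCLOAK_USER: admin
          KEYCLOAK_PASSWORD: change-me
          # depending on the network setup, a JGroups discovery protocol may need to be
          # configured via JGROUPS_DISCOVERY_PROTOCOL / JGROUPS_DISCOVERY_PROPERTIES
        deploy:
          replicas: 3               # three clustered Keycloak instances
        ports:
          - "8080:8080"
    volumes:
      pgdata:

    # deployed with something like: docker stack deploy -c keycloak-stack.yml keycloak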
Put simply, the Keycloak Docker image already handles this use case when using clusters.
For more about the cluster use case, please refer to this tutorial about how to set up a Keycloak Cluster.
At DockerCon 2019, Docker announced the Docker Kubernetes Service (DKS).
Quoted from here:
Docker Enterprise 3.0’s Docker Kubernetes Service (DKS) integrates Kubernetes container orchestration from the developer desktop to the production server.
...It also provides an automated way to install and configure Kubernetes applications across hybrid and multi-cloud deployments. Other capabilities include security, access control, and lifecycle management.
And from here:
The Docker platform includes a secure and fully-conformant Kubernetes environment for developers and operators of all skill levels, providing out-of-the-box integrations for common enterprise requirements while still enabling complete flexibility for expert users.
After some searching and research I haven't succeeded in fully understanding the different solutions and features that DKS has to offer. So, my question is:
What does DKS have to offer regarding topics like security, networking, access management, etc.?
I'll start with what I have discovered so far as an entry point for the discussion, in the hope that others will share their own understanding and experience, and maybe provide some references and examples.
This is very basic - but I'll share what I found so far - starting with the product page as my entry point for research.
Security
- Secure Kubernetes cluster with TLS authentication and encryption.
- Integrated security for the application lifecycle with Docker Content Trust.
- Integration with validated and certified 3rd-party tools (monitoring, logging, storage, networking, etc.).
Access control
- Restricting visibility for different user groups and operating multi-tenant environments - I found only this: restrict services to worker nodes.
- Advanced Access Controls: Docker Enterprise includes integrated RBAC that works with corporate LDAP, Active Directory, PKI certificates and/or SAML 2.0 identity provider solutions - I found only this: Configure native Kubernetes role-based access control (a plain-Kubernetes RBAC sketch follows below).
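As a plain-Kubernetes illustration of what "native role-based access control" means in practice (this is standard Kubernetes RBAC, not anything DKS-specific; the namespace and user name are made up), a role and its binding could look like this:

    # Grants the user "jane" read-only access to pods in the "team-a" namespace
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: team-a
      name: pod-reader
    rules:
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: team-a
      name: read-pods
    subjects:
      - kind: User
        name: jane
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io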
Networking
I found only this, which is related to the installation of CNI plugins.
I think DKS offers much more regarding integration with 3rd-party networking solutions - quoted from the product page:
Out-of-the-box Networking: Docker Enterprise includes Project Calico by Tigera as the “batteries included” Kubernetes CNI plug-in for a highly scalable networking and routing solution. Get access to overlay (IPIP), no-overlay, and hybrid data-plane networking models in addition to native Kubernetes ingress controllers for load balancing.
I watched this YouTube video on Docker and at 22:00 the speaker (a Docker product manager) says:
"You're probably thinking 'Docker does not support multi-tenancy'...and you are right!"
But no explanation of why is ever actually given. So I'm wondering: what did he mean by that? Why doesn't Docker support multi-tenancy? If you Google "Docker multi-tenancy" you surprisingly get nothing!
One of the key features most assume with a multi-tenancy tool is isolation between each of the tenants. They should not be able to see or administer each other's containers and/or data.
The docker-ce engine is a sysadmin-level tool out of the box. Anyone who can start containers with arbitrary options has root access on the host. There are 3rd-party tools like Twistlock that connect with an authz plugin interface, but they only provide coarse access controls: each person is either allowed or disallowed from an entire class of activities, like starting containers or viewing logs. Giving users access to either the TLS port or the docker socket results in the users being lumped into a single category; there's no concept of groups or namespaces for the users connecting to a docker engine.
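To make the "root access on the host" point concrete, anyone allowed to pass arbitrary options to docker run can, for example, mount the host's root filesystem and chroot into it (a well-known demonstration, shown only to illustrate the risk):

    # Anyone who can run this effectively has a root shell on the host
    docker run --rm -it -v /:/host alpine chroot /host /bin/sh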
For multi-tenancy, docker would need to add a way to define users, and place them in a namespace that is only allowed to act on specific containers and volumes, and restrict options that allow breaking out of the container like changing capabilities or mounting arbitrary filesystems from the host. Docker's enterprise offering, UCP, does begin to add these features by using labels on objects, but I haven't had the time to evaluate whether this would provide a full multi-tenancy solution.
Tough question that others might know how to answer better than me. But here it goes.
Let's take this definition of multi tenancy (source):
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.
It's really hard to place Docker in this definition. It can be argued that it's both the instance and the application. And that's where the confusion comes from.
Let's break Docker up into three different parts: the daemon, the container and the application.
The daemon is installed on a host and runs Docker containers. The daemon does actually support multi tenancy, as it can be used by many users on the same system, each of whom has their own configuration in ~/.docker.
Docker containers run a single process, which we'll refer to as the application.
The application can be anything. For this example, let's assume the Docker container runs a web application like a forum or something. The forum allows users to sign in and post under their name. It's a single instance that serves multiple customers. Thus it supports multi tenancy.
What we skipped over is the container and the question whether or not it supports multi tenancy. And this is where I think the answer to your question lies.
It is important to remember that Docker containers are not virtual machines. When using docker run [IMAGE], you are creating a new container instance. These instances are ephemeral and immutable. They run a single process, and exit as soon as the process exits. But they are not designed to have multiple users connect to them and run commands simultaneously. That is what multi tenancy would be. Instead, Docker containers are just isolated execution environments for processes.
Conceptually, echo Hello and docker run echo Hello are the same thing in this example. They both execute a command in a new execution environment (process vs. container), neither of which supports multi tenancy.
I hope this answer is readable and answers your question. Let me know if there is any part that I should clarify.
There are many use cases for Docker, and they all have something to do with portability, testing, availability, etc., which are especially useful for large enterprise applications.
Consider a single Linux server on the internet that acts as a mail, web, and application server - mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
Is it useful, when considering the security of the whole server, to wrap each of the provided services in a Docker container instead of just running them directly on the server (in a chroot environment), or would that be using a sledgehammer to crack a nut?
As far as I understand, the security would really be increased, as the services would be truly isolated, and even gaining root privileges wouldn't allow an attacker to escape the chroot, but the maintenance requirements would increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experiences have you made with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is a long way to go to get a secure and completely isolated environment inside a container. Docker and some other big collaborators like Red Hat have shown a lot of effort and interest in securing containers, and every publicly reported security flaw (about isolation) in Docker has been fixed. Today Docker is not a replacement, in terms of isolation, for hardware virtualization, but there are projects working on hypervisors running containers that will help in this area. This issue is more relevant to companies offering IaaS or PaaS, where they use virtualization to isolate each client.
In my opinion, for a case like the one you propose, running each service inside a Docker container provides one more layer in your security scheme. If one of the services is compromised, there will be one extra lock before an attacker gains access to your whole server and the rest of the services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common Docker image as a base, and you (or somebody else) update that base image regularly, you don't need to update all the Docker containers one by one. Also, if you use a base image that is updated regularly (i.e. Ubuntu, CentOS), the security issues that affect those images will be fixed rapidly, and you'd only have to rebuild and relaunch your containers to update them. Maybe it is extra work, but if security is a priority, Docker may be an added value.
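A minimal sketch of the "common base image" idea mentioned above; the image names and packages are placeholders:

    # base/Dockerfile - shared, regularly rebuilt base for all services
    FROM ubuntu:20.04
    RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

    # mail/Dockerfile - one service built on top of the common base
    FROM my-base:latest
    RUN apt-get update && apt-get install -y postfix && rm -rf /var/lib/apt/lists/*

    # After the base image is updated, rebuild and relaunch the service images:
    # docker build -t my-base:latest base/
    # docker build -t my-mail:latest mail/
    # docker-compose up -d --build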