Docker Daemon per user on host

I have a somewhat unusual requirement: can I have a Docker daemon per user on the host? I want to isolate things so that each individual user has their own Docker daemon, where that user can run and test their own services/images/containers. Basically I need this for a testing environment where each user has their own set of services.
I have seen that there is something called a Docker bridge, but I am not sure if I can extend it for this. Can someone please suggest something?
Edit 1: Can I use docker-machine for this? I am not finding a way to configure it for that purpose.

I was able to achieve this with my own solution. Basically, it is achievable with custom Docker daemon configurations.
This link has all the details: Dockerd
And this one covers securing the TCP socket between the client and the engine: secure docker connection
However, running multiple daemons is still an experimental feature, since global configuration such as iptables is involved. In my case I do not need that, so I disabled it.
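For anyone landing here later, this is roughly what a second, per-user daemon invocation looked like in my case. The socket path, directories, and the user name are placeholders, and you may need different flags for your setup:

    # Start an additional daemon with its own socket, state directories,
    # and PID file, and with the global networking features disabled:
    sudo dockerd \
        --host unix:///var/run/docker-alice.sock \
        --data-root /var/lib/docker-alice \
        --exec-root /var/run/docker-alice \
        --pidfile /var/run/docker-alice.pid \
        --bridge none \
        --iptables=false
    # Note: with the default bridge disabled, containers will need host
    # networking or a manually configured network.

    # The user then points their client at that socket:
    docker -H unix:///var/run/docker-alice.sock ps

Each user gets their own socket (which can also be exposed over TCP and secured with TLS, as in the second link), with images, containers, and volumes kept in separate data roots.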
Note: this is adapted to my use case. If you are in a similar scenario with extra configuration requirements, I recommend reading the Docker documentation, and also searching Stack Overflow if that does not fully answer your questions.

Related

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six Docker containers that are started from a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be best for this scenario.
I have used port forwarding over SSH and know that I could add some resilience through autossh. But I am unsure whether there are other approaches that could achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help here, and how do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already ships with a solution for that: Docker Swarm. Follow this documentation to configure your containers to use a swarm. I've done it and it's very cool!
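As a rough sketch (the manager IP, network name, and service image below are placeholders), Swarm can also encrypt the overlay network's traffic between nodes for you:

    # On the node that should become the manager:
    docker swarm init --advertise-addr <MANAGER-IP>

    # On each other machine, run the "docker swarm join ..." command
    # printed by the init step.

    # Create an overlay network whose inter-node traffic is encrypted:
    docker network create --driver overlay --opt encrypted app-net

    # Attach your services to it:
    docker service create --name web --network app-net nginx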

How can Docker containers communicate in PCF?

I have multiple Docker containers. Locally they talk to each other using a Docker network and a compose.yml file. I have pushed my containers to PCF, but I don't know how to make them talk to each other. Can anyone help me out?
My understanding is that you would utilize Cloud Foundry's container-to-container networking to do this.
https://docs.cloudfoundry.org/concepts/understand-cf-networking.html
By default, no connections are allowed between containers, but you can use the cf CLI to allow connections on specific ports between your applications. Your applications just need to be configured to start and listen on the ports that you allow.
While it's not Docker-specific, there's a good example here.
https://github.com/cloudfoundry/cf-networking-examples/blob/master/docs/c2c-no-service-discovery.md
Using Docker should be minimally different. You'll obviously need to create your own Docker images and make sure that they listen on the correct ports; otherwise it's just an adjustment to how you push the application (i.e. pass the Docker image name to cf push). The cf add-network-policy commands should be the same.
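For illustration, allowing one app to reach another might look something like this (the app names, port, and protocol are placeholders, and the exact syntax depends on your cf CLI version):

    # Allow "frontend" to open TCP connections to "backend" on port 8080.
    # (Older v6 CLIs use --destination-app instead of a positional argument.)
    cf add-network-policy frontend backend --protocol tcp --port 8080

    # Review the configured policies:
    cf network-policies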
Hope that helps!
** UPDATE **
If you are looking for docker-compose-like behavior, i.e. running one command to deploy multiple apps, you can achieve this with cf push and a manifest.yml file.
The manifest.yml file allows you to define multiple applications. Thus you can use it to deploy a series of applications that work together, like you often see with docker-compose.
https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html
You have quite a bit of flexibility with manifest.yml: you can deploy buildpack-based apps or Docker-image-based apps. You can configure routes, bound services, health checks, memory/disk quotas, and, if you're on a new enough version, even processes and sidecars. That said, it can't do 100% of what you can with the cf CLI; for example, you cannot control the above-mentioned network policies using a manifest.yml. If you need to control something not exposed through manifest.yml, the other option would be to script the deployment.
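As a rough sketch (app names, image names, and values are placeholders), a manifest that deploys two Docker-image-based apps with a single cf push could look like this:

    # manifest.yml (hypothetical example)
    applications:
    - name: backend
      docker:
        image: myregistry/backend:latest
      memory: 256M
    - name: frontend
      docker:
        image: myregistry/frontend:latest
      memory: 256M
      routes:
      - route: frontend.example.com

    # Deploy both applications in one go:
    cf push -f manifest.yml

Network policies between the two apps would still be added separately with cf add-network-policy, as described above.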

Why doesn't Docker support multi-tenancy?

I watched this YouTube video on Docker and at 22:00 the speaker (a Docker product manager) says:
"You're probably thinking 'Docker does not support multi-tenancy'...and you are right!"
But no explanation of why is ever actually given. So I'm wondering: what did he mean by that? Why doesn't Docker support multi-tenancy? If you Google "Docker multi-tenancy", you surprisingly get nothing!
One of the key features most people assume with a multi-tenancy tool is isolation between the tenants. They should not be able to see or administer each other's containers and/or data.
The docker-ce engine is a sysadmin-level tool out of the box. Anyone who can start containers with arbitrary options has root access on the host. There are third-party tools like Twistlock that connect via an authz plugin interface, but they only provide coarse access controls: each person is either allowed or disallowed an entire class of activities, like starting containers or viewing logs. Giving users access to either the TLS port or the Docker socket lumps them all into a single category; there is no concept of groups or namespaces for the users connecting to a Docker engine.
For multi-tenancy, Docker would need to add a way to define users and place them in a namespace that is only allowed to act on specific containers and volumes, and to restrict options that allow breaking out of the container, like changing capabilities or mounting arbitrary filesystems from the host. Docker's enterprise offering, UCP, does begin to add these features by using labels on objects, but I haven't had the time to evaluate whether it would provide a full multi-tenancy solution.
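To make the "root access on the host" point concrete, here is one well-known illustration (the image is just an example): anyone who can pass arbitrary options to docker run can mount and modify the host's filesystem.

    # Bind-mount the host's root filesystem into a throwaway container:
    docker run --rm -it -v /:/host alpine sh

    # Inside that shell, /host is the host's filesystem, writable as root,
    # so e.g. a chroot gives a root shell on the host:
    #   chroot /host /bin/sh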
Tough question, and others might know how to answer it better than me. But here it goes.
Let's take this definition of multi-tenancy (source):
Multi-tenancy is an architecture in which a single instance of a software application serves multiple customers.
It's really hard to place Docker in this definition. It can be argued that it's both the instance and the application. And that's where the confusion comes from.
Let's break Docker up into three different parts: the daemon, the container and the application.
The daemon is installed on a host and runs Docker containers. The daemon does actually support multi-tenancy, as it can be used by many users on the same system, each of whom has their own configuration in ~/.docker.
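For example, each user's client-side configuration lives under their own home directory, and its location can be overridden per user (the paths below are only illustrative):

    # Per-user client configuration (credentials, proxies, CLI settings):
    #   /home/alice/.docker/config.json
    #   /home/bob/.docker/config.json

    # The location can be overridden via an environment variable or flag:
    export DOCKER_CONFIG=$HOME/.docker
    docker --config "$HOME/.docker" info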
Docker containers run a single process, which we'll refer to as the application.
The application can be anything. For this example, let's assume the Docker container runs a web application like a forum or something. The forum allows users to sign in and post under their name. It's a single instance that serves multiple customers. Thus it supports multi-tenancy.
What we skipped over is the container and the question of whether or not it supports multi-tenancy. And this is where I think the answer to your question lies.
It is important to remember that Docker containers are not virtual machines. When using docker run [IMAGE], you are creating a new container instance. These instances are ephemeral and immutable. They run a single process, and exit as soon as the process exits. But they are not designed to have multiple users connect to them and run commands simultaneously. That is what multi-tenancy would be. Instead, Docker containers are just isolated execution environments for processes.
Conceptually, echo Hello and docker run echo Hello are the same thing in this example. They both execute a command in a new execution environment (a process vs. a container), neither of which supports multi-tenancy.
I hope this answer is readable and answers your question. Let me know if there is any part that I should clarify.

How to manage many hosts with Shipyard

I am trying to use Shipyard, and mainly I am trying to manage many different hosts in one UI.
But I can't find a way to make Shipyard use an existing swarm token.
Is there any way to add hosts to Shipyard, or is it for one host only?
Thanks.
Solved.
I solved it by editing the Shipyard deployment script. I also added a parameter to make it easy to specify the swarm token. The shipyard-proxy is no longer used.
I also recommend being careful when specifying the port for the Docker daemon, because one of the Shipyard containers may try to use the standard port 2375.
I made a gist with my code on GitHub. Link to gist.
My answer is based on a discussion on GitHub.

Is it feasible to control Docker from inside a container?

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers, and a special management container is then used to manage the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS, with the only state being a systemd config that starts my management container).
The management container is used as a push target for creating new containers based on the source code I send to it (over SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages backups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this, I forward the Docker Unix socket using the -v option when starting the management container.
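For reference, the socket forwarding is roughly this (the image name here is just a placeholder for my management image):

    # Bind-mount the host's Docker socket so the CLI/API inside the
    # management container talks to the host's daemon:
    docker run -d \
        --name deploy-manager \
        -v /var/run/docker.sock:/var/run/docker.sock \
        my-management-image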
Is this a good or a bad idea? Can I run into problems by doing this? I have not read anywhere that it is discouraged, but I also have not found many examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example of use is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
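A bare-bones sketch of the proxying part, leaving out the authentication layer, could be as simple as running socat inside that container (assuming socat is available in the image):

    # Forward TCP port 2375 inside the container to the host's Docker
    # socket that was bind-mounted in; authentication/TLS would have to
    # be layered in front of this.
    socat TCP-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock

Published with -p, that port then exposes the full Docker API, which is exactly why the next answer stresses the security implications.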
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container. This allows the execution of any command inside the container. So, for example, if an intruder controls this container, he controls all other Docker containers.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now you won't even have to compromise the container but just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TLDR;
You could do this and it will work, but then you have to think about security for a bit.
