How to manage many hosts with shipyard - docker

I am trying to use Shipyard, mainly to manage many different hosts in one UI.
But I can't find a way to make Shipyard use an existing Swarm token.
Is there any way to add hosts to Shipyard, or is it for one host only?
Thanks.

Solved.
I solved it by editing the Shipyard deployment script. I also added a parameter to easily specify the Swarm token. The shipyard-proxy is no longer used.
I also recommend being careful when specifying the port for the Docker daemon, because one of the Shipyard containers may try to use the standard 2375 port.
I made a gist with my code on GitHub: Link to gist.
My answer is based on a discussion from GitHub.
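For context, this is roughly how hosts are joined to a legacy (standalone) Swarm cluster with a token, which is what a Shipyard instance pointed at the Swarm manager would then see; the addresses, port and token below are placeholders, not values from the gist.
```
# On each host to be managed (legacy standalone Swarm, pre swarm-mode);
# <host-ip> and <cluster-token> are placeholders.
docker run -d swarm join --addr=<host-ip>:2375 token://<cluster-token>

# On the manager host, publish the Swarm manager on a non-standard port,
# since 2375 may already be taken by a Shipyard container (see note above).
docker run -d -p 3375:2375 swarm manage token://<cluster-token>
```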

Related

Does docker-compose have something similar to service accounts and kubernetes-client library?

By creating service accounts for pods, it is possible to access the Kubernetes API of the whole cluster from any pod. Kubernetes client libraries implemented in different languages make it possible to have a pod in the cluster that serves this purpose.
Does docker-compose have something similar to this? My requirement is to control the life cycle (create, list, scale, destroy, restart, etc.) of all the services defined in a compose file. As far as I've searched, no such feature is available for Compose.
Or does docker-swarm provide any such features?
Docker provides an API which can be used to interact with the daemon. In fact, that is exactly what docker-compose uses to achieve its functionality.
Docker does not provide fine-grained access control like Kubernetes does, though. But you can mount the Docker socket into a container and make use of the API. A good example of that is Portainer, which provides a web-based UI for Docker.
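A minimal sketch of that pattern, assuming curl is available inside the container; the endpoints are standard Docker Engine API routes:
```
# Bind-mount the host's Docker socket into a throwaway container.
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine sh

# Inside the container (after e.g. `apk add curl`), talk to the Engine API:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json          # list containers
curl --unix-socket /var/run/docker.sock -X POST http://localhost/containers/<id>/restart

# Portainer is started the same way:
docker run -d -p 9000:9000 -v /var/run/docker.sock:/var/run/docker.sock portainer/portainer
```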

How to connect and encrypt traffic between Docker containers running on different servers?

I currently have six Docker containers that are started from a docker-compose file. Now I wish to move some of them to a remote machine and enable remote communication between them.
The problem is that I also need to add a layer of security by encrypting their traffic.
This is for a production website and needs to be very stable, so I am unsure which protocols/approaches would be better for this scenario.
I have used port forwarding over SSH and know I could also add some resilience with autossh. But I am unsure whether there are other approaches that could achieve the same goal while also taking stability and performance into account.
What protocols/approaches could help with this? How do they differ?
I would not recommend manually configuring Docker container connections across physical servers, because Docker already contains a solution for that called Docker Swarm. Follow this documentation to configure your containers to use a swarm. I've done it and it's very cool!
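A minimal sketch of that approach, assuming swarm mode and hypothetical service/image names; traffic between services on an encrypted overlay network is encrypted between hosts:
```
# On the current machine: initialise the swarm and get a join token.
docker swarm init --advertise-addr <manager-ip>

# On the remote machine: join the swarm with the token printed above.
docker swarm join --token <worker-token> <manager-ip>:2377

# Create an overlay network with data-plane encryption.
docker network create --driver overlay --opt encrypted app_net

# Attach the services to it; cross-host traffic on app_net is encrypted.
docker service create --name api --network app_net my-api-image
docker service create --name web --network app_net -p 80:80 my-web-image
```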

Docker Daemon per user on host

I have an unusual thing to configure: can I have a Docker daemon per user on the host? I want to isolate the processes so that each individual user has his own Docker daemon on which he can run his own services/images/containers and test them. Basically, I need this for a testing environment where each user has his own set of services.
I could see that there is something called a Docker bridge, but I am not sure if I can extend it. Can someone please suggest something?
Edit 1: Can I use docker-machine for this? But I am not finding a way to configure it.
I was able to achieve this with my own solution. Basically, this is easily achievable with custom Docker daemon configurations.
This link has all the details: dockerd.
And this one covers securing the TCP socket between client and engine: secure docker connection.
However, running multiple daemons is still an experimental feature, since global configuration such as iptables is involved. In my case I do not need it, hence I disabled it.
Note: this is adapted to my use case. If you have a similar scenario with extra configuration requirements, I recommend reading the Docker documentation and also this Stack Overflow question if that does not satisfy your thirst.
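A minimal sketch of what such a per-user daemon invocation might look like; the socket paths, directories and user name are illustrative, not taken from the answer:
```
# Second Docker daemon for user "alice": separate socket, data root,
# exec root and PID file; global bridge/iptables handling disabled,
# matching the note above about skipping the global configuration.
sudo dockerd \
  --host unix:///var/run/docker-alice.sock \
  --data-root /var/lib/docker-alice \
  --exec-root /var/run/docker-alice \
  --pidfile /var/run/docker-alice.pid \
  --bridge=none --iptables=false &

# The user then points a normal Docker client at that daemon:
docker -H unix:///var/run/docker-alice.sock run --rm hello-world
```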

How can I link a Container Group with a Container?

Note: this is a question related to Docker support in Bluemix.
I know how to link a container with another container, using the --link parameter when starting the second container with the ice run command.
But I haven't found a way to link them when using a Container Group. I read the docs and checked the ice command help with no luck.
The scenario I am trying to achieve is a front-end Container Group linked to a single back-end container. Any idea or suggestion about how to do it?
This is not supported yet in the current version of IBM Containers.
In the meantime, you can consider creating yourself the environment variables that linking would create.
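For illustration only (shown with docker run syntax, and assuming a back-end alias of backend exposing port 8080 at 10.0.0.5), the variables that --link would have injected look roughly like this and can be set by hand on the group's containers:
```
# Hypothetical example: manually recreating the environment variables
# that --link backend:backend would have injected into the front end.
docker run -d \
  -e BACKEND_PORT=tcp://10.0.0.5:8080 \
  -e BACKEND_PORT_8080_TCP=tcp://10.0.0.5:8080 \
  -e BACKEND_PORT_8080_TCP_ADDR=10.0.0.5 \
  -e BACKEND_PORT_8080_TCP_PORT=8080 \
  -e BACKEND_PORT_8080_TCP_PROTO=tcp \
  my-frontend-image
```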

Is it feasible to control Docker from inside a container?

I have experimented with packaging my site-deployment script in a Docker container. The idea is that my services will all be inside containers and that a special management container manages the other containers.
The idea is that my host machine should be as dumb as absolutely possible (currently I use CoreOS, with the only state being a systemd config that starts my management container).
The management container is used as a push target for creating new containers based on the source code I send to it (using SSH, I think; at least that is what I use now). The script also manages persistent data (database files, logs and so on) in a separate container and manages back-ups for it, so that I can tear down and rebuild everything without ever touching any data. To accomplish this I forward the Docker Unix socket into the management container using the -v option when starting it.
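Concretely, the bind-mount I mean looks roughly like this (the image name is a placeholder):
```
# Start the management container with the host's Docker socket bind-mounted in,
# so the deployment script inside it can drive the host's Docker daemon.
docker run -d --name manager \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-management-image
```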
Is this a good or a bad idea? Can I run into problems by doing this? I did not read anywhere that it is discouraged, but I also did not find a lot of examples of others doing this.
This is totally OK, and you're not the only one to do it :-)
Another example is to use the management container to handle authentication for the Docker REST API. It would accept connections on an EXPOSE'd TCP port, itself published with -p, and proxy requests to the UNIX socket.
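As an illustration of that proxying idea (not necessarily this answer's exact setup; socat and the alpine/socat image are assumptions on my part):
```
# A container with the host socket bind-mounted in forwards a published
# TCP port to it; an authentication layer would sit in front of this.
docker run -d --name docker-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 2375:2375 \
  alpine/socat TCP-LISTEN:2375,fork,reuseaddr UNIX-CONNECT:/var/run/docker.sock
```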
As this question is still of relevance today, I want to answer with a bit more detail:
It is possible to work with this setup, where you pass the Docker socket into a running container. This is done by many solutions and works well. BUT you have to think about the problems that come with it:
If you want to use the socket, you have to be root inside the container, and the socket allows the execution of any Docker command, which is effectively root access to the host. So, for example, if an intruder controls this container, he controls all the other Docker containers.
If you expose the socket on a TCP port as suggested by jpetzzo, you will have the same problem, only worse, because now an attacker does not even have to compromise the container but just the network. If you filter the connections (as suggested in his comment), the first problem remains.
TL;DR:
You could do this and it will work, but then you have to think about security for a bit.
