Proprietary Docker Containers

I'm looking for a way to distribute my applications in Docker containers/stacks. These will go to clients who should be able to start and stop the containers; however, I would prefer that they not reverse engineer the content within the containers or run the containers on a host other than the one they are shipped on. What's the most effective method of distributing containers to customers?
So far as I can tell, securing the host and having the application follow traditional licensing methods is about as close as I'm going to get, and Docker may not provide any added benefit.

Related

Container Orchestration and some Docker functions

I am familiarizing myself with the architecture and practices used to package, build, and deploy software, or at least small pieces of software.
If I end up mixing concepts with specific tools (sometimes it is unavoidable), please let me know where I am wrong.
Along the way, I have been reading and learning about the terms image and container and how they relate to each other, in order to start building software workflows in the best possible way.
And I have a question about service orchestration in the context of Docker:
Containers are lightweight, portable encapsulations of an environment in which we have all the binaries and dependencies we need to run our application. OK.
I can set up communication between containers using container links (the --link flag).
I can replace the use of container links with docker-compose, which automates my service workflow and runs multiple containers from a .yaml configuration file.
And I have been reading about the term container orchestration, which describes the relationships between containers when we have distinct pieces of software kept separate from each other, and how these containers interact as a system.
Well, I suppose I've read the documentation well enough :P
My question is:
At the Docker level, are container links and docker-compose a form of container orchestration?
Or, if I want to do container orchestration with Docker, should I use docker-swarm?
You should forget you ever read about container links. They've been obsolete in pure Docker for years. They're also not especially relevant to the orchestration question.
Docker Compose is a simplistic orchestration tool, but I would in fact class it as an orchestration tool. It can start up multiple containers together and, within that stack, restart individual containers if their configurations change. It is fairly oriented towards Docker's native capabilities.
Docker Swarm is mostly just a way to connect multiple physical hosts together in a way that docker commands can target them as a connected cluster. I probably wouldn't call that capability on its own "orchestration", but it does have some amount of "scheduling" or "placement" ability (Swarm, not you, decides which containers run on which hosts).
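As a small illustration of why links aren't needed any more, a user-defined bridge network gives containers name-based connectivity out of the box (the network name and the web image below are just placeholders):
    # containers attached to the same user-defined network can resolve
    # each other by container name; no --link required
    docker network create appnet
    docker run -d --name db --network appnet postgres:15
    docker run -d --name web --network appnet -e DB_HOST=db my-web-image
    # inside "web", the hostname "db" resolves to the database container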
Of the other things I might call "orchestration" tools, I'd probably divide them into two camps:
General-purpose system automation tools that happen to have some Docker capabilities. You can use both Ansible and Salt Stack to start Docker containers, for instance, but you can also use these tools for a great many other things. They have the ability to say "run container A on system X and container B on system Y", but if you need inter-host communication or other niceties then you need to set them up as well (probably using the same tool).
Purpose-built Docker automation tools like Docker Compose, Kubernetes, and Nomad. These tend to have a more complete story around how you'd build up a complete stack with a bunch of containers, service replication, rolling updates, and service discovery, but you mostly can't use them to manage tasks that aren't already in Docker.
Some other functions you might consider:
Orchestration: How can you start multiple connected containers all together?
Networking: How can one container communicate with another, within the cluster? How do outside callers connect to the system?
Scheduling: Which containers run on which system in a multi-host setup?
Service discovery: When one container wants to call another, how does it know who to call?
Management plane: As an operator, how do you do things like change the number of replicas of some specific service, or cause an update to a newer image for a service?
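To make the Compose side of this concrete, here is a minimal sketch (service and image names are made up) that touches the orchestration, networking, and service-discovery points above:
    # write a docker-compose.yml; on Compose's default network, "web"
    # can reach "db" simply by using the service name as a hostname
    cat > docker-compose.yml <<'EOF'
    services:
      web:
        image: my-web-image          # placeholder image
        ports:
          - "8080:8080"              # networking: expose to outside callers
        environment:
          DB_HOST: db                # service discovery: resolve by service name
      db:
        image: postgres:15
    EOF
    docker-compose up -d             # orchestration: start both containers together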

Manage docker containers in a college lab environment

Is there a proper way to manage and configure Docker containers in a college lab environment?
I requested that Docker be installed so that I could experiment with it for a project, but after speaking with our sysadmin, it seems very complicated. Wondering if SO has any insight.
Some exceptions that need to be handled:
Students will download images, which may be bad
Students may leave images running indefinitely
Some containers will require elevated privileges, for networking/IO/et cetera
Students will make their own images, so images may be buggy; if docker is given a sticky permission bit or an elevated user group, this may lead to a breach
One of the solutions that comes to mind is to just allow students to use a hypervisor within which they can install whatever software they like, including docker (we currently cannot do so), but that kinda bypasses the advantage of lightweight containers.
Your sysadmin's concerns are reasonable, but using Docker should add only minor refinements to your existing security practices.
If your students have internet access today from these machines, then they can:
download binaries that may be bad
leave processes running indefinitely
run processes that require elevated privileges
create buggy or insecure binaries
Containers provide some partitioning between processes on a machine, but essentially all that happens is that namespaces are created and Linux processes run in them; the name "containers" is slightly misleading: ps aux on the host will show you all the processes running on the machine, including container-based processes.
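You can see this for yourself with a throwaway container (assuming Docker is already installed; the image is just an example):
    # start a container that just sleeps, then look for it from the host
    docker run -d --name demo alpine sleep 300
    ps aux | grep 'sleep 300'    # the container's process appears in the host's process list
    docker rm -f demo            # clean up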
So... Assuming you still need to control what students are downloading from the Internet and what roles they have on the machines:
Private image registries may be used, either in the cloud or locally
Registries can be coupled with vulnerability tools to help identify bad images
Tidying students' "sessions" will cover the processes in Docker containers too
Handling privilege escalation isn't complex (it's different, but not complex)
Using some form of VM virtualization on a bare-metal machine is a good idea
If you were to use cloud-based VMs (or containers), you could destroy these easily
One area where I find Docker burdensome is managing the container life-cycle (removing old containers, tidying up images), but this should be manageable.
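A periodic cleanup job along these lines (run from cron, say) keeps that in check; the 24-hour window is only an example:
    # reclaim space: remove stopped containers, unused images and networks older than 24h
    docker system prune --all --force --filter "until=24h"
    # stop anything that has been running for more than a day (a crude reaper)
    docker ps --format '{{.ID}} {{.RunningFor}}' \
      | awk '/days|weeks|months/ {print $1}' \
      | xargs -r docker stop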

Advantages of dockerizing a Java Spring Boot application?

We are working with a dockerized Kafka environment. I would like to know the best practices for deploying Kafka connectors and Kafka Streams applications in such a scenario. Currently we deploy each connector and stream as a Spring Boot application started as a systemd service via systemctl. I do not see a significant advantage in dockerizing each Kafka connector and stream. Please share your insights on this.
To me the Docker vs non-Docker thing comes down to "what does your operations team or organization support?"
Dockerized applications have an advantage in that they all look / act the same: you docker run a Java app the same way as you docker run a Ruby app. Whereas with an approach of running programs with systemd, there's not usually a common abstraction layer around "how do I run this thing?"
Dockerized applications may also abstract some small operational details, like port management - i.e. making sure all your apps' management.ports don't clash with each other. An application in a Docker container will listen on one port inside the container, and you can expose that port as some other number outside (either random, or one of your choosing).
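For example, several copies of the same (hypothetical) Spring Boot image can all listen on 8080 inside their containers while being published on different host ports:
    # both containers listen on 8080 internally; the host maps them to different ports
    docker run -d --name connector-a -p 18080:8080 my-connector-image
    docker run -d --name connector-b -p 18081:8080 my-connector-image
    # or let Docker choose a free host port
    docker run -d --name connector-c -p 8080 my-connector-image
    docker port connector-c 8080     # shows which host port was picked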
Depending on the infrastructure support, a normal Docker scheduler may auto-scale a service when that service reaches some capacity. However, in Kafka Streams applications the concurrency is limited by the number of partitions in the Kafka topics, so scaling up will just mean some consumers in your consumer groups go idle (if there are more consumers than partitions).
But it also adds complications: if you use RocksDB as your local store, you'll likely want to persist that outside the (disposable, and maybe read-only!) container. So you'll need to figure out how to do volume persistence, operationally / organizationally. With plain ol' JARs with systemd... well, you always have the hard drive, and if the server crashes either it will restart (physical machine) or hopefully it will be restored from some instance block storage.
By this I mean to say: Kafka Streams apps are not stateless web apps serving HTTP traffic, where auto-scaling always buys you more capacity. The people making these decisions at an organization or operations level may not fully know this. Then again, if everyone writes Docker stuff, then the organization / operations team "just" have some Docker scheduler clusters (like a Kubernetes cluster, or an Amazon ECS cluster) to manage, and don't have to manage VMs as directly anymore.
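If you do containerize a Kafka Streams app, a rough sketch of the volume approach for that RocksDB state might look like this (the image name and mount path are placeholders; the path would match whatever the app's state.dir is configured to):
    # keep the state store on a named volume so a recreated container can reuse it
    docker volume create kstream-state
    docker run -d --name my-kstream-app \
      -v kstream-state:/var/lib/kafka-streams \
      my-kstream-image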
Dockerizing plus clustering with Kubernetes provides many benefits, like auto-healing and automatic horizontal scaling.
Auto-healing: if the Spring application crashes, Kubernetes will automatically run another instance and ensure the required number of containers is always up.
Automatic horizontal scaling: if you get a burst of messages, you can tune the Spring applications to auto-scale up or down using an HPA (Horizontal Pod Autoscaler), which can also use custom metrics.
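As a rough illustration (the deployment name, image, and numbers are placeholders), the self-healing comes from the Deployment's replica count and the scaling from an HPA:
    # keep 3 replicas running; Kubernetes replaces crashed pods automatically
    kubectl create deployment my-connector --image=my-connector-image --replicas=3
    # scale between 3 and 10 replicas on CPU usage (for Streams apps, keep the
    # maximum at or below the topic's partition count, as noted above)
    kubectl autoscale deployment my-connector --min=3 --max=10 --cpu-percent=75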

Are Docker containers safe enough to run third-party untrusted containers side by side with a production system?

We plan to allow execution of third-party microservice code on our infrastructure, interacting with our API.
Is dockerizing safe enough? Are there solutions for tracking the resources (network, RAM, CPU) a container consumes?
You can install portainer.io (see its demo, password tryportainer)
But to truly isolate those third-party microservices, you could run them in their own VM defined on your infrastructure. That VM would run its own Docker daemon and the services. As long as the VM has access to the API, those microservice containers will do fine, and won't have direct access to anything else on the infrastructure.
You need to define/size your VM correctly to allocate enough resources for the containers to run, with each container ensuring its own resource isolation.
Docker (17.03) is a great tool for securely isolating processes. It uses kernel namespaces, control groups, and kernel capabilities to isolate processes that run in different containers.
But those processes are not 100% isolated from each other, because they use the same kernel resources. Every dockerized process that makes an I/O call leaves its isolated environment for that period of time and enters a shared environment, the kernel. Although you can set limits per container, like how much CPU or RAM it may use, you cannot set limits on all kernel resources.
You can read this article for more information.
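For the resource-tracking part of the question, plain Docker already covers a lot: you can cap CPU, memory, and process counts per container and watch live consumption with docker stats (the image name below is a placeholder):
    # cap CPU, memory, and number of processes for an untrusted service
    docker run -d --name third-party-svc \
      --cpus 1.5 --memory 512m --pids-limit 256 \
      some-vendor/service-image
    # live per-container view of CPU, memory, network and block I/O
    docker stats third-party-svc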

Is it useful to run publicly-reachable applications as Docker containers just for the sake of security?

There are many use cases for Docker, and they all have something to do with portability, testing, availability, and so on, which are especially useful for large enterprise applications.
Consider a single Linux server on the internet that acts as mail, web, and application server, mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
Is it useful to consider wrapping each of the provided services in a Docker container, instead of just running them directly on the server (in a chroot environment) when considering the security of the whole server, or would that be using a sledgehammer to crack a nut?
As far as I understand, security really would be increased, as the services would be properly isolated, and even gaining root privileges wouldn't allow an attacker to escape the chroot, but the maintenance requirements would also increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experiences have you made with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is a long way to go before we get a secure and completely isolated environment inside a container. Docker and some other big collaborators like Red Hat have put a lot of effort and interest into securing containers, and any publicly flagged security issue (about isolation) in Docker has been fixed. Today Docker is not a replacement for hardware virtualization in terms of isolation, but there are projects working on hypervisors running containers that will help in this area. This issue is more relevant to companies offering IaaS or PaaS, where they use virtualization to isolate each client.
In my opinion, for a case like the one you propose, running each service inside a Docker container adds one more layer to your security scheme. If one of the services is compromised, there is one extra lock before an attacker gains access to the whole server and the rest of the services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common Docker image as a base, and you (or somebody else) update that base image regularly, you don't need to update every Docker container one by one. And if you use a base image that is updated regularly (e.g. Ubuntu, CentOS), security issues affecting that image will be fixed rapidly and you'd only have to rebuild and relaunch your containers to pick up the fixes. Maybe it is extra work, but if security is a priority, Docker may be an added value.
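The rebuild-and-relaunch flow is short; a sketch with placeholder image and container names:
    # pull the refreshed base image and rebuild the service image on top of it
    docker build --pull -t my-mail-service:latest .
    # recreate the container from the updated image
    docker stop mail && docker rm mail
    docker run -d --name mail my-mail-service:latest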
