IIS Process vs Container Process - Docker

I am trying to understand the difference between a Docker container process and an IIS process. From the container perspective, it is not advisable to have more than one process in a single container, and in IIS you do not do that either: each application is executed in its own process. So if IIS already provides me the same process isolation, why should I use containers?

Although IIS gives you process isolation per application, Docker adds another layer of isolation: memory usage and kernel access are isolated as well. Bear in mind that a container bundles everything needed to run the application, including the operating system's userland; only the physical memory and the host kernel are shared, much as physical hardware is shared between VMs. So in a way containers give you even stronger isolation than just having a separate process per application.
But that is not the main selling point of containers. The main selling point is that they are a scalable solution that is essentially infrastructure as code, and therefore easier to manage and deploy to any environment. Because everything the application needs is included, it works the same wherever you deploy it. And if your apps get a lot of traffic, you can deploy multiples of the same container behind a load balancer in a cluster and avoid those bottlenecks.
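As a rough illustration of the infrastructure-as-code point, an IIS-hosted app can be described in a short Windows-container Dockerfile (a sketch only; the base image tag and site path are placeholders you would adapt):

# Hypothetical Dockerfile for an ASP.NET app hosted by IIS in a Windows container
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
# copy the published site into the default IIS site folder
COPY ./publish/ /inetpub/wwwroot
# IIS in the base image already listens on port 80
EXPOSE 80

Building that image and running it with docker run gives you the same environment on any Windows host that supports containers.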
A second point is that during development there is a history of what was added to and removed from the container image, which helps keep the environment stable. That, plus the ability to deploy dev instances alongside prod instances and simply switch over, keeps downtime from unforeseen errors minimal: you can just redirect back to the old prod container until the fix is out.
A bit of a rant, and there is still more to say.


Testing with docker containers instead of virtual machines

I have a question regarding using docker for testing.
Our main solution is a client/server solution, but the same server is also used by our web applications. We know that our web applications, the server, and the SQL database can run in Docker containers as they do today.
All our customers currently run our web applications and server on either physical servers or virtual machines.
From what I have learned from the Docker website, Docker courses, and the following Stack Overflow post, there is a difference between a virtual machine and a Docker container.
But could the difference be so big that our automated testing would produce different output, or fail to catch errors, in a Docker container compared to a virtual machine?
From my understanding the main difference is that containers run on the host OS, while VMs run on their own instance of an OS. So, from my point of view, the difference is not big enough to change the output of our testing in any way?
Setup
Our container setup would be exactly the same as in our VM testing environment:
MS-SQL Server container
Server container (Windows container)
IIS Container
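A sketch of what that could look like as a compose file (the image names, credentials and ports below are placeholders, not a tested configuration; on a Windows container host you would pick Windows-based images for all three):

# hypothetical docker-compose.yml mirroring the VM test environment
version: "3.7"
services:
  mssql:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"
  appserver:
    image: yourregistry/app-server:latest    # the Windows server container
    depends_on:
      - mssql
  web:
    image: yourregistry/iis-frontend:latest  # the IIS container
    ports:
      - "8080:80"
    depends_on:
      - appserver

The point is that the whole test topology is declared in one file, so the container environment can be recreated identically for every test run.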
The differences between a VM and a container are usually visible from the management perspective, e.g. different resource requirements or security concerns. From the client perspective there should be no difference. If the application uses well-defined network interfaces (e.g. Java uses JDBC to communicate with the database), the change from VM to container should be transparent, just like switching from one VM to another VM.
If the automated testing has a different outcome on a VM than in a container, it means that either the application depends on something specific to the VM or there is an issue with the test suite. One way or another, it should be debugged.
It all depends.
The first question is one of provisioning: you'll need to create the Docker images, install the dependencies, manage configuration settings, etc. If that provisioning process is different from the way you provision VMs, it's possible that testing on Docker will not give the same results as on VMs (or indeed the target production environment). This is especially important for non-functional testing like load and performance testing. It could also affect functional testing, e.g. when configuring database code pages.
The second question is whether your applications rely on any operating-system features or have extreme resource requirements. For instance, if your database absolutely must have a certain amount of memory, or your application server needs a custom configuration for network timeouts, this may be hard to reflect in Docker containers.

Elixir/Erlang Applications with Docker in production?

I would like to know the strong reasons for or against using Docker with an Elixir/Erlang application in production. This is the first time I have been asked to start using Docker in production; so far I have only worked on production systems without it. I am an Erlang/Elixir developer and have worked on high-traffic production servers handling millions of transactions per second, all running without Docker. I spent a day creating and running an Elixir application image and ran into lots of networking issues; I had to do a lot of configuration for DNS setup and so on. After that I started wondering what the strong reasons are for proceeding further. Are there any strong reasons for or against going with Docker for Elixir/Erlang applications in production?
I went through some of the reasons in the forums but I am still not convinced. All the advantages that Docker provides are already there in the Erlang VM. Could any Erlang expert in the forum please help me?
I deploy Elixir packaged in Docker on AWS in production.
This used to be my preferred way of doing things, but now I am more inclined to create my own AMI using Packer with everything preinstalled.
The central issue in deployments is control, which to a certain extent I feel is relinquished when leveraging Docker.
The main disadvantage of Docker is that it limits the capabilities of Erlang/Elixir, such as inter-node connection over epmd. This also means that remsh is practically out of the question and the cool :observer.start is a no-no. If you ever need to interact with a production node for whatever reason, there is an extra barrier of entry: first ssh into the server, then get inside the container, etc. Fine when it is just about checking something, frustrating when production is burning down in agony. Launching multiple containers on one node is fairly pointless, since the BEAM already makes efficient use of all your cores. Hot upgrades are practically out of the question, but that is not a feature we personally have an intrinsic business need for.
Effort has been made to get epmd working within a container setup, for example https://github.com/Random-Liu/Erlang-In-Docker, but that requires rebuilding Erlang with custom net_kernel modifications.
Amazon has recently released a new feature for AWS ECS, the awsvpc networking mode, which may facilitate inter-container epmd communication and thus connecting to your node directly. I haven't validated that yet.
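For reference, a minimal sketch of what getting distribution to work usually involves (the node name, ports and image below are illustrative): pin the distribution port range in vm.args and publish it together with epmd's well-known port 4369.

# vm.args (illustrative) - pin the distribution port so it can be published
-name myapp@10.0.0.5
-kernel inet_dist_listen_min 9100
-kernel inet_dist_listen_max 9100

# run the container publishing epmd (4369) and the pinned distribution port
docker run -d --name myapp -p 4369:4369 -p 9100:9100 myorg/myapp:latest

Even then, the node name has to resolve correctly from both sides, which is exactly the DNS/networking friction described in the question.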
Besides the issue of epmd communication, there is the matter of deployment time. Creating your image with Docker, even when starting from base images that boast being only 5MB, quickly ends up taking 300MB, with 200MB of that just for the various dependencies needed to create your release. There may be ways to reduce that, but it requires specialized knowledge and dedicated effort. I would classify this extra space as more of an annoyance than a deal-breaker, but believe me, if you have to wait 25 minutes for your immutable deployments to complete, any minute you can shave off is worthwhile.
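One common way to claw back some of that space is a multi-stage build, so the build toolchain never ends up in the final image. A sketch, assuming a standard mix release (the image tags and the myapp name are placeholders):

# build stage: compile the release with the full Elixir toolchain
FROM elixir:1.9-alpine AS build
WORKDIR /app
COPY . .
RUN mix local.hex --force && mix local.rebar --force && \
    MIX_ENV=prod mix deps.get && MIX_ENV=prod mix release

# runtime stage: ship only the self-contained release
FROM alpine:3.10
RUN apk add --no-cache openssl ncurses-libs
COPY --from=build /app/_build/prod/rel/myapp /app
CMD ["/app/bin/myapp", "start"]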
Performance-wise, I did not notice a significant difference between bare-metal deployments and Docker deployments. AWS EB Docker nicely expands the container resources to those of the EC2 instance.
The advantage, of course, is portability. If you have a front-end engineer who needs to hit a JSON API, it is a huge win for local development that, with some careful setup, they can just spin up the latest API locally without having to know anything about Erlang/Elixir/Rserve/Postgres.
Also, vendor lock-in is greatly reduced, especially since AWS launched their support for Kubernetes.
This is a question of trade-offs. If you are a developer who needs to get to production and has very little DevOps knowledge, then perhaps a Docker deployment is warranted. If you are more familiar with infrastructure, deployments, etc., then as a developer I believe that creating your own AMI gives you more control over your environment.
All in all, I would encourage you to at least play around with Docker and experiment with it; it may open up a new realm of possibilities.
Maybe it depends on the server you want to use. From what I know, for example, Docker greatly simplifies deploying a Phoenix application on AWS Elastic Beanstalk, but I'm not competent enough to give you very specific reasons at the moment.
Maybe someone can elaborate more.
Docker is primarily a deployment and distribution tool. From the Docker docs:
Docker streamlines the development lifecycle by allowing developers to work in standardized environments using local containers which provide your applications and services. Containers are great for continuous integration and continuous delivery (CI/CD) workflows.
If your application has external dependencies (for example, a crypto library), interacts with another application written in another language (for example, a database running as a separate process), or relies on certain operating system or environment configuration (you mentioned you had to do some DNS configuration), then packaging your application in a Docker container helps you avoid the duplicate work of installing dependencies and configuring the environment. It also saves you the extra work of keeping your testing and production environments in sync in terms of dependencies, and of investigating why an application works on one machine in one environment but not on another.
The above is not specific to Erlang applications, though I agree that Erlang helps eliminate some of these problems by being cross-platform and abstracting away some of the dependencies, and that OTP release handling helps you package your application.
Since you mentioned you are a developer, it is worth noting that Docker offers more advantages to an administrator or a team running the infrastructure than it does to a developer.

Are docker containers safe enough to run third-party untrusted containers side-by-side with production system?

We plan to allow execution of third-party micro-service code on our infrastructure, interacting with our API.
Is dockerizing it safe enough? Are there solutions for tracking the resources (network, RAM, CPU) a container consumes?
You can install portainer.io (see its demo; the password is tryportainer).
But to truly isolate those third-party micro-services, you could run them in their own VM defined on your infrastructure. That VM would run a Docker daemon and the services. As long as the VM has access to the API, those micro-service containers will do fine and won't have direct access to anything else on the infrastructure.
You need to define/size your VM correctly to allocate enough resources for the containers to run, with each container enforcing its own resource isolation.
Docker (17.03) is a great tool for securely isolating processes. It uses kernel namespaces, control groups, and some kernel capabilities to isolate processes that run in different containers.
But those processes are not 100% isolated from each other, because they use the same kernel resources. Every dockerized process that makes an I/O call leaves its isolated environment for that period of time and enters a shared environment: the kernel. Although you can set limits per container, such as how much CPU or RAM it may use, you cannot set limits on all kernel resources.
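For the per-container limits mentioned above, the standard docker run flags look roughly like this (the values are examples, not recommendations), and docker stats gives you a live view of what each container is consuming:

# cap memory, CPU and process count for an untrusted service, and drop capabilities it does not need
docker run -d \
  --memory=512m --cpus=1.0 --pids-limit=200 \
  --cap-drop=ALL --security-opt no-new-privileges \
  untrusted/service:latest

# live per-container CPU / memory / network / block-IO usage
docker stats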
You can read this article for more information.

Is it useful to run publicly-reachable applications as Docker containers just for the sake of security?

There are many use cases for Docker, and they all have something to do with portability, testing, availability, etc., which are especially useful for large enterprise applications.
Consider a single Linux server on the internet that acts as a mail, web, and application server - mostly for private use. No cluster, no need to migrate services, no similar services that could be created from the same image.
Is it useful to consider wrapping each of the provided services in a Docker container, instead of just running them directly on the server (in a chroot environment), when considering the security of the whole server - or would that be using a sledgehammer to crack a nut?
As far as I understand it, security really would be increased, as the services would be truly isolated and even gaining root privileges wouldn't allow escaping the chroot, but the maintenance requirements would increase, as I would need to maintain several independent operating systems (security updates, log analysis, ...).
What would you propose, and what experiences have you had with Docker in small environments?
From my point of view, security is, or will be, one of the strengths of Linux containers and Docker. But there is still a long way to go to get a completely secure and isolated environment inside a container. Docker and some other big collaborators like Red Hat have shown a lot of effort and interest in securing containers, and any publicly reported isolation flaw in Docker has been fixed. Today Docker is not a replacement for hardware virtualization in terms of isolation, but there are projects working on hypervisors running containers that will help in this area. This issue mostly concerns companies offering IaaS or PaaS, where virtualization is used to isolate each client.
In my opinion, for a case like the one you describe, running each service inside a Docker container provides one more layer in your security scheme. If one of the services is compromised, there is one extra lock between the attacker and the rest of the server and its services. Maybe the maintenance of the services increases a little, but if you organize your Dockerfiles to use a common Docker image as a base, and you (or somebody else) update that base image regularly, you don't need to update every container one by one. And if you use a base image that is updated regularly (e.g. Ubuntu, CentOS), security issues affecting those images will be fixed quickly, and you only have to rebuild and relaunch your containers to pick up the fixes. Maybe that is extra work, but if security is a priority, Docker can be an added value.
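The shared-base-image idea could look something like this (the image and package names are placeholders): one base you rebuild whenever updates land, and every service built FROM it, so a rebuild-and-relaunch picks up the fixes everywhere.

# base/Dockerfile - the one image you keep patched
FROM ubuntu:18.04
RUN apt-get update && apt-get -y upgrade && rm -rf /var/lib/apt/lists/*

# mail/Dockerfile - each service builds on the shared base
FROM myserver/base:latest
RUN apt-get update && apt-get install -y postfix && rm -rf /var/lib/apt/lists/*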

Docker, what is it and what is the purpose

I heard about Docker a few days ago and wanted to look into it.
But in fact, I don't know what the purpose of this "container" is.
What is a container?
Can it replace a virtual machine dedicated to development?
What is the purpose, in simple words, of using Docker in companies? The main advantage?
VM: Using virtual machine (VM) software you can, for example, install Ubuntu inside Windows, and both will run at the same time. It is like building a PC - with its core components like CPU, RAM, disks, and network cards - inside an operating system and assembling it to work as if it were a real PC. The virtual PC becomes a "guest" inside an actual PC, which, with its own operating system, is called the host.
Container: It's the same as above, but instead of using an entire operating system, it cuts out the "unnecessary" components of the virtual OS to create a minimal version of it. This led to the creation of LXC (Linux Containers). Containers should therefore be faster and more efficient than VMs.
Docker: A Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the Linux kernel's functionality and uses resource isolation.
Purpose of Docker: Its primary focus is to automate the deployment of applications inside software containers and the automation of operating-system-level virtualization on Linux. It's more lightweight than standard containers and boots up in seconds.
(Notice that there's no Guest OS required in case of Docker)
[ Note, this answer focuses on Linux containers and may not fully apply to other operating systems. ]
What is a container?
It's an App: A container is a way to run applications that are isolated from each other. Rather than virtualizing the hardware to run multiple operating systems, containers rely on virtualizing the operating system to run multiple applications. This means you can run more containers on the same hardware than VMs because you only have one copy of the OS running, and you do not need to preallocate the memory and CPU cores for each instance of your app. Just like any other app, when a container needs the CPU or Memory, it allocates them, and then frees them up when done, allowing other apps to use those same limited resources later.
They leverage kernel namespaces: Each container by default will receive an environment where the following are namespaced:
Mount: filesystems, / in the container will be different from / on the host.
PID: process id's, pid 1 in the container is your launched application, this pid will be different when viewed from the host.
Network: containers run with their own loopback interface (127.0.0.1) and a private IP by default. Docker uses technologies like Linux bridge networks to connect multiple containers together in their own private lan.
IPC: interprocess communication
UTS: this includes the hostname
User: you can optionally shift all the user id's to be offset from that of the host
Each of these namespaces also prevents a container from seeing things like the filesystem or processes on the host, or in other containers, unless you explicitly remove that isolation.
And other Linux security tools: Containers also use security features like SELinux, AppArmor, capabilities, and seccomp to prevent users inside the container, including the root user, from escaping the container or negatively impacting the host.
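You can see a couple of these namespaces in action with the stock alpine image: inside the container your command runs as PID 1 and only sees the container's own processes and network interfaces.

docker run --rm alpine ps       # the launched command shows up as PID 1
docker run --rm alpine ip addr  # its own loopback plus a private bridge address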
Package your apps with their dependencies for portability: Packaging an application into a container involves assembling not only the application itself, but all dependencies needed to run that application, into a portable image. This image is the base filesystem used to create a container. Because we are only isolating the application, this filesystem does not include the kernel and other OS utilities needed to virtualize an entire operating system. Therefore, an image for a container should be significantly smaller than an image for an equivalent virtual machine, making it faster to deploy to nodes across the network. As a result, containers have become a popular option for deploying applications into the cloud and remote data centers.
Can it replace a virtual machine dedicated to development?
It depends: If your development environment is running Linux, and you either do not need access to hardware devices or it is acceptable to have direct access to the physical hardware, then you'll find a migration to a Linux container fairly straightforward. The ideal target for a Docker container is an application like a web-based API (e.g. a REST app), which you access via the network.
What is the purpose, in simple words, of using Docker in companies? The main advantage?
Dev or Ops: Docker is typically brought into an environment along one of two paths: developers looking for a way to more rapidly develop and locally test their application, and operations looking to run more workload on less hardware than would be possible with virtual machines.
Or DevOps: One of the ideal targets is to leverage Docker directly from the CI/CD deployment tool, compiling the application and immediately building an image that is deployed to development, CI, prod, etc. Containers often reduce the time to move the application from code check-in until it's available for testing, making developers more efficient. And when designed properly, the same image that was tested and approved by the developers and the CI tools can be deployed in production. Since that image includes all the application dependencies, the risk of something breaking in production that worked in development is significantly reduced.
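In practice that CI step is often just building, tagging and pushing a single image that then travels unchanged through every environment (the registry name and tag scheme below are illustrative):

# build the image from the checked-in Dockerfile, tag it with the commit, push it
docker build -t registry.example.com/myapp:$GIT_COMMIT .
docker push registry.example.com/myapp:$GIT_COMMIT
# dev, CI and production all run that exact same image
docker run -d registry.example.com/myapp:$GIT_COMMIT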
Scalability: One last key benefit of containers I'll mention is that they are designed with horizontal scalability in mind. When you have stateless apps under heavy load, containers are much easier and faster to scale out due to their smaller image size and reduced overhead. For this reason you see containers being used by many of the larger web-based companies, such as Google and Netflix.
The same questions were going through my head a few days ago; here is what I found after digging into it, explained in very simple words.
Why would anyone think about Docker and containers when everything seems fine with the current process of application architecture and development?
Let's take an example: we are developing an application using NodeJS, MongoDB, Redis, RabbitMQ, etc. (you can think of any other services).
Now, if we forget about Docker and other ways of containerizing applications, we face the following problems in the development and shipping process:
Compatibility of the services (NodeJS, MongoDB, Redis, RabbitMQ, etc.) with the OS (even after finding compatible versions, if something unexpected happens related to versions, we need to check compatibility again and fix it).
Two system components may require a library/dependency in different versions on the same OS (which needs another look every time the application misbehaves due to a library or dependency version issue).
Most importantly, if a new person joins the team, we find it very difficult to set up the new environment: the person has to follow a large set of instructions and run hundreds of commands to finally get the environment set up, and that takes time and effort.
People have to make sure they are using the right version of the OS and check the compatibility of the services with that OS, and every developer has to do this each time they set up an environment.
We also have different environments like dev, test, and production. If one developer is comfortable using one OS and another prefers a different OS, we can't guarantee that our application will behave the same way in those two situations.
All of this makes our life difficult in the process of developing, testing, and shipping applications.
So we need something that handles the compatibility issues and allows us to make changes and modifications to any system component without affecting the other components.
Now we think about Docker, because its purpose is to containerize applications, automate their deployment, and ship them easily.
How Docker solves the above issues:
We can run each service component (NodeJS, MongoDB, Redis, RabbitMQ) in a different container, with its own dependencies and libraries, on the same OS but in different environments.
We only have to define the Docker configuration once; after that, every developer on the team can get started with a simple docker run command, which saves a lot of time and effort.
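That one shared configuration might look roughly like the compose file below (the service versions and names are only examples):

# hypothetical docker-compose.yml for the example stack
version: "3"
services:
  app:
    build: .                 # the NodeJS application
    ports:
      - "3000:3000"
    depends_on: [mongo, redis, rabbitmq]
  mongo:
    image: mongo:4
  redis:
    image: redis:5
  rabbitmq:
    image: rabbitmq:3-management

Every new team member then just runs docker-compose up to get the whole stack.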
So containers are isolated environments with all dependencies and libraries bundled together, with their own processes, networking interfaces, and mounts. All containers use the same OS resources, therefore they take less time to boot up and utilise the CPU efficiently with lower hardware costs.
I hope this would be helpful.
Why use Docker:
Docker makes it really easy to install and run software without worrying about setup or dependencies - not just on your own computer, but on web servers or any cloud-based computing platform as well. For example, when I went to install Redis on my computer using the command below,
wget http://download.redis.io/redis-stable.tar.gz
I got an error.
I could certainly go and troubleshoot that, install the missing program, and then try installing Redis again - and end up in an endless cycle of that kind of troubleshooting just to install and run a piece of software.
Now let me show you how easy it is to run Redis if you use Docker instead. Just run the command docker run -it redis; it will pull and run Redis without any error.
What Docker is:
To understand what Docker is, you have to know about the Docker ecosystem.
The Docker client, server, Machine, images, Hub, and Compose are all projects, tools, and pieces of software that come together to form a platform: an ecosystem around creating and running something called containers. When you run the command docker run redis, something called the Docker CLI reaches out to something called Docker Hub and downloads a single file called an image.
An image is a single file containing all the dependencies and all the configuration required to run a very specific program - for example Redis, which is what the image you just downloaded is meant to run.
It is a single file that gets stored on your hard drive, and at some point you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as a running program with its own isolated set of hardware resources: its own little space of memory, its own little slice of networking technology, and its own little space of hard drive as well.
Now let's examine what happens when you run the command below:
sudo docker run hello-world
The above command starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting. When we ran the command docker run hello-world,
that meant we wanted to start up a new container using the image with the name hello-world. The hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was passed to the Docker server, a series of actions happened very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy - a copy on your personal machine - of the hello-world image, that hello-world file. To do that, the Docker server looked into something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
So, because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. The Docker server reached out to Docker Hub, downloaded the hello-world file, and stored it on your computer in the image cache, where it can be reused very quickly in the future without having to be re-downloaded from Docker Hub.
After that, the Docker server used it to create an instance of a container - and we know that a container is an instance of an image whose sole purpose is to run one very specific program. The Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and ran a single program inside it. And that single program's purpose was to print out the message that you see.
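You can watch that caching behaviour yourself: the first run pulls the image from Docker Hub, while the second run starts straight from the local image cache.

docker run hello-world   # first run: the image is not found locally, so it is pulled
docker images            # hello-world now sits in the local image cache
docker run hello-world   # second run: no download, the cached image is reused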
What a container is:
A container is a process, or a set of processes, that has a grouping of resources specifically assigned to it. Any time we think about a container, there is a running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of those resources is made available to that single process.
Let me try to provide the simplest answers possible:
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Simply put: a package containing software - more specifically, an application and all its dependencies bundled together. A regular, non-dockerised application environment is hooked directly into the OS, whereas a Docker container is an OS abstraction layer.
A container differs from an image in that a container is a runtime instance of an image - similar to how objects are runtime instances of classes, if you're familiar with OOP.
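The image/container relationship is easy to see on the command line - one image can back many containers (the container names here are arbitrary):

docker pull nginx                  # one image...
docker run -d --name web1 nginx    # ...used to create
docker run -d --name web2 nginx    # several containers
docker ps                          # two running instances of the same image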
Can it replace a virtual machine dedicated to development?
Both VMs and Docker containers are virtualisation techniques, in that they provide an abstraction on top of the system infrastructure.
A VM runs a full "guest" operating system with virtual access to host resources through a hypervisor. In general, this means a VM provides the environment with more resources than most applications actually need. Containers are therefore a lighter-weight technique. The two solve different problems.
What is the purpose, in simple words, of using Docker in companies?
The main advantage?
Containerisation goes hand-in-hand with microservices. The smaller services that make up the larger application are often tested and run in Docker containers, which makes continuous testing easier.
Also, because Docker images are immutable (and containers can be run read-only), they enforce a key DevOps principle: production services should remain unaltered.
Some general benefits of using them:
Great isolation of services
Great manageability as containers contain everything the app needs
Encapsulation of implementation technology (in the containers)
Efficient resource utilisation (due to light-weight os virtualisation) in comparison to VMs
Fast deployment
If you don't have any prior experience with Docker, this answer will cover the basics you need as a developer.
Docker has become a standard tool for DevOps, as it is an effective application for improving operational efficiency. When you look at why Docker was created and why it became so popular, it is mostly for its ability to reduce the amount of time it takes to set up the environments in which applications run and are developed.
Just look at how long it takes to set up an environment where you have React as the frontend and a Node and Express API for the backend, which also needs MongoDB. And that's just to start. Then, when your team grows and you have multiple developers working on the same frontend and backend, they all need to set up the same resources in their local environments for testing purposes - how can you guarantee every developer will run the same environment resources, let alone the same versions? All of these scenarios play to Docker's strengths, where its value comes from setting up containers with specific settings, environments, and even versions of resources. Simply type a few commands to have Docker set up, install, and run your resources automatically.
Let's briefly go over the main components. A container is basically where your application or a specific resource lives. For example, you could have the Mongo database in one container, the frontend React application in another, and your Node Express server in a third container.
Then you have an image, which is what the container is built from. An image contains all the information a container needs in order to be built exactly the same way on any system. It's like a recipe.
Then you have volumes, which hold the data of your containers. While the applications in the containers are static and unchanging, the data that changes lives on the volumes.
And finally, the piece that allows all of these items to talk to each other is networking. That sounds simple, but understand that each container in Docker has no idea that the other containers exist; they are fully isolated. So unless we set up networking in Docker, they won't have any idea how to connect to one another.
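A minimal sketch of those last two pieces (the volume, network and image names are made up): a named volume keeps the data outside the container, and a user-defined network lets containers reach each other by name.

# persistent data lives in a named volume, not inside the container
docker volume create app-data
docker run -d --name db -v app-data:/data/db mongo

# containers on the same user-defined network can find each other by container name
docker network create app-net
docker network connect app-net db
docker run -d --name api --network app-net my-node-api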
There are really good answers above, which I found very helpful.
Below I have drafted a simpler answer:
Reasons to dockerize my web application:
a. One OS for multiple applications (resources are shared)
b. Resource management (CPU/RAM) is efficient
c. Serverless implementation is made easier - yes, via AWS ECS with Fargate, but serverless can also be achieved with Lambda
d. Infrastructure as code - agreed, but IaC can also be achieved via Terraform
e. Solves the "it works on my machine" issue
Still, the questions below remain open when choosing dockerization:
A simple Spring Boot application
a. has a JAR file of ~50MB
b. but creates a Docker image of ~500MB
c. Can't I simply choose a small EC2 instance for my microservices?
Financial benefits (reducing the individual instance cost)?
a. No need to pay for individual OS subscriptions
b. Is there any monetary benefit, as in the implementation below?
c. Say I select a t3.2xlarge (8 cores / 32 GB) and start 4-5 Docker images on it?
