Bitnami and Docker

How are Bitnami and Docker different from each other when it comes to container-based deployments?
I have been learning about microservices recently. I used Docker images to run my apps as containers, and I noticed that Bitnami does something similar when it creates a virtual image in the cloud from its launchpad.
From the links I could find on the Internet, I could not work out how these two, Docker and Bitnami, differ from each other.

Docker
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.
Containers and virtual machines have similar resource isolation and allocation benefits, but a different architectural approach allows containers to be more portable and efficient.
Virtual machines include the application, the necessary binaries and libraries, and an entire guest operating system, all of which can amount to tens of GBs. Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud.
Bitnami
Bitnami is an app library for server software. You can install your favorite applications on your own servers or run them in the cloud.
Docker containers are one of the platforms on which these applications can be deployed; virtual machines are another.
Bitnami containers give you the latest stable versions of your application stacks, allowing you to focus on coding rather than updating dependencies or outdated libraries. They are available as development containers or as turnkey application and infrastructure containers, or you can build your own custom container using Stacksmith.

Related

Is Docker an infrastructure-as-code technology because it virtualizes an OS to handle multiple workloads on a single OS instance?

I have come across the term IaC many times while learning DevOps. When I googled it, I found that infrastructure as code is the process of managing and provisioning computer data centers through machine-readable definition files, rather than through physical hardware configuration or interactive configuration tools. So is Docker also an infrastructure-as-code technology, because it virtualizes an OS to handle multiple workloads on a single OS instance? Thanks in advance.
I'm not sure exactly what you are asking, but Docker provides infrastructure as code because Docker's functionality is configured via Dockerfiles and shell scripts. You don't install a list of programs manually when defining an image, and you don't configure anything with a GUI in order to create an environment when you pull an image from Docker Hub or deploy your own image.
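For instance, a minimal Dockerfile captures an environment entirely as versionable text; the application file and package here are hypothetical placeholders:
# Dockerfile: the environment is declared in code, not clicked together
FROM python:3.11-slim
RUN pip install --no-cache-dir flask   # dependencies installed by declaration, not by hand
COPY app.py /app/app.py                # app.py is a hypothetical application file
WORKDIR /app
CMD ["python", "app.py"]
Running docker build -t myapp . then rebuilds exactly the same environment on any machine with Docker installed.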
And as said in another answer, Docker is not virtualization: everything actually runs in your Linux kernel, just with limited resources in its own namespace. You can see a container's process via htop on the host machine, for instance. There's no hypervisor, and next to no overhead.
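You can check this yourself with a quick sketch (the container name is arbitrary):
docker run -d --name sleeper alpine sleep 1000
ps aux | grep 'sleep 1000'   # the container's process shows up as an ordinary process on the host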
I think you have misunderstood the concept: Docker is not a hypervisor, and containers are not VMs.
From this page: https://www.docker.com/resources/what-container
A Docker container image is a lightweight, standalone, executable package of software that includes everything needed to run an application: code, runtime, system tools, system libraries and settings.
Container images become containers at runtime and in the case of Docker containers - images become containers when they run on Docker Engine.
Containers are an abstraction at the app layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in user space.

Vagrant Box vs Docker?

Vagrant Box:
Boxes are the package format for Vagrant environments. A box can be used by anyone on any platform that Vagrant supports to bring up an identical working environment.
Docker
Docker is a tool that packages, provisions and runs containers independent of the OS. A container packages the application service or function with all of the libraries, configuration files, dependencies and other parts necessary to operate.
Question:
How are Docker and Vagrant boxes different from each other?
What freedom does each provide for development and production?
How can a developer make use of Vagrant, and how should they think about the differences between Docker and Vagrant?
Vagrant: Vagrant is a project that helps with spawning virtual machines. It started as a command-line wrapper around VirtualBox, something similar to a Gemfile for VMs. You can choose the base image to start with, the network, the IP, and shared folders, and put it all in a file that anyone can reuse to spawn an identically configured machine. Vagrant has different extensions and provisioning options, and supports multiple VM providers: you can run VirtualBox or VMware, and it is extensible enough to create instances on EC2.
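As a rough sketch, such a file (a Vagrantfile) might look like this; the box name, IP and folder paths are just example values:
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"                         # base image
  config.vm.network "private_network", ip: "192.168.56.10"  # fixed private IP
  config.vm.synced_folder "./src", "/home/vagrant/src"      # shared folder
end
Anyone with this file can run vagrant up and get the same machine.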
Docker: Docker allows you to package an application with all of its dependencies into a standardized unit of software. It reduces friction between development, QA and testing. The idea is to share the Linux kernel, so containers stay lightweight. You can change your application dynamically, add new capabilities every single day, and scale services out quickly to address problem areas. Docker is putting itself in an exciting place as the interface to PaaS, covering networking and service discovery, with applications not having to care about the underlying infrastructure. The industry now benefits from a standardized container workflow and an ecosystem of helpful tools, services and a vibrant community around it.
The following points show how Docker eases development and production deployments:
ACCELERATE DEVELOPERS: Your development environment is the first and foremost thing in IT. Whatever you need (different tools, databases, instances, networks, etc.) you can create easily with Docker using simple commands (build an image from a Dockerfile, or pull one from Docker Hub). You can go from 0 to 100 with docker-machine within seconds, and as a developer you can focus more on your application.
EMPOWER CREATIVITY: In this loosely coupled architecture, every instance (that is, every container) is completely isolated from the others, so there are no conflicts between tools, software versions and the like. This frees developers to use the system more creatively.
ELIMINATE ENVIRONMENT INCONSISTENCIES: Docker containers are responsible for actually running the applications and include the operating system files, user files and metadata. A Docker image is the same across environments, so your build moves seamlessly from dev to QA, staging and production.
In a production environment you need zero downtime along with automated deployments. You should take care of things like service discovery, logging and monitoring, scaling, and vulnerability scanning of Docker images. All of these accelerate the deployment process and help you better serve the production environment. You don't need to log in to the production server for a configuration change, logging or monitoring; Docker does that for you. Developers must understand that Docker is a tool and is nothing without other components, but it will definitely reduce a huge deployment from hours to minutes. I hope this is clear. Thank you.
Docker relies on containerization, while Vagrant utilizes virtualization.

Why run Docker under Vagrant?

I've read multiple articles on how to do this, but I can't figure out what the benefits are under macOS.
From my point of view, you can run Docker natively on macOS using Docker Community Edition (boot2docker+Kitematic). What would running it under Vagrant give me? Portability?
My standard day-to-day development work is carried out in Docker for Mac/Windows, as they cover about 95% of what I need to do with Docker. Since they replaced Docker Toolbox/boot2docker and made the integration with the OS pretty seamless, I have found very few reasons to move over to another virtual machine. The two main reasons I see for using Vagrant or standalone VMs now are VM customisation and clustering.
VM Customisation
The virtual machines supplied by Docker Toolbox and Docker for Mac/Windows are pre-packaged, cut-down Linux distros (TinyCore and Alpine) that are largely ephemeral except for the Docker configuration, so you don't get much say in how they work.
Networking
I deal with a number of custom network configurations that just aren't possible in the pre-packaged VMs, largely around having containers connected to routable networks rather than using mapped ports.
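For example, Docker's macvlan driver can attach containers directly to a routable network; a sketch in which the subnet, gateway and parent interface are placeholders for your own values:
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 routable-net
docker run -d --network routable-net --ip 192.168.1.50 nginx   # the container gets its own LAN address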
Version Control
Occasionally you need to replicate server environments that run old versions of the Docker daemon, or RHEL servers using devicemapper. A VM lets you choose the packages to install.
Clustering
Building a swarm, or branching out into Mesosphere/Kubernetes, will require multiple VMs. I tend to find these easier to manage and build with Vagrant rather than Docker Machine, and again they require custom configuration inside the VM.
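Once the VMs are up, forming the swarm itself is only a couple of commands; a sketch in which the IP address and join token are placeholders:
docker swarm init --advertise-addr 192.168.99.100              # on the manager VM
docker swarm join --token <worker-token> 192.168.99.100:2377   # on each worker VM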

What components should be "containerized"?

I am exploring the use of containers in a new application. I have looked at a fair amount of content and created a sandbox environment to explore Docker and containers. My struggle is more with understanding which components need to be containerized individually versus bundling multiple components into my own container, and what points to consider when architecting this.
Example:
I am building a Python backend service to be executed via a web service call.
The service would interact with both MongoDB and RabbitMQ.
My questions are:
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined, they all form part of my application, and by decoupling everything I would have the ability to scale each part individually?
How would I be able to bundle/link these for deployment without losing the benefits of decoupling/decomposing into individual containers?
Is an OS container and a Python container actually required, given that this will all be running on an OS with Python anyway?
I would love to see how people have approached this problem.
Docker's philosophy: using microservices in containers. The term "Microservice Architecture" has sprung up over the last few years to describe a particular way of designing software applications as suites of independently deployable services.
Some advantages of microservices architecture are:
Easier upgrade management
Eliminates long-term commitment to a single technology stack
Improved fault isolation
Makes it easier for a new developer to understand the functionality of a service
Improved Security
...
Should I run an individual OS container (e.g. Ubuntu), a Python container, a MongoDB container, a RabbitMQ container, etc.? Combined they all form part of my application, and by decoupling everything I have the ability to scale individually?
You don't need an individual OS container. Each container uses the Docker host's kernel and contains only the binaries it requires (the Python binaries, for example).
So you will have a Python container for your Python service, a MongoDB container and a RabbitMQ container.
How would I be able to bundle/link these for deployment without losing the benefits of decoupling/decomposing into individual containers?
For deployments, you will use Dockerfiles plus a docker-compose file. Dockerfiles contain the instructions to create a Docker image; if you are just using official library images, you don't need Dockerfiles.
docker-compose will help you orchestrate the container builds (from Dockerfiles), start-up order, creation of required networks, mounting of required volumes, and so on.
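A minimal docker-compose.yml sketch for this particular stack; the api service, its port and the volume name are hypothetical choices, while mongo and rabbitmq are the official images:
version: "3.8"
services:
  api:
    build: .            # your Python service, built from your own Dockerfile
    ports:
      - "8000:8000"     # hypothetical service port
    depends_on:
      - mongo
      - rabbitmq
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db
  rabbitmq:
    image: rabbitmq:3-management
volumes:
  mongo-data:
One docker-compose up then builds and starts the whole application, while each component still runs and scales as its own container.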

Docker: what is it, and what is its purpose?

I heard about Docker a few days ago and wanted to look into it.
But in fact, I don't know what the purpose of this "container" is.
What is a container?
Can it replace a virtual machine dedicated to development?
What is the purpose, in simple words, of using Docker in companies? What is the main advantage?
VM: Using virtual machine (VM) software, Ubuntu, for example, can be installed inside Windows, and both run at the same time. It is like building a PC, with core components like a CPU, RAM, disks and network cards, inside an operating system, and assembling them to work as if it were a real PC. This way, the virtual PC becomes a "guest" inside an actual PC, which, with its operating system, is called the host.
Container: It's the same as above, but instead of using an entire operating system, it cuts out the "unnecessary" components of the virtual OS to create a minimal version of it. This led to the creation of LXC (Linux Containers). Containers should therefore be faster and more efficient than VMs.
Docker: A Docker container, unlike a virtual machine, does not require or include a separate operating system. Instead, it relies on the Linux kernel's functionality and uses resource isolation.
Purpose of Docker: Its primary focus is to automate the deployment of applications inside software containers, and the automation of operating-system-level virtualization on Linux. It's more lightweight than standard (LXC) containers and boots up in seconds.
(Notice that no guest OS is required in the case of Docker.)
[ Note, this answer focuses on Linux containers and may not fully apply to other operating systems. ]
What is a container?
It's an App: A container is a way to run applications that are isolated from each other. Rather than virtualizing the hardware to run multiple operating systems, containers rely on virtualizing the operating system to run multiple applications. This means you can run more containers on the same hardware than VMs because you only have one copy of the OS running, and you do not need to preallocate the memory and CPU cores for each instance of your app. Just like any other app, when a container needs the CPU or Memory, it allocates them, and then frees them up when done, allowing other apps to use those same limited resources later.
They leverage kernel namespaces: Each container by default will receive an environment where the following are namespaced:
Mount: filesystems; / in the container will be different from / on the host.
PID: process IDs; PID 1 in the container is your launched application, and this PID will be different when viewed from the host.
Network: containers run with their own loopback interface (127.0.0.1) and a private IP by default. Docker uses technologies like Linux bridge networks to connect multiple containers together in their own private LAN.
IPC: interprocess communication.
UTS: this includes the hostname.
User: you can optionally shift all the user IDs to be offset from those of the host.
Each of these namespaces also prevents a container from seeing things like the filesystem or processes on the host, or in other containers, unless you explicitly remove that isolation.
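A quick sketch that shows the PID and UTS namespaces in action:
docker run --rm alpine sh -c 'echo "my pid: $$"; hostname'
# prints "my pid: 1" plus a generated container hostname;
# from the host, the same shell would be visible under an ordinary, non-1 PID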
And other Linux security tools: Containers also utilize other security features like SELinux, AppArmor, capabilities, and seccomp to prevent users inside the container, including the root user, from being able to escape the container or negatively impact the host.
Package your apps with their dependencies for portability: Packaging an application into a container involves assembling not only the application itself, but all dependencies needed to run that application, into a portable image. This image is the base filesystem used to create a container. Because we are only isolating the application, this filesystem does not include the kernel and other OS utilities needed to virtualize an entire operating system. Therefore, an image for a container should be significantly smaller than an image for an equivalent virtual machine, making it faster to deploy to nodes across the network. As a result, containers have become a popular option for deploying applications into the cloud and remote data centers.
Can it replace a virtual machine dedicated to development?
It depends: if your development environment runs Linux, and you either do not need access to hardware devices or find it acceptable to have direct access to the physical hardware, then you'll find migrating to a Linux container fairly straightforward. The ideal target for a Docker container is an application like a web-based API (e.g. a REST app), which you access via the network.
What is the purpose, in simple words, of using Docker in companies? The main advantage?
Dev or Ops: Docker is typically brought into an environment along one of two paths: developers looking for a way to develop and locally test their application more rapidly, and operations looking to run more workload on less hardware than would be possible with virtual machines.
Or DevOps: One of the ideal targets is to leverage Docker immediately from the CI/CD deployment tool, compiling the application and immediately building an image that is deployed to development, CI, prod, etc. Containers often reduce the time to move the application from code check-in until it's available for testing, making developers more efficient. And when designed properly, the same image that was tested and approved by the developers and the CI tools can be deployed in production. Since that image includes all the application dependencies, the risk of something breaking in production that worked in development is significantly reduced.
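In practice that flow is often just a build and a push in the CI job; a sketch in which the registry and tag are placeholders:
docker build -t registry.example.com/myapp:1.0.0 .   # built once from the CI checkout
docker push registry.example.com/myapp:1.0.0         # the identical image is later pulled into test and prod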
Scalability: One last key benefit of containers that I'll mention is that they are designed with horizontal scalability in mind. When you have stateless apps under heavy load, containers are much easier and faster to scale out due to their smaller image size and reduced overhead. For this reason you see containers being used by many of the larger web-based companies, like Google and Netflix.
The same questions were hitting my head some days ago, and here is what I found after getting into it. Let's understand it in very simple words.
Why would one think about Docker and containers when everything seems fine with the current process of application architecture and development?
Let's take an example: we are developing an application using Node.js, MongoDB, Redis, RabbitMQ, etc. (you can think of any other services).
Now, if we forget about the existence of Docker and other alternatives for containerizing applications, we face the following problems in the development and shipping process:
Compatibility of the services (Node.js, MongoDB, Redis, RabbitMQ, etc.) with the OS (even after finding versions compatible with the OS, if something unexpected and version-related happens, we need to revisit the compatibility question and fix it).
Two system components may require a library/dependency in different versions on the same OS (which needs a fresh look every time the application misbehaves because of a library or dependency version issue).
Most importantly, when a new person joins the team, we find it very difficult to set up the new environment: the person has to follow a large set of instructions and run hundreds of commands to finally get the environment set up, and this takes time and effort.
People have to make sure that they are using the right version of the OS and check the compatibility of the services with the OS, and every developer has to go through this each time they set up.
We also have different environments like dev, test and production. If one developer is comfortable using one OS and another developer prefers a different OS, we can't guarantee that our application will behave the same way in those two different situations.
All of these make our life difficult in the process of developing, testing and shipping applications.
So we need something that handles the compatibility issues and allows us to make changes and modifications to any system component without affecting the other components.
Now we think about Docker, because its purpose is to containerize applications, automate their deployment, and ship them very easily.
How Docker solves the above issues:
We can run each service component (Node.js, MongoDB, Redis, RabbitMQ) in a different container, with its own dependencies and libraries, on the same OS but in different environments.
We only have to write the Docker configuration once; then every developer on the team can get started with a simple docker run (or docker-compose up) command, which saves a lot of time and effort. A sketch of such a configuration follows.
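A minimal docker-compose.yml for this stack, assuming a hypothetical app service built from your own Dockerfile, with the other services taken from the official images:
services:
  app:
    build: .                              # your Node.js application
    depends_on: [mongo, redis, rabbitmq]  # started after its backing services
  mongo:
    image: mongo
  redis:
    image: redis
  rabbitmq:
    image: rabbitmq
New team members then need only one command, docker-compose up -d, instead of hundreds of setup steps.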
So containers are isolated environments with all dependencies and libraries bundled together, with their own processes, networking interfaces and mounts. All containers use the same OS resources, so they take less time to boot up and utilise the CPU efficiently at lower hardware cost.
I hope this would be helpful.
Why use Docker:
Docker makes it really easy to install and run software without worrying about setup or dependencies. It is really straightforward to install and run software on any given computer, not just your own, and on web servers or any cloud-based computing platform as well. For example, when I went to install Redis on my computer using the command below:
wget http://download.redis.io/redis-stable.tar.gz
I got an error.
Now, I could definitely go and troubleshoot that install and then try installing Redis again, but I can easily end up in an endless cycle of troubleshooting like this whenever I install and run software.
Now let me show you how easy it is to run Redis if you make use of Docker instead: just run the command docker run -it redis, and this single command will download and run Redis without any error.
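For completeness, a slightly fuller sketch of the same thing; the container name and port mapping are just conventional choices:
docker run -d --name redis -p 6379:6379 redis   # download the image and start the server in the background
docker exec -it redis redis-cli ping            # answers PONG once the server is up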
What docker is:
To understand what Docker is, you have to know about the Docker ecosystem.
The Docker client, server, Machine, images, Hub and Compose are all projects, tools and pieces of software that come together to form a platform: an ecosystem around creating and running something called containers. When you run the command docker run redis, something called the Docker CLI reaches out to something called Docker Hub and downloads a single file called an image.
An image is a single file containing all the dependencies and all the configuration required to run a very specific program; for example Redis, which is what the image you just downloaded is meant to run.
This single file gets stored on your hard drive, and at some point in time you can use this image to create something called a container.
A container is an instance of an image, and you can think of it as being like a running program with its own isolated set of hardware resources: its own little space of memory, its own little space of networking technology, and its own little space of hard drive as well.
Now let's examine what happens when you give the command below:
sudo docker run hello-world
The above command starts up the Docker client, or Docker CLI. The Docker CLI is in charge of taking commands from you, doing a little bit of processing on them, and then communicating them over to something called the Docker server, which is in charge of the heavy lifting. When we ran the command docker run hello-world,
that meant that we wanted to start up a new container using the image with the name hello-world. The hello-world image has a tiny little program inside of it whose sole job is to print out the message that you see in the terminal.
When we ran that command and it was issued over to the Docker server, a series of actions occurred very quickly in the background. The Docker server saw that we were trying to start up a new container using an image called hello-world.
The first thing the Docker server did was check whether it already had a local copy (a copy on your personal machine) of the hello-world image or that hello-world file, so it looked in something called the image cache.
Because you and I just installed Docker on our personal computers, that image cache is currently empty; we have no images that have been downloaded before.
Because the image cache was empty, the Docker server decided to reach out to a free service called Docker Hub. Docker Hub is a repository of free public images that you can download and run on your personal computer. So the Docker server reached out to Docker Hub, downloaded the hello-world image, and stored it on your computer in the image cache, where it can be re-run at some point in the future very quickly, without having to be re-downloaded from Docker Hub.
After that, the Docker server used the image to create an instance of a container; we know that a container is an instance of an image whose sole purpose is to run one very specific program. So the Docker server essentially took that image file from the image cache, loaded it into memory, created a container out of it, and ran a single program inside it. And that single program's purpose was to print out the message that you see.
What a container is:
A container is a process, or a set of processes, with a grouping of resources specifically assigned to it. Any time we think about a container, we have some running process that sends a system call to the kernel; the kernel looks at that incoming system call and directs it to a very specific portion of the hard drive, the RAM, the CPU, or whatever else it might need, and a portion of each of these resources is made available to that single process.
Let me try to provide answers that are as simple as possible:
But in fact, I don't know what is the purpose of this "container"?
What is a container?
Simply put, a container is a package containing software: more specifically, an application and all its dependencies bundled together. A regular, non-dockerised application environment is hooked directly to the OS, whereas a Docker container is an OS abstraction layer.
And a container differs from an image in that a container is a runtime instance of an image, similar to how objects are runtime instances of classes, in case you're familiar with OOP.
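You can see that relationship directly by starting two containers from one image:
docker run -d --name web1 nginx   # first instance
docker run -d --name web2 nginx   # second, independent instance of the same image
docker ps                         # lists two containers built from one image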
Can it replace a virtual machine dedicated to development?
Both VMs and Docker containers are virtualisation techniques, in that they provide abstraction on top of system infrastructure.
A VM runs a full "guest" operating system with virtual access to host resources through a hypervisor. This means that the VM often provides the environment with more resources than most applications actually need, which makes containers the lighter-weight technique. The two solve different problems.
What is the purpose, in simple words, of using Docker in companies?
The main advantage?
Containerisation goes hand-in-hand with microservices. The smaller services that make up the larger application are often tested and run in Docker containers. This makes continuous testing easier.
Also, because Docker images are read-only, they enforce a key DevOps principle: production services should remain unaltered.
Some general benefits of using them:
Great isolation of services
Great manageability as containers contain everything the app needs
Encapsulation of implementation technology (in the containers)
Efficient resource utilisation (due to lightweight OS virtualisation) in comparison to VMs
Fast deployment
If you don't have any prior experience with Docker this answer will cover the basics needed as a developer.
Docker has become a standard tool for DevOps, as it is an effective application for improving operational efficiency. When you look at why Docker was created and why it is so popular, it is mostly for its ability to reduce the amount of time it takes to set up the environments where applications run and are developed.
Just look at how long it takes to set up an environment with React for the frontend and a Node/Express API for the backend, which also needs MongoDB. And that's just the start. Then your team grows, and multiple developers work on the same frontend and backend, so they need to set up the same resources in their local environments for testing purposes. How can you guarantee every developer will run the same environment resources, let alone the same versions? All of these scenarios play to Docker's strengths, where its value comes from setting up containers with specific settings, environments and even versions of resources. Simply type a few commands to have Docker set up, install, and run your resources automatically.
Let's briefly go over the main components. A container is basically where your application or a specific resource is located. For example, you could have the MongoDB database in one container, the frontend React application in another, and your Node/Express server in a third container.
Then you have an image, which is what the container is built from. An image contains all the information a container needs in order to be built exactly the same way across any system. It's like a recipe.
Then you have volumes, which hold the data of your containers. While your applications run in containers, which are static and unchanging, the data that changes lives in the volumes.
And finally, the piece that allows all these items to speak to each other is networking. Yes, that sounds simple, but understand that each container in Docker has no idea of the existence of the other containers; they're fully isolated. So unless we set up networking in Docker, they won't have any idea how to connect to one another.
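A minimal sketch of that setup; the network and container names are arbitrary, and my-node-api stands in for your own application image:
docker network create app-net
docker run -d --network app-net --name mongo mongo
docker run -d --network app-net --name api my-node-api
# inside "api", the database is now reachable via the DNS name "mongo"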
There are really good answers above, which I found very helpful.
Below I have drafted a simpler answer:
Reasons to dockerize my web application?
a. One OS for multiple applications (resources are shared)
b. Resource management (CPU/RAM) is efficient
c. Serverless implementation made easier. Yes, AWS ECS with Fargate, but serverless can also be achieved with Lambda
d. Infra as Code. Agreed, but IaC can also be achieved via Terraform
e. The "it works on my machine" issue
Still, the questions below remain open when choosing dockerization:
A simple Spring Boot application
a. JAR file with size ~50 MB
b. creates a Docker image of ~500 MB
c. Can't I simply choose a small EC2 instance for my microservices?
Financial benefits (reducing the individual instance cost)?
a. No need to pay for individual OS subscriptions
b. Is there any monetary benefit, as in the implementation below?
c. Let's say I select a t3.2xlarge (8 cores / 32 GB) and start 4-5 Docker containers on it?
