How do they function differently?
Which features of the kernel are they using?
You can read all about it in this link.
Basically, my impression is that rkt takes pride in being image-agnostic (meaning it can run images that were built using Docker or other container engines) and in carrying less overhead than Docker does. The link above includes a nice diagram describing the differences between the two.
Official .NET Docker images support three Linux distros:
Debian - 3.1.201-buster
Alpine - 3.1.201-alpine
Ubuntu - 3.1.201-bionic
I didn't find much in the documentation:
Which should I prefer, and why? Since AKS nodes are Ubuntu-based, they all work. So which should I select?
Since they are all based on the same architecture, I would say that the deciding factor should be:
1) which image is smaller (or not as big)
2) which comes with built-in binaries that are useful for your needs (e.g., the Alpine base normally handles DNS lookups differently when using nslookup)
e.g.: https://github.com/gliderlabs/docker-alpine/issues/476
In the end, it is up to you: decide what is important to you, and pick the one you are comfortable with and trust more to make CVE and security updates available the fastest.
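If size is the deciding factor, you can measure it directly. A minimal sketch, assuming the tags from the question live in the official mcr.microsoft.com/dotnet/core/sdk repository:

    # Pull each variant and compare on-disk sizes; the tags come from the
    # question, the repository name is assumed from the official .NET naming.
    docker pull mcr.microsoft.com/dotnet/core/sdk:3.1.201-buster
    docker pull mcr.microsoft.com/dotnet/core/sdk:3.1.201-alpine
    docker pull mcr.microsoft.com/dotnet/core/sdk:3.1.201-bionic

    # List the three images side by side; the SIZE column shows the difference.
    docker images mcr.microsoft.com/dotnet/core/sdk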
I am trying to understand Docker and its related core concepts. I came to know that there is a concept of images, which form the basis of containers where applications run isolated.
I also came to know that we can download the official images from Docker Hub, https://hub.docker.com.
My question is:
Do the respective companies create a special/custom-made OS (minimal, for example the ubuntu image we can see) for Docker? If so, what benefit do these companies get from creating these custom-made images for Docker?
One could call them custom images; however, they are just bare base images which are to be used as a starting point for your application.
They are mostly built by people who work at Docker, and they try to ensure some guarantee of quality.
They are stripped of unnecessary packages in order to keep the image size to a minimum.
To find out more you could read this Docker documentation page or this blog post.
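As a rough illustration of "starting point", here is a hedged Dockerfile sketch that builds on the official ubuntu base image (the package and command are placeholders for whatever your application actually needs):

    # Start from the official, stripped-down ubuntu base image.
    FROM ubuntu:20.04

    # Layer only what the application needs on top of the minimal base;
    # curl here is just a placeholder dependency.
    RUN apt-get update \
        && apt-get install -y --no-install-recommends curl \
        && rm -rf /var/lib/apt/lists/*

    # Placeholder command; a real image would run your application here.
    CMD ["curl", "--version"]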
I understand that it is software shipped in some sort of binary format. In simple terms, what exactly is a Docker image? What problem is it trying to solve? And how is it better than other distribution formats?
As a student, I don't completely get the picture of how it works and why so many companies are opting for it. Even many open-source libraries are shipped as Docker images.
To understand Docker images, you should first understand the main element of the Docker mechanism: UnionFS.
Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which
implements a union mount for other file systems. It allows files and
directories of separate file systems, known as branches, to be
transparently overlaid, forming a single coherent file system.
Docker images consist of several layers (levels). Each layer is a write-protected filesystem; every instruction in the Dockerfile creates its own layer, which is placed on top of those already created. When docker run or docker create is invoked, it adds a layer on top with write permission (it also does a lot of other things). This approach to distributing containers is very good, in my opinion.
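A minimal sketch of how this plays out (the file names are hypothetical; docker history then shows one layer per instruction):

    # Dockerfile: each instruction below becomes its own read-only layer.
    FROM alpine:3.11
    RUN echo "first layer"  > /layer1.txt
    RUN echo "second layer" > /layer2.txt

Build it and inspect the stack; a docker run would then add the writable layer on top:

    docker build -t layer-demo .
    docker history layer-demo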
Disclaimer:
This is my opinion, based on things I've found; feel free to correct me if I'm wrong.
I would like to compose multiple Docker images that start with different bases. However, many of the installation scripts afterward are similar.
What's the best way to source a sub Docker file?
Sounds like what you're looking for is the ability to include Dockerfiles in other Dockerfiles. There was a proposal for such a feature, but currently nothing supports it out of the box. The discussion is worth reading through because it includes links to tools like harbor and dfpp that people built to support a subset of the functionality.
One problem with tools like this is that you can't easily make the same include file work for Debian, CentOS, and Alpine Linux (for example). The way this is currently addressed (as with the redis and redis:alpine images for essentially the same software) is to have duplicate Dockerfiles.
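One hedged workaround, rather than a true include, is to parameterize the base image with a build argument (ARG before FROM is supported since Docker 17.05); it lets one Dockerfile serve several bases, though it still can't smooth over package-manager differences between distributions:

    # Dockerfile: an ARG declared before FROM selects the base at build time.
    ARG BASE_IMAGE=debian:buster
    FROM ${BASE_IMAGE}

    # Shared, distro-neutral steps go here; anything package-manager
    # specific (apt vs apk) still needs separate handling.
    RUN echo "shared setup steps" > /etc/motd

Usage:

    docker build --build-arg BASE_IMAGE=ubuntu:18.04 -t myapp:ubuntu .
    docker build --build-arg BASE_IMAGE=alpine:3.11  -t myapp:alpine .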
I know we can create Docker images using Ansible. I'm learning and doing POC work.
I'm trying to find out the pros/cons of creating a Docker image using Ansible.
I would like to hear whether you have played with creating Docker images (NOT deploying them) using Ansible and found any issues/solutions.
Also, are there any good reasons not to create Docker images using Ansible?
It can be a good choice.
If an agentless system is good enough for your needs, keeping your Docker images lightweight (by not having any agent in them) is a reasonable thing to desire.
If your ops team uses Ansible, using the same playbooks in configuring your Docker images (used for dev/test) as for production is desirable.
If your production environment uses Docker in the manner in which it's intended to be used, then you have reduced need for complex logic around maintenance and upkeep of existing systems, which makes Ansible a better option.
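To make the shared-playbook point concrete, here is a hedged sketch of one common pattern (site.yml is a hypothetical playbook you would also run against production hosts): the playbook is applied locally during the build, and Ansible itself is removed afterward so no agent or extra tooling ships in the final image.

    # Dockerfile sketch: apply an Ansible playbook at build time, then
    # strip Ansible out so the image stays lightweight.
    FROM ubuntu:20.04
    COPY site.yml /tmp/site.yml
    RUN apt-get update \
        && apt-get install -y ansible \
        && ansible-playbook -i localhost, -c local /tmp/site.yml \
        && apt-get purge -y ansible \
        && apt-get autoremove -y \
        && rm -rf /var/lib/apt/lists/* /tmp/site.yml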
That said, I also have a laundry list of complaints about Ansible -- particularly, in places where its DSL is poorly designed in ways that make automating generation of playbooks error-prone, and places where functionality present in some of its competitors (albeit not particularly relevant to Docker image generation) was designed in only as an afterthought.
No tool is perfect; the decision in terms of what meets your needs and fails only in ways you find acceptable needs to be made in the context of your own use cases.