I have been working with Docker for Windows for about a year now, and I still do not have a good grasp of when I should use the different images, how they are related, and which components of Windows are in them.
On this link:
https://hub.docker.com/_/microsoft-windows-base-os-images
there are four "Featured repos":
windows/servercore
windows/nanoserver
windows/iotcore
windows
I understand that windows/servercore should contain more than nanoserver, but what exactly are those extra components? Why do some programs work in servercore but not in nanoserver, and is there some way of finding out what is missing in nanoserver for a particular program?
In addition to this, they list three related repos:
microsoft/dotnet-framework
microsoft/dotnet
microsoft/iis
Both of the dotnet repos contain five sub repos, and the difference is that dotnet-framework is based on server core, while dotnet is based on nanoserver.
Is there some comprehensible documentation of all these repos/images, maybe with a graph for a simple overview? Do some of them have a public Dockerfile that explains how they were created, like, for example, this one?
https://github.com/docker-library/python/blob/master/3.6/windows/windowsservercore-ltsc2016/Dockerfile
The differences you are mentioning are less linked to Docker than you think.
Every image is a succession of operations that results in a functioning environment. Think of it as an automated installation, just like the one you would do by hand on a physical machine.
Having different images in a repo means that the installation is different, with different settings. I'm not a .NET expert nor a Windows Server enthusiast, but from what I found, Nano Server is another way to install Windows Server, with less functionality so that it stays lightweight. (https://learn.microsoft.com/en-us/windows-server/get-started/getting-started-with-nano-server)
These kinds of technical differences are technology-specific, and you'll find all the information you need in Microsoft's official documentation.
Remember that Docker is a way to do something, not the designer of the OS you are using, so most of the time you'll have to search the actual documentation of your system (in this case, Windows Server and the .NET Framework).
I hope this helped you to understand a little better, have fun with Docker!
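One rough way to probe what a particular base image contains is simply to try things inside it. As a sketch, on a Windows container host you could check whether Windows PowerShell is present in each image (the ltsc2022 tags are just examples and need to be compatible with your host version):

# Server Core includes Windows PowerShell, so this works:
docker run --rm mcr.microsoft.com/windows/servercore:ltsc2022 powershell -Command Get-Host

# Nano Server does not ship Windows PowerShell by default, so the same
# command fails there; cmd.exe is still available:
docker run --rm mcr.microsoft.com/windows/nanoserver:ltsc2022 cmd /c ver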
I am learning Docker from a LinkedIn Learning video class. The trainer mentioned the following:
Include the installer in your project. Five years from now the installer
for version 0.18 of your project will not still be available. Include
it. If you depend on a piece of software to build your image, check it
into your image.
How can I check a dependency into my image? My understanding is that we build an image by giving commands like the one below.
FROM ubuntu:14.04
We have already downloaded the Ubuntu 14.04 base image and created our image from it. Why is the trainer saying that version 14.04 might not be available to download five years down the line? Is my understanding right?
Thanks for your time in clarifying
I'm having difficulty understanding what the LinkedIn teacher is trying to explain.
For images that are removed from Docker Hub there are usually many years of warnings beforehand. For example, .NET Core 2.1.
It's a no-brainer that people (and companies) should invest the time to move away from insecure legacy software. If an online teacher is saying to pin Ubuntu 14 because you might need it in 2028, I consider that bad practice and not something a decent engineer would aspire to do.
What your code is doing is asking for an Ubuntu image with that tag, so you can search the ubuntu tags on Docker Hub to see the results. 14.04 was recently pushed and scanned for Log4j (Log4Shell, a massive security vulnerability), coming back clean, which is great. On the other hand, 14.04.1 was pushed 7 years ago and is clearly not maintained.
One thing most companies do is push images to ECR/Artifactory/Docker Hub/etc., so you could docker pull, say, ubuntu:14.04 and then docker push it to your own container registry, and you'll have it forever and ever, amen. In this case you'd add your private registry to Docker so it can discover the image, or use a custom URL like my-company-images#artifactory-company.com/ubuntu:14.04
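As a rough sketch of that mirroring idea (registry.example.com and the mirror/ path are placeholders, not a real registry):

# Pull the public image while it is still available
docker pull ubuntu:14.04

# Re-tag it for your own registry
docker tag ubuntu:14.04 registry.example.com/mirror/ubuntu:14.04

# Log in and push the copy so you control its lifetime
docker login registry.example.com
docker push registry.example.com/mirror/ubuntu:14.04

# Later, base your builds on the mirrored copy instead of Docker Hub:
#   FROM registry.example.com/mirror/ubuntu:14.04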
https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html
Short version
I would like to know the technical reasons why Docker images need to be created for multiple architectures. Also, it is not clear to me whether the point is creating an image for each CPU architecture or for each OS. Shouldn't the OS abstract the architecture?
Long version
I can understand why the Docker Engine must be ported to multiple architectures. It is a piece of software that will interact with the OS, make system calls, and ultimately it is just code that is represented as a sequence of instructions within a particular instruction set, for a particular architecture. So the Docker Engine must be ported to multiple OS/architectures much like, let's say, Microsoft Word would have to be ported.
The same would apply to, let's say, the JVM, or to VirtualBox.
But, unlike with Docker, software written for the JVM on Windows will also run on Linux. The JVM abstracts away the differences in the underlying OS/architecture and runs the same code on both platforms.
Why isn't that the case with Docker images? Why can't the Docker Engine just abstract the differences, and provide a common interface, so the image itself wouldn't need to be compatible with a specific OS/architecture?
Is this a decision (like "let's make different images per architecture because it is better for reason X"), or a consequence of how Docker works (like "we need to do it this way because Docker requires Y")?
Note
I'm not crying "omg, why??". This is not a rant or criticism, I'm just looking for a technical explanation for the need of different images for different architectures.
I'm not asking how to create a multi-architecture image.
I'm not looking for an answer like "multi-architecture images are needed so you can run your images on various platforms", which answers "what for?", but not "why is that needed?" (which is my question).
Besides that, when you see an image, it usually has an os/arch associated with each digest, for example linux/amd64:
What exactly is the image targeting? The OS, the architecture, or both? Shouldn't the OS abstract the underlying architecture?
edit: I'm starting to assume that the need for different images per architecture is along these lines: the image will contain applications inside it. Let's say it will contain the Go compiler. The Go compiler itself is a binary that must have been compiled for different architectures. The image for x86-64 will contain the Go compiler compiled for x86-64, and so on. Is this correct? If this is correct, is this the only reason?
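A quick way to see that assumption concretely, if you have the Go toolchain and some Go program in the current directory (the output file names are arbitrary):

# Build the same Go program for two different CPU architectures
GOOS=linux GOARCH=amd64 go build -o hello-amd64 .
GOOS=linux GOARCH=arm64 go build -o hello-arm64 .

# The two binaries contain different machine code for the same source
file hello-amd64   # reports something like: ELF 64-bit LSB executable, x86-64
file hello-arm64   # reports something like: ELF 64-bit LSB executable, ARM aarch64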
Why can't the Docker Engine just abstract the differences, and provide a common interface
Performance would be a major factor. Consider how slow Cygwin is for some things when providing a POSIX API on top of Windows by emulating some POSIX things that don't map directly to the Windows API. (e.g. fork() / exec separately, instead of CreateProcess).
And that's just source compatibility; the resulting binaries are specific to Cygwin on Windows. It's even worse if you want to do that at runtime (binary compat instead of source compat).
There's also the amount of complexity Docker would need to provide an efficient portable JIT-compiling VM on top of various OSes, especially across various CPU ISAs like x86-64 vs. AArch64 that don't even share common machine code.
If Docker had gone this route, it would really just be re-inventing a JVM or .NET CLR bytecode-based VM.
Or more likely, instead of reinventing that wheel, it would just use an existing VM and add image management on top of that. But then it couldn't work with native programs written in C, unless it transpiled them to Java or CLR bytecode.
Although the promise of Docker is the elimination of differences when moving software between machines, you'll still face the fact that Docker runs with the host machine's CPU architecture, and that boundary can't be crossed by Docker itself.
Neither Docker nor a virtual machine abstracts the CPU to enable full cross-compatibility.
Emulators do. If Docker and VMs ran on emulators, they would be less performant than they are today.
The docker buildx command (often combined with a --build-arg ARCH flag in the Dockerfile) takes advantage of the QEMU emulator, emulating a full system of the target architecture during the build. The downside of emulation is that it runs much slower than a native build.
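For illustration, a minimal multi-architecture build might look like this (the image name is a placeholder, and the binfmt setup step is only needed once per host):

# Register QEMU emulators for foreign architectures (one-time setup)
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and use a buildx builder that supports multi-platform builds
docker buildx create --name multi --use

# Build the same Dockerfile for two architectures and push the manifest list
docker buildx build --platform linux/amd64,linux/arm64 \
  -t registry.example.com/myapp:latest --push .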
I am a bit confused about Docker and how can I use it. My situation is the following:
I have a project that requires a prerequisite, in my case installing ROS 2. I have installed it on my system and developed a program. No problem there.
I wish to upload it to GitLab and use CI/CD there. So I am guessing I will push it to my repository and then build a pipeline that uses the Docker image for ROS 2 as its image. I haven't tried it yet (I will do it tomorrow), but I guess that is how I should do it.
My question is: can I do something similar (and if so, how) on my local machine? In other words, just use the Docker image, then develop and build in there, and not install the prerequisite in the first place?
I heartily agree that using docker to develop locally improves the development experience, primarily by obviating system specific dependency management, just as you say.
Exactly how this is done depends on how many components you need to develop simultaneously, and how you want the development environment to behave.
An obvious place to start might be docker compose, a framework for starting multiple docker containers. https://docs.docker.com/compose/gettingstarted/ looks like quite a nice tutorial on the subject, and straight from the horse's mouth too.
However, your robotics project (?) may not be a very good fit for the server/client model behind the write - restart python - execute client - debug - repeat cycle in the document. To provide a better answer, we'd need a lot more understanding of how exactly your local development works - what exactly you want your development process to look like in this project might require a different solution. So add some workflow details to your question!
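In the meantime, as a minimal sketch of the "develop inside the container" idea before reaching for Compose (the image tag and paths are assumptions; adjust them to whatever ROS 2 image and workspace layout you actually use):

# Start an interactive ROS 2 container with the local source mounted in
docker run -it --rm \
  -v "$(pwd)":/workspace \
  -w /workspace \
  ros:humble \
  bash

# Inside the container you can build and test without installing ROS 2
# on the host, e.g. with the usual ROS 2 build tool:
#   colcon build
#   colcon test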
I started looking into Docker lately, and I think I understand a lot of the benefits it offers: you can quickly create a Docker container and run it on different machines. Building (compiling) is also relatively easy; you can download the Maven image, for example, and just build your code. That works fine. So building is easy, testing is easy, and deploying (and running) in production is easy.
What I don't understand is how Docker can make the development phase easier. And what I mean by the development phase is: starting up your IDE, reading code, quickly navigating to method definitions using the tools the IDE provides, using IntelliSense, etc. Then changing something, running a unit test, trying a different third-party library, etc. All things you can do with your IDE. But I don't understand how to do this with a Docker image. I've read a few posts about starting the IDE from within your Docker container, but that requires setting things up with a window manager, and I am not sure that's the way to go.
Of course I can set up my laptop in such a way that I can do all of this with my IDE, but that way I bypass all of the benefits docker should offer. I still have to download dependencies, set up environment variables, do a lot of manual settings etc. And not just me, but everyone in the team.
So, not a very concrete question, possibly a duplicate, but I just can't wrap my head around it: how do you use an IDE together with Docker?
Yeah, it's hard. It also depends on what language/framework you're using. But the things you mention are all easy to accomplish. For example, we use Ruby a lot, and someone on my team uses RubyMine to work with his code. That source code is mapped onto the container, so the changes are reflected immediately. If you want to run a test, I'm sure you can override the command your IDE uses by default with something custom like docker run --rm myapp ./run_tests.sh or similar. At least that's what I do with Vim.
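That pattern might look roughly like this (myapp and ./run_tests.sh are just placeholders from the example above, not a real project):

# Mount the local source tree into the container so edits made in the IDE
# are visible inside it immediately, then run the test script there
docker run --rm \
  -v "$(pwd)":/app \
  -w /app \
  myapp \
  ./run_tests.sh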
Probably the most important missing part when doing dev with Docker is debugging. I think JetBrains is starting to add features to their IDEs, but I'm not sure of the status of that.
Also, almost every IDE or good editor has an integrated console. You could keep a docker exec session open there and run all your app commands, like tests, generators, or anything else. You can even do some basic debugging.
Hope it helps.
When building Docker images, I find myself in a strange place -- I feel like I'm doing something that somebody has already done many times before -- and did a vastly better job at it. In most cases, this gut feeling is absolutely right -- I'm taking a piece of software and re-describing everything that's already described in the OS's packaging system in a Dockerfile.
More often than not, I even find myself installing software into the image using a package manager and then looking inside that package to get some clues about writable paths, configuration files, open ports, etc. for my Dockerfile. The duplication of effort between the OS packager and the Docker packager is most evident in such a case, which I assume is one of the more common ones.
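That digging usually boils down to querying the package database inside the image; for example, on a Debian-based image (nginx:stable and the nginx package are just illustrative choices here):

# List the files the distribution package installed, to spot config files,
# data directories and other paths worth handling in the Dockerfile
docker run --rm nginx:stable dpkg -L nginx

# Show the package metadata (dependencies, description, etc.)
docker run --rm nginx:stable dpkg -s nginx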
So basically, every Docker user building an image on top of pre-packaged software is re-packaging almost from scratch, but without the time and often the domain knowledge the OS packagers had for trial, error and polish. If we consider the low reusability of community-maintained images (re-basing from Debian to RHEL hurts), we're stuck with copying or re-implementing functionality that already exists and works on OS level, wasting a lot of time and putting a maintenance burden on the poor souls who'd inherit whatever we might leave behind.
Is there any way to resolve this duplication of effort and re-use whatever package maintainers have already learned about a piece of software in Docker?
The main source for Docker image reuse is hub.docker.com
Search there first to see if your system is already described in one of those images.
You can see their Dockerfiles and start your own image from one of them instead of starting from a basic ubuntu or wheezy one.
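If an image on the hub looks close to what you need, you can also inspect how it was built before basing your own work on it (python:3.12-slim is just an example image):

# Pull a candidate image and look at the build steps recorded in its layers
docker pull python:3.12-slim
docker history --no-trunc python:3.12-slim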