I am learning Docker from a LinkedIn Learning video course. The trainer mentioned the following:
Include the installer in your project. Five years from now the installer
for version 0.18 of your project will not still be available. Include
it. If you depend on a piece of software to build your image, check it
into your image.
How can I check a dependency into my image? My understanding is that if we build the image with a command like the one below,
FROM ubuntu:14.0
we have already downloaded the Ubuntu 14.0 software and created the image. Why is the trainer saying that version 14.0 will no longer be available for download five years down the line? Is my understanding right?
Thanks for your time in clarifying.
I'm having difficulty understanding what the LinkedIn teacher is trying to explain.
For images no longer available on Docker Hub, there are usually many years of advance warning before they're removed. For example, .NET Core 2.1.
It's a no-brainer that people (and companies) should invest time in moving away from unsecured legacy software. If an online teacher is saying to pin Ubuntu 14 because you might need it in 2028, I consider that bad practice and not something a decent engineer would ever aspire to do.
What your code is doing is asking for an Ubuntu image with the tag 14, so you can search here to see the results. 14.04 has been pushed recently and scanned for Log4j (a massive security vulnerability), coming back clean, which is great. On the other hand, 14.04.1 was pushed 7 years ago and is clearly not maintained.
One thing most companies do is push images to ECR/Artifactory/DockerHub/etc. You could docker pull, say, ubuntu:14.04 and then docker push it to your own container registry, and you'll have it forever and ever, amen. In this case you'd add your private registry to Docker so it can discover the image, or use a custom URL like my-company-images#artifactory-company.com/ubuntu:14.04
https://docs.aws.amazon.com/AmazonECR/latest/userguide/Registries.html
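As a rough sketch of that mirroring workflow (registry.example.com below is just a placeholder for whatever private registry you actually use):

docker pull ubuntu:14.04
docker tag ubuntu:14.04 registry.example.com/mirror/ubuntu:14.04
docker push registry.example.com/mirror/ubuntu:14.04
# later, in your Dockerfile:
# FROM registry.example.com/mirror/ubuntu:14.04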
Hope this isn't a stupid question. When looking at Docker images, particularly from the official repository, they list multiple versions and labels for each Dockerfile. For instance:
9.1.6-php8.0-apache-buster, 9.1-php8.0-apache-buster, 9-php8.0-apache-buster, php8.0-apache-buster,
9.1.6-php8.0-apache, 9.1-php8.0-apache, 9-php8.0-apache, php8.0-apache,
9.1.6-apache-buster, 9.1-apache-buster, 9-apache-buster, apache-buster,
9.1.6-apache, 9.1-apache, 9-apache, apache,
9.1.6, 9.1, 9, latest, 9.1.6-php8.0, 9.1-php8.0, 9-php8.0, php8.0
My question is: why do they list so many variations of the version in the link (i.e. 9.1.6-php8.0-apache-buster and 9.1-php8.0-apache-buster, etc.)? I'm not sure if this is for searching and spiders (though it wouldn't need to be included in the link like that), or if it is because each Dockerfile can be modified to any of those versions (and if so, how?). For instance, the above drupal Dockerfile supports 9 through 9.1.6, and the Dockerfile can be adjusted to that version. TBH, it's mainly just confusing why they do their links like that if it's just for search indexing, because it looks like it's supporting multiple versions of something.
Today, these tags may all point to the same image. However, tomorrow 9.1.7 may be released, and when that happens, all the 9.1.6 images will remain unchanged and a new set of 9.1.7 images will be built and given the generic tags like 9.1 and 9. Additionally, when 9.2.0 is released, or PHP 8.1 comes out, or the next version of Debian is released, any of these could be breaking changes for an app. So in your Dockerfile, you might say:
FROM ${BASE}:9.1-php8.0-apache-buster
And by doing so, you'll get 9.1.7 when it's released, but not 9.2.0, and you won't accidentally be shifted over to nginx or upgraded from buster to bullseye when it becomes stable. The more change your app is able to tolerate, the more generic you may be in your base image tag, the risk being that one of those changes may break your app.
You can be very specific and pin to an exact base image that doesn't automatically update, but then as security vulnerabilities are patched in newer base images, your child image would remain vulnerable since it's locked to an old base image that won't receive updates.
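As a hedged illustration of those two extremes (drupal is the image from the question above; the digest line is only a pattern, not a real value):

# floats within the 9.1.x / PHP 8.0 / buster line, so patch releases are picked up automatically
FROM drupal:9.1-php8.0-apache-buster
# fully pinned to one immutable image: it never changes under you, but it also never receives base-image security fixes
# FROM drupal@sha256:<digest-of-the-exact-image-you-tested>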
I am a bit confused about Docker and how I can use it. My situation is the following:
I have a project that requires a prerequisite to be installed, in my case ROS2. I have installed it on my system and developed a program. No problem there.
I wish to upload it to GitLab and use CI/CD there. So I am guessing I will push it to my repository and then build a pipeline where I can use the Docker image for ROS2 as the image. I haven't tried it yet (I will do it tomorrow), but I guess that is how I should do it.
My question is: can I do something similar (and if so, how?) on my local machine? In other words, can I just use the Docker image and then develop and build in there, without installing the prerequisite in the first place?
I heartily agree that using Docker to develop locally improves the development experience, primarily by obviating system-specific dependency management, just as you say.
Exactly how this is done depends on how many components you need to develop simultaneously, and how you want the development environment to behave.
An obvious place to start might be docker compose, a framework for starting multiple docker containers. https://docs.docker.com/compose/gettingstarted/ looks like quite a nice tutorial on the subject, and straight from the horse's mouth too.
However, your robotics project (?) may not be a very good fit for the server/client model behind the write, restart Python, execute client, debug, repeat cycle in that document. To provide a better answer, we'd need a lot more understanding of how exactly your local development works; what exactly you want your development process to look like in this project might require a different solution. So add some workflow details to your question!
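In the meantime, a minimal sketch of the simplest local workflow, assuming the official ros image on Docker Hub and a distribution tag such as humble (swap in whatever ROS2 distribution you actually target):

# start a throwaway container with your source tree bind-mounted at /workspace
docker run -it --rm -v "$(pwd)":/workspace -w /workspace ros:humble bash
# inside the container you build and run against the ROS2 installation baked
# into the image, so nothing needs to be installed on the host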
For some reason, we have to build a Windows-based Docker image. From here, we know there are four types of base image we could build from:
windows/nanoserver
windows/servercore
windows
windows/iotcore
I am sure IoT is not relevant for me, so windows/iotcore is excluded, but I am not sure about the remaining three. From a size perspective it seems that nanoserver < servercore < windows, so I should try them in that order. So far, my service will not start in 1 or 2, so I have to try 3.
What are the criteria for choosing between them?
Clearly, I am missing some DLL needed to start the service, and Dependency Walker also does not seem to work in base images 1 and 2. Does someone have experience with how to identify this missing DLL? That way it would still be possible to use a minimal base image plus the missing DLL.
Progress update:
My service runs successfully with #3 (the windows base image), but the Docker image size is very, very large; see below. This makes the choice important.
mcr.microsoft.com/windows/nanoserver 10.0.14393.2430 9fd35fc2a361 15 months ago 1.14GB
mcr.microsoft.com/windows/servercore 1809-amd64 733821d00bd5 5 days ago 4.81GB
mcr.microsoft.com/windows 1809-amd64 57e56a07cc8a 6 days ago 12GB
Many Thanks.
You've probably moved on by now, but essentially:
IoT - tiny, for builders and maker boards.
Nanoserver = smallest, for running .NET Core apps. You have to build it using multi-stage builds; from what I've seen it's quite advanced to get working.
ServerCore = middle, a GUI-less Windows Server. It is the most common default base image. You've not said which service is not running, but it's possible that including the App Compatibility FOD (Feature on Demand) might solve the problem without increasing the size as much. Use the newest container; 1903, I think it is.
https://learn.microsoft.com/en-us/windows-server/get-started-19/install-fod-19
Windows = fattest, the whole shebang.
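For reference, a minimal Dockerfile sketch of the Server Core route (the paths and service.exe name are purely illustrative; 1809 matches the tags listed in the question):

FROM mcr.microsoft.com/windows/servercore:1809
WORKDIR C:/app
# copy the service binaries together with any DLLs they depend on
COPY ./service/ .
ENTRYPOINT ["C:\\app\\service.exe"]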
I have been working with Docker for Windows for about a year now, and I still do not have a good grasp of when I should use the different images, how they are related, and what components of Windows are in them.
On this link:
https://hub.docker.com/_/microsoft-windows-base-os-images
there are four "Featured repos":
windows/servercore
windows/nanoserver
windows/iotcore
windows
I understand that windows/servercore should contain more things than nanoserver, but what are those things exactly? Why do some programs work in servercore and not in nanoserver, and is there some way of finding out what is missing in nanoserver for a particular program?
In addition to this, they list three related repos:
microsoft/dotnet-framework
microsoft/dotnet
microsoft/iis
Both of the dotnet repos contain five sub-repos, and the difference is that dotnet-framework is based on Server Core, while dotnet is based on Nano Server.
Is there some comprehensible documentation of all these repos/images, maybe with a graph for a simple overview? Do some of them have a public Dockerfile that explains how they were created, like this one, for example?
https://github.com/docker-library/python/blob/master/3.6/windows/windowsservercore-ltsc2016/Dockerfile
The differences you are mentioning are less linked to Docker than you think.
All images are a succession of operations that result in a functioning environment. See it as an automated installation, just like the one you would do by hand on a physical machine.
Having different images in a repo means that the installation is different, with different settings. I'm not a .NET expert nor a Windows Server enthusiast, but from what I found, Nano Server is another way to install Windows Server, with less functionality, so it's lightweight. (https://learn.microsoft.com/en-us/windows-server/get-started/getting-started-with-nano-server)
That kind of technical difference is technology-specific, and you'll find all the information you need in Microsoft's official documentation.
Remember that Docker is a way to do something, not the designer of the OS you are using, so most of the time you'll have to search the actual documentation of your system (in this case, Windows Server and the .NET Framework).
I hope this helped you understand a little better; have fun with Docker!
When building Docker images, I find myself in a strange place -- I feel like I'm doing something that somebody has already done many times before, and did a vastly better job at it. In most cases, this gut feeling is absolutely right -- I'm taking a piece of software and re-describing everything that's already described in the OS's packaging system in a Dockerfile.
More often than not, I even find myself installing software into the image using a package manager and then looking inside that package to get clues about writable paths, configuration files, open ports, etc. for my Dockerfile. The duplication of effort between the OS packager and the Docker packager is most evident in such a case, which I assume is one of the more common ones.
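For example, the kind of inspection I mean looks something like this (the Debian base and nginx are just illustrative choices):

# install a package, then ask the package manager which files it laid down
docker run --rm debian:bullseye bash -c \
  "apt-get update -qq && apt-get install -y -qq nginx >/dev/null && dpkg -L nginx"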
So basically, every Docker user building an image on top of pre-packaged software is re-packaging almost from scratch, but without the time and often the domain knowledge the OS packagers had for trial, error and polish. If we consider the low reusability of community-maintained images (re-basing from Debian to RHEL hurts), we're stuck with copying or re-implementing functionality that already exists and works at the OS level, wasting a lot of time and putting a maintenance burden on the poor souls who'd inherit whatever we might leave behind.
Is there any way to resolve this duplication of effort and re-use whatever package maintainers have already learned about a piece of software in Docker?
The main source for Docker image reuse is hub.docker.com.
Search there first to see whether your system is already described in one of those images.
You can see their Dockerfiles and start your own from one of those images instead of starting from a basic ubuntu or wheezy one.
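For instance, a minimal sketch of the contrast, using nginx purely as an illustrative package:

# Option 1: re-package it yourself on a generic base and rediscover ports,
# config paths, run users, and signal handling on your own
# FROM debian:bullseye
# RUN apt-get update && apt-get install -y nginx && rm -rf /var/lib/apt/lists/*
# Option 2: reuse the official image whose maintainers already encoded that knowledge
FROM nginx:stable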