Need more Docker Engine Resources - docker

I was able to install Docker, as well as pull an image, on my OS.
I went through most of the Docker Arch Wiki.
However, now I don't know what to do next. It seems that the official Docker docs cover just Docker Desktop.
Perhaps I was expecting a Docker image to work like a VM image. What I would like to accomplish here is installing Code - OSS without breaking the packages, and their dependencies, of my OS, since Node.js < 17 is required.
To my understanding, I could use the image and basically play with the packages there, and if something breaks it's just the image that's broken, not my main OS.
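For example (a minimal sketch, assuming the official node:16 image from Docker Hub; the Code - OSS install steps themselves are not shown), a disposable container already gives you that kind of sandbox:

docker pull node:16
# --rm discards the container on exit, so anything you break in it never touches the host
docker run -it --rm node:16 bash
# then, inside that shell, for example:
#   node --version
#   npm install -g <whatever-you-want-to-test>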

Related

Is a Docker-ized dev environment good for maintaining legacy software?

Let's say I have an old, unmaintained application that lives on a VPS (e.g. a Symfony 3 PHP app that relies on PHP 5).
If some changes are needed, I have to clone this app to my desktop, build it, make the changes and re-deploy. As time goes by, recreating the desktop dev environment gets harder; in this example I can't simply build the app, as the PHP 7 in my CLI breaks the build process.
I tried to dockerize the app, so I added Ubuntu 18 to my docker-compose file... and it doesn't work, as the latest Ubuntu with PHP 5 support is 14.04. 14.04 is also the oldest (official) version available on Docker Hub. But will it still be available in 3 years? If not, Docker won't be able to build the container.
So, my question is: is Docker the right tool to solve the described problem at all?
If so, should I back up the Docker images that my build relies on?
If not, besides proper maintenance, what tool is better?
You can install PHP 5 in newer Ubuntu versions, but it means adding an external repository.
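For illustration only, a minimal Dockerfile sketch; it assumes the third-party ondrej/php PPA still publishes php5.6 packages for Ubuntu 18.04, which is exactly the kind of external repository meant above:

FROM ubuntu:18.04
ENV DEBIAN_FRONTEND=noninteractive
# add-apt-repository is shipped in software-properties-common
RUN apt-get update && apt-get install -y software-properties-common
# external PPA providing legacy PHP builds (assumption: it still carries php5.6)
RUN add-apt-repository -y ppa:ondrej/php && apt-get update
RUN apt-get install -y php5.6-cli php5.6-xml php5.6-mbstring && rm -rf /var/lib/apt/lists/*
WORKDIR /app
CMD ["php", "-v"]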
You could also create your own Docker image, containing only the libraries you want. If so, I'd advise trying Alpine as a base image. There is a bit of a learning curve to adapt, but once you do you'll have a small image tailored to your needs.
Given that containers let you isolate processes and configuration with a minimal footprint compared to a VM, I think it is the best option. Tailoring and maintaining your own image is not that expensive in terms of maintenance if you document it correctly, and it will let you always have a system matching all your precise requirements.
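Regarding backing up the images the build relies on: a simple sketch, assuming ubuntu:14.04 is still pullable from Docker Hub today, is to archive the base image yourself so the build no longer depends on the registry keeping it available:

# pull the base image while it is still published
docker pull ubuntu:14.04
# write the image, with all its layers and metadata, to a tarball you control
docker save ubuntu:14.04 -o ubuntu-14.04-base.tar
# years later, restore it on any Docker host without contacting Docker Hub
docker load -i ubuntu-14.04-base.tar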

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I want to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first one is to mount the ISO and find filesystem.squashfs; only modern distros use that approach, and it's not my case, as my distro doesn't have that file. The second approach is to call debootstrap, but it requires specifying a repo for the distro with a dists directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in the root filesystem:
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option (see the configuration sketch below)
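A configuration sketch for both cases; the file paths assume a standard WSL2 setup on Windows and a GRUB-based Linux distribution, so adjust for your own system:

# Windows / WSL2: %UserProfile%\.wslconfig
[wsl2]
kernelCommandLine = vsyscall=emulate

# Native Linux with GRUB: /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
# then regenerate the bootloader configuration and reboot, e.g. on Debian/Ubuntu:
#   sudo update-grub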
I seriously doubt you will succeed on that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. The lightweight virtualization comes from the Docker containers running on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
If you now try to install some old distro inside the container you may end up with an incompatible combination. Making the old userland work with the host's newer kernel may involve upgrading glibc, and upgrading glibc may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously, I believe you are better off with real virtualization.

what programs can be installed in a docker container

I am a Windows user.
I have looked at the official Docker tutorial "Get Started". The example focuses on a Python app. I don't know Python, and I guess a Docker container can have many programs installed as an environment, not just Python.
Is Docker good for testing a program I download from the internet in an isolated environment (like a sandbox in firewalls or antivirus software)?
How, for example, can I make a container that has an environment containing installed programs like Visual Studio, VLC player, Office, etc.?
Thanks,
Abe
Yes; you can have an isolated environment with Docker. You can set your desired configuration, download things from the internet, install them, and do whatever else you would do in a virtual machine.
Yes, you can. What your container contains depends on the base image you create it FROM and the packages you install inside it.
Tips
You can build your container from a bare OS base image (e.g. ubuntu), configure the OS, and download/install/configure/run whatever you want.
You can create a base image which derives FROM a suitable OS image, then install in it any basic application (e.g. Firefox) which you may use in a lot of containers. Then push it to a registry (e.g. GitHub's container registry). After that, you can use it as a base image for other containers, so your new containers have those applications installed by default and there is no need to install them again. This reduces complexity and repetition in your Dockerfiles; a sketch of the pattern is shown below.
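A rough sketch of that pattern; the image names and the Firefox/VLC packages are hypothetical placeholders, and the registry path assumes GitHub's container registry (ghcr.io):

# Dockerfile for the shared base image
FROM ubuntu:20.04
# install the application(s) every derived container should have
RUN apt-get update && apt-get install -y firefox && rm -rf /var/lib/apt/lists/*

# build and publish it once, e.g.:
#   docker build -t ghcr.io/yourname/desktop-base:latest .
#   docker push ghcr.io/yourname/desktop-base:latest

# Dockerfile for a new container that gets Firefox without reinstalling it
FROM ghcr.io/yourname/desktop-base:latest
RUN apt-get update && apt-get install -y vlc && rm -rf /var/lib/apt/lists/*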

How to build an image with Dockerfile in Kitematic?

Is there any way to build an image with a Dockerfile while using Kitematic?
From the top of the docs for Kitematic
Legacy desktop solution. Kitematic is a legacy solution, bundled with Docker Toolbox. We recommend updating to Docker for Mac or Docker for Windows if your system meets the requirements for one of those applications.
If possible, you should avoid using the tool.
If you have to use Kitematic, the feature you are asking about is tracked by this GitHub issue: Import Dockerfile - (Docker build). At the time of writing the feature is not implemented.
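As a workaround (a sketch, not a Kitematic feature), you can build the image from the Docker Quickstart Terminal that ships with Docker Toolbox and run it from there; locally built images should then also show up in Kitematic's image list:

cd path/to/your/project    # the directory containing your Dockerfile
docker build -t my-image:latest .
docker run -it --rm my-image:latest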

Deploy docker image as standalone executable

Are there any tools to install/deploy a Docker image as a standalone/portable installation?
So that you don't have to install Docker manually beforehand: just one installation, and it will run and deploy your Docker image. And perhaps autostart it on boot as well.
Mainly interested in Windows & OSX, but Linux would be nice too.
You can get a standalone Docker image automatically, with preconfigured scaling options, using the already packaged Docker Engine. The details of this solution and its installation are described in the instructions.
I don't think that this is even possible. Docker has so many dependencies.
(Linux and/or OSX)
The much easier way would be a bash script which starts the installation and afterwards runs the container. It shouldn't be that time-consuming.
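A rough sketch of such a script for Linux; it assumes Docker's convenience install script at get.docker.com and a bundled image tarball named myapp-image.tar, both placeholders for your own setup:

#!/bin/sh
set -e
# install Docker only if it is not already present
if ! command -v docker >/dev/null 2>&1; then
    curl -fsSL https://get.docker.com | sh
fi
# load the image shipped alongside this script, then start it and keep it running across reboots
docker load -i ./myapp-image.tar
docker run -d --name myapp --restart unless-stopped myapp:latest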
