Is It Possible to Change the Installation Directory for the IoT Edge Runtime? - azure-iot-edge

We are installing the IoT Edge runtime on a Linux device running Debian 10, following the article below with the minor modifications for Debian mentioned there.
Install Azure IoT Edge | Microsoft Docs
Our hardware vendor is asking us to change the default installation directory of the IoT Edge runtime and not to install anything in the root filesystem.
Is this a recommended practice, and if so, is it possible? Is it also possible to change the installation directory for the Moby runtime, since we would have to change that too?

The package will install files relative to /. You want the package installer (and whatever it installs) to think that / is not really / but some directory of your choice, i.e. a fake root. The general method of working with a fake root is to create a chroot. In your case, you can create a chroot and install the packages inside it.
An explanation of what a chroot is and how to use one is too long for a single SO answer; you can search this site or the internet to learn more.
Note that you'll also have to run the services manually rather than relying on systemd, since the systemd service files will be installed not in the outer system's systemd unit directories but within the chroot, where the outer systemd will not look.
Lastly, you'll have to run the services from within the chroot too, so that binaries in the chroot'd /bin can find libraries in the chroot'd /lib, their configuration in the chroot'd /etc, and so on. A minimal sketch follows.
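Assuming debootstrap is available on the device and that the Microsoft apt repository has been configured inside the new root (the target path and package names here are assumptions; package names follow current IoT Edge releases and may differ for yours):

```
# Build a minimal Debian 10 tree under a directory of your choice.
sudo debootstrap buster /opt/edge-root http://deb.debian.org/debian

# Enter the fake root and install the runtime packages inside it.
sudo chroot /opt/edge-root /bin/bash
apt-get update
apt-get install moby-engine aziot-edge   # assumes the Microsoft repo was added to the chroot's sources
```

You would then start the services via chroot as well, instead of through the host's systemd.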

Related

Virtual environment for each tool to avoid dependency conflicts?

I was wondering if anyone could help me understand the difference between a Python virtual environment and a Docker container.
I would like each tool to have its own environment, isolated from the others, to avoid dependency conflicts: for example, two tools needing different versions of the same dependency, where one tool needs the older version and the other requires the newer one.
I've tested out Python venv, but I'm not sure whether it's the right choice for the problem I just described, or whether Docker is what I should be using in my situation.
Particularly for day-to-day development, prefer a virtual environment if it's practical.
| Virtual environment | Docker |
|---|---|
| Works with native tools; can just run `python myscript.py` | Requires Docker-specific setup |
| Every IDE and editor works fine with it | Requires Docker-specific IDE support |
| Can just `open()` data files with no special setup | Can't access data files without Docker-specific mount setup |
| Immediately re-run code after editing it | Re-run `docker build` or use a Docker-specific mount setup |
| Uses the Python installation from the host | Can use any single specific version of Python |
| Isolated Python library tree | Isolated Python library tree |
| Uses host versions of C library dependencies | Isolated C library dependencies |
A virtual environment acts like a normal Python installation in an alternate path. You don't need to do anything special to make your local code or data files available; you can run your script directly or via your IDE. The one downside is that you're limited to the Python versions and C library dependencies your host OS's package manager makes available.
A Docker container contains the filesystem of a complete OS, including a completely isolated Python installation. It can be a good match if you need a very specific version of Python, or if you need host OS dependencies that are tricky to install. It can also be a good match if you're looking for a production-oriented deployment setup that doesn't depend on installing things onto the target system. But Docker by design makes it hard to access your host files, so it is not a great match for a live development environment, especially for one-off scripts that read and write host files.
The other consideration is that, if you use the standard Python packaging tools, it's straightforward to run your program in a virtual environment, and converting that to a Docker image is almost boilerplate. Starting from Docker can make it tricky to go back the other way; I see some setups on SO that can only be run via Docker, yet if they were restructured around a standard setup.cfg/requirements.txt installation they would not require Docker but could still be used with it. A sketch of both workflows follows.
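Both sides, as a hedged sketch (the file names and base image tag are illustrative; the same requirements.txt drives both):

```
# Day-to-day development in a virtual environment.
python3 -m venv .venv
. .venv/bin/activate
pip install -r requirements.txt
python myscript.py

# The near-boilerplate Docker conversion of the same setup.
cat > Dockerfile <<'EOF'
FROM python:3.10
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "myscript.py"]
EOF
docker build -t myscript .
```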

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a pretty old Linux system on it. That distro has no remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first is to mount the ISO and find filesystem.squashfs, but only modern distros work that way; mine doesn't have that file. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dists directory available in it, and my distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps:
1. If necessary, boot the old OS (or the live CD image).
2. Log in to the old system as root (or use sudo).
3. Create a tarball from the relevant folders present in the root:
   `cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /`
   This selection worked in my case; review for yourself which folders to include or exclude.
4. Transfer the tarball to the Docker host (step not shown here).
5. Import it: `docker import image.tar.gz`. The command will print out a hash.
6. If convenient, tag the imported image: `docker tag <import-hash> <your-label>`. (The combined sequence is sketched below.)
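Put together on the Docker host, the import looks like this (the hash and label are illustrative):

```
# Import the snapshot; docker prints the ID of the new image.
docker import image.tar.gz
# -> sha256:0123abc...

# Tag it and try running a binary from the old distro.
docker tag 0123abc legacy-distro:snapshot
docker run -it --rm legacy-distro:snapshot bin/sh
```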
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
failed silently. (Running it under strace was possible, but gave no clear indication of the problem.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
- for Windows WSL2, this is described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
- for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option (a hedged example follows)
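A hedged sketch of both routes, assuming your kernel was built with legacy vsyscall support and, on native Linux, Debian-style GRUB tooling:

```
# Native Linux: add the parameter to the kernel command line, e.g. in
# /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
sudo update-grub
sudo reboot

# WSL2: put the equivalent in %UserProfile%\.wslconfig and restart WSL:
#   [wsl2]
#   kernelCommandLine = vsyscall=emulate
wsl --shutdown
```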
I seriously doubt you will succeed with that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. Its lightweight virtualization comes from containers running directly on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
If you now try to install some old distro inside the container, you may end up with an incompatible combination. Patching around the kernel mismatch may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick with the old distro, but seriously, I believe you are better off with real virtualization.

Docker query on containerizing

Our requirement is to containerize legacy apps with Docker.
We don't have the operating system support/application server support available, nor do we have the knowledge to build them from scratch.
But we have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know whether there is any specific procedure or steps associated with it.
If no, please tell me why, and the possible workarounds.
> if we get this ISO image can we export this as a docker image?

I don't think there is an easy way (like a push-the-export-button) to do this. An explanation follows...
You are describing a procedure from the virtual machine world: you take a snapshot of a server, move the .iso file somewhere else, and create a new VM that will run on a hypervisor.
Containers are not VMs. They "contain" all the bytes that a service needs to run, but not a whole operating system; they are supposed to run as processes on the host.
Workaround:
You will have to get your hands dirty. This means finding out what the legacy app uses (for example Apache + PHP + MySQL + the app code) and building it from scratch with Docker.
Some thoughts:
- Containers are supposed to be lightweight. For example, one might use one container for the database, another for Apache, etc. Your case looks like you are moving towards a fat container that has everything inside.
- Depending on what the legacy technology is, you might hit a wall. If we are talking about something built on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point would be...
You will need to reverse engineer the application's dependencies from the artifacts you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package tools like apt, language-level tools like pip, PECL, PEAR, composer, or maven, and finally the files that make up the app code.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like twig and monolog loaded through composer. If you are using SASS you may need to install Ruby as well.
Your job is to track all of these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack, like a J2EE app in Tomcat or a PHP app fronted by Apache or nginx, there will be base Docker images that get you most of the way to where you need to go. A hedged sketch of such a Dockerfile follows.
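As a hedged illustration, with the PHP version, extensions, and paths as assumptions rather than details from the question, the end result of this process tends to look like:

```
cat > Dockerfile <<'EOF'
FROM php:7.4-apache
# OS-level dependencies recovered from the legacy system.
RUN apt-get update \
 && apt-get install -y build-essential \
 && docker-php-ext-install mysqli \
 && rm -rf /var/lib/apt/lists/*
# Language-level dependencies from the app's composer files.
COPY composer.json composer.lock /var/www/html/
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer \
 && composer install --working-dir=/var/www/html --no-dev
# Finally, the app code itself.
COPY . /var/www/html/
EOF
docker build -t legacy-app .
```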
It does look like there are some tools that can do this for you automatically: Dependency Walker equivalent for Linux?. I can't vouch for any of them, but you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
`sudo dnf history userinstalled`
When an app uses a dependency manager like composer or pip, there is usually a file that lists all the language-specific dependencies (examples below).
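Hedged examples of recovering those lists from the running legacy instance (which tool applies depends on the stack):

```
pip freeze > requirements.txt   # Python
composer show --direct          # PHP / composer
mvn dependency:list             # Java / Maven
```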
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

What programs can be installed in a Docker container

I am a Windows user.
I have looked at the official Docker tutorial "Get Started". The example focuses on a Python app. I don't know Python, and I guess a Docker container can have many programs installed as an environment, not just Python.
Is Docker good for testing a program I download from the internet in an isolated environment (like a sandbox in a firewall or antivirus)?
For example, how can I make a container whose environment contains installed programs like Visual Studio, VLC player, Office, etc.?
Thanks,
Abe
Yes, you can have an isolated environment with Docker. You can set your desired configuration, download from the internet, install, and do whatever else you would do in a virtual machine.
Yes, you can. What your container contains depends on the base image you create it FROM and the packages you install inside it.
Tips:
- You can build your container from a base OS image (e.g. ubuntu), configure the OS, and download/install/configure/run whatever you want.
- You can create a base image which derives FROM a suitable OS image, then install any basic application (e.g. Firefox) that you may use in a lot of containers. Then push it to a registry (e.g. Docker Hub or GitHub Container Registry). After that, you can use it as a base image for other containers, so your new containers have those applications installed by default and there's no need to install them again. This reduces complexity and repetition in Dockerfiles (a sketch follows).
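A hedged sketch of that workflow; the image names, registry, and Ubuntu tag are illustrative:

```
# Build a reusable base image with a common application preinstalled.
cat > Dockerfile.base <<'EOF'
FROM ubuntu:20.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y firefox \
 && rm -rf /var/lib/apt/lists/*
EOF
docker build -f Dockerfile.base -t ghcr.io/yourorg/ubuntu-firefox:1.0 .
docker push ghcr.io/yourorg/ubuntu-firefox:1.0

# Later containers derive FROM it, so the application is already there.
cat > Dockerfile <<'EOF'
FROM ghcr.io/yourorg/ubuntu-firefox:1.0
# add app-specific layers here
EOF
```

Note that GUI programs like the ones you list still need a display to be useful inside a container; installing them is the easy part.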

What part of installed Python does python4delphi use?

I have Python 2.7 installed in "C:\Python27". I ran the first demo of python4delphi with Delphi 7, which somehow uses my Python 2.7 install folder. If I rename the Python folder, the demo can't run (with no error message). I didn't change the properties of the demo form.
What part/file does python4delphi use from my Python folder?
python4delphi is a loose wrapper around the Python API and as such relies on a functioning Python installation. Typically on Windows this comprises at least the following:
- The main Python directory. On your system this is C:\Python27.
- The Python DLL, which is python27.dll and lives in your system directory.
- Registry settings that indicate where your Python directory is installed.
When you rename the Python directory, the registry settings refer to a location that no longer exists, and so the failure you observe is entirely to be expected. (A hedged example of inspecting those settings is below.)
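A hedged way to inspect those settings from a Windows command prompt (the key shown is the standard per-machine location for Python 2.7; 32-bit installs on 64-bit Windows appear under Wow6432Node, and per-user installs under HKCU instead):

```
reg query "HKLM\SOFTWARE\Python\PythonCore\2.7\InstallPath"
```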
Perhaps you are trying to work out how to deploy your application in a self-contained way, without an external dependency on a Python installation. If so, then I suggest you look into one of the portable Python distributions. You may need to adapt python4delphi a little to find the Python DLL, which will be located under your application's directory, but that should be all that's needed. Take care of the licensing issues too if you do distribute Python with your application.
