Docker query on containerizing

Our requirement is to containerize a legacy app with Docker.
We don't have the operating system support or application server support available, nor do we have the knowledge to build them from scratch.
But we do have a physical instance of the legacy app running in our farm.
We could get an ISO image from our server team if required. Our question is: if we get this ISO image, can we export it as a Docker image?
If yes, please let me know the specific procedure or steps involved.
If no, please tell me why, and what the possible workarounds are.

if we get this ISO image can we export this as a docker image?
I don't think there is an easy push-the-export-button way to do this. Explanation follows...
You are describing a procedure from the virtual machine world: you take a snapshot of a server, move the .iso file somewhere else, and create a new VM that runs on a hypervisor.
Containers are not VMs. They "contain" all the bytes a service needs to run, but not a whole operating system; they are supposed to run as processes on the host.
Workarounds:
You will have to get your hands dirty. This means finding out what the legacy app uses (for example Apache + PHP + MySQL + app code) and building it from scratch with Docker.
Some thoughts:
containers are supposed to be lightweight. For example, one might use one container for the database, another for Apache, and so on (see the compose sketch after this list). Your case looks like you are moving towards a fat container that has everything inside.
Depending on what the legacy technology is, you might hit a wall. For example, if we are talking about something built on old PHP and MySQL, you might find ready-to-use images on hub.docker.com. But if the legacy app is a financial system written in COBOL, I don't know what your starting point would be...
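To make the lightweight-split concrete, here is a minimal docker-compose sketch for the hypothetical Apache + PHP + MySQL example above; every image tag, credential, and path is a placeholder, not something taken from your app:

    services:
      db:
        image: mysql:5.7                  # database in its own container
        environment:
          MYSQL_ROOT_PASSWORD: example    # placeholder credential
          MYSQL_DATABASE: legacy_app      # placeholder schema name
      web:
        image: php:7.4-apache             # Apache + PHP in a second container
        ports:
          - "8080:80"                     # expose the app on the host
        volumes:
          - ./app:/var/www/html           # placeholder path to the app code
        depends_on:
          - db

With docker compose up, each piece can then be upgraded or replaced independently instead of living inside one fat container.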

You will need to reverse engineer the application's dependencies from the artifacts you have access to. This means recovering the language-specific dependencies (whether Python, Java, PHP, Node, etc.) and any operating-system-level packages that are required.
Essentially you are rebuilding the contents of that ISO image inside your Dockerfile, using OS package installation tools like apt, language-level tools like pip, PECL, PEAR, Composer, or Maven, and finally the files that make up the app code itself.
So, for example: a PHP application might depend on having build-essential and php-mysql installed in the OS. Then the app may depend on packages like twig and monolog loaded through Composer. If you are using Sass, you may need to install Ruby as well.
Your job is to track all of these down and create a Dockerfile that reproduces the ISO image. If you are using a common stack like a J2EE app in Tomcat, or a PHP app fronted by Apache or nginx, there will be base Docker images that get you most of the way there; a sketch of what such a Dockerfile can look like follows.
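A hedged sketch, reusing the PHP example above (the extensions, OS packages, and file layout are illustrative assumptions, not taken from the question):

    FROM php:7.4-apache
    # OS-level dependencies, discovered by inspecting the existing server
    RUN apt-get update && apt-get install -y libzip-dev \
        && docker-php-ext-install pdo_mysql zip
    WORKDIR /var/www/html
    # language-level dependencies come from the dependency manager's lock file
    COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer
    COPY composer.json composer.lock ./
    RUN composer install --no-dev
    # and finally the application code itself
    COPY . .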
It does look like there are some tools that can do this for you automatically: see Dependency Walker equivalent for Linux?. I can't vouch for any of them, but you can also use command-line tools. For example, this will give you a list of all the user-installed packages on a Fedora system:
    sudo dnf history userinstalled
When an app uses a dependency manager like Composer or pip, there is usually a file that lists all the language-specific dependencies.
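A few equivalents on other systems, plus ways to dump those dependency-manager lists (verify the commands against your own distro and toolchain):

    # Debian/Ubuntu: packages that were installed explicitly
    apt-mark showmanual
    # Python: capture the installed library versions
    pip freeze > requirements.txt
    # PHP/Composer: installed packages (the canonical lists live in composer.json / composer.lock)
    composer show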
At the end of the process you'll have a portable legacy app that can be easily deployed anywhere with a minimal footprint.
As one of the comments rightly points out, creating a VM from the ISO image is another way forward that will be much easier to accomplish. The application dependencies won't be explicit, but maybe that's ok for your use case.

Related

Emacs workflow with development containers

New to Emacs and recently been trying to get used to it. Loving it so far!
One thing I cannot seem to figure out by myself, nor find any proper examples of, is the following workflow:
Since I work on multiple projects in different languages, and like to keep my work and private projects as separate as possible in my OS, I've been working with development containers using Docker and VS Code for the past years.
This allowed me to keep both my project dependencies and the development tools in one container, where I just attached my VS Code instance to a project and used extensions such as language servers / linting / debugging from within that container.
Currently I can open my projects in Emacs, as the code is local and mounted into the containers, but I'm looking for a way to either:
Allow my local Emacs to use the language/linting/debugging services installed in the container.
Install Emacs in the dev containers and mount my configs to keep everything synchronized.
Or better alternatives?
Most valuable would be to get language servers working again.
In case it matters, I'm working in Doom Emacs on Arch. Mostly Python, PHP and NodeJS projects.
... use the language/linting/debugging services installed in the container
This is difficult to do with Docker by design: the host system can't directly access files or binaries installed in a container. Without a lot of tricks around bind mounts, user IDs, paths, and permissions, it's very difficult to run a program in a container in a way that looks like it's on the host system. A couple of tools have those tricks built in, but it's not at all universal. (Jenkins, for example, generates about five lines' worth of docker run command options if you ask it to run a step inside a container.)
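One of those tricks, sketched here with everything hypothetical (the devbox container name, the pyright server, and the assumption that the project is bind-mounted at the identical path inside and outside the container), is to point lsp-mode or eglot at a wrapper script that tunnels the server over stdio:

    #!/bin/sh
    # hypothetical wrapper: run a language server from inside a running container;
    # file references only line up if the project path matches on both sides of
    # the bind mount, which is exactly the kind of fragility described above
    exec docker exec -i devbox pyright-langserver --stdio

You would then register the wrapper as the server executable; path and user-ID mismatches are where this tends to break down.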
My Emacs experience has generally been much better using a host-based per-language version manager and per-project packaging tool (a per-project node_modules directory, rbenv plus Ruby gem sets, pipenv for Python programs, ...).
In short: Emacs can't use language servers, language interpreters, or other tools from Docker images instead of the host system (without writing a lot of Lisp (and if you do, consider publishing it to MELPA (and also to GitHub))).
Most valuable would be to get language servers working again.
M-x lsp-install-server will download one of the language servers lsp-mode knows about and save it under your $HOME/.emacs.d directory. If you activate lsp-mode and it doesn't already have a language server for the current major mode, it will offer to download one for you. There's usually not much to "get working".

Virtual environment for each tool to avoid dependency conflicts?

I was wondering if anyone could help me understand the difference between a Python virtual environment and a Docker container.
I would like to have an environment for each tool, isolated from the others, to avoid dependency conflicts: for example, two tools needing different versions of the same dependency, where one tool needs the older version and the other requires the newer one, causing an error in one of them.
I tested out Python venv, but I'm not sure if it's the right choice for the issue I just explained, or whether Docker is something I should be using for my situation.
Particularly for day-to-day development, prefer a virtual environment if it's practical.
Virtual environment | Docker
--- | ---
Works with native tools; can just run python myscript.py | Requires Docker-specific setup
Every IDE and editor works fine with it | Requires Docker-specific IDE support
Can just open() data files with no special setup | Can't access data files without Docker-specific mount setup
Immediately re-run code after editing it | Requires rebuilding the image with docker build, or a Docker-specific mount setup
Uses the Python installation from the host | Can use any single specific version of Python
Isolated Python library tree | Isolated Python library tree
Uses host versions of C library dependencies | Isolated C library dependencies
A virtual environment acts like a normal Python installation in an alternate path. You don't need to do special things to make your local code or data files available; you can just run your script directly or via your IDE. The one downside is that you're limited to what your host OS's package manager makes available for Python versions and C library dependencies.
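For reference, the entire day-to-day venv workflow is a handful of commands (assuming a requirements.txt and the myscript.py from the table above):

    python3 -m venv .venv               # create the environment once
    . .venv/bin/activate                # activate it in this shell
    pip install -r requirements.txt     # install the project's dependencies
    python myscript.py                  # run with native tools, no Docker involved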
A Docker container contains the filesystem of a complete OS, including a completely isolated Python installation. It can be a good match if you need a very specific version of Python or if you need host OS dependencies that are tricky to install. It can also be a good match if you're looking for a production-oriented deployment setup that doesn't specifically depend on installing things on to the target system. But Docker by design makes it hard to access your host files; it is not a great match for a live development environment, or especially for one-off scripts that read and write host files.
The other consideration here is that, if you use the standard Python packaging tools, it's straightforward to run your program in a virtual environment, and converting that to a Docker image is almost boilerplate. Starting from Docker can make it tricky to go back the other way, and I see some setups around SO that can only be run via Docker; if they were restructured to use a standard setup.cfg/requirements.txt installation, they would not require Docker but could still be used with it.
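That "almost boilerplate" conversion looks roughly like this sketch (the base image tag and entry point are placeholders):

    FROM python:3.11-slim
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt    # the same dependency list the venv used
    COPY . .
    CMD ["python", "myscript.py"]          # placeholder entry point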

Is a Dockerized dev environment good for maintaining legacy software?

Let's say I have an old, unmaintained application that lives on a VPS (e.g. a Symfony 3 PHP app that relies on PHP 5).
If some changes are needed, I have to clone this app to my desktop, build it, make the change, and re-deploy. As time goes on, recreating the desktop dev environment gets harder; in this example I can't simply build the app, as I use PHP 7 in my CLI, which breaks the build process.
I tried to dockerize the app, so I added Ubuntu 18 to my docker-compose file... and it doesn't work, since the latest Ubuntu with PHP 5 support is 14.04. 14.04 is also the oldest (official) version available on Docker Hub. But will it still be available in 3 years? If not, Docker won't be able to build the container.
So, my question is: is Docker a right tool to solve described problem at all?
If so, should I back up the Docker images that my build relies on?
If not, besides proper maintenance, what tool is better?
You can install PHP 5 in newer Ubuntu versions, but it means adding an external repository.
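A hedged sketch of that approach, assuming the widely used ppa:ondrej/php repository, which has historically shipped php5.6 builds for newer Ubuntu releases (verify it still covers the release you pick):

    # a base image that is still maintained, rather than the EOL ubuntu:14.04
    FROM ubuntu:20.04
    ENV DEBIAN_FRONTEND=noninteractive
    # ppa:ondrej/php is the external repository carrying legacy PHP builds
    RUN apt-get update \
        && apt-get install -y software-properties-common \
        && add-apt-repository -y ppa:ondrej/php \
        && apt-get update \
        && apt-get install -y php5.6-cli php5.6-fpm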
You could also create your own Docker image, containing only the libraries you want. If so, I'd advise trying Alpine as a base image. There is a bit of a learning curve to adapt, but once you do, you'll have a small image tailored to your needs.
Given that containers allow you to isolate processes and configuration with a minimal footprint compared to a VM, I think it is the best option. Tailoring and maintaining your own image is not that expensive in terms of maintenance if you document it correctly, and it will let you keep a system that matches all your precise requirements.
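As for backing up the images the build relies on: docker save archives any image, base images included, to a tarball that docker load can restore later, so the build no longer depends on Docker Hub keeping the tag around. For example:

    docker save -o ubuntu14.tar ubuntu:14.04    # archive the image and its layers
    docker load -i ubuntu14.tar                 # restore it on any host, years later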

How do you run an .exe file on Docker?

I am currently trying to understand and learn Docker. I have an app, a .exe file, and I would like to run it on either Linux or OS X by creating a Docker container. I've searched online but can't find anything that allows one to do that, and I don't know Docker well enough to improvise something. Is this possible? Would I have to use Boot2Docker? Could you please point me in the right direction? Thank you in advance, any help is appreciated.
Docker allows you to isolate applications running on a host; it does not provide a different OS to run those applications on (with the exception of the client products that include a Linux VM, since Docker was originally a Linux-only tool). If the application runs on Linux, it can typically run inside a container. If the application cannot run on Linux, it will not run inside a Linux container.
An exe is a Windows binary format, and that format is incompatible with Linux (unless you run it inside an emulator or VM). I'm not aware of any easy way to accomplish your goal. If you want to run this binary, then skip Docker on Linux and install a Windows VM on your host.
As other answers have said, Docker doesn't emulate the entire Windows OS that you would need in order to run an executable .exe file. However, there's another tool that may do something similar to what you want: the Wine app from WineHQ. An abbreviated summary from their site:
Wine is a compatibility layer capable of running Windows applications on several operating systems, such as Linux and macOS. Instead of simulating internal Windows logic like a virtual machine or emulator, Wine translates Windows API calls on-the-fly, eliminating the performance and memory penalties of other methods and allowing you to cleanly integrate Windows applications into your desktop.
(I don't work with nor for WineHQ, nor have I actually used it yet; I've only heard of it, and it seems like it might be a solution for running a Windows program inside of a lightweight container.)
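An untested sketch of that idea, assuming the Debian/Ubuntu wine packages and a hypothetical app.exe (console apps run as-is; GUI apps additionally need an X server):

    FROM ubuntu:22.04
    # many Windows executables are 32-bit, so enable the i386 architecture first
    RUN dpkg --add-architecture i386 \
        && apt-get update \
        && apt-get install -y wine wine32:i386
    # app.exe is a placeholder for your Windows binary
    COPY app.exe /app/app.exe
    CMD ["wine", "/app/app.exe"]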

Use Docker rather than native/homebrew on Mac?

I currently have a LAMP stack installed on my Mac running through Homebrew, which, to be honest, hardly ever gets used.
Lately I have been working a lot with AngularJS and service-based apps, so I generally run the sites through a gulp/Node.js-based web server.
I am totally frontend-oriented, so I very rarely play with backend-related technologies other than the odd Drupal site and MySQL.
I am interested in learning more Node.js, perhaps even some Ruby, purely to understand programming more, not really for it to become my new job description.
So, reading up on Node.js a bit last night, I read a lot about Docker, and installed the toolkit and GUI this morning. It looks pretty neat!
My question is: would it work better for me to just run everything I need through Docker? For example, I could just install the MySQL container and turn it on when I need a db, and spin up a Drupal instance when I need one and connect it to my db instance.
I understand that running Docker on Mac is slower, as it doesn't have the native Linux kernel and runs through a VM, but considering my needs, this should be okay?
I love the idea of just deploying containers, so will probably want to install Docker on my hosting environment too (VM in the cloud).
Follow-up question: 90% of the sites I work on are AngularJS-based frontends that speak to APIs our backend guys build separately. Would it be overkill to have a container for each of those sites, would I rather run them all in one, or should I just bypass Docker entirely for that (as I mentioned, I normally just load them up from within my gulp web server)?
Thanks a lot. I realise this is a n00b asking questions about big technology, but I'm trying to wrap my head around it and hopefully grow a bit in the process.
The interest in deploying Docker containers is reproducibility.
You can easily reproduce:
either a complex development environment requiring the installation of numerous libraries (that you don't want polluting your host directly)
or an execution environment for a given tool to run (like a web server)
If you are not likely to repeat a setup (for dev or exec), a docker container would bring little value.
But if you want to keep track of the exact specification of an environment (through its Dockerfile) and will deploy it not just on your workstation but in other places as well, then Docker is certainly a good option to consider.
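For the "turn on a db when I need one" part of the question, the official mysql image makes that nearly a one-liner (the password and names here are placeholders):

    # start a throwaway MySQL for local work
    docker run -d --name dev-mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=drupal \
        -p 3306:3306 mysql:8
    docker stop dev-mysql     # turn it off when done
    docker start dev-mysql    # and back on later, data intact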
