I have a Python application consisting of the following components:
1. Database
2. Python app running on Anaconda
3. Linux OS
The idea is to dockerize these three components into isolated containers and then link them together at run time.
It's clear to me how to link the database image with the Linux image, but how can I combine Anaconda and Linux? Isn't Anaconda supposed to be installed on a Linux system?
You will only have two containers. Both your database and your Python app presumably need a Linux OS of one flavor or another. In your Dockerfile you would start with something like FROM ubuntu to pull in a base image and then make your changes. Thanks to the diff-based file system, your changes will be layered on top of the base image.
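For illustration, a minimal Dockerfile for the Python/Anaconda container might look like the sketch below. The Miniconda installer URL, the environment name (myapp) and the entry point (main.py) are assumptions for the example, not details from the question:

FROM ubuntu:22.04
# tools needed to fetch the Miniconda installer
RUN apt-get update && apt-get install -y wget bzip2 && rm -rf /var/lib/apt/lists/*
# install Miniconda (a slim Anaconda distribution) into /opt/conda
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh \
    && bash /tmp/miniconda.sh -b -p /opt/conda \
    && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
# copy the application and create its conda environment
COPY . /app
WORKDIR /app
RUN conda env create -f environment.yml
# run the app inside the conda environment
CMD ["conda", "run", "-n", "myapp", "python", "main.py"]

Alternatively, a prebuilt image such as continuumio/miniconda3 can serve as the base, which skips the manual installer step. The database then runs from its own image (e.g. the official postgres image), and the two containers are connected at run time, for example with a user-defined Docker network or docker-compose.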
Related
I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro doesn't have a remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I've found several ways to do that. The first one is to mount the ISO and find filesystem.squashfs - only modern distros use that approach, not my case; my distro doesn't have that file available. The second approach is to call debootstrap, but it requires specifying a repo for the distro with a dist directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
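If convenient, the import and the tag can also be done in a single step by giving docker import a repository name (the name here is only an example):
docker import image.tar.gz my-legacy-distro:latest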
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API:
for Windows WSL2, this is described here: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option (see the example below)
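As a concrete illustration, and assuming the running kernel supports the option, the setting looks roughly like this (the file contents are examples, not taken from the systems above):

# Windows/WSL2: add to %UserProfile%\.wslconfig, then restart WSL
[wsl2]
kernelCommandLine = vsyscall=emulate

# native Linux with GRUB (Debian/Ubuntu-style): edit /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash vsyscall=emulate"
# then run: sudo update-grub && sudo reboot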
I seriously doubt you will succeed on that.
Be aware that Docker is not full virtualization like KVM or VirtualBox. The lightweight virtualization comes from the Docker containers running on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
If you now try to install some old distro inside the container, you may end up with an incompatible combination. Patching the kernel may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.
I'm currently building a Docker container that contains all the libraries needed for deployment of our app on a test machine, for example OpenCV 3.3 built with CUDA 9.
So, on a clean minimal OS install we can download the container and fire up our app in the desired environment, which, as I understand it, is one of the main reasons to use Docker.
So, after a while we decide to do our tests on the bare metal, without the Docker file system, etc., in the way. Can we somehow replay the Dockerfile commands or the image command history to run the apt-get etc. not just of the current image, but of all the FROM base images whose packages are not yet installed in the raw environment?
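For what it's worth, the commands recorded in each image layer, including the layers inherited from the FROM base images, can at least be listed with docker history (the image name below is a placeholder); replaying them on bare metal would still be a manual exercise:
docker history --no-trunc my-app-image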
We have a large application with several parts running on a Windows VM and I am trying to evaluate Docker containers for our application deployment. Is it possible to create a base docker image from an existing Windows VM already running my application? (I know this can be done using Dockerfile but I am looking for a quick way to create the image)
https://docs.docker.com/engine/userguide/eng-image/baseimages/
The link above describes creating an image from a working machine for Linux, but I am looking for something similar for Windows.
The only base images for Windows that I know of are the ones provided by Microsoft, for Windows Server 2016 or 1709.
See "PoC: How to build images for 1709 without 1709"
That does not mean you can translate just any Windows VM into an image.
You would need:
a Dockerfile
the right Microsoft base image, representing a Windows Server environment.
Typically:
microsoft/nanoserver,
microsoft/windowsservercore
If your application only runs on a Windows VM, you need to make sure it can be installed and run on one of those base Windows images.
Even though you are using a Windows Server 2016 VM, you would not be able to quickly "capture its state": you need a Dockerfile to describe what you want your Windows container to run.
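A minimal Dockerfile sketch for that situation, assuming the application ships a silent installer (the installer name, its /S switch and the final executable path are hypothetical):

FROM microsoft/windowsservercore
# copy the application installer into the image
COPY setup.exe /install/setup.exe
# run the installer silently; the /S switch is an assumption about this installer
RUN ["C:\\install\\setup.exe", "/S"]
# start the application when the container runs
CMD ["C:\\Program Files\\MyApp\\MyApp.exe"]

The point is that the container is rebuilt from scripted steps rather than captured from the VM's current state.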
No, it's not possible. There is some tooling like Vm2Docker, but all it does is the same thing you would do manually, that is, enumerate the installed features and create some artifacts for you.
But it's not possible to do this for a third-party application, as you mentioned. You'd have to disassemble it and figure out how to script its installation.
I am looking for a way to give our developers/testers a development environment that mirrors our production web server, created using Docker on Windows.
I have Windows Server 2016 installed on a physical server (not a VM), and I want to dockerize it so that the dev team can make their changes on it first; once they confirm everything works fine, the same changes will be made on the production web server.
Thanks,
RK.
I am a Windows user.
I have looked at the official Docker tutorial "Get Started". The example focuses on a Python app. I don't know Python, and I guess a Docker container can have many programs installed as an environment, not just Python.
Is Docker good for testing a program I download from the internet in an isolated environment (like a sandbox in firewalls or antivirus software)?
How, for example, can I make a container that has an environment containing installed programs like Visual Studio, VLC player, Office, etc.?
Thanks,
Abe
Yes; you can have an isolated environment with Docker. You can set your desired configuration, download from the internet, install, and do whatever else you would do in a virtual machine.
Yes, you can. What your container contains depends on the base image you create it FROM and the packages you install inside of it.
Tips
You can build your container from a bare OS image (e.g. ubuntu), configure the OS, and download/install/configure/run whatever you want.
You can create a base image which derives FROM a suitable OS, then install on it any basic application (e.g. firefox) that you may use in a lot of containers. Then you should push it to a registry (e.g. Docker Hub or GitHub's container registry). After that, you can use it as a base image for other containers, so your new containers have those applications installed by default and there is no need to install them again. This reduces complexity and repetition in your Dockerfiles; a small sketch follows below.
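For instance, a reusable base image and a container deriving from it could look like this minimal sketch (the image name my-registry/base-with-firefox and the exact packages are assumptions for illustration):

# Dockerfile.base - reusable base image with a commonly used application
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y firefox && rm -rf /var/lib/apt/lists/*
# build and push it once:
#   docker build -f Dockerfile.base -t my-registry/base-with-firefox .
#   docker push my-registry/base-with-firefox

# Dockerfile of a new container - firefox is already present, only add what is specific
FROM my-registry/base-with-firefox
RUN apt-get update && apt-get install -y vlc && rm -rf /var/lib/apt/lists/*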
As in the question: can you actually use Docker on top of a Linux system (Ubuntu) that has NO PHP or Ruby installed? I use the postgres image for the database and (of course) the postgres package is not installed on my host.
I wonder if it is possible to use containers for development. How do I overcome the lack of rails new / rails g on the host?
A Docker environment is completely independent of your host - containers have their own libraries and packages installed inside them. If you want to run Ruby/PHP inside Docker, just pull a proper image which contains Ruby/PHP, or build your own image.
For ruby check this link: https://hub.docker.com/_/ruby/
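To address the rails new concern specifically, a common pattern is to run the generator from the official ruby image with the current directory mounted into the container; for example (the Ruby version tag and the app name are assumptions):
docker run --rm -v "$PWD":/usr/src/app -w /usr/src/app ruby:3.2 \
    bash -c "gem install rails && rails new myapp"
The generated myapp folder ends up on the host through the volume mount, even though neither Ruby nor Rails is installed there.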