Can a Docker image choose the OS?

Can a Docker image specify which operating system it uses? Say one image with Windows and another with RHEL? In that case, how will Docker maintain two different operating systems?

Docker images are composed of layers. At the beginning of any Dockerfile you specify the OS with a line like FROM python:3. My belief was that if you were to add another OS, the image would retain the environment from the first OS and install the environment of the second OS over it, so your image would essentially have both environments.
If you build a Python image from the Dockerfile above with docker build -t this_python . and then make a new Dockerfile whose first line is FROM this_python, the new image already has Python, and you can install anything on top of it.
Best practice is to keep your docker image as small as possible. Install only what is required.
A quick example:
FROM python:3
FROM ubuntu:latest
RUN apt-get update
The above Dockerfile appears to give you an image with both Python and Ubuntu installed, but in fact each FROM starts a new build stage, and only the final stage's filesystem ends up in the image, so the result is effectively just Ubuntu with the update applied. Either way, this is not how you should do it. Better is to use FROM ubuntu:latest and then install Python over it.
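For instance, a minimal sketch of that recommended approach (the Python package names are assumptions for a current Ubuntu release):
FROM ubuntu:latest
RUN apt-get update && apt-get install -y python3 python3-pip
This keeps a single base and adds only what you actually need on top of it.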

A Docker image is just a Docker image; it doesn't depend on the OS on which you run the Docker engine. For example, when you run a Docker image on Windows, it actually runs on a Docker engine hosted inside a Linux virtual machine.

Related

Using Docker and Cypress in Same Docker Image

Fair warning: I'm new to all of this, so there might be some mistakes in my thinking process.
I want to system test an application we are developing, and we ship this application via Docker, so that's what I want to test.
For GitLab CI, this means creating a Docker image which has Docker in Docker and Cypress, since that is what I'd like to use.
So just from checking the Docker docs I can see that Docker can be installed on a multitude of Linux distros, but not on Alpine. The official image, however, is Alpine-based.
The Cypress docs, however, show that Cypress cannot be installed on Alpine: only the package managers apt-get and yum are supported, i.e. Ubuntu and Fedora, respectively.
So as far as I can tell, it's not possible to have both of these at once? Which would be absolutely baffling (but so is the package manager chaos I just learned about).
What I tried:
used the Docker image as a base and tried to install Cypress (does not work because there is no installation manual and the packages you need to install via apt-get don't exist for apk)
used the Cypress image as a base and tried to install Docker (does not work because the Cypress images don't work)
used another image and tried to install both (does not work because installing Docker inside the Docker container does not work, that's why they have the image provided)
used DinD with another distro (cruizba/ubuntu-dind, fails with "dockerd is not running after max time")
So... what am I missing? Is there any way to get to the point where I can use both Cypress and DinD in the same image?
There is an image named blackholegalaxy/cypress-dind which combines DinD and Cypress.
Sadly it's really old, and there is no easy way to update Docker to the newest version.
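That said, a hedged way to get both into one image is to start from Cypress's Debian-based base image and install only the Docker CLI, letting GitLab CI's docker:dind service provide the actual daemon. A rough sketch (the cypress/base tag and the Debian codename are assumptions, so check the current ones):
FROM cypress/base:20
# Add Docker's official apt repository, then install just the client
RUN apt-get update && apt-get install -y ca-certificates curl gnupg
RUN install -m 0755 -d /etc/apt/keyrings && curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
RUN echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian bookworm stable" > /etc/apt/sources.list.d/docker.list
RUN apt-get update && apt-get install -y docker-ce-cli
In the GitLab job you would then declare docker:dind under services and point DOCKER_HOST at it (e.g. tcp://docker:2375), so the image itself never has to run dockerd.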

Same Ubuntu image fetched for docker machine and docker container but more binaries are available in docker-machine

I created a container and logged in:
docker run -it -d ubuntu bash
I checked fdisk -l; it's NOT available.
But when I create a machine using:
docker-machine create -d "virtualbox" --swarm-image "ubuntu" dev3
The command fdisk is available in the machine.
Question: I guess the binaries come from the image, so how is this happening? And how can I add fdisk without creating a custom image or installing it after container creation?
Both run on the same host.
Your two commands are doing completely different things.
In the first case, you're pulling down the ubuntu docker image and starting a container.
In the second case, you're building a virtual machine in Virtualbox using a VM image named ubuntu. This is a completely different operation and the ubuntu vm image has nothing to do with the ubuntu container image. The minimal set of packages required to actually boot a machine is substantially larger than that required to start a container, so it's no surprise that the virtual machine has packages you don't find in the container image.
For example, a container doesn't interact with block devices so there is no need to have fdisk installed. If you really need fdisk in a container image (which, again, is unlikely, although there are some use cases where that makes sense), you would build a custom image from a Dockerfile. E.g.:
FROM ubuntu:eoan
RUN apt-get update; apt-get -y install fdisk
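To check the result, you could build and run it like this (the tag is just an example; actually listing disks also needs --privileged, since a container doesn't see block devices by default):
docker build -t ubuntu-fdisk .
docker run --rm -it --privileged ubuntu-fdisk fdisk -l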

How to prepare a blank website to be dockerized?

I have a totally empty Debian 9 box on which I installed docker-ce and nothing else.
My client wants me to run a website (already built locally on my PC) that he can migrate/move rapidly from one server to another by moving Docker images.
My idea is to start from some empty Docker image, and then manually install all the dependencies on it (nginx-rtmp, apache2, nodejs, mysql, phpmyadmin, php, etc.)
I need to install all these dependencies MANUALLY (to keep control), not using ready-to-go Docker images from Docker Hub, and then create an IMAGE of ALL the things I have done (including these dependencies, but also files I will upload).
Problem is: I have no idea how to start from a blank image, connect to it, and then save a modified image with the components and dependencies I will run.
I am aware that the SIZE may be bigger than with a simple Dockerfile, but I need to customize lots of things, such as using PHP 5.6 and Apache 2.2, editing php.ini, etc.
regards
If you don't want to define your dependencies in a Dockerfile, you can take this approach instead: spin up a Linux container from a base image (e.g. sudo docker run -d -it ubuntu bash) and go inside it:
sudo docker exec -it <Container ID> /bin/bash
Install your dependencies as you would on any other Linux server (exact package names vary by distro):
sudo apt-get install -y nginx-rtmp apache2 nodejs mysql phpmyadmin php
Then detach from the container with Ctrl+P followed by Ctrl+Q (the container keeps running) and commit the changes you made:
sudo docker commit CONTAINER_ID new-image-name
Run the docker images command and you will see the new image you have created; you can then use and move that image.
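Since the goal is to move the site rapidly between servers, you can then export the committed image to a tarball and load it on the other machine (reusing the image name from the commit above):
sudo docker save new-image-name | gzip > new-image-name.tar.gz
gunzip -c new-image-name.tar.gz | sudo docker load
Alternatively, push the image to a registry with docker push and pull it on the target server.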
You can also try a Dockerfile with the following content:
FROM scratch
But then you will need to build and add the operating system yourself.
For instance, alpine linux does this in the following way:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
Where rootfs.tar.xz is a file of less than 2 MB, available in Alpine's GitHub repository (version 3.7 for the x86_64 arch):
https://github.com/gliderlabs/docker-alpine/tree/61c3181ad3127c5bedd098271ac05f49119c9915/versions/library-3.7/x86_64
Or you can begin with alpine itself, but you said that you don't want to depend on ready-to-go Docker images.
A good starting point for you (if you decide to use Alpine Linux) could be the one available at https://github.com/docker-library/httpd/blob/eaf4c70fb21f167f77e0c9d4b6f8b8635b1cb4b6/2.4/alpine/Dockerfile
As you can see, a Dockerfile can become very big and complex, because within it you provision all the software your image needs to run.
Once you have your Dockerfile, you can build the image with:
docker build .
You can give it a name:
docker build -t mycompany/myimage:1.0 .
Then you can run your image with:
docker run mycompany/myimage:1.0
Hope this helps.

How to convert a VM image to a Dockerfile?

For work purposes, I have an OVA file which I need to convert to a Dockerfile.
Does someone know how to do it?
Thanks in advance
There are a few different ways to do this. They all involve getting at the disk image of the VM. One is to mount the VDI, then create a Docker image from that (see other Stack Overflow answers). Another is to boot the VM and copy the complete disk contents, starting at root, to a shared folder. And so on. We have succeeded with multiple approaches. As long as the disk in the VM is compatible with the kernel underlying the running container, creating a Docker image that has the complete VM disk has worked.
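As a concrete sketch of the copy-the-disk-contents route (the mount point here is hypothetical; the VM's root filesystem must already be mounted at it):
tar -C /mnt/vmroot -c . | docker import - myvm:latest
docker run -it myvm:latest /bin/bash
docker import turns the tarball into a plain image; you still have to prune things a container never uses (kernel, bootloader, init scripts) and choose a sensible CMD yourself.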
Yes, it is possible to use a VM image and run it in a container. Many of our customers have been using this project successfully: https://github.com/rancher/vm.git.
RancherVM allows you to create VMs that run inside of Kubernetes pods, called VM Pods. A VM pod looks and feels like a regular pod. Inside of each VM pod, however, is a container running a virtual machine instance. You can package any QEMU/KVM image as a Docker image, distribute it using any Docker registry such as DockerHub, and run it on RancherVM.
Recently this project has been made compatible with Kubernetes as well. For more information: https://rancher.com/blog/2018/2018-04-27-ranchervm-now-available-on-kubernetes
Step 1
Install ShutIt as root:
sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit
The prerequisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (e.g. docker-io or docker.io) depending on your distro.
You may need to make sure the Docker server is running too, e.g. with 'systemctl start docker' or 'service docker start'.
Step 2
Check out the copyserver script:
git clone https://github.com/ianmiell/shutit_copyserver.git
Step 3
Run the copy_server script:
cd shutit_copyserver/bin
./copy_server.sh
There are a couple of prompts: one to correct perms on a config file, and another to ask what Docker base image you want to use. Make sure you use one as close to the original server as possible.
Note that this requires a version of Docker that has the 'docker exec' option.
Step 4
Run the built server image:
docker run -ti copyserver /bin/bash
You are now in a practical facsimile of your server within a docker container!
Source
https://zwischenzugs.com/2015/05/24/convert-any-server-to-a-docker-container/
In my opinion it's totally impossible. But you can create a Dockerfile with the same OS and mount your data.

Is it possible to wrap an entire Ubuntu 14 OS in a Docker image?

I have a Ubuntu 14 desktop, on which I do some of my development work.
This work mainly revolves around Django & Flask development using PyCharm
I was wondering if it was possible to wrap the entire OS file system in a Docker container, so my whole development environment, including PyCharm and any other tools, would become portable.
Yes, this is where Docker shines. Once you install Docker you can run:
docker run --name my-dev -it ubuntu:14.04 /bin/bash
and this will put you, as root, at a bash prompt inside a Docker container. It is for all intents and purposes the entire OS without anything extra; you will need to install the extras, like PyCharm, Flask, Django, etc.: your entire environment. The environment you start with has nothing, so you will have to add things like pip (apt-get install -y python-pip) and other goodies. Once you have your entire environment, you can exit (with exit, or ^D) and you will be back in your host operating system. Then you can commit:
docker commit -m 'this is my development image' my-dev my-dev
This takes the container you just ran (and updated with changes) and saves it on your machine as an image named my-dev; any time in the future you can run it again using the invocation:
docker run -it my-dev /bin/bash
Building a Docker image interactively like this is the harder way; it is easier once you learn how to write a Dockerfile that describes the base image (ubuntu:14.04) and all of the modifications you want to make to it. I have an example of a Dockerfile here:
https://github.com/tacodata/pythondev
This builds my Python development environment, including git, SSH keys, compilers, etc. It does have my name hardcoded in it, so it won't help you much as-is (I need to fix that). Anyway, you can download the Dockerfile, change the details in it, and create your own image like this:
docker build -t my-dev - < Dockerfile
There are hundreds of examples on Docker Hub, which is where I started with mine.
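For orientation, a minimal Dockerfile in that spirit might look like this (package choices are illustrative for Ubuntu 14.04, not a complete environment):
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y python-pip git
RUN pip install flask django
Build it with the docker build command above and you get the same environment back reproducibly on any machine.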
-g
