For work purposes, I have an OVA file that I need to convert to a Dockerfile.
Does anyone know how to do it?
Thanks in advance
There are a few different ways to do this. They all involve getting at the disk image of the VM. One is to mount the VDI and then create a Docker image from that (see other Stack Overflow answers). Another is to boot the VM and copy the complete disk contents, starting at root, to a shared folder. And so on. We have succeeded with multiple approaches. As long as the disk in the VM is compatible with the kernel underlying the running container, creating a Docker image that contains the complete VM disk has worked.
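For illustration, a rough sketch of the "get at the disk and import it" route; the file names, the partition device, and the myvm image name are assumptions, and details such as the partition layout will differ per appliance:
# An OVA is just a tar archive containing the VM's disk image(s)
tar -xvf appliance.ova
# Convert the extracted VMDK to a raw image so it can be loop-mounted
qemu-img convert -O raw disk1.vmdk disk.raw
# Map the partitions inside the raw image and mount the root filesystem
sudo kpartx -av disk.raw
sudo mkdir -p /mnt/vm
sudo mount /dev/mapper/loop0p1 /mnt/vm
# Import the complete filesystem as a Docker image
sudo tar -C /mnt/vm -c . | docker import - myvm:latest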
Yes, it is possible to use a VM image and run it in a container. Many of our customers have been using this project successfully: https://github.com/rancher/vm.git.
RancherVM allows you to create VMs that run inside of Kubernetes pods, called VM Pods. A VM pod looks and feels like a regular pod. Inside of each VM pod, however, is a container running a virtual machine instance. You can package any QEMU/KVM image as a Docker image, distribute it using any Docker registry such as DockerHub, and run it on RancherVM.
Recently this project has been made compatible with Kubernetes as well. For more information: https://rancher.com/blog/2018/2018-04-27-ranchervm-now-available-on-kubernetes
Step 1
Install ShutIt as root:
sudo su -
(apt-get update && apt-get install -y python-pip git docker) || (yum update && yum install -y python-pip git docker which)
pip install shutit
The prerequisites are python-pip, git and docker. The exact names of these in your package manager may vary slightly (e.g. docker-io or docker.io) depending on your distro.
You may need to make sure the docker server is running too, e.g. with ‘systemctl start docker’ or ‘service docker start’.
Step 2
Check out the copyserver script:
git clone https://github.com/ianmiell/shutit_copyserver.git
Step 3
Run the copy_server script:
cd shutit_copyserver/bin
./copy_server.sh
There are a couple of prompts: one to correct permissions on a config file, and another asking what Docker base image you want to use. Make sure you use one as close to the original server as possible.
Note that this requires a version of docker that has the ‘docker exec’ option.
Step 4
Run the built server image:
docker run -ti copyserver /bin/bash
You are now in a practical facsimile of your server within a docker container!
Source
https://zwischenzugs.com/2015/05/24/convert-any-server-to-a-docker-container/
In my opinion it's totally impossible. But you can create a Dockerfile with the same OS and mount your data.
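For instance, assuming the data has already been copied out of the VM to /srv/vmdata on the host and that ubuntu roughly matches the original OS (both are assumptions), something like:
docker run -it -v /srv/vmdata:/data ubuntu /bin/bash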
Related
I created a container and logged in
docker run -it -d ubuntu bash
Checked fdisk -l; it's NOT available.
But when I create a machine using:
docker-machine create -d "virtualbox" --swarm-image "ubuntu" dev3
The command fdisk is available in the machine.
Question: I guess the binaries come from the image, so how is this happening? And how can I add fdisk without creating a custom image or installing it after container creation?
Same host
Your two commands are doing completely different things.
In the first case, you're pulling down the ubuntu docker image and starting a container.
In the second case, you're building a virtual machine in Virtualbox using a VM image named ubuntu. This is a completely different operation and the ubuntu vm image has nothing to do with the ubuntu container image. The minimal set of packages required to actually boot a machine is substantially larger than that required to start a container, so it's no surprise that the virtual machine has packages you don't find in the container image.
For example, a container doesn't interact with block devices so there is no need to have fdisk installed. If you really need fdisk in a container image (which, again, is unlikely, although there are some use cases where that makes sense), you would build a custom image from a Dockerfile. E.g.:
FROM ubuntu:eoan
RUN apt-get update; apt-get -y install fdisk
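To build and sanity-check that image (the ubuntu-fdisk tag is just an example name):
docker build -t ubuntu-fdisk .
docker run --rm ubuntu-fdisk fdisk --version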
I have a totally empty debian9 on which I installed docker-ce and nothing else.
My client wants me to run a website (already done locally on my PC) that he can migrate/move rapidly from one server to another by moving Docker images.
My idea is to start from some empty Docker image, and then manually install all dependencies on it (ngingrtmp, apache2, nodejs, mysql, phpmyadmin, php, etc...)
I need to install all these dependencies MANUALLY (to keep control), not using ready-to-go Docker images from Docker Hub, and then create an IMAGE of ALL the things I have done (including these dependencies, but also files I will upload).
The problem is: I have no idea how to start from a blank image, connect to it, and then save a modified image with the components and dependencies I will run.
I am aware that the SIZE may be bigger than with a simple Dockerfile, but I need to customize lots of things, such as using php5.6 and apache2.2, editing some php.ini, etc.
regards
If you don't want to define your dependencies in the Dockerfile, then you can take an approach like this: spin up a Linux container from a base image and go inside it:
sudo docker exec -it <Container ID> /bin/bash
Install your dependencies as you would on any other Linux server:
sudo apt-get install -y ngingrtmp apache2 nodejs mysql phpmyadmin php
Then detach from the container with Ctrl+P followed by Ctrl+Q, and commit the changes you made:
sudo docker commit CONTAINER_ID new-image-name
Run the docker images command and you will see the new image you have created; you can then use/move that image.
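Since the goal is to migrate the site rapidly between servers, here is a rough sketch of moving that image without a registry (the archive file name is an example, and new-image-name is the name used in the commit above):
sudo docker save new-image-name | gzip > new-image.tar.gz
# copy new-image.tar.gz to the target server (e.g. with scp), then on that server:
gunzip -c new-image.tar.gz | sudo docker load
sudo docker run -d new-image-name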
You can try with a Dockerfile with the following content
FROM scratch
But then you will need to build and add the operating system yourself.
For instance, alpine linux does this in the following way:
FROM scratch
ADD rootfs.tar.xz /
CMD ["/bin/sh"]
Where rootfs.tar.xz is a file of less than 2 MB available on Alpine's GitHub repository (version 3.7 for the x86_64 arch):
https://github.com/gliderlabs/docker-alpine/tree/61c3181ad3127c5bedd098271ac05f49119c9915/versions/library-3.7/x86_64
Or you can begin with Alpine itself, but you said that you don't want to depend on ready-to-go Docker images.
A good starting point for you (if you decide to use Alpine Linux) could look like the one available at https://github.com/docker-library/httpd/blob/eaf4c70fb21f167f77e0c9d4b6f8b8635b1cb4b6/2.4/alpine/Dockerfile
As you can see, a Dockerfile can become very big and complex, because within it you provision all the software you need for running your image.
Once you have your Dockerfile, you can build the image with:
docker build .
You can give it a name:
docker build -t mycompany/myimage:1.0 .
Then you can run your image with:
docker run mycompany/myimage:1.0
Hope this helps.
What I want to build, without Docker, would look like this:
AWS EC2 Debian OS where I can install:
Python 3
Anaconda
Run python scripts that will create new files as output
SSH into it so that I can explore the new files created
I am starting to use Docker and my first approach was to build everything into a Dockerfile, but I don't know how to add multiple FROM statements so that I can install the official Docker images of Debian, Python 3 and Anaconda.
In addition, with this approach, is it possible to get into the container and explore the files that have been created?
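For illustration only, a single-FROM Dockerfile along these lines might cover those requirements; the Debian tag, the Miniconda installer URL, and the scripts/ and my_script.py names are assumptions:
FROM debian:bullseye
# Basic tools plus Python 3 (Debian package names assumed)
RUN apt-get update && apt-get install -y python3 wget bzip2 ca-certificates && rm -rf /var/lib/apt/lists/*
# Install Miniconda (a slim Anaconda distribution) under /opt/conda
RUN wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O /tmp/miniconda.sh \
 && bash /tmp/miniconda.sh -b -p /opt/conda \
 && rm /tmp/miniconda.sh
ENV PATH=/opt/conda/bin:$PATH
# Copy your scripts and run them; any files they create stay in the container's writable layer
COPY scripts/ /app/
WORKDIR /app
CMD ["python3", "my_script.py"]
To explore the generated files you can docker exec -it <container> /bin/bash into the running container, or bind-mount a host directory (e.g. docker run -v "$PWD/output":/app/output ...) so the output lands on the host; SSH into the container is generally not needed for that.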
So let's say we just spun up a Docker container and allow users to SSH into the container by mapping port 22:22.
User then installed some software like git or whatever they want. So that container is now polluted.
Later on, suppose I want to apply some patches to the container. What is the best way to do so?
Keep in mind that the user has modified contents in container, including some system level directories like /usr/bin. So I cannot simply replace the running container with another image.
To give you a real-life use case, take Nitrous.io as an example. I saw they are using Docker containers to serve as users' VMs, so users can install packages like Node.js global packages. So how do they update/apply patches to containers like a pro? Similar platforms like Codeanywhere might work in the same way.
I tried googling it but failed. I am not 100 percent sure whether this is a duplicate, though.
User then installed some software like git or whatever they want ... I want to apply some patch to the container, what is the best way to do so ?
The recommended way is to plan your updates through a Dockerfile. However, if you are unable to achieve that, then any additional changes or new packages installed in the container should be committed before the container is exited.
Example: below is a simple image which does not have vim installed.
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
pingimg 1.5 1e29ac7353d1 4 minutes ago 209.6 MB
Start the container and check if vim is installed.
$ docker run -it pingimg:1.5 /bin/bash
root@f63accdae2ab:/#
root@f63accdae2ab:/# vim
bash: vim: command not found
Install the required packages, inside the container:
root@f63accdae2ab:/# apt-get update && apt-get install -y vim
Back on the host, commit the container with a new tag before stopping or exiting the container.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f63accdae2ab pingimg:1.5 "/bin/bash" About a minute ago Up About a minute modest_lovelace
$ docker commit f63accdae2ab pingimg:1.6
378e0359eedfe902640ff71df4395c3fe9590254c8c667ea3efb54e033f24cbe
$ docker stop f63accdae2ab
f63accdae2ab
Now docker images should show both tags or versions of the image. Note that the updated image shows a larger size.
$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
pingimg 1.6 378e0359eedf 43 seconds ago 252.8 MB
pingimg 1.5 1e29ac7353d1 4 minutes ago 209.6 MB
Start a container from the recently committed image; you can see that vim is installed:
$ docker run -it pingimg:1.6 /bin/bash
root@63dbbb8a9355:/# which vim
/usr/bin/vim
Verify the contents of the previous version of the image, and you should notice that vim is still missing:
$ docker run -it pingimg:1.5 /bin/bash
root@99955058ea0b:/# which vim
root@99955058ea0b:/# vim
bash: vim: command not found
Hope this helps!
There's a whole branch of software called configuration management that seeks to solve this issue, with solutions such as Ansible and Puppet. Whilst designed with VMs in mind, it is certainly possible to use such solutions with containers.
However, this is not the Docker way. Rather than patch a Docker container, throw it away and replace it with a new one. If you need to install new software, add it to the Dockerfile and build a new container as per @askb's solution. By doing things this way, we can avoid a whole set of headaches (similarly, prefer docker exec to installing ssh in containers).
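As a rough sketch of that rebuild-and-replace cycle (the image tag, container name, and port mapping below are just examples):
docker build -t userenv:2 .
docker stop user42 && docker rm user42
docker run -d --name user42 -p 2222:22 userenv:2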
I want to create a development environment for all our programmers, so I want to install jdk, jdeveloper, maven and svn in one Docker container.
How can I do that?
First of all, you need to go to the Docker site and learn how to create a Dockerfile. The file is built into an image from which you can run a container with whatever you want installed in it. For example:
FROM debian
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get install -yqq \
    openjdk-8-jre \
    maven \
    subversion \
    ....
RUN ... (to run commands inside the container when it's created)
EXPOSE 80 8080... (whatever ports you want to expose)
This is a very simple example; you have to read the docs and see what is available. I would recommend looking at the Dockerfiles from the docker-library GitHub repo, which has all the official library Dockerfiles, to get an idea of what's happening so you know how to create your container.
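Once you have a Dockerfile along the lines of the sketch above, building and entering the environment with a project directory mounted might look like this (the devenv tag and the host path are assumptions):
docker build -t devenv .
docker run -it -v "$PWD/myproject":/workspace devenv /bin/bash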
On a side note, I would not install svn in the same container; it would be better to have it in a separate container so that you never have more than one service per container, since each container runs as a separate process. You can link containers, but that would require reading the docs to see how it is done.
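A rough illustration of linking (both image names below are placeholders, not real published images; --link is the older mechanism, and user-defined networks are the newer one):
docker run -d --name svn my-svn-image
docker run -it --link svn:svn devenv /bin/bash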
Why use JDeveloper? For Eclipse you can find several examples on Docker Hub.
Agree with @Hatem-Jaber, separate svn from the dev environment.