How to make a Docker image of the host operating system which is running Docker itself? - docker

I started using Docker and I can say it is a great concept.
Everything is going fine so far.
I installed Docker on Ubuntu (my host operating system), played with images from the repository and made new images.
Question:
I want to make an image of the current (host) operating system. How shall I achieve this using Docker itself?
I am new to Docker, so please ignore any silly things in my question, if any.

I was doing maintenance on a server, the kind we pray not to crash, and I came across a situation where I had to replace sendmail with postfix.
I could not stop the server nor use the image available on Docker Hub, because I needed to be completely sure I would not have problems. That's why I wanted to make an image of the server.
I got to this thread and from it found ways to reproduce the procedure.
Below is a description of it.
We start by building a tar file of the entire filesystem of the machine we want to clone (as pointed out by @Thomasleveil in this thread), excluding some unnecessary and hardware-dependent directories. OK, it may not be as perfect as I intend, but it seems fine to me; you'll need to try whatever works for you.
$ sudo su -
# cd /
# tar -cpzf backup.tar.gz --exclude=/backup.tar.gz --exclude=/proc --exclude=/tmp --exclude=/mnt --exclude=/dev --exclude=/sys /
Then just download the file to your machine, import the tar.gz into Docker as an image and start a container from it. Note that in the example I used the year-month-day of image generation as the image tag when importing the file.
$ scp user@server-uri:path_to_file/backup.tar.gz .
$ cat backup.tar.gz | docker import - imageName:20190825
$ docker run -t -i imageName:20190825 /bin/bash
IMPORTANT: This procedure generates a completely identical image, so if you are going to distribute the generated image between developers, testers and whoever else, it is of great importance that you remove from it, or change, any reference containing restricted passwords, keys or users, to avoid security breaches.
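As one hedged way to do that cleanup (the file paths below are purely illustrative; adapt them to whatever secrets your server actually holds), you could strip the sensitive files in a throwaway container and commit the sanitized result as a new tag:
# remove example secret files in a throwaway container (paths are illustrative only)
docker run --name sanitize imageName:20190825 sh -c 'rm -f /root/.ssh/id_rsa /root/.bash_history'
# commit the cleaned filesystem as a new, distributable tag, then discard the container
docker commit sanitize imageName:20190825-clean
docker rm sanitize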

I'm not sure I understand why you would want to do such a thing, but that is not the point of your question, so here's how to create a new Docker image from nothing:
If you can come up with a tar file of your current operating system, then you can create a new docker image of it with the docker import command.
cat my_host_filesystem.tar | docker import - myhost
where myhost is the Docker image name you want and my_host_filesystem.tar is the archive file of your OS file system.
Also take a look at Docker, start image from scratch on Super User and this answer on Stack Overflow.
If you want to learn more about this, searching for docker "from scratch" is a good starting point.
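As a rough illustration (not from the original answer), you could build a minimal root filesystem with debootstrap and import it the same way; the suite and mirror below are just examples:
# build a minimal Ubuntu root filesystem in ./rootfs (suite and mirror are examples)
sudo debootstrap --variant=minbase focal ./rootfs http://archive.ubuntu.com/ubuntu/
# tar it up and import it as a Docker image, then test it
sudo tar -C ./rootfs -c . | docker import - myhost:minimal
docker run --rm -it myhost:minimal /bin/bash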

Related

Convert docker image into Cloud Foundry droplet

Is there a simple way of converting my Docker image to a Cloud Foundry droplet?
What did not work:
docker save registry/myapp1 |gzip > myapp1.tgz
cf push myapp1 --droplet myapp1.tgz
LOG: 2021-02-13T12:36:28.80+0000 [APP/PROC/WEB/0] OUT Exit status 1
LOG: 2021-02-13T12:36:28.80+0000 [APP/PROC/WEB/0] ERR /tmp/lifecycle/launcher: no start command specified or detected in droplet
If you want to run your Docker image on Cloud Foundry, simply run cf push -o <your/image>. Cloud Foundry can natively run Docker images so long as your operations team has enabled that functionality (there is not a lot of reason to disable it) and you meet the requirements.
You can check whether Docker support is enabled by running cf feature-flags and looking for the line diego_docker enabled. If it says disabled, talk to your operations team about enabling it.
By doing this, you don't need to do any complicated conversion. The image is just run directly on Cloud Foundry.
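As a quick sketch (assuming the cf CLI is installed and you are logged in; the app and image names are taken from the question):
# check whether Docker support is enabled
cf feature-flags | grep diego_docker
# if it shows "enabled", push the Docker image directly
cf push myapp1 -o registry/myapp1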
This doesn't 100% answer your question, but it's what I would recommend if at all possible.
To try and answer your question: I don't think there's an easy way to make this conversion. The output of docker save is a bunch of layers. That is not the same as a droplet, which is an archive containing some specific folders (your app bits plus what's installed by your buildpacks). I suppose you could convert them, but there's no clear path to doing this.
The way Cloud Foundry uses a droplet is different and more constrained than a Docker image. The droplet gets extracted into /home/vcap on top of an Ubuntu Bionic root filesystem (cflinuxfs3), and the app is then run out of there. Thus your droplet can only contain files that will go into this one place in the file system.
For a Docker image, you can literally have a completely custom file system.
So given that difference, I don't think there's a generic way to take a random Docker image and convert it to a droplet. The best you could probably do is take some constrained set of Docker images, like those built from Ubuntu Bionic using certain patterns, extract the files necessary to run your app, stuff them into directories that will unpack on top of /home/vcap (i.e. something that resembles a droplet), tar and gzip it, and try to use that.
Starting with the output of docker save is probably a good idea. You'd then just need to extract the files you want from the tar archive of the layers (i.e. dig through each layer, which is itself another tar archive, and extract its files), then move them into a directory structure that resembles this:
./
./deps/
./profile.d/
./staging_info.yml
./tmp/
./logs/
./app/
where ./deps/ is typically where buildpacks install required dependencies, ./profile.d/ is where you can put scripts that will run before your app starts, and ./app/ is where your app (most of your files) will end up.
I'm not 100% sure staging_info.yml is required, but it basically breaks down to {"detected_buildpack":"java","start_command":""}. You could fake detected_buildpack by setting it to anything, and start_command is obviously the command to run (you can override this later, though).
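A very rough, untested sketch of that conversion might look like the following (it assumes the classic docker save layout with <id>/layer.tar entries; the start command and buildpack name are placeholders, and you would normally copy only your app's files rather than every layer):
# unpack the docker save tarball, then unpack each layer into the app directory
mkdir -p export droplet/app droplet/deps droplet/profile.d droplet/tmp droplet/logs
docker save registry/myapp1 | tar -xf - -C export
for layer in export/*/layer.tar; do tar -xf "$layer" -C droplet/app; done
# fake the staging metadata; the start command here is a placeholder
printf '{"detected_buildpack":"java","start_command":"./run.sh"}\n' > droplet/staging_info.yml
# pack it up droplet-style and try pushing it
tar -C droplet -czf myapp1-droplet.tgz .
cf push myapp1 --droplet myapp1-droplet.tgz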
I haven't tried doing this because cf push -o is much easier, but you could give it a shot if cf push -o isn't an option.

Docker - Safest way to upload new content to production

I am new to Docker.
Every time I need to upload new content to production I get anxious that something will go wrong, so I am trying to understand how backups work and how to back up my volumes, which seems pretty complicated to me at the moment.
So I have this idea of creating a new image every time I want to upload new content.
Then I pull this image on the machine, stack rm/deploy the container and see if it works; if not, I pull the old image.
If the code works I can then delete my old image.
Is this a proper/safe way to update production machines, or do I need to get going with backups and restores?
I mean, I read this guide https://www.thegeekdiary.com/how-to-backup-and-restore-docker-containers/ but I don't quite understand how to restore my volumes.
Any suggestion would be nice.
Thank you
That's a pretty normal way to use Docker. Make sure you give each build a distinct tag, like a date stamp or source-control ID. You can do an upgrade like this:
# CONTAINER=...
# IMAGE=...
# OLD_TAG=...
# NEW_TAG=...
# Shell function to run `docker run`
start_the_container() {
  docker run ... --name "$CONTAINER" "$IMAGE:$1"
}
# Shut down the old container
docker stop "$CONTAINER"
docker rm "$CONTAINER"
# Launch the new container
start_the_container "$NEW_TAG"
# Did it work?
if check_if_container_started_successfully; then
  # Delete the old image
  docker rmi "$IMAGE:$OLD_TAG"
else
  # Roll back
  docker stop "$CONTAINER"
  docker rm "$CONTAINER"
  start_the_container "$OLD_TAG"
  docker rmi "$IMAGE:$NEW_TAG"
fi
The only docker run command here is in the start_the_container shell function; if you have environment-variable or volume-mount settings you can put them there, and the old volumes will get reattached to the new container. You do need to back up volume content, but that can be separate from this upgrade process. You should not need to back up or restore the contents of the container filesystems beyond this.
If you're using Kubernetes, changing the image: in a Deployment spec does this for you automatically. It will actually start the new container(s) before stopping the old one(s), so you get a zero-downtime upgrade; the key parts of doing this are being able to identify the running containers and connecting them to a load balancer of some sort that can route inbound requests.
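For reference, the Kubernetes equivalent is roughly the following (the deployment and container names are assumptions; the variables match the script above):
# rolling upgrade: point the Deployment at the new image tag
kubectl set image deployment/myapp myapp="$IMAGE:$NEW_TAG"
# roll back if the new version misbehaves
kubectl rollout undo deployment/myapp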
The important caveat here is that you must not use Docker volumes or bind mounts for key parts of your application. Do not use volumes for your application code, or static asset files, or library files. Otherwise the lifecycle of the volume will take precedence over the lifecycle of the image/container, and you'll wind up running old code and can't update things this way. (They make sense for pushing config files in, reading log files out, and storing things like the underlying data for your database.)
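A sketch of that guideline, with all names illustrative: mount data and config into the container, and let the application code ship inside the image itself:
# data lives in a named volume, config is bind-mounted read-only,
# and the application code comes only from the image
docker run -d --name myapp \
  -v myapp_data:/var/lib/myapp \
  -v "$PWD/config.yml:/etc/myapp/config.yml:ro" \
  "$IMAGE:$NEW_TAG"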

Accessing Files on a Windows Docker Container Easily

Summary
So I'm trying to figure out a way to use Docker to spin up testing environments for customers rather easily. Basically, I've got a customized piece of software that I want to install in a Windows Docker container (microsoft/windowsservercore), and I need to be able to access the program folder for that software (C:\Program Files\SOFTWARE_NAME), as it has some logs, imports/exports, and other miscellaneous configuration files. The installation part was easy, and I figured it out after a few hours of messing around with Docker and learning how it works, but transferring files in a simple manner is proving far more difficult than I would expect. I'm well aware of the docker cp command, but I'd like something that allows the files to be viewed in a file browser, so testers can quickly/easily view log/configuration files from the container.
Background (what I've tried):
I've spent 20+ hours monkeying around with running an SSH server on the docker container, so I could just ssh in and move files back and forth, but I've had no luck. I've spent most of my time trying to configure OpenSSH, and I can get it installed, but there appears to be something wrong with the default configuration file provided with my installation, as I can't get it up and running unless I start it manually via command line by running sshd -d. Strangely, this runs just fine, but it isn't really a viable solution as it is running in debug mode and shuts down as soon as the connection is closed. I can provide more detail on what I've tested with this, but it seems like it might be a dead end (even though I feel like this should be extremely simple). I've followed every guide I can find (though half are specific to linux containers), and haven't gotten any of them to work, and half the posts I've found just say "why would you want to use ssh when you can just use the built in docker commands". I want to use ssh because it's simpler from an end user's perspective, and I'd rather tell a tester to ssh to a particular IP than make them interact with docker via the command line.
EDIT: Using OpenSSH
I start the server using net start sshd, which reports that it started successfully; however, the service stops immediately unless I have generated at least an RSA or DSA key using:
ssh-keygen.exe -f "C:\\Program Files\\OpenSSH-Win64/./ssh_host_rsa_key" -t rsa
And modifying the permissions using:
icacls "C:\Program Files\OpenSSH-Win64/" /grant sshd:(OI)(CI)F /T
and
icacls "C:\Program Files\OpenSSH-Win64/" /grant ContainerAdministrator:(OI)(CI)F /T
Again, I'm using the default supplied sshd_config file, but I've tried just about every adjustment of those settings I can find and none of them help.
I also attempted to set up volumes to do this, but because the installation of our software is done at image build time in Docker, the folder that I want to map as a volume is already populated with files, which seems to make Docker fail when I try to start the container with the volume attached. This section of the documentation seems to say this should be possible, but I can't get it to work; I keep getting errors saying "the directory is not empty" when I try to start the container.
EDIT: Command used:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination=C:/temp my_container
I'm running this on a Proxmox VM.
At this point, I'm running out of ideas, and something that I feel should be incredibly simple is taking me far too many hours to figure out. It particularly frustrates me that I see so many blog posts saying "Just use the built-in docker cp command!" when that is honestly a pretty bad solution if you're going to be browsing lots of files and viewing/editing them. I really need a method that allows the files to be viewed in a file browser/Notepad++.
Is there something obvious here that I'm missing? How is this so difficult? Any help is appreciated.
So after a fair bit more troubleshooting, I was unable to get the docker volume to initialize on an already populated folder, even though the documentation suggests it should be possible.
So instead, I decided to try starting the container with the volume linked to an empty folder, and then running the installation script for the program after the container is up, so the folder populates after the volume is already linked. This worked perfectly! There's a bit of weirdness if you leave the files in the volume and then restart the container, as it will overwrite most of the files, but things like logs and files not created by the installer will remain, so we'll have to figure out some process for managing that. Still, it works just like I need it to, and I can then use Windows file sharing to access that volume folder from anywhere on the network.
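(In case it helps anyone: you can find where the named volume lives on the host, which is the folder I then share, with something like the command below; the exact mount point varies between Windows hosts.)
docker volume inspect my_volume --format "{{ .Mountpoint }}"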
Here's how I got it working, it's actually very simple.
So in my dockerfile, I added a batch script that unzips the installation DVD that is copied to the container, and runs the installer after extracting. I then used the CMD option to run this on container start:
Dockerfile
FROM microsoft/windowsservercore
ADD DVD.zip C:\\resources\\DVD.zip
ADD config.bat C:\\resources\\config.bat
CMD "C:\resources\config.bat" && cmd
Then I build the container without anything special:
docker build -t my_container:latest .
And run it with the volume attached:
docker run -it -d -p 9999:9092 --mount source=my_volume,destination="C:/Program Files (x86)/{PROGRAM NAME}" my_container
And that's it. Unfortunately, the container takes a little longer to start (it does build faster though, for what that's worth, as it isn't running the installer in the build), and the program isn't installed/running for another 5 minutes or so after the container does start, but it works!
I can provide more details if anyone needs them, but most of the rest is implementation specific and fairly straightforward.
Try this with Docker Compose. Unfortunately, I cannot test it as I'm using a Mac and Windows containers are not a supported platform there (way to go, Windows). See if this works; if not, try the volume line in short syntax instead: ./my_volume:C:/tmp/
Dockerfile
FROM microsoft/windowsservercore
# need to escape \
WORKDIR C:\\tmp\\
# Add the program from host machine to container
ADD ["<source>", "C:\tmp"]
# Normally used with web servers
# EXPOSE 80
# Running the program
CMD ["C:\tmp\program.exe", "any-parameter"]
docker-compose.yml
Should ideally be in the parent folder.
version: "3"
services:
windows:
build: ./folder-of-Dockerfile
volume:
- type: bind
source: ./my_volume
target: C:/tmp/
ports:
- 9999:9092
Folder structure
|---docker-compose.yml
|
|---folder-of-Dockerfile
    |
    |---Dockerfile
Just run docker-compose up to build and start the container. Use -d for detached mode, but only once you know it's working properly.
Useful link: Manage Windows Dockerfile

Setting up a container from a user's GitHub source

Can be closed, not sure how to do it.
I am, to be quite frank, lost right now. The user who published his source on GitHub somehow failed to update the installation instructions when he released a new branch. Now, I am not dense, just uneducated when it comes to Docker. I would really appreciate a push in the right direction. If I am missing any information from this post, please allow me to provide it in the comments.
Current Setup
O/S - Debian 8 Minimal (Latest kernel)
Hardware - 1GB VPS (KVM)
Docker - Installed with Compose (# docker info)
I am attempting to set up this project (https://github.com/pboehm/ddns/tree/docker_and_rework). First I should clone the repo into my working directory? Let's say /home for example. I will run the following command:
git clone -b docker_and_rework https://github.com/pboehm/ddns.git
Which has successfully cloned the source files into /home/ddns/... (working dir)
Now I believe I am supposed to go ahead and build something, so I go into the following directory:
/home/ddns/docker
Inside is a docker-compose.yml file. I am not sure what this does, but by looking at it, it appears to contain a bunch of instructions which I can only presume have to do with actually building or deploying the whole container/image or whatever the magical thing is called, right? From here I go ahead and do the following:
docker-compose build
As we can see, I believe it's building the container or image or whatever it's called, you get my point. After a short while, that completes and we can see the resulting images with docker images. Which is correct, I see all of the dependencies in there, but things like:
go version
It does not show as a command, so I presume I need to run it inside the container maybe? If so, I don't have a clue how. I need to run 'ddns.go' which is inside /home/ddns; the execution command is:
ddns --soa_fqdn=dns.stealthy.pro --domain=d.stealthy.pro backend
I am also curious why the front-end web page is not showing. There should be a page like this:
http://ddns.pboehm.org/
But again, I believe there is some more to do, I just do not know what.
docker-compose build will only build the images.
You need to run the following; it will build and run them:
docker-compose up -d
The -d option runs the containers in the background.
To check what is running after docker-compose up:
docker-compose ps
It will show what is running and what ports are exposed from the containers.
Usually you can access the services from your localhost.
If you want to have a look inside a container:
docker-compose exec SERVICE /bin/bash
where SERVICE is the name of the service in docker-compose.yml.
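Putting that together for this repo, a rough end-to-end sketch would be the following (the path comes from the question; whether the web frontend is then reachable depends on the ports the compose file exposes):
cd /home/ddns/docker
docker-compose build
# start all services (including the web frontend) in the background
docker-compose up -d
# confirm the services and their exposed ports
docker-compose ps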
The instructions it runs that you probably care about are in the Dockerfile, which for that repo is in the docker/ddns/ directory. What you're missing is that the Dockerfile creates an image, which is a template used to create instances. Every time you docker run, you'll create a new instance from the image. docker run docker_ddns go version will create a new instance of the image, run go version, output the result, then die. A long-running process, like the one the docker_ddns-web image probably starts, will keep running until something kills it.
The reason you can't see the web page is probably because you haven't run docker-compose up yet, which will create linked instances of all of the Docker images specified in the docker-compose.yml file. Hope this helps.

How to handle python module installation when running jupyter notebook in docker?

I'm currently starting to use the awesome Jupyter Notebook. Since I've always had trouble with stuff not working because of different Python versions and Python module versions, I like to run Jupyter Notebook in a Docker container. I've created a Dockerfile to build my image (based on the official jupyter/scipy-notebook image on Docker Hub), I have everything up and running, and it's working great.
The only thing that concerns me is how to handle the installation of different python modules I might need during the next week(s). How do you guys handle that?
1) Install the needed modules in the running docker container, then use docker commit and save the running container as a new image?
2) Always edit the Dockerfile to install the needed modules and re-build the image?
3) Don't delete the container (no --rm flag) and just restart it?
1) and 2) seem to be a little complicated, but I also want to be able to start from a "fresh" notebook in case I mess something up, so 3) is also not perfect. Is there something I missed?
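For what it's worth, option 1) can be as simple as something like this (the container name, module and tag are made up for illustration):
# install the module in the running notebook container
docker exec notebook pip install --user seaborn
# snapshot the container so the module survives recreating it
docker commit notebook my-scipy-notebook:with-seaborn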
