Google Cloud VM Image to Docker image

I have a Google Cloud VM with my application installed on it. The installation step is complete, and I:
Turned off the VM instance.
Exported the disk to a disk image called MY_CUSTOM_IMAGE_1.
I now want to use MY_CUSTOM_IMAGE_1 as the starting image of my Docker image build. For building the images I'm using Google Cloud Build.
My Dockerfile should look like this:
FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
...
When I tried to use this image I got the build error:
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
ERROR
pull access denied for MY_CUSTOM_IMAGE_1, repository does not exist or may require 'docker login'
Step 1/43 : FROM MY_CUSTOM_IMAGE_1 AS BUILD_ENV
The reason is that VM images are not the same as Docker images.
Is it possible to perform this transformation (GCP VM image -> Docker image) without external tools (outside GCP, like private Docker repositories)?
Thanks!

If you know everything that is installed on your VM (and all the commands you ran), do the same thing in a Dockerfile. Use the same OS version as your current VM as the base image. Run some tests and it should quickly be equivalent.
If your VM application has stateful files, it's a little more complex: you have to mount a disk in your container and update your application's configuration to write to the mounted folder. It's more "complex", but there are tons of examples on the internet!
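A minimal sketch of that approach, assuming the VM was built on Debian 9 and the application lives under /opt/myapp (the base image, paths, and image names here are hypothetical):
FROM debian:9 AS build_env
# Re-run the same installation commands you ran on the VM
RUN apt-get update && apt-get install -y --no-install-recommends \
      ca-certificates curl \
 && rm -rf /var/lib/apt/lists/*
# Copy in the application files that were installed on the VM
COPY myapp/ /opt/myapp/
CMD ["/opt/myapp/start.sh"]
For the stateful data, mount a volume at run time and point the application's configuration at it:
docker volume create myapp-data
docker run -v myapp-data:/opt/myapp/data my-custom-image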

No, this is not possible without a tool to extract your application out of the virtual machine image and recreate in a container. To the best of my knowledge, there is no general-purpose tool that exists.
There is a big difference between a container image and a virtual machine image. A container image does not include an operating system kernel, while a virtual machine image contains a complete operating system plus device data. The two are conceptually similar, but extremely different in how they are implemented at the software and hardware level.

Related

How can I clone my Google Cloud Instance so I can download it and host it locally using Docker [duplicate]


Persisting changes to Windows Registry between restarts of a Windows Container

Given a Windows application running in a Docker Windows Container, where the running application makes changes to the Windows registry while it runs, is there a docker switch/command that allows those registry changes to be persisted, so that when the container is restarted the changed values are retained?
As a comparison, file changes can be persisted between container restarts by exposing mount points e.g.
docker volume create externalstore
docker run -v externalstore:\data microsoft/windowsservercore
What is the equivalent feature for Windows Registry?
I think you're after dynamic changes (each start and stop of the container contains different user keys you want to save for the next run), like a roaming profile, rather than a static set of registry settings, but I'm writing for the static case as it's an easier and more likely answer.
It's worth noting the distinction between a container and an image.
Images are static templates.
Containers are started from images; while they can be stopped and restarted, in most enterprise designs (such as Kubernetes) you usually throw them away entirely after each execution.
If you wish to run a docker container like a VM (not generally recommended), stopping and starting it, your registry settings should persist between runs.
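For instance (the container name is illustrative), the difference looks like this:
# stop/start keeps the container's writable layer, and the registry hives with it
docker stop myapp-container
docker start myapp-container
# removing the container and running a fresh one from the image starts clean again
docker rm myapp-container
docker run --name myapp-container microsoft/windowsservercore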
It's possible to convert a container to an image by using the docker commit command. In this method, you would start the container, make the needed changes, then commit the container to an image. New containers would be started from the new image. While this is possible, it's not really recommended for the same reason that cloning a machine or upgrading an OS is not. You will get extra artifacts (files, settings, logs) that you don't really want in the image. If this is done repeatedly, it'll end up like a bad photocopy.
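A rough sketch of that commit workflow (container and image names are illustrative):
# start a throwaway container and make the registry changes interactively
docker run -it --name reg-changes microsoft/windowsservercore cmd
# after exiting, snapshot the container's filesystem into a new image
docker commit reg-changes myapp:with-registry-settings
# future containers start from the committed image
docker run -it myapp:with-registry-settings cmd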
A better way to make a static change is to build a new image using a dockerfile. You'll need to read up on that (beyond the scope of this answer) but essentially you're writing a docker script that will make a change to an existing docker image and save it to a new image (done with docker build). The advantage of this is that it's cleaner, more repeatable, and each step of the build process is layered. Layers are advantageous for space savings. An image made with a windowsservercore base and application layer, then copied to another machine which already had a copy of the windowsservercore base, would only take up the additional space of the application layer.
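A minimal sketch of such a Dockerfile, assuming you've already exported the settings you want into a settings.reg file (hypothetical file name):
# escape=`
FROM microsoft/windowsservercore
# copy the exported registry settings into the image and import them
COPY settings.reg C:\settings.reg
RUN reg import C:\settings.reg
# ...then add your application layer as usual...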
If you want to repeatedly create containers and apply consistent settings to them but without building a new image, you could do a couple things:
Mount a volume with a script and set the execution point of the container/image to run that script. The script could import the registry settings and then kick off whatever application you were originally using as the execution point; note that the script would need to be a continuous loop. The MS SQL Developer image is a good example: https://github.com/Microsoft/mssql-docker/tree/master/windows/mssql-server-windows-developer. The script could also export the settings you want. I'm not sure if there's an easy way to detect "shutdown" and have it run at that point, but you could easily set it to run in a loop, writing continuously to the mounted volume.
Leverage a control system such as Docker Compose or Kubernetes to handle the settings for you (not sure offhand how practical this is for registry settings)
Have the application set the registry settings
Open ports to the container which allow remote management of the container (not recommended for security reasons)
Mount a volume where the registry files are located in the container (I'm not certain where these are or if this will work correctly)
TL;DR: You should make a new image using a dockerfile for static changes. For dynamic changes, you will probably need to use some clever scripting.

Can I run a docker container doing a x86 build on a IBM Power system?

Our build setup is baked into a large Docker container (basically a 2 GB image containing a complete x86 Linux userland in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RHEL 7. Developers can log into that RHEL server and kick off specific builds (using said Docker images) themselves.
Those RHEL servers will be shut down at some point, to be replaced with IBM Power8 machines (running RHEL 7 Little Endian for POWER).
I am simply wondering: is there a chance that our existing build setup and Docker images will simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?
You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyways). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
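For instance, a rough sketch of deriving the tag from the build host's architecture (the repository name is reused from the example above; the date tag is illustrative):
# derive an architecture suffix from the machine we're building on
ARCH=$(uname -m)   # x86_64 on the Intel hosts, ppc64le on Power8 LE
TAG="repo.example.com/app/build:20180904-${ARCH}"
docker build -t "${TAG}" .
docker push "${TAG}"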
Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually made up of multiple layers, which are stored on the filesystem through the corresponding storage driver and backing filesystem (shown in the output of docker info).
If the storage driver or backing filesystem changes, which is likely when the OS changes, older Docker images may no longer be valid, meaning they must be rebuilt.

Why provide a Linux distro as a Dockerfile base when my host has all the software I need installed?

I want to start writing a Docker image. I have a .NET Core 2.0 Web API service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking this question because most of the examples I have seen include a base image. Most of the time it's something very generic, like FROM ubuntu. I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc.) with which to install the tools you need to download files or run your software (which presumably needs to be linked against a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system, such as the resolver, that are provided in userland by your distro and not shared from the host in a Docker install.
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.
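As a small illustration (the image tag and package are chosen arbitrarily), everything the RUN line below needs (apt-get, the package lists, the libraries curl links against) comes from the Debian userland inside the image rather than from the Amazon Linux host:
FROM debian:stretch-slim
# apt-get and the installed libraries come from the image, not the host
RUN apt-get update \
 && apt-get install -y --no-install-recommends curl ca-certificates \
 && rm -rf /var/lib/apt/lists/*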

Docker base images, what do they compose of?

I am trying to wrap my head around the Docker architecture, in particular figuring out what exactly a base image consists of, and in doing so I have been exploring some of the images found on Docker Hub. Specifically, when looking at the following repo, it references the centos-7.2.1511-docker.tar.xz file.
I've downloaded and examined the contents of the tar and it has your typical Linux filesystem.
As I understand it, this is not a complete Linux OS, but just a replica of a Linux filesystem with all the non-essentials removed, where all other requirements are drawn from the host OS when a container is run?
My question essentially boils down to: how would one go about creating that tar file? What exactly do you need? My intention is not to create one, but rather to understand what portion of files/data/dependencies comes from the target OS to create an image and what gets used from the host OS.
A Docker container is a set of processes running in a sandbox enabled by Linux namespaces, on top of the host kernel.
A Docker image is a set of layers, often simply tarballs of files, that are unpacked and made to look as if they are the root of the filesystem when used to start a container.
A Docker image could be just a single statically-linked executable! You can create your own Docker image from scratch by simply creating a tarball containing that one executable and giving it to docker import, which will store it in the appropriate internal format and register it as an image.
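A rough sketch of that, where hello stands in for a hypothetical statically linked binary:
# build a one-file root filesystem and turn it into an image
mkdir -p rootfs/bin
cp hello rootfs/bin/hello
tar -C rootfs -cf rootfs.tar .
docker import rootfs.tar hello-image:latest
# run the single executable inside the new image
docker run --rm hello-image /bin/hello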
As you can see then, a Docker image need not be much. It certainly doesn't need a kernel, or any of the components normally used for configuring the system, networking daemons, or even things like cron. Those are all left to the host.
Things that are usually available in an image are a dynamic library runtime, and files like /etc/hosts, /etc/resolv.conf, and other files which are referenced directly by libc. This allows you to add typical dynamically-linked executables which interact with the system as if they're running on a traditional OS.
I have successfully "Dockerized" a legacy CentOS 6-based VM by uninstalling as many packages as possible, then tarring up the filesystem (excluding directories like /proc, /sys, /dev, etc.) and importing the result via docker import. Afterwards, I started a container and (sometimes forcefully) removed additional "system" packages that serve no purpose in a Docker image, like kernel, udev, etc.
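The filesystem capture itself can be as simple as something like this, run inside the VM (the exclusion list is abbreviated; adjust it to your system):
# archive the root filesystem, skipping pseudo-filesystems and volatile directories
tar --numeric-owner \
    --exclude=/proc --exclude=/sys --exclude=/dev \
    --exclude=/run --exclude=/tmp \
    -czf /tmp/rootfs.tar.gz /
# copy rootfs.tar.gz to a Docker host, then:
docker import rootfs.tar.gz legacy-centos6:imported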
This blog post goes into some of the specifics of the difference between docker save/load and docker export/import:
http://tuhrig.de/difference-between-save-and-export-in-docker/
