Can I run a Docker container doing an x86 build on an IBM Power system? - docker

Our build setup is baked into a large Docker container (basically a 2 GB image containing a complete x86 Linux in itself).
We have two ways to actually build: the official approach is a Jenkins environment (running on x86 hardware). But we also have a little "side x86 server" running RH 7. Developers can log into that RH server and kick off specific builds (using said Docker images) themselves.
Those RH servers will be shut down at some point, to be replaced with IBM Power8 machines (running RH 7 Little Endian for Power).
I am simply wondering: is there a chance that our existing build setup and Docker images will simply work on Power8? Or are there fundamental technical issues that make it unlikely and not even worth trying?

You can probably use your existing build methodology and scripts close to unchanged, but you'll need to rebuild the actual images.
You can't directly run x86 binaries on Power (at a very low level, the bytes of machine code are just different). Docker doesn't contain any sort of virtualization layer; it does a bunch of setup to isolate the container from the host, but then runs the binaries in an image directly.
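You can see this for yourself: on a Power (ppc64le) host without QEMU/binfmt emulation configured, forcing an amd64 image typically fails at exec time (a rough illustration only; the exact error text varies by setup):
docker run --rm --platform linux/amd64 alpine:3 uname -m
# on an x86_64 host this just prints "x86_64";
# on a Power host without emulation it typically fails with an "exec format error"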
If your Jenkins setup has enough parameters for image names and version tags, then you should be able to run the x86 and Power setups side-by-side; you need to encode the architecture somewhere in the built image name or tag; for instance, repo.example.com/app/build:20180904-power. (I don't know that one or the other is considered better if you control all of the machinery.) If you have a private repo, you could encode it earlier in the path, winding up with image names like repo.example.com/power/build:20180904.
You'd need to double-check that everywhere that has a Docker image reference has it correctly parameterized (which is a good practice anyway). That would include any direct docker run commands; any Docker Compose or Kubernetes YAML files or similar artifacts; and the FROM line of any Dockerfiles.
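For example, the architecture could be threaded through as a build argument and a shell variable (a sketch only; the repository name, the ARCH variable, and the make target are placeholders, not anything from your actual setup):
# Dockerfile for a derived image: select the base per architecture via a build arg
ARG ARCH=x86
FROM repo.example.com/${ARCH}/build:20180904
# ... the rest of the build steps ...

# Jenkins/shell side: build and run the image matching the current host
ARCH=power
docker build --build-arg ARCH="${ARCH}" -t "repo.example.com/app/build:20180904-${ARCH}" .
docker run --rm "repo.example.com/app/build:20180904-${ARCH}" make all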

Existing build setup? Not sure!
Docker images? NO, don’t even try.
Docker images are actually made up of multiple layers, which are stored on the filesystem through the corresponding storage driver and backing filesystem (shown in the output of docker info).
If the storage driver or backing filesystem changes, which is likely when the OS changes, the old Docker images may no longer be valid, meaning they must be rebuilt for sure.
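You can check which storage driver and backing filesystem a host uses, for example (output depends on the installation):
docker info --format '{{.Driver}}'            # e.g. overlay2, devicemapper, ...
docker info | grep -i 'backing filesystem'    # e.g. xfs, extfs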

Related

Docker & Kubernetes & architecture: understanding platform differences

Intro
There is a --platform option for choosing which platform a Docker image is run as, and a corresponding platform key in the docker-compose configuration.
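For example (a rough sketch; the image and service names are just placeholders):
# CLI: force a specific platform for a single container
docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m

# docker-compose.yml: the equivalent per-service setting
services:
  web:
    image: my-private-repo/web:1.0
    platform: linux/amd64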
Also, almost all official Docker images on hub.docker.com support several architectures under a single tag.
For example, the official Ubuntu image lists multiple supported architectures for each tag.
Most servers (including Kubernetes nodes) are linux/amd64.
I upgraded my MacBook to a new one with Apple's own Silicon chip (M1/M2...), and now Docker Desktop shows me a warning message on mismatched images.
For official images (you can see they come without the yellow note) it automatically downloads the needed platform variant (I guess).
But for custom-built images (in a private repository like Nexus or another artifact store) I have no influence. Yes, I can build appropriate images (e.g. with buildx) for different platforms and push them to the private repository, but in companies where the repos are managed by DevOps it is tricky to do so. They say that the server architecture is linux/amd64, and if I develop web-oriented software (PHP etc.) on a different platform, even if the version (tag) is the same, then the environment is different and there is no guarantee that it will work on the server.
I assumed that the only difference is in how instructions are interpreted between the software and the hardware.
I would like to understand the subject better. There is a lot of superficial information on the web, but few details.
Questions
what "platform/architecture" for Docker image does it really means? Like core basics.
Will you really get different code for interpreted programming languages?
It seems to me that if the wrong platform is specified, the containers run very slowly. But how can I measure this (script performance, interaction with the host file system, etc.)?
TLDR
Build multi-arch images supporting multiple architectures
Always ensure that the image you're trying to run has compatible architecture
what "platform/architecture" for docker image does it really means? Like core basics. Links would be appreciated.
It means that some of the compiled binary code within the image contains CPU instructions exclusive to that specific architecture.
If you run that image on an incompatible architecture, it'll either be slower because the incompatible code has to run through an emulator, or it might not work at all.
Some images are "multi-arch", where your Docker installation selects the most suitable architecture of the image to run.
Will you really get different code for interpreted programming languages?
Different machine code, yes. But it will be functionally equivalent.
It seems to me that if the wrong platform is specified, the containers run very slowly. But how can I measure this (script performance, interaction with the host file system, etc.)?
I recommend always making sure you're running images built for your machine's architecture.
For the sake of science, you could do an experiment.
You can build an image that is meant to run a simple batch job for two different architectures, and then you can try running them both on your machine. Compare the time it takes the containers to finish.
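A rough sketch of such an experiment on an Apple Silicon Mac (the Dockerfile contents and tags are placeholders; it assumes buildx and QEMU emulation are available, e.g. via Docker Desktop):
# assuming a Dockerfile whose CMD does some CPU-bound busywork, e.g.
#   FROM python:3.12-slim
#   CMD ["python", "-c", "print(sum(i*i for i in range(10**7)))"]
docker buildx build --platform linux/arm64 -t bench:arm64 --load .
docker buildx build --platform linux/amd64 -t bench:amd64 --load .
time docker run --rm bench:arm64    # native
time docker run --rm bench:amd64    # emulated via QEMU; expect it to be noticeably slower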
Sources:
https://serverfault.com/questions/1066298/why-are-docker-images-architecture-specific#:~:text=Docker%20images%20obviously%20contain%20processor,which%20makes%20them%20architecture%20dependent.
https://www.docker.com/blog/multi-arch-build-and-images-the-simple-way/
https://www.reddit.com/r/docker/comments/o7u8uy/run_linuxamd64_images_on_m1_mac/

Does a Docker image include an OS?

I have the below Dockerfile:
FROM openjdk:12.0.2
EXPOSE 8080
ADD ./build/libs/*.jar app.jar
ENTRYPOINT ["java","-jar","/app.jar"]
The resulting Docker image encapsulates a Java program. When I deploy this Docker image to Windows Server or Linux, does the image always include an OS like Linux which runs on top of the host OS (Windows Server or Linux)?
I am asking this question in the sense of the Docker image being a physical box which contains other boxes (one being openjdk): does this box also contain a Linux OS box that I could pull out of it (assuming that were possible) and install as a Linux OS on an empty machine?
That depends on what you call the "OS". It will always contain stuff from the distribution image it is built on.
For example, a Debian-based image will include apt and other Debian-specific tools. But most of the stuff you need on a "complete" machine (as in non-container) will have been removed to keep the image as small as possible.
It will not contain the kernel, as it is running on the host machine and is controlled by the host's kernel.
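You can look inside an image to see what distribution it was built from, and confirm that the kernel comes from the host (whatever the tag happens to be based on will show up here):
docker run --rm openjdk:12.0.2 cat /etc/os-release   # identifies the base distribution inside the image
docker run --rm openjdk:12.0.2 uname -r              # prints the host's kernel version, since the kernel is shared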
The "official" OpenJDK images from the Docker Hub are available in variants based on a number of different Linux distributions. There is a cut-down Debian, an Alpine, and others. There are advantages and disadvantages to each.
The image will need to contain enough operating system dependencies to allow the JVM to run. It may also include basic diagnostic and management tools -- enough to carry out rudimentary troubleshooting in the container, anyway. You can expect all the images to contain at least basic console shell tools like "cp" and "cat", although they differ in implementation. For example, the Alpine variant gets these utilities from BusyBox, not from a conventional GNU/Linux installation.
It's possible to create a Docker image that contains no platform dependencies at all, but there's little incentive to be that minimal -- you'd just have to build more stuff into the application program itself.
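As an illustration of how minimal it can get, a statically linked program can run on a completely empty base image (a sketch only; the Go program and paths are placeholders):
# build stage: produce a statically linked binary
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

# final stage: no distribution at all, just the binary
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]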
It doesn't include the entire operating system, but the image will be dependent on either Linux or Windows; you can't build an image that runs on both from one Dockerfile.
The reason for the dependency is that a Docker container shares resources with its host machine in a carefully fenced-off way, and this mechanism is different on Windows and Linux (though to you, as a Docker user, the difference is invisible).

Why provide a Linux distro as a Dockerfile base when my host has all the software I need installed?

I want to start writing a Docker image. I have a .net Core 2.0 Web Api service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking this question because most of the examples I have seen include a base image. Most of the time it's something very generic, like "FROM ubuntu". I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc.) with which to install the tools you need to download files or run your software (which presumably needs to be linked against a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system, such as the resolver, that are provided in userland by your distro and not shared from the host in a Docker install.
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.
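A quick way to see that isolation in practice (assuming the host has curl installed; the slim Debian image is just an example of a minimal base):
curl --version                                          # works on the host
docker run --rm debian:bookworm-slim curl --version     # fails: the executable isn't in the image
docker run --rm debian:bookworm-slim apt-get --version  # but the distro's package manager is there to install it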

What's the difference between Docker and Chef's new Habitat tool?

Does Chef's new Habitat tool somehow work with Docker? If so, what problem is Habitat trying to solve or is it just trying to replace tools in the Docker toolset (e.g., Docker Swarm, Docker Machine, Docker Compose, etc.)?
This is skirting the limits of StackOverflow's policy on open-ended questions, but I'll answer anyway:
Docker and Habitat don't really overlap much. The main point of competition is on building release artifacts. Docker has Dockerfiles and docker build, Habitat has plans and the Studio. The output of both can be a Docker image, though, which is basically a tarball of a filesystem along with some metadata. Habitat is aimed more at building super-minimal artifacts, i.e. not including a Linux distro of any kind, no package manager, just statically compiled executable code and whatever support files you need for that specific app.
As for runtime, they are 100% orthogonal. Docker is a way to run a process inside a bunch of Linux security features collectively called a "container" now. Habitat is a little stub that surrounds your process and handles things like runtime config distribution, secrets transfer, and service discovery. Those features overlap more with higher-level tools like Kube, but even there it's only barely overlapping. You need something to actually start hab-sup, which could be docker run (possibly via Swarm), Nomad, Kube, or even a non-container system like Upstart or Runit if you wanted to. The only interaction point is that those tools all start an entrypoint process, and hab-sup is a generic entrypoint process that gives whatever app it runs underneath some cool features if they want to use 'em.

"Dockerized" apps frequently are built on top of OS containers. Why doesn't this defeat the purpose?

A question came up as I was giving a presentation on Docker to my team that I didn't know how to answer.
Many of the prebuilt containers on Docker Hub, for just one example the jboss/wildfly container, are built on top of containers for a specific OS (Ubuntu, CentOS, etc.). A few of these containers ARE in fact nothing but containers for these OSes.
Yet Docker's main raison d'être, its prime claim to fame, the basis of its claim that it is better than Virtual Machine technologies, is that it is lighter weight because it doesn't need to be built on top of an OS. But if this is so, and most containers include an OS, doesn't this defeat the purpose and invalidate the claim?
So what IS in these OS Docker images, and how is the claim of lighter weight still able to be made? Is it some stripped down version of an OS?
Can one make a Docker image that is not built on top of an OS?
What determines when an application gets OS services from the OS embedded in the container, as opposed to getting OS services from the host?
A Docker image (which will most likely contain the base system from a Linux distribution) is read-only and is augmented with additional layers that are created as you write to a location. So you can share the base image and have "add-ons", if you will. This is called a union file system. The Docker documentation provides more information here. This kind of sharing makes Docker consume fewer resources (filesystem space, in this case) compared to VMs, where you'd have to install a new distribution for each one.
Note that you don't have to have a full Ubuntu installation (the kernel is shared with the host system anyway); it is just that most of it is usually required by the applications you want to run in your container. You can easily find images that are stripped down, omitting files not needed to run most applications while still being viable for many targets (so you can still share the base image, see above).
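You can see both the layering and the sharing from the CLI, for example (any image works; jboss/wildfly is just the one mentioned in the question above):
docker history jboss/wildfly   # lists the read-only layers that make up the image
docker system df -v            # shows how much on-disk space is shared between images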
