GitHub Codespaces: how to set x86_64, AMD64, ARM64 platform?

First, the question: is there a way to choose the platform (e.g. x86_64, AMD64, ARM64) for a GitHub Codespace?
Here's what I've found so far:
Attempt 1 (not working):
From within GitHub.com, you can choose the "machine" for a Codespace, but the only options are RAM and disk size.
Attempt 2 (EDIT: not working): devcontainer.json
When you create a Codespace, you can specify options by creating a top-level .devcontainer folder with two files: devcontainer.json and Dockerfile
Here you can customize runtimes, installed packages, etc., but the docs don't say anything about determining architecture...
...however, the VS Code docs for devcontainer.json list a runArgs option, which "accepts Docker CLI arguments"...
and the Docker CLI docs on --platform say you should be able to pass --platform linux/amd64 or --platform linux/arm64, but...
When I tried this, the Codespace would just hang, never finishing building.
Attempt 3 (in progress): specify in Dockerfile
This route seems the most promising, but it's all new to me (containerization, codespaces, docker). It's possible that Attempts 2 and 3 work in conjunction with one another. At this point, though, there are too many new moving pieces, and I need outside help.
Does GitHub Codespaces support this?
Would you pass it in the Dockerfile or devcontainer.json? How?
How would you verify this, anyway? [Solved: dpkg --print-architecture or uname -a]
For Windows, presumably you'd need a license (I didn't see anything on GitHub about pre-licensed codespaces) -- but that might be out of scope for the question.
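For reference, a minimal sketch of what Attempt 2 looked like in .devcontainer/devcontainer.json (the name value is illustrative; this is the shape of configuration that left my Codespace hanging):
{
  "name": "platform-test",
  "build": { "dockerfile": "Dockerfile" },
  "runArgs": ["--platform=linux/arm64"]
}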
References:
https://code.visualstudio.com/docs/remote/devcontainerjson-reference
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/builder/
https://docs.docker.com/desktop/multi-arch/
https://docs.docker.com/buildx/working-with-buildx/

EDIT: December 2021
I received a response from GitHub support:
The VM hosts for Codespaces are only x86_64 and we do not offer any ARM64 machines.
So for now, setting the platform does nothing, or fails.
But if they end up supporting multiple platforms, you should be able to specify one in the Dockerfile:
FROM --platform=linux/arm64 [image-name] (or linux/amd64 for x86-64),
which is working for me in the non-cloud version of Docker.
Original answer:
I may have answered my own question
In Dockerfile:
I had FROM alpine
changed to
FROM --platform=linux/amd64 alpine
(the --platform flag belongs on the FROM instruction, and linux/amd64 is Docker's platform string for x86-64)
checked at the command line with
uname -a to print the architecture.
Still verifying, but seems promising. [EDIT: Nope]
So, despite the above, I can only get GitHub codespaces to run x86-64. Nevertheless, the above syntax seems correct.
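To sanity-check the syntax outside Codespaces (a minimal sketch, assuming a local Docker install with QEMU/binfmt emulation available; the tag name is illustrative):
# Dockerfile
FROM --platform=linux/arm64 alpine
# build and check which architecture actually runs
docker build -t platform-test .
docker run --rm platform-test uname -m    # prints aarch64 under arm64 emulation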
A clue:
In the logs that appear while the codespace is building, I saw target OS: x86
Maybe GitHub just doesn't support other architectures yet.
Still investigating.

Currently, only x64-based hosts running Linux are supported for Codespaces. Other hardware and host OS types are yet to be announced.

Related

Docker under WSL without Docker Desktop

This question is about running Docker from within WSL, without Docker Desktop. It is doable for WSL2, so the focus here is on WSL1 specifically. In my research,
some say "the Docker daemon cannot run directly on WSL", while
another article says Docker can be run "seamlessly in Windows Subsystem Linux" with the help of Docker Community Edition 17.09.0, because "A crucial change was made to the WSL kernel that enables the usage of cgroups which Docker needs to manage your system's resources into containers."
My Docker is 20.10.5 under Debian bullseye. Would it still work?
I tried it, and got:
iptables can't initialize iptables table `nat': Table does not exist
and the answer to "Iptables v1.6.1 can't initialize iptables table `filter' Ubuntu 18.04 Bash Windows" is that,
According to the Microsoft WSL page on github.com, iptables isn't supported.
https://github.com/Microsoft/WSL/issues/767
But that was more than 4 years ago, and since it became possible later, in 2019, I'm wondering what the latest status is.
WSL1 - The little engine that could (link included since that reference may only be understood by a limited audience).
Unfortunately, in the case of Docker, the WSL1 engine seems to have run out of steam. Reading the blog post you reference, and the corresponding GitHub thread, I'm pretty amazed at just how far along folks got with running Docker. I had never seen that before.
However, if you read the full comments on the GitHub thread, it appears that the results were fairly limited. Placing these excerpts in order:
[2018-04-23] I'm glad to say Docker daemon finally runs on WSL. I'm testing on build 17134. ... The last docker-ce version that works right now on build 17134 is 17.09.0. Anything after that fails on extracting the docker images.
Note that it had to (and still has to) be run in a WSL1 instance running as a Windows admin.
[2018-06-12] Unfortunately, docker-compose still doesn't work.... There is a problem with iptables which is not fully supported via WSL yet.
(Which you've run into, although I didn't; perhaps because of the "admin" requirement?)
[2018-07-09] Yeah, I recently mentioned it on Twitter and got a major "we aren't supporting this, we highly advise against it" message from our former WSL PM.
[2018-11-13] WSL PM here. As mentioned in the above comment, we have improved Docker support in recent builds of WSL. Most (if not all) versions of docker-ce work with WSL. We're working on a large set of changes for WSL currently. As part of those changes, we are looking at adding native Docker support in WSL. I will add to this thread and other issues on Docker support when I have additional updates to share
It doesn't seem like this ever progressed, since the PM never posted again in the thread, at least.
[2019-04-18] Like others have pointed out, running docker 17.09 works. Anything later fails with different errors. It might be that newer docker versions are using other syscalls not yet implemented by WSL.
There are some other messages scattered in here about running with --network host (for the client) or --iptables=false (for the daemon).
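Putting those scattered workarounds together, the invocation looked roughly like this (a sketch based on the thread, assuming docker-ce 17.09 inside an elevated WSL1 instance; I haven't verified it beyond what the thread reports):
# daemon: skip iptables management, which WSL1 lacks
sudo dockerd --iptables=false &
# client: use the host network to sidestep NAT
docker run --rm --network host hello-world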
[2019-08-04] Windows Insider Fast Ring build (>=18917) via WSL2, latest docker/docker-compose is running native in WSL Linux.
And in late 2020, the thread died off.
In a test WSL1 Ubuntu 20.04 instance, I was able to get hello-world running, but nothing more. Running a busybox or ubuntu image (with or without an interactive terminal) failed with:
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: waiting for init preliminary setup: EOF: unknown.
Once the focus shifted to WSL2 and its real kernel, the WSL team doesn't appear to have made any more progress on WSL1's pseudo-kernel syscall translation layer.
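(As an aside, a quick way to tell which WSL version an instance is - the exact kernel strings are an assumption and vary by build:
wsl.exe -l -v    # from Windows: lists distros with a VERSION column of 1 or 2
uname -r         # from inside: WSL1 shows a *-Microsoft pseudo-kernel, WSL2 a *-microsoft-standard kernel)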

Create a docker image from old linux distro without distro's repository

I have a bootable ISO image (live CD) with a Linux system that is pretty old. That distro has no remote repo (all installations are done from the CD-ROM and a separate disk with packages). I wanted to turn it into a Docker image. Reading through the articles Google gave me, I found several ways to do that. The first is to mount the ISO and find filesystem.squashfs - only modern distros ship that file, so it's not my case. The second approach is to call debootstrap, but that requires specifying a repo for the distro with a dist directory available in it. My distro doesn't have a public repo. What can I do? Is it even possible? I think it should be possible by doing a lot of things manually, but how?
I faced similar problems when I had to containerize an old build server (building natively for legacy systems), and eventually I succeeded. This approach describes how to containerize an old Linux distro (kernel 2.6.27 in my case) in the present Linux kernel 5 era.
General steps
if necessary: boot the old OS (or Live CD image)
login to the old system as root (or use sudo)
create a tarball from the relevant folders present in root
cd / ; tar cfvz image.tar.gz --one-file-system --exclude=/var/log --exclude=/image.tar.gz /
the selection worked in my case; review for yourself which folders to include or exclude
transfer the tarball to the Docker host (step not shown here)
and import it:
docker import image.tar.gz
the previous command will print out some hash
if convenient, tag the imported image:
docker tag <import-hash> <your-label>
Legacy problem: unsupported system calls
The imported image contains a Linux distribution snapshot. Some binaries can be executed from Docker, e.g.:
docker run --rm <your-label> bin/ls
may actually work.
Some important binaries initially did not work for me, most notably bash:
docker run -it --rm <your-label> bin/bash
was failing silently. (Also, running with strace was possible but gave no clear indication.)
As @hiranchaudhuri pointed out, this is likely due to an API discrepancy between the host's kernel and the container's user-space code.
In my case the problem was solved by enabling the legacy vsyscall kernel API
for Windows WSL2, this is described here https://learn.microsoft.com/en-us/windows/wsl/wsl-config
for native Linux systems of today, I guess this can be set in the boot configuration, with the kernel command-line parameter vsyscall=emulate, if the present kernel supports this option
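For a GRUB-based distro, a sketch of what that looks like (file locations and the update-grub helper are assumptions; adjust for your distro):
# /etc/default/grub - append the flag to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet vsyscall=emulate"
# regenerate the GRUB config and reboot
sudo update-grub
sudo reboot
# afterwards, verify the flag took effect
cat /proc/cmdline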
I seriously doubt you will succeed on that.
Be aware Docker is not full virtualization like KVM or VirtualBox. Its lightweight virtualization comes from containers running directly on the host's Linux kernel, which means the kernel is the same inside and outside of the container.
If you now try to install some old distro inside the container, you may end up with an incompatible combination. Working around the kernel mismatch may involve upgrading glibc, and patching that may involve recompiling the rest of the OS.
I am not sure why you want to stick to the old distro, but seriously I believe you are better off with real virtualization.

docker error while loading shared libraries (RHEL 7.5)

I installed Docker on a Red Hat Enterprise Linux Server 7.5 (Maipo) system:
docker version
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-58.git87f2fab.el7.x86_64
OS/Arch: linux/amd64
Now if I try to run a docker image, I get errors similar to this:
docker run docker.io/jupyter/datascience-notebook
tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
I have searched for help and have already taken a multitude of possible actions:
libraries seem to be linked correctly
all libraries are up to date
Hello-World example works
I also came across information saying that running containers from docker.io / hub.docker.com under RHEL is not supported - which I don't really get, as the main purpose of Docker is to run programs independently of the host OS...?
https://access.redhat.com/solutions/1408853 Does this mean using docker under RHEL does not really provide me with the possibility of easily deploying/sharing a docker-image with non-RHEL users?
Also, does this mean I can only access and use official RHEL-docker images?
https://access.redhat.com/containers/?start=90#/search/
As I wanted to use docker to have ready-to-go environments with R-Python/Jupyter/H2o (and similar), I'm disappointed because I could not find suitable images for RHEL there.
So, my questions would be:
Is it possible to run docker.io / hub.docker.com images under RHEL7.5?
if not, could I share Docker images I create under RHEL 7.5 with users on different OS versions?
Are there other projects / sites to share docker-images for data science purposes on RHEL?
Would you agree that my next step would be: building my own docker-image, adding R/Python/jupyter step by step?
This error message
tini: error while loading shared libraries: libc.so.6: cannot open shared object file: No such file or directory
comes from within the container image. It could be a corrupted container image, but the message is also printed when the glibc dynamic linker determines that the kernel features are not sufficient for loading libc.so.6. I looked at the image (digest is sha256:79f929bd0e58fa9cb238dceda48b0c8360e748d09b476b429216c93dac0bd783), and it appears to require kernel 3.2, so the Red Hat Enterprise Linux 7 kernel version of 3.10 should be sufficient.
In fact, I cannot reproduce this problem with kernel-3.10.0-862.6.3.el7.x86_64 and docker-1.13.1-58.git87f2fab.el7.x86_64. You could try to run this command to obtain additional information about dynamic linker behavior:
docker run -e LD_DEBUG=all docker.io/jupyter/datascience-notebook

How did Docker know to emulate arm architecture?

This was a huge surprise for me:
Today, using Docker For Mac (18.03.1-ce-mac65), I ran a Debian Stretch image. Inside the image I mounted the latest Raspbian Stretch image (2018-04-18-raspbian-stretch-lite) using mount. I then used chroot to this mounted Raspbian filesystem.
This is where it got weird. I was able to use apt (without any special modifications) to install software into this mounted filesystem.
Running:
dpkg --print-architecture
returned: armhf
and the software I installed (vim) worked like a charm
I was even able to compile a simple program using gcc and run it.
But, I need to know! How is this possible?
According to Docker:
Docker for Mac provides binfmt_misc multi architecture support, so you can run containers for different Linux architectures, such as arm, mips, ppc64le, and even s390x.
EDIT
On Linux, you can install qemu-user-static and then follow this git repo to get cross-architecture support!
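For illustration, one common recipe on a Debian/Ubuntu host (the multiarch/qemu-user-static helper image and the example containers are assumptions, not the only option):
# install user-mode QEMU binaries
sudo apt-get install qemu-user-static
# register QEMU as the binfmt_misc handler for foreign architectures
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
# a foreign-architecture container now runs transparently
docker run --rm arm32v7/debian dpkg --print-architecture    # prints armhf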

Development environment setup for Mac and CentOS using Docker

I have searched the history a little bit but failed to find a good answer. So I just asked my question here. If there is a good answer already, please redirect it for me. Thanks.
The question is: my company's new-hire doc lists a bunch of software to install to set up the development environment. It usually takes 1 or 2 days for a new hire to get everything ready on a new Mac. We want to shorten that process. The first thing I thought of is Docker.
I read through the Docker user guide and followed some blogs about setting up a dev environment using Docker, but I'm still a little confused about whether Docker applies to our setting. Here's the detail of the requirements:
We need to install a bunch of software (much of it customized binaries). Right now we distribute the source code; a new hire needs to build it from source, install it, and set the environment to include the binaries in the PATH. I am wondering if Docker allows us to install customized binaries into its container?
The source code should not stay in the container. The source code is still checked out on one's local machine using git. Then, how can I rely on the Docker container's environment to build my software? From what I've found, you need to mount your folder into the container and then shell into the container to build? Is that how it works?
We usually develop on Mac. Does Docker also support Mac containers, or does it just allow you to run Linux containers via Boot2Docker?
Thank you so much in advance for your help.
Some answers :)
First, I think it's a really good idea to use Docker to standardise the development configuration (software, custom packages, env variables, ...).
With Docker, you can get your customised binaries from the host; it's not a problem. With RUN instructions in your Dockerfile, you can use bash to install them and add them to your PATH. You can also write a shell script that installs all your stuff and run it when you build your image (see the sketch after this list).
Your code will be on the host, and you can "mount" a host folder into your Docker container with the -v flag. Ex: docker run -v /home/user/code:/tmp/code your_image. I'll detail below how the developer will use your Docker image.
Yep, you have to use Boot2Docker, it works well
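A sketch of point 1 as a Dockerfile (the base image, folder layout, and install script are all illustrative assumptions):
FROM ubuntu:14.04
# copy your customised binaries/sources from the build context
COPY tools/ /opt/tools/
# install them at build time with your own script
RUN /opt/tools/install.sh
# put the binaries on the PATH for everyone using the image
ENV PATH=/opt/tools/bin:$PATH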
Once your development image is ready, you have to publish it to the official Docker registry (or host a local registry on your network).
Next, the developer will launch the following Docker command:
docker run --rm -ti your_build_image /bin/bash
This will launch a bash terminal in the container, and the developer will be able to compile the code. Ex: cd /tmp/code + mvn clean install
Please have a look at this article to learn about volumes: http://jam.sg/blog/mongodb-docker-part-2/
And this one about Dockerfiles: https://www.digitalocean.com/community/tutorials/docker-explained-using-dockerfiles-to-automate-building-of-images
You can also find a lot of Dockerfiles on github (search Dockerfile).
If the goal is to speed up the time it takes to get a Mac set up and usable in your environment, you might want to look at Boxen.
From the "About" section:
"Boxen is your team's IT robot. It's a dangerously opinionated framework that automates every piece of your development environment. GitHub, Inc. wrote the first version of Boxen (imaginatively called “The Setup”) to help employees start shipping on day one."
