What OS is used when using `FROM rust` inside a Dockerfile - docker

I am a little bit confused about how containers run. I am developing on a Mac, and when I copy my compiled binary into a Docker image based on Debian, I get an error that the file cannot be executed. I googled it and it has something to do with different CPU architectures; I would need to cross-compile. That makes sense.
This, however, works:
FROM rust:1.65 AS builder
WORKDIR app
COPY . .
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder ./app/target/release/hello ./app/myapp
CMD ["./app/myapp"]
I can build a binary without knowing in advance which architecture I am compiling for, right? This is because I just do a cargo build in a builder stage based on rust:1.65. I am curious how it knows the binary will be run on Debian and on the correct CPU.
How does FROM rust:1.65 compile for the correct architecture? Or is it just all the same default architecture in a Dockerfile?

Which operating system you target is (likely) a more significant variable than which processor architecture.
The Docker core doesn't run natively on macOS. Docker Desktop runs a hidden Linux virtual machine. If you compile the binary on the host, you get a macOS binary, and then you try to run it in a Linux container, which results in an error. If you do the compilation in a container too, it's all Linux.
More generally, there are also lurking problems around shared libraries, support files, permissions, ... and unless you're confident in what you're doing I would not try to build binaries on the host and copy them into an image or container. Install them in the image, either compiling them yourself or using the base image distribution's package manager.
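If you're ever unsure which kind of binary you've produced, checking it before copying it anywhere makes the mismatch obvious. A minimal sketch, assuming the binary path from the Dockerfile above and that the file utility is available in the images involved:
# On the Mac host this reports a Mach-O executable, which Linux cannot run:
$ file target/release/hello
# Built inside the rust image, the same check reports a Linux ELF executable:
$ docker run --rm -v "$PWD":/app -w /app rust:1.65 \
    sh -c "cargo build --release && file target/release/hello"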

You can compile for the given architecture.
Run the following command to see all available targets (see the rustc documentation):
docker run --rm -ti rust:1.65 rustc --print target-list
and in your Cargo config (.cargo/config.toml) you set up the build option (see the Cargo documentation):
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
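Putting both together, a minimal sketch of building for an explicit target inside the rust image and copying the result into a matching runtime image (the musl target, the alpine base, and the binary name hello are assumptions for illustration, not from the question):
FROM rust:1.65 AS builder
WORKDIR /app
COPY . .
# make the target explicit instead of relying on the image's default
RUN rustup target add x86_64-unknown-linux-musl
RUN cargo build --release --target x86_64-unknown-linux-musl

FROM alpine:3.17
# note that the output path now contains the target triple
COPY --from=builder /app/target/x86_64-unknown-linux-musl/release/hello /usr/local/bin/hello
CMD ["hello"]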

Related

How is Go executable created in Docker?

I'm pretty new to Golang & Docker development. I'm following the instructions for the official Golang Docker Hub image. The part I really don't get is the last line of its example Dockerfile:
CMD ["app"]
My question is, how is the "app" executable created in the first place? I created a standard hello-world.go file and added this Dockerfile to a directory. I don't get how building the Docker image would generate an executable called "app". Can someone explain?
Excerpt from the go command documentation (https://golang.org/cmd/go/#hdr-Compile_and_install_packages_and_dependencies):
Compile and install packages and dependencies
Usage:
go install [-i] [build flags] [packages]
Install compiles and installs the packages named by the import paths.
Executables are installed in the directory named by the GOBIN
environment variable, which defaults to $GOPATH/bin or $HOME/go/bin if
the GOPATH environment variable is not set. Executables in $GOROOT are
installed in $GOROOT/bin or $GOTOOLDIR instead of $GOBIN.
When module-aware mode is disabled, other packages are installed in
the directory $GOPATH/pkg/$GOOS_$GOARCH. When module-aware mode is
enabled, other packages are built and cached but not installed.
The -i flag installs the dependencies of the named packages as well.
For more about the build flags, see 'go help build'. For more about
specifying packages, see 'go help packages'.
See also: go build, go get, go clean.
This makes an executable out of your Go code.
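A minimal sketch of how that ends up producing an executable named app (this is not the exact Dockerfile from the image documentation; golang:1.20 and a module path ending in app are assumptions):
FROM golang:1.20
WORKDIR /src
COPY . .
# go install compiles the main package and places the binary in /go/bin,
# named after the last element of its import path (here assumed to be "app");
# /go/bin is on PATH in the golang image, so CMD ["app"] can find it
RUN go install -v ./...
CMD ["app"]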

Copy ffmpeg bins in multistage docker build

I'm trying to install ffmpeg via a multistage docker build
Here is the ffmpeg image that contains the ffmpeg binaries
FROM jrottenberg/ffmpeg
Here is the pm2 image that I need to run my web server
FROM keymetrics/pm2:8-alpine
I copy the bins into the current image, and I can see that ffmpeg, ffserver, and ffprobe all exist in /usr/local/bin.
COPY --from=0 /usr/local /usr/local
The copy command appears to succeed, since those files exist when I run the container interactively.
$# which ffmpeg
/usr/local/bin/ffmpeg
However, when I try running the bins, it says the command isn't found.
$# ffmpeg --version
/bin/sh: ffmpeg: not found
I've had a similar issue and ended up building my own binaries with no dependencies, using the Alpine gcc toolchain that supports building "static" PIE binaries. The reason was that I wanted no dependencies, a hardened build, and ASLR support.
https://hub.docker.com/r/mwader/static-ffmpeg/
It makes sense that you need to use the same base image for compatibility reasons. I was using jrottenberg/ffmpeg (which defaults to ubuntu). I should have been using jrottenberg/ffmpeg:3.3-alpine since I'm using the alpine-based pm2 image.
Also, the ffmpeg binaries depend on some shared libraries, so copying /usr/local wasn't enough to make it work. I'm sure there's a more graceful solution, but I ended up just copying the root directory, which did the trick.
FROM jrottenberg/ffmpeg:3.3-alpine
FROM <extension of 3.3-alpine>
# copy ffmpeg bins from the first stage
COPY --from=0 / /
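The same idea reads a bit more clearly with a named stage (a sketch; keymetrics/pm2:8-alpine is the image from the question and stands in for the "<extension of 3.3-alpine>" placeholder):
FROM jrottenberg/ffmpeg:3.3-alpine AS ffmpeg
FROM keymetrics/pm2:8-alpine
# copy the ffmpeg binaries together with the libraries they depend on
COPY --from=ffmpeg / /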
It seems that an issue has been opened for this, and a few workarounds have been proposed for it.

Unable to run a Docker image with a Rust executable

I am trying to create an image with my binary file (written in Rust) but I get different errors. This is my Dockerfile:
FROM scratch
COPY binary /
COPY .env /
COPY cert.pem /etc/ssl/
ENV RUST_BACKTRACE 1
CMD /binary
Building finishes fine but when I try to run it I get this:
$ docker run binary
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled
And this:
$ docker run binary /binary
standard_init_linux.go:195: exec user process caused "no such file or directory"
I have no idea what to do. The error message looks very odd to me. According to the official Docker documentation it must work.
System info: latest Arch Linux and Docker:
Docker version 18.02.0-ce, build fc4de447b5
I tested with a C++ program and it works fine, with both clang and gcc.
It does not work with scratch, alpine, busybox, or bash-based images, but it does work with postgresql, ubuntu, and debian images. The exact problem is something related to Rust and lightweight Docker images; everything works okay otherwise.
As #Oleg Sklyar pointed out, the problem is that the Rust binary is dynamically-linked.
This may be a bit confusing because many people who have heard of Rust have also heard that Rust binaries are statically-linked, but this refers to the Rust code in crates: crates are linked statically because they are all known at the moment of compilation. This does not refer to existing C dynamic libraries that the program may link to, such as libc and other must-have libraries. Often times, these libraries can also be built as statically-linked artifacts (see the end of this post). To check whether your program or library is dynamically-linked, you can use ldd utility:
$ ldd target/release/t
linux-vdso.so.1 (0x00007ffe43797000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fa78482d000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007fa784625000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fa784407000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007fa7841f0000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fa783e39000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fa784ca2000)
You'll need these libraries in your Docker image. You will also need the interpreter; to get its path you can use the objdump utility:
$ LANG=en objdump -s -j .interp target/release/t
target/release/t: file format elf64-x86-64
Contents of section .interp:
0270 2f6c6962 36342f6c 642d6c69 6e75782d /lib64/ld-linux-
0280 7838362d 36342e73 6f2e3200 x86-64.so.2.
Copy the files into the expected directories and everything works okay.
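A minimal sketch of that first option (assumptions: it builds in a rust:1.65 stage rather than on the Arch host, so the library paths below are the Debian ones rather than the /usr/lib paths printed above; copy whatever ldd actually lists for your binary):
FROM rust:1.65 AS builder
WORKDIR /app
COPY . .
RUN cargo build --release

FROM scratch
# the interpreter and the shared libraries reported by ldd/objdump,
# at the paths the Debian-based builder uses (add libdl, libpthread, ... as needed)
COPY --from=builder /lib64/ld-linux-x86-64.so.2 /lib64/
COPY --from=builder /lib/x86_64-linux-gnu/libc.so.6 /lib/x86_64-linux-gnu/
COPY --from=builder /lib/x86_64-linux-gnu/libgcc_s.so.1 /lib/x86_64-linux-gnu/
COPY --from=builder /app/target/release/binary /binary
# exec form, because scratch has no /bin/sh for a shell-form CMD
CMD ["/binary"]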
There is also a second option: use the rust-musl-builder docker image. There are some problems with postgresql and diesel, but it works well for most projects. It produces a statically-linked executable which you can just copy and use. This option is preferable to shipping an interpreter and dynamic libraries if you want a smaller image without all the extra data (interpreter, unused libraries, and so on).
In my case, the issue was that I was passing an invalid executable name:
CMD ["liquidator"]
liquidator was the name of the Docker image, but I needed this:
CMD ["hifi-liquidator"]
Basically the CMD must be the same as the "name" field in the Cargo.toml file.
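For illustration, the relationship looks like this (names taken from this answer):
# Cargo.toml
[package]
name = "hifi-liquidator"

# Dockerfile: the binary Cargo produces is named after the package
CMD ["hifi-liquidator"]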

Running C/C++ binary executable as a docker container

I am new to the container world and exploring options to run my application in a container. Here are the things that I am seeing:
When I include compiling and building the C/C++ binary as part of docker image itself, it works fine with out any problems. Container starts and everything works fine.
If I try to run an already compiled, existing binary using CMD ["./helloworld"] in a container, it throws this error:
standard_init_linux.go:185: exec user process caused “exec format error”.
Any ideas how to get around this problem? This seems like a basic problem that would have been solved already.
Here is my dockerfile:
FROM ubuntu
COPY . /Users/test//Documents/CPP-Projects/HelloWorld-Static
WORKDIR /Users/test/Documents/CPP-Projects/HelloWorld-Static
CMD ["./build/exe/hellostatic/hellostatic"]
Here is my exe:
gobjdump -a build/exe/hellostatic/hellostatic
build/exe/hellostatic/hellostatic: file format mach-o-x86-64
build/exe/hellostatic/hellostatic
Here is the error:
docker run test
standard_init_linux.go:185: exec user process caused “exec format error”
The problem is that you are trying to run an incompatible binary format in your container...
You are running an Ubuntu-based container (the FROM ubuntu line), but you are trying to run a Mach-O binary. By default, Linux will not run Mach-O binaries.
Build your binary for the target platform (Ubuntu/Linux) and it will work well. It appears that you are running Mac OS X, so you could install an Ubuntu VM to compile your binary and transfer it to be used by the container.
When you build it inside the container, it works because it will be built to the right platform.
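Alternatively, compile inside the image as part of the build, which sidesteps the host/target mismatch entirely. A minimal sketch (the gcc base image and the hello.c file name are assumptions, not from the question):
FROM gcc:12 AS builder
WORKDIR /src
COPY hello.c .
# compiled inside a Linux image, so the output is a Linux ELF executable;
# statically linked so it doesn't depend on the builder image's libc
RUN gcc -static -O2 -o hello hello.c

FROM ubuntu
COPY --from=builder /src/hello /usr/local/bin/hello
CMD ["hello"]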

Compiling and running in different containers

I have a project which compiles to a binary file, and running that binary file exposes some REST APIs.
To compile the project I need docker image A, which has the compiler and all the libraries required to produce the executable. To run the executable (i.e. host the service) I can get away with a much smaller image B (just a basic Linux distro, no need for the compiler).
How does one use docker in such a situation?
My thinking for this scenario is that you can prepare two base images:
The 1st one, which includes compiler and all libs for building your executable, call it base-image:build
The 2nd one, as the base image to build your final image for delivery, call it base-image:runtime
And then break your build process into two steps:
Step 1: build your executable inside base-image:build, and then put your executable somewhere, like NFS or any registry, from where you can fetch it for later use;
Step 2: write your Dockerfile which FROM base-image:runtime, fetch your artifact/executable from wherever generated by Step 1, docker build your delivery image, and then docker push to your registry for release.
Hope this could be helpful :-)
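A minimal sketch of the Step 2 Dockerfile (base-image:runtime is the name from this answer; the artifact URL and the binary name myservice are hypothetical placeholders):
FROM base-image:runtime
# fetch the executable that Step 1 published (hypothetical location)
ADD https://artifacts.example.com/myservice/latest/myservice /usr/local/bin/myservice
# files added from a URL are not executable by default
RUN chmod +x /usr/local/bin/myservice
CMD ["myservice"]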
mkdir local_dir
docker run -dv $PWD/local_dir:/mnt BUILD_CONTAINER
compile code and save it to /mnt in the container. It'll be written to local_dir on your host filesystem and persist after the container is destroyed.
You should now write a Dockerfile and add a step to copy in the new binary, then build. But for example's sake...
docker run -dv $PWD/local_dir:/mnt PROD_CONTAINER
Your bin, and everything else in local_dir, will reside in the container at /mnt/
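The Dockerfile step mentioned above could look roughly like this (a sketch; the runtime base image and the binary name myapp are assumptions):
FROM debian:buster-slim
# copy in the binary that the build container wrote into local_dir
COPY local_dir/myapp /usr/local/bin/myapp
CMD ["myapp"]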
