I am trying to create an image with my binary file (written in Rust) but I get different errors. This is my Dockerfile:
FROM scratch
COPY binary /
COPY .env /
COPY cert.pem /etc/ssl/
ENV RUST_BACKTRACE 1
CMD /binary
Building finishes fine but when I try to run it I get this:
$ docker run binary
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/sh\": stat /bin/sh: no such file or directory": unknown.
ERRO[0000] error waiting for container: context canceled
And this:
$ docker run binary /binary
standard_init_linux.go:195: exec user process caused "no such file or directory"
I have no idea what to do. The error message looks very odd to me. According to the official Docker documentation, this should work.
System info: latest Arch Linux and Docker:
Docker version 18.02.0-ce, build fc4de447b5
I tested with a C++ program and it works fine, with both clang and gcc.
It does not work with scratch, alpine, busybox, or bash-based images, but it does work with postgresql, ubuntu, and debian images. The exact problem is something related to Rust and lightweight Docker images; everything works fine otherwise.
As @Oleg Sklyar pointed out, the problem is that the Rust binary is dynamically linked.
This may be confusing, because many people who have heard of Rust have also heard that Rust binaries are statically linked. That is true of the Rust code in crates: crates are linked statically because they are all known at the moment of compilation. It does not apply to existing C dynamic libraries that the program may link against, such as libc and other must-have libraries. Oftentimes these libraries can also be built as statically linked artifacts (see the end of this post). To check whether your program or library is dynamically linked, you can use the ldd utility:
$ ldd target/release/t
linux-vdso.so.1 (0x00007ffe43797000)
libdl.so.2 => /usr/lib/libdl.so.2 (0x00007fa78482d000)
librt.so.1 => /usr/lib/librt.so.1 (0x00007fa784625000)
libpthread.so.0 => /usr/lib/libpthread.so.0 (0x00007fa784407000)
libgcc_s.so.1 => /usr/lib/libgcc_s.so.1 (0x00007fa7841f0000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007fa783e39000)
/lib64/ld-linux-x86-64.so.2 => /usr/lib64/ld-linux-x86-64.so.2 (0x00007fa784ca2000)
You'll need these libraries in your Docker image. You will also need the interpreter; to get its path, you can use the objdump utility:
$ LANG=en objdump -s -j .interp target/release/t
target/release/t: file format elf64-x86-64
Contents of section .interp:
0270 2f6c6962 36342f6c 642d6c69 6e75782d /lib64/ld-linux-
0280 7838362d 36342e73 6f2e3200 x86-64.so.2.
Copy the files into the expected directories and everything works okay.
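Putting it together, a minimal sketch of such a Dockerfile might look like this. It assumes the libraries from the ldd output above were first copied into a libs/ directory in the build context (COPY can only read from the context, not from host paths); also note the exec-form CMD, since scratch has no /bin/sh to run the shell form:

FROM scratch
COPY binary /
# dynamic libraries from the ldd output, staged in ./libs in the build context
COPY libs/libdl.so.2 libs/librt.so.1 libs/libpthread.so.0 libs/libgcc_s.so.1 libs/libc.so.6 /usr/lib/
# the interpreter, at the exact path reported by objdump
COPY libs/ld-linux-x86-64.so.2 /lib64/
CMD ["/binary"]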
There is also a second option, which is to use the rust-musl-builder Docker image. There are some problems with postgresql and diesel, but for most projects it works well. It produces a statically linked executable which you can just copy and use. This option is much preferable to shipping an interpreter and dynamic libraries if you want a smaller Docker image without all the extra data such as the interpreter, unused libraries, and so on.
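If you don't need the prebuilt builder image, a plain musl build achieves the same thing; a minimal sketch, assuming the musl target is available for your toolchain:

$ rustup target add x86_64-unknown-linux-musl
$ cargo build --release --target x86_64-unknown-linux-musl

The binary under target/x86_64-unknown-linux-musl/release/ is then statically linked and can be COPYed into a scratch image as-is.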
In my case, the issue was that I was passing an invalid executable name:
CMD ["liquidator"]
liquidator was the name of the Docker image, but I needed this:
CMD ["hifi-liquidator"]
Basically, the CMD must match the name of the binary, which comes from the "name" field in the Cargo.toml file (under [package], unless a [[bin]] section overrides it).
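For example, a Cargo.toml along these lines produces a binary called hifi-liquidator, so that is what CMD has to reference (the version and edition values here are just illustrative):

[package]
name = "hifi-liquidator"  # the binary inherits this name unless [[bin]] says otherwise
version = "0.1.0"
edition = "2021"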
I am a little bit confused about how containers run. I am developing on a Mac, and when I copy my compiled binary into a Debian-based Docker image, I get an error that the file cannot be executed. I googled it, and it has something to do with different CPU architectures: I needed to cross-compile. That makes sense.
This, however, works:
FROM rust:1.65 AS builder
WORKDIR app
COPY . .
RUN cargo build --release
FROM debian:buster-slim
COPY --from=builder ./app/target/release/hello ./app/myapp
CMD ["./app/myapp"]
I can build a binary without knowing in advance which architecture I am compiling for, right? This is because I just do a cargo build in a builder based on rust:1.65. I am curious how it knows it will be run on Debian and on the correct CPU.
How does FROM rust:1.65 compile for the correct architecture? Or is it just all the same default architecture in a Dockerfile?
Which operating system is (likely) a more significant variable than which processor architecture.
The Docker core doesn't run natively on macOS; Docker Desktop runs a hidden Linux virtual machine. If you compile the binary on the host, you get a macOS binary, and trying to run it in a Linux container then fails. If you do the compilation in a container too, it's all Linux.
More generally, there are also lurking problems around shared libraries, support files, permissions, ... and unless you're confident in what you're doing I would not try to build binaries on the host and copy them into an image or container. Install them in the image, either compiling them yourself or using the base image distribution's package manager.
You can compile for a given architecture.
Run the following command to see all available targets (see the docs):
docker run --rm -ti rust:1.65 rustc --print target-list
and in your .cargo/config.toml you set up the build target (see the docs):
[build]
target = ["x86_64-unknown-linux-gnu", "i686-unknown-linux-gnu"]
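Alternatively, for a one-off build you can pass the target directly on the command line instead of putting it in the config:

$ cargo build --release --target x86_64-unknown-linux-gnu

Inside a Dockerfile, though, the architecture is driven by the platform of the base image, which you can pin with docker build --platform (assuming your Docker has BuildKit enabled).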
I am a complete noob to Docker, so I apologize if this is a simple question, but I seem to have a significant installation/configuration error in Docker. I am working on a fresh install of Ubuntu 20.04 and Docker version 20.10.12, build e91ed57.
I am literally trying to follow the basic Docker tutorial for beginners from their website and the second command is not working.
The tutorial is no big deal, I can switch to something more up to date, but it seems that key functionality of Docker is not working correctly: symlinks.
This command docker build -t getting-started . results in this error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /home/<user>/Downloads/WebDev/Docker/tutorial/getting-started-master/app/Dockerfile: no such file or directory
I double & triple checked everything. I literally tried every solution found here:
Docker: unable to prepare context: unable to evaluate symlinks in Dockerfile path: GetFileAttributesEx
Docker build gives "unable to prepare context: context must be a directory: /Users/tempUser/git/docker/Dockerfile"
Nothing worked. I could only execute the command using an explicit path rather than ".", which is going to get annoying pretty quickly when developing real projects.
docker build -t getting-started ~/Downloads/WebDev/Docker/tutorial/getting-started-master/
All documentation states docker build -t getting-started . is the correct command, so I am worried about continuing with a "broken" docker installation.
I ran the Docker check-config.sh script and it shows all is well except CONFIG_RT_GROUP_SCHED: missing, which I thought was inconsequential for the moment since the hello-world image worked as expected.
Hhhmmm?
What directory are you in when running these commands? From your error it seems you are in the /app directory, whilst you should probably be 1 directory up.
Or, alternatively, your Dockerfile is one directory below where it was intended to be. I draw this conclusion by comparing the two paths you mention:
~/Downloads/WebDev/Docker/tutorial/getting-started-master/app/Dockerfile
vs
~/Downloads/WebDev/Docker/tutorial/getting-started-master/
If the second one works, then the Dockerfile is under getting-started-master and not under app/.
FYI: a Docker build has a context, and unless you explicitly specify otherwise, the context is the path you pass to docker build and the Dockerfile is expected at the top of that context.
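For example, using the paths from the question, either of these should work (the second form names the Dockerfile and the context explicitly with -f):

$ cd ~/Downloads/WebDev/Docker/tutorial/getting-started-master && docker build -t getting-started .
$ docker build -t getting-started -f ~/Downloads/WebDev/Docker/tutorial/getting-started-master/Dockerfile ~/Downloads/WebDev/Docker/tutorial/getting-started-master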
I'm trying the NodeMCU Docker build in Ubuntu 16.04.4 LTS for the first time.
I have read the tagged articles here for Docker and NodeMCU, but don't see this particular error.
"docker run hello-world" has no problems.
I have tried the NodeMCU build command in both forms:
$ docker run --rm -ti -v `pwd`:/opt/nodemcu-firmware marcelstoer/nodemcu-build
and the explicit path variation:
$ docker run --rm -it -v /home/tim/nodemcu-firmware:/opt/nodemcu-firmware marcelstoer/nodemcu-build
In both cases, I get this error:
standard_init_linux.go:187: exec user process caused "exec format error"
I have searched on this error, and most solutions are related to a missing shebang.
However, I'm not sure what script would need the shebang, or why it would be not working in my case but correct for others.
Has anyone else run across this error?
Speaking without deep technical details, this error means that the kernel cannot recognize the format of the executable file and therefore cannot run it. In your case the error concerns the executable that is started when the container is launched. According to the Cmd entry in the output of docker inspect marcelstoer/nodemcu-build, that file is /bin/sh, which is an ELF executable.
When Linux cannot execute an ELF binary and returns such an error (about the file format), it is usually related to the system architecture. More specifically, the image marcelstoer/nodemcu-build contains ELF64 executables (i.e. for the amd64 architecture), and your system does not support them (is it i386 or even some flavor of ARM?). Running docker run hello-world works for you because the hello-world image exists for all architectures supported by Docker.
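To confirm the mismatch, you could compare your machine's architecture with the image's, for example:

$ uname -m
$ docker inspect --format '{{.Architecture}}' marcelstoer/nodemcu-build

If the first command prints something other than x86_64 while the image reports amd64, that is exactly the situation described above.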
According to the Dockerfile of the marcelstoer/nodemcu-build image, it is built from ubuntu, which exists for different architectures, so you may try building the marcelstoer/nodemcu-build image on your own system rather than pulling it from Docker Hub.
P.S.: regarding the solution you linked in your question: it is not about your case (an ELF binary) but about a script. For a script, the executable format is recognized by the shebang (#!) at the very beginning of the file, so the script must start with #!, not with a newline. That's why that author got the same error: the kernel could not detect that the file was a script and failed to start it. Different (but similar) reason, same error.
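As a tiny illustration of the script case (a hypothetical script, not one from the image in question):

#!/bin/sh
# The '#!' must be the very first bytes of the file; a blank line or stray
# character before it makes the kernel fail with "exec format error".
echo "recognized and executed as a shell script"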
According to the documentation at bazelbuild/rules_docker, it should be possible to work with these container images on OSX, and it also claims that it's possible to do so without docker.
These rules do not require / use Docker for pulling, building, or pushing images. This means:
They can be used to develop Docker containers on Windows / OSX without boot2docker or docker-machine installed.
They do not require root access on your workstation.
How do I do that? Here's a simple rule:
go_image(
    name = "helloworld_image",
    importpath = "github.com/nictuku/helloworld",
    library = ":go_default_library",
    visibility = ["//visibility:public"],
)
I can build the image with bazel build :helloworld_image. It produces a tarball in bazel-bin, but running it fails:
INFO: Running command line: bazel-bin/helloworld_image
Loaded image ID: sha256:08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852
Tagging 08d312b529d30431c68741fd3a31468a02533f27a8c2c29eedc969dae5a39852 as bazel:helloworld_image
standard_init_linux.go:185: exec user process caused "exec format error"
ERROR: Non-zero return code '1' from command: Process exited with status 1.
It's trying to run the Linux binary, but this is OSX, which is silly.
I also tried doing a "docker load" on the .tar content but it doesn't seem to like that format.
$ docker load -i bazel-bin/helloworld_image-layer.tar
open /var/lib/docker/tmp/docker-import-330829602/app/json: no such file or directory
Help? Thanks!
You are building for your host platform by default, so you need to build for the container platform if you want to do that.
Since you are using a Go binary, you can cross-compile by specifying --cpu=k8 on the command line. Ideally we would be able to just say that the Docker image needs a Linux binary (so there would be no need for the --cpu command-line flag), but this is still a work in progress in Bazel.
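As a sketch, using the flag described above:

$ bazel build --cpu=k8 :helloworld_image
$ bazel run --cpu=k8 :helloworld_image

With --cpu=k8 the binary baked into the image is built for Linux x86-64 rather than for the OSX host, so it matches the Linux VM that Docker actually runs on.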
I am new to the container world and exploring options to run my application in a container. Here are the things that I am seeing:
When I include compiling and building the C/C++ binary as part of the Docker image itself, it works fine without any problems. The container starts and everything works fine.
If I try to run an already compiled and existing binary using CMD ["./helloworld"] in a container, it throws me this error:
standard_init_linux.go:185: exec user process caused "exec format error"
Any ideas how to get out of this problem? This seems like a basic problem that someone would have solved already.
Here is my Dockerfile:
FROM ubuntu
COPY . /Users/test//Documents/CPP-Projects/HelloWorld-Static
WORKDIR /Users/test/Documents/CPP-Projects/HelloWorld-Static
CMD ["./build/exe/hellostatic/hellostatic"]
Here is my executable:
gobjdump -a build/exe/hellostatic/hellostatic
build/exe/hellostatic/hellostatic: file format mach-o-x86-64
build/exe/hellostatic/hellostatic
Here is the error:
docker run test
standard_init_linux.go:185: exec user process caused "exec format error"
The problem is that you are trying to run an incompatible binary format in your container...
You are running an Ubuntu-based container (the FROM ubuntu line), but you are trying to run a Mach-O binary in it. By default, Linux cannot run Mach-O binaries.
Build your binary for the target platform (Ubuntu/Linux) and it will work fine. It appears that you are running Mac OS X, so you could install an Ubuntu VM to compile your binary and then transfer it to the container.
When you build it inside the container, it works because it is built for the right platform.
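For completeness, a minimal sketch of the in-image build as a multi-stage Dockerfile (the file names and g++ invocation are illustrative, not taken from the question's build setup):

FROM ubuntu AS builder
RUN apt-get update && apt-get install -y --no-install-recommends g++
WORKDIR /src
COPY hello.cpp .
# compiled inside the container, so the output is a Linux ELF binary
RUN g++ -o hello hello.cpp

FROM ubuntu
COPY --from=builder /src/hello /hello
CMD ["/hello"]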