I am trying to leverage the address sanitizer from GCC 8.3.0 in a 32-bit Debian Buster Docker container, but keep hitting my head against a wall when launching the executable:
==17738==Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.
==17738==ASan shadow was supposed to be located in the [0x1ffff000-0x3fffffff] range.
==17738==Process memory map follows:
0x29414000-0x295f3000
...
0x5762d000-0x57a9c000
0xfff26000-0xfff47000 [stack]
==17738==End of process memory map.
As far as I can see, there does not seem to be an overlap, and the kernel version 5.11.0-41-generic does not seem to be affected by the issues other people have reported with similar symptoms.
The same binary works flawlessly on the 64-bit Ubuntu host system, but the 32-bit Debian Buster Docker container (and also a 64-bit Debian Docker image) always gives the above error.
The binary is compiled with -fsanitize=address and linked with -static-libasan inside the Debian Docker container.
Other people have indicated that launching Docker with --privileged --cap-add=SYS_PTRACE should help, but it does not seem to alleviate the problem.
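For reference, a minimal sketch of the build and launch steps described above (the source file, binary, and image names are illustrative, not taken from an actual project):

# inside the 32-bit Debian Buster container, GCC 8.3.0; main.c is a placeholder name
gcc -g -fsanitize=address -static-libasan -o app main.c
./app    # aborts with "Shadow memory range interleaves with an existing memory mapping"

# container launched with the commonly suggested flags, which did not help here
docker run --rm -it --privileged --cap-add=SYS_PTRACE i386/debian:buster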
Related
I am using a Docker container to run a compiled C++ executable. The Docker container is built from the latest Debian Linux distribution, while the host is a macOS system (macOS 12.6, on a MacBook Pro 16, 2019).
Within the C++ code I call the function __rdtscp(unsigned int *__A), including x86intrin.h, for monitoring purposes. Compiling and executing the application on the macOS host works correctly, but if I try to run it within the Docker container I get an Illegal instruction error. (The executable is compiled on another physical Linux host; I need it that way. In any case, I can run the same executable on different Linux machines, and also in a container generated from the same Docker image when it is executed on another host.)
Looking deeper into the issue, I found that __rdtscp(unsigned int *__A) must be supported by the CPU, and it should be supported by virtually all CPUs made after 2010/2011. Indeed, the RDTSCP flag is reported among the host CPU's features. The problem is that I cannot find it among the container CPU's features.
Note that __rdtsc() works correctly, but that instruction is not serializing, so I want to use __rdtscp(unsigned int *__A).
Following is the macOS host output of sysctl -a | grep machdep.cpu:
And this is the output of lscpu inside the Debian Docker container:
Could you help me figure out the reason for this difference? Is there a way to force Docker to expose the same CPU features as the host?
Thank you!
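One quick way to see where the flag disappears (a diagnostic sketch; the image tag is just an example) is to compare the CPU flags the Linux kernel reports on one of the Linux machines where the binary works versus inside a container started on the Mac:

# on a Linux machine where the executable runs, rdtscp should appear among the flags
grep -o rdtscp /proc/cpuinfo | sort -u

# inside a container started by Docker Desktop on the Mac
docker run --rm debian:bullseye sh -c 'grep -o rdtscp /proc/cpuinfo | sort -u'

If the second command prints nothing, the flag is not being advertised to the container. Keep in mind that Docker Desktop on macOS runs containers inside a lightweight Linux VM, so the flags you see there belong to the VM's virtual CPU, not to the physical one; a hypervisor that does not pass RDTSCP through would produce exactly this symptom.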
I'm noticing exec user process caused "exec format error" when trying to run a Docker image on a Raspberry Pi 4.
First of all, I'm bewildered that Docker pulls an image that won't run on the platform to begin with. Nonetheless, I am keen to make it work, but I don't know how.
Here's the project: https://github.com/kaihendry/sla How can I build ARM compatible images?
The FROM golang line will pull the appropriate architecture: it is a multi-arch Docker image with ARM v6 (older Pi / Pi Zero running Raspbian), ARM v7 (newer Pi running Raspbian), and arm64 (newer Pi running Ubuntu) variants: https://hub.docker.com/_/golang?tab=tags
Your problem with exec format error (i.e. the binary is in the wrong format) appears to come down to this line: https://github.com/kaihendry/sla/blob/a22d983340f3df794696e5c8e31cf3b89f7edd89/Dockerfile#L14. The architecture there is wrong for a Pi: it should be GOARCH=arm (32-bit, non-Ubuntu) or GOARCH=arm64 (Ubuntu). Additionally, for 32-bit ARM (v6 and v7) you also need to specify GOARM=6 or GOARM=7, per https://github.com/golang/go/wiki/GoArm (see the sketch below).
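A sketch of what the corrected build step might look like (the output name and package path are illustrative; adapt them to the actual Dockerfile):

# 32-bit Raspbian on a newer Pi (ARMv7)
GOOS=linux GOARCH=arm GOARM=7 go build -o sla .

# 64-bit Ubuntu on a Pi 3/4
GOOS=linux GOARCH=arm64 go build -o sla .

Cross-compiling pure Go this way needs no extra toolchain as long as cgo is not involved (CGO_ENABLED=0 can be added to be explicit).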
I have tested your code with a swap to GOARCH=arm64 (no GOARM required) and had it build and run on my Pi 3B+ running Ubuntu.
Noting for future reference: I suspect my answer may change if/when Raspbian switches to 64-bit.
I'm trying to create a 32-bit Docker image with Ubuntu 14.04 and, any time I run uname, it reports x86_64 instead of i386. Could anyone tell me why this is happening?
docker run talex5/lucid32 uname -m
The weird thing is when I look up the architecture type a different way, it says 32-bit:
docker run i386/ubuntu:14.04 file /sbin/init
/sbin/init: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.24, BuildID[sha1]=c394677bccc720a3bb4f4c42a48e008ff33e39b1, stripped
This happens consistently whenever I download different docker images that say they are 32-bit and even when I create my own docker image using debootstrap.
Thanks!
uname reports the version and OS details of the kernel, but Docker containers always use the host system's kernel, and if it's a 64-bit kernel it will report x86_64.
You should see the same results with a mixed 32-/64-bit OS install (in Ubuntu land, installing packages like libc6:i386), with a 32-bit filesystem tree in a chroot, and in a Docker container: they are all the same case of running 32-bit binaries on a system with a 64-bit kernel.
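For example, a quick way to see both views side by side (linux32 comes from util-linux and may not be present in every image):

docker run --rm i386/ubuntu:14.04 uname -m          # x86_64 -- the host's 64-bit kernel
docker run --rm i386/ubuntu:14.04 file /sbin/init   # ELF 32-bit ... -- the image's userland
docker run --rm i386/ubuntu:14.04 linux32 uname -m  # i686 -- same container, with a 32-bit personality set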
Creating a 32-bit image is possible these days with just a simple script; you could use https://github.com/docker-32bit/ubuntu.
I know that official support is only for 64-bit, but I can see that a few people have tried to custom-build the Docker binaries for 32-bit and succeeded (a 32-bit version of Docker may be a little unstable, but that is fine for my use case).
However, most of those blog posts are old and no longer work. Has anyone done this recently?
I'm trying to build Docker on two i686 machines running Debian Wheezy and Stretch (both with kernel > 3.10, the minimum required). Each has at least 2 GB of RAM and sufficient disk space.
There's 32-bit Docker on 32-bit ARM machines; I think I've seen it done on RPis and ODROIDs at least.
On 32-bit x86... I doubt you'll find much. It's not that it's impossible (if there's 32-bit ARM Docker, there can be 32-bit x86 Docker), but nobody cares enough. You can run 32-bit Docker images (in fact I've done it recently) on a 64-bit system, but Docker itself...
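As an aside, running 32-bit x86 images on a 64-bit daemon looks roughly like this (a sketch; the image tag is just an example, and the --format fields may differ slightly between Docker versions):

docker version --format '{{.Server.Os}}/{{.Server.Arch}}'       # e.g. linux/amd64 -- the daemon itself is 64-bit
docker run --rm i386/debian:buster dpkg --print-architecture    # i386 -- the image's userland is 32-bit
docker run --rm i386/debian:buster uname -m                     # x86_64 -- still the host's 64-bit kernel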
When I am in my container, I run
lspci | grep -i nvidia
and nothing shows.
When I run ./deviceQuery from the samples NVIDIA provides I get
no CUDA-capable device is detected
I know I have an NVIDIA driver on my Mac; I just can't figure out how to get my Docker container to recognize it.
On OS X, Docker runs containers inside a separate VirtualBox VM, which does not expose the host GPU.
You'll first need to make the graphics card available in the VirtualBox VM. I'm not sure how to do that, but this looks like it might help:
https://www.virtualbox.org/manual/ch04.html#guestadd-video
Once you've got it mounted within the VM, then you can also share it with the container.
I haven't tried this myself, but this person says he can run native X11 apps on a Mac using a beta Docker client called Kitematic along with socat, XQuartz, and QGIS, and he seems to imply that NVIDIA driver issues were thus avoided. This looks worth a try!