I am trying to use STM32CubeProgrammer on Ubuntu 20.04 inside a Docker container. As a step to prepare the USB serial link for flashing, the STM32CubeProgrammer documentation says to do:
cd <your STM32CubeProgrammer install directory>/Drivers/rules
sudo cp *.* /etc/udev/rules.d/
But the /etc/udev/ directory does not exist in the container.
Is it safe to create this directory to access USB devices, and what files should be part of it?
I had the same problem when creating a kinect4azure Docker image. I fixed it by installing the udev package.
sudo apt -y install udev
After that, lsusb also gave more info about the connected USB devices.
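For the STM32CubeProgrammer case, a minimal Dockerfile sketch could look like this (the base image and the location of the rules directory relative to the build context are assumptions):
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y udev usbutils
# /etc/udev/rules.d exists once the udev package is installed
COPY Drivers/rules/ /etc/udev/rules.d/
Note that the container still needs access to the USB device at run time, for example by starting it with --device or --privileged.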
When using docker cp to copy a file from my local machine, /tmp/data.txt, to the container, it fails with the error:
lstat /tmp/data.txt: no such file or directory
The file exists and I can run stat /tmp/data.txt and cat /tmp/data.txt without any issues.
Even if I create another file in /tmp, like data2.txt, I get the exact same error.
But if I create a file outside /tmp, like in ~/documents, and copy it with docker cp, it works fine.
I checked out the documentation for docker cp and it mentions:
It is not possible to copy certain system files such as resources under /proc, /sys, /dev, tmpfs, and mounts created by the user in the container
but doesn't mention /tmp as such a directory.
I'm running on Debian 10, but a friend of mine who is on Ubuntu 20.04 can do it just fine.
We're both using the same version of docker (19.03.11).
What could be the cause?
I figured out the solution.
I had installed Docker as a snap. I uninstalled it (sudo snap remove docker) and reinstalled it following the official Docker guidelines for installing on Debian.
After this, it worked just fine.
I think it might have been due to snap packages having limited access to system resources, but I don't know for sure.
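For reference, a rough sketch of the switch (the repository setup is described in the Docker documentation for Debian and is assumed to have been done first):
sudo snap remove docker
# add Docker's apt repository as described at https://docs.docker.com/engine/install/debian/
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io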
I'm building a Docker image which also involves a small yum install. I'm currently in a location where firewalls and access controls make docker pull, yum install, etc. extremely slow.
In my case, it's a JRE 8 Docker image using this official image script.
My problem:
Building the image requires just 2 libraries (gzip + tar), which combined are only 132 kB + 865 kB. But yum inside the docker build script will first download the repo information, which is over 80 MB. While 80 MB is generally small, here it took over 1 hour just to download. If my colleagues need to build, this would be a sheer waste of productive time, not to mention frustration.
Workarounds I'm aware of:
Since this image may not need the full power of yum, I can simply grab the *.rpm files, COPY them in the Dockerfile and use rpm -i instead of yum
I can save the built image and distribute it locally
I could also find the closest mirror for Docker Hub, but not for yum
My bet:
I have a copy of the Linux CD with about the same version
I can add commands in the Dockerfile to rename the *.repo files to *.repo.old
Add a cdrom.repo in /etc/yum.repos.d/ inside the container
Use yum to load the most common libraries from the CD-ROM instead of the internet
My problem:
I'm not able to work out how to create a mount point for a CD-ROM repo from inside the container build without using httpd.
In plain linux I do this:
mkdir /cdrom
mount /dev/cdrom /cdrom
cat > /etc/yum.repos.d/cdrom.repo <<EOF
[cdrom]
name=CDROM Repo
baseurl=file:///cdrom
enabled=1
gpgcheck=1
gpgkey=file:///cdrom/RPM-GPG-KEY-oracle
EOF
Any help appreciated.
Host devices cannot be mounted during a docker build. I think you will have to write a wrapper script around the docker build command to do the following:
First, mount the CD-ROM to a directory within the Docker build context (that would be a sub-directory of wherever your Dockerfile lives).
Call the docker build command using the contents of this directory.
Unmount the CD-ROM.
so,
cd docker_build_dir
mkdir cdrom
mount /dev/cdrom cdrom
docker build "$#" .
umount cdrom
In the Dockerfile, you would first copy the mounted contents into the image (RUN cannot see the build context directly) and then install from there:
COPY cdrom/ /cdrom/
RUN rpm -ivh /cdrom/rpms_you_need
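If you still want to go through yum against the CD contents, a hedged sketch (paths and package names assumed) is to copy a repo definition into the image as well and install only from that repo:
COPY cdrom/ /media/cdrom/
COPY cdrom.repo /etc/yum.repos.d/cdrom.repo
RUN yum --disablerepo='*' --enablerepo=cdrom -y install gzip tar
where cdrom.repo uses baseurl=file:///media/cdrom, as in the plain-Linux example from the question.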
I have a frame buffer sample program (square.c) that draws a square on the screen. It was executed successfully on my virtual machine. Now I have to run this C application inside an Ubuntu container, but when I run the application from the container it shows the message: Error: cannot open framebuffer device: No such file or directory.
Reason for the error: it cannot open /dev/fb0 (fb0 is not present). I would like to know whether there is any method to access the display device from Docker.
I have successfully compiled and executed square.c (the framebuffer code) in the virtual machine. Now I tried to run the same code inside the Ubuntu container, which itself runs inside my virtual machine.
Dockerfile:
# Download base image ubuntu
FROM ubuntu:14.04
MAINTAINER xxaxaxax
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get -y install gcc
RUN mkdir /home/test
ADD hello /home/test
# square is the executable built from square.c
ADD square /home/test
Yes, you CAN use the host's hardware in Docker.
Use --privileged to gain access to all devices (like those in /dev/),
or use the --device=/dev/fb0 option when running the container. Note that if you plug a device into the machine later, it will not be seen by an already-running container.
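For the framebuffer example above, a minimal run command could look like this (the image name and the path of the compiled binary are assumptions):
docker run --rm --device=/dev/fb0 my-ubuntu-fb /home/test/square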
I'm trying to run a command from a privileged Docker container, specifically the command nmcli.
When I try to add nmcli as a volume, it complains that it's missing other files.
When I add the entire /usr/bin, it complains about Python being unable to add site-packages.
Is there a way I can run a command on the host machine from a child container?
The majority of tools that are installed with a package manager like yum or apt will make use of shared libraries to reduce the overall size of the install.
The container would either need to be the same distro and have the same package dependencies installed, or mount all the dependencies of the binary into the container.
Something like this should work:
docker run \
-v /lib:/lib \
-v /usr/lib/:/usr/lib \
-v /usr/bin/nmcli:/usr/bin/nmcli \
busybox \
/usr/bin/nmcli
But you might need to be more specific about the library mounts if you want the container to use its own shared libraries as well.
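To find out exactly which libraries the binary needs, ldd on the host is useful; the paths below are only illustrative:
ldd /usr/bin/nmcli
#   libnm.so.0 => /usr/lib64/libnm.so.0 (...)
#   libc.so.6 => /lib64/libc.so.6 (...)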
Some packages can provide a "static binary" that includes all of its dependencies in the executable. I doubt this exists for nmcli, as it's a RHEL-specific tool to manage a RHEL box, whose policy is to use yum to manage shared libraries.
I'm creating my own Dockerfile for a runner that will work in GitLab CI as an Android project runner. The problem is that I need to connect a physical device to the machine on which I'm going to deploy that runner. As usual with a Linux machine, I was trying to add 51-android.rules into /etc/udev/rules.d as in this tutorial: Udev Setup
During execution of the docker build . command, I got this error:
/bin/sh: 1: udevadm: not found
My questions are:
1) Is it possible to connect a physical Android device to a Docker-running OS?
2) If yes, where is my mistake?
The problematic Dockerfile part:
FROM ubuntu:latest
#Ubuntu setup
RUN apt-get update
RUN apt-get install -y wget
...
#Setup Android Udev Rules
RUN wget https://raw.githubusercontent.com/M0Rf30/android-udev-rules/master/51-android.rules
RUN mv -y `pwd`/51-android.rules /etc/udev/rules.d
RUN chmod a+r /etc/udev/rules.d/51-android.rules
RUN udevadm control --reload-rules
RUN service udev restart
RUN usermod -a -G plugdev `whoami`
RUN adb kill-server
RUN adb devices
#Cleaning
RUN apt-get clean
The philosophy of Docker is to have one process running per container. There usually is no init system, so you cannot use services as you are used to.
I don't know if it's possible to achieve what you are trying to do, but I think you want the udev rules on the host and to add the device when you start the container: https://docs.docker.com/engine/reference/commandline/run/#add-host-device-to-container-device
Also you may want to read https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#/apt-get
Every RUN creates a new layer, only adding information to the image.
Having said that, you probably want to have adb devices as the ENTRYPOINT or CMD of your container.
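A hedged sketch of what that could look like at run time (the image name is an assumption; the udev rules live on the host, and the USB bus is passed through so adb inside the container can see the phone):
docker run --rm -it --privileged -v /dev/bus/usb:/dev/bus/usb my-android-runner adb devices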