How to access a display device from Docker

I have a framebuffer sample program (square.c) that draws a square on the screen. It runs successfully on my virtual machine. Now I have to run this C application inside an Ubuntu container, but when I run it from the container it fails with the message: Error: cannot open framebuffer device: No such file or directory.
Reason for the error: it cannot open /dev/fb0 (fb0 is not present in the container). I would like to know whether there is any way to access the display device from Docker.
I have successfully compiled and executed square.c (the framebuffer code) in the virtual machine. Now I am trying to run the same code inside an Ubuntu container, which itself runs inside my virtual machine.
Dockerfile
# Download base image ubuntu
FROM ubuntu:14.04
MAINTAINER xxaxaxax
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get -y install gcc
RUN mkdir /home/test
ADD hello /home/test
# square is the executable built from square.c
ADD square /home/test

Yes, you CAN use the host's hardware in Docker.
Use --privileged to gain access to all devices (everything under /dev/),
or use the --device=/dev/fb0 option when running the container. Note that if you plug a device into the host after the container has started, it will not be visible inside the already-running container.
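For example, assuming the image built from the Dockerfile above is tagged fb-test (the tag is just an illustration), the run commands could look like this:
# Pass only the framebuffer device through (narrower than --privileged):
docker run -it --device=/dev/fb0 fb-test /home/test/square
# Or grant access to all host devices:
docker run -it --privileged fb-test /home/test/square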

Related

/etc/udev directory not present in Docker container

I am trying to use STM32CubeProgrammer within Ubuntu 20.04 inside a Docker container. As a step to prepare the USB serial link for flashing, as described for STM32CubeProgrammer, I need to do:
cd <your STM32CubeProgrammer install directory>/Drivers/rules
sudo cp *.* /etc/udev/rules.d/
But the /etc/udev/ directory is not available.
Is it safe to create this directory in order to access USB devices, and what files should be part of it?
I had the same problem when creating a kinect4azure Docker image. I fixed it by installing the udev package.
sudo apt -y install udev
After that, lsusb also gave more information about the connected USB devices.
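For reference, a minimal sketch of a Dockerfile that installs udev and copies the STM32CubeProgrammer rules into place; the COPY source path assumes the Drivers/rules directory has been copied into the build context, so adjust it to your layout:
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y udev usbutils
# Rules taken from the STM32CubeProgrammer install directory (path is an assumption)
COPY Drivers/rules/ /etc/udev/rules.d/
Note that the container still needs access to the USB device at run time, e.g. via --device or --privileged.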

How can I keep ALL changes made to a docker container?

I'm using docker as a "light" virtual machine. For example, when I need to do some experiments on Ubuntu and don't want to mess up the host OS, I simply run docker run -it ubuntu bash.
Generally I'm very happy with it, except that I cannot keep the changes after I exit, which means I need to rerun
apt update && apt install vim git python python3 <other_tools> && pip install flask coverage <other_libraries> && .....
every single time I start the docker container as a VM, which is very inefficient.
I've noticed this question, but it only enables me to keep some specific files from being erased, whereas I want the whole system (including but not limited to all configuration, cache and tools installed) to be retained between the life cycles of the docker container.
You must use something like
docker commit mycontainer_id myuser/myimage:12
see the doc: docker commit
and then launch your saved image, myuser/myimage:12.
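A full round trip might look like this (the container ID and tag are placeholders, as in the command above):
# Find the container you were working in, save it as an image, then reuse that image next time:
docker ps -a
docker commit mycontainer_id myuser/myimage:12
docker run -it myuser/myimage:12 bash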
But you should definitely use a Dockerfile
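If you go the Dockerfile route, a minimal sketch that bakes the same kind of tooling into a reusable image (the package list is trimmed from your command and the tag my-ubuntu-dev is just an example):
FROM ubuntu:20.04
RUN apt-get update && \
    apt-get install -y vim git python3 python3-pip && \
    pip3 install flask coverage
Build it once with docker build -t my-ubuntu-dev . and every docker run -it my-ubuntu-dev bash then starts with the tools already installed.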

Creation of Dockerfile for Android GitLab CI

I'm creating my own Dockerfile for a runner that will work in GitLab CI as the Android project runner. The problem is that I need to connect a physical device to the machine on which I'm deploying that runner. As is usual on a Linux machine, I tried to add 51-android.rules to /etc/udev/rules.d as in this tutorial: Udev Setup
While executing the docker build . command, I got the error:
/bin/sh: 1: udevadm: not found
My questions are:
1) Is it possible to connect a physical Android device to the OS running in Docker?
2) If so, where is my mistake?
The problematic Dockerfile part:
FROM ubuntu:latest
#Ubuntu setup
RUN apt-get update
RUN apt-get install -y wget
...
#Setup Android Udev Rules
RUN wget https://raw.githubusercontent.com/M0Rf30/android-udev-rules/master/51-android.rules
RUN mv -y `pwd`/51-android.rules /etc/udev/rules.d
RUN chmod a+r /etc/udev/rules.d/51-android.rules
RUN udevadm control --reload-rules
RUN service udev restart
RUN usermod -a -G plugdev `whoami`
RUN adb kill-server
RUN adb devices
#Cleaning
RUN apt-get clean
The philosophy of Docker is to have one process running per container. There is usually no init system, so you cannot use services the way you are used to.
I don't know if it's possible to achieve exactly what you are trying to do, but I think you want the udev rules on the host and then add the device when you start the container: https://docs.docker.com/engine/reference/commandline/run/#add-host-device-to-container-device
Also you may want to read https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#/apt-get
Every RUN creates a new layer, only adding information to the image.
Having said that, you probably want to have adb devices as the ENTRYPOINT or CMD of your container.
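Concretely, that usually means installing the udev rules on the host, dropping the udevadm/service/adb steps from the Dockerfile, and passing the USB device (or the whole USB bus) through when the runner container starts. A sketch, with the image name and device path as assumptions:
# Pass the whole USB bus through so adb inside the container can see the phone:
docker run -it --privileged -v /dev/bus/usb:/dev/bus/usb android-runner adb devices
# Or pass a single device identified with lsusb on the host:
docker run -it --device=/dev/bus/usb/001/004 android-runner adb devices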

Compatibility of Dockerfile RUN Commands Cross-OS (apt-get)

A beginner's question; how does Docker handle underlying operating system variations when using the RUN command?
Let's take, for example, a very simple Official Docker Hub Dockerfile, for JRE 1.8. When it comes to installing the packages for java, the Dockerfile uses apt-get:
RUN apt-get update && apt-get install -y --no-install-recommends ...
To the untrained eye, this appears to be a platform-specific instruction that will only work on Debian-based operating systems (or at least ones with APT installed).
How exactly would this work on a CentOS installation, for example, where the package manager would be yum? Or god forbid, something like Solaris.
If this pattern of using RUN to fork arbitrary shell commands is prevalent in docker, how does one avoid inter-platform, or even inter-version, dependencies?
i.e. what if the Dockerfile writer has a newer version of (say) grep than I do, and they've used some new CLI flag that isn't available on earlier versions?
The only two outcomes from this can be: (1) the RUN command exits with a non-zero exit code, or (2) the Dockerfile changes the installed version of grep before running the command.
The common point shared by all Dockerfiles is the FROM statement. It is the first line in the file and indicates the parent Docker image you're building on. A typical base image could be one with Ubuntu (i.e.: https://hub.docker.com/_/ubuntu/). The snippet you share in your question would fit well in an Ubuntu image (with apt-get) but not in a CentOS image.
In summary, you're installing docker in your CentOS system, but you're building a Docker image with Ubuntu in it.
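To see this concretely: the RUN instructions execute inside the filesystem of the image named in FROM, not on the host, so a build like the following works the same way whether the Docker daemon runs on CentOS, Ubuntu or anything else (the package chosen is just an illustration):
FROM ubuntu:16.04
# apt-get is available because this runs inside the Ubuntu base image's userland
RUN apt-get update && apt-get install -y --no-install-recommends curl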
As I commented on your question, you can use the FROM statement to specify which underlying OS you want. For example:
FROM docker.io/centos:latest
RUN yum update -y
RUN yum install -y java
...
Now you build/create the image with:
docker build -t <image-name> .
The idea is that you'll use the OS you are familiar with (for example, CentOS) and build an image of it. Now you can take this image and run it on top of Ubuntu/CentOS/RHEL/whatever... with
docker run -it <image-name> bash
(You just need to install Docker on the desired OS.)

Can't figure out how I have to build my Docker architecture

So currently I'm building an API in PHP as different (micro)services which run on nginx.
I've followed all the Docker fundamentals videos and gone through the docs, but I still can't figure out how to implement it.
Do I need a server where I push my code to and deploy on the containers (with CI or so)?
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
I think you've mixed up a bit what a container is and what an image is. An image is something you build on disk to run; a container is an image running on the computer and serving/doing things.
Do I need a server where I push my code to and deploy on the containers
No, you don't. You build an image from some base image and a Dockerfile. So make a working directory, put the Dockerfile there, copy your PHP sources there as PHPAPI, and in the Dockerfile add commands to copy the PHP code into the image. Along the lines of:
FROM ubuntu:15.04
MAINTAINER guidsen
RUN apt-get update && \
    apt-get install -y nginx && \
    apt-get install -y php && \
    apt-get autoremove; apt-get clean; apt-get autoclean
RUN mkdir -p /root/PHPAPI
COPY PHPAPI /root/PHPAPI
WORKDIR /root/PHPAPI
# Run the API entry point through the PHP CLI
CMD php /root/PHPAPI/main.php
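With those files in place, the image could be built and started along these lines (the tag phpapi is just an example):
docker build -t phpapi .
docker run -it phpapi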
Does the container volume get pushed to the hub as well? So my code will be in the container itself?
That depends on what you use to run containers from the image. AWS, I believe, requires the image to be pulled from Docker Hub, so you have to push it there first. Some other cloud providers or private clouds require you to push the image directly to them. And yes, your code will be in the image and will run in the container.
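In that case the usual flow is to tag the locally built image with your registry namespace and push it, so the cloud host can pull and run it (names are placeholders):
docker tag phpapi myuser/phpapi:1.0
docker push myuser/phpapi:1.0
# on the cloud host:
docker pull myuser/phpapi:1.0
docker run -d myuser/phpapi:1.0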
