I'm creating my own Dockerfile for a runner that will work in GitLab CI as an Android project runner. The problem is that I need to connect a physical device to the machine on which I'm about to deploy that runner. As usual on a Linux machine, I tried to add 51-android.rules to /etc/udev/rules.d as in this tutorial: Udev Setup
During execution of the docker build . command, I got this error:
/bin/sh: 1: udevadm: not found
My questions are:
1) Is it possible to connect a physical Android device to an OS running inside Docker?
2) If the answer to 1) is yes, where is my mistake?
The problematic Dockerfile part:
FROM ubuntu:latest
#Ubuntu setup
RUN apt-get update
RUN apt-get install -y wget
...
#Setup Android Udev Rules
RUN wget https://raw.githubusercontent.com/M0Rf30/android-udev-rules/master/51-android.rules
RUN mv ./51-android.rules /etc/udev/rules.d/
RUN chmod a+r /etc/udev/rules.d/51-android.rules
RUN udevadm control --reload-rules
RUN service udev restart
RUN usermod -a -G plugdev `whoami`
RUN adb kill-server
RUN adb devices
#Cleaning
RUN apt-get clean
The philosophy of Docker is to have one process running per container. There is usually no init system, so you cannot use services the way you are used to.
I don't know whether it is possible to achieve exactly what you are trying to do, but I think you want the udev rules on the host, and then you add the device when starting the container: https://docs.docker.com/engine/reference/commandline/run/#add-host-device-to-container-device
You may also want to read https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#/apt-get
Every RUN creates a new layer that only adds information to the image; side effects such as a restarted service do not persist into later layers or the running container.
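For example, following those best practices, the apt-get steps from your Dockerfile would be collapsed into a single layer (a sketch covering only the packages visible in your snippet):
RUN apt-get update && apt-get install -y \
    wget \
 && rm -rf /var/lib/apt/lists/*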
Having said that, you probably want adb devices (or a script wrapping it) as the ENTRYPOINT or CMD of your container rather than a RUN step.
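Putting it together, a minimal sketch assuming the udev rules live on the host and the image is tagged android-runner (the tag is illustrative): pass the USB bus through at run time instead of configuring udev at build time:
docker run --device=/dev/bus/usb:/dev/bus/usb android-runner adb devices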
Related
I have a Dockerfile, which is meant to use script1, like so:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
Currently, I can:
Build this Dockerfile into an image
Run said image in a container
Open a CLI in said container, and run script1 manually (with bash ./script1.sh)
The script runs, and the container stays open.
However, I'd like to automatically run this script on container startup.
So I tried to change my Dockerfile to this:
# Pull from Debian
FROM debian
# Update apt and install dependencies
RUN apt update
RUN apt -y upgrade
RUN apt -y install wget curl
# Download script1.sh
RUN wget -O ./script1.sh https://example.com
# Make script1.sh executable
RUN chmod +x ./script1.sh
# Run script1.sh on startup
CMD bash ./script1.sh
However, when I do this, the container only stays open for a little bit and then exits.
I suspect it exits as soon as script1 is done...
I also tried ENTRYPOINT, without much success.
Why does my container stay open if I open a CLI and run the script manually, but doesn't stay open if I try to automatically run it at startup?
And how can I run the script automatically on container startup, and keep the container from exiting right away?
A container stays alive only as long as its main process (PID 1) keeps running: when you open a CLI, the interactive shell is that process, but with CMD the container exits the moment script1 finishes. An old Docker (v2) trick to prevent this premature exit consisted of running an "infinite" command in the container, such as:
CMD tail -f /dev/null
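Combining that trick with your script, a minimal sketch (assuming script1.sh finishes its work and returns): chain the script with a blocking command so the container's main process never exits:
CMD ["bash", "-c", "./script1.sh && tail -f /dev/null"]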
I am preparing a test automation setup which requires me to install NetworkManager so that the code's API (which uses python3-networkmanager) can be tested.
In the Dockerfile, I tried installing:
apt-get install dbus \
network-manager
I started receiving this error:
networkmanager.systems do not have hostname property.
I looked for solutions, but it appears any fix would require:
A privileged user (I cannot use a privileged user; project requirement)
A reboot after installing (this is Docker, hence no reboot)
This leaves mocking the Debian NetworkManager, in a way that can still communicate with python3-networkmanager, as my only option.
How can I mock it?
RUN apt-get update && apt-get install -y \
network-manager
worked for me.
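If the install still prompts or hangs during the build, a common addition (assuming a Debian/Ubuntu base image) is to disable apt's interactive frontend for the install step:
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y network-manager dbus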
I would like to contribute, as I had to spend some time getting this to work.
Inside the Dockerfile you have to add:
RUN apt-get update && apt-get install -y network-manager dbus
Also, I added a script to start the network manager:
#!/bin/bash
# Start the D-Bus system bus first; NetworkManager depends on it
service dbus start
# Then start NetworkManager itself
service NetworkManager start
Then in the Dockerfile you have to call this start script at the end (and make sure it is executable):
COPY start_script.sh /etc/init/
RUN chmod +x /etc/init/start_script.sh
ENTRYPOINT ["/etc/init/start_script.sh"]
Now you can build your container and run it like this:
docker run --net="host" -v /run/dbus/system_bus_socket:/run/dbus/system_bus_socket container
For me, it is enough to work with OrangePi and Docker without a privileged container.
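If your container also needs a long-running main process after the services come up, one common pattern (an assumption about the use case, not part of the original setup) is to end the start script with exec "$@" and pass the command via CMD:
#!/bin/bash
service dbus start
service NetworkManager start
# Hand PID 1 over to whatever command was passed to the container
exec "$@"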
I have frame buffer sample code (square.c) that draws a square on the screen. It executes successfully on my virtual machine. Now I have to run this C application inside an Ubuntu container, but when I run it from the container it shows the message: Error: cannot open framebuffer device: No such file or directory.
Reason for the error: it cannot open /dev/fb0 (fb0 is not present). I would like to know whether there is any method to access the display device from Docker.
I have successfully compiled and executed square.c (the framebuffer code) in the virtual machine. Then I tried to run the same code inside the Ubuntu container, which is itself running inside my virtual machine.
Dockerfile:
# Download base image ubuntu
FROM ubuntu:14.04
MAINTAINER xxaxaxax
RUN apt-get update
RUN apt-get install -y vim
RUN apt-get -y install gcc
RUN mkdir /home/test
ADD hello /home/test
# square is the executable built from square.c
ADD square /home/test
Yes, you CAN use the host's hardware in Docker.
Use --privileged to gain access to all devices (everything under /dev/),
or use the --device=/dev/fb0 option when running the container. Note that if you plug a device into the machine after the container has started, it will not be visible inside the running container.
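For example, a sketch assuming the image built from the Dockerfile above is tagged fb-test: expose only the framebuffer device and run the compiled binary:
docker run --device=/dev/fb0:/dev/fb0 fb-test /home/test/square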
I am new to Docker. I have a few doubts about my understanding. This is what I understand:
Docker solves these problems by creating a lightweight, standalone, executable package of your application that includes everything needed to run it including the code, the runtime, the libraries, tools, environments, and configurations
Does that mean the same container image will run on different operating systems? For example:
I assume the Dockerfile below would create a container image that runs only on CentOS.
If I want to run my application on a different OS, then I would need a different Dockerfile configuration depending on the OS. In that case, what is the advantage of Docker containers? Can you please correct my understanding?
FROM centos
ENV JAVA_VERSION 8u31
ENV BUILD_VERSION b13
# Upgrading system
RUN yum -y upgrade
RUN yum -y install wget
# Downloading & Config Java 8
RUN wget --no-cookies --no-check-certificate --header "Cookie: oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/$JAVA_VERSION-$BUILD_VERSION/jdk-$JAVA_VERSION-linux-x64.rpm" -O /tmp/jdk-8-linux-x64.rpm
RUN yum -y install /tmp/jdk-8-linux-x64.rpm
RUN alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000
RUN alternatives --install /usr/bin/javaws javaws /usr/java/latest/bin/javaws 200000
RUN alternatives --install /usr/bin/javac javac /usr/java/latest/bin/javac 200000
EXPOSE 8080
#install Spring Boot artifact
VOLUME /tmp
ADD /maven/sfg-thymeleaf-course-0.0.1-SNAPSHOT.jar sfg-thymeleaf-course.jar
RUN sh -c 'touch /sfg-thymeleaf-course.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/sfg-thymeleaf-course.jar"]
The Dockerfile you have provided will create a Docker image that can run on any operating system that shares the Linux kernel: Debian, Ubuntu, CentOS, Fedora, just to name a few. This is one of Docker's purposes: being able to run the same image on any host that runs a Linux kernel.
However, as you specify CentOS (FROM centos) inside your Dockerfile, the application running inside the Docker container will use a CentOS userland as its operating system.
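You can verify this yourself: the same CentOS image reports CentOS release information no matter which Linux distribution the host runs, for example:
docker run --rm centos cat /etc/os-release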
I'm relatively new to Docker.
I have launched a boot2docker host using docker-machine create -d.
Managed to connect to it and run a few commands. All good.
However, when trying to create a basic HTTP server image based on CentOS, yum install simply fails, no matter what the package is.
This is my Dockerfile:
FROM centos
MAINTAINER Amir
#Install Apache
RUN yum install httpd
When running:
docker build .
It starts building the image, and everything looks good... but then it fails with:
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2015-09-18.15-10.q5ss8m.yumtx
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
The command '/bin/sh -c yum install httpd' returned a non-zero code: 1
Any idea what I'm doing wrong?
Thanks in advance.
If you look a bit earlier than the last message, there is a good chance you will see something like this:
Total download size: 24 M
Installed size: 32 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
which means yum asked for confirmation; since docker build runs non-interactively, the prompt fell through to the default answer No and yum exited. You have to make the yes answer automatic, e.g.
#Install Apache
RUN yum install -y httpd
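As a follow-up sketch (same package as above), it is also common to clean the yum cache in the same layer so the image stays smaller:
RUN yum -y install httpd && \
    yum clean all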