rustembedded/cross - can't override linux image - docker

I'm attempting to use rustembedded/cross to cross compile to Linux, but I need to override the default image since I need clang and the default image doesn't provide it.
According to their documentation, you can provide a Dockerfile and a configuration file to accomplish this.
This led me to a Dockerfile of:
FROM rustembedded/cross:x86_64-unknown-linux-gnu
RUN dpkg --add-architecture x86_64 && apt-get update && \
apt-get install --assume-yes --no-install-recommends \
libclang-dev \
clang
and a Cross.toml configuration file of:
[target.x86_64-unknown-linux-gnu]
image = "linux:v2"
Then, when I build this image with docker build -f ./Dockerfile.x86_64-unknown-linux-gnu -t linux:v2 ., it gives me an error saying that dpkg can't be found. Even if I remove that part, it tells me that apt-get can't be found.
I can access the shell of the container that it extends, rustembedded/cross:x86_64-unknown-linux-gnu, in Docker Desktop, and if I run dpkg there it shows me the help output. This is confusing since my image extends from it, yet it doesn't know about dpkg.

rustembedded/cross:x86_64-unknown-linux-gnu is based on CentOS, not Ubuntu; see its Dockerfile:
FROM ubuntu:16.04
COPY linux-image.sh /
RUN /linux-image.sh x86_64
FROM centos:7
It uses a multi-stage build; the final stage is based on centos:7.
You can confirm this as follows:
$ docker run --rm -it rustembedded/cross:x86_64-unknown-linux-gnu cat /etc/os-release
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
I don't know how you got dpkg when running it manually; something is definitely off in your steps. The package manager on CentOS is yum, so you should install your packages the same way using yum instead.
$ docker run --rm -it rustembedded/cross:x86_64-unknown-linux-gnu yum version
Loaded plugins: fastestmirror, ovl
Repodata is over 2 weeks old. Install yum-cron? Or run: yum makecache fast
Installed: 7/x86_64 213:96cbf4a3c83d0362309d530d515684dce3f40f63
Group-Installed: yum 14:302aa408884b4b89361d6bb1c41ceac40665f80e
version
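So a Dockerfile along these lines should work (a sketch, assuming the clang and clang-devel packages from the CentOS 7 repositories cover what you need; adjust package names and versions accordingly):
FROM rustembedded/cross:x86_64-unknown-linux-gnu
# The base image is CentOS 7, so use yum rather than dpkg/apt-get
RUN yum install -y clang clang-devel && yum clean all
Build it and reference it from Cross.toml exactly as before, e.g. docker build -f ./Dockerfile.x86_64-unknown-linux-gnu -t linux:v2 . with image = "linux:v2".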

Related

How can I install OpenModelica in my Docker image?

I have set up a Docker image with Ubuntu installed on it. Can you please tell me how I can install OpenModelica inside Ubuntu in that Docker image?
For example, if I wanted to install Node.js in this Docker image I could use:
apt install nodejs
so I need a command like that to install OpenModelica in my Docker image.
P.S.: my Docker image is an Ubuntu image.
I happened to create a Docker image for OpenModelica to debug something, so I might as well add it here. We got this question in the OpenModelica forum as well.
While the answer from @sjoelund.se will stay up to date, this one is a bit more explanatory.
Dockerfile
FROM ubuntu:18.04
# Export DISPLAY, so an X server can display OMEdit
ARG DEBIAN_FRONTEND=noninteractive
ENV DISPLAY=host.docker.internal:0.0
# Install wget, gnupg, lsb-release
RUN apt-get update \
&& apt install -y wget gnupg lsb-release
# Get the OpenModelica stable version
RUN for deb in deb deb-src; do echo "$deb http://build.openmodelica.org/apt `lsb_release -cs` stable"; done | tee /etc/apt/sources.list.d/openmodelica.list
RUN wget -q http://build.openmodelica.org/apt/openmodelica.asc -O- | apt-key add -
# Install OpenModelica
RUN apt-get update \
&& apt install -y openmodelica
# Install OpenModelica libraries (like all of them)
RUN for PKG in `apt-cache search "omlib-.*" | cut -d" " -f1`; do apt-get install -y "$PKG"; done
# Add non-root user for security
RUN useradd -m -s /bin/bash openmodelicausers
USER openmodelicausers
ENV HOME /home/openmodelicausers
ENV USER openmodelicausers
WORKDIR $HOME
# Return omc version
CMD ["omc", "--version"]
Let's build and tag it:
docker build --tag openmodelica:ubuntubionic .
How to use omc from the docker image
Let's create a small helloWorld.mo Modelica model:
model helloWorld
Real x(start=1.0, fixed=true);
equation
der(x) = 2.5*x;
end helloWorld;
and a MOS script which will simulate it, called runHelloWorld.mos
loadFile("helloWorld.mo"); getErrorString();
simulate(helloWorld); getErrorString();
Now we can make our files accessible to the docker container with the -v flag and run our small example with:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
omc runHelloWorld.mos
Note that -v needs an absolute path. I added --rm to clean up.
Using OMEdit with a GUI
I'm using Windows + Docker with WSL2, so in order to get OMEdit running I need an X server installed on my Windows host system. X servers are not trivial to set up, but I'm using VcXsrv and so far it is working for me. On Linux this is of course much simpler.
I'm using this config to start XLaunch:
<?xml version="1.0" encoding="UTF-8"?>
<XLaunch WindowMode="MultiWindow" ClientMode="NoClient" LocalClient="False" Display="-1" LocalProgram="xcalc" RemoteProgram="xterm" RemotePassword="" PrivateKey="" RemoteHost="" RemoteUser="" XDMCPHost="" XDMCPBroadcast="False" XDMCPIndirect="False" Clipboard="True" ClipboardPrimary="True" ExtraParams="" Wgl="True" DisableAC="True" XDMCPTerminate="False"/>
But when the X server is running, you can use OMEdit in nearly the same fashion as you would from a Linux OS; just mount some directory with your files and that's it:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit
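On a Linux host, a common alternative (not from the original answer; a sketch assuming local X connections are allowed, e.g. via xhost +local:) is to share the host's X11 socket instead of running a separate X server:
docker run \
--rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit
The -e DISPLAY=$DISPLAY overrides the DISPLAY=host.docker.internal:0.0 set in the image, which only makes sense on Docker Desktop hosts.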
You could get some inspiration from the Dockerfiles that are used to generate the OpenModelica docker images. For example: https://github.com/OpenModelica/OpenModelicaDockerImages/tree/v1.16.2

Docker: OpenJDK 14 RHEL-based image, cannot install yum/wget/netstat

In my Dockerfile, I need a Maven builder (3.6 at least) working on OpenJDK (Java 14 is required).
FROM maven:3.6.3-openjdk-14 as builder
The problem is simple: I need the netstat command because it is used in several scripts. The official OpenJDK image is RHEL based, so it comes without any of these packages installed.
I tried to download it, or yum, via the wget command but, as you can guess, that is not installed either. I feel trapped because it seems like you can't install any package on it.
That image is actually based on Oracle Linux:
$ podman run -it maven:3.6.3-openjdk-14 /bin/bash -c 'cat /etc/os-release'
NAME="Oracle Linux Server"
VERSION="8.2"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.2"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.2"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:2:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.2
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.2
And this is actually a "slim" variant where dnf and yum aren't installed, but microdnf is. Try using that instead:
RUN microdnf install /usr/bin/netstat
Or
RUN microdnf install net-tools
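In the Maven builder Dockerfile above, that would look roughly like this (a sketch; on Oracle Linux 8 the net-tools package provides netstat):
FROM maven:3.6.3-openjdk-14 as builder
# slim Oracle Linux 8 base: microdnf is the only package manager available
RUN microdnf install net-tools && microdnf clean all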

Docker run vs build - different behaviour when building gstreamer

I'm trying to build a Docker image that uses nvidia hardware decoding in gstreamer and have encountered a strange problem while making the image.
The build process does not find the nvidia CUDA related stuff while running docker build (or nvidia-docker build), but when I spin up the failed image as a container and do those very same steps from within the container, everything works. I even saved the container as an image, which gave me a persistent image that works as intended.
Has anyone experienced a similar problem and can shed some light on it?
Dockerfile:
FROM nvcr.io/nvidia/deepstream:3.0-18.11 AS base
ENV DEBIAN_FRONTEND noninteractive
#install some dependencies. NOTE - not removing apt cache for the MWE
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
libdc1394-22 \
tmux \
vim \
libjpeg-dev \
libpng-dev \
libpng12-dev \
cuda-toolkit-10-0 \
python3-setuptools \
python3-pip ninja-build pkg-config gobject-introspection gnome-devel bison flex libgirepository1.0-dev liborc-0.4-dev
RUN pip3 install meson && ldconfig
FROM base
#pull and make gstreamer:
RUN cd /tmp && mkdir gstreamer
RUN git clone https://github.com/GStreamer/gst-build.git /tmp/gstreamer \
&& cd /tmp/gstreamer \
&& git checkout tags/1.16.0 \
&& ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure \
&& ninja -C build \
&& ninja install -C build
Testing:
build and run the container. Inside the container:
$ gst-inspect-1.0 nvdec
No such element or plugin 'nvdec'
$ cd /tmp/gstreamer
$ ./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
$ ninja -C build
$ ninja install -C build
$ gst-inspect-1.0 nvdec
Factory Details:
Rank primary (256)
[... all plugin parameters show up]
GObject
+----GInitiallyUnowned
+----GstObject
+----GstElement
+----GstVideoDecoder
+----GstNvDec
EDIT1
The image builds with no errors; it's only when I call gstreamer that it turns out to have been built without acceleration. I noticed that the major difference in the build process is
meson.build:109:2: Exception: Problem encountered: The nvdec plugin was enabled explicitly, but required CUDA dependencies were not found.
which does not happen when building from within the container.
The lack of a hard error is most likely related to the ninja+meson build system, which looks for compatible packages, reports the exception, but doesn't fail the build and continues as if nothing had gone wrong.
EDIT2
Answering comment:
To build it and get the error, just build the attached docker image:
sudo docker build -t gst16:latest . > build.log
This will dump all the output into the build.log file.
I don't have a Docker registry that I could use for this and the image gets quite big by Docker standards (~8 GB), but reproducing the successful build is fairly simple:
sudo docker run --runtime="nvidia" -ti gst16:latest /bin/bash
or
sudo nvidia-docker run -ti gst16:latest /bin/bash
which seems to work the same for me. Notice no --rm flag! From within the container:
#check if nvidia decoder plugin is there:
gst-inspect-1.0 nvdec
#fail!
#now build it from within:
cd /opt/gstreamer
./setup.py -Dgtk_doc=disabled -Dgst-plugins-bad:nvdec=enabled -Dgst-plugins-bad:nvenc=enabled -Dgst-plugins-bad:iqa=disabled -Dgst-plugins-bad:bluez=disabled --reconfigure
ninja -C build
ninja install -C build
gst-inspect-1.0 nvdec
#success reported
Now to get the image, exit the container (ctrl+d) and in the host shell:
sudo docker container ls -a to view all containers including stopped ones
from gst16:latest get the CONTAINER_ID and copy it
sudo docker commit <CONTAINER_ID> gst16:manual and after a few seconds you should have the container saved as an image. Verify with sudo docker images
run the new image with sudo docker run --runtime=nvidia --rm -ti gst16:manual /bin/bash
from within the container try again the gst-inspect-1.0 nvdec to verify it's working
EDIT3
$ nvidia-docker --version
Docker version 18.09.0, build 4d60db4
I think I found the solution/reason
Writing it here in case someone finds themselves in a similar situation, plus I hate finding old threads with a similar problem and no answer, or "never mind, I solved it" as the only follow-up.
docker build has no ties to the nvidia runtime, and gstreamer requires access to the full nvidia toolchain in order to build the plugins that need it. This is to be resolved with gstreamer 1.18, but until then there is no way to build gstreamer with nvidia codecs during docker build.
The workaround (see the condensed commands after this list):
Build the image with all dependencies.
Run a container from that image using --runtime=nvidia, but don't use the --rm flag.
In the container, build gstreamer and install it as normal.
Verify with gst-inspect-1.0
Commit the container as new image: docker commit <container_name> <temporary_image_name>
Tag the temporary image properly.
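Condensed into commands, the workaround looks roughly like this (gst16:latest, gst16:manual and the final tag are just example names):
sudo docker build -t gst16:latest .
sudo docker run --runtime=nvidia -ti gst16:latest /bin/bash
# inside the container: re-run setup.py, ninja -C build, ninja install -C build, then verify with gst-inspect-1.0 nvdec
sudo docker container ls -a
sudo docker commit <CONTAINER_ID> gst16:manual
sudo docker tag gst16:manual gst16:nvdec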

yum proxy error while creating a docker image

I have a Dockerfile that should create a Linux image with an Oracle database and a few other things. However, when running the docker build command I get the following error:
http://yum.oracle.com/repo/OracleLinux/OL7/UEKR4/x86_64/repodata/repomd.xml:
[Errno 14] curl#5 - "Could not resolve proxy: www-proxy.us.oracle.com;
Unknown error"
I am building this image from behind a proxy, and the proxy is set up appropriately both in the environment and in the Dockerfile.
The lines that produce this error are:
yum install zip
yum -y install oracle-database-preinstall-18c
Interestingly, if I remove the yum commands, create a base image, and run the same yum commands from within the container, it works quite well.
# LICENSE UPL 1.0
# Copyright (c) 2015 Oracle and/or its affiliates. All rights reserved.
FROM oraclelinux:7.4
MAINTAINER Temporary Name <temporary.name@oracle.com>
ENV JAVA_PKG=server-jre-8u*-linux-x64.tar.gz \
JAVA_HOME=/usr/java/default \
http_proxy=http://www-myproxy.cn.company.com:80 \
https_proxy=http://www-myproxy.cn.company.com:80 no_proxy=localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,.cn.company.com,.companycorp.com,/var/run/docker.sock
RUN yum-config-manager --save --setopt=ol7_UEKR4.skip_if_unavailable=true && \
yum install zip && \
yum -y install oracle-database-preinstall-18c
Sorted this out with the help of a friend in my office. The issue was with the build command: I had to use the --network=host flag when building the image.
I did the following
sudo docker build --network=host --build-arg http_proxy=http://www-proxy.cnt.company.com:80 -t oracle:183 .
instead of
sudo docker build --build-arg http_proxy=http://www-proxy.cnt.company.com:80 -t oracle:183 .
If someone faces a similar issue, please use the --network flag to get that sorted.
Thanks
Bala

RStudio in a Docker image does not deal with some libraries

I have a problem using the Docker RStudio image rocker/rstudio provided
on https://www.rocker-project.org/ (Docker containers for R). Since I am a beginner with both Docker and RStudio, I suspect the problem comes from me and does not deserve a bug report:
I open a proper terminal with 'Docker Quickstart Terminal'
where I run the image with docker run -d -p 8787:8787 -e DISABLE_AUTH=true -v <...>:/home/rstudio/<...> --name rstudio rocker/rstudio
in my browser I then get a nice RStudio instance at the address http://192.168.99.100:8787
but in this instance I can't install several packages such as xml2. I get the message:
Using PKG_CFLAGS=
Using PKG_LIBS=-lxml2
------------------------- ANTICONF ERROR ---------------------------
Configuration failed because libxml-2.0 was not found. Try installing:
* deb: libxml2-dev (Debian, Ubuntu, etc)
* rpm: libxml2-devel (Fedora, CentOS, RHEL)
* csw: libxml2_dev (Solaris)
If libxml-2.0 is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a libxml-2.0.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
--------------------------------------------------------------------
ERROR: configuration failed for package ‘xml2’
* removing ‘/usr/local/lib/R/site-library/xml2’
Warning in install.packages :
installation of package ‘xml2’ had non-zero exit status
I don't know whether xml2 is on the image, but the file libxml-2.0.pc does exist on my laptop in the directory /opt/local/lib/pkgconfig and pkg-config is in /opt/local/bin. So I tried linking these pkg-config paths when running the image (to see what happens when I play with the image environment in RStudio), adding the options -v /opt/local/lib/pkgconfig:/home/rstudio/lib/pkgconfig -v /opt/local/bin:/home/rstudio/bin to the run command. But it doesn't work: for some reason I don't see the content of lib/pkgconfig in RStudio...
Also, the RStudio instance does not accept root/sudo commands, so I can't use tools such as apt-get in the RStudio terminal.
So, what's the trick?
Libraries on your laptop (the Docker host) are not available to Docker containers. You should create a custom image with the required libraries; create a Dockerfile like this:
FROM rocker/rstudio
RUN apt-get update \
&& apt-get install -y --no-install-recommends \
libxml2-dev # add any additional libraries you need
CMD ["/init"]
Above I added libxml2-dev, but you can add as many libraries as you need.
Then build your image using this command (execute it in the directory where you created the Dockerfile):
docker build -t my_rstudio:0.1 .
Then you can start your container:
docker run -d -p 8787:8787 -e DISABLE_AUTH=true --name rstudio my_rstudio:0.1
(you can add any additional arguments, like -v, to the command above).
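Once the container is up, installing xml2 from the RStudio console should succeed. You can also check it non-interactively from the host (a sketch; the container name rstudio comes from the --name flag above):
docker exec -it rstudio R -e "install.packages('xml2')"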
