Docker error for GUI app: qt.qpa.xcb: could not connect to display, qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found - docker

I am trying to install a GUI-based software called Dragonfly as a container, since the software has conflicts with my host OS, RHEL 7. So I thought installing it as a Docker container could be a solution, even though I am completely new to Docker. My Dockerfile looks like this:
FROM ubuntu
COPY DragonflyInstaller /Dragonfly/
WORKDIR /Dragonfly/
# Dependent packages for Dragonfly
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ=Europe/Berlin
RUN apt-get update && apt-get install -y apt-utils \
fontconfig \
libxcb1 \
libxcb-glx0 \
x11-common \
x11-apps \
libx11-xcb-dev \
libxrender1 \
libxext6 \
libxkbcommon-x11-0 \
libglu1 \
libxcb-xinerama0 \
qt5-default \
libxcb-icccm4 \
libxcb-image0 \
libxcb-render-util0 \
libxcb-util1 \
freeglut3-dev \
python3-pip \
xauth
CMD ./DragonflyInstaller
After building the corresponding Docker image, it cannot launch the GUI-based installer window of Dragonfly. I am using the following two commands:
xhost +local:docker
sudo docker run -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix dragonfly
I tried various suggestions posted on different forums and, accordingly, various arguments to docker run; however, I get the same two errors every time:
No protocol specified
qt.qpa.xcb: could not connect to display :340.0
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: minimal, xcb.
Could you please suggest how to resolve this issue?

In fact, I was logging in to my machine with the X2Go remote desktop client, which provides its own desktop after login. However, I also tried another remote login tool called NoMachine, which does not create its own display or desktop but rather keeps the original desktop for the remote user. When I tried with NoMachine, there were no errors.
So I guess the two errors above are caused by the remote desktop software X2Go.
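One workaround often suggested for nested-display setups like X2Go is to hand the container a copy of the X authority cookie in addition to the socket, since "No protocol specified" is typically an authorization failure. This is a sketch, not a verified fix for Dragonfly; the cookie file path /tmp/.docker.xauth is an arbitrary choice:

```shell
# Share both the X11 socket and an X authority cookie with the container.
XSOCK=/tmp/.X11-unix
XAUTH=/tmp/.docker.xauth

# Export the current display's cookie into a file readable inside the container;
# the sed call wildcards the hostname field so the cookie matches any hostname.
xauth nlist "$DISPLAY" | sed -e 's/^..../ffff/' | xauth -f "$XAUTH" nmerge -

docker run --rm \
  -e DISPLAY="$DISPLAY" \
  -e XAUTHORITY="$XAUTH" \
  -v "$XSOCK":"$XSOCK" \
  -v "$XAUTH":"$XAUTH" \
  dragonfly
```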

Docker custom .deb file location

I had to compile a custom kernel to get Ubuntu to run on my laptop, and now I'm trying to run Docker containers on it.
It generated packages I installed:
linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb
linux-image-5.15.30-25.25custom-dbg_5.15.30-25.25custom-1_amd64.deb
linux-image-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb
Now when I try to create docker images I get the following error:
...
Reading package lists...
Building dependency tree...
Reading state information...
E: Unable to locate package linux-headers-5.15.30-25.25custom
E: Couldn't find any package by glob 'linux-headers-5.15.30-25.25custom'
E: Couldn't find any package by regex 'linux-headers-5.15.30-25.25custom'
The Dockerfile just pulls an nvidia image and adds some other required packages:
FROM nvidia/cuda:11.4.2-devel-ubuntu18.04
ARG COMPILE_GRAPHICS=OFF
ARG DEBIAN_FRONTEND=noninteractive
USER root
RUN \
set -ex && \
apt-key update && \
apt-get update && \
apt-get install -y -q \
build-essential \
software-properties-common \
openssl \
curl && \
rm -rf /var/lib/apt/lists/* && \
rm -rf /var/cache/apt/archives/ && \
rm -rf /usr/share/doc/ && \
rm -rf /usr/share/man/
...
It is installed on the host PC
~$ sudo dpkg -l | grep linux-headers-5.15.30-25.2
ii linux-headers-5.15.30-25.25custom 5.15.30-25.25custom-1 amd64 Linux kernel headers for 5.15.30-25.25custom on amd64
There's no problem on other machines using the upstream Ubuntu kernel packages.
So I guess Docker needs the actual package. How can I add a custom location from which to fetch the packages?
Thanks
I get the feeling you are mixing up what is outside and inside of a container.
Outside - on your host operating system you had to compile a custom kernel to get Linux running. So far so good.
Now you are trying to build a Docker container, so the next steps happen inside the container. Docker is lightweight virtualization, therefore the container runs on the same kernel as the host. Some package depends on the kernel's headers, and apt is trying to install the Debian package for them but cannot find it. That seems obvious, as you are running a custom kernel and the package is not in any well-known repository.
To get out of the situation:
check whether the headers for your kernel are available as .deb
make that .deb available inside the container. This may happen by placing them into the Docker build context.
ensure your Dockerfile installs the .deb before installing whatever needs it. This prevents apt from searching for it in online repositories.
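Putting those steps together, a minimal sketch (assuming the .deb file sits next to the Dockerfile in the build context) might look like:

```dockerfile
FROM nvidia/cuda:11.4.2-devel-ubuntu18.04
# Copy the locally built header package from the build context...
COPY linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb /tmp/
# ...and install it before anything that depends on the kernel headers,
# so apt never goes looking for it in the online repositories.
RUN dpkg -i /tmp/linux-headers-5.15.30-25.25custom_5.15.30-25.25custom-1_amd64.deb \
    && rm /tmp/linux-headers-*.deb
```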

Use docker to containerize and build an old application

Our embedded system product is built in an Ubuntu 12.04 with some ancient tools that are no longer available. We have the tools in our local git repo.
Setting up the build environment for a newcomer is extremely challenging. I would like to set up the build environment in a Docker container, download the source code onto a host machine, mount the source code into the container, and execute the build, so that someone starting fresh doesn't have to endure the challenging setup. Is this a reasonable thing to do?
Here is what I have done so far:
Created a dockerfile to set up the env
# Ubuntu 12.04.5 LTS is the standard platform for development
FROM ubuntu:12.04.5
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
build-essential \
dialog \
autoconf \
automake \
libtool \
libgtk-3-dev \
default-jdk \
bison \
flex \
php5 \
php5-pgsql \
libglib2.0-dev \
gperf \
sqlite3 \
txt2man \
libssl-dev \
libudev-dev \
ia32-libs \
git
ENV PATH="$PATH:toolchain/bin"
The last line (ENV ...) sets the path to the toolchain location. Also there are a few more env variables to set.
On my host machine I have my source pulled into my working dir.
Built the docker image using:
docker build --tag=myimage:latest .
And then I mounted the source code as a volume to the container using:
docker run -it --volume /path/to/host/code:/path/in/container myimage
All this works: it mounts the code in the container, I am in the container's terminal, and I can see the code. However, I don't see the path I set to the toolchain in my Dockerfile. I was hoping the path would be set and I could call make.
Is this not how it is supposed to work, or is there a better way to do this?
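For what it's worth, ENV values set in a Dockerfile do persist into containers started with docker run; one detail worth double-checking here (an assumption about the intended layout) is that toolchain/bin is a relative path, so it only resolves when the current directory happens to contain the toolchain. An absolute path against the mount target avoids that:

```dockerfile
# Sketch: make the toolchain path absolute, matching the -v mount target
ENV PATH="$PATH:/path/in/container/toolchain/bin"
```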

How can we install google-chrome-stable on alpine image in dockerfile using dpkg?

I am trying to install google-chrome-stable on an Alpine image using dpkg. However, dpkg is installed, but it does not install google-chrome-stable and returns this error instead. Is there a way to install google-chrome-stable in an Alpine image, either using dpkg or some other way?
dpkg: regarding google-chrome-stable_current_amd64.deb containing
google-chrome-stable:amd64, pre-dependency problem:
google-chrome-stable:amd64 pre-depends on dpkg (>= 1.14.0)
dpkg: error processing archive google-chrome-stable_current_amd64.deb (--install):
pre-dependency problem - not installing google-chrome-stable:amd64
Errors were encountered while processing:
Dockerfile:
# Base image
FROM ruby:2.6.3-alpine3.10
# Use node version 10.16.3, yarn version 1.16.0
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.10/main/ nodejs=10.16.3-r0
RUN apk add --no-cache --repository http://dl-cdn.alpinelinux.org/alpine/v3.10/community/ yarn=1.16.0-r0
# Install dependencies
RUN apk upgrade
RUN apk --update \
add build-base \
git \
tzdata \
nodejs \
nodejs-npm \
bash \
curl \
yarn \
gzip \
postgresql-client \
postgresql-dev \
imagemagick \
imagemagick-dev \
imagemagick-libs \
chromium \
chromium-chromedriver \
ncurses \
less \
dpkg=1.19.7-r0 \
chromium \
chromium-chromedriver
RUN dpkg --add-architecture amd64
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb
# This is the base directory used in any
# further COPY, RUN and ENTRYPOINT commands
WORKDIR /webapp
# Copy Gemfile and Gemfile.lock and run bundle install
COPY Gemfile* /webapp/
RUN gem install bundler -v '1.17.3' && \
bundle _1.17.3_ install
# Copy everything to /webapp for docker image
COPY . ./
EXPOSE 3000
# Run the application
CMD ["rails", "server", "-b", "0.0.0.0"]
Installing the Chrome .deb file this way won't work on Alpine.
While the dpkg package is available in the Alpine repository, and is useful for installing lightweight Debian packages, you won't be able to use it for installing complex Debian packages, since it'll be impossible to satisfy many Debian dependencies. Alpine is generally not Debian compatible (relying on musl libc), so installing native Alpine packages using apk is the right way to go.
AFAIK, there's currently no Alpine Linux-compatible (musl-libc) build of Google Chrome.
You could, however, install the Chromium browser, which is available using an apk package:
apk add chromium
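For example, a variant of the Dockerfile above that sticks to native Alpine packages could look like this (the binary paths are the usual Alpine install locations, stated here as assumptions):

```dockerfile
FROM ruby:2.6.3-alpine3.10
# Install the musl-compatible browser and driver from the Alpine repositories
RUN apk add --no-cache chromium chromium-chromedriver
# Typical Alpine install locations, for tools that expect an explicit binary path
ENV CHROME_BIN=/usr/bin/chromium-browser \
    CHROMEDRIVER_BIN=/usr/bin/chromedriver
```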
Another option is enabling glibc on a vanilla Alpine image, making it compatible with Debian binaries. This is a fairly simple procedure, see: Dockerfile. However, it may not be suitable for images with existing applications such as ruby:2.6.3-alpine3.10. Moreover, even with a glibc setup on Alpine, Chrome is not likely to run without issues. I have made a quick attempt (Dockerfile) but couldn't get past the first segfault.
Edit 9/5/21: Running the Debian-compatible stable Chrome on Alpine is going to be a very difficult task, to say the least. This is in part due to the very large number of dependencies and libraries. Trying to run it results in segfaults during dynamic linking and, finally, assertions from the dynamic linker. Even if we manage to get past these issues and start Chrome, it will probably be very unstable.
Since chromium-chromedriver is present in your package list, I suppose you want to do browser automation. For browser automation, I used Java and Selenium, and downloaded the Chromium binary and Chromium driver binary manually.
The main thing I want to tell you is that the Chromium binary and Chromium driver binary bundle might not work as expected; you may need to downgrade the version of either the driver or the browser and make several trials to find matched versions that really work, no matter whether you use Node.js or Java Selenium.
With Selenium, you have another option: you can deploy the Chrome and chromedriver bundle as an HTTP service on a different server and make Selenium invoke the remote Chrome service.
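As a quick sanity check when hunting for a matched pair, comparing the major version components is usually enough to rule out an obvious mismatch. A small sketch (the version strings below are illustrative values, not taken from any real system):

```shell
# Extract the major component of a dotted version string, e.g. "93.0.4577.63" -> "93"
major() { printf '%s' "$1" | cut -d. -f1; }

chrome_version="93.0.4577.63"   # e.g. from: chromium-browser --version
driver_version="93.0.4577.15"   # e.g. from: chromedriver --version

if [ "$(major "$chrome_version")" = "$(major "$driver_version")" ]; then
  echo "major versions match"
else
  echo "mismatch: downgrade one side until the majors agree"
fi
```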
ChromeDriver for version 93.0.4577.15

Jest-dynamoDB connection gets refused inside of docker container

I have a suite of tests written in Jest for dynamoDB that use the dynamodb-local instance as explained here using this dependency. I use a custom-built Docker image which builds a container within which the tests are executed.
Here's the Dockerfile
FROM openjdk:8-jre-alpine
RUN apk -v --no-cache add \
curl \
build-base \
groff \
jq \
less \
py-pip \
python openssl \
python3 \
python3-dev \
yarn \
&& \
pip3 install --upgrade pip awscli boto3 aws-sam-cli
EXPOSE 8000
I yarn install all of my dependencies and then yarn test; at this point, after a long time, it outputs this:
Error
This is the command I am using:
docker run -it --rm -p 8000:8000 -v $(pwd):/data -w /data aws-cli-java8-v15:latest
The tests work completely fine on my own machine, but no matter what project I use or what I include in my Dockerfile, the connection always gets dropped.
I solved the issue; it turns out it has to do with Alpine Linux. Because Alpine uses musl instead of glibc, local DynamoDB won't be able to start, and it crashes a few seconds after being executed without outputting any error messages. The solution is either to use Oracle JDK on Alpine, which is hard enough given their new license, or to use any other base image that uses glibc with OpenJDK. You could also try to install glibc on Alpine and link it to your OpenJDK, but that's not a terribly good idea.
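A minimal sketch of the glibc-with-OpenJDK option, swapping the Alpine base for a Debian-based image; the extra packages mirror the original Dockerfile where Debian equivalents exist, and the exact package names are assumptions:

```dockerfile
# Debian-based (glibc) JRE image, so DynamoDB Local's native SQLite
# library can link and start correctly
FROM openjdk:8-jre-slim
RUN apt-get update \
    && DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
       curl jq less python3 python3-pip \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install --upgrade awscli boto3
EXPOSE 8000
```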

Docker commands require keyboard interaction

I'm trying to create a Docker image for ripping CDs (using abcde).
Here's the relevant portion of the Dockerfile:
FROM ubuntu:17.10
MAINTAINER Graham Nicholls <graham#rockcons.co.uk>
RUN apt update && apt -y install eject vim ruby abcde
...
Unfortunately, the package "abcde" pulls in a mail client (not sure which), and apt tries to configure that by asking what type of mail connection to configure (smarthost/relay etc).
When docker runs, it doesn't appear to read from stdin, so I can't redirect input into the docker process.
I've tried using --nodeps with apt (and replacing apt with apt-get); unfortunately, --nodeps no longer seems to be a supported option and returns:
E: Command line option --nodeps is not understood in combination with the other options
Someone suggested using expect in response to a similar question, which I'd rather avoid. This seems to be a "difficult to google" problem - I can't find anything.
So, is there a way of passing in the answer to apt's configuration, or of preventing apt from pulling in a mail client, which would be better - I'm not planning on sending updates to CDDB.
The typical template to install apt packages in a docker container looks like:
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
eject \
vim \
ruby \
abcde \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
Running it with the "noninteractive" value removes any prompts. You don't want to set that as an ENV since that would also impact any interactive commands you run inside the container.
You also want to cleanup the package database when finished to reduce the layer size and avoid reusing a stale cached package database in a later step.
The no-install-recommends option will reduce the number of packages installed by only installing the required dependencies, not the additional recommended packages. This cuts the size of the root filesystem down by half for me.
If you need to pass a non-default configuration to a package, then use debconf. First, run your install somewhere interactively and enter the options you want to save. Install debconf-utils. Then run:
debconf-get-selections | grep "${package_name}"
to view all the options you configured for that package. You can then pipe these options to debconf-set-selections in your container before running your install, e.g.:
RUN echo "postfix postfix/main_mailer_type select No configuration" \
| debconf-set-selections \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
....
or save your selections to a file that you copy in:
COPY debconf-selections /
RUN debconf-set-selections </debconf-selections \
&& apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
....
