Installing Google Chrome on Ubuntu via Dockerfile hitting Geographic Area prompt [duplicate]

This question already has answers here:
How to fill user input for interactive command for "RUN" command?
I am trying to create a Docker image based on Ubuntu with Node.js, Git & Google Chrome.
This is my Dockerfile:
FROM ubuntu:20.04
USER root
WORKDIR /home/app
COPY ./package.json /home/app/package.json
RUN apt-get update
RUN apt-get -y install curl gnupg
RUN apt-get install g++ build-essential --yes
RUN curl -sL https://deb.nodesource.com/setup_15.x | bash -
RUN apt-get -y install nodejs
RUN apt-get install git --yes
# Install Google Chrome
RUN apt-get -y install wget
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install ./google-chrome*.deb --yes
When I build the image, I keep getting stuck at this step.
=> [12/13] RUN apt-get install ./google-chrome*.deb --yes 409.6s
=> => # 1. Africa 6. Asia 11. System V timezones
=> => # 2. America 7. Atlantic Ocean 12. US
=> => # 3. Antarctica 8. Europe 13. None of the above
=> => # 4. Australia 9. Indian Ocean
=> => # 5. Arctic Ocean 10. Pacific Ocean
=> => # Geographic area:
Has anyone had a similar experience and been able to resolve this?

I have finally gotten some clarity on this issue and can provide a workaround for anyone who runs into it in the future.
Problem: The geographic area is requested within the standard ubuntu 20.04 image from Docker Hub because that image has been stripped of its locale settings. Those settings can usually be inspected with locale or localectl status in the terminal; localectl, however, requires systemd, which is not included in the standard ubuntu 20.04 image you are using in your FROM statement. In addition, you will not be able to easily add systemd to that image, as you'll run into the issue of it not being PID 1 when your Docker container starts.
Solution: The easiest solution I found is to simply switch to an Ubuntu image that already contains systemd. One such image, which I am using now, is jrei/systemd-ubuntu. So what you can do to prevent that region-selection question from Chrome, or any other app that may ask it, is to replace your current FROM statement with this one: FROM jrei/systemd-ubuntu:20.04.
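A sketch of the top of the Dockerfile with that swap applied (the remaining instructions are unchanged from the question):
FROM jrei/systemd-ubuntu:20.04
USER root
WORKDIR /home/app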

I solved that issue by configuring the timezone before installing Chrome:
ENV TZ=Europe/Madrid
RUN echo "Preparing geographic area ..."
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
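Putting the pieces together, a minimal sketch of the relevant part of the Dockerfile. Europe/Madrid is only an example timezone; DEBIAN_FRONTEND=noninteractive is a common companion to this fix (it suppresses any remaining interactive prompts) rather than part of the answer above:
FROM ubuntu:20.04
ENV TZ=Europe/Madrid
ENV DEBIAN_FRONTEND=noninteractive
# Pre-seed the timezone so tzdata never has to ask for a geographic area
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update && apt-get -y install wget
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get -y install ./google-chrome-stable_current_amd64.deb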

Related

Tauri Development in VS Code Dev Container

I am trying to set up a VS Code Dev Container environment for Tauri development. I am using X-forwarding to display the app on the container's host system. When I try to start the app via yarn tauri dev, the host's X11 server displays the app, but with a black screen. Within the container, I am getting the following warnings & error:
(process:3623): Gtk-WARNING **: 20:01:34.585: Locale not supported by C library.
Using the fallback 'C' locale.
event - compiled client and server successfully in 9.3s (176 modules)
(process:3665): Gtk-WARNING **: 20:01:40.390: Locale not supported by C library.
Using the fallback 'C' locale.
(WebKitWebProcess:3665): Gdk-ERROR **: 20:01:42.784: The program 'WebKitWebProcess' received an X Window System error.
This probably reflects a bug in the program.
The error was 'GLXBadFBConfig'.
(Details: serial 173 error_code 163 request_code 149 (GLX) minor_code 21)
(Note to programmers: normally, X errors are reported asynchronously;
that is, you will receive the error a while after causing it.
To debug your program, run it with the GDK_SYNCHRONIZE environment
variable to change this behavior. You can then get a meaningful
backtrace from your debugger if you break on the gdk_x_error() function.)
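As that note suggests, the error can be made synchronous for debugging; a sketch of what that might look like here (GDK_SYNCHRONIZE is a standard GDK environment variable, not something Tauri-specific):
GDK_SYNCHRONIZE=1 yarn tauri dev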
This is the Dockerfile I am using
#FROM debian:bullseye
FROM nvidia/opengl:1.0-glvnd-runtime-ubuntu20.04
ENV DEBIAN_FRONTEND=noninteractive
# install dependencies from package manager
RUN apt update && \
apt install -y libwebkit2gtk-4.0-dev \
build-essential \
curl \
wget \
libssl-dev \
libgtk-3-dev \
libayatana-appindicator3-dev \
librsvg2-dev
# install rust
RUN curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh -s -- -y
# install nodejs, npm & yarn
RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
RUN npm install -g yarn
# create 'dev' user
RUN mkdir /home/dev
RUN useradd -u 1000 dev && chown -R dev /home/dev
And this is the devcontainer.json
{
"name": "Tauri Dev Environment",
"dockerFile": "Dockerfile",
"settings": {
"terminal.integrated.shell.linux": "/bin/bash"
},
"extensions": [
"esbenp.prettier-vscode",
"dbaeumer.vscode-eslint",
"rust-lang.rust",
"be5invis.toml"
],
"remoteEnv": {
"DISPLAY": "host.docker.internal:0"
},
"runArgs": ["--gpus=all"],
"workspaceMount": "source=${localWorkspaceFolder},target=/home/dev/workspace/${localWorkspaceFolderBasename},type=bind",
"workspaceFolder": "/home/dev/workspace/${localWorkspaceFolderBasename}",
}
As you can see in the Dockerfile, I first tried a Debian base image. Then I thought the issue could be fixed by accessing my system's GPU via an Nvidia image and the corresponding configuration. However, both attempts led to the same result. It might be worth noting that with the Debian-based image, I am able to successfully forward browser UIs from the container.
Update: The problem only exists when I run the host's X11 server in "Multiple windows" mode. When I run it in "Fullscreen" or "Single window" mode, the Tauri app is displayed properly. This also works with the simple Debian base image without GPU access. Does anyone know why it does not work in "Multiple windows" mode?
This appears to be a problem with the X11 server. I was using VcXsrv before; when I use Xming, the "Multiple windows" option works properly. I want to highlight again that the Nvidia base image and GPU access are not required.

How can I install OpenModelica in my Docker image?

I have set up a Docker image with Ubuntu installed on it. Can you please tell me how I can install OpenModelica inside Ubuntu in that Docker image?
For example, if I wanted to install Node.js in this Docker image, I could use this command:
apt install nodejs
So I need a command like that to install OpenModelica in my Docker image.
P.S.: my Docker image is an Ubuntu image.
I happened to create a Docker image for OpenModelica to debug something, so I might as well add it here. We got this question in the OpenModelica forum too.
While @sjoelund.se's answer will stay up to date, this one explains things in a bit more detail.
Dockerfile
FROM ubuntu:18.04
# Export DISPLAY, so an X server can display OMEdit
ARG DEBIAN_FRONTEND=noninteractive
ENV DISPLAY=host.docker.internal:0.0
# Install wget, gnupg, lsb-release
RUN apt-get update \
&& apt install -y wget gnupg lsb-release
# Get the OpenModelica stable version
RUN for deb in deb deb-src; do echo "$deb http://build.openmodelica.org/apt `lsb_release -cs` stable"; done | tee /etc/apt/sources.list.d/openmodelica.list
RUN wget -q http://build.openmodelica.org/apt/openmodelica.asc -O- | apt-key add -
# Install OpenModelica
RUN apt-get update \
&& apt install -y openmodelica
# Install OpenModelica libraries (like all of them)
RUN for PKG in `apt-cache search "omlib-.*" | cut -d" " -f1`; do apt-get install -y "$PKG"; done
# Add non-root user for security
RUN useradd -m -s /bin/bash openmodelicausers
USER openmodelicausers
ENV HOME /home/openmodelicausers
ENV USER openmodelicausers
WORKDIR $HOME
# Return omc version
CMD ["omc", "--version"]
Let's build and tag it:
docker build --tag openmodelica:ubuntubionic .
How to use omc from the docker image
Let's create a small helloWorld.mo Modelica model:
model helloWorld
Real x(start=1.0, fixed=true);
equation
der(x) = 2.5*x;
end helloWorld;
and a MOS script which will simulate it, called runHelloWorld.mos
loadFile("helloWorld.mo"); getErrorString();
simulate(helloWorld); getErrorString();
Now we can make our files accessible to the docker container with the -v flag and run our small example with:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
omc runHelloWorld.mos
Note that -v needs an absolute path. I added --rm to clean up.
Using OMEdit with a GUI
I'm using Windows + Docker with WSL2, so in order to get OMEdit running I need an X server installed on my Windows host system. X servers are not trivial to set up, but I'm using VcXsrv and so far it is working for me. On Linux this is of course much simpler.
I'm using this config to start XLaunch:
<?xml version="1.0" encoding="UTF-8"?>
<XLaunch WindowMode="MultiWindow" ClientMode="NoClient" LocalClient="False" Display="-1" LocalProgram="xcalc" RemoteProgram="xterm" RemotePassword="" PrivateKey="" RemoteHost="" RemoteUser="" XDMCPHost="" XDMCPBroadcast="False" XDMCPIndirect="False" Clipboard="True" ClipboardPrimary="True" ExtraParams="" Wgl="True" DisableAC="True" XDMCPTerminate="False"/>
But once the X server is running, you can use OMEdit in nearly the same fashion as from a Linux OS: just mount a directory with your files and that's it:
docker run \
--rm \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit
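On Linux, instead of relying on the DISPLAY baked into the image, a common pattern is to pass the host's DISPLAY and mount the X11 socket (a sketch, not from the original answer):
docker run \
--rm \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-v $(pwd):/home/openmodelicausers \
openmodelica:ubuntubionic \
OMEdit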
You could get some inspiration from the Dockerfiles that are used to generate the OpenModelica docker images. For example: https://github.com/OpenModelica/OpenModelicaDockerImages/tree/v1.16.2

Why might the host behave more deterministically than a Docker container?

We use Docker to precisely define the build environment and to help with deterministic builds, but on my machine I get a tiny change in the build results when using Docker, and none when not using Docker.
I did pretty extensive testing and am out of ideas :(
I tested on the following systems:
A: My new PC without Docker
AD1: My new PC with Docker, using our Dockerfile based on ubuntu:18.04 compiled "a year ago"
AD2: My new PC with Docker, using our Dockerfile based on ubuntu:19.10 compiled now
B: My laptop (that I had copied the disk from to my new PC) without Docker
BD: My laptop with Docker
CD1: Co-worker's laptop with Docker, using our Dockerfile based on ubuntu:18.04 compiled "a year ago"
CD2: Co-worker's laptop with Docker, using our Dockerfile based on ubuntu:19.10 compiled now
DD: A Digital Ocean VPS with our Dockerfile based on ubuntu:18.04 compiled now
In all scenarios we got either of two build results I will name variant X and Y.
We got variant X using A, B, CD1, CD2 and DD.
We got variant Y using AD1, AD2 and BD.
The issue has been 100% reproducible across several releases of our Android app. It did not go away when I updated my Docker from 19.03.6 to 19.03.8 to match my co-worker's version. We both had Ubuntu 19.10 back then, and I still get the issue on Ubuntu 20.04.
I always freshly cloned our project into a new folder, used disorderfs to eliminate file system sorting issues and mounted the folder into the docker container.
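For reference, a sketch of how disorderfs can be mounted to take directory-entry ordering out of the equation (paths are examples; see the disorderfs man page for the exact flags):
# Expose ./src at ./src-sorted with deterministically sorted directory entries
disorderfs --sort-dirents=yes --reverse-dirents=no ./src ./src-sorted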
I doubt it's relevant but we are using this Dockerfile:
FROM ubuntu:18.04
RUN dpkg --add-architecture i386 && \
apt-get update -y && \
apt-get install -y software-properties-common && \
apt-get update -y && \
apt-get install -y wget \
openjdk-8-jre-headless=8u162-b12-1 \
openjdk-8-jre=8u162-b12-1 \
openjdk-8-jdk-headless=8u162-b12-1 \
openjdk-8-jdk=8u162-b12-1 \
git unzip && \
rm -rf /var/lib/apt/lists/* && \
apt-get autoremove -y && \
apt-get clean
# download and install Android SDK
ARG ANDROID_SDK_VERSION=4333796
ENV ANDROID_HOME /opt/android-sdk
RUN mkdir -p /opt/android-sdk && cd /opt/android-sdk && \
wget -q https://dl.google.com/android/repository/sdk-tools-linux-${ANDROID_SDK_VERSION}.zip && \
unzip *tools*linux*.zip && \
rm *tools*linux*.zip && \
yes | $ANDROID_HOME/tools/bin/sdkmanager --licenses
Also, here are the build instructions I run to get different results. The diff itself can be found here.
Edit: I also filed it as a bug on the docker repo.
Docker is not fully architecture-independent. On different architectures you may see more or fewer minute differences. Usually this should not affect anything important, but it may change, for example, the optimisation decisions of a compiler. It is more visible if you compare very different CPUs, like AMD64 vs ARM. For Java it should not matter, but it seems that at least sometimes it does.
Another factor is network and DNS. When you run apt-get, wget and similar tools, they download code or binaries from the network. The result may differ depending on which DNS you use, which may lead you to a different server or repo URL. Theoretically there should be no difference, but in practice there sometimes is: a new version is being rolled out and is only visible on some nodes, something bad happened, or you connect through a cache/proxy that serves stale content.
The latter can also create differences that appear over time: an app is compiled one month, someone tries to verify it a few weeks or months later, apt-get installs different versions of libraries, and as a result there are minute differences.
I'm not sure which of these applies here, but I have some ideas:
try making small changes to the app so that it again builds identically on the most popular CPUs, do extensive testing, and then list the architectures on which it can be verified
make the verification process a little more complex and non-free, so users have to run a server instance (on AWS, Google, Azure, Rackspace or another provider) with a specified architecture and build and verify there - you could try to specify on exactly which types of machines the result will be the same and what the minimal requirements are (it may or may not work on free-plan instances)
diff the content of the created images (not only the apk but the full system image); maybe something important differs between the Docker images on the machines producing different results
try to find as small an initial image as possible, and don't let apt-get or other tools automatically install dependencies at their newest versions; instead specify all dependencies and their versions (see the sketch below)
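For the last point, a sketch of finding a pinnable version and pinning it, mirroring the openjdk-8 pins already used in the Dockerfile above (the package name is just an example):
# Find the exact candidate version available in the configured repos
apt-cache policy openjdk-8-jdk
# ...then pin that version in the Dockerfile:
# RUN apt-get install -y openjdk-8-jdk=8u162-b12-1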

Determine the time zone when building a Singularity image

I am trying to build an image using Singularity. In one step I have to run an R script; to do so, I need to install R in the recipe file, which I did using the following commands:
apt-get install -y systemd systemd-sysv gdebi-core procps libssl1.1 ed wget curl libqt5webkit5 libqt5core5a libxml2-dev r-cran-xml wget libssl-dev curl libcurl4-openssl-dev libnetcdf-dev netcdf-bin libcairo2-dev libxt-dev default-jre texlive-latex-base libhdf5-dev r-base r-base-dev
curl https://download1.rstudio.org/rstudio-xenial-1.1.463-amd64.deb > /rstudio-1.1.463-amd64.deb
apt-get -y install /rstudio-1.1.463-amd64.deb
wget -O /rstudio-server-stretch-1.1.463-amd64.deb \
https://download2.rstudio.org/rstudio-server-stretch-1.1.463-amd64.deb
gdebi -n /rstudio-server-stretch-1.1.463-amd64.deb
and I run the recipe file using this command:
sudo singularity build nanos.sif Singularity.recipe
but while running it, at some point it asks me which geographic area I am located in. Here is the message:
Please select the geographic area in which you live. Subsequent configuration
questions will narrow this down by presenting a list of cities, representing
the time zones in which they are located.
1. Africa 6. Asia 11. System V timezones
2. America 7. Atlantic Ocean 12. US
3. Antarctica 8. Europe 13. None of the above
4. Australia 9. Indian Ocean
5. Arctic Ocean 10. Pacific Ocean
Geographic area:
I chose one of them by name and by number, but the build did not proceed. Do you know how I can fix this problem?
You can set this via the TZ environment variable in %post. e.g., export TZ=UTC. You may also need to set TZ=UTC (or the desired timezone) in %environment as well.
See additional info on timezones in R and a related question: How to change the default time zone in R?
For me, the following spell worked:
%environment
TZ=UTC
DEBIAN_FRONTEND=noninteractive
%post
export TZ=UTC
export DEBIAN_FRONTEND=noninteractive
apt-get update
apt-get install -y .....
Replace ..... with your packages. No worries about the specific timezone: normally it is taken from the host system, so UTC will be overridden unless you instruct Singularity otherwise at run time.
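For instance, a run-time override might look like this (using Singularity's SINGULARITYENV_ prefix for injecting environment variables; the image name is taken from the question):
SINGULARITYENV_TZ=Europe/Madrid singularity run nanos.sif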

Successfully created a virtualenv (using "mkproject") in Dockerfile, but can't run "workon" properly

Edit: Solved - typo.
I have a Dockerfile that successfully creates a virtualenv using virtualenvwrapper (along with setting up a heap of "standard" settings/packages in our normal environment). I am using the resulting image as a "base image" for further use. All good so far. However, the following Dockerfile (based on the first image, "base_image_14.04") falls down at the last line:
FROM base_image_14.04
USER root
RUN DEBIAN_FRONTEND=noninteractive \
apt-get update && apt-get install -y \
libproj0 libproj-dev \
libgeos-c1v5 libgeos-dev \
libjpeg62 libjpeg-dev \
zlib1g zlib1g-dev \
libfreetype6 libfreetype6-dev \
libgdal20 libgdal-dev \
&& rm -rf /var/lib/apt/lists
USER webdev
RUN ["/bin/bash", "-ic", "mkproject maproxy"]
EXPOSE 80
WORKDIR $PROJECT_HOME/mapproxy
ADD ./requirements.txt .
RUN ["/bin/bash", "-ic", "workon mapproxy && pip install -r requirements.txt"]
The "mkproject mapproxy" works fine. If I comment out the last line it builds successfully and I can spin up the container and run "workon mapproxy" manually, not a problem. But when I try and build with the last line, it gives a workon error:
ERROR: Environment 'mapproxy' does not exist. Create it with 'mkvirtualenv mapproxy'.
workon is being called, but for some reason it can't find the mapproxy virtualenv.
WORKON_HOME & PROJECT_HOME both exist (defined in the parent image) and point to the correct locations (and are used successfully by "mkproject mapproxy").
So why is workon returning an error when the mapproxy virtualenv exists? The same error happens when I isolate that last line into a third Dockerfile building on the second.
Solved: It was a simple typo. mkproject maproxy instead of mapproxy. :sigh:
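For the record, the corrected line reads:
RUN ["/bin/bash", "-ic", "mkproject mapproxy"]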
I am trying to build a Docker image and am running into similar problems.
The first question was: why use a virtualenv in Docker at all? The main reason, in a nutshell, is to minimize the effort of migrating an existing, working approach into a Docker container. I will eventually use docker-compose, but I wanted to start by getting my feet wet with it all in a single Docker container.
In my first attempt I installed almost everything with apt-get, including uwsgi. I installed my app "globally" with pip3. The app has command-line functionality and a separate Flask web app, hence the need for uwsgi. The command-line functionality works, but when I make a request to the Flask app, uwsgi/Python has a problem with the locale: Fatal Python error: Py_Initialize: Unable to get the locale encoding and ImportError: No module named 'encodings'
I have stripped away all my app-specific additions to narrow down the problem. This is the Dockerfile I'm using:
# Docker image definition for testing
FROM ubuntu:xenial
# Create a user
RUN useradd -G sudo -ms /bin/bash tester
RUN echo 'tester:password' | chpasswd
WORKDIR /home/tester
# Skipping apt-get update to save some build time. Some packages are kept
# to ensure they are the same as on the host setup.
RUN apt-get install -y python3 python3-dev python3-pip \
virtualenv virtualenvwrapper sudo nano && \
apt-get clean -qy
# After above, can we use those installed in rest of Dockerfile?
# Yes, but not always, such as with virtualenvwrapper. What about
# virtualenv? How do you "source" the script? Doesn't appear to be
# installed, as bash complains "source needs a single parameter"
ENV VIRTUALENVWRAPPER_PYTHON /usr/bin/python3
ENV VIRTUALENVWRAPPER_VIRTUALENV /usr/bin/virtualenv
RUN ["/bin/bash", "-c", "source", "/usr/share/virtualenvwrapper/virtualenvwrapper.sh"]
# Create a virtualenv so uwsgi can find locale
# RUN mkdir /home/tester/.virtualenv && virtualenv -p`which python3` /home/bts_tools/.virtualenv/bts_tools
RUN mkvirtualenv -p`which python3` bts_tools && \
workon bts_tools && \
pip3 --disable-pip-version-check install --upgrade bts_tools
USER tester
ENTRYPOINT ["/bin/bash"]
CMD ["--login"]
The build fails on the line where I try to source the virtualenvwrapper script. Bash complains that source needs an argument - the file to be sourced. So I comment out those RUN lines and it builds without error. When I run the resulting container, I see all the additions virtualenvwrapper makes to the ENV (you can see all of them by executing the "set" command without any args), and the script to be sourced is there too.
So my question is: why doesn't Docker find them? How does the Docker build process work if the results of previous RUNs or ENVs aren't applied for subsequent use in the Dockerfile? I know some things are applied and do work; for example, if you apt-get install nginx you can refer to /etc/nginx or alter things under that folder. You can create a user and set its password, or cd into its home folder, for example. If I move the WORKDIR before the RUN useradd -G, I see a warning from useradd that the home folder already exists. I tried to use the "time" program to time how long various things in the Dockerfile take, and docker complains it can't find 'time'.
So what exactly is going on? I have spent the last 3 days trying to figure this out. It just shouldn't be this difficult. What am I missing?
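One thing worth noting, as a sketch under the assumption that this is the failure mode here: each RUN starts a fresh shell, so anything sourced in one RUN is gone in the next, and the exec form above passes "source" and the script path as two separate arguments rather than one command string. Sourcing the wrapper and using its functions within a single RUN avoids both problems (paths and names as in the Dockerfile above):
RUN ["/bin/bash", "-c", "source /usr/share/virtualenvwrapper/virtualenvwrapper.sh && mkvirtualenv -p$(which python3) bts_tools"]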
Parts of the bts_tools Flask app worked when I wasn't using virtualenvs. Most of the app didn't work, and the issue was this locale problem. Since everything works on the host outside of Docker, and after trying to alter PATH, PYTHONHOME and PYTHONPATH in my uwsgi start script to overcome the dreaded "locale encoding" fatal error, I decided to replicate the host setup as closely as possible, since that didn't have the locale issue. When I have had this problem before, I could fix it by running dpkg-reconfigure python3 or with changes to PATH or ENV settings. If you google the problem you'll see many people have difficulties with Python & locale. It's almost enough reason to avoid using Python!
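A common workaround for that locale encoding error in minimal images, sketched here under the assumption that no UTF-8 locale is generated in the base image (this is not necessarily the author's eventual fix):
# Generate a UTF-8 locale and make it the default for Python and friends
RUN apt-get install -y locales && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8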
I posted about this locale issue elsewhere, if it helps.
