Ubuntu 20.04 packages missing all mo-files in docker image - docker

For some reason, the iso-codes package does not install its files inside a docker image.
Here is what I consider a more or less minimal Dockerfile:
FROM ubuntu:20.04
ENV TZ=Etc/UTC
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get upgrade -y && apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL=en_US.UTF-8
RUN apt-get update && apt-get install -y iso-codes
RUN ls /usr
I have left the locale-related settings in, in case they are relevant. The same problem appears when I comment out everything except FROM and RUN apt-get update && apt-get install -y iso-codes.
Building:
docker build -t 'mytry:1' .
Now when I run the following, I do not see anything in the directory where mo-files should reside:
docker run --cidfile /tmp/docker_test.cid 'mytry:1' ls -R /usr/share/locale/en/LC_MESSAGES/
However, dpkg -l shows it's there:
ii iso-codes 4.4-1 all ISO language, territory, currency, script codes and their translations
And dpkg -L lists files in that directory:
/usr/share/locale/en/LC_MESSAGES
/usr/share/locale/en/LC_MESSAGES/iso_3166-2.mo
/usr/share/locale/en/LC_MESSAGES/iso_3166_2.mo
What is it that I am missing? (I am using this specific docker run invocation just for simplicity; the same problem arises in normal usage.)
I also tried find / -name 'iso_3166-1.mo', but it seems there is no such file anywhere.
Also, it seems that poedit-common, which should have mo-files as well, is missing them, so the problem is more general.
docker -v gives
Docker version 20.10.7, build 20.10.7-0ubuntu5~20.04.2

We have found the reason:
cat /etc/dpkg/dpkg.cfg.d/excludes
# Drop all man pages
path-exclude=/usr/share/man/*
# Drop all translations
path-exclude=/usr/share/locale/*/LC_MESSAGES/*.mo
# Drop all documentation ...
path-exclude=/usr/share/doc/*
# ... except copyright files ...
path-include=/usr/share/doc/*/copyright
# ... and Debian changelogs
path-include=/usr/share/doc/*/changelog.Debian.*
In order to get the translations, one has to comment out the path-exclude line for /usr/share/locale/... (or replace the whole file) before installing packages.
Of course, the size of the image can grow as a result.
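A minimal sketch of that fix, assuming you only want the translations back and are happy to keep the other exclusions (man pages, docs) in place:
FROM ubuntu:20.04
# Drop only the locale exclusion before installing any packages
RUN sed -i '/path-exclude=\/usr\/share\/locale/d' /etc/dpkg/dpkg.cfg.d/excludes
RUN apt-get update && apt-get install -y iso-codes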

Related

GUI menu in docker container freezes (ubuntu parent image)

I've been trying to run a docker container including the esp8266 toolchain and ESP8266_RTOS_SDK.
After the Dockerfile is done, the 'Espressif IoT Menu' pops up but freezes instantly and I can't control anything (screenshot of the menu). I thought maybe I had to keep the container running endlessly, but that didn't help either. I tried this command: RUN tail -f /dev/null.
I also thought the container might be missing some programs needed for a terminal.
Here is my Dockerfile (first time working with docker):
FROM ubuntu:latest
# -------------------------- TOOLCHAIN --------------------------------------
WORKDIR /
RUN apt-get update && apt-get install -y software-properties-common
RUN apt update && add-apt-repository universe
RUN apt-get -y install gcc wget git make libncurses-dev flex bison gperf python3 python3-serial python3-pip
RUN mkdir -p downloads esp8266
ADD https://dl.espressif.com/dl/xtensa-lx106-elf-linux64-1.22.0-100-ge567ec7-5.2.0.tar.gz downloads
RUN cd esp8266;tar -xzf /downloads/xtensa-lx106-elf-linux64-1.22.0-100-ge567ec7-5.2.0.tar.gz
ENV PATH=/esp8266/xtensa-lx106-elf/bin:$PATH
# -------------------------- ESP8266_RTOS_SDK ---------------------------------
RUN cd esp8266;git clone https://github.com/espressif/ESP8266_RTOS_SDK.git
ENV IDF_PATH="/esp8266/ESP8266_RTOS_SDK"
RUN ln -s /usr/bin/python3 /usr/bin/python #otherwise python wont be found
ENV TERM xterm #otherwise "terminal unknown"
RUN python3 -m pip install --user -r $IDF_PATH/requirements.txt
RUN cd esp8266;cp -r $IDF_PATH/examples/get-started/hello_world .
RUN cd /esp8266/hello_world;make menuconfig
I build the image with:
sudo docker build -f $(pwd)/dEsp8266 -t espenv .
The guides I used:
For the toolchain: https://docs.espressif.com/projects/esp8266-rtos-sdk/en/latest/get-started/linux-setup.html
For the RTOS_SDK: https://docs.espressif.com/projects/esp8266-rtos-sdk/en/latest/get-started/index.html#get-started-get-esp-idf
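One hypothesis: make menuconfig is an interactive ncurses program, and a RUN step during docker build has no terminal attached, so the menu cannot take input. A sketch of doing that step interactively instead, reusing the image name from the build command above:
# Build the image with the RUN ... make menuconfig line removed from the Dockerfile
sudo docker build -f $(pwd)/dEsp8266 -t espenv .
# Then run menuconfig in an interactive container, where a TTY is available
sudo docker run -it espenv bash -c 'cd /esp8266/hello_world && make menuconfig'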

Why can't I run a command in my Dockerfile that I can run from within my Docker container?

I have the following Dockerfile. (The "n" package is a Node.js version manager.)
FROM ubuntu:18.04
SHELL ["/bin/bash", "-c"]
# Need to install curl, git, build-essential
RUN apt-get clean
RUN apt-get update
RUN apt-get install -y build-essential
RUN apt-get install -y curl
RUN apt-get install -y git
# Per docs, the following allows automated installation of n without installing node https://github.com/mklement0/n-install
RUN curl -L https://git.io/n-install | bash -s -- -y
# This refreshes the terminal to use "n"
RUN . /root/.bashrc
# Install node version 6.9.0
RUN /root/n/bin/n 6.9.0
This works perfectly and does everything I expect.
Unfortunately, after refreshing the terminal via RUN . /root/.bashrc, I can't seem to call "n" directly and instead I have to reference the exact binary using RUN /root/n/bin/n 6.9.0.
However, when I docker run -it container /bin/bash into the container and run the same sequence of commands, I am able to call "n" directly, as in n 6.9.0, with no issues.
Why does the following command not work in the Dockerfile?
RUN n 6.9.0
I get the following error when I try to build my image:
/bin/bash: n: command not found
Each RUN command runs a separate shell and a separate container; any environment variables set in a RUN command are lost at the end of that RUN command. You must use the ENV command to permanently change environment variables like $PATH.
# Does nothing
RUN export FOO=bar
# Does nothing, if all the script does is set environment variables
RUN . ./vars.sh
# Needed to set variables
ENV FOO=bar
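If you want to keep the n-based setup from the question, a minimal sketch of that fix, assuming n's binaries live under /root/n/bin as the question's direct invocation suggests:
# Put n's bin directory on PATH for every later RUN command (and at runtime)
ENV PATH=/root/n/bin:$PATH
# Now the bare command resolves
RUN n 6.9.0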
Since a Docker image generally only contains one prepackaged application and its runtime, you don't need version managers like this. Install the single version of the language runtime you need, or use a prepackaged image with it preinstalled.
# Easiest
FROM node:6.9.0
# The hard way
FROM ubuntu:18.04
ARG NODE_VERSION=6.9.0
ENV NODE_VERSION=${NODE_VERSION}
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
apt-get install --assume-yes --no-install-recommends \
curl
RUN cd /usr/local \
&& curl -LO https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz \
&& tar xJf node-v${NODE_VERSION}-linux-x64.tar.xz \
&& rm node-v${NODE_VERSION}-linux-x64.tar.xz \
&& for f in node npm npx; do \
ln -s ../node-v${NODE_VERSION}-linux-x64/bin/$f bin/$f; \
done

Completely delete an Ubuntu-based docker container along with the underlying layers of MySQL and JDK

I have created a custom docker image using Ubuntu 16.04 xenial as the base image, and installed a JDK 1.8 and MySQL layer on it. The following is a sample snapshot of my Dockerfile to create the image.
# Use 0.9.19 as this is the last tag that uses Ubuntu 16.04 xenial
FROM phusion/baseimage:0.9.19
# Set correct environment variables.
ENV HOME /root
# Use baseimage-docker's init system.
CMD ["/sbin/my_init"]
# Maintained by Abhi
MAINTAINER Abhishek <abhi#gmail.com>
# Set the locale. Default locale causes some Perl regexes to fail.
# http://jaredmarkell.com/docker-and-locales/
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
ENV MYSQL_PWD pass123admin
# Fix warning "debconf: delaying package configuration, since apt-utils is not installed " during build.
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
# The base Ubuntu images don't come with sudo, so Add sudo
RUN apt-get update && apt-get install -y sudo
# Install JDK 8
RUN echo "deb http://debian.opennms.org/ stable main" >> /etc/apt/sources.list \
&& curl -sS http://debian.opennms.org/OPENNMS-GPG-KEY | apt-key add - \
&& apt-get update -y \
&& echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" | debconf-set-selections \
&& apt-get install oracle-java8-installer -y
## Install all software available via apt-get
RUN echo "mysql-server mysql-server/root_password password $MYSQL_PWD" | debconf-set-selections \
&& echo "mysql-server mysql-server/root_password_again password $MYSQL_PWD" | debconf-set-selections \
&& apt-get update -y && apt-get install -y \
mysql-server
Everything works as expected for building the image and creating the container.
But whenever I try to remove the container created from this image, the MySQL layer within it is not removed.
I used the $ docker rm mycontainer command to delete the container created from the above image.
But when I recreated the container from the above image (also using the --force-recreate option), I could still see my previous data in the MySQL database. This means $ docker rm cannot remove the container completely along with its underlying layers.
Is there a way to completely remove a docker container, including its underlying layers of MySQL or JDK?
Thanks in advance.
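One hypothesis worth checking: if the MySQL data directory (/var/lib/mysql) ends up in a Docker volume, docker rm leaves that volume behind, and a recreated container can pick the old data up again. A sketch of the commands involved:
# List volumes that might still hold the old MySQL data
docker volume ls
# Remove the container together with its anonymous volumes
docker rm -v mycontainer
# With docker-compose (where --force-recreate comes from), also drop named volumes
docker-compose down -v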

How to save my installations on Ubuntu image into Docker

Docker Codes
# Import Ubuntu image to Docker
docker pull ubuntu:16.04
docker run -it ubuntu:16.04
# Instsall Python3 and pip3
apt-get update
apt-get install -y python3 python3-pip
# Install Selenium
pip3 install selenium
# Install BeautifulSoup4
pip3 install beautifulsoup4
# Install library for PhantomJS
apt-get install -y wget libfontconfig
# Downloading and installing binary
mkdir -p /home/root/src && cd $_
tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2
cd phantomjs-2.1.1-linux-x86_64/bin/
cp phantomjs /usr/local/bin/
# Installing font
apt-get install -y fonts-nanum*
Question
I am trying to import an Ubuntu image into docker and install several packages including python3, pip3, bs4, and PhantomJS. Then I want to save all of this configuration in Docker as "ubuntu-phantomjs". As I am currently on the Ubuntu image, anything that starts with the 'docker' command does not work. How can I save my image?
Here is the dockerfile:
# Import Ubuntu image to Docker
FROM ubuntu:16.04
# Install Python3, pip3, library and fonts
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
wget libfontconfig \
fonts-nanum* \
&& rm -rf /var/lib/apt/lists/*
RUN pip3 install selenium beautifulsoup4
# Downloading and installing binary
# (the phantomjs-2.1.1-linux-x86_64.tar.bz2 archive is assumed to be available at this point, e.g. copied or downloaded in an earlier step)
RUN mkdir -p /home/root/src && cd /home/root/src && tar jxvf phantomjs-2.1.1-linux-x86_64.tar.bz2 && cd phantomjs-2.1.1-linux-x86_64/bin/ && cp phantomjs /usr/local/bin/
Now after saving the code in a file named Dockerfile, open a terminal in the directory where the file is stored, and run the following command:
$ docker build -t ubuntu-phantomjs .
-t means that the target image is tagged ubuntu-phantomjs, and . means that the build context for docker is the current directory. The above Dockerfile is not a standard one and does not follow all the good practices mentioned here. You can change this file according to your needs; read the documentation for more help.
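A quick check of the resulting image, assuming the build succeeds and phantomjs ended up on the PATH as intended:
# Start throwaway containers from the new image and verify the installs
docker run --rm ubuntu-phantomjs phantomjs --version
docker run --rm ubuntu-phantomjs python3 -c "import selenium, bs4; print('ok')"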

Syntaxnet spec file and Docker?

I'm trying to learn SyntaxNet. I have it running through Docker. But I really don't know much about either SyntaxNet or Docker. On the GitHub SyntaxNet page it says
The SyntaxNet models are configured via a combination of run-time
flags (which are easy to change) and a text format TaskSpec protocol
buffer. The spec file used in the demo is in
syntaxnet/models/parsey_mcparseface/context.pbtxt.
How exactly do I find the spec file to edit it?
I compiled SyntaxNet in a Docker container using these Instructions.
FROM java:8
ENV SYNTAXNETDIR=/opt/tensorflow PATH=$PATH:/root/bin
RUN mkdir -p $SYNTAXNETDIR \
&& cd $SYNTAXNETDIR \
&& apt-get update \
&& apt-get install git zlib1g-dev file swig python2.7 python-dev python-pip -y \
&& pip install --upgrade pip \
&& pip install -U protobuf==3.0.0b2 \
&& pip install asciitree \
&& pip install numpy \
&& wget https://github.com/bazelbuild/bazel/releases/download/0.2.2b/bazel-0.2.2b-installer-linux-x86_64.sh \
&& chmod +x bazel-0.2.2b-installer-linux-x86_64.sh \
&& ./bazel-0.2.2b-installer-linux-x86_64.sh --user \
&& git clone --recursive https://github.com/tensorflow/models.git \
&& cd $SYNTAXNETDIR/models/syntaxnet/tensorflow \
&& echo "\n\n\n" | ./configure \
&& apt-get autoremove -y \
&& apt-get clean
RUN cd $SYNTAXNETDIR/models/syntaxnet \
&& bazel test --genrule_strategy=standalone syntaxnet/... util/utf8/...
WORKDIR $SYNTAXNETDIR/models/syntaxnet
CMD [ "sh", "-c", "echo 'Bob brought the pizza to Alice.' | syntaxnet/demo.sh" ]
# COMMANDS to build and run
# ===============================
# mkdir build && cp Dockerfile build/ && cd build
# docker build -t syntaxnet .
# docker run syntaxnet
First, comment out the CMD line in the Dockerfile, then create and cd into an empty directory on your host machine. You can then create a container from the image, mounting a directory in the container to your hard drive:
docker run -it --rm -v $(pwd):/tmp syntaxnet bash
You'll now have a bash session in the container. Copy the spec file into /tmp from /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt (I'm guessing that's where it is given the info you've provided above -- I can't get your dockerfile to build an image, so I can't confirm it; you can always run find / -name context.pbtxt to locate it), and exit the container (ctrl-d or exit).
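Inside that bash session the copy itself is a single command (same guessed path as above; adjust it to whatever find reports):
# Copy the spec file into the mounted directory so it shows up on the host
cp /opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt /tmp/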
You now have the file on your host's disk, ready to edit, but you really want it in a running container. If the directory it comes from contains only that file, then you can simply mount your host directory at that path in the container. If it contains other things, then you can use a so-called bootstrap script to move the file from your mounted directory (in the example above, /tmp) to its home location. Alternatively, you may be able to tell the software where to find the spec file with a flag, but that will take more research.
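For the mounting step, a sketch assuming the same guessed path and that the edited context.pbtxt sits in your current host directory (a single-file bind mount also works when the directory holds other files):
# Override the spec file baked into the image with the edited copy from the host
docker run --rm -v $(pwd)/context.pbtxt:/opt/tensorflow/syntaxnet/models/parsey_mcparseface/context.pbtxt syntaxnet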
