On a normal server, e.g. a Linode VPS, I would do:
localectl set-locale LANG=<locale>.utf8
timedatectl set-timezone <timezone>
But since systemd is not present (or does not work) in containers, I get:
Failed to create bus connection: No such file or directory
Now, my goal is just to change these settings without using systemd, but that approach seems to be undocumented. Is there a reference for non-systemd alternatives to these config tools?
Some documentation about locale settings is in the Arch wiki: https://wiki.archlinux.org/index.php/locale
In your Dockerfile, adjust LANG to your desired locale. You can add more than one locale to /etc/locale.gen to have a choice later.
This works on Debian and Arch, but locale-gen is missing on Fedora:
ENV LANG=en_US.utf8
RUN echo "$LANG UTF-8" >> /etc/locale.gen
RUN locale-gen
RUN update-locale --reset LANG=$LANG
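To check the result, you can build the image and print the active locale settings (locale-test is just a placeholder tag):
docker build -t locale-test .
docker run --rm locale-test locale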
localedef is more general and works on Fedora, too:
ENV LANG=en_US.UTF-8
RUN localedef --verbose --force -i en_US -f UTF-8 en_US.UTF-8
Put this in your Dockerfile:
ENV TZ=America/Denver
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
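To confirm the timezone took effect, check the date inside a container built from the image (tz-test is a placeholder tag):
docker build -t tz-test .
docker run --rm tz-test date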
Edit .bash_profile or .bashrc of the root user and add the following.
TZ='Asia/Kolkata'
export TZ
Save the file and commit the image after it's done.
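A minimal sketch of that commit step, with placeholder container and image names:
docker commit my-container my-image:with-tz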
Based on a technique used in sti-base, I came up with the following workaround (see https://github.com/ncoghlan/fedbuildenv/blob/09a18d91e7af64a45394669bac2595a4b628960d/Dockerfile#L26):
# Set a useful default locale
RUN echo "export LANG=en_US.utf-8" > /opt/export_LANG.sh
ENV BASH_ENV=/opt/export_LANG.sh \
ENV=/opt/export_LANG.sh \
PROMPT_COMMAND="source /opt/export_LANG.sh"
BASH_ENV covers non-interactive bash sessions, ENV covers sh sessions, and PROMPT_COMMAND covers interactive bash sessions.
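Assuming the image is tagged my-image (a placeholder), each case can be checked like this; note that whether ENV is honored by non-interactive sh depends on which shell provides /bin/sh:
docker run --rm my-image bash -c 'echo $LANG'   # non-interactive bash sources BASH_ENV
docker run --rm my-image sh -c 'echo $LANG'     # sh may source ENV, depending on the shell
docker run --rm -it my-image bash               # interactive bash: PROMPT_COMMAND re-sources the file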
This seems to be the Debian equivalent of locale-gen:
RUN localedef -v -c -i fr_FR -f UTF-8 fr_FR.UTF-8 || true
Related
I am following this link to create a Spark cluster. I am able to run the Spark cluster, but I have to give an absolute path to start spark-shell. I am trying to set environment variables, i.e. PATH and a few others, in start-spark.sh, but they are not being set inside the container. I tried printing them using printenv inside the container, but these variables are never reflected.
Am I trying to set the environment variables incorrectly? The Spark cluster is running successfully, though.
I am using docker-compose.yml to build and recreate an image and container.
docker-compose up --build
Dockerfile
# builder step used to download and configure spark environment
FROM openjdk:11.0.11-jre-slim-buster as builder
# Add Dependencies for PySpark
RUN apt-get update && apt-get install -y curl vim wget software-properties-common ssh net-tools ca-certificates python3 python3-pip python3-numpy python3-matplotlib python3-scipy python3-pandas python3-simpy
# JDBC driver download and install
ADD https://go.microsoft.com/fwlink/?linkid=2168494 /usr/share/java
RUN update-alternatives --install "/usr/bin/python" "python" "$(which python3)" 1
# Fix the value of PYTHONHASHSEED
# Note: this is needed when you use Python 3.3 or greater
ENV SPARK_VERSION=3.1.2 \
HADOOP_VERSION=3.2 \
SPARK_HOME=/opt/spark \
PYTHONHASHSEED=1
# Download and uncompress spark from the apache archive
RUN wget --no-verbose -O apache-spark.tgz "https://archive.apache.org/dist/spark/spark-${SPARK_VERSION}/spark-${SPARK_VERSION}-bin-hadoop${HADOOP_VERSION}.tgz" \
&& mkdir -p ${SPARK_HOME} \
&& tar -xf apache-spark.tgz -C ${SPARK_HOME} --strip-components=1 \
&& rm apache-spark.tgz
My Dockerfile-spark
When using SPARK_BIN="${SPARK_HOME}/bin/" under ENV in the Dockerfile, the environment variable gets set. It is visible inside the Docker container by using printenv.
FROM apache-spark
WORKDIR ${SPARK_HOME}
ENV SPARK_MASTER_PORT=7077 \
SPARK_MASTER_WEBUI_PORT=8080 \
SPARK_LOG_DIR=${SPARK_HOME}/logs \
SPARK_MASTER_LOG=${SPARK_HOME}/logs/spark-master.out \
SPARK_WORKER_LOG=${SPARK_HOME}/logs/spark-worker.out \
SPARK_WORKER_WEBUI_PORT=8080 \
SPARK_MASTER="spark://spark-master:7077" \
SPARK_WORKLOAD="master"
COPY start-spark.sh /
CMD ["/bin/bash", "/start-spark.sh"]
start-spark.sh
#!/bin/bash
. "$SPARK_HOME/bin/load-spark-env.sh"
export SPARK_BIN="${SPARK_HOME}/bin/" # This doesn't work here
export PATH="${SPARK_HOME}/bin/:${PATH}" # This doesn't work here
# When the spark work_load is master run class org.apache.spark.deploy.master.Master
if [ "$SPARK_WORKLOAD" == "master" ];
then
export SPARK_MASTER_HOST=`hostname` # This works here
cd $SPARK_BIN && ./spark-class org.apache.spark.deploy.master.Master --ip $SPARK_MASTER_HOST --port $SPARK_MASTER_PORT --webui-port $SPARK_MASTER_WEBUI_PORT >> $SPARK_MASTER_LOG
My file structure is:
dockerfile
dockerfile-spark # this uses pre-built image created by dockerfile
start-spark.sh # invoked by dockerfile-spark
docker-compose.yml # uses build parameter to build an image from dockerfile-spark
From inside the master container
root@3abbd4508121:/opt/spark# export
declare -x HADOOP_VERSION="3.2"
declare -x HOME="/root"
declare -x HOSTNAME="3abbd4508121"
declare -x JAVA_HOME="/usr/local/openjdk-11"
declare -x JAVA_VERSION="11.0.11+9"
declare -x LANG="C.UTF-8"
declare -x OLDPWD
declare -x PATH="/usr/local/openjdk-11/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
declare -x PWD="/opt/spark"
declare -x PYTHONHASHSEED="1"
declare -x SHLVL="1"
declare -x SPARK_HOME="/opt/spark"
declare -x SPARK_LOCAL_IP="spark-master"
declare -x SPARK_LOG_DIR="/opt/spark/logs"
declare -x SPARK_MASTER="spark://spark-master:7077"
declare -x SPARK_MASTER_LOG="/opt/spark/logs/spark-master.out"
declare -x SPARK_MASTER_PORT="7077"
declare -x SPARK_MASTER_WEBUI_PORT="8080"
declare -x SPARK_VERSION="3.1.2"
declare -x SPARK_WORKER_LOG="/opt/spark/logs/spark-worker.out"
declare -x SPARK_WORKER_WEBUI_PORT="8080"
declare -x SPARK_WORKLOAD="master"
declare -x TERM="xterm"
root@3abbd4508121:/opt/spark#
There are a couple of different ways to set environment variables in Docker, and a couple of different ways to run processes. A container normally runs one process, which is controlled by the image's ENTRYPOINT and CMD settings. If you docker exec a second process in the container, that does not run as a child process of the main process, and will not see environment variables that are set by that main process.
In the setup you show here, the start-spark.sh script is the main container process (it is the image's CMD). If you docker exec your-container printenv, it will see things set in the Dockerfile but not things set in this script.
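A quick way to see this behavior (a throwaway busybox container; the names are placeholders):
docker run -d --name envdemo busybox sh -c 'export FOO=bar; sleep 300'
docker exec envdemo printenv FOO   # prints nothing: FOO exists only in the main process
docker rm -f envdemo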
Things like filesystem paths will generally be fixed every time you run the container, no matter what command you're running there, so you can specify these in the Dockerfile. Note that this needs two separate ENV instructions, since a variable assigned in an ENV line cannot be referenced later in that same line:
ENV SPARK_BIN=${SPARK_HOME}/bin
ENV PATH=${SPARK_BIN}:${PATH}
You can specify both an ENTRYPOINT and a CMD in your Dockerfile; if you do, the CMD is passed as arguments to the ENTRYPOINT. This leads to a useful pattern where the CMD is a standard shell command, and the ENTRYPOINT is a wrapper that does first-time setup and then runs it. You can split your script into two:
#!/bin/sh
# spark-env.sh
. "${SPARK_BIN}/load-spark-env.snh"
exec "$#"
#!/bin/sh
# start-spark.sh
spark-class org.apache.spark.deploy.master.Master \
--ip "$SPARK_MASTER_HOST" \
--port "$SPARK_MASTER_PORT" \
--webui-port "$SPARK_MASTER_WEBUI_PORT"
Then, in your Dockerfile, specify both parts:
COPY spark-env.sh start-spark.sh /
ENTRYPOINT ["/spark-env.sh"] # must be JSON-array syntax
CMD ["/start-spark.sh"] # or any other valid CMD
This is useful for debugging, since it's straightforward to override the CMD in a docker run or docker-compose run instruction while leaving the ENTRYPOINT in place.
docker-compose run spark \
printenv
This launches a new container based on all of the same Dockerfile setup. When it runs, it runs printenv instead of the CMD in the image. This will do the first-time setup in the ENTRYPOINT script, and then the final exec "$@" line will run printenv instead of starting the Spark application. This shows you the environment the application will have when it starts.
I have a CircleCI pipeline set up for my test flow using Jest snapshots, and one of my snapshot tests keeps failing. I use JavaScript to generate a Date object (new Date("YYYY-MM-DD")). Locally it yields MM/DD/YYYY, but in the Docker image (node:8) it yields YYYY-MM-DD instead, so the snapshot test fails. I have tried to set up locales by:
docker:
- image: circleci/node:8
environment:
TZ: "America/Los_Angeles"
LANG: en_US.UTF-8
LANGUAGE: en_US.UTF-8
LC_ALL: en_US.UTF-8
But it complains that it cannot set the default locale, so I added:
- run:
name: Reconfigure Locale
command: sudo dpkg-reconfigure locales
which seemed to be the solution for most people who had the same problem, but not in my case.
I also tried building the same Docker image locally and testing it there, and it worked fine with these commands:
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y locales
RUN sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
So I tried these in CircleCI, and the sed command complained about permissions even when called with sudo.
Okay, just FYI: it was the Node version that caused the date format issue. I installed the full-icu npm package, which handles locales for the Node application. To revisit my problem: I had successfully installed the locale and set it to be the same as on my local machine, but Node doesn't pick up the locale from the system; it relies on the ICU locale data built into it. I hope this info helps.
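A minimal sketch of that fix as a Dockerfile, assuming a node:8 base image and /usr/src/app as the app directory (both are placeholders):
FROM node:8
WORKDIR /usr/src/app
# full-icu downloads the complete ICU locale data that Node 8's default (small-icu) build lacks
RUN npm install full-icu
# point Node at the downloaded ICU data
ENV NODE_ICU_DATA=/usr/src/app/node_modules/full-icu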
My container's locale is set to POSIX, and I want to change it. After I do that, I exit and re-enter the container, and the locale is back to POSIX.
I don't want to build a new image or run a new container because we have a lot of containers in several machines.
Running this:
DEBIAN_FRONTEND=noninteractive apt-get install -y locales
sed -i -e 's/# pt_PT ISO-8859-1/pt_PT ISO-8859-1/' /etc/locale.gen
dpkg-reconfigure --frontend=noninteractive locales
export LANGUAGE=pt_PT
export LANG=pt_PT
export LC_ALL=pt_PT
This works great in the running container, but exiting and re-entering the container loses the changes.
I already tried this code in the container's entrypoint, but the exports don't have any effect.
Those settings are bound to the shell session, not to the OS. To make them OS-bound, you would write them to OS files, but when the service restarts it will apply the image without those changes.
So this has to be set in the Dockerfile, to be image-bound, something like:
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y locales && \
sed -i -e 's/# pt_PT ISO-8859-1/pt_PT ISO-8859-1/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales
ENV LANG pt_PT
ENV LANGUAGE pt_PT
ENV LC_ALL pt_PT
Changes can't be stored in the container. I think the best way is to commit your changes to the container and create a new image.
You can use docker commit for this purpose.
Ref: https://docs.docker.com/engine/reference/commandline/commit/
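For example (the container and image names are placeholders):
docker commit my-container my-image:pt-locale
docker run -it my-image:pt-locale bash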
I'm trying to build a Docker image based on oracle/database:11.2.0.2-xe (which is based on Oracle Linux, based on RHEL) and want to change the system locale in this image (using some RUN command inside a Dockerfile).
According to this guide I should use localectl set-locale <MYLOCALE>, but this command fails with a Failed to create bus connection: No such file or directory message. This is a known Docker issue for commands that require systemd to be running.
I tried to start systemd anyway (using /usr/sbin/init as the first process, as well as using -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /run thanks to this help), but then localectl set-locale failed with a Could not get properties: Connection timed out message.
So I'm now trying to avoid using localectl to change my system's global locale. How can I do this?
According to this good guide on setting the locale on Linux, I should use:
localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15
But this command failed with:
cannot read character map directory `/usr/share/i18n/charmaps': No such file or directory
This SO reply indicated one could use yum reinstall glibc-common -y to fix this, and it worked.
So my final working Dockerfile is:
RUN yum reinstall glibc-common -y && \
localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15 && \
echo "LANG=fr_FR.ISO-8859-15" > /etc/locale.conf
ENV LANG fr_FR.ISO-8859-15
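To sanity-check the result, you can query the locale in a container built from the image (oracle-fr is a placeholder tag; --entrypoint bypasses the base image's own entrypoint):
docker build -t oracle-fr .
docker run --rm --entrypoint locale oracle-fr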
I've got the following kickstart file (ks.cfg) for a raw CentOS installation. I'm trying to implement a %post process that will allow the installation to be modified, using yum functions (install, groupremove, etc.). The whole ks file is at the end of the question.
I'm not sure why, but the following kickstart is not running yum install mysql and yum install mysql-server in the %post process.
After the install, entering service mysql start results in an error message saying mysql is not found. I am, however, able to run the yum install commands after installation, and mysql gets installed.
I know I'm missing something subtle, but I'm not sure what it is.
%post
yum install mysql -y <<<<<<<<<<<<<<NOT WORKING!!!!!
yum install mysql-server -y <<<<<<<<<<<<<<NOT WORKING!!!!!
%end
Thanks
ks.cfg
[root@localhost ~]# cat /root/anaconda-ks.cfg
# Kickstart file automatically generated by anaconda.
#version=DEVEL
install
cdrom
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --iscrypted $1$JCZKA/by$sVSHffsPr3ZDUp6m7c5gt1
# Reboot after installation
reboot
firewall --service=ssh
authconfig --useshadow --enablemd5
selinux --enforcing
timezone --utc America/Los_Angeles
bootloader --location=mbr --driveorder=sda --append=" rhgb crashkernel=auto quiet"
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
#clearpart --all --initlabel
#part /boot --fstype=ext4 --size=200
#part / --fstype=ext4 --grow --size=3000
#part swap --grow --maxsize=4064 --size=2032
repo --name="CentOS" --baseurl=cdrom:sr1 --cost=100
%packages
@Base
@Core
@Desktop
@Fonts
@General Purpose Desktop
@Internet Browser
@X Window System
binutils
gcc
kernel-devel
make
patch
python
%end
%post
cp /boot/grub/menu.lst /boot/grub/grub.conf.bak
sed -i 's/ rhgb//' /boot/grub/grub.conf
cp /etc/rc.d/rc.local /etc/rc.local.backup
cat >>/etc/rc.d/rc.local <<EOF
echo
echo "Installing VMware Tools, please wait..."
if [ -x /usr/sbin/getenforce ]; then oldenforce=\$(/usr/sbin/getenforce); /usr/sbin/setenforce permissive || true; fi
mkdir -p /tmp/vmware-toolsmnt0
for i in hda sr0 scd0; do mount -t iso9660 /dev/\$i /tmp/vmware-toolsmnt0 && break; done
cp -a /tmp/vmware-toolsmnt0 /opt/vmware-tools-installer
chmod 755 /opt/vmware-tools-installer
cd /opt/vmware-tools-installer
mv upgra32 vmware-tools-upgrader-32
mv upgra64 vmware-tools-upgrader-64
mv upgrade.sh run_upgrader.sh
chmod +x /opt/vmware-tools-installer/*upgr*
umount /tmp/vmware-toolsmnt0
rmdir /tmp/vmware-toolsmnt0
if [ -x /usr/bin/rhgb-client ]; then /usr/bin/rhgb-client --quit; fi
cd /opt/vmware-tools-installer
./run_upgrader.sh
mv /etc/rc.local.backup /etc/rc.d/rc.local
rm -rf /opt/vmware-tools-installer
sed -i 's/3:initdefault/5:initdefault/' /etc/inittab
mv /boot/grub/grub.conf.bak /boot/grub/grub.conf
if [ -x /usr/sbin/getenforce ]; then /usr/sbin/setenforce \$oldenforce || true; fi
if [ -x /bin/systemd ]; then systemctl restart prefdm.service; else telinit 5; fi
EOF
/usr/sbin/adduser test
/usr/sbin/usermod -p '$1$QcRcMih7$VG3upQam.lF4BFzVtaYU5.' test
/usr/sbin/adduser test1
/usr/sbin/usermod -p '$1$LMyHixbC$4.aATdKUb2eH8cCXtgFNM0' test1
/usr/bin/chfn -f 'ruser' root
%end
%post
yum install mysql -y <<<<<<<<<<<<<<NOT WORKING!!!!!
yum install mysql-server -y <<<<<<<<<<<<<<NOT WORKING!!!!!
%end
When I faced the same problem, it was caused by line endings. Check the line endings of ks.cfg: they should be LF, not CR+LF or CR.
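A quick way to check and fix that, run on the machine where ks.cfg lives:
file ks.cfg             # reports "with CRLF line terminators" if the endings are wrong
sed -i 's/\r$//' ks.cfg # strip carriage returns, leaving plain LF endings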
It may also help if you:
Try the system-config-kickstart tool.
Find the generated /root/anaconda-ks.cfg, though there may be no %post section.
Cheers.
You should just put mysql and mysql-server into the %packages section; there's no need to do this in %post.
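Applied to the kickstart file above, the %packages section becomes:
%packages
@Base
@Core
binutils
gcc
kernel-devel
make
patch
python
mysql
mysql-server
%end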