envsubst command getting stuck in a container - docker

I have a requirement that before the application runs, part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The envsubst line above should read the environment variable $MYENV and substitute its value. But when I open the file app.js, it's empty.
I checked that the environment variable exists in the container, and it does. Any reason its value is not read and substituted?
I also tried the same command inside the container and it works; it only fails when I run the image.

This is likely because $MYENV is not available to envsubst when the RUN instruction executes during the build.
Each RUN command runs in its own shell.
From the Docker documentation:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
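As a sketch of one alternative (my assumption, not part of the original answer): if the value is known at build time, you can pass it in as a build argument so the RUN instruction can see it. The MYENV name and file paths come from the question; the example value is made up.
ARG MYENV
ENV MYENV=${MYENV}
# ENV makes MYENV part of the build environment, so envsubst sees it here
RUN envsubst < src/js/envapp.js > src/js/app.js
Built with, for example: docker build --build-arg MYENV=http://backend:8000 .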

I encountered the same issue, and after much research and fishing through the internet I managed to find a few workarounds. Below I'll list them along with the risks identifiable at the time of this answer.
Solutions:
1.) apt-get install -y gettext - a standard GNU language package library; one of the tools it includes is envsubst. I can confirm that it works for the Docker ubuntu:latest image, and it should work for every flavoured version.
2.) npm install envsub - depending on the use case, this approach is better suited to Node-based projects.
3.) The envsubst CLI project - in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's also another option.
Risks:
apt-get install -y gettext:
1.) gettext - this approach would NOT be ideal for VMs because, as with any package library, it requires maintenance and updates over time. However, this isn't necessary for Docker, because once a container is initialized and running we can create a bash script to add the package, substitute env vars, and then remove the package (see the sketch after this list).
2.) It's a bad idea for VMs because it can be used to execute arbitrary code.
npm install envsub:
1.) envsub - updating packages; also, this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
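A minimal sketch of that install-substitute-remove idea, assuming a Debian/Ubuntu-based image and that the application (with the paths from the original question) lives under /app; both are assumptions, so adjust them to your project:
#!/bin/bash
# entrypoint.sh - install gettext, render the template, then drop the package again
set -e
apt-get update && apt-get install -y --no-install-recommends gettext-base
envsubst < /app/src/js/envapp.js > /app/src/js/app.js
apt-get purge -y gettext-base && rm -rf /var/lib/apt/lists/*
exec "$@"   # hand control to the container's main command, e.g. npm start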
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
gettext package library info: https://www.gnu.org/software/gettext/
gettext risk: https://ubuntu.com/security/notices/USN-3815-2
PHP gettext: apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst

I suggest that, since you are using Node, you use the npm envsub module.
This module is well tested and is developed with Docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
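A minimal usage sketch, based on my reading of the envsub documentation at the time (check the package README for the exact CLI flags); the file paths are taken from the original question:
# install it alongside the project
npm install --save-dev envsub
# substitute environment variables from the template into the output file
npx envsub src/js/envapp.js src/js/app.js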

I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I try to write the output back to the same file it reads from: the shell truncates the file for the output redirection before envsubst gets a chance to read it. For example, this does not work:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
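A related sketch, assuming the moreutils package is available in the image (its sponge tool soaks up all of stdin before writing, so it avoids the truncation without a named temp file):
RUN apt-get update && apt-get install -y moreutils
RUN envsubst < file.conf | sponge file.conf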

Related

Mamba installing a package into wrong environment

The background is, I'm responsible for maintaining a fancy Docker image that is used by our team for analytics. It uses a Jupyter notebook image as the base, and then adds various customisations, extra packages, etc.
One of the team members recently wanted to run Tensorflow. No problem, I'll just run mamba install and add it to the image. However, this created an issue: Tensorflow 2.4.3 (the latest version) is somehow incompatible with R 4.1.1 (also the latest version) or something else in the ecosystem, causing R to be downgraded to 3.6.3. So I created a new environment and installed TF into that:
FROM hongooi/jupytermodelrisk:1.0.0
RUN mamba create -n tensorflow --clone base
# Make RUN commands use the new environment
RUN echo "conda activate tensorflow" >> ~/.bashrc
SHELL ["/bin/bash", "--login", "-c"]
RUN mamba install -y 'tensorflow=2.4.3'
But when I rebuilt the image, I found that while the tensorflow env had been created, the Tensorflow package had been installed into the base env, not the tensorflow env. Has anyone else encountered this? I can verify, if I login to the container, that the tensorflow env has been created: it just doesn't contain the Tensorflow package.
I don't get this problem if I run the create, activate and install commands from inside the container. It's only when I try to do it in the Dockerfile.
I use mamba instead of conda because the latter takes forever to run, given the number of packages installed. In fact, trying to run conda install tensorflow crashes after ~5 hours.
Not an expert on Dockerfiles, but in general you can just use the -n flag on the install command to specify the target environment for the installation, like so:
mamba install -n tensorflow -y tensorflow=2.4.3
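Applied to the Dockerfile from the question, a sketch might look like this (base image and package pin taken from the question; whether the conda activate line in ~/.bashrc takes effect in non-interactive RUN shells is the unreliable part, so the -n flag targets the environment explicitly):
FROM hongooi/jupytermodelrisk:1.0.0
RUN mamba create -n tensorflow --clone base
# Install directly into the named environment instead of relying on activation
RUN mamba install -n tensorflow -y 'tensorflow=2.4.3'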

Can't figure out how to use newer version of g++ in Centos 6 Docker image

I have a Docker image that is used for running tests in Jenkins and Bamboo. I need to upgrade the version of g++ used (to something with C++11 support).
I tried using a Dockerfile that looks roughly like the following one:
FROM docker.blahblahblah/centos/6.6:latest
RUN yum install -y git gcc-c++ imake centos-release-scl-rh devtoolset-7-toolchain
# I've tried putting this into /etc/bashrc, ~/.bashrc, ~/.bash_profile
RUN echo "source scl_source enable devtoolset-7" >> ~/.bashrc
My issue is that when g++ is used within the container, it uses the older one, instead of the newer one in devtoolset-7, even though the newer one should be sourced from the bashrc. (Maybe I'm misunderstanding how Docker will try to run everything.)
Could anyone point me in the right direction here?
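No answer appears in this thread, but the symptom mirrors the envsubst answer above: RUN commands execute under /bin/sh -c, which is not a login shell, so ~/.bashrc is never read. A hedged sketch of one common workaround is to enable the toolset inside the same RUN command that needs it (image and package names taken from the question):
FROM docker.blahblahblah/centos/6.6:latest
# the SCL release package has to be installed first so the devtoolset repo is available
RUN yum install -y git gcc-c++ imake centos-release-scl-rh
RUN yum install -y devtoolset-7-toolchain
# enable devtoolset-7 in the same shell invocation that uses g++,
# since each RUN starts a fresh non-login shell that skips ~/.bashrc
RUN source scl_source enable devtoolset-7 && g++ --version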

Check Syntax errors in Dockerfile [duplicate]

If a Dockerfile is written with mistakes, for example:
CMD ["service", "--config", "/etc/service.conf] (missing quote)
Is there a way to lint it to detect such mistake before building?
Try:
Either the Haskell Dockerfile Linter ("hadolint"), also available online. hadolint parses the Dockerfile into an AST and performs checking and validation based on best-practice Docker image rules. It also uses ShellCheck to lint the Bash code in RUN commands.
Or dockerlinter (Node.js-based).
I've performed a simple test against a simple Dockerfile with RUN, ADD, ENV and CMD. dockerlinter was smart about grouping violations of the same rule together, but it was not able to inspect as thoroughly as hadolint, possibly due to the lack of ShellCheck to statically analyze the Bash code.
Although dockerlinter falls short in the scope it can lint, it does seem to be much easier to install. npm install -g dockerlinter will do, while compiling hadolint requires a Haskell compiler and a build environment that takes forever to compile.
$ hadolint ./api/Dockerfile
L9 SC2046 Quote this to prevent word splitting.
L11 SC2046 Quote this to prevent word splitting.
L8 DL3020 Use COPY instead of ADD for files and folders
L10 DL3020 Use COPY instead of ADD for files and folders
L13 DL3020 Use COPY instead of ADD for files and folders
L18 DL3020 Use COPY instead of ADD for files and folders
L21 DL3020 Use COPY instead of ADD for files and folders
L6 DL3008 Pin versions in apt get install. Instead of `apt-get install <package>` use `apt-get install <package>=<version>`
L6 DL3009 Delete the apt-get lists after installing something
L6 DL3015 Avoid additional packages by specifying `--no-install-recommends`
$ dockerlint ./api/Dockerfile
WARN: ADD instruction used instead of COPY on line 8, 10, 13, 18, 21
ERROR: ./api/Dockerfile failed.
Update in 2018: since hadolint now has an official Docker repository, you can get the executable quickly:
id=$(docker create hadolint/hadolint:latest)
docker cp "$id":/bin/hadolint .
docker rm "$id"
or you can use this command
docker container run --rm -i hadolint/hadolint hadolint - < Dockerfile
This is a statically compiled executable (according to ldd hadolint), so it should run regardless of installed libraries. A reference on how the executable is built: https://github.com/hadolint/hadolint/blob/master/docker/Dockerfile.
If you have a RedHat subscription, you can access the "Linter for Dockerfile" application directly at https://access.redhat.com/labs/linterfordockerfile/; information about the application is located at https://access.redhat.com/labsinfo/linterfordockerfile
This Node.js application is also available on GitHub https://github.com/redhataccess/dockerfile_lint if you prefer to run it locally.
I use npm's dockerfile_lint very successfully in my CI pipeline. You can add or extend rules. Using package.json you can create different configs for different jobs. There are both
Docker CLI
docker run -it --rm --privileged -v `pwd`:/root/ \
projectatomic/dockerfile-lint \
dockerfile_lint [-f Dockerfile]
docker run -it --rm --privileged -v `pwd`:/root/ \
-v /var/run/docker.sock:/var/run/docker.sock \
projectatomic/dockerfile-lint \
dockerfile_lint image <imageid>
and Atomic CLI available
atomic run projectatomic/dockerfile-lint
atomic run projectatomic/dockerfile-lint image <imageid>
You can also lint existing images, using the image <imageid> form shown above.
I created dockerfile-validator as an extension for VS Code, which uses the dockerfile-lint mentioned in a previous answer. By default it uses dockerfile-lint default rules, but in VS code User Settings (dockerfile-validator.rulefile.path) you can specify a path to a custom rule file with your own coding standards.
Recently, I came across dockerfilelint, which is Node.js-based.
dockerfilelint Dockerfile
It supports the following rules and rudimentary CMD checks:
required_params
uppercase_commands
from_first
invalid_line
sudo_usage
apt-get_missing_param
apt-get_recommends
apt-get-upgrade
apt-get-dist-upgrade
apt-get-update_require_install
apkadd-missing_nocache_or_updaterm
apkadd-missing-virtual
invalid_port
invalid_command
expose_host_port
label_invalid
missing_tag
latest_tag
extra_args
missing_args
add_src_invalid
add_dest_invalid
invalid_workdir
invalid_format
apt-get_missing_rm
deprecated_in_1.13
Hadolint seems like a better option, but this may suffice for simple needs. Also, GitHub's super-linter uses this.
I'm not too familiar with Go, but it looks like you can simply call the Parse method as is done in the test suite here. If that does not return an error, then your lint passes. I'm assuming that's trivial to expose to a script or something to call during development.

Setup Docker Jenkins with default plugins

I want to create a Jenkins-based image with some plugins installed, as well as npm. To do so I have the following Dockerfile:
FROM jenkins:2.60.3
RUN install-plugins.sh bitbucket
USER root
RUN apt-get update
RUN curl -sL https://deb.nodesource.com/setup_8.x | bash -
RUN apt-get install -y nodejs
RUN npm --version
USER jenkins
That works fine; however, when I run the image I have two problems:
It seems that the plugins I tried to install manually didn't get persisted for some reason.
I get prompted with the list of plugins to install, but I don't want to install anything else.
Am I missing anything configuring the Dockerfile or is it that what I want to achieve is simply not possible?
Without seeing the contents of install-plugins.sh, I can't comment as to why the plugins aren't persisting. It is most likely caused by an incorrect installation destination; persistence shouldn't be an issue at this stage, since the plugin installation is built into the image itself.
As for the latter issue, you should be able to skip the installation wizard altogether by adding the line ENV JAVA_OPTS=-Djenkins.install.runSetupWizard=false
to your Dockerfile. Please note that this can be a security risk if the Jenkins image is exposed to the world at large, since this option disables the need for authentication.
EDIT: The default plugin directory for the Docker image is /var/jenkins_home/plugins
EDIT 2: According to the README on the Jenkins Docker repo, adding the line RUN echo 2.0 > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state should accomplish the same thing
Things have changed since 2017, when the previous answer was posted, and it no longer works. The current way is shown in the following Dockerfile snippet:
# Prevent setup wizard from running.
# WARNING: Jenkins will start with security disabled, without any password.
ENV JENKINS_OPTS="-Djenkins.install.runSetupWizard=false"
# plugins.txt must contain the list of plugins to be installed
# (One plugin per line, e.g. sidebar-link:1.11.0)
COPY plugins.txt /tmp/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /tmp/plugins.txt
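For the image in the question, plugins.txt might look like this small sketch (bitbucket comes from the original Dockerfile, and the sidebar-link pin is the example from the comment above; as far as I know, install-plugins.sh also accepts a bare plugin ID to mean the latest version):
bitbucket
sidebar-link:1.11.0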

Makefile for building an rpm works locally, but not in Jenkins

I have a Makefile for building Debian and RPM packages. I have two Jenkins environments, one for Ubuntu and one for CentOS. The Debian package builds with no problem, and the rpm make target works on my machine, but not on Jenkins. Jenkins returns the following error:
cp: cannot stat `/root/rpmbuild/SOURCES/myfile.file': No such file or directory
error: Bad exit status from /var/tmp/rpm-tmp.mII8KL (%install)
I was getting similar errors when developing the package but eventually figured everything out, and all was good. I think the problem may lie with the $RPM_BUILD_ROOT, %{buildroot}, or _topdir options, but nothing I have tried has led me anywhere.
Here is my (modified) Makefile:
# a list of tools we depend on and must install if they're missing
DEBTOOLS=/usr/bin/debuild-pbuilder
RPMTOOLS=/usr/bin/rpmbuild
# convenience target for "make deb"
deb: my-package_1.0_all.deb
# convenience target for "make rpm".
rpm: my-package-1.0-Public.x86_64.rpm
# the target package (on Ubuntu at least)
my-package_1.0_all.deb: $(DEBTOOLS)
	cd my-package; debuild-pbuilder -us -uc
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -bb SPECS/my-package.spec
/usr/bin/debuild-pbuilder:
	apt-get -y install pbuilder
/usr/bin/rpmbuild:
	yum -y install rpm-build
This is my spec file:
Summary: My Package
Name: my-package
Version: 1.0
Release: Public
Group: Applications/System
License: Public
Requires: external-package
Source1: myfile.file
%description
blah blah
%files
%config /etc/myfile.file
%install
mkdir -p $RPM_BUILD_ROOT/etc/
cp %{SOURCE1} %{buildroot}/etc/myfile.file
%post
ln -sf /etc/myfile.file /etc/external-package.conf
The problem was in fact that the file wasn't being found (obviously). For me this had a lot to do with the confusing nature of building RPM files. When the make command is executed and the rpmbuild command is called, I needed to be able to specify the directory. The documentation states you can use rpmbuild -D '_topdir .' -bb path/to/spec.spec to set the _topdir variable to the local directory you call from. This made sense, as . represents the current directory in Linux.
However, the actual call needs to be
rpmbuild -D "_topdir `pwd`" -bb path/to/spec.spec
This doesn't look all that different, but it is crucial to use double quotes: unlike single quotes, they let the shell expand `pwd` into an absolute path. Using this command will run the build within the directory you call it from. After this, rpmbuild will copy and handle the files for you as it should (which is confusing in itself).
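Applied to the Makefile from the question, the rpm rule might then look like this sketch (target and paths taken from the question; it assumes the sources sit under rpmbuild/SOURCES, as the error message suggests):
my-package-1.0-Public.x86_64.rpm: $(RPMTOOLS)
	cd rpmbuild; rpmbuild -D "_topdir `pwd`" -bb SPECS/my-package.spec
Because of the cd, `pwd` expands to the rpmbuild directory, so rpmbuild looks for SOURCES/myfile.file there instead of under /root/rpmbuild.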
