Installing Quicklisp libraries in a Docker image

Is there a Dockerfile for installing cl-json (or other Quicklisp library) on Docker? Most installation instructions I've seen require user input on commands with no --noinput flag, making it difficult to install through a Dockerfile.
In addition, many of the instructions appear out of date or reference broken links and non-existent resources. It would be convenient to have a Dockerfile that installs it in a consistent way, e.g. via Quicklisp.

Here is a possible Dockerfile for an application based on SBCL.
FROM dparnell/minimal-sbcl
RUN sbcl --noinform \
         --disable-ldb \
         --lose-on-corruption \
         --eval "(ql:quickload '(buildapp))" \
         --eval '(buildapp:build-buildapp "/bin/buildapp")'
RUN buildapp --load /opt/quicklisp/setup.lisp \
             --eval "(ql:quickload '(cl-json))" \
             --output bin/executable
CMD executable
I am basing the image on dparnell/minimal-sbcl, which comes with Quicklisp pre-installed.
I then run SBCL once to build buildapp (that could be a separate docker image).
Then, I run buildapp, load quicklisp/setup.lisp and install cl-json. You can load as many dependencies as you want with quickload, but I'd recommend defining your own system .asd file and listing the dependencies there.
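To build and run the resulting image (the tag name here is arbitrary):
docker build -t cl-json-app .
docker run --rm cl-json-app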

https://lispcookbook.github.io/cl-cookbook/testing.html#continuous-integration
In this tutorial we use GitLab CI with the daewok/lisp-devel Docker image, which includes several Lisp implementations and Quicklisp, so we can start a Lisp and run (ql:quickload "cl-json") right away.
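For example, outside any CI pipeline (a sketch; the exact image tag may differ):
docker run --rm daewok/lisp-devel:latest sbcl --non-interactive --eval '(ql:quickload "cl-json")'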

Related

How can I use a several-line command in a Dockerfile in order to create a file within the resulting image

I'm following installation instructions for RedhawkSDR, which rely on having a CentOS 7 OS. Since my machine uses Ubuntu 22.04, I'm creating a Docker container to run CentOS 7 and then installing RedhawkSDR in that.
One of the RedhawkSDR installation instructions is to create a file with the following command:
cat<<EOF|sed 's#LDIR#'`pwd`'#g'|sudo tee /etc/yum.repos.d/redhawk.repo
[redhawk]
name=REDHAWK Repository
baseurl=file://LDIR/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk
EOF
How do I get a Dockerfile to execute this command when creating an image?
(Also, although I can see that this command creates the file /etc/yum.repos.d/redhawk.repo, which consists of the lines from [redhawk] to gpgkey=...., I have no idea how to parse this command and understand exactly why it does that...)
Using the text editor of your choice, create the file on your local system. Remove the word sudo from it; give it an additional first line #!/bin/sh. Make it executable using chmod +x create-redhawk-repo.
Now it is an ordinary shell script, and in your Dockerfile you can just RUN it.
COPY create-redhawk-repo ./
RUN ./create-redhawk-repo
But! If you look at what the script actually does, it just writes a file into /etc/yum.repos.d with the LDIR placeholder replaced by some directory. The filesystem layout inside a Docker image is fixed, and most of the time there's no particular reason to use environment variables or build arguments to hold filesystem paths. You could use a fixed path in the file
[redhawk]
name=REDHAWK Repository
baseurl=file:///redhawk-yum/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk
and in your Dockerfile, just COPY that file in as-is, and make sure the downloaded package archive is in that directory. Adapting the installation instructions:
ARG redhawk_version=3.0.1
RUN wget https://github.com/RedhawkSDR/redhawk/releases/download/$redhawk_version/\
redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& tar xzf redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& rm redhawk-yum-$redhawk_version-el7-x86_64.tar.gz \
&& mv redhawk-yum-$redhawk_version-el7-x86_64 redhawk-yum \
&& rpm -i redhawk-yum/redhawk-release*.rpm
COPY redhawk.repo /etc/yum.repos.d/
Remember that, in a Dockerfile, you are root unless you've switched to another USER (and in that case you can use USER root to switch back); you generally do not need sudo in Docker at all, and can just delete sudo where it appears in these instructions.
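As for the parenthetical question about parsing the original command: cat <<EOF ... EOF is a here-document (everything up to the EOF line is fed to stdout), sed 's#LDIR#'`pwd`'#g' replaces every occurrence of the placeholder LDIR with the current directory, and sudo tee writes the result to /etc/yum.repos.d/redhawk.repo as root. A heredoc-free equivalent, as a sketch:
# write the repo file, substituting the current directory for LDIR
printf '%s\n' \
  '[redhawk]' \
  'name=REDHAWK Repository' \
  "baseurl=file://$(pwd)/" \
  'enabled=1' \
  'gpgcheck=1' \
  'gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk' \
  | sudo tee /etc/yum.repos.d/redhawk.repo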
How do I get a Dockerfile to execute this command when creating an image?
Just use printf and run this command as a single line:
FROM image_name:image_tag
ARG LDIR="/default/folder/if/argument/not/set"
# if the container has the sudo command and the default user is not root,
# you should choose this variant
RUN printf '[redhawk]\nname=REDHAWK Repository\nbaseurl=file://%s/\nenabled=1\ngpgcheck=1\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk\n' "$LDIR" | sudo tee /etc/yum.repos.d/redhawk.repo
# if the default container user is root, this variant without piping may be used
RUN printf '[redhawk]\nname=REDHAWK Repository\nbaseurl=file://%s/\nenabled=1\ngpgcheck=1\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhawk\n' "$LDIR" > /etc/yum.repos.d/redhawk.repo
where LDIR is a build argument, and the docker build should be run like:
docker build ./ --build-arg LDIR=`pwd`

Singularity arguments conflict with my bioinformatics tool arguments

EDIT: the documentation provided by the IT administration was poor and referred to an old version of Singularity; with the current version the order of arguments is different and the problem is solved.
To make my tool more portable, and because I have to use it on a cluster, I have to make my bioinformatics tool available through Docker. The tool is located here. The Docker Hub image is 007ptar007/metadbgwas, if you want to experiment with it. The Dockerfile is in the repo; to make it easier for everyone:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
USER root
COPY ./install_docker.sh ./
RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
ENTRYPOINT ["/MetaDBGWAS/metadbgwas.sh"]
ENV PATH="/MetaDBGWAS/:${PATH}"
And the install_docker.sh script contains:
apt-get update
apt install -y libgatbcore-dev libhdf5-dev libboost-all-dev libpstreams-dev zlib1g-dev g++ cmake git r-base-core
Rscript -e "install.packages(c('ape', 'phangorn'))"
Rscript -e "install.packages('https://raw.githubusercontent.com/sgearle/bugwas/master/build/bugwas_1.0.tar.gz', repos=NULL, type='source')"
git clone --recursive https://github.com/Louis-MG/MetaDBGWAS.git
cd MetaDBGWAS
sed -i "51i#include <limits>" ./REINDEER/blight/robin_hood.h #temporary fix for REINDEER compilation
sh install.sh
The problem:
My tool parses the command line and needs a verbose (-v, or --verbose) argument. It also needs to reject unknown arguments: anything that isn't used by the tool causes the help message to be printed to standard output and exits. To use the tool, I need to mount the volumes where the data is, using the -v /path/to/files:/input option:
singularity run docker://007ptar007/metadbgwas --volumes '/path/to/data:/inputd/:/input' --files /input --strains /input/strains --threads 8 --output ~/output
But my tool sees this as a bad -v option value, or sees --volumes as an unknown option. I can't change this in my tool. How do I solve this conflict?
You need to put any arguments intended for singularity - such as the volume mounting - before the name of the image you want to run (e.g. the docker:// URI you specify in your command): everything after the image reference is passed to the containerized tool, so your tool never sees singularity's own options. Note also that singularity uses -B/--bind for bind mounts, not Docker's -v:
singularity run -B '/path/to/data:/input' docker://007ptar007/metadbgwas --files /input --strains /input/strains --threads 8 --output ~/output

gRPC service definitions: containerize .proto compilation?

Let's say we have a services.proto with our gRPC service definitions, for example:
syntax = "proto3";

service Foo {
  rpc Bar (BarRequest) returns (BarReply) {}
}

message BarRequest {
  string test = 1;
}

message BarReply {
  string test = 1;
}
We could compile this locally to Go by running something like
$ protoc --go_out=. --go_opt=paths=source_relative \
--go-grpc_out=. --go-grpc_opt=paths=source_relative \
services.proto
My concern though is that running this last step might produce inconsistent output depending on the installed version of the protobuf compiler and the Go plugins for gRPC. For example, two developers working on the same project might have slightly different versions installed locally.
It would seem reasonable to me to address this by containerizing the protoc step. For example, with a Dockerfile like this...
FROM golang:1.18
WORKDIR /src
RUN apt-get update && apt-get install -y protobuf-compiler
RUN go install google.golang.org/protobuf/cmd/protoc-gen-go@v1.26
RUN go install google.golang.org/grpc/cmd/protoc-gen-go-grpc@v1.1
CMD protoc --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative services.proto
... we can run the protoc step inside a container:
docker run --rm -v $(pwd):/src $(docker build -q .)
After wrapping the previous command in a shell script, developers can run it on their local machine, giving them deterministic, reproducible output. It can also run in a CI/CD pipeline.
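For instance, the wrapper could be as small as this (a sketch; generate.sh is a hypothetical name, and it assumes it is run from the directory holding the Dockerfile and services.proto):
#!/bin/sh
# build the image quietly (-q prints only the image ID) and run protoc inside it,
# mounting the current directory so the generated files land next to services.proto
set -e
docker run --rm -v "$(pwd)":/src "$(docker build -q .)"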
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
NB, I was surprised to find that the official grpc/go image does not come with protoc preinstalled. Am I off the beaten path here?
My question is, is this a sound approach and/or is there an easier way to achieve the same outcome?
It is definitely a good approach. I do the same, not only to have consistent output across the team, but also to ensure we can produce the same output on different OSes.
There is an easier way to do that, though.
Look at this repo: https://github.com/jaegertracing/docker-protobuf
The image is on Docker Hub, but you can create your own image if you prefer.
I use this command to generate Go:
docker run --rm -u $(id -u) \
    -v ${PWD}/protos/:/source \
    -v ${PWD}/v1:/output \
    -w /source jaegertracing/protobuf:0.3.1 \
    --proto_path=/source \
    --go_out=paths=source_relative,plugins=grpc:/output \
    -I /usr/include/google/protobuf \
    /source/*

How do I add an additional command line tool to an already existing Docker/Singularity image?

I work in neuroscience, and I use a cloud platform called Brainlife to upload and download data (linked here, but I don't think knowledge of Brainlife is relevant to this question). I use Brainlife's command line interface to upload and download data on my university's server. In order to use their CLI, I run Singularity with a Docker image created by Brainlife (found here). I run this using the following code:
singularity shell docker://brainlife/cli -B
I also have the image file saved on my server account, and can run it like this:
singularity shell brainlifeimage.sif -B
After running one of those commands, I am able to download and upload data, usually successfully. Currently I'm following Brainlife's tutorial to bulk download data. The tutorial uses the command line tool "jq" (link), which isn't in their Docker image. I tried installing it within the Singularity shell like this:
apt-get install jq
And it returned:
Reading package lists... Done
Building dependency tree
Reading state information... Done
W: Not using locking for read only lock file /var/lib/dpkg/lock
E: Unable to locate package jq
Is there an easy way to add this one tool to the image? I've been reading over the Singularity and Docker documentation, but Docker is all new to me and I'm really lost.
If relevant, my university server runs on Ubuntu 16.04.7 LTS, and I am using the terminal on a Mac laptop running macOS 11.3. This is my first Stack Overflow question - please let me know if I can provide any additional info! Thanks so much.
The short, specific answer: jq is portable, so you can just mount it into the image and use it normally. e.g.,
singularity shell -B /path/to/jq:/usr/bin/jq brainlifeimage.sif
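(If you don't already have jq locally, the project publishes static Linux binaries; for example, for jq 1.6, with the URL pattern from the jq releases page:)
# download a static jq binary and make it executable
wget -O jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x jq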
The short, general answer: you can't modify the read-only image and need to build a new one.
Long answer with several options and specific examples:
Since Singularity images are read-only, they cannot have persistent changes made to them. This is great for reproducibility, but a bit inconvenient if your tools are likely to change often. You can rebuild the image in several ways, though all will require sudo permissions.
Option 1: Write a new Singularity definition based on the Docker image.
Create a new definition file (generally called Singularity or something.def), use the current container as a base and add the desired software in the %post section. Then build the new image with: sudo singularity build brainy_jq.sif Singularity
The definition file docs are quite good and highly recommended.
Bootstrap: docker
From: brainlife/cli:latest

%post
    apt-get update && apt-get install -y jq
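Then build and verify (the build command repeats the one above; the exec is a quick sanity check):
sudo singularity build brainy_jq.sif Singularity
singularity exec brainy_jq.sif which jq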
Option 2: Create a sandbox of the current Singularity image, make your changes, and convert back to a read-only image. See the Singularity docs on writable sandbox directories and converting images between formats.
# use --sandbox to create a writable singularity image
sudo singularity build --sandbox writable_brain/ brainlifeimage.sif
# --writable must still be used to make changes, and sudo for correct permissions
sudo singularity exec writable_brain/ bash -c 'apt-get update && apt-get install -y jq'
# convert back to read-only image for normal usage
sudo singularity build brainlifeimage_jq.sif writable_brain/
Option 3: Modify the source Docker image locally and build from that. One of the more... creative options. Almost sudo-free, except singularity pull doesn't accept docker-daemon, so a sudo singularity build is necessary.
# add jq to a new docker container. the value for --name doesn't matter, but we use it
# in later steps. The entrypoint needs to be overridden in this case as well.
docker run -it --name brainlife-jq --entrypoint=/bin/bash \
brainlife/cli:1.5.25 -c 'apt-get update && apt-get install -y jq'
# use docker commit to create an image from the container so it can be reused
# note that we're using the container name set in the previous step
# the output of docker commit is the hash for the newly created image, so we grab that
IMAGE_ID=$(docker commit brainlife-jq)
# tag the newly created image with a more useful name
docker tag $IMAGE_ID brainlife/cli:1.5.25-jq
# here we use docker-daemon instead of docker to build from a locally cached docker image
# instead of looking at docker hub
sudo singularity build brainlife_jq.sif docker-daemon://brainlife/cli:1.5.25-jq
# now check that it all worked as planned
singularity exec brainlife_jq.sif which jq
# /usr/bin/jq
ref: docker commit, using locally cached docker images

RStudio in a Docker image does not deal with some libraries

I have a problem using the Docker RStudio image rocker/rstudio proposed on https://www.rocker-project.org/ (Docker containers for R). Since I am a beginner with both Docker and RStudio, I suspect the problem comes from me and does not deserve a bug report:
- I open a proper terminal with 'Docker Quickstart Terminal'
- there I run the image with docker run -d -p 8787:8787 -e DISABLE_AUTH=true -v <...>:/home/rstudio/<...> --name rstudio rocker/rstudio
- in my browser I then get a nice RStudio instance at the address http://192.168.99.100:8787
- but in this instance I can't install several packages, such as xml2. I get the message:
Using PKG_CFLAGS=
Using PKG_LIBS=-lxml2
------------------------- ANTICONF ERROR ---------------------------
Configuration failed because libxml-2.0 was not found. Try installing:
* deb: libxml2-dev (Debian, Ubuntu, etc)
* rpm: libxml2-devel (Fedora, CentOS, RHEL)
* csw: libxml2_dev (Solaris)
If libxml-2.0 is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a libxml-2.0.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
--------------------------------------------------------------------
ERROR: configuration failed for package ‘xml2’
* removing ‘/usr/local/lib/R/site-library/xml2’
Warning in install.packages :
installation of package ‘xml2’ had non-zero exit status
I don't know whether xml2 is on the image, but the file libxml-2.0.pc does exist on my laptop in the directory /opt/local/lib/pkgconfig, and pkg-config is in /opt/local/bin. So I tried linking these pkg paths when running the image (to see what happens when I play with the image environment in RStudio), adding the options -v /opt/local/lib/pkgconfig:/home/rstudio/lib/pkgconfig -v /opt/local/bin:/home/rstudio/bin to the run command. But it doesn't work: for some reason I don't see the content of lib/pkgconfig in RStudio...
Also, the RStudio instance does not accept root/sudo commands, so I can't use tools such as apt-get in the RStudio terminal.
So, what's the trick?
Libraries on your laptop (the Docker host) are not available to Docker containers. You should create a custom image with the required system libraries. Create a Dockerfile like this:
FROM rocker/rstudio
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       libxml2-dev # add any additional libraries you need
CMD ["/init"]
Above I added libxml2-dev, but you can add as many libraries as you need.
Then build your image using this command (execute it in the directory where you created the Dockerfile):
docker build -t my_rstudio:0.1 .
Then you can start your container:
docker run -d -p 8787:8787 -e DISABLE_AUTH=true --name rstudio my_rstudio:0.1
(you can add any additional arguments, such as -v, to the above).
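To double-check that the system library made it into the running container (a quick sanity check, assuming the container name rstudio from the command above):
docker exec rstudio dpkg -s libxml2-dev
# should report "Status: install ok installed"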
