EDIT: the documentation provided by the IT administration was poor and written for an old version of Singularity; the order of arguments is different in the current version, and with that corrected the problem is solved.
To make my bioinformatics tool more portable, and because I have to use it on a cluster, I need to make it available through Docker. The tool is located here. The Docker Hub image is 007ptar007/metadbgwas, if you want to experiment with it. The Dockerfile is in the repo; to make things easier for everyone:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
USER root
COPY ./install_docker.sh ./
RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
ENTRYPOINT ["/MetaDBGWAS/metadbgwas.sh"]
ENV PATH="/MetaDBGWAS/:${PATH}"
And the install_docker.sh script contains:
apt-get update
apt-get install -y libgatbcore-dev libhdf5-dev libboost-all-dev libpstreams-dev zlib1g-dev g++ cmake git r-base-core
Rscript -e "install.packages(c('ape', 'phangorn'))"
Rscript -e "install.packages('https://raw.githubusercontent.com/sgearle/bugwas/master/build/bugwas_1.0.tar.gz', repos=NULL, type='source')"
git clone --recursive https://github.com/Louis-MG/MetaDBGWAS.git
cd MetaDBGWAS
sed -i "51i#include <limits>" ./REINDEER/blight/robin_hood.h #temporary fix for REINDEER compilation
sh install.sh
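For reference, here is a minimal sketch of how the image can be built and run locally (the data paths are placeholders):
# build from the repo root, where the Dockerfile and install_docker.sh live
docker build -t 007ptar007/metadbgwas .
# the ENTRYPOINT is metadbgwas.sh, so arguments go straight to the tool
docker run --rm -v /path/to/data:/input 007ptar007/metadbgwas --files /input --strains /input/strains --threads 8 --output /input/output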
The problem:
My tool parses the command line and accepts a verbose (-v, or --verbose) argument. It also rejects unknown arguments: anything the tool doesn't recognize causes the help message to be printed on standard output and the program to exit. To use the tool, I need to mount the volumes where the data is, using a -v /path/to/files:/input option:
singularity run docker://007ptar007/metadbgwas --volumes '/path/to/data:/input' --files /input --strains /input/strains --threads 8 --output ~/output
But my tool sees this either as a bad -v option value or as --volumes being an unknown option. I can't change this behaviour in my tool. How do I solve this conflict?
You need to put any arguments intended for singularity, such as the volume mounting, before the name of the image you want to run (here, the docker:// image you specify in your command). Note also that Singularity's bind-mount option is spelled --bind (or -B), not -v or --volumes:
singularity run --bind '/path/to/data:/input' docker://007ptar007/metadbgwas --files /input --strains /input/strains --threads 8 --output ~/output
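If you need several host paths mounted, Singularity accepts comma-separated bind specs (a sketch; the host paths are placeholders):
singularity run --bind '/path/to/data:/input,/path/to/out:/output' docker://007ptar007/metadbgwas --files /input --strains /input/strains --threads 8 --output /output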
I have a problem using the rocker/rstudio Docker image proposed on https://www.rocker-project.org/ (Docker containers for R). Since I am a beginner with both Docker and RStudio, I suspect the problem comes from me and does not deserve a bug report:
I open a proper terminal with 'Docker Quickstart Terminal'
where I run the image with docker run -d -p 8787:8787 -e DISABLE_AUTH=true -v <...>:/home/rstudio/<...> --name rstudio rocker/rstudio
in my browser I then get a nice RStudio instance at the address http://192.168.99.100:8787
but in this instance I can't install several packages such as xml2. I get the message:
Using PKG_CFLAGS=
Using PKG_LIBS=-lxml2
------------------------- ANTICONF ERROR ---------------------------
Configuration failed because libxml-2.0 was not found. Try installing:
* deb: libxml2-dev (Debian, Ubuntu, etc)
* rpm: libxml2-devel (Fedora, CentOS, RHEL)
* csw: libxml2_dev (Solaris)
If libxml-2.0 is already installed, check that 'pkg-config' is in your
PATH and PKG_CONFIG_PATH contains a libxml-2.0.pc file. If pkg-config
is unavailable you can set INCLUDE_DIR and LIB_DIR manually via:
R CMD INSTALL --configure-vars='INCLUDE_DIR=... LIB_DIR=...'
--------------------------------------------------------------------
ERROR: configuration failed for package ‘xml2’
* removing ‘/usr/local/lib/R/site-library/xml2’
Warning in install.packages :
installation of package ‘xml2’ had non-zero exit status
I don't know whether xml2 is on the image, but the file libxml-2.0.pc does exist on my laptop in /opt/local/lib/pkgconfig, and pkg-config is in /opt/local/bin. So I tried linking these paths into the container when running the image (to see what happens when I play with the image environment in RStudio), adding the options -v /opt/local/lib/pkgconfig:/home/rstudio/lib/pkgconfig -v /opt/local/bin:/home/rstudio/bin to the run command. But it doesn't work: for some reason I don't see the content of lib/pkgconfig in RStudio...
Also, the RStudio instance does not accept root/sudo commands, so I can't use tools such as apt-get in the RStudio terminal.
So, what's the trick?
Libraries on your laptop (the host for Docker) are not available inside Docker containers. You need to build a custom image that includes the required system libraries. Create a Dockerfile like this:
FROM rocker/rstudio
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
       libxml2-dev  # add any additional libraries you need
CMD ["/init"]
Above I added libxml2-dev, but you can add as many libraries as you need.
Then build your image using this command (execute it in the directory where you created the Dockerfile):
docker build -t my_rstudio:0.1 .
Then you can start your container:
docker run -d -p 8787:8787 -e DISABLE_AUTH=true --name rstudio my_rstudio:0.1
(You can add any additional arguments, such as -v, to the above.)
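To confirm the new image fixes the original error, you can try installing xml2 non-interactively (a quick sketch; the container name comes from the run command above):
docker exec -it rstudio Rscript -e "install.packages('xml2')"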
I'd like to serve a TensorFlow model by using OpenFaaS. Basically, I'd like to invoke the "serve" function in such a way that TensorFlow Serving exposes my model.
OpenFaaS is running correctly on Kubernetes and I am able to invoke functions via curl or from the UI.
I used the incubator-flask example as a starting point, but I keep receiving 502 Bad Gateway all the time.
The OpenFaaS project looks like the following:
serve/
- Dockerfile
stack.yaml
The inner Dockerfile is the following:
FROM tensorflow/serving
RUN mkdir -p /home/app
RUN apt-get update \
&& apt-get install -y curl
RUN echo "Pulling watchdog binary from Github." \
&& curl -sSLf https://github.com/openfaas-incubator/of-watchdog/releases/download/0.4.6/of-watchdog > /usr/bin/fwatchdog \
&& chmod +x /usr/bin/fwatchdog
WORKDIR /root/
# remove unnecessary logs from S3
ENV TF_CPP_MIN_LOG_LEVEL=3
ENV AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
ENV AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
ENV AWS_REGION=${AWS_REGION}
ENV S3_ENDPOINT=${S3_ENDPOINT}
ENV fprocess="tensorflow_model_server --rest_api_port=8501 \
--model_name=${MODEL_NAME} \
--model_base_path=${MODEL_BASE_PATH}"
# Set to true to see request in function logs
ENV write_debug="true"
ENV cgi_headers="true"
ENV mode="http"
ENV upstream_url="http://127.0.0.1:8501"
# gRPC tensorflow serving
# EXPOSE 8500
# REST tensorflow serving
# EXPOSE 8501
RUN touch /tmp/.lock
HEALTHCHECK --interval=5s CMD [ -e /tmp/.lock ] || exit 1
CMD [ "fwatchdog" ]
The stack.yaml file looks like the following:
provider:
  name: faas
  gateway: https://gateway-url:8080

functions:
  serve:
    lang: dockerfile
    handler: ./serve
    image: repo/serve-model:latest
    imagePullPolicy: always
I build the image with faas-cli build -f stack.yaml and then I push it to my docker registry with faas-cli push -f stack.yaml.
When I execute faas-cli deploy -f stack.yaml -e AWS_ACCESS_KEY_ID=... I get Accepted 202, and the function appears correctly among my functions. Now I want to invoke TensorFlow Serving on the model I specified in my ENV.
The way I try to make it work is to use curl in this way:
curl -d '{"inputs": ["1.0", "2.0", "5.0"]}' -X POST https://gateway-url:8080/function/deploy-model/v1/models/mnist:predict
but I always obtain 502 Bad Gateway.
Does anybody have experience with OpenFaaS and TensorFlow Serving? Thanks in advance.
P.S.
If I run TensorFlow Serving without of-watchdog (basically without the OpenFaaS stuff), the model is served correctly.
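For reference, a standalone run along those lines follows the standard tensorflow/serving image usage (a sketch; the model path and name are placeholders):
docker run --rm -p 8501:8501 \
    --mount type=bind,source=/path/to/saved_model,target=/models/mymodel \
    -e MODEL_NAME=mymodel -t tensorflow/serving
curl -d '{"instances": [1.0, 2.0, 5.0]}' -X POST http://127.0.0.1:8501/v1/models/mymodel:predict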
Elaborating on the link mentioned by @viveksyngh:
tensorflow-serving-openfaas:
Example of packaging TensorFlow Serving with OpenFaaS to be deployed and managed through OpenFaaS with auto-scaling, scale-from-zero and a sane configuration for Kubernetes.
This example was adapted from: https://www.tensorflow.org/serving
Pre-reqs:
OpenFaaS
OpenFaaS CLI
Docker
Instructions:
Clone the repo
$ mkdir -p ~/dev/
$ cd ~/dev/
$ git clone https://github.com/alexellis/tensorflow-serving-openfaas
Clone the sample model and copy it to the function's build context
$ cd ~/dev/tensorflow-serving-openfaas
$ git clone https://github.com/tensorflow/serving
$ cp -r serving/tensorflow_serving/servables/tensorflow/testdata/saved_model_half_plus_two_cpu ./ts-serve/saved_model_half_plus_two_cpu
Edit the Docker Hub username
You need to edit the stack.yml file and replace alexellis2 with your Docker Hub account.
Build the function image
$ faas-cli build
You should now have a Docker image in your local library which you can deploy to a cluster with faas-cli up
Test the function locally
All OpenFaaS images can be run stand-alone without OpenFaaS installed, so let's do a quick test; replace alexellis2 with your own name.
$ docker run -p 8081:8080 -ti alexellis2/ts-serve:latest
Now in another terminal:
$ curl -d '{"instances": [1.0, 2.0, 5.0]}' \
-X POST http://127.0.0.1:8081/v1/models/half_plus_two:predict
{"predictions": [2.5, 3.0, 4.5]}
From here you can run faas-cli up and then invoke your function from the OpenFaaS UI, CLI or REST API.
$ export OPENFAAS_URL=http://127.0.0.1:8080
$ curl -d '{"instances": [1.0, 2.0, 5.0]}' $OPENFAAS_URL/function/ts-serve/v1/models/half_plus_two:predict
{"predictions": [2.5, 3.0, 4.5]}
I'm trying to build a Docker image based on oracle/database:11.2.0.2-xe (which is based on Oracle Linux, itself based on RHEL) and want to change the system locale in this image (using some RUN command inside a Dockerfile).
According to this guide, I should use localectl set-locale <MYLOCALE>, but this command fails with a Failed to create bus connection: No such file or directory message. This is a known Docker issue for commands that require SystemD to be running.
I tried to start SystemD anyway (using /usr/sbin/init as the first process, as well as using -v /sys/fs/cgroup:/sys/fs/cgroup:ro -v /run, thanks to this help), but then localectl set-locale failed with a Could not get properties: Connection timed out message.
So I'm now trying to avoid using localectl to change my system-wide locale. How can I do this?
According to this good guide on setting the locale on Linux, I should use:
localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15
But this command failed with:
cannot read character map directory `/usr/share/i18n/charmaps': No such file or directory
This SO reply indicated one could run yum reinstall glibc-common -y to fix this, and it worked.
So my final working Dockerfile is:
FROM oracle/database:11.2.0.2-xe
RUN yum reinstall glibc-common -y && \
    localedef -c -i fr_FR -f ISO-8859-15 fr_FR.ISO-8859-15 && \
    echo "LANG=fr_FR.ISO-8859-15" > /etc/locale.conf
ENV LANG fr_FR.ISO-8859-15
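To check the result, build the image and print the locale inside it (a sketch; the tag is arbitrary, and the entrypoint override is there because the Oracle base image defines its own entrypoint):
docker build -t oracle-xe-fr .
docker run --rm --entrypoint locale oracle-xe-fr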
I successfully shelled to a Docker container using:
docker exec -i -t 69f1711a205e bash
Now I need to edit a file, and I don't have any editors inside:
root@69f1711a205e:/# nano
bash: nano: command not found
root@69f1711a205e:/# pico
bash: pico: command not found
root@69f1711a205e:/# vi
bash: vi: command not found
root@69f1711a205e:/# vim
bash: vim: command not found
root@69f1711a205e:/#
How do I edit files?
As noted in the comments, there's no default editor set (strangely, the $EDITOR environment variable is empty). You can log in to a container with:
docker exec -it <container> bash
And run:
apt-get update
apt-get install vim
Or use the following Dockerfile:
FROM confluent/postgres-bw:0.1
RUN ["apt-get", "update"]
RUN ["apt-get", "install", "-y", "vim"]
Docker images are trimmed to the bare minimum, so no editor ships with the container. That's why you need to install one manually.
EDIT
I also encourage you to read my post about the topic.
If you don't want to add an editor just to make a few small changes (e.g., change the Tomcat configuration), you can just use:
docker cp <container>:/path/to/file.ext .
which copies it to your local machine (to your current directory). Then edit the file locally using your favorite editor, and then do a
docker cp file.ext <container>:/path/to/file.ext
to replace the old file.
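For the Tomcat example mentioned above, the round trip might look like this (a sketch; the container name and config path are hypothetical):
docker cp tomcat:/usr/local/tomcat/conf/server.xml .
vim server.xml    # edit locally with any editor you like
docker cp server.xml tomcat:/usr/local/tomcat/conf/server.xml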
You can use cat if it's installed, which will most likely be the case unless it's a bare/raw container. It works in a pinch, and is OK when copying and pasting into a proper editor locally.
cat > file
# 1. type in your content
# 2. leave a newline at end of file
# 3. finish with ctrl-d (end-of-file); ctrl-c also works
cat file
cat writes out each line when it receives a newline, so make sure to end the last line with a newline. ctrl-c sends a SIGINT that makes cat exit; as the comments note, ctrl-d is the cleaner way, as it denotes end-of-file ("no more input coming").
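If you already have the full content at hand, a heredoc avoids the interactive step entirely (a sketch with placeholder file name and content):
cat > /etc/example.conf <<'EOF'
key = value
another_key = another_value
EOF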
Another option is something like infilter which injects a process into the container namespace with some ptrace magic: https://github.com/yadutaf/infilter
To keep your Docker images small, don't install unnecessary editors. You can edit the files over SSH from the Docker host to the container:
vim scp://remoteuser@containerip//path/to/document
You can use cat, if installed, with the > character.
Here is how:
cat > file_to_edit
#1 Write or paste your text
#2 don't forget to leave a blank line at the end of the file
#3 Ctrl+C to finish (Ctrl+D also works)
Now you can see the result with the command:
cat file_to_edit
For common edit operations I prefer to install vi (vim-tiny), which uses only 1491 kB, or nano, which uses 1707 kB.
vim, on the other hand, uses 28.9 MB.
Remember that for apt-get install to work, you have to run the update first:
apt-get update
apt-get install vim-tiny
To start the editor from the command line, enter vi.
You can print an existing file with
cat filename.extension
and copy all the existing text to the clipboard.
Then delete the old file with
rm filename.extension
or rename the old file with
mv old-filename.extension new-filename.extension
Create a new file with
cat > new-file.extension
Then paste all the text copied to the clipboard, press Enter, and finish by pressing ctrl+d (end-of-file; ctrl+z only suspends the process). And voilà, no need to install any editors.
Sometimes you must first enter the container as root:
docker exec -ti --user root <container-id> /bin/bash
Then, in the container, to install Vim or something else:
apt-get update
apt-get install -y vim
I use "docker run" (not "docker exec"), and I'm in a restricted zone where we cannot install an editor. But I have an editor on the Docker host.
My workaround is: Bind mount a volume from the Docker host to the container (https://docs.docker.com/engine/reference/run/#/volume-shared-filesystems), and edit the file outside the container. It looks like this:
docker run -v /outside/dir:/container/dir <image>
This is mostly for experimenting, and later I'd change the file when building the image.
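For example (a sketch; the nginx image and paths are just illustrative):
# the container reads its config from the mounted host directory
docker run -d -v /srv/nginx/conf:/etc/nginx/conf.d nginx
# edit on the host with whatever editor you have there
vim /srv/nginx/conf/default.conf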
After you have shelled into the Docker container, just type:
apt-get update
apt-get install nano
You can just edit your file on the host, then quickly copy it into the container and run it there. Here is my one-line shortcut to copy and run a Python file:
docker cp main.py my-container:/data/scripts/ ; docker exec -it my-container python /data/scripts/main.py
If you use a Windows container and you want to change any file, you can get and use Vim in a PowerShell console easily.
To shell into the Windows Docker container with PowerShell:
docker exec -it <name> powershell
First, install the Chocolatey package manager:
Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression;
Install Vim:
choco install vim
Refresh the environment variables (you can just exit and shell back into the container).
Go to the file location and open the file: vim file.txt
See Stack Overflow question
sed edit file in place
It would be a good option here if:
the file is too large to modify with cat, or
installing Vim is not allowed or takes too long.
My situation was using the MySQL 5.7 image, wanting to change the my.cnf file: there is no vim or vi, and installing Vim takes too long (the Great Firewall of China). sed is provided in the image, and it's quite simple. My usage looks like:
sed -i 's/testtobechanged/textwanted/g' filename
Use man sed or look for other tutorials for more complex usage.
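For instance, for the my.cnf case above, an in-place change could look like this (a sketch; the option name, value, and file path are hypothetical):
# raise max_connections without any editor
sed -i 's/^max_connections.*/max_connections = 500/' /etc/mysql/my.cnf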
It is kind of screwy, but in a pinch you can use sed or awk to make small edits or remove text. Be careful with your regex targets of course and be aware that you're likely root on your container and might have to re-adjust permissions.
For example, removing a full line that contains text matching a regex:
awk '!/targetText/' file.txt > temp && mv temp file.txt
If you can only shell into the container with bin/sh (in case bin/bash doesn't work), and apt or apt-get doesn't work in the container, check whether apk is available by entering apk at the command prompt inside the container.
If yes, you can install nano as follows:
apk add nano
Then nano will work as usual.
An easy way to edit a few lines would be:
echo "deb http://deb.debian.org/debian stretch main" > sources.list
You can install nano:
yum install nano
You can also use a special container which contains only the command you need: Vim. I chose python-vim. It assumes that the data you want to edit is in a data container built with the following Dockerfile:
FROM debian:jessie
ENV MY_USER_PASS my_user_pass
RUN groupadd --gid 1001 my_user
RUN useradd -ms /bin/bash --home /home/my_user \
    -p $(perl -e "print crypt('${MY_USER_PASS:-password}', 'salt')") \
    --uid 1001 --gid 1001 my_user
ADD src /home/my_user/src
RUN chown -R my_user:my_user /home/my_user/src
RUN chmod u+x /home/my_user/src
CMD ["true"]
You will be able to edit your data by mounting a Docker volume (src_volume) which will be shared by your data container (src_data) and the python-vim container.
docker volume create --name src_volume
docker build -t src_data .
docker run -d -v src_volume:/home/my_user/src --name src_data_1 src_data
docker run --rm -it -v src_volume:/src fedeg/python-vim:latest
That way, you do not change your containers. You just use a special container for this work.
First, log in as root:
docker exec -u root -ti <container-id> bash
Then type the following commands:
apt-get update &&
apt-get install nano
Docker images usually come with no editors, so simply install vim; 36 MB of space won't kill your container!
Make sure to update the container before trying to install the editor.
apt-get update
apt-get install -y nano vim