Is it possible to pull texlive image in docker (e.g. https://hub.docker.com/r/texlive/texlive) and use it from my computer instead of installing texlive on my computer?
I like the idea of using containers instead of installing software.
This should work something like
sudo docker run -i --rm --name latex -v "$PWD":/usr/src/app -w /usr/src/app registry.gitlab.com/islandoftex/images/texlive:latest pdflatex essay.tex
Taken from https://nico.dorfbrunnen.eu/de/posts/2020/docker-latex/ (German)
Check the wiki page for the Docker images: https://gitlab.com/islandoftex/images/texlive/-/wikis/home
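For day-to-day use you can wrap that long command in a small shell function (a sketch; the name dpdflatex is made up, and the image tag is the one from the command above):

```shell
# Wrapper around the containerized pdflatex; put this in your shell rc
# file so `dpdflatex essay.tex` compiles without a local TeX Live install.
dpdflatex() {
  docker run -i --rm \
    -v "$PWD":/usr/src/app -w /usr/src/app \
    registry.gitlab.com/islandoftex/images/texlive:latest \
    pdflatex "$@"
}
```

Because the current directory is mounted as the working directory, the resulting PDF lands next to your .tex file.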
I have followed the steps in the official CUDA on WSL tutorial (https://docs.nvidia.com/cuda/wsl-user-guide/index.html#ch05-sub02-jupyter) to set up a jupyter notebook. However, I can't figure out how to change the initial working directory. I tried mounting a local directory with the -v switch as well as appending to the launch command --notebook-dir, but neither one of these solutions worked. The jupyter notebook will always start under "/tf" no matter what I do. Ideally, I would like this to be the same working directory as the one I have on Windows (C:\Users\MyUser).
The only thing I haven't tried is changing the WORKDIR in the docker image "tensorflow/tensorflow:latest-gpu-py3-jupyter" supplied by hub.docker.com as I am not even sure if it is possible to edit it (line 57).
Here is a sample command I have tried running:
docker run -it --gpus all -p 8888:8888 -v /c/Users/MyUser/MyFolder:/home/MyFolder/ tensorflow/tensorflow:latest-gpu-py3-jupyter jupyter notebook --allow-root --ip=0.0.0.0 --NotebookApp.allow_origin='https://colab.research.google.com' --notebook-dir=/c/Users/MyUser/
What is the easiest way to achieve this?
I was able to solve this by mounting the directory I want to work in under the local directory given in the startup message "Serving notebooks from local directory: /tf". In my case it's /tf, but yours could be different. In addition, I changed the first '/' to '//'. Also, the image name should be the last argument (per https://stackoverflow.com/a/34503625). So in your case, the command looks like:
docker run -it --gpus all -p 8888:8888 -v //c/Users/MyUser/MyFolder:/tf/home/MyFolder tensorflow/tensorflow:latest-gpu-py3-jupyter
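The path form on the left side of -v matters here: C:\Users\MyUser becomes //c/Users/MyUser. A tiny, hypothetical helper (for illustration only; the name win2docker is made up) sketches that translation:

```shell
# Hypothetical helper: translate a Windows path (C:\Users\MyUser) into
# the //c/Users/MyUser form used on the left-hand side of -v above.
win2docker() {
  drive=$(printf '%s' "$1" | cut -c1 | tr 'A-Z' 'a-z')   # C -> c
  rest=$(printf '%s' "$1" | cut -c3- | tr '\\' '/')      # \a\b -> /a/b
  printf '//%s%s\n' "$drive" "$rest"
}

win2docker 'C:\Users\MyUser\MyFolder'   # -> //c/Users/MyUser/MyFolder
```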
I am using the Docker gdal image to run certain commands, for example:
docker run --rm -v /storage/:/storage osgeo/gdal gdal_translate -stats -of GTiff input.tif output.tif
However, the code that executes these commands runs inside another container itself, so this command fails because docker is not found:
/bin/sh: 1: docker: not found
What can I do to resolve this? The gdal Docker image is from another source.
If you have a statically linked docker client binary available on your host, you can:
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
-v /storage/:/storage osgeo/gdal \
gdal_translate -stats -of GTiff input.tif output.tif
This way you don't need to install docker in your container.
The other obvious options are to:
Install docker within your container using the available package manager (which one depends on the base image used for your container, e.g. apt-get, yum, etc.)
Download the pre-compiled binaries directly into your container
Rebuild the original Docker image with the docker package already installed, to save you the trouble.
As @emory pointed out, if you do not trust the Docker image, sharing your host's Docker daemon can become a security issue. If this is the case, create a docker-machine and share the docker-machine's Docker daemon with the container instead. This is indeed safer!
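The pre-compiled-binaries option can be sketched as a Dockerfile layer (untested; the version number is an assumption, so check https://download.docker.com/linux/static/stable/ for current releases, and this assumes curl is available in the base image):

```dockerfile
FROM osgeo/gdal
# Version is an assumption; verify against download.docker.com.
ARG DOCKER_VERSION=20.10.9
RUN curl -fsSL "https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKER_VERSION}.tgz" \
    | tar -xz --strip-components=1 -C /usr/local/bin docker/docker
```

You would still mount /var/run/docker.sock at run time, as in the first command above, so the client inside the container can talk to a daemon.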
I have downloaded a Docker container that performs several different operations on an input file using several different kinds of software, i.e. alignment, variant calling etc. How do I find out what the contents of the Docker container/image are? Sorry if this is trivial; I am totally new to Docker.
There are (at least) three ways to interpret your question:
which packages are installed in the container;
what files are there: explore container's filesystem;
what images and layers does the container consist of?
1. List packages installed in container
The way to get the list of installed packages depends on the distribution. The three most popular families are:
Alpine, lightweight Linux distribution based on musl and BusyBox
Debian-based (Debian, Ubuntu)
rpm-based (RHEL, CentOS and Fedora)
Alpine-based containers
Use apk info -vv command:
docker exec -i <container_id_1> apk info -vv | sort
Debian- & Ubuntu-based containers
Use dpkg -l command:
docker exec -i <container_id_1> dpkg -l
RHEL-, CentOS- and Fedora-based containers
Use rpm -qa or yum list installed command:
docker exec -i <container_id_1> rpm -qa
docker exec -i <container_id_1> yum list installed
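If you do not know the distribution in advance, the family can be detected from /etc/os-release. A small sketch (the helper name pkg_list_cmd is made up) maps the ID field to the matching command from the three families above:

```shell
# Map an /etc/os-release ID value to the matching package-listing command.
# The function only prints the command, so you can review it before use.
pkg_list_cmd() {
  case "$1" in
    alpine)              echo "apk info -vv" ;;
    debian|ubuntu)       echo "dpkg -l" ;;
    rhel|centos|fedora)  echo "rpm -qa" ;;
    *)                   echo "unknown" ;;
  esac
}

# With a running container you could combine it with docker exec:
#   id=$(docker exec <container_id_1> sh -c '. /etc/os-release && echo $ID')
#   docker exec <container_id_1> $(pkg_list_cmd "$id")
pkg_list_cmd alpine   # -> apk info -vv
```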
2. Explore container's filesystem
To see the directory structure you can use either bash & tree or tools developed specifically for exploring Docker images.
tree
docker exec -i <container_id_1> tree /
Note: not all images contain the tree command.
docker export with tar
docker export adoring_kowalevski > contents.tar
Then you can explore contents.tar with your preferred archiver, e.g. with tar:
tar -tvf contents.tar
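To illustrate the listing flags without a real container, the sketch below builds a throwaway tarball as a stand-in for the contents.tar produced by docker export:

```shell
# Build a tiny stand-in for the tarball produced by `docker export`.
demo=$(mktemp -d)
mkdir -p "$demo/etc"
echo 'ID=debian' > "$demo/etc/os-release"
tar -cf "$demo/contents.tar" -C "$demo" etc

# -t lists entries without extracting; grep narrows to a path of interest.
tar -tvf "$demo/contents.tar" | grep 'os-release'
```

The same grep trick works on a real export, e.g. to check which config files an unfamiliar image ships.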
3. Special tools (explore images and OverlayFS layers)
wagoodman/dive
wagoodman/dive: A tool for exploring each layer in a docker image
docker run --rm -it \
-v /var/run/docker.sock:/var/run/docker.sock \
wagoodman/dive:latest \
<image_name|image_id>
A tool for exploring a docker image, layer contents, and discovering ways to shrink your Docker image size.
tomastomecek/sen
TomasTomecek/sen: Terminal User Interface for docker engine
docker run -v /var/run/docker.sock:/run/docker.sock -ti -e TERM tomastomecek/sen
It can interactively manage your containers and images.
justone/dockviz
justone/dockviz: Visualizing Docker data
$ dockviz containers -d -r | dot -Tpng -o containers.png
Containers are visualized with labelled lines for links. Containers that aren't running are greyed out.
You can get information concerning the image by using:
docker image inspect <image> and docker image history <image>. Then, if you want information about the container, simply enter the running container using the exec command: docker container exec -it -u 0 <container> /bin/bash (pay attention: your container may be using another shell), and afterwards gather the needed information (OS, running processes, open files, etc.)
More information about exec command: https://docs.docker.com/engine/reference/commandline/exec/.
PS: To list images, use docker image ls; to list running containers, use docker container ps.
Since Docker Desktop 4.7.0 you can use the experimental sbom command to get the Software Bill of Materials (SBOM), which is a pretty comprehensive list of all contained libraries and corresponding versions. E.g.
$ docker sbom openjdk:11-jre-slim-buster
Syft v0.43.0
✔ Pulled image
✔ Loaded image
✔ Parsed image
✔ Cataloged packages [91 packages]
NAME VERSION TYPE
adduser 3.118 deb
apt 1.8.2.3 deb
base-files 10.3+deb10u12 deb
base-passwd 3.5.46 deb
bash 5.0-4 deb
bsdutils 1:2.33.1-0.1 deb
ca-certificates 20200601~deb10u2 deb
coreutils 8.30-3 deb
dash 0.5.10.2-5 deb
debconf 1.5.71+deb10u1 deb
debian-archive-keyring 2019.1+deb10u1 deb
[...]
As you can see, it's based on Syft, which is a third-party tool. That may change in the future though (hence experimental). In fact, you can also use Syft directly, so you don't need Docker Desktop.
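Because the sbom output is a plain text table, standard tools can slice it. The sketch below (the helper name sbom_names is made up) filters package names from a captured sample; with a live daemon you would pipe docker sbom <image> in directly instead of the printf:

```shell
# Print just the NAME column of an sbom-style table, skipping the header.
sbom_names() {
  awk 'NR > 1 { print $1 }'
}

# A captured sample of the table above stands in for live output:
printf '%s\n' \
  'NAME     VERSION   TYPE' \
  'adduser  3.118     deb' \
  'bash     5.0-4     deb' | sbom_names   # -> adduser, bash
```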
I run Jupyter Notebook with Docker and am trying to mount a local directory onto the intended Docker volume, but I am unable to see my files in the Jupyter notebook. The Docker command is
sudo nvidia-docker create -v ~/tf/src -it -p 8888:8888
-e PASSWORD=password
--name container_name gcr.io/tensorflow/tensorflow:latest-gpu
but ~/tf/src does not show up in the Jupyter notebook's file browser.
What is needed for the files to show up in Jupyter? Am I initializing the container incorrectly?
I think the way you mount your volume is incorrect: -v ~/tf/src names only one path. It should be
-v /host/directory:/container/directory
Ferdi D's answer targets only files inside the interpreter, not precisely files inside the Jupyter GUI, which makes things a little confusing. I address the title, "Show volume files in docker jupyter notebook", more generally by showing the files inside the Jupyter notebook.
Files inside the interpreter
The -v flag gets you the files in the interpreter or the notebook, but not necessarily in the Jupyter GUI,
for which you run
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
because the mount point depends on the distribution and hence its path. Here, you ask for your current directory to be mounted at Jupyter's path /home/jovyan.
Files inside Jupyter GUIs
To get the files in Jupyter GUI:
OS X
If you used a target other than /home/jovyan with the current Jupyter version, the files would not appear in the Jupyter GUI, so use
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
Some other distros
$ docker run --rm -it -p 6780:8888 -v "$PWD":/tmp jupyter/r-notebook
More generally
To check whether /home/jovyan/ or /tmp is the right target, you can run getwd() in R to see your working directory.
Further threads
There is a Reddit discussion covering the topic more generally.
Posting this as an answer since the location seems to have changed and the accepted answer doesn't spell out in full how to get your local directory to show up in the TensorFlow Jupyter image (type this on one line with an appropriate <localdir> and <dockerdir>):
docker run --runtime=nvidia -it
--name tensorflow
-p 8888:8888
-v ~/<localdir>:/tf/<dockerdir>
tensorflow/tensorflow:nightly-jupyter
Karl L believes the solution is the following. It was moved here from the question so everyone can judge it and the question is easier to read.
Solution
sudo nvidia-docker create -v /Users/user/tf/src:/notebooks
-it -p 8888:8888 -e PASSWORD=password
--name container_name gcr.io/tensorflow/tensorflow:latest-gpu
As @fendi-d pointed out, I was mounting my volume incorrectly.
I was also pointing at the wrong mounting dir; I found the correct one in the TensorFlow Dockerfile:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/gpu.Dockerfile
which configures the Jupyter notebook and then copies sample notebooks to /notebooks:
# Set up our notebook config.
COPY jupyter_notebook_config.py /root/.jupyter/
# Copy sample notebooks.
COPY notebooks /notebooks
After I ran the container with the correct mount path, it showed my files located in /Users/user/tf/src.
I have two docker images: php and phantomjs.
I'm using them to build a simple command-line script application.
I also have a convenient run.bat script that contains:
docker run -it --rm --name my-running-script -v %cd%:/usr/src/myapp -w /usr/src/myapp php:7.0-cli php
What should I do to add phantomjs into my php image?
I want to be able to use something like phantomjs --help inside the php container.
I've tried searching the documentation for similar issues, but haven't found any tips on that.
This is the phantomjs image I'm using: https://hub.docker.com/r/wernight/phantomjs/
For the php image I'm using:
https://hub.docker.com/_/php/
If you want phantomjs inside your php container, then instead of using two Docker images (one with php and the other with phantomjs), you just need to build your own custom Docker image with both packages.
It seems debian:stretch is used as the base image for phantomjs: https://hub.docker.com/r/wernight/phantomjs/~/dockerfile/
You just need to look up how to install PHP on Debian. It should probably be one more line in your Dockerfile: apt-get install -y php. Then build this image and use it.
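A minimal sketch of such a combined image (untested; it assumes the wernight/phantomjs image is Debian-based and that a php-cli package exists for that Debian release, so adjust the package name as needed):

```dockerfile
# Start from the phantomjs image and add PHP on top.
FROM wernight/phantomjs
USER root
RUN apt-get update \
 && apt-get install -y --no-install-recommends php-cli \
 && rm -rf /var/lib/apt/lists/*
```

You could equally go the other way, starting FROM php:7.0-cli and installing phantomjs via apt; either direction gives one image where both phantomjs --help and php work.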