With Docker 1.7.1, foxx-manager update gets error 500 when downloading master.zip from the central repository. However, no error occurred with Docker 1.6.1.
Has anyone encountered this problem?
How can I specify https_proxy for arangosh? foxx-manager update fails inside a corporate proxy environment.
I've tried these settings:
export https_proxy=http://xx.xx.xx.xx:port
export https_proxy=xx.xx.xx.xx:port
export HTTPS_PROXY=http://xx.xx.xx.xx:port
export HTTPS_PROXY=xx.xx.xx.xx:port
... and all failed.
Below is my session log:
[t.suwa@devstudy ~]$ docker run -d arangodb
e3175d53cd1fc288201bfeebaaf95084c1409c4299ce1b39369d131bf2964d0a
Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
[t.suwa@devstudy ~]$ docker ps
CONTAINER ID        IMAGE               COMMAND                CREATED             STATUS              PORTS               NAMES
e3175d53cd1f        arangodb            "/usr/sbin/arangod"    11 seconds ago      Up 9 seconds        8529/tcp            backstabbing_albattani
[t.suwa@devstudy ~]$ docker exec -it e3175d53cd1f /bin/bash
root@8155996d26ff:/# arangosh
                                       _
  __ _ _ __ __ _ _ __   __ _  ___  ___| |__
 / _` | '__/ _` | '_ \ / _` |/ _ \/ __| '_ \
| (_| | | | (_| | | | | (_| | (_) \__ \ | | |
 \__,_|_|  \__,_|_| |_|\__, |\___/|___/_| |_|
                       |___/
Welcome to arangosh 2.6.7 [linux]. Copyright (c) ArangoDB GmbH
Using Google V8 4.1.0.27 JavaScript engine, READLINE 6.3, ICU 54.1
Pretty printing values.
Connected to ArangoDB 'tcp://127.0.0.1:8529' version: 2.6.7 [standalone], database: '_system', username: 'root'
Type 'tutorial' for a tutorial or 'help' to see common examples
arangosh [_system]> fm.update()
JavaScript exception in file '/usr/share/arangodb/js/common/modules/org/arangodb/foxx/store.js' at 410,11: [ArangoError 1752: application download failed: Github download from 'https://github.com/arangodb/foxx-apps/archive/master.zip' failed with error code 500]
! throw err;
! ^
stacktrace: Error
at exports.throwDownloadError (/usr/share/arangodb/js/common/modules/org/arangodb-common.js:448:9)
at Object.update (/usr/share/arangodb/js/common/modules/org/arangodb/foxx/store.js:392:7)
at <shell command>:1:4
If your environment is behind a proxy, add these lines to your Dockerfile (whichever variant your proxy requires):
ENV https_proxy=http://xx.xx.xx.xx:port
ENV https_proxy=xx.xx.xx.xx:port
ENV HTTPS_PROXY=http://xx.xx.xx.xx:port
ENV HTTPS_PROXY=xx.xx.xx.xx:port
ADD .gitconfig /.gitconfig
The local .gitconfig file should contain the proxy settings as well:
[http]
proxy = http://xx.xx.xx.xx:port
[https]
proxy = http://xx.xx.xx.xx:port
This assumes you install and run the application as root in the container; if not, copy .gitconfig to that user's home directory instead.
Build the image with the proxy settings baked in, and you should then be able to download packages from within the container.
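For reference, a minimal sketch of the complete Dockerfile (the proxy address is a placeholder to substitute, and /.gitconfig matches the root-user assumption above):
# Sketch only -- substitute your real proxy host and port.
FROM arangodb
ENV https_proxy=http://proxy.example.com:3128
ENV HTTPS_PROXY=http://proxy.example.com:3128
ADD .gitconfig /.gitconfig
Build and run it in place of the stock image, e.g. docker build -t arangodb-proxy . followed by docker run -d arangodb-proxy. Alternatively, the variables can be passed at run time without rebuilding (docker run -d -e https_proxy=http://proxy.example.com:3128 arangodb), though that does not cover the .gitconfig.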
I want to use Docker 19.03 and above in order to have GPU support. I currently have Docker 19.03.12 on my system. I can run this command to check that the Nvidia drivers are working:
docker run -it --rm --gpus all ubuntu nvidia-smi
Wed Jul  1 14:25:55 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.64       Driver Version: 430.64       CUDA Version: N/A      |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 107...  Off  | 00000000:01:00.0 Off |                  N/A |
| 26%   54C    P5    13W / 180W |    734MiB /  8119MiB |     39%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
Also, when run locally, my module works with GPU support just fine. But if I build a Docker image and try to run it, I get this message:
ImportError: libcuda.so.1: cannot open shared object file: No such file or directory
I am using CUDA 9.0 with TensorFlow 1.12.0, but I can switch to CUDA 10.0 with TensorFlow 1.15.
As I understand it, the problem is probably that my Dockerfile was written for an older setup and contains commands that are not compatible with the new GPU-enabled Docker (19.03 and above).
The actual commands are these:
FROM nvidia/cuda:9.0-base-ubuntu16.04
# Pick up some TF dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        cuda-command-line-tools-9-0 \
        cuda-cublas-9-0 \
        cuda-cufft-9-0 \
        cuda-curand-9-0 \
        cuda-cusolver-9-0 \
        cuda-cusparse-9-0 \
        libcudnn7=7.0.5.15-1+cuda9.0 \
        libnccl2=2.2.13-1+cuda9.0 \
        libfreetype6-dev \
        libhdf5-serial-dev \
        libpng12-dev \
        libzmq3-dev \
        pkg-config \
        software-properties-common \
        unzip \
        && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
RUN apt-get update && \
    apt-get install -y nvinfer-runtime-trt-repo-ubuntu1604-4.0.1-ga-cuda9.0 && \
    apt-get update && \
    apt-get install -y libnvinfer4=4.1.2-1+cuda9.0
I could not find a base Dockerfile for basic GPU usage either.
In this answer there was a proposal for exposing libcuda.so.1 but it did not work in my case.
So, is there any workaround for this problem or a base dockerfile to adjust to?
My system is Ubuntu 16.04.
Edit:
I just noticed that nvidia-smi from within Docker does not display any CUDA version:
CUDA Version: N/A
in contrast with the one run locally. So this probably means that no CUDA is loaded inside Docker for some reason.
tldr;
A base Dockerfile which seems to work with docker 19.03+ & cuda 10 is this:
FROM nvidia/cuda:10.0-base
which can be combined with tf 1.14, but for some reason I could not get it to work with tf 1.15.
I just used this Dockerfile to test it:
FROM nvidia/cuda:10.0-base
CMD nvidia-smi
longer answer:
Well, after a lot of trial and error (and frustration) I managed to make it work with docker 19.03.12 + cuda 10 (although with tf 1.14, not 1.15).
I used the code from this post and the base Dockerfiles provided there.
First I checked nvidia-smi from within Docker using this Dockerfile:
FROM nvidia/cuda:10.0-base
CMD nvidia-smi
$ docker build -t gpu_test .
...
$ docker run -it --gpus all gpu_test
Fri Jul  3 07:31:05 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.64       Driver Version: 430.64       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 107...  Off  | 00000000:01:00.0 Off |                  N/A |
| 45%   65C    P2   142W / 180W |   8051MiB /  8119MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
+-----------------------------------------------------------------------------+
which finally seems to find cuda binaries: CUDA Version: 10.1.
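As an extra sanity check that the driver libraries really get injected (the libcuda.so.1 that was missing in the original error), one can also grep the dynamic linker cache inside the container; a hedged one-liner, assuming the same base image:
$ docker run --rm --gpus all nvidia/cuda:10.0-base sh -c "ldconfig -p | grep libcuda"
If the runtime is wired up correctly this lists libcuda.so.1; if it prints nothing, the container will fall back to CPU-only.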
Then I made a minimal Dockerfile to test that the TensorFlow binary libraries load successfully within Docker:
FROM nvidia/cuda:10.0-base
# The following just declare variables that ultimately select
# python3/pip3 via string substitution
ARG USE_PYTHON_3_NOT_2=True
ARG _PY_SUFFIX=${USE_PYTHON_3_NOT_2:+3}
ARG PYTHON=python${_PY_SUFFIX}
ARG PIP=pip${_PY_SUFFIX}
RUN apt-get update && apt-get install -y \
    ${PYTHON} \
    ${PYTHON}-pip
RUN ${PIP} install tensorflow_gpu==1.14.0
COPY bashrc /etc/bash.bashrc
RUN chmod a+rwx /etc/bash.bashrc
WORKDIR /src
COPY *.py /src/
ENTRYPOINT ["python3", "tf_minimal.py"]
and tf_minimal.py was simply:
import tensorflow as tf
print(tf.__version__)
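Note that this import test only proves the TensorFlow binary loads; to check that it actually sees the GPU, one can override the ENTRYPOINT and call the TF 1.x probe tf.test.is_gpu_available() (the tf_test image tag here is an assumption):
$ docker build -t tf_test .
$ docker run --rm --gpus all --entrypoint python3 tf_test \
    -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
This should print True only when libcuda.so.1 was loaded and a GPU device was registered.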
and for completeness, here is the bashrc file I am using:
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
export PS1="\[\e[31m\]tf-docker\[\e[m\] \[\e[33m\]\w\[\e[m\] > "
export TERM=xterm-256color
alias grep="grep --color=auto"
alias ls="ls --color=auto"
echo -e "\e[1;31m"
cat<<TF
________                               _______________
___  __/__________________________________  ____/__  /________      __
__  /  _  _ \_  __ \_  ___/  __ \_  ___/_  /_   __  /_  __ \_ | /| / /
_  /   /  __/  / / /(__  )/ /_/ /  /   _  __/   _  / / /_/ /_ |/ |/ /
/_/    \___//_/ /_//____/ \____//_/    /_/      /_/  \____/____/|__/
TF
echo -e "\e[0;33m"
if [[ $EUID -eq 0 ]]; then
cat <<WARN
WARNING: You are running this container as root, which can cause new files in
mounted volumes to be created as the root user on your host machine.
To avoid this, run the container by specifying your user's userid:
$ docker run -u \$(id -u):\$(id -g) args...
WARN
else
cat <<EXPL
You are running this container as user with ID $(id -u) and group $(id -g),
which should map to the ID and group for your user on the Docker host. Great!
EXPL
fi
# Turn off colors
echo -e "\e[m"
Note: this is on a headless AWS box accessed over VNC; the current desktop I'm running on is DISPLAY=:1.0.
I am trying to build a container that can hold an OpenGL application, but I'm having trouble getting vglrun to work correctly. I am currently running it with --gpus all on the docker run line as well:
# xhost +si:localuser:root
# docker run --rm -it \
-e DISPLAY=unix$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
--gpus all centos:7 \
sh -c "yum install epel-release -y && \
yum install -y VirtualGL glx-utils && \
vglrun glxgears"
No protocol specified
[VGL] ERROR: Could not open display :0
On the host:
$ nvidia-smi
Tue Jan 28 22:32:24 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.44       Driver Version: 440.44       CUDA Version: 10.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla M60           Off  | 00000000:00:1E.0 Off |                    0 |
| N/A   30C    P8    16W / 150W |     56MiB /  7618MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      2387      G   /usr/bin/X                                    55MiB |
+-----------------------------------------------------------------------------+
I can confirm that running glxgears without vglrun works fine, but the application I'm trying to build into Docker inherently uses vglrun. I have also tried the nvidia/opengl:1.1-glvnd-runtime-centos7 container with no success.
Running it with vglrun -d :1.0 glxgears or vglrun -d unix:1.0 glxgears gives me this error:
Error: couldn't get an RGB, Double-buffered visual
What am I doing wrong here? Does vglrun not work in a container?
EDIT: It seems I was approaching this problem the wrong way. It works when I'm on the primary :0 display, but when using VNC to view display :1, the Mesa drivers get used instead of the Nvidia ones. Is there a way to use the GPU on spawned VNC displays?
I ran into the same problem and solved it by setting the variable VGL_DISPLAY:
docker run ... -e VGL_DISPLAY=$DISPLAY ...
Then it worked. Give it a try.
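For reference, a sketch of the original command from the question with that variable added (everything else unchanged):
# xhost +si:localuser:root
# docker run --rm -it \
    -e DISPLAY=unix$DISPLAY \
    -e VGL_DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    --gpus all centos:7 \
    sh -c "yum install epel-release -y && \
    yum install -y VirtualGL glx-utils && \
    vglrun glxgears"
VGL_DISPLAY tells VirtualGL which X display to use for the 3D rendering, independently of the 2D display that DISPLAY points at.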
I have successfully jailbroken my Hue Bridge 2.1 and now have root access to it over SSH.
But I don't know how to install a package manager (like opkg) on it.
It looks like wget is installed, but not much else. SCP works as well.
I have tried everything, but nothing seems to work (see the console below; full console dump here).
login as: root
root@192.168.1.69's password:
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) built-in shell (ash)
 _    _ _    _ ______   ____       _     _            ___   __   __
| |  | | |  | |  ____| |  _ \     (_)   | |          |__ \  \ \ / /
| |__| | |  | | |__    | |_) |_ __ _  __| | __ _  ___   ) |  \ V /
|  __  | |  | |  __|   |  _ <| '__| |/ _` |/ _` |/ _ \ / /    > <
| |  | | |__| | |____  | |_) | |  | | (_| | (_| |  __/ / /_ _ / . \
|_|  |_|\____/|______| |____/|_|  |_|\__,_|\__, |\___| |____(_)_/ \_\
                                            __/ |
                                           |___/
----------------------------------------------------------------------
Version: 1810251352
----------------------------------------------------------------------
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
root@Wohnzimmer:~# opkg
-ash: opkg: not found
root@Wohnzimmer:~# wget
BusyBox v1.23.2 (2018-10-25 16:12:28 UTC) multi-call binary.
Usage: wget [-c|--continue] [-s|--spider] [-q|--quiet] [-O|--output-document FILE]
[--header 'header: value'] [-Y|--proxy on/off] [-P DIR]
[-U|--user-agent AGENT] URL...
Retrieve files via HTTP or FTP
-s Spider mode - only check file existence
-c Continue retrieval of aborted transfer
-q Quiet
-P DIR Save to DIR (default .)
-O FILE Save to FILE ('-' for stdout)
-U STR Use STR for User-Agent header
-Y Use proxy ('on' or 'off')
You can't just install the package manager using busybox --install opkg or sudo apt-get install <any-package>.
The error message below clearly says the opkg applet is not found; you need to build opkg from source and add it to your target.
root@Wohnzimmer:~# busybox --install opkg
--install: applet not found
It's a BusyBox system, so you need to build the package manager for whatever architecture the target uses. For example, if it is ARM, compile opkg with an ARM toolchain on your host system, then move the compiled binary to the target. After moving the compiled utility over, place the binary in the /sbin directory of the root fs.
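A rough sketch of that workflow, assuming an ARM target, an arm-linux-gnueabi cross-toolchain on the build host, and the opkg sources from the Yocto project (the triplet, URL, and paths are assumptions to adapt):
# on the build host, not on the bridge
$ git clone git://git.yoctoproject.org/opkg
$ cd opkg
$ ./autogen.sh
$ ./configure --host=arm-linux-gnueabi
$ make
# copy the resulting binary over and place it in /sbin of the root fs
$ scp src/opkg root@192.168.1.69:/sbin/opkg
Keep in mind that opkg itself depends on shared libraries (libarchive, for example), so in practice you may also need to build it statically or copy those libraries to the target as well.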
When I map a directory, it doesn't show up in my Docker container.
I am on Docker 1.11.2 on Mac, using Toolbox.
calloway$ docker -v
Docker version 1.11.2, build b9f10c9
calloway$ ls -ltr /tmp/foo/
total 0
-rw-r--r-- 1 calloway wheel 0 Jun 8 09:21 regularfile.txt
calloway$ docker run -it -v /tmp/foo:/mytmp -w /mytmp ubuntu bash
root@26fc182f7964:/mytmp# ls
root@26fc182f7964:/mytmp# exit
More exploration: the /tmp that gets mapped is /tmp on the "default" virtual machine.
Joshuas-MBP:~ joshuacalloway$ docker-machine ssh default
                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o          __/
             \    \         __/
              \____\_______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.10.3, build master : 625117e - Thu Mar 10 22:09:02 UTC 2016
Docker version 1.10.3, build 20f81dd
docker@default:~$ mkdir /tmp/OnDefaultVM
docker@default:~$ touch /tmp/OnDefaultVM/myfile.txt
docker@default:~$ exit
Joshuas-MBP:~ joshuacalloway$ docker run -it -v /tmp/OnDefaultVM:/mytmp -w /mytmp ubuntu bash
root@1184ff43dc88:/mytmp# ls
myfile.txt
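The takeaway: with Docker Toolbox the daemon runs inside the "default" boot2docker VM, so -v paths are resolved on the VM's filesystem, not on the Mac. Out of the box the VM shares only the host's /Users directory, so mounting a path under your home directory should work; a hedged example (adjust the path to your own):
calloway$ docker run -it -v /Users/calloway/foo:/mytmp -w /mytmp ubuntu bash
Anything outside /Users has to be added to the VM as an extra VirtualBox shared folder first.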
I'm using an Ansible playbook to manage installation of Docker containers. I have the following playbook, which installs Cassandra:
I want to run this playbook locally, and have it install into Boot2Docker. I am able to SSH into Boot2Docker using the instructions from this answer:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
                        ##        .
                  ## ## ##       ==
               ## ## ## ##      ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~ ~  /  ===- ~~~
           \______ o         __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.4.1, build master : 86f7ec8 - Tue Dec 16 23:11:29 UTC 2014
Docker version 1.4.1, build 5bc2ff8
docker@boot2docker:~$
I made an inventory file with the same SSH settings:
[local]
localhost ansible_ssh_port=2022 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker
But when I run the playbook, it fails with the error "/bin/sh: /usr/bin/python: not found":
$ ansible-playbook db-setup.yml -i hosts.local
PLAY [local] ******************************************************************
GATHERING FACTS ***************************************************************
failed: [localhost] => {"failed": true, "parsed": false}
/bin/sh: /usr/bin/python: not found
OpenSSH_6.2p2, OSSLShim 0.9.8r 8 Dec 2011
debug1: Reading configuration data /etc/ssh_config
debug1: /etc/ssh_config line 20: Applying options for *
debug1: /etc/ssh_config line 102: Applying options for *
debug1: auto-mux: Trying existing master
debug1: mux_client_request_session: master session id: 2
Shared connection to localhost closed.
TASK: [Database] **************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/Users/bryan/db-setup.retry
localhost : ok=0 changed=0 unreachable=0 failed=1
I still get the error even if "gather facts" is turned off. If I SSH into Boot2Docker, I can see that /usr/bin/python exists:
$ ssh -i $HOME/.ssh/id_boot2docker -p 2022 docker@localhost
...
docker@boot2docker:~$ which python
boot2docker ssh "tce-load -w -i python.tcz" does the trick as well (you need internet ;-)) for docker and ansible you will need "docker-py"
Setup pip, login to boot2docker
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install docker-py
Also add this to your inventory file:
dockerhost ansible_connection=ssh ansible_ssh_host=192.168.59.103 ansible_ssh_user=docker ansible_ssh_private_key_file=~/.ssh/id_boot2docker ansible_python_interpreter=/usr/local/bin/python
The solution was simple: Python isn't installed by default on Boot2Docker.
To install it, run:
$ boot2docker ssh "wget http://www.tinycorelinux.net/6.x/x86/tcz/python.tcz && tce-load -i python.tcz && rm -f python.tcz"
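After installing Python, a quick way to confirm that Ansible can now reach the VM is the ping module, reusing the inventory file from the question (here assumed to be named hosts.local):
$ ansible all -i hosts.local -m ping
A "pong" response means fact gathering, and the playbook, should now get past the /usr/bin/python error.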
I created a script to do this automatically, see
https://gist.github.com/bcattle/90e64fbe808b3409ec2f