I am trying to install SHAP (SHapley Additive exPlanations) for machine learning using Anaconda.
I have Python 3.9.
I have also tried all of these commands, but none of them worked:
conda install -c conda-forge shap
conda install -c conda-forge/label/cf201901 shap
conda install -c conda-forge/label/cf202003 shap
Try using a fresh environment.
## create environment named foo
conda create --name foo -c conda-forge python=3.9 shap
## use the environment
conda activate foo
I'm writing a Dockerfile which needs to install different pip wheels, depending on the PyTorch version and the CUDA version installed in the base image:
RUN pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
RUN pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
For this reason, I need to capture these versions at build time, into the environment variables TORCH and CUDA.
I know I could use the ENV command in a Dockerfile to assign to environment variables:
ENV TORCH=1.12.0
ENV CUDA=113
(note that CUDA must not contain any dot, unlike TORCH), then build the container. Now, if I log into the running Docker container, I can get these versions from the command line:
python -c "import torch; print(torch.__version__)"
>>> 1.12.0
python -c "import torch; print(torch.version.cuda)"
>>> 11.3
However, I don't want to hardcode the versions in the Dockerfile, because if I change the base image, the hardcoded values will be wrong, I will try installing the wrong wheels, and installation will fail. I want to find them at build time, and assign them to TORCH & CUDA. How can I do it?
You can use ordinary shell syntax to set environment variables within a RUN command, with the limitation that those settings will be lost at the end of that command. So within a single RUN command you can use shell command substitution to set the environment variable and use it, but its value will not be available any more after that command.
# all within a single RUN line
RUN TORCH=$(python -c "import torch; print(torch.__version__)"); \
CUDA=$(python -c "import torch; print(torch.version.cuda)" | sed 's/\.//g'); \
pip install torch-scatter -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html; \
pip install torch-sparse -f https://data.pyg.org/whl/torch-${TORCH}+${CUDA}.html
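For reference, the sed call above just strips the dots from the CUDA version string, turning torch's 11.3 into the 113 that the wheel URL expects. The transformation in isolation, with a fixed input string standing in for the real torch.version.cuda value:

```shell
# Reproduce the version mangling from the RUN line above. In the real
# Dockerfile the input comes from `python -c "import torch; ..."`.
CUDA_RAW="11.3"
CUDA=$(printf '%s' "$CUDA_RAW" | sed 's/\.//g')
echo "$CUDA"   # prints 113
```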
EDIT: the documentation provided by the IT administration was poor and referred to an old version of Singularity; the order of arguments is different now, and the problem is solved.
To make my tool more portable, and because I have to use it on a cluster, I need to make my bioinformatics tool available as a Docker image. The tool is located here. The Docker Hub image is 007ptar007/metadbgwas, if you want to experiment with it. The Dockerfile is in the repo; to make things easier for everyone:
FROM ubuntu:latest
ENV DEBIAN_FRONTEND=noninteractive
USER root
COPY ./install_docker.sh ./
RUN chmod +x ./install_docker.sh && sh ./install_docker.sh
ENTRYPOINT ["/MetaDBGWAS/metadbgwas.sh"]
ENV PATH="/MetaDBGWAS/:${PATH}"
And the install_docker.sh script contains:
apt-get update
apt install -y libgatbcore-dev libhdf5-dev libboost-all-dev libpstreams-dev zlib1g-dev g++ cmake git r-base-core
Rscript -e "install.packages(c('ape', 'phangorn'))"
Rscript -e "install.packages('https://raw.githubusercontent.com/sgearle/bugwas/master/build/bugwas_1.0.tar.gz', repos=NULL, type='source')"
git clone --recursive https://github.com/Louis-MG/MetaDBGWAS.git
cd MetaDBGWAS
sed -i "51i#include <limits>" ./REINDEER/blight/robin_hood.h #temporary fix for REINDEER compilation
sh install.sh
The problem:
My tool parses the command line and needs a verbose (-v, or --verbose) argument. It also rejects unknown arguments: anything the tool doesn't recognize causes the help message to be printed to standard output, and the tool exits. To use the tool, I need to mount volumes where the data is, using the -v /path/to/files:/input option:
singularity run docker://007ptar007/metadbgwas --volumes '/path/to/data:/inputd/:/input' --files /input --strains /input/strains --threads 8 --output ~/output
But my tool sees this as a bad -v option value, or sees --volumes as an unknown option. I can't change this in my tool. How do I solve this conflict?
You need to put any arguments intended for singularity - such as the volume mounting - before the name of the image you want to run (e.g. the docker image you specify in your command):
singularity run -B '/path/to/data:/input' docker://007ptar007/metadbgwas --files /input --strains /input/strains --threads 8 --output ~/output
Issue
I am trying to install some drivers on a Docker image. During the procedure, one of the installations asks me for some input from the command line. This obviously freezes the procedure. How can I solve this?
How to reproduce the issue
Start a shell in the base image: docker run -it nvcr.io/nvidia/deepstream:6.0-triton
Run the following commands:
export DEBIAN_FRONTEND=noninteractive
# Set some variables to download the proper header modules
export VERSION="2.83.18%2Brev1.dev"
export BALENA_MACHINE_NAME="genericx86-64-ext"
# Set variables for the Yocto version of the OS
export YOCTO_VERSION=5.10.43
export YOCTO_KERNEL=${YOCTO_VERSION}-yocto-standard
# Set variables to download proper NVIDIA driver
export NVIDIA_DRIVER_VERSION=470.86
export NVIDIA_DRIVER=NVIDIA-Linux-x86_64-${NVIDIA_DRIVER_VERSION}
# Install some prereqs
apt install -y git wget unzip build-essential libelf-dev bc libssl-dev bison flex software-properties-common
mkdir -p /usr/src/kernel_source
cd /usr/src/kernel_source
# Causes a pipeline to produce a failure return code if any command errors.
# Normally, pipelines only return a failure if the last command errors.
#SHELL ["/bin/bash", "-o", "pipefail", "-c"]
# Download the kernel source then prepare kernel source to build a module.
curl -fsSL "https://files.balena-cloud.com/images/${BALENA_MACHINE_NAME}/${VERSION}/kernel_source.tar.gz" \
| tar xz --strip-components=2 && \
make -C build modules_prepare -j"$(nproc)"
The last command will have the following output, and then it will freeze waiting for user input:
make: Entering directory '/usr/src/kernel_source/build'
SYNC include/config/auto.conf.cmd
HOSTCC scripts/basic/fixdep
HOSTCC scripts/kconfig/conf.o
LEX scripts/kconfig/lexer.lex.c
HOSTCC scripts/kconfig/expr.o
YACC scripts/kconfig/parser.tab.[ch]
HOSTCC scripts/kconfig/symbol.o
HOSTCC scripts/kconfig/preprocess.o
HOSTCC scripts/kconfig/util.o
HOSTCC scripts/kconfig/confdata.o
HOSTCC scripts/kconfig/lexer.lex.o
HOSTCC scripts/kconfig/parser.tab.o
HOSTLD scripts/kconfig/conf
*
* Restart config...
*
*
* BPF based packet filtering framework (BPFILTER)
*
BPF based packet filtering framework (BPFILTER) (BPFILTER) [Y/n/?] y
bpfilter kernel module with user mode helper (BPFILTER_UMH) [M/n/y/?] (NEW)
Notes
It seems that export DEBIAN_FRONTEND=noninteractive is not doing the job.
Looking at the make options, there doesn't seem to be one that skips the question, but I may be missing something. How can I fix this?
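The prompt comes from the kernel's kconfig step, which reads its answers from stdin, so DEBIAN_FRONTEND (an apt/debconf setting) has no effect on it. One workaround to try (my assumption, not confirmed in this thread) is yes "" | make -C build modules_prepare -j"$(nproc)", which feeds an endless stream of empty lines so every new config symbol takes its default. The mechanism in isolation:

```shell
# Each kconfig prompt reads one line from stdin, and an empty line means
# "take the default". `yes ""` supplies empty lines forever, so no prompt
# ever blocks. Stand-in reader for the real make invocation:
yes "" | sh -c 'read -r line; echo "answer:${line:-<default>}"'
# prints: answer:<default>
```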
I'm using the dotnet-sdk image and would like to install AWS CLI into it - but no pip, unzip, or even the setuptools Python module is available.
How can I get the AWS CLI tools going?
Got it going - I had forgotten to chmod +x the installer. You can do the following:
curl -sL "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
python -m zipfile -e awscli-bundle.zip .
chmod +x ./awscli-bundle/install
./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
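The python -m zipfile -e step is the trick that makes this work without unzip: Python's standard-library zipfile module can extract archives on its own. A self-contained demo of that step (the file names here are made up):

```shell
# Create a tiny zip with Python, then extract it the same way the answer
# extracts awscli-bundle.zip -- no unzip binary needed.
python3 -c "import zipfile; z = zipfile.ZipFile('demo.zip', 'w'); z.writestr('hello.txt', 'hi'); z.close()"
python3 -m zipfile -e demo.zip out/
cat out/hello.txt   # prints: hi
```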
I am trying to use the Ansible Docker module at the moment but I am currently encountering this error when I try to run my playbook -
NameError: global name 'DEFAULT_DOCKER_API_VERSION' is not defined
I found an official bug regarding this at https://github.com/ansible/ansible-modules-core/issues/1792.
I have tried the workaround by installing docker-py but have had no joy as of yet.
Any ideas on what could be going wrong? I'm trying to run my Playbook from my local OSX host that connects to AWS.
After further investigation we managed to get it to work by adding this to our .yml file:
- name: Install Docker PY
  pip: name=docker-py==1.1.0
pip is a package manager for Python and installs with it, so what you want to do is install Python.
On OS X I recommend first installing Homebrew, which is a package manager for OS X. The command to install Homebrew is
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
After Homebrew is installed, you can install Python (and with it pip) with
brew install python
Normally this error is related to the absence of pip or the docker-py library.
I have this in my docker ansible role.
- name: install the required packages
  apt: pkg={{ item }} state=present update_cache=yes
  with_items:
    - python-pip

- name: Install docker-py as a workaround for Ansible issue
  pip: name=docker-py version=1.2.3
TL;DR: -e use_tls=encrypt
I'm using the newest docker-py==1.5.0.
In my ~/.bash_profile I have eval "$(docker-machine env default)", which will run
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/Users/meyers/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
Now that your environment is set up, here is the fix: the docker module needs the parameter use_tls=encrypt. You can supply it to each invocation of the docker module in your Ansible tasks, or you can set it "globally" via -e use_tls=encrypt on the command line, or in your playbook:
- hosts: all
  vars:
    use_tls: 'encrypt'
  tasks:
    ...