How to install the latest version of RAPIDS without specifying the version number - NVIDIA

I would like to install the latest version of RAPIDS without specifying the version number.
The getting-started page (https://rapids.ai/start.html) gives:
conda install -c rapidsai -c nvidia -c conda-forge -c defaults rapids=0.15 python=3.7 cudatoolkit=10.1
which works correctly. But if we drop the version number (0.15):
conda install -c rapidsai -c nvidia -c conda-forge -c defaults rapids python=3.7 cudatoolkit=10.1
conda installs 0.01. If we remove rapids entirely, nothing installs.
How do I set this up so I get the latest release each time?

From the great John Kirkham: use rapidsai::rapids. It forces the latest install for both stable and nightly releases.
conda install -c rapidsai -c nvidia -c conda-forge -c defaults rapidsai::rapids python=3.7 cudatoolkit=10.1
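The channel::package form is conda's channel-qualified spec: it pins the package to a channel rather than to a version, so the solver always picks the newest build that channel offers. A small sketch of the same idea, with a search to preview which version resolves before installing:

```shell
# Channel-qualified spec: "rapidsai::rapids" must come from the
# rapidsai channel, but no version is pinned, so the newest wins.
conda install -c rapidsai -c nvidia -c conda-forge -c defaults \
    rapidsai::rapids python=3.7 cudatoolkit=10.1

# Preview which versions the spec can resolve to, without installing:
conda search 'rapidsai::rapids'
```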

Related

Using conda in Docker

I am using conda-forge in my Dockerfile to install a ready-made environment from the conda-forge repository. Setting up the environment installs a lot of packages via conda-forge commands.
The problem is that this happens every time I rebuild the Docker image.
Is there a way to cache the environment so it is not reinstalled on every build?
Critical part of code:
ADD https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh Miniconda3-latest-Linux-x86_64.sh
RUN mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh
RUN conda init bash
RUN conda create -c conda-forge --name arosics python=3
RUN conda install -c conda-forge 'arosics>=1.3.0'
RUN echo "conda init bash" >> $HOME/.bashrc
RUN echo "conda activate arosics" >> $HOME/.bashrc
SHELL ["/bin/bash", "-c"]
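Docker caches each instruction as a layer, and a layer is rebuilt only when the instruction or the files it copies change. So the usual fix is to copy only the environment specification first and create the environment in its own layer; application code, which changes often, is copied afterwards. A minimal sketch, assuming your dependencies are listed in a hypothetical environment.yml:

```dockerfile
FROM continuumio/miniconda3:latest

# Copy ONLY the environment spec first. The RUN below is re-executed
# only when environment.yml itself changes; otherwise the cached
# layer with the fully installed environment is reused.
COPY environment.yml /tmp/environment.yml
RUN conda env create -f /tmp/environment.yml \
    && conda clean -afy

# Application code changes frequently, so it is copied after the
# environment layer; edits here do not invalidate the conda cache.
COPY . /app
WORKDIR /app

RUN echo "conda activate arosics" >> ~/.bashrc
SHELL ["/bin/bash", "-c"]
```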

How can I reduce the size of the Docker image

I'm new to Docker, and I created an image with the Dockerfile below. It's used on a Raspberry Pi, so all of the packages are needed. I've read articles about multi-stage Dockerfiles, but I don't understand them well. How can I reduce the size of the image to simplify deployment on the Raspberry Pi?
FROM continuumio/anaconda3:latest
RUN conda create -y -n dcase2020 python=3.7
SHELL ["conda", "run", "-n", "dcase2020", "/bin/bash", "-c"]
RUN conda install -c conda-forge vim -y
RUN conda install pyaudio
RUN pip install librosa
RUN conda install psutil
RUN pip install psds_eval
RUN conda install -y pandas h5py scipy \
    && conda install -y pytorch torchvision -c pytorch \
    && conda install -y pysoundfile youtube-dl tqdm -c conda-forge \
    && conda install -y ffmpeg -c conda-forge \
    && pip install dcase_util \
    && pip install sed-eval
EXPOSE 80
CMD ["bash"]
Thank you very much!
You are creating a new environment that probably contains only your project's requirements, so there is no use in carrying the huge Anaconda base environment as extra weight. Instead, switch to a Miniconda container such as continuumio/miniconda3.
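A sketch of the same build on a Miniconda base, with the installs collapsed into a single RUN and the caches cleaned in that same layer, so the deleted files never get baked into an earlier layer (package names taken from the original Dockerfile; the exact size saving depends on the packages):

```dockerfile
FROM continuumio/miniconda3:latest

RUN conda create -y -n dcase2020 python=3.7
SHELL ["conda", "run", "-n", "dcase2020", "/bin/bash", "-c"]

# One RUN = one layer: files removed by "conda clean" and skipped by
# pip's --no-cache-dir never persist in an intermediate layer.
RUN conda install -y pandas h5py scipy psutil pyaudio \
    && conda install -y pytorch torchvision -c pytorch \
    && conda install -y vim pysoundfile youtube-dl tqdm ffmpeg -c conda-forge \
    && pip install --no-cache-dir librosa psds_eval dcase_util sed-eval \
    && conda clean -afy

EXPOSE 80
CMD ["bash"]
```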

Unable to deploy Docker solution with conda-forge

I am deploying a Docker solution for my application. In my Dockerfile I use several conda-forge installs to build the containers. This works well for some of the containers but fails for others, and I am sure it is not the package itself, because the same package sometimes works and sometimes does not.
I have tried using pip instead of conda, but that led to other errors, since I use conda for all of my configuration. I also read that RUN conda update --all should solve it, and for the pip setup, RUN pip install --upgrade setuptools.
This is part of my docker file :
FROM dockerreg.cyanoptics.com/cyan/openjdk-java8:1.0.0
RUN conda update --all
RUN conda install -c conda-forge happybase=1.1.0 --yes
RUN conda install -c conda-forge requests-kerberos
RUN pip install --upgrade setuptools
RUN pip install --upgrade pip
RUN pip install kafka-python
RUN pip install requests-negotiate
The expected result is to build all containers successfully, but I am getting the following:
---> Using cache
---> 82f4cd49037d
Step 14 : RUN conda install -c conda-forge happybase=1.1.0 --yes
---> Using cache
---> c035b960aa3b
Step 15 : RUN conda install -c conda-forge requests-kerberos
---> Running in 54d869afcd00
Traceback (most recent call last):
File "/opt/conda/bin/conda", line 7, in <module>
from conda.cli import main
ModuleNotFoundError: No module named 'conda'
The command '/bin/sh -c conda install -c conda-forge requests-
kerberos' returned a non-zero code: 1
make: *** [dockerimage] Error 1
Try combining the two conda install commands into a single command: RUN conda install -c conda-forge happybase=1.1.0 requests-kerberos --yes.
I ran into a similar issue with the install commands split up; it turned out the first install upgraded the Python version, which in turn broke the conda command itself, causing the error you're seeing.
Another workaround I found was to add python=3.6.8 as another install argument. One of the packages I was installing must have had a Python 3.7 dependency, forcing it to upgrade Python and breaking conda install.
Actually, the error indicates that /bin/sh is resolving the wrong path for conda.
Therefore, adding the proper path in the Dockerfile will solve the issue, as follows:
ENV PATH /opt/conda/envs/env/bin:$PATH
A good reference on this topic is here; it suggests creating a new virtual environment within the Dockerfile:
https://medium.com/@chadlagore/conda-environments-with-docker-82cdc9d25754
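Putting the suggestions together, a sketch of the fixed section: the conda installs combined into one transaction, Python pinned so the solver cannot upgrade it underneath conda between steps, and conda's install directory kept on the PATH (paths assume the stock /opt/conda location from the error message; adjust if your conda lives elsewhere):

```dockerfile
FROM dockerreg.cyanoptics.com/cyan/openjdk-java8:1.0.0

# Keep conda's bin directory on PATH for every following RUN.
ENV PATH=/opt/conda/bin:$PATH

# One transaction: the solver sees both packages at once and cannot
# upgrade Python between steps; python=3.6.8 pinned as a guard.
RUN conda install -y -c conda-forge python=3.6.8 happybase=1.1.0 requests-kerberos

RUN pip install --upgrade setuptools pip \
    && pip install kafka-python requests-negotiate
```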

Error 137 on docker build command on Win7

Executing the following command:
docker build -m 3g --memory-swap -1 -f MyDockerfile .
And I'm getting this:
Solving package specifications: .....Killed
The command '/bin/sh -c conda update -y --all && conda install -y -c menpo menpo && conda install -y -c menpo menpofit && conda install -y -c menpo menpodetect && conda install -y -c menpo dlib && conda install -y -c menpo opencv3 && conda install -y joblib && pip install pyprind && pip install colorlog' returned a non-zero code: 137
From googling, my understanding is that the OS is killing my running process because it runs out of memory. I have 8 GB on my host machine, and I can see that I am not going over 4 GB used. I added the memory switches above, to no discernible effect.
Since I'm running this on Win7 and the older docker toolbox, am I being limited by Oracle's VM VirtualBox?
You can also have a look at this answer: https://stackoverflow.com/a/42398166/2878244
You may have to increase the memory resources assigned to docker by going to the Docker Tab > Preferences > Advanced
Restarting Docker solved it for me
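Since Docker Toolbox on Windows 7 runs the daemon inside a VirtualBox VM, the -m flag only limits the container inside that VM; the VM itself defaults to a small allocation (typically 1-2 GB), which can trigger exit code 137 during conda's solver. A sketch of recreating the Toolbox VM with more memory ("default" is Toolbox's usual machine name; adjust to yours — note this destroys the existing VM):

```shell
# Remove the existing Toolbox VM and recreate it with 4 GB of RAM.
docker-machine rm default
docker-machine create -d virtualbox --virtualbox-memory 4096 default
```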

Identical Dockerfiles giving different behaviours

I am using the following dockerfile taken from (http://txt.fliglio.com/2013/11/creating-a-mysql-docker-container/):
FROM ubuntu
RUN dpkg-divert --local --rename --add /sbin/initctl
RUN ln -s /bin/true /sbin/initctl
RUN echo "deb http://archive.ubuntu.com/ubuntu precise main universe" > /etc/apt/sources.list
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get -y install mysql-client mysql-server
RUN sed -i -e "s/^bind-address\s*=\s*127.0.0.1/bind-address = 0.0.0.0/" /etc/mysql/my.cnf
ADD ./startup.sh /opt/startup.sh
EXPOSE 3305
CMD ["/bin/bash", "/opt/startup.sh"]
This works with no errors when I build on Docker version 0.8 on my local machine.
I have been experimenting with trusted builds:
https://index.docker.io/u/hardingnj/sqlcontainer/
however on the docker servers I get an error with the second RUN command:
[91mln: failed to create symbolic link `/sbin/initctl': File exists
[0m
Error: build: The command [/bin/sh -c ln -s /bin/true /sbin/initctl] returned a non-zero code: 1
I was under the impression that Dockerfiles should behave identically regardless of where they are built. Perhaps the versions of ubuntu that I am pulling aren't identical?
It is possible that the versions of the ubuntu image are different. To be extremely precise you could give the full image id that you want in the FROM statement, e.g.
# This is the id of the current Ubuntu 13.10 image.
# The tag could move to a different image at a later time.
FROM 9f676bd305a43a931a8d98b13e5840ffbebcd908370765373315926024c7c35e
RUN dpkg-divert --local --rename --add /sbin/initctl
...
