Easily changing Conda environment (in Win7 64) by renaming folder method? acceptable? - environment-variables

Is this absolutely genius method of changing conda environments going to break something later on?
So I didn't have the patience to read through and digest all the mumbo jumbo of changing conda environments. (I'm not even sure my usage of the term "environment" is correct.)
I was trying to install py2exe (pip install py2exe) and I got an error; in the traceback I noticed that pip was using my old 2.7 Anaconda Python located in E:\Anaconda. My "new" or current 3.4 Anaconda Python is located in E:\Anaconda3...
So what I did was rename my E:\Anaconda folder to E:\poopAnaconda using Windows Explorer (right-click, rename, etc.; using Win7 64). Then in a cmd prompt I typed
E:\Anaconda3>conda info -a
And then magically after renaming, conda info is now showing 3.4.1.final.0 as my python version and my environment is now successfully at E:\Anaconda3 !!! (you can see it about midway through the cmd output)
There was a slight pause after hitting enter (for conda info -a) and then the info displayed. The only difference I can see in the output is some warning about licenses pfffffff
Also both my anaconda folders are listed in PATH:
...E:\Anaconda;E:\Anaconda\Scripts;E:\Anaconda3;E:\Anaconda3\Scripts
In the FAQ, there are 8 different ways to activate and/or create an environment and none of those are as easy as this one.
So, is this a "bad" way of changing conda environments? I mean it works so far. But, why did it work?
Before my genius breakthrough:
E:\Anaconda3>conda info -a
Current conda install:
platform : win-64
conda version : 3.8.4
conda-build version : 1.8.2
python version : 2.7.8.final.0
requests version : 2.5.1
root environment : E:\Anaconda (writable)
default environment : E:\Anaconda
envs directories : E:\Anaconda\envs
package cache : E:\Anaconda\pkgs
channel URLs : http://repo.continuum.io/pkgs/free/win-64/
http://repo.continuum.io/pkgs/free/noarch/
http://repo.continuum.io/pkgs/pro/win-64/
http://repo.continuum.io/pkgs/pro/noarch/
config file : None
is foreign system : False
# conda environments:
#
root * E:\Anaconda
sys.version: 2.7.8 |Anaconda 2.1.0 (64-bit)| (default...
sys.prefix: E:\Anaconda
sys.executable: E:\Anaconda\python.exe
conda location: E:\Anaconda\lib\site-packages\conda
conda-build: E:\Anaconda\Scripts\conda-build.exe
conda-convert: E:\Anaconda\Scripts\conda-convert.exe
conda-develop: E:\Anaconda\Scripts\conda-develop.exe
conda-env: E:\Anaconda\Scripts\conda-env.exe
conda-index: E:\Anaconda\Scripts\conda-index.exe
conda-metapackage: E:\Anaconda\Scripts\conda-metapackage.exe
conda-pipbuild: E:\Anaconda\Scripts\conda-pipbuild.exe
conda-skeleton: E:\Anaconda\Scripts\conda-skeleton.exe
user site dirs:
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: <not set>
CONDA_ENVS_PATH: <not set>
PATH: C:\Program Files (x86)\RSA SecurID Token Common;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;e:\Program Files\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;e:\Program Files\AMD\ATI.ACE\Core-Static;E:\Python34_64bit;E:\Anaconda;E:\Anaconda\Scripts;E:\Anaconda3;E:\Anaconda3\Scripts
PYTHONHOME: <not set>
PYTHONPATH: <not set>
License directories:
C:\Users\Kardo Paska\.continuum
C:\Users\Kardo Paska\AppData\Roaming\Continuum
E:\Anaconda\licenses
License files (license*.txt):
Package/feature end dates:
E:\Anaconda3>
And after:
E:\Anaconda3>conda info -a
Current conda install:
platform : win-64
conda version : 3.7.0
conda-build version : 1.8.2
python version : 3.4.1.final.0
requests version : 2.4.1
root environment : E:\Anaconda3 (writable)
default environment : E:\Anaconda3
envs directories : E:\Anaconda3\envs
package cache : E:\Anaconda3\pkgs
channel URLs : http://repo.continuum.io/pkgs/free/win-64/
http://repo.continuum.io/pkgs/pro/win-64/
config file : None
is foreign system : False
# conda environments:
#
root * E:\Anaconda3
sys.version: 3.4.1 |Anaconda 2.1.0 (64-bit)| (default...
sys.prefix: E:\Anaconda3
sys.executable: E:\Anaconda3\python.exe
conda location: E:\Anaconda3\lib\site-packages\conda
conda-build: E:\Anaconda3\Scripts\conda-build.exe
conda-convert: E:\Anaconda3\Scripts\conda-convert.exe
conda-develop: E:\Anaconda3\Scripts\conda-develop.exe
conda-index: E:\Anaconda3\Scripts\conda-index.exe
conda-metapackage: E:\Anaconda3\Scripts\conda-metapackage.exe
conda-pipbuild: E:\Anaconda3\Scripts\conda-pipbuild.exe
conda-skeleton: E:\Anaconda3\Scripts\conda-skeleton.exe
user site dirs:
CIO_TEST: <not set>
CONDA_DEFAULT_ENV: <not set>
CONDA_ENVS_PATH: <not set>
PATH: C:\Program Files (x86)\RSA SecurID Token Common;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Common Files\Microsoft Shared\Windows Live;C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\WindowsPowerShell\v1.0\;e:\Program Files\ATI Technologies\ATI.ACE\Core-Static;C:\Program Files (x86)\Windows Live\Shared;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;e:\Program Files\AMD\ATI.ACE\Core-Static;E:\Python34_64bit;E:\Anaconda;E:\Anaconda\Scripts;E:\Anaconda3;E:\Anaconda3\Scripts
PYTHONHOME: <not set>
PYTHONPATH: <not set>
WARNING: could not import _license.show_info
# try:
# $ conda install -n root _license
E:\Anaconda3>
NICE!!
E:\Anaconda3>pip install py2exe
Downloading/unpacking py2exe
Installing collected packages: py2exe
Successfully installed py2exe
Cleaning up...
E:\Anaconda3>

This will work, but I wouldn't recommend it.
In the context of conda environments, "activating" an environment just means putting that environment at the front of your PATH, so that programs from that environment get picked up first when you type them. Putting multiple things on your PATH and moving them around works too, because nonexistent paths are simply skipped when the PATH is searched.
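To make that concrete, here is a minimal sketch (a simplification, not literally what activate.bat does) of what "activating" E:\Anaconda3 amounts to in a cmd session:
REM prepend the environment's directories to PATH for the current session only
set PATH=E:\Anaconda3;E:\Anaconda3\Scripts;%PATH%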
First off, you shouldn't install Anaconda twice. Rather, use conda to create additional environments.
You aren't making use of conda: One of the strengths of Anaconda is the conda package manager, which manages the environments. It would be better to pick one of your Anaconda installations as the base one and create the other as a conda environment (e.g., if you pick Anaconda3 as your base, create a Python 2 environment with conda create -n py2 python=2 anaconda). Then switch with activate py2 and deactivate (see the sketch at the end of this answer).
If you use conda, you can get confused: Each installation of Anaconda has a different conda installed. This means to manage each one, you'll need to use the conda that is in that one. Using the wrong conda could lead to issues (it's not really supported). With one Anaconda and environments, you can use conda install -n envname and it will do the right thing, because there will only be one conda.
But even ignoring that, regarding your genius idea, some issues would be:
PATH "leak through": If you have both Anaconda and Anaconda3 on your PATH and something is installed in the second but not the first, it will pick up the Anaconda3 one (because the way PATH works is it searches all the directories for the command until it finds it). On OS X and Linux source activate will remove the root environment from the PATH to prevent this from happening. This doesn't happen yet on Windows but we want to change it.
Inconvenience: Is moving a directory around really easier than typing activate envname? Also consider that if you create a new environment, you will have to add it to the PATH for this trick to work. If you make good use of conda, you'll be making many environments.
You might break the environment: This is less of a problem on Windows, at least for most packages, but it is very much a problem on OS X and Linux. Moving an environment can break it, because there are hard-coded paths in places. So things in your poopAnaconda directory might not work until you rename it back to Anaconda.
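For comparison, a minimal sketch of the recommended route (assuming you keep E:\Anaconda3 as your base install): create a Python 2 environment once with conda, then switch by activating it instead of renaming folders.
E:\>conda create -n py2 python=2 anaconda
E:\>activate py2
E:\>pip install py2exe
E:\>deactivate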

Related

CMAKE error when trying to build project inside of a docker container: can't find cudart lib

I'm using an NVIDIA Jetson TX2 running Ubuntu 18.04 with docker, nvidia-docker2 and l4t-cuda installed on the host system.
Main error when compiling:
CMake Error at /usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find CUDA (missing: CUDA_CUDART_LIBRARY) (found suitable version
"10.2", minimum required is "10.2")
Call Stack (most recent call first):
/usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindPackageHandleStandardArgs.cmake:600 (_FPHSA_FAILURE_MESSAGE)
/usr/local/lib/python3.6/dist-packages/cmake/data/share/cmake-3.25/Modules/FindCUDA.cmake:1266 (find_package_handle_standard_args)
CMakeLists.txt:17 (find_package)
CMakeLists.txt:
cmake_minimum_required (VERSION 3.5)
project(vision)
enable_testing()
# Variables scopes follow standard rules
# Variables defined here will carry over to its children, ergo subdirectories
# Setup ZED libs
find_package(ZED 3 REQUIRED)
include_directories(${ZED_INCLUDE_DIRS})
link_directories(${ZED_LIBRARY_DIR})
# Setup CUDA libs for zed and ai modules
find_package(CUDA ${ZED_CUDA_VERSION} REQUIRED)
include_directories(${CUDA_INCLUDE_DIRS})
link_directories(${CUDA_LIBRARY_DIRS})
# Setup OpenCV libs
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
# Check if OpenMP is installed
find_package(OpenMP)
checkPackage("OpenMP" "OpenMP not found, please install it to improve performances: 'sudo apt install libomp-dev'")
# TensorRT
set(TENSORRT_ROOT /usr/src/tensorrt/)
find_path(TENSORRT_INCLUDE_DIR NvInfer.h
HINTS ${TENSORRT_ROOT} PATH_SUFFIXES include/)
message(STATUS "Found TensorRT headers at ${TENSORRT_INCLUDE_DIR}")
set(MODEL_INCLUDE ../code/includes)
set(MODEL_LIB_DIR libs)
set(YAML_INCLUDE ../depends/yaml-cpp/include)
set(YAML_LIB_DIR ../depends/yaml-cpp/libs)
include_directories(${MODEL_INCLUDE} ${YAML_INCLUDE})
link_directories(${MODEL_LIB_DIR} ${YAML_LIB_DIR})
# Setup Darknet libs
#find_library(DARKNET_LIBRARY NAMES dark libdark.so libdarknet.so)
#find_package(dark REQUIRED)
# Setup HTTP libs
find_package(httplib REQUIRED)
find_package(nlohmann_json 3.2.0 REQUIRED)
# System libs
SET(SPECIAL_OS_LIBS "pthread")
link_libraries(stdc++fs)
# Optional definitions
add_definitions(-std=c++17 -g -O3)
# Add sub directories
add_subdirectory(zed_module)
add_subdirectory(ai_module)
add_subdirectory(http_api_module)
add_subdirectory(executable_module)
option(RUN_TESTS "Build the tests" off)
if (RUN_TESTS OR CMAKE_BUILD_TYPE MATCHES Debug)
add_subdirectory(test)
endif()
Steps that fail in the Dockerfile, using image stereolabs/zed:3.7-devel-jetson-jp4.6:
WORKDIR /opt
RUN git clone https://github.com/Cruiz102/Vision-Module
WORKDIR /opt/Vision-Module
RUN mkdir build-debug && cd build-debug
RUN pwd
WORKDIR /opt/Vision-Module/build-debug
RUN cmake -DCMAKE_BUILD_TYPE=Release ..
contents of /etc/docker/daemon.json
{
"runtimes": {
"nvidia": {
"path": "/usr/bin/nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
Using the Jetson, I've tried using flags to set the toolkit dir, editing daemon.json, reinstalling dependencies, changing docker images, installing and reinstalling cudart on the host, changing flags, and finishing the build in interactive mode. However, I always get the same error.
I looked into Docker some time ago, so I'm not an expert, but as far as I remember, Docker containers are like virtual machines. It doesn't matter whether your PC has any CUDA support or whether the libraries are installed on the host; they are not part of your Docker VM. And since Docker runs without a GUI, this stuff will not be installed right away.
I don't see any code to install it in your container. Are you using a base image that has CUDA support? If not, you need to install it using your Dockerfile, not on your host.
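As a rough sketch of that suggestion (the package and path names are assumptions for a typical JetPack 4.x / CUDA 10.2 setup and presume the NVIDIA apt repositories are reachable from the base image), you would install the toolkit inside the image and point CMake at it in the Dockerfile:
# install the CUDA toolkit in the image instead of relying on the host (package name assumed)
RUN apt-get update && apt-get install -y cuda-toolkit-10-2
# tell FindCUDA where the in-container toolkit lives when configuring
RUN cmake -DCMAKE_BUILD_TYPE=Release -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda-10.2 ..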

Conda: how to add packages to environment from log (not yaml)?

I'm doing an internship (= yes I'm a newbie). My supervisor told me to create a conda environment. She passed me a log file containing many packages.
A quick qwant.com search shows me how to create envs via
conda env create --file env_file.yaml
The file I was given, however, is NOT a yaml file; it is structured like so:
# packages in environment at /home/supervisors_name/.conda/envs/pancancer:
#
# Name Version Build Channel
_libgcc_mutex 0.1 main
bedtools 2.29.2 hc088bd4_0 bioconda
blas 1.0 mkl
bzip2 1.0.8 h7b6447c_0
The file contains 41 packages (44 lines including the comments above). For simplicity I'm showing only the first 7 lines.
Apart from adding the env name (see 2. below), is there a way to use the file as it is to generate an environment with the packages?
I ran the command
conda env create --file supervisors.log.txt
SpecNotFound: Environment with requirements.txt file needs a name
Where in the file should I put the name?
Alright, so it seems that they gave you the output of conda list rather than the .yml file produced by conda env export > myenv.yml. Therefore you have two options:
Ask for the proper file and then install the environment with conda's built-in pipeline.
If you do not have access to the proper file, you could do one of the following:
i) Parse it with Python into a proper .yml file and then follow the normal conda procedure.
ii) Write a bash script that installs the packages listed in the file she gave you (a rough sketch of working directly from that file follows below).
This is how I would proceed, personally :)
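Along the lines of option ii), a rough sketch (not the official route; the file and environment names are simply taken from the question) that turns the conda list columns into the name=version=build lines that conda create --file understands:
# drop the comment lines and keep name=version=build from each package row
grep -v '^#' supervisors.log.txt | awk 'NF>=3 {print $1"="$2"="$3}' > spec.txt
# recreate the environment; bioconda is added because the log lists packages from that channel
conda create -n pancancer --file spec.txt -c defaults -c bioconda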
Because there is no other SO post on this error, for people of the future: I got this error just because I named my file conda_environment.txt instead of conda_environment.yml. Looks like the yml extension is mandatory.

Make PowerShell recommendation like Linux-bash (E.g. docker)

I have Windows 10 with WSL enabled and Docker for Windows installed.
When I type docker in PowerShell and hit Tab, it suggests the corresponding folders and files in my working directory.
Here, AndroidStudioProjects is a directory in my working directory.
On the other hand,
When I type docker in WSL Ubuntu and hit Tab, it suggests the available docker commands themselves. (My expected behavior.)
I want PowerShell to also offer suggestions the way WSL Ubuntu does.
Presumably:
docker on WSL comes with tab-completion for POSIX-compatible shells such as bash, installed via the shell's initialization files.
no such support is provided for PowerShell, but there are third-party solutions - see below.
Installing PowerShell tab-completion for docker:
Install the DockerCompletion module from the PowerShell Gallery:
# Install the module in the scope of the current user.
Install-Module DockerCompletion -Scope CurrentUser
# Import the module into the session.
# Add this line to your $PROFILE file to make the tab-completion
# available in future sessions.
Import-Module DockerCompletion
Installing PowerShell tab-completion for all supported programs (CLIs):
The posh-cli meta-module - whose repo is here - offers a convenient way to automatically install tab-completion support for all locally installed CLIs for which application-specific tab-completion modules are available:
# Install the meta-module in the scope of the current user.
Install-Module posh-cli -Scope CurrentUser
# This looks for locally installed CLIs for which tab-completion
# modules are available, installs them, and adds
# Import-Module commands to your $PROFILE file.
Install-TabCompletion
See the README for more information.

How is Go executable created in Docker?

I'm pretty new to Golang & Docker development. I'm following the instructions in the official Golang DockerHub image. Here's the part I'm a bit confused about:
The part I really don't get is the last line of the Dockerfile:
CMD ["app"]
My question is, how is the "app" executable created in the first place? I created a standard hello-world.go file and added this Docker file to a directory. I don't get how building the Docker image would generate an executable called "app". Can someone explain?
Excerpt from the go command documentation: https://golang.org/cmd/go/#hdr-Compile_and_install_packages_and_dependencies
Compile and install packages and dependencies
Usage:
go install [-i] [build flags] [packages]
Install compiles and installs the packages named by the import paths.
Executables are installed in the directory named by the GOBIN
environment variable, which defaults to $GOPATH/bin or $HOME/go/bin if
the GOPATH environment variable is not set. Executables in $GOROOT are
installed in $GOROOT/bin or $GOTOOLDIR instead of $GOBIN.
When module-aware mode is disabled, other packages are installed in
the directory $GOPATH/pkg/$GOOS_$GOARCH. When module-aware mode is
enabled, other packages are built and cached but not installed.
The -i flag installs the dependencies of the named packages as well.
For more about the build flags, see 'go help build'. For more about
specifying packages, see 'go help packages'.
See also: go build, go get, go clean.
This makes an executable out of your go code.
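Put differently, here is a sketch of what happens inside the image (paths taken from the golang image's defaults, not from your exact Dockerfile): the documented example copies your source into a directory named app under GOPATH, and go install names the binary after that directory and puts it on PATH:
cd /go/src/app        # the example Dockerfile COPYs your source here (GOPATH is /go in the image)
go install -v ./...   # compiles the package and installs the binary as /go/bin/app
app                   # /go/bin is on the image's PATH, which is why CMD ["app"] finds it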

Apache Jena Commands not found

I'm trying to set up my system (Ubuntu 16.04) with Apache Jena 3.10.0, and followed the provided instructions, but I'm unable to access any of the commands that I should have access to.
For example, sparql --version and bin/sparql --version both return:
sparql: command not found
I have downloaded and extracted the files to /home/[user]/apache-jena-3.10.0, then run:
export JENA_HOME=/home/[user]/apache-jena-3.10.0
export PATH=$PATH:$JENA_HOME/bin
The command cd $JENA_HOME successfully goes to the apache-jena-3.10.0 directory.
I feel that there is a basic linux thing here that I'm missing, but I've tried a lot of things and had no luck so far. Any help would be greatly appreciated. Thanks!
The files in the download from Apache were not marked as executable. From the main apache-jena-3.10.0 directory, chmod -R 775 bin changed all the files so I could run them from the command line.
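Putting it together, a small sketch of the whole fix (using the install path from the question):
cd /home/[user]/apache-jena-3.10.0
chmod -R 775 bin                  # make the bundled scripts executable
export JENA_HOME=$PWD
export PATH=$PATH:$JENA_HOME/bin
sparql --version                  # should now print the version instead of "command not found"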
