I'm doing an internship (= yes I'm a newbie). My supervisor told me to create a conda environment. She passed me a log file containing many packages.
A quick qwant.com search shows me how to create envs via the command
conda env create --file env_file.yaml
The file I was given is, however, NOT a YAML file; it is structured like so:
# packages in environment at /home/supervisors_name/.conda/envs/pancancer:
#
# Name                    Version          Build         Channel
_libgcc_mutex             0.1              main
bedtools                  2.29.2           hc088bd4_0    bioconda
blas                      1.0              mkl
bzip2                     1.0.8            h7b6447c_0
The file contains 41 packages (44 lines including the comments above). For simplicity I'm showing only the first 7 lines.
Apart from adding the env name (see 2. below), is there a way to use the file as it is to generate an environment with the packages?
I ran the command
conda env create --file supervisors.log.txt
and got:
SpecNotFound: Environment with requirements.txt file needs a name
Where in the file should I put the name?
Alright, so, it seems that they gave you the output of conda list rather than the .yml file produced by conda with conda env export > myenv.yml. Therefore you have two solutions:
You ask for the proper file and then proceed to install the env with conda's built-in pipeline.
If you do not have any access to the proper file, you could do one of the following:
i) Parse it with Python into a proper .yml file and then do the conda procedure (a rough sketch is shown below).
ii) Write a bash script that installs the packages listed in the file she gave you.
This is how I would proceed, personally :)
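For option (i), here is a minimal Python sketch of what that parsing could look like; the file names and the environment name are placeholders taken from the question, and pip-installed packages are not handled:

# Rough sketch: convert the output of `conda list` into a minimal
# environment.yml that `conda env create --file` accepts.
# The file names and the environment name below are placeholders.
input_file = "supervisors.log.txt"
output_file = "environment.yml"
env_name = "pancancer"

channels = ["defaults"]
dependencies = []

with open(input_file) as fh:
    for line in fh:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip the comment/header lines
        parts = line.split()
        name, version = parts[0], parts[1]
        if len(parts) >= 4 and parts[3] not in channels:
            channels.append(parts[3])  # e.g. bioconda
        dependencies.append(name + "=" + version)

with open(output_file, "w") as fh:
    fh.write("name: " + env_name + "\n")
    fh.write("channels:\n")
    for channel in channels:
        fh.write("  - " + channel + "\n")
    fh.write("dependencies:\n")
    for dep in dependencies:
        fh.write("  - " + dep + "\n")

The resulting environment.yml can then be passed to conda env create --file environment.yml, and the name: field at the top is where the environment name goes, which also answers the question about where to put the name.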
Because there is no other SO post on this error, for people of the future: I got this error just because I named my file conda_environment.txt instead of conda_environment.yml. Looks like the yml extension is mandatory.
Whenever we try to run the command pip install nltk or pip install numpy, we get the error that pip is not recognized as an internal or external command, and then we add pip to the PATH. I want to know what PATH is and why we add entries to it. Can anyone help please?
From the Linux Information Project:
PATH is an environmental variable in Linux and other Unix-like operating systems that tells the shell which directories to search for executable files (i.e., ready-to-run programs) in response to commands issued by a user. It increases both the convenience and the safety of such operating systems and is widely considered to be the single most important environmental variable.
So basically it's a list of directories in which the shell looks to find commands.
Let's say your pip is installed at /usr/local/bin/pip, and /usr/local/bin/ is not in your PATH variable; then the shell won't be able to find pip.
If you're using a Python virtual environment, like python3 -m venv my-venv, you usually have to source bin/activate under my-venv, which adds all scripts under my-venv/bin to your PATH variable for the current shell. Then your shell will be able to find the virtual-environment-specific scripts.
Since PATH is set by the login shell, when you close the current shell and open a new one, the variable gets reset. Then you have to call source bin/activate under my-venv again to get the shell to look into your virtual environment.
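To make that lookup concrete, here is a tiny Python sketch (not how the shell is actually implemented, just the same idea) that walks the directories listed in PATH looking for an executable:

import os

def find_command(name):
    # Return the first executable called `name` found on PATH, similar to `which`.
    for directory in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None  # the shell's equivalent of "pip is not recognized..."

print(find_command("pip"))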
I created a .yml file from a stable, working conda env.
The .yml file states the following at the end, which basically points to other packages that cannot be obtained from conda channels:
- pip:
  - easygui==0.98.1
  - nptdms==0.12.0
When creating a new env using this .yml file, it completely skipped installing these two packages. Is that something the user has to do manually? If yes, doesn't that defeat the purpose of creating and sharing a .yml env file?
I have monkeyrunner set up and am trying to set up AndroidViewClient as well. I followed the tutorial at https://github.com/dtmilano/AndroidViewClient/wiki, doing a pip install, and added the env path to my bash profile using the code:
export ANDROID_VIEW_CLIENT_HOME=/Users/me/Library/Android/sdk/tools/bin/AndroidViewClient-master
I made sure to re-source my bash profile. However, when I run python check-import.py --debug from the /examples folder, I receive the error:
File "check-import.py", line 22
print("WARNING: '%s' is not a directory and is pointed by ANDROID_VIEW_CLIENT_HOME environment variable" % avcd, file=sys.stderr)
^
SyntaxError: invalid syntax
I'm not very familiar with environmental variables so I could have easily made a mistake that I didn't catch.
If you installed androidviewclient via pip like
pip install androidviewclient
and it didn't give you any errors, then androidviewclient should be installed and available to your scripts via import, or on the command line via its commands (i.e. dump, culebra).
You don't need any environment variables.
Then when you run
./check-import.py --debug
you will see your python path printed and then
OK
It seems you have changed this line https://github.com/dtmilano/AndroidViewClient/blob/master/examples/check-import.py#L22
AndroidViewClient/culebra requires python 2.7.x, so if you have a different version on your system you can install https://github.com/pyenv/pyenv or another virtual environment tool.
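For context on the SyntaxError itself: print(..., file=sys.stderr) is Python 3-style syntax, and Python 2.7 only parses it if the module starts with the print_function future import. The snippet below shows the general pattern with an illustrative value, not the actual contents of check-import.py:

# Without the next line, print(..., file=sys.stderr) is a SyntaxError
# under Python 2.7, which matches the error shown in the question.
from __future__ import print_function
import sys

avcd = "/some/path"  # illustrative value only
print("WARNING: '%s' is not a directory" % avcd, file=sys.stderr)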
I want to package up a few shell scripts + support files into a homebrew formula that installs these scripts somewhere on the user $PATH. I will serve the formula from my own tap.
Reading through the Formula Cookbook, the examples seem to assume a cmake or autotools build system in the upstream project. What if my project only consists of a few scripts and config files? Should I just manually copy those into #{prefix}/ in the Formula?
There are two cases here:
Standalone Scripts
Install them under bin using bin.install. You can optionally rename them, e.g. to strip the extension:
class MyFormula < Formula
# ...
def install
# move 'myscript.sh' under #{prefix}/bin/
bin.install "myscript.sh"
# OR move 'myscript.sh' to #{prefix}/bin/mybettername
bin.install "myscript.sh" => "mybettername"
# OR move *.sh under bin/
bin.install Dir["*.sh"]
end
end
Scripts with Support Files
This case is tricky because you need to get all the paths right. The simplest way is to install everything under #{libexec}/ then write exec scripts under #{bin}/. That’s a very common pattern in Homebrew formulae.
class MyFormula < Formula
# ...
def install
# Move everything under #{libexec}/
libexec.install Dir["*"]
# Then write executables under #{bin}/
bin.write_exec_script (libexec/"myscript.sh")
end
end
Given a tarball (or a git repo) that contains the following content:
script.sh
supportfile.txt
The above formula will create the following hierarchy:
#{prefix}/
    libexec/
        script.sh
        supportfile.txt
    bin/
        script.sh
Homebrew creates that #{prefix}/bin/script.sh with the following content:
#!/bin/bash
exec "#{libexec}/script.sh" "$#"
This means that your script can expect to have a support file in its own directory while not polluting bin/ and not making any assumption regarding the install path (e.g. you don’t need to use things like ../libexec/supportfile.txt in your script).
See this answer of mine for an example with a Ruby script and that one for an example with manpages.
Note Homebrew also has other helpers, e.g. to not only write an exec script but also set environment variables or execute a .jar.
I have a requirement that before an application runs, some part of it needs to read an environment variable. For this I have the following Dockerfile:
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The above envsubst line reads (should read) an env variable $MYENV and substitutes it. But when I open the file app.js, it's empty.
I checked if the environmental variable exists in the container and it does. Any reason its value is not read and substituted?
I also tried the same command in the container and it works. It just does not work when I run the image.
This is likely because $MYENV is not available for envsubst when you run the image.
Each RUN command runs in its own shell.
From the Docker documentation:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this:
RUN source ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
I encountered the same issue, and after much research and fishing through the internet I managed to find a few workarounds. Below I'll list them along with the identifiable risks at the time of this answer.
Solutions:
1.) apt-get install -y gettext: it's a standard GNU localization library, and one of the tools it includes is envsubst. I can confirm that it works in the docker ubuntu:latest image and it should work for every flavored version.
2.) npm install envsub, depending on the use case; this approach is better suited to Node-based projects.
3.) The standalone envsubst CLI project (see Resources below); in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's also another option.
Risk:
apt-get install -y gettext:
1.) gettext: this approach would NOT be ideal for VMs since, as with any package library, it requires maintenance and updates as time passes. However, this isn't necessary for docker, because once a container is initialized and up and running we can create a bash script to add the package, substitute env vars and then remove the package.
2.) It's a bad idea for VMs because it can be used to execute arbitrary code.
npm install envsub
1.) envsub: it requires keeping the package updated, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
NOTE:
There's also a PHP version for those developing a PHP application, and it seems to work with PHP's CLI if you need a custom environment.
Resources:
GetText package library info: https://www.gnu.org/software/gettext/
GetText Risk - https://ubuntu.com/security/notices/USN-3815-2
PHP-GetText - apt-get install -y php-gettext
Custom envsubst cli: https://github.com/a8m/envsubst
I suggest that since you are using Node, you use the npm envsub module.
This module is well tested and is developed with docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I try to write the output back to the same file (the > redirection truncates file.conf before envsubst even reads it, so the input is already empty). For example, this is not working:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
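As an alternative to the temp file, the same substitution can be sketched in a few lines of Python, which works in place because the whole file is read into memory before anything is written back; note that os.path.expandvars is only a rough stand-in for envsubst (it expands every $VAR it finds, with no variable whitelist), and file.conf is a placeholder name:

import os

path = "file.conf"  # placeholder name

# Read everything first, then substitute, then write back.
# The read completes before the write starts, so reusing the same
# file is safe here, unlike envsubst < file.conf > file.conf in the shell.
with open(path) as fh:
    content = fh.read()

with open(path, "w") as fh:
    fh.write(os.path.expandvars(content))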