Creating environment from yml file avoids pip installable packages?

I created a .yml file from a stable, working conda env.
The .yml file ends with the following, which points to packages that cannot be obtained from conda channels:
- pip:
  - easygui==0.98.1
  - nptdms==0.12.0
When I created a new env from this .yml file, it skipped installing these two packages entirely. Is that something the user has to do manually? If so, doesn't that defeat the purpose of creating and sharing a .yml env file?
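For reference, here is a minimal sketch of a correctly nested pip section (the env name and Python version are hypothetical); pip entries must sit one indentation level under the pip: key, and listing pip itself among the conda dependencies is commonly recommended so conda manages the pip that performs the install:
name: myenv
dependencies:
  - python=3.8
  - pip
  - pip:
      - easygui==0.98.1
      - nptdms==0.12.0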

Related

Conda: how to add packages to environment from log (not yaml)?

I'm doing an internship (= yes I'm a newbie). My supervisor told me to create a conda environment. She passed me a log file containing many packages.
A quick qwant.com search shows me how to create envs with:
conda env create --file env_file.yaml
The file I was given, however, is NOT a yaml file; it is structured like so:
# packages in environment at /home/supervisors_name/.conda/envs/pancancer:
#
# Name                    Version     Build          Channel
_libgcc_mutex             0.1         main
bedtools                  2.29.2      hc088bd4_0     bioconda
blas                      1.0         mkl
bzip2                     1.0.8       h7b6447c_0
The file contains 41 packages (44 lines including the comments above). For simplicity I'm showing only the first 7 lines.
Apart from adding the env name (see 2. below), is there a way to use the file as it is to generate an environment with the packages?
I ran the command:
conda env create --file supervisors.log.txt
SpecNotFound: Environment with requirements.txt file needs a name
Where in the file should I put the name?
Alright, so it seems they gave you the output of conda list rather than the .yml file produced by conda env export > myenv.yml. Therefore you have two options:
Ask for the proper file and then install the env with conda's built-in pipeline.
If you do not have access to the proper file, you could do one of the following:
i) Parse the file into a proper .yml file and then follow the conda procedure (see the sketch below this list).
ii) Write a bash script that installs the packages listed in the file she gave you.
This is how I would proceed, personally :)
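As a minimal sketch of option i), using awk instead of Python for brevity and assuming the four-column conda list layout shown above (filename and env name taken from the question):
# Build a bare-bones environment.yml from the `conda list` output:
# skip comment lines, keep only name=version for each package.
{
  echo "name: pancancer"
  echo "dependencies:"
  awk '!/^#/ && NF {print "  - " $1 "=" $2}' supervisors.log.txt
} > environment.yml
conda env create --file environment.yml
Note this drops the Build and Channel columns, so conda resolves from your configured channels; add a channels: section (e.g. bioconda) if some packages need it.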
Because there is no other SO post on this error, for people of the future: I got this error just because I named my file conda_environment.txt instead of conda_environment.yml. Looks like the yml extension is mandatory.

Manage the path to the venv for pipenv

Is it possible to tell pipenv where the venv is located? Perhaps there's something you can put in the Pipfile, or something for the .env file?
I fairly frequently have to recreate my venv because pipenv seemingly loses track of where it is.
For example, I started a project using Pycharm to configure the file system and create my pipenv interpreter. It created the venv in ~/.local/share/virtualenvs/my-project-ZbEWMGNA and it was able to keep track of where that interpreter was located.
Switching to a terminal window & running pipenv commands then resulted in:
Warning: No virtualenv has been created for this project yet! Consider running pipenv install first to automatically generate one for you or see pipenv install --help for further instructions.
At which point I ran pipenv install from the terminal & pointed PyCharm at that venv, so the path became ~/apps/my-project-ZbEWMGNA (which sits alongside the project files in ~/apps/my-project).
Now I've got venvs in both paths and pipenv still can't find them.
mwalker@Mac my-project % pipenv --where
/Users/mwalker/apps/my-project
mwalker@Mac my-project % pipenv --venv
No virtualenv has been created for this project yet!
Aborted!
mwalker@Mac my-project % ls ~/apps
my-project
my-project-ZbEWMGNA
mwalker@Mac my-project % ls ~/.local/share/virtualenvs
my-project-ZbEWMGNA
Yes, it is possible by setting environment variables. You can set a path for virtual environments via the WORKON_HOME environment variable, or have the virtual environment created inside the project with PIPENV_VENV_IN_PROJECT.
Pipenv automatically honors the WORKON_HOME environment variable, if you have it set — so you can tell pipenv to store your virtual environments wherever you want
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#custom-virtual-environment-location
or
PIPENV_VENV_IN_PROJECT
If set, creates .venv in your project directory.
-- https://pipenv-fork.readthedocs.io/en/latest/advanced.html#pipenv.environments.PIPENV_VENV_IN_PROJECT
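As a quick sketch of both options (the WORKON_HOME path is hypothetical):
# Keep all pipenv virtualenvs under a directory of your choosing:
export WORKON_HOME=~/apps/.venvs
# ...or instead create the virtualenv as .venv inside the project itself:
export PIPENV_VENV_IN_PROJECT=1
# Then re-create the environment from the project directory:
cd ~/apps/my-project && pipenv install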
In my experience, PyCharm will use an existing venv created by Pipenv. Otherwise it will create the venv in whatever directory PyCharm is configured to use.

Docker, how to COPY docker-specific versions of files to WORKDIR

Can Docker's COPY or RUN cp be used in a Dockerfile to overwrite a default config file with a docker-specific version of the file?
In a Rails project, our config folder has multiple versions of database.yml for different environments:
# projectname/config/
database.yml # an unused default placeholder
database_for_docker_2.yml
database_for_vagrant.yml
For different dev environments (Vagrant+VirtualBox vs Docker), during initialization of the machine/container we copy the appropriate version of the .yml to database.yml.
In the Dockerfile, after this section:
WORKDIR /my_app
RUN bundle install
COPY . /my_app
we tried:
RUN cp ./config/database_docker_2.yml /my_app/config/database.yml
but the file does not seem to be copied; the default version of database.yml is used when we spin up the container.
we then tried:
COPY ./config/database_docker_2.yml /my_app/config/database.yml
the file still does not seem to be copied; the default version of the file gets used when we spin up the container.
What DOES work is adding another entry to the volume section of docker-compose.yml specifically for that one file:
volumes:
- .:/my_app
- ./config/database_docker_2.yml:/my_app/config/database.yml
but we prefer to manage the placement of env-specific versions of files in the Dockerfile (as opposed to littering the docker-compose.yml with such env-specific files).
The command COPY ./config/database_docker_2.yml /my_app/config/database.yml probably works; there is no reason it shouldn't, assuming the source exists.
What I suspect happens is that when you are testing it, you already have the .:/my_app volume mounted, which shows you the local folder and not the in-container folder.
Run it without the volume, and I believe you will in fact see that it copied it into the container, as you intended.
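For example, assuming the image is tagged my_app (a hypothetical tag), you can inspect the file baked into the image with no volumes mounted:
docker run --rm my_app cat /my_app/config/database.yml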
On a side note:
If you are not yet locked into your way of handling multiple database configs, I would consider re-evaluating the situation and finding a solution that does not require changing database.yml for each environment. One way would be to have database.yml read an environment variable (usually DATABASE_URL); then you have one docker-compose.yml for all and one database.yml for all, and you configure each environment with environment variables alone.
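As a minimal sketch of that approach (service name, credentials and database name are hypothetical; Rails reads DATABASE_URL, and database.yml is ERB-processed):
# config/database.yml -- one file for every environment
default: &default
  adapter: postgresql
  url: <%= ENV["DATABASE_URL"] %>
development:
  <<: *default

# docker-compose.yml
services:
  web:
    environment:
      - DATABASE_URL=postgres://user:pass@db:5432/my_app_development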

pipenv virtual environment depends on current directory?

I am new to pipenv, so there might be something I'm not understanding here. However, it seems like the virtual environment that gets created depends on the current directory, which seems bad to me.
Here is what I did:
Checked out code from Github which already had Pipfile and Pipfile.lock
Did some unrelated stuff... at this point I was in a directory called /home/user/me/miniconda3/bin/
Ran /home/user/me/miniconda3/bin/pipenv run python /home/user/me/my-script-dir/my-script.py
This caused Pipenv to create a virtual environment. Output:
Creating a virtualenv for this project...
Using /home/user/me/miniconda3/bin/python (3.6.4) to create virtualenv…
Already using interpreter /home/user/me/miniconda3/bin/python
Using base prefix '/home/user/me/miniconda3'
New python executable in /home/user/me/.local/share/virtualenvs/bin-YnM8YhRk/bin/python
Installing setuptools, pip, wheel...done.
Virtualenv location: /home/user/me/.local/share/virtualenvs/bin-YnM8YhRk
Creating a Pipfile for this project…
Then I realized that I needed to run pipenv install, so this time I cd'd to the directory where the script is actually stored, /home/user/me/my-script-dir/, and ran /home/user/me/miniconda3/bin/pipenv install. Then I got this output:
Creating a virtualenv for this project…
Using /home/user/me/miniconda3/bin/python (3.6.4) to create virtualenv…
Already using interpreter /home/user/me/miniconda3/bin/python
Using base prefix '/home/user/me/miniconda3'
New python executable in /home/user/me/.local/share/virtualenvs/my-script-dir-Ex37BY7g/bin/python
Installing setuptools, pip, wheel...done.
Virtualenv location: /home/user/me/.local/share/virtualenvs/my-script-dir-Ex37BY7g
Installing dependencies from Pipfile.lock (6c24e4)…
So as you can see I actually was running the same script each time, but somehow it created two different virtual environments. And the virtual environments are named after what happened to be my current directory at the time, not the directory of the script. This seems like it would be very unwieldy unless I am missing something.
You are correct: the virtualenv Pipenv uses does depend on the current directory. Pipenv looks for a Pipfile starting from the current working directory and derives the virtualenv's name from that directory's name plus a hash of its path, which is why running it from miniconda3/bin created a venv named bin-YnM8YhRk.
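So the reliable pattern is to cd into the directory that holds (or should hold) the Pipfile before running any pipenv command (paths taken from the question):
cd /home/user/me/my-script-dir
pipenv install
pipenv run python my-script.py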

envsubst command getting stuck in a container

I have a requirement that before an application runs, some part of it needs to read the environmental variable. For this I have the following docker file
FROM nodesource/jessie:0.12.7
# install gettext for envsubst
RUN apt-get update
RUN apt-get install -y gettext-base
# cache package.json and node_modules to speed up builds
ADD package.json package.json
RUN npm install
# Add source files
ADD src src
# Substitute value for backend endpoint env var
RUN envsubst < src/js/envapp.js > src/js/app.js
ADD node_modules node_modules
EXPOSE 8000
CMD ["npm","start"]
The above envsubst line reads (or should read) an env variable $MYENV and substitutes it. But when I open the file app.js, it's empty.
I checked if the environmental variable exists in the container and it does. Any reason its value is not read and substituted?
I also tried the same command in the container and it works. It only fails when I run the image.
This is likely because $MYENV is not available to envsubst when the image is built: RUN commands execute at build time, not when you run the container.
Each RUN command runs in its own shell.
From the Docker documentations:
RUN (the command is run in a shell - /bin/sh -c - shell form)
You need to source your profile as well. For example, if the $MYENV environment variable is set in the .bashrc file, you can modify your Dockerfile like this (note that the default /bin/sh has no source builtin, so use . instead):
RUN . ~/.bashrc && envsubst < src/js/envapp.js > src/js/app.js
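Alternatively (going beyond the suggestion above), if the value is known at build time you can pass it in as a build argument so it is present in the environment of every later RUN; ARG and ENV are standard Dockerfile instructions:
# In the Dockerfile:
ARG MYENV
ENV MYENV=${MYENV}
RUN envsubst < src/js/envapp.js > src/js/app.js

# At build time (the example value is hypothetical):
docker build --build-arg MYENV=https://backend.example.com .
Note that ENV values persist into the final image, so avoid this pattern for secrets.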
I encountered the same issue, and after much research and fishing through the internet I managed to find a few workarounds. Below I'll list them along with the risks identifiable at the time of this answer.
Solutions:
1.) apt-get install -y gettext: gettext is a standard GNU localization library, and one of the tools it ships is envsubst. I can confirm that it works for Docker ubuntu:latest and it should work for every flavored version.
2.) npm install envsub: depending on the use case, this approach is better suited to Node-based projects.
3.) The custom envsubst CLI project: in my opinion it seems a bit overkill to download a custom CLI from a random stranger, but it's another option.
Risk:
apt-get install -y gettext:
1.) gettext: this approach would NOT be ideal for VMs because, as with any package library, it requires maintenance and updates as time passes. However, this isn't necessary for Docker, because once a container is initialized and up and running we can create a bash script to add the package, substitute env vars and then remove the package.
2.) It's a bad idea for VMs because it can be used to execute arbitrary code.
npm install envsub:
1.) envsub: it requires package updates, and this approach wouldn't be ideal if you're dealing with a different stack and not using Node.js.
NOTE:
There's also a PHP version for those developing a PHP application; it seems to work with PHP's CLI if you need a custom environment.
Resources:
GetText package library info: https://www.gnu.org/software/gettext/
GetText risk: https://ubuntu.com/security/notices/USN-3815-2
PHP GetText: apt-get install -y php-gettext
Custom envsubst CLI: https://github.com/a8m/envsubst
I suggest that, since you are using Node, you use the npm envsub module.
This module is well tested and is developed with docker in mind.
It avoids the need for relying on other dependencies when you already have the full Node arsenal at your fingertips.
envsub is described as
envsub is envsubst for NodeJS
NodeJS global CLI module providing file-level environment variable substitution via Handlebars
I am the author of the package. I think you will enjoy it.
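As a sketch of how that might look in the Dockerfile above (the exact CLI arguments are my assumption; check envsub --help for the supported substitution syntaxes):
RUN npm install -g envsub
RUN envsub src/js/envapp.js src/js/app.js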
I had some issues with envsubst in Docker.
For some reason envsubst doesn't work when I redirect the output to the same file: the shell truncates file.conf for the output redirection before envsubst ever reads it, so the input is already empty. For example, this does not work:
RUN envsubst < file.conf > file.conf
But when I tried to use a temp file, the issue disappeared:
RUN envsubst < file.conf > file.conf.temp && cp -f file.conf.temp file.conf
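If the temp file feels clunky, moreutils' sponge soaks up all of stdin before writing to the file, which sidesteps the truncation (assuming moreutils is installable in your base image):
RUN apt-get update && apt-get install -y moreutils
RUN envsubst < file.conf | sponge file.conf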
