Support python2 & python3 using multiple Pipfiles - pipenv

Loving pipenv so far; however, I'm wondering about using it to support both python2.7 and python3.7.
I'm writing a python package that I want to distribute via an internal pypi repository, and I'd like to support both python2.7 and python3.7 (so far I've developed against python2.7). Given that I have to specify the python version within the Pipfile, the logical conclusion I draw is that I need multiple Pipfiles.
I'm thinking I shall structure my project like so:
root
|
|-python2.7
| |-Pipfile
|-python3.7
| |-Pipfile
Any thoughts on that so far? Is that what others would do?
Assuming I do that, I'll need to specify which Pipfile to use when running tests and building the package. According to https://pipenv.kennethreitz.org/en/latest/advanced/#configuration-with-environment-variables I can use the env var PIPENV_PIPFILE to specify the Pipfile location. This is fine; I'm just surprised there is no option to specify the Pipfile location on the command line (e.g. pipenv --pipfile-location). Is it worth my requesting such a feature?
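For concreteness, I'd expect to drive it something like this (paths match the hypothetical layout above, and the commands are only a sketch; pytest as the test runner is just an example):
# python2.7
PIPENV_PIPFILE=python2.7/Pipfile pipenv install --dev
PIPENV_PIPFILE=python2.7/Pipfile pipenv run pytest
# python3.7
PIPENV_PIPFILE=python3.7/Pipfile pipenv install --dev
PIPENV_PIPFILE=python3.7/Pipfile pipenv run pytest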
Any and all comment on the above is welcome.

Hmmm...I had a rethink after reading:
The inclusion of [requires] python_version = "3.6" specifies that your application requires this version of Python, and will be used automatically when running pipenv install against this Pipfile in the future (e.g. on other machines). If this is not true, feel free to simply remove this section.
https://pipenv.readthedocs.io/en/latest/basics/#specifying-versions-of-python
I now have just one Pipfile. I build my code in docker containers, so I choose an image with the correct version of python (https://hub.docker.com/_/python?tab=tags) as appropriate.
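For example, the same Pipfile can be exercised against both interpreters just by switching the base image. A rough sketch (image tags and commands are illustrative, and --skip-lock availability depends on the pipenv version):
docker run --rm -v "$PWD":/src -w /src python:2.7 \
    sh -c "pip install pipenv && pipenv install --dev --skip-lock && pipenv run pytest"
docker run --rm -v "$PWD":/src -w /src python:3.7 \
    sh -c "pip install pipenv && pipenv install --dev --skip-lock && pipenv run pytest"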
Also:
Do not keep Pipfile.lock in version control if multiple versions of Python are being targeted.
https://pipenv.readthedocs.io/en/latest/basics/#general-recommendations-version-control

Related

What files or directories of a release are the bare minimum to run a release?

Let's say I have a completely new VPS which I've just rolled out and haven't installed anything on yet.
And I've compiled and built a production release of a Phoenix application on my local machine, which is identical to the VPS Linux distribution- and version-wise.
In the directory _build/prod/rel/my_app123, four subdirectories have been generated:
bin
erts-12.3
lib
releases
Will copying the contents of rel/my_app123/, that is, these four subdirectories, over to the VPS be enough to run the application?
Or will I have to install something extra as well, such as Elixir and Erlang?
What about the production dependencies from mix.exs? Or have these been included and compiled into the release?
P.S. Assume that my web application has no "js", "css" and the like files, and doesn't use a database.
When you run mix release, it bundles into the release directory all of your Elixir/Erlang dependencies for the MIX_ENV in question, the Erlang BEAM runtime/VM that you were using in your build, and any files that you specify in your mix project in mix.exs.
Because the BEAM runtime and code that bootstraps loading your code are included in the release, you won't need to install Elixir or Erlang on the target machine.
Things that are not included:
any non-Elixir dependencies. For example, if you rely on openssl, you'll need to make sure you have a binary-compatible version of that installed on the machine you plan to run on (typically, the equivalent major version release).
Portable bytecode. The BEAM isn't like the Java VM. The compiled BEAM code needs to run on a substantially similar architecture: build on an Arm64 machine for deployment on an Arm64 virtual machine, or on x86 for Intel-compatible hardware, for instance. And it's probably best to use the same major OS distribution. There may be cases where "any Linux + same CPU architecture" is fine, but, for example, building on a Windows or macOS install of Elixir/OTP and deploying on Linux is a non-starter; you'd need to use a sufficiently similar OS.
As an example, one of my projects has its releases built on Alpine using Docker, so we only really have to worry about CPU compatibility. In our case we do need to make sure some external non-Elixir dependencies our app binds to are included on the docker image.
RUN apk add --no-cache libstdc++ openssl ncurses-libs wkhtmltopdf xvfb \
fontconfig \
freetype \
ttf-dejavu
(Ignore the fact that wkhtmltopdf is kind of deprecated; we're working on it. But for now it's a non-Elixir dependency we rely on.)
If you're building for, say, an EC2 instance and not using Docker, you'd just need to make sure your release is built on a similar OS to what you're using for production, and make sure the production AMI (image) has those non-Elixir dependencies on it, or will at the time of deployment, perhaps using apt or another package manager. For a VPS, the solution for non-Elixir dependencies will depend on whether they offer the option of customizing the base machine image (maybe with Packer or Ansible).
Since you seem to have been a bit confused about it in the comments: yes, MIX_ENV=prod mix release will build all of your production Elixir/Erlang dependencies and include them in the _build/prod folder.
I include the whole ./prod folder in our release, but it looks like the protocol consolidation binaries and the lib folder's .beam files are all in the rel folder, so that's a bit unnecessary.
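To answer the original question directly: copying that directory is essentially the whole deployment. A rough sketch (release name, host, and paths are made up):
# on the build machine (same OS family and CPU architecture as the VPS)
MIX_ENV=prod mix release
# ship the four subdirectories (bin, erts-*, lib, releases) to the VPS
rsync -a _build/prod/rel/my_app123/ deploy@vps:/opt/my_app123/
# start it on the VPS; no Elixir or Erlang install is needed there
ssh deploy@vps '/opt/my_app123/bin/my_app123 start'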
If you do a default build, the target will be inside your _build directory, with sub-directories for the config environment and your application, e.g. _build/dev/rel/your_app/. That directory should contain everything you need to run your app -- the prompt after running mix release provides some clues for this when it says something like:
Release created at _build/dev/rel/your_app!
I find it more useful, however, to zip up the app into a single portable file (and yes, I agree that the details about how to do this are not necessarily the first things you see when reading about Elixir releases). The trick is to customize your mix.exs by fleshing out the releases option -- this is usually done via a dedicated private function but the organization of how you supply the options is up to you.
What I find is often useful is the generation of a single zipped .tar.gz file. This can be accomplished by specifying the include_executables_for option along with steps. It looks something like this:
# mix.exs
defmodule YourApp.MixProject do
  use Mix.Project

  def project do
    [
      # ...
      releases: releases()
      # ...
    ]
  end

  defp releases do
    [
      my_app: [
        include_executables_for: [:unix],
        steps: [:assemble, :tar]
      ]
    ]
  end
end
When you configure your application this way, running mix release will generate a nice portable file containing your app with everything it needs. Unzipping this file is educational for understanding everything your app needs. By default this file will be created at a location like _build/dev/yourapp-1.0.0.tar.gz. You can configure the build path by specifying a path for your app. See Mix.Release for more options.
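As a hedged usage sketch (the version number, env, and host are illustrative), deploying that tarball is then just a copy and extract:
MIX_ENV=prod mix release
scp _build/prod/my_app-0.1.0.tar.gz deploy@vps:/opt/
ssh deploy@vps 'mkdir -p /opt/my_app && tar xzf /opt/my_app-0.1.0.tar.gz -C /opt/my_app && /opt/my_app/bin/my_app start'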

How is Nix able to install individual NPM packages as standalone software?

CAVEAT: If you would like to use Serverless Framework with Nix/NixOS, this is not the way to do it: the package you end up with is outdated, and (as stated below) it probably won't work anyway. See thread on NixOS Discourse.
Wanted to try out Serverless via nix-shell so I looked it up, ran the command
nix-shell -v -p nodePackages.serverless
and it works flawlessly [1] (struck through; see the footnote).
What makes this possible without requiring me to install Node manually to be able to run npm install -g serverless? (E.g., Is the node_modules folder somewhere in the Nix store? What happens if I nix-shell another Node package - will they share that same directory?)
[1]: It does not... See this Reddit thread; probably a setuid issue. Still interested in the behind-the-scenes stuff though.
This question is more like a todo because I really would like to figure it out myself but don't have the time for it right now...
This is possible because it was packaged with node2nix. This tool generates Nix expressions that fetch the various packages and put them in a node_modules directory.
Indeed it's not perfect, and some packages need some extra fixing up to make them work well. The node2nix tooling could 'learn' from the cabal2nix integration in Nixpkgs to improve the quality of packaging and the Nixpkgs developer experience.
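Roughly, the workflow behind it looks like this (a sketch from memory; double-check the flags against the node2nix README, and the package name is just the one from the question):
# describe the npm packages you want
echo '[ "serverless" ]' > node-packages.json
# generate Nix expressions (default.nix, node-packages.nix, node-env.nix)
nix-shell -p nodePackages.node2nix --run 'node2nix -i node-packages.json'
# build the package; its node_modules tree ends up in the Nix store
nix-build -A serverless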

Is it possible to create a custom cdk init template to leverage pipenv for my python project?

I would like to utilize pipenv as my virtual environment manager and dependency manager for my Python cdk projects when running 'cdk init'. I read that you can specify a 'custom' application template but could not find documentation on creating one. Is it possible, and can the virtual environment/dependency manager be controlled using this feature?
I would like to be able to run 'cdk init hello-world --language python' and have the scaffolding for the project be generated BUT using pipenv.
It's not possible to do that without modifying the source code for the CDK package itself. You likely won't want to manage your own divergent version of the standard package.
I've shoe-horned CDK to work with Pipenv a couple of times, and it's more work than it's worth at this point. The problem is that Pipenv forces the . delimiter in the package name to a -; pipenv install aws-cdk.aws-rds is listed as aws-cdk-aws-rds in the Pipfile, and the package installations don't actually work.
There's an open issue on the repo for this though (https://github.com/aws/aws-cdk/issues/3671), so you could +1 there in hopes that they can address it. It really is an issue with Pipenv though.
Following the link from Scott for the open issue, it looks like this works now, provided the package name is in quotes.
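For anyone landing here later, the workaround looks roughly like this (package names are only examples; quoting the dotted names in the Pipfile keeps pipenv from rewriting them):
# In the Pipfile:
#
#   [packages]
#   "aws-cdk.core" = "*"
#   "aws-cdk.aws-rds" = "*"
#
# then install as usual
pipenv install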

Is it possible to avoid updating drupal core with composer update?

I'm trying to build a docker container where the Dockerfile installs a specific version of Drupal, I copy over custom copies of composer.json/composer.lock, and I then run composer update to download the contributed modules specified in those composer files. I know that ideally composer would also control core, but for this project I'm trying to avoid that.
The problem I'm having is that composer update seems to also reinstall Drupal, whereas I want the Dockerfile to be in control of that; I'd like composer to just manage the modules.
Is this something I could do by modifying the composer files (so far my tests have not worked)? It seems you can't specify a package for composer to ignore, and while I see you can specify specific packages to update, that's not really a viable solution here.
Thanks
OK, it looks like the issue was that the composer.json/composer.lock files I was adding before running composer update were initially created using composer create-project drupal-composer/drupal-project, which installed core and thus added it to those files.
It seems that just reinstalling the contributed modules with composer in a fresh Drupal site (i.e. with simplified composer files) might be the answer.
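A rough sketch of that idea (module names and the core version are illustrative, and this assumes Drupal 8/9 with the packages.drupal.org endpoint): the simplified composer.json lists only contributed modules, and, as far as I understand it, a "replace" entry tells composer that core is already provided (by the Dockerfile), so it never tries to reinstall it.
# composer.json (simplified):
#   {
#     "repositories": [
#       { "type": "composer", "url": "https://packages.drupal.org/8" }
#     ],
#     "require": {
#       "drupal/token": "^1.9",
#       "drupal/pathauto": "^1.8"
#     },
#     "replace": {
#       "drupal/core": "8.9.20"
#     }
#   }
#
# then in the Dockerfile, only the modules get resolved and installed
composer update --no-dev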

How do I install a project built with bazel?

I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?
Unless you want to build a specific package format (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that generates an executable (e.g. a shell script) that does the install for you (e.g. checksums the installed files to detect changes and only copies the files that have changed). You would have to use the extension language to do it; it would look similar to what the Docker rules do to load an image with the incremental loader. See the sketch below.
Addition: I forgot to say that the install itself would be run using the run command: bazel run install, if the rule is named install in the top-level BUILD file.
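As a very rough sketch of that idea (target names, paths, and the BUILD wiring are all illustrative, and the runfiles layout depends on your packages):
#!/bin/sh
# install.sh -- run as: bazel run //:install -- /desired/prefix
#
# Wired up in the top-level BUILD file with something like:
#   sh_binary(
#       name = "install",
#       srcs = ["install.sh"],
#       data = [":mylib", ":mylib_hdrs", "LICENSE"],
#   )
set -eu
PREFIX="${1:?usage: bazel run //:install -- <prefix>}"

copy_if_changed() {
  # only copy when the contents differ, so unchanged files keep their timestamps
  if ! cmp -s "$1" "$2"; then
    mkdir -p "$(dirname "$2")"
    cp "$1" "$2"
  fi
}

# bazel run starts the script inside its runfiles tree, so data files are
# available at workspace-relative paths (adjust to your layout)
copy_if_changed libmylib.a      "$PREFIX/lib/libmylib.a"
copy_if_changed include/mylib.h "$PREFIX/include/mylib.h"
copy_if_changed LICENSE         "$PREFIX/share/doc/mylib/LICENSE"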
