CAVEAT: If you would like to use Serverless Framework with Nix/NixOS, this is not the way to do it: the package you end up with is outdated, and (as stated below) it probably won't work anyway. See thread on NixOS Discourse.
Wanted to try out Serverless via nix-shell so I looked it up, ran the command
nix-shell -v -p nodePackages.serverless
~~and it works flawlessly.~~ [1]
What makes this possible without requiring me to install Node manually to be able to run npm install -g serverless? (E.g., Is the node_modules folder somewhere in the Nix store? What happens if I nix-shell another Node package - will they share that same directory?)
[1]: It does not... See this Reddit thread; probably a setuid issue. Still interested in the behind-the-scenes stuff, though.
This question is more like a todo because I really would like to figure it out myself but don't have the time for it right now...
This is possible because it was packaged with node2nix. This tool generates expressions that fetch the various packages and put them in a node_modules directory.
Indeed, it's not perfect, and some packages need some extra fixing up to make them work well. The node2nix tooling could 'learn' from the cabal2nix integration in Nixpkgs to improve both the quality of packaging and the Nixpkgs developer experience.
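As a rough sketch of how that packaging works (the package list and attribute name below are just what I'd expect, not taken from Nixpkgs itself), node2nix reads a JSON list of npm package names and emits Nix expressions you can then build:
echo '[ "serverless" ]' > node-packages.json
node2nix -i node-packages.json    # generates default.nix, node-packages.nix, node-env.nix
nix-build -A serverless           # fetches the npm dependencies and builds them into the Nix store
Each package built this way ends up with its own node_modules tree inside its own store path, so (to the question above) two Node packages pulled in by separate nix-shell invocations shouldn't share a mutable node_modules directory.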
Related
I would like to utilize pipenv as my virtual environment manager and for my dependency management for my Python cdk projects, upon running 'cdk init'. I read that you can specify a 'custom' application template but could not find documentation on creating one. Is it possible and can the virtual environment/dependency manager be controlled using this feature?
I would like to be able to run 'cdk init hello-world --language python' and have the scaffolding for the project be generated BUT using pipenv.
It's not possible to do that without modifying the source code for the CDK package itself. You likely won't want to manage your own divergent version of the standard package.
I've shoehorned CDK to work with Pipenv a couple of times, and it's more work than it's worth at this point. The problem is that Pipenv forces the . delimiter in the package name to a -; pipenv install aws-cdk.aws-rds is listed as aws-cdk-aws-rds in the Pipfile, and the package installations don't actually work.
There's an open issue on the repo for this though (https://github.com/aws/aws-cdk/issues/3671), so you could +1 there in hopes that they can address it. It really is an issue with Pipenv though.
Following the link from Scott for the open issue, it looks like this works now, provided the package name is in quotes.
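For reference, the quoted form would look something like this (aws-cdk.aws-rds is just the module from the example above; I'm going by the issue thread here, not something I've re-verified):
pipenv install "aws-cdk.aws-rds"    # quoting reportedly keeps Pipenv from mangling the dotted name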
I have cloned and built the waf script using:
./waf-light configure
Then, to build my project (provided by Gomspace), I need to add waf and eclipse.py to my path. So far I haven't found anything better than this setenv script:
WAFROOT=~/git/waf
export PYTHONPATH=$WAFROOT/waflib/extras/:$PYTHONPATH
export PATH=$WAFROOT:$PATH
Called with:
source setenv
This is a somewhat ugly solution. Is there a more elegant way to install waf?
You don't install waf. The command you found correctly builds waf: ./waf-light configure build. Then, for each project you create, you put the built waf script into that project's root directory (a sketch of that workflow follows the trade-offs below). I can't find a reference, but this is the way waf's primary author, Thomas Nagy, wants the tool to be used. Projects that repackage waf to make the tool installable aren't "officially sanctioned."
There are advantages and disadvantages with non-installation:
Disadvantages:
You have to add the semi-binary, roughly 100 kB waf file to your repository.
Because the file contains binary code, people can have legal objections to distributing it.
Advantages:
It doesn't matter if new versions of waf break the old API.
Users don't need to install waf before compiling the project -- having Python on the system is enough.
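In practice, the embed-per-project workflow described above looks roughly like this (the paths and the project name are placeholders):
# Build waf once from a clone of the waf sources:
cd ~/git/waf
python3 ./waf-light configure build    # produces the self-contained waf script
# Then, for each project, copy the script next to the project's wscript and use it:
cp ~/git/waf/waf ~/myproject/
cd ~/myproject
./waf configure
./waf build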
Fedora (at least Fedora 22) has a yum package for waf, which shows that a system install of waf is possible, albeit with a hack.
After you run something like python3 ./waf-light configure build, you'll get a file called waf that's actually a Python script with some binary data at the end. If you put it into /usr/bin and run it as non-root, you'll get an error because it fails to create a directory in /usr/bin. If you run it as root, you'll get the new directory and /usr/bin/waf runs normally.
Here's the trick that I learned from examining the find_lib() function in the waf Python script.
Copy the waf to /usr/bin/waf
As root, run /usr/bin/waf. Notice that it creates a directory. You'll see something like /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18
mv that directory to /usr/lib, dropping the . in the directory name, e.g. mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18
If you want to use waf with Python 3, repeat steps 2-3, running the Python script /usr/bin/waf under Python 3. Under Python 3, the directory names will start with .waf3-/waf3- instead of .waf-/waf-.
(Optional) Remove the binary data at the end of /usr/bin/waf.
Now, non-root should be able to just use /usr/bin/waf.
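Condensed into shell commands, the steps above look roughly like this (the version-hash suffix is whatever your build produces; the one below is just the example from step 2):
# As root:
cp waf /usr/bin/waf
/usr/bin/waf      # first run unpacks the hidden library directory into /usr/bin
mv /usr/bin/.waf-2.0.19-b2f63c807a4215294bf6005410c74c18 \
   /usr/lib/waf-2.0.19-b2f63c807a4215294bf6005410c74c18    # note the leading dot is dropped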
That said, here's something to consider, in line with another answer here: I believe waf's author intended waf to be embedded in projects so that each project can use its own version of waf, without fear that a project will fail to build when newer versions of waf come out. Thus, the one-global-version use case seems not to be officially supported.
I am working on a project that is transitioning from CMake to Bazel. One critical feature that we are apparently losing in the bargain is the ability to install the project, so that it can be used by other (not necessarily Bazel) projects.
AFAICT, there is currently no built-in support for installing a project?!
I need to create a portable (must work on at least Linux and MacOS) way to install the project. Specifically:
I need to be able to specify libraries, headers, executables, and other files (e.g. LICENSE) that need to be installed.
The user needs to be able to specify an absolute prefix where things should be installed.
I really, really should be able to execute the "install" step more than once, giving different prefixes each time, without Bazel getting confused (i.e. it must not try to "remember" what files it already installed, or if it does, must understand when the prefix is different from last time).
Libraries should be installed to the right place (e.g. lib64), or at least it should be possible for the user to specify the correct libdir.
The install step MUST NOT touch the time stamp on any file from a previous install that has not changed. (Ideally, Bazel itself would handle this; using the install command is tricky and has potential portability issues. Note platform requirements, above.)
What is the best way to go about doing this?
Unless you want to build a specific package format (e.g. deb or rpm), you probably want to create an executable rule that does the install for you.
You can create a rule that produces an executable (e.g. a shell script) that does the install for you (e.g. checksums the installed files to check whether anything has changed, and copies only the files that actually changed). You would have to use the extension language to do this; it would look similar to what the Docker rules do to load an image with the incremental loader.
Addition: I forgot to say that the install itself is run using the run command: bazel run install, if the rule is named install in the top-level BUILD file.
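As a minimal sketch of that idea (not an official Bazel feature; the target name install comes from the answer above, while sh_binary and every path below are my own placeholders), the executable could be a shell script exposed through an sh_binary named install, so that bazel run install -- /some/prefix copies artifacts only when their contents changed and leaves unchanged files' timestamps alone:
#!/usr/bin/env bash
# install.sh -- hypothetical installer, run via `bazel run install -- /some/prefix`
set -euo pipefail

PREFIX="${1:-/usr/local}"

copy_if_changed() {
  # Compare contents first, so an unchanged file keeps its previous timestamp.
  local src="$1" dst="$2"
  mkdir -p "$(dirname "$dst")"
  if [ ! -e "$dst" ] || ! cmp -s "$src" "$dst"; then
    cp "$src" "$dst"
  fi
}

# Placeholder artifacts; a real rule would generate this list from the
# libraries, headers, and data files declared in the BUILD files.
copy_if_changed bazel-bin/libfoo.so "$PREFIX/lib/libfoo.so"
copy_if_changed include/foo.h       "$PREFIX/include/foo.h"
copy_if_changed LICENSE             "$PREFIX/share/doc/foo/LICENSE"
Because the copy is keyed on file contents rather than on any record of previous installs, running it repeatedly with different prefixes simply installs a fresh copy under each one, which matches the requirements in the question.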
As the title says, how do I use luadoc on Ubuntu/Linux? I generated documentation on Windows using a batch file but had no success on Ubuntu. Any ideas?
luadoc
Usage: /usr/bin/luadoc [options|files]
Generate documentation from files. Available options are:
-d path output directory path
-t path template directory path
-h, --help print this help and exit
--noindexpage do not generate global index page
--nofiles do not generate documentation for files
--nomodules do not generate documentation for modules
--doclet doclet_module doclet module to generate output
--taglet taglet_module taglet module to parse input code
-q, --quiet suppress all normal output
-v, --version print version information
First off, I have little experience with Luadoc but a lot of experience with Ubuntu and Lua, so I'm basing all my points on that knowledge and on a quick install of luadoc that I've just done. Luadoc, as far as I can see, is a Lua library (so it can be used from Lua scripts as well as from bash). To generate documentation (from bash), you just run
luadoc file.lua
(where file.lua is the file you want to generate documentation for)
The options -d and -t let you choose where to put the output and which template to use (which I have no clue about, I'm afraid :P). For example (for -d):
luadoc file.lua -d ~/Docs
As far as I can see, there is little else to explain about the actual options (as your code snippet explains what they do well enough).
Now, looking at the errors you got when running it (lua5.1: ... could not open "index.html" for writing), I'd suggest a few things. One, if you compiled the source code, you may have made a mistake somewhere, such as not installing dependencies (which would surprise me, because otherwise you wouldn't have been able to build it at all). If you did, you could try getting it from the repos with
sudo apt-get install luadoc
which will install the dependencies too. This is probably the problem, as my working copy of luadoc runs fine from /usr/bin with the command
./luadoc
which means that your luadoc is odd, or you're doing something funny (which I cannot work out from what you've said). I presume that you have lua5.1 installed (considering the errors), so it's not to do with that.
My advice to you is to try running
luadoc file.lua
in the directory of file.lua, with any old Lua file (although preferably one with at least a little content in it), and see if it generates an index.html in the same folder (don't change the output directory with -d, for testing purposes). If that DOESN'T work, reinstall it from the repos with apt-get. If doing that and trying luadoc file.lua still doesn't work, reply with the errors, as something bigger is (probably) going wrong.
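In other words, a quick sanity check might look like this (the directory and file name are placeholders):
cd ~/lua-project            # any directory you can write to, containing file.lua
luadoc file.lua             # should create index.html (plus per-file pages) right here
ls -l index.html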
What is the preferred method of exporting a homebrew environment so I can synchronize my workspace between computers? Seems like there should be something similar to composer.lock or pip freeze. Is there a better way than brew list > brews.txt?
There is a better way: brew leaves.
This command prints a simple list of installed formulae which are not dependencies of any other formulae. Essentially this lists everything that was manually installed or is a leftover dependency from a removed formula.
$ brew leaves
apple-gcc42
bash-completion
brew-cask
git
[...]
There's no built-in means of using brew leaves output to install, but just having a clean list of manually-installed formulae is a step in the right direction.
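That said, you can feed the list back into brew install yourself; a simple (if blunt) round trip might look like this:
brew leaves > brews.txt           # on the source machine
xargs brew install < brews.txt    # on the target machine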
Thanks to Gabe Berke-Williams for writing about this: http://robots.thoughtbot.com/brew-leaves
Homebrew Bundle seems like a pretty great solution.
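If I have the workflow right, it boils down to dumping a Brewfile on one machine and replaying it on another:
brew bundle dump    # writes a Brewfile describing the installed formulae (and casks/taps)
brew bundle         # on another machine, installs everything listed in the Brewfile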
There is not a better way, and there are no current plans to make one.
Source: https://github.com/mxcl/homebrew/issues/17771
Use git! Maintaining repos for environment setup scripts is a pretty slick approach.
I highly recommend using a script to set up a development environment in the first place. thoughtbot has a really lightweight approach that provisions a development environment, including a bunch of brew formulae: https://github.com/thoughtbot/laptop. GitHub just open-sourced Boxen for this (and much more), but it has a somewhat steeper learning curve.
As you can see from the thoughtbot/laptop readme, the entire install is a one-liner. If you want different packages, fork the repo and add whatever you use. This only covers the initial install, but it is a fantastic start.
For ongoing synchronization of development environments, including updating your preferred homebrew setup, you might want to try a 'dotfiles' approach. Zach Holman has a great approach detailed here: https://github.com/holman/dotfiles
If you want to tweak or update anything, just make the appropriate changes to the script (holman's dot script does the ongoing update stuff). Commit, push, pull down from any other environments.