Make shell aliases declaratively depend on packages - nix

In NixOS, shell aliases can be defined inside the configuration.nix file like so:
environment.shellAliases = {
  "my_some_cmd" = "some_cmd -flag 123";
};
The alias gets assigned even when the referenced command (here: some_cmd) is not available on the system. Say the command is provided by a package; then it would be desirable to declare that the alias should only be assigned if that package is installed.
How could that be done? Would I have to work with a wrapping if-statement, or are there other ways to achieve this?
If an if-statement is the way to go, how could it be implemented?

You can bypass the need to install the package by using the full store path to the command, for example:
environment.shellAliases = {
  "colored-tree" = "${pkgs.tree}/bin/tree -C";
};
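If you do want the alias to appear only when the package is actually part of environment.systemPackages, one possible sketch uses lib.optionalAttrs. This assumes pkgs.tree is the package in question (Nix compares derivations by store path, so builtins.elem works here; and shellAliases does not feed back into systemPackages, so this should not recurse):

{ config, lib, pkgs, ... }:
{
  # assign the alias only if pkgs.tree is in the system package list
  environment.shellAliases = lib.optionalAttrs
    (builtins.elem pkgs.tree config.environment.systemPackages)
    {
      "colored-tree" = "${pkgs.tree}/bin/tree -C";
    };
}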

Related

terraform plan returns the Error: Unsupported argument

I have the following three files as below:
main.tf, variables.tf and dev.auto.tfvars
Snippet from main.tf
module "sql_vms" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/"
rg_name = var.resource_group_name
location = module.resource_group.external_rg_location
vnet_name = var.virtual_network_name
subnet_name = var.sql_subnet_name
app_nsg = var.application_nsg
vm_count = var.count_vm
base_hostname = var.sql_host_basename
sto_acc_suffix = var.storage_account_suffix
vm_size = var.virtual_machine_size
vm_publisher = var.virtual_machine_image_publisher
vm_offer = var.virtual_machine_image_offer
vm_sku = var.virtual_machine_image_sku
vm_img_version = var.virtual_machine_image_version
username = var.username
password = var.password
}
Snippet from variables.tf
variable "app_subnet_name" {
type = string
}
variable "sql_subnet_name" {
type = string
}
Snippet from dev.auto.tfvars
app_subnet_name = "subnet_1"
sql_subnet_name = "subnet_2"
application_nsg = "test_nsg"
However, I'm getting errors like the ones below:
Error: Unsupported argument
on main.tf line 7, in module "sql_vms":
7: subnet_name = var.sql_subnet_name
An argument named "subnet_name" is not expected here.
Error: Unsupported argument
on main.tf line 8, in module "sql_vms":
8: app_nsg = var.application_nsg
An argument named "app_nsg" is not expected here.
My modules directory structure looks like below
$ ls -R terraform-modules/
terraform-modules/:
aws azure gcp
terraform-modules/aws:
alb ec2-instance-rhel
terraform-modules/aws/alb:
terraform-modules/aws/ec2-instance-rhel:
main.tf
terraform-modules/azure:
compute resourcegroup sqlserver
terraform-modules/azure/compute:
main.tf README.md variable.tf
terraform-modules/azure/resourcegroup:
data.tf outputs.tf variables.tf
terraform-modules/azure/sqlserver:
main.tf README.md variables.tf
terraform-modules/gcp:
compute
terraform-modules/gcp/compute:
main.tf
Any idea what is going wrong here?
If you are starting out with Terraform, you will get that error message ("An argument named "example" is not expected here") if your module arguments refer to resource properties rather than to variable names. See below for an example.
Example of a Terraform module (a directory such as ./modules/example_mod containing example_mod.tf) that you want to call from your module:
variable "sg_name" { } # Usually in a separate file
variable "sg_desc" { } # called variables.tf
resource "example_resource" "example_name" {
name = var.sg_name
description = var.sg_desc
...
}
CORRECT WAY:
module "my_module" {
source = "./modules/example_mod.tf"
sg_name = "whatever" # NOTE the left hand side "sg_name" is the variable name
sg_desc = "whatever"
...
}
INCORRECT WAY: (gives the error "An argument named "name" is not expected here")
module "my_module" {
  source      = "./modules/example_mod"
  name        = "whatever" # WRONG because the left-hand side "name" is a resource property
  description = "whatever" # WRONG for the same reason
  ...
}
I think the issue is that your source does not point at an exact module. The source you use:
source = "git::git@github.com:xxxxxxxxxxxx/terraform-modules//azure/"
contains three modules: compute, resourcegroup and sqlserver. You are trying to load them as one module, so Terraform cannot find the variables belonging to each of them. Loading all the modules at once like that is not the right approach anyway; I would recommend loading them one by one, like below:
module "compute" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/compute"
...
}
module "resourcegroup" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/resourcegroup"
...
}
module "sqlserver" {
source = "git::git#github.com:xxxxxxxxxxxx/terraform-modules//azure/sqlserver"
...
}
Without knowing the details of the module, it is usually hard to say what is causing an error, but in this particular case it seems that the module you're importing does not declare those two arguments (subnet_name and app_nsg), or rather that you are using a version of the module that doesn't declare them. What helps with this type of error is to check whether there is a version of the module that does. The syntax for using a particular module version from GitHub is explained in the Terraform Module Sources documentation, "Selecting a Revision" section:
module "vpc" {
source = "git::https://example.com/vpc.git?ref=v1.2.0"
}
You are probably using SSH to fetch the module, so the recommended way to do that is:
When using Git over SSH, we recommend using the ssh://-prefixed URL form for consistency with all of the other URL-like git address forms.
In your example, this translates to:
module "sql_vms" {
source = "git::ssh://git#github.com/org/terraform-modules-repo.git//azure/module-name?ref=v1.2.0"
where org is your organisation's (or your private) Github account, terraform-modules-repo is the repo where modules reside, module-name is the module you are using and ref=v1.2.0 represents the module revision number.
The error An argument named "example" is not expected here. means that the module doesn't expect to see an input argument with that name. Think of Terraform modules as functions in a programming language: in order to have a function produce a result, you pass it a set of required arguments. If you provide more (or fewer) input arguments than the function call requires, you will get an error. (There are special cases, but they are out of the scope of this question.)
Another similarity between modules and functions is that Terraform modules can also provide output values, besides creating resources that are specified. That can be handy in cases where output can be used as input in other modules or resources. The line module.resource_group.external_rg_location is doing exactly that: getting the output value from another module and using it to assign a value to an argument location.
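As an illustration of that output-to-input flow, a hypothetical resourcegroup module could expose the location like this (the resource and file names here are assumptions for the sketch, not taken from the actual module):

# modules/resourcegroup/outputs.tf
output "external_rg_location" {
  value = azurerm_resource_group.example.location
}

# root main.tf: the output of one module becomes the input of another
module "resource_group" {
  source = "./modules/resourcegroup"
}

module "sql_vms" {
  source   = "./modules/sqlserver"
  location = module.resource_group.external_rg_location
}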
I had a similar issue when working with AWS Eventbridge and Terraform.
When I run terraform plan I get the error below:
Error: Unsupported argument
│
│ on ../../modules/aws/eventbridge/main.tf line 37, in resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target":
│ 37: maximum_age_in_seconds = var.maximum_age_in_seconds
│
│ An argument named "maximum_age_in_seconds" is not expected here.
Here's how I solved it:
The issue was that I was not using the correct attribute for the AWS Eventbridge resource block.
The attribute should have been maximum_event_age_in_seconds and not maximum_age_in_seconds.
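For reference, a sketch of the corrected resource block, assuming the attribute lives in the retry_policy block of aws_cloudwatch_event_target as in recent AWS provider versions (the surrounding arguments are elided):

resource "aws_cloudwatch_event_target" "ecs_cloudwatch_event_target" {
  # ... rule, arn, role_arn, etc. ...
  retry_policy {
    maximum_event_age_in_seconds = var.maximum_event_age_in_seconds
    maximum_retry_attempts       = 2
  }
}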
Another possible cause is passing an argument to a module without declaring the matching variable inside that module.
That's all
It could be happening for plenty of reasons.
I'd suggest a few verifications:
Check if you are using the correct source URL, path or revision branch/tag.
I'm not sure about your implementation, but you probably want to double-check that the revision you are referencing contains these variable declarations.
GitHub module addressing allows a ref argument.
Refer to the GitHub Module Addressing for Terraform documentation for how to specify a revision.
Check if all necessary variables are declared on every module, including the root module.
Did you declare those variables both in a variables.tf file on your root directory and on the module context/path?
I know that's exhausting and repetitive, but every module should be designed as an "independent project". Each module **MUST have its own declared variables.tf**, which serves as the inputs for that module; it is also desirable that it has its own mapped outputs.tf, provider.tf, backend.tf, etc., though these last ones are not required.
FYI: Doing so guarantees scalability and reusability, and lets you work reliably with different tfstate files and even different repositories for each module, in order to guarantee atomicity and minimum permissions, hence preventing your infrastructure from being destroyed by undesired code changes.
I highly recommend this read to understand the importance of independent modularization design.
Furthermore, tools like Terragrunt and Terratest can make this job less painful by keeping your code DRY (Don't Repeat Yourself).
Check if the **type constraints of the related variables match.**
If none of the above applies, check whether the type constraints match between all declarations of the variables used both as arguments (in your root variables.tf) and as inputs (in your module-level variables.tf), as in the sketch after this list.
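A minimal sketch of what "matching declarations" means here (the variable name subnet_ids is just an illustration):

# root variables.tf
variable "subnet_ids" {
  type = list(string)
}

# modules/network/variables.tf: the module must declare the same input,
# with a compatible type constraint, for the argument to be accepted
variable "subnet_ids" {
  type = list(string)
}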
I'll share my pain as well.
Writing a nested block as if it were an argument, with an equals sign:
vpc_config = {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
instead of the block syntax (no equals sign):
vpc_config {
  subnet_ids = [aws_subnet.example1.id, aws_subnet.example2.id]
}
will give the error An argument named "vpc_config" is not expected here and can waste a few good hours.

How to specify os platform in waf script?

I'm new to the waf build tool, and my searches have turned up only a few unhelpful links. Does anyone know?
As wscript is essentially a Python script, I suppose I could use the os package?
Don't use the os module, instead use the DEST_* variables:
ctx.load('compiler_c')
print (ctx.env.DEST_OS, ctx.env.DEST_CPU, ctx.env.DEST_BINFMT)
On my machine this would print ('linux', 'x86_64', 'elf'). Then you can dispatch on that.
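As a minimal sketch of such dispatching inside a wscript (the DEFINES values are made-up illustrations, not part of waf):

def configure(ctx):
    ctx.load('compiler_c')
    # DEST_OS is typically 'linux', 'win32' or 'darwin'
    if ctx.env.DEST_OS == 'win32':
        ctx.env.append_value('DEFINES', ['PLATFORM_WINDOWS'])
    elif ctx.env.DEST_OS == 'darwin':
        ctx.env.append_value('DEFINES', ['PLATFORM_MACOS'])
    else:
        ctx.env.append_value('DEFINES', ['PLATFORM_POSIX'])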
You can use import at any point where you could use it in any other Python script.
I prefer using platform to write OS-agnostic functions rather than evaluating attributes of os.
Written OS-agnostically, the "Build-related commands" example from the waf book could look something like this:
import platform

top = '.'
out = 'build_directory'

def configure(ctx):
    pass

def build(ctx):
    if platform.system().lower().startswith('win'):
        cp = 'copy'
    else:
        cp = 'cp'
    ctx(rule=cp + ' ${SRC} ${TGT}', source='foo.txt', target='bar.txt')

Is it possible to keep my Nix packages in sync across machines not running NixOS?

I know with NixOS, you can simply copy over the configuration.nix file to sync your OS state including installed packages between machines.
Is it possible then, to do the same using Nix the package manager on a non-NixOS OS to sync only the installed packages?
Please note that, at least since 2017-03-30 (corresponding to the 17.03 Nix/NixOS channel/release), as far as I understand, the official, modern, supported and suggested solution is to use so-called overlays.
See the chapter titled "Overlays" in the nixpkgs manual for a nice guide on how to use the new approach.
As a short summary: you can put any number of files with a .nix extension in the $HOME/.config/nixpkgs/overlays/ directory. They will be processed in alphabetical order, and each one can modify the set of available Nix packages. Each of the files must be written in the following pattern:
self: super:
{
  boost = super.boost.override {
    python = self.python3;
  };
  rr = super.callPackage ./pkgs/rr {
    stdenv = self.stdenv_32bit;
  };
}
The super set corresponds to the "old" set of packages, before the overlay was applied. If you want to refer to the old version of a package (as with boost above), or to callPackage, you should reference it via super.
The self set corresponds to the eventual, "future" set of packages, representing the final result after all overlays are applied. (Note: don't be scared if using self sometimes gets rejected by Nix because it would result in infinite recursion; in those cases you should probably just use super instead.)
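To tie this back to syncing packages between machines, an overlay can also define a meta-package that pulls in all your favourite tools. A hypothetical sketch (the attribute name myHome and the package list are made up):

# $HOME/.config/nixpkgs/overlays/my-home.nix
self: super:
{
  # a meta-package bundling the tools you want on every machine
  myHome = super.buildEnv {
    name = "my-home";
    paths = with self; [ nethack mc pstree ];
  };
}

You could then install everything at once on each machine with nix-env -iA nixpkgs.myHome (the attribute prefix depends on your channel name).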
Note: with the above changes, the solution I mention below in the original answer now seems "deprecated"; I believe it should still work as of April 2017, but I have no idea for how long. It appears to be marked as "obsolete" in the nixpkgs repository.
Old answer, before 17.03:
Assuming you want to synchronize apps per-user (as non-NixOS Nix keeps apps visible on per-user basis, not system-wide, as far as I know), it is possible to do it declaratively. It's just not well advertised in the manual — though it seems quite popular among long-time Nixers!
You must create a text file at: $HOME/.nixpkgs/config.nix — e.g.:
$ mkdir -p ~/.nixpkgs
$ $EDITOR ~/.nixpkgs/config.nix
then enter the following contents:
{
  packageOverrides = defaultPkgs: with defaultPkgs; {
    home = with pkgs; buildEnv {
      name = "home";
      paths = [
        nethack mc pstree #...your favourite pkgs here...
      ];
    };
  };
}
Then you should be able to install all listed packages with:
$ nix-env -i home
or:
$ nix-env -iA nixos.home # *much* faster than above
(The attribute prefix here is your channel name; on a non-NixOS install the default channel is typically called nixpkgs, so the command would be nix-env -iA nixpkgs.home.)
In paths you can put stuff in a similar way as in /etc/nixos/configuration.nix on NixOS. Also, home is actually a "fake package" here. You can add more custom package definitions beside it, and then include them in your paths.
(Side note: I'm hoping to write a blog post with what I learned on how exactly this works, and also showing how to extend it with more customizations. I'll try to remember to link it here if I succeed.)

DART string constant set to timestamp as at compilation

How can I get a string constant automatically set to the datestamp as at compile time?
Something like:
const String COMPILE_DATESTAMP = eval_static(DateTime.now().toString());
...
String s = "This program was compiled $COMPILE_DATESTAMP";
where s would then be for e.g.
"This program was compiled 1971-02-03 04:05:06"
Thanks for the question!
There's no required compile step in Dart. (We do have an optional Dart-to-JavaScript compiler, and even a Dart-to-Dart processor that does tree shaking.) Dart's VM accepts input as text files. Similar to Ruby or Python, it runs text-based scripts.
As others have mentioned, this is a job for some sort of build step.
I'm new to Dart, but I haven't seen anything in the documentation to suggest that such a thing is possible. I strongly suspect that it isn't.
If you really need functionality like you describe, I think your best bet is to roll your own build script. Something simple like:
#!/bin/bash
sed -ri "s/INSERT_DATETIME_HERE/`date`/" $1
dart2js $1 -o$1.js
could be modified to suit your needs. (I'd want some sanity checks in there if it were me; I'm just suggesting a starting point.) Your code would become:
const String COMPILE_DATESTAMP = "INSERT_DATETIME_HERE";
...
String s = "This program was compiled $COMPILE_DATESTAMP";
You could write another Dart program that examines the actual compiled program. Then it is as simple as:
File compiledApp = new File('path/to/compiled/app.dart');
compiledApp.lastModified().then(
  (modifiedDate)
  {
    print("This program was compiled $modifiedDate");
  },
  onError: (exp)
  {
    // File doesn't exist?
  }
);
This trick builds on the knowledge that the compiler will update the file's last-modified date.

Equivalence of Rails console for Node.js

I am trying out the Node.js Express framework, and am looking for a plugin that allows me to interact with my models via a console, similar to the Rails console. Is there such a thing in the Node.js world?
If not, how can I interact with my Node.js models and data, e.g. manually add/remove objects, test methods on data, etc.?
Create your own REPL by making a js file (e.g. console.js) with the following lines/components:
Require node's built-in repl: var repl = require("repl");
Load in all your key variables like db, any libraries you swear by, etc.
Load the repl by using var replServer = repl.start({});
Attach the repl to your key variables with replServer.context.<your_variable_names_here> = <your_variable_names_here>. This makes the variable available/usable in the REPL (node console).
For example: If you have the following line in your node app:
var db = require('./models/db')
Add the following lines to your console.js
var db = require('./models/db');
replServer.context.db = db;
Run your console with the command node console.js
Your console.js file should look something like this:
var repl = require("repl");
var epa = require("epa");
var db = require("db");
// connect to database
db.connect(epa.mongo, function(err){
if (err){ throw err; }
// open the repl session
var replServer = repl.start({});
// attach modules to the repl context
replServer.context.epa = epa;
replServer.context.db = db;
});
You can even customize your prompt like this:
var replServer = repl.start({
  prompt: "Node Console > ",
});
For the full setup and more details, check out:
http://derickbailey.com/2014/07/02/build-your-own-app-specific-repl-for-your-nodejs-app/
For the full list of options you can pass the repl like prompt, color, etc: https://nodejs.org/api/repl.html#repl_repl_start_options
Thank you to Derick Bailey for this info.
UPDATE:
GavinBelson has a great recommendation for running with sequelize ORM (or anything that requires promise handling in the repl).
I am now running sequelize as well, and for my node console I'm adding the --experimental-repl-await flag.
It's a lot to type in every time, so I highly suggest adding:
"console": "node --experimental-repl-await ./console.js"
to the scripts section in your package.json so you can just run:
npm run console
and not have to type the whole thing out.
Then you can handle promises without getting errors, like this:
const product = await Product.findOne({ where: { id: 1 } });
I am not very experienced with Node, but you can enter node on the command line to get to the Node console. I then required the models manually.
Here is the way to do it, with SQL databases:
Install and use Sequelize; it is Node's ORM answer to Active Record in Rails. It even has a CLI for scaffolding models and migrations.
node --experimental-repl-await
> models = require('./models');
> User = models.User; // however you load the model in your actual app, this may vary
> await User.findAll(); // use await, then any sequelize calls here
TLDR
This gives you access to all of your models just as you would have in Rails' Active Record. Sequelize takes a bit of getting used to, but in many ways it is actually more flexible than Active Record while still having the same features.
Sequelize uses promises, so to run these properly in the REPL you will want to use the --experimental-repl-await flag when running node. Otherwise, you can get bluebird promise errors.
If you don't want to type out the require('./models') step, you can use console.js - a setup file for the REPL at the root of your directory - to preload this (see the sketch below). However, I find it easier to just type this one line out in the REPL.
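For completeness, a minimal sketch of such a console.js, assuming your models are exported from ./models (the names are illustrative):

// console.js - run with: node --experimental-repl-await console.js
const repl = require('repl');
const models = require('./models');

const replServer = repl.start({ prompt: 'app > ' });

// expose the models on the REPL context
replServer.context.models = models;
replServer.context.User = models.User;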
It's simple: add a REPL to your program
This may not fully answer your question, but to clarify: node.js is much lower-level than Rails and, as such, doesn't prescribe tools and data models the way Rails does. It's more of a platform than a framework.
If you are looking for a more Rails-like experience, you may want to look at a more 'full-featured' framework built on top of node.js, such as Meteor.