How to install jbake from unstable with overlay in home-manager - nix

After adding the unstable channel
nix-channel --add https://nixos.org/channels/nixpkgs-unstable unstable
I added an overlay under ~/.config/nixpkgs/overlays/package-upgrades/default.nix
self: super:
let
  unstable = import <unstable> {};
in {
  jbake = unstable.jbake;
}
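(For <unstable> to resolve, the channel also has to be fetched at least once, e.g. with nix-channel --update unstable.)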
This overlay is added to home.nix
nixpkgs.overlays = [ (import ./overlays/package-upgrades) ];
When I run home-manager switch there is an error
0 + john@n1 nixpkgs $ home-manager switch
Too many heap sections: Increase MAXHINCR or MAX_HEAP_SECTS
The entire configuration can be found here.
How can I upgrade a single attribute from unstable using home-manager and an overlay?

This thread on the NixOS Discourse seems relevant. It appears the overlay also gets applied when importing unstable, resulting in an infinite recursion. Try something like:
let
  unstable = import <unstable> {};
in {
  home.packages = with pkgs; [
    ...
  ] ++ (with unstable; [
    jbake
  ]);
}
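If you'd prefer to keep the overlay approach instead, one common way to break the recursion (a sketch, not verified against the linked configuration) is to import unstable with an empty overlay list, so the overlay is not applied to its own import of unstable:
self: super:
let
  # overlays = [] keeps this overlay from being re-applied to the unstable import
  unstable = import <unstable> { overlays = [ ]; };
in {
  jbake = unstable.jbake;
}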

Related

Jetbrains Space shellScript variables

I'm trying to use bash variables in a shellScript in JetBrains Space Automation, with no success.
My .space.kts is as follows:
job("mvn compile"){
container(displayName="mvn", image="maven:3.8.5-eclipse-temurin-17"){
shellScript {
content = """
FOO="bar"
echo $FOO
"""
}
}
}
In the above I'd expect "bar" to be echoed, but instead I'm getting the following error when this tries to run:
Dsl file '/tmp/16487320722162400386/.space.kts' downloaded in 1736 ms
Compiling DSL script /tmp/16487320722162400386/.space.kts...
downloading /home/pipelines-config-dsl-compile-container/space-automation-runtime.jar ...
[SUCCESSFUL ] com.jetbrains#space-automation-runtime;1.1.100932!space-automation-runtime.jar (71ms)
Compilation failed in 8.652797664s.
ERROR Unresolved reference: FOO (.space.kts:9:23)
Cleaned up the output folder: /tmp/16487320722162400386
DSL processing failed: Compilation exited with non zero exit code: 2. Exit code: 102
I had planned on parsing the branch name from JB_SPACE_GIT_BRANCH and storing it in a variable to use in a call to mvn to build and tag a container using Jib.
Is there any way I can use variables within the content of a shellScript, or should/can this be done in a different way?
In a Kotlin triple-quoted string, $ starts a string template, so you need to replace $ with ${"$"} to get a literal dollar sign:
job("mvn compile") {
container(displayName="mvn", image="maven:3.8.5-eclipse-temurin-17") {
shellScript {
content = """
FOO="bar"
echo ${"$"}FOO
"""
}
}
}
Or use a shell script file.sh like this:
FOO="bar"
echo $FOO
and call it from the shellScript content:
job("mvn compile") {
container(displayName="mvn", image="maven:3.8.5-eclipse-temurin-17") {
shellScript {
content = """
./file.sh
"""
}
}
}
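Applied to the branch-name use case from the question, the same escaping could look something like this (a sketch; the jib:build invocation and the image name are placeholders for whatever the real build does):
job("mvn compile") {
    container(displayName="mvn", image="maven:3.8.5-eclipse-temurin-17") {
        shellScript {
            content = """
            # ${"$"} is needed so the shell, not Kotlin, expands the variables
            BRANCH=${"$"}JB_SPACE_GIT_BRANCH
            echo "Building branch ${"$"}BRANCH"
            mvn compile jib:build -Dimage=registry.example.com/myapp:${"$"}BRANCH
            """
        }
    }
}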

Jest configuration setupFilesAfterEnv option was not found

I'm trying to make Jest work again on a project developed a year ago and not maintained since.
I have an error with the path of setupFilesAfterEnv or transform.
The error I get when I run yarn test:
$ jest __testsv2__ --config=./jest.config.js
● Validation Error:
Module <rootDir>/jest/setup.js in the setupFilesAfterEnv option was not found.
<rootDir> is: /Users/alain/dev/ddf/release
Configuration Documentation:
https://jestjs.io/docs/configuration.html
error Command failed with exit code 1.
My filesystem, in /Users/alain/dev/ddf/release/, contains:
babel.config.js
jest.config.js
/jest
  /setup
  setup.js (full path: /Users/alain/dev/ddf/release/jest/setup.js)
  staticFileAssetTransform.js (full path: /Users/alain/dev/ddf/release/jest/staticFileAssetTransform.js)
My package.json
{
  ...
  "scripts": {
    "test": "jest __testsv2__ --config=./jest.config.js"
    ...
  }
}
babel.config.js
module.exports = function(api) {
  api.cache(false);
  const presets = ['@babel/preset-env', '@babel/preset-react'];
  const plugins = [['@babel/proposal-object-rest-spread'],];
  return {
    presets, plugins, sourceMaps: "inline",
    ignore: [(process.env.NODE_ENV !== 'test' ? "**/*.test.js" : null)].filter(n => n)
  };
};
jest.config.js
module.exports = {
  resolver: 'browser-resolve',
  clearMocks: true,
  moduleNameMapper: { '\\.(css|less|styl|md)$': 'identity-obj-proxy' },
  // A list of paths to modules that run some code to configure or set up the testing framework before each test
  // setupFilesAfterEnv: ['./jest/setup.js'], // doesn't work either
  setupFilesAfterEnv: ['<rootDir>/jest/setup.js'],
  // An array of regexp pattern strings that are matched against all test paths, matched tests are skipped
  testPathIgnorePatterns: ['/node_modules/', '/__gql_mocks__/'],
  // A map from regular expressions to paths to transformers
  transform: {
    '^.+\\.js$': './jest/babelRootModeUpwardTransform.js',
    '\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$': '<rootDir>/jest/staticFileAssetTransform.js',
  },
};
You might want to delete your node_modules folder and package-lock.json and run npm i again; I had a similar issue and that fixed it for me. Also try npm cache clean --force.
For those coming here, make sure you prefix the path with <rootDir>
Like this:
setupFilesAfterEnv: ['<rootDir>/node_modules/@hirez_io/observer-spy/dist/setup-auto-unsubscribe.js']
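Following that advice in the configuration from the question would mean prefixing the transformer paths with <rootDir> as well (a sketch of just the affected keys, assuming the files really live under <rootDir>/jest/):
// jest.config.js (relevant keys only)
setupFilesAfterEnv: ['<rootDir>/jest/setup.js'],
transform: {
  // prefix the transformer paths with <rootDir> too, so they resolve from the project root
  '^.+\\.js$': '<rootDir>/jest/babelRootModeUpwardTransform.js',
  '\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$': '<rootDir>/jest/staticFileAssetTransform.js',
},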

How can I build or test a smaller NixOS container config independent of my host's NixOS config?

I'm trying to configure a container with NixOS containers, for example:
containers.abc123 = {
  config = { config, pkgs, ... }:
    {
      systemd.services = {
        finder-email-requests = {
          description = "";
          enable = true;
          environment = {
            TESTING = "abcxyz";
          };
          serviceConfig = {
            Type = "simple";
            ExecStart = "bash -c \"echo '$TESTING hello' >> /tmp/abcxyz\"";
            Restart = "always";
            RestartSec = 30;
          };
          wantedBy = [ "default.target" ];
        };
      };
    };
};
However, needing to test/compile this means running nixos-rebuild test, which can take 10+ seconds on my machine (or 7 seconds on a newly installed VM I just tried).
Is there some way I can test this container config more quickly, independently of my entire host's NixOS config? For example, by building just the container config itself rather than the entire instance of this NixOS config?
I've found that the nixos-rebuild command is a small shell script, for example at https://github.com/NixOS/nixpkgs/blob/216f0e6ee4a65219f37caed95afda1a2c66188dc/nixos/modules/installer/tools/nixos-rebuild.sh
However, after reading through it, I don't quite understand what's going on in terms of the relationship between this 'containers' unit and the general 'NixOS config'.
NixOS Containers don't have a testing facility of their own (as of Nov 2020). They behave like normal NixOS systems for most intents and purposes. If you want to test your container, a normal NixOS test should be adequate.
Writing NixOS tests is documented in the NixOS manual. You can use the pkgs.nixosTest function to write your own tests outside of the Nixpkgs repo.
You could test either an approximation of the host system with your container, or just the container configuration as if it was the "top-level" NixOS system. I'd go with the latter, unless you need to test multiple containers in conjunction, or if you need to test interactions between host and container.
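A minimal sketch of such a test, promoting the container's service definition to a top-level machine (the test assertions are only illustrative, and newer nixpkgs may want nodes.machine instead of machine):
let
  pkgs = import <nixpkgs> {};
in pkgs.nixosTest {
  name = "finder-email-requests";
  machine = { pkgs, ... }: {
    systemd.services.finder-email-requests = {
      description = "";
      enable = true;
      environment.TESTING = "abcxyz";
      serviceConfig = {
        Type = "simple";
        # absolute path here, since systemd requires one for ExecStart
        ExecStart = "${pkgs.bash}/bin/bash -c \"echo '$TESTING hello' >> /tmp/abcxyz\"";
        Restart = "always";
        RestartSec = 30;
      };
      wantedBy = [ "default.target" ];
    };
  };
  testScript = ''
    machine.wait_for_file("/tmp/abcxyz")
    machine.succeed("grep hello /tmp/abcxyz")
  '';
}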
However, to test that the container definition builds correctly, we can use pkgs.nixos. For example, a Nix expression for your container (that can be built with the usual nix-build method):
{
  a =
    let
      pkgs = import <nixpkgs> {};
    in (pkgs.nixos {
      fileSystems."/".device = "x";
      boot.loader.grub.enable = false;
      systemd.services = {
        finder-email-requests-2 = {
          description = "";
          enable = true;
          environment = {
            TESTING = "abcxyz";
          };
          serviceConfig = {
            Type = "simple";
            ExecStart = "bash -c \"echo '$TESTING hello' >> /tmp/abcxyz\"";
            Restart = "always";
            RestartSec = 30;
          };
          wantedBy = [ "default.target" ];
        };
      };
    }).toplevel;
}
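Assuming that expression is saved as, say, container-test.nix (a hypothetical file name), it can then be built with:
nix-build container-test.nix -A a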

Jenkins pipeline shell step

Trying to get this pipeline working.
I need to prepare some variables (a list or string) in Groovy and iterate over it in bash. As I understand it, Groovy scripts run on the Jenkins master, but I need to download some files into the build workspace, which is why I try to download them in an sh step.
import groovy.json.JsonSlurper
import hudson.FilePath
pipeline {
    agent { label 'xxx' }
    parameters {
        ...
    }
    stages {
        stage ('Get rendered images') {
            steps {
                script {
                    // select grafana API url based on environment
                    if ( params.grafana_env == "111" ) {
                        grafana_url = "http://xxx:3001"
                    } else if ( params.grafana_env == "222" ) {
                        grafana_url = "http://yyy:3001"
                    }
                    // get available grafana dashboards
                    def grafana_url = "${grafana_url}/api/search"
                    URL apiUrl = grafana_url.toURL()
                    List json = new JsonSlurper().parse(apiUrl.newReader())
                    def workspace = pwd()
                    List dash_names = []
                    // save png for each available dashboard
                    for ( dash in json ) {
                        def dash_name = dash['uri'].split('/')
                        dash_names.add(dash_name[1])
                    }
                    dash_names_string = dash_names.join(" ")
                }
                sh "echo $dash_names_string"
                sh """
                for dash in $dash_names_string;
                do
                echo $dash
                done
                """
            }
        }
    }
}
I get this error when it runs:
[Pipeline] End of Pipeline
groovy.lang.MissingPropertyException: No such property: dash for class: WorkflowScript
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:53)
at org.codehaus.groovy.runtime.ScriptBytecodeAdapter.getProperty(ScriptBytecodeAdapter.java:458)
at com.cloudbees.groovy.cps.sandbox.DefaultInvoker.getProperty(DefaultInvoker.java:33)
at com.cloudbees.groovy.cps.impl.PropertyAccessBlock.rawGet(PropertyAccessBlock.java:20)
at WorkflowScript.run(WorkflowScript:42)
Looks like I'm missing something obvious...
Escape the $ for the shell variable with a backslash; that should help:
for dash in $dash_names_string;
do
echo \$dash
done
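In the pipeline above, that would look like this (only the shell variable is escaped; $dash_names_string is still interpolated by Groovy):
sh """
for dash in $dash_names_string;
do
echo \$dash
done
"""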
The problem is on line three here:
for dash in $dash_names_string;
do
echo $dash
done
It's trying to find a $dash property in Groovy-land and finding none. I can't actually think how to make this work via an inline sh step (possibly not enough sleep), but if you save the relevant contents of your JSON response to a file and then replace those four lines with a shell script that reads the file, calling it from the Jenkinsfile like sh './hotScript.sh', it will not try to evaluate that dollar value as Groovy, and ought to at least fail in a different way. :)
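A sketch of that approach, assuming the names are written to a file first (dash_names.txt and hotScript.sh are placeholder names; writeFile is the standard pipeline step):
In the Jenkinsfile, after dash_names_string has been built:
writeFile file: 'dash_names.txt', text: dash_names_string
sh './hotScript.sh'
And hotScript.sh, checked into the repo and marked executable:
#!/bin/bash
# Iterate over the names written by the pipeline; no Groovy interpolation involved
for dash in $(cat dash_names.txt); do
  echo "$dash"
done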

How to determine the current operating system in a Jenkins pipeline

What would be the way to determine the current OS a Jenkins pipeline is running?
Context: I'm building a shared Jenkins pipeline script that should run on all platforms (Windows, OSX, Linux) and execute something different on each platform.
I tried something like:
import org.apache.commons.lang.SystemUtils
if (SystemUtils.IS_OS_WINDOWS) {
    bat("Command")
}
if (SystemUtils.IS_OS_MAC) {
    sh("Command")
}
if (SystemUtils.IS_OS_LINUX) {
    sh("Command")
}
But even when it is running on a Windows or Mac node, it always goes into the SystemUtils.IS_OS_LINUX branch.
I tried a quick pipeline like this.
node('windows ') {
    println ('## OS ' + System.properties['os.name'])
}
node('osx ') {
    println ('## OS ' + System.properties['os.name'])
}
node('linux') {
    println ('## OS ' + System.properties['os.name'])
}
Each node correctly runs on a machine with the corresponding OS, but all of them print ## OS Linux.
Any ideas?
Thanks,
Fede
Assuming Windows is your only non-Unix platform, you can use the pipeline function isUnix() and uname to check which Unix OS you're on:
def checkOs() {
    if (isUnix()) {
        def uname = sh script: 'uname', returnStdout: true
        if (uname.startsWith("Darwin")) {
            return "Macos"
        }
        // Optionally add 'else if' for other Unix OS
        else {
            return "Linux"
        }
    }
    else {
        return "Windows"
    }
}
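A sketch of how the function might be used inside a node or stage (the echo commands are placeholders):
def os = checkOs()
if (os == "Windows") {
    bat 'echo Running on Windows'
} else {
    sh "echo Running on ${os}"
}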
As far as I know, Jenkins only differentiates between Windows and Unix, i.e. if on Windows, use bat; on Unix/Mac/Linux, use sh. So you could use isUnix(), more info here, to determine if you're on Unix or Windows, and in the case of Unix use sh and @Spencer Malone's answer to probe more information about that system (if needed).
I initially used @fedterzi's answer, but I found it problematic because it caused the following crash:
org.jenkinsci.plugins.workflow.steps.MissingContextVariableException: Required context class hudson.Launcher is missing
when attempting to call isUnix() outside of a pipeline (for example when assigning a variable). I solved it by relying on traditional Java methods to determine the OS:
def getOs() {
    String osname = System.getProperty('os.name');
    if (osname.startsWith('Windows'))
        return 'windows';
    else if (osname.startsWith('Mac'))
        return 'macosx';
    else if (osname.contains('nux'))
        return 'linux';
    else
        throw new Exception("Unsupported os: ${osname}");
}
This allowed to call the function in any pipeline context.
The workaround I found for this is
try {
    sh(script: myScript, returnStdout: true)
} catch (Exception ex) {
    // assume we are on windows
    bat(script: myScript, returnStdout: true)
}
Or, a slightly more elegant solution that avoids the try/catch is to use env.NODE_LABELS. Assuming you have all the nodes correctly labelled, you can write a function like this:
def isOnWindows() {
    def os = "windows"
    def List nodeLabels = NODE_LABELS.split()
    for (i = 0; i < nodeLabels.size(); i++) {
        if (nodeLabels[i] == os) {
            return true
        }
    }
    return false
}
and then
if (isOnWindows()) {
    def osName = bat(script: command, returnStdout: true)
} else {
    def osName = sh(script: command, returnStdout: true)
}
Using Java classes is probably not the best approach. I'm pretty sure that unless it's a Jenkins/Groovy plugin, those run on the master Jenkins JVM thread. I would look into a shell approach, such as the one outlined here: https://stackoverflow.com/a/8597411/5505255
You could wrap that script in a shell step to get the stdout like so:
def osName = sh(script: './detectOS', returnStdout: true)
to call a copy of the script being outlined above. Then just have that script return the OS names you want, and branch logic based on the osName var.
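A minimal sketch of what such a detectOS script could look like, based on uname (the returned names are arbitrary and would need to match whatever the pipeline branches on):
#!/bin/sh
# Map uname output to a simple OS name for the pipeline to branch on
case "$(uname -s)" in
  Darwin*)              echo "macos" ;;
  Linux*)               echo "linux" ;;
  CYGWIN*|MINGW*|MSYS*) echo "windows" ;;
  *)                    echo "unknown" ;;
esac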
