How to add the NixOS unstable channel declaratively in configuration.nix

The NixOS cheatsheet describes how to install packages from unstable in configuration.nix.
It starts off by saying to add the unstable channel like so:
$ sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixos-unstable
$ sudo nix-channel --update
Then, it is easy to use this channel in configuration.nix (since it should now be on NIX_PATH):
nixpkgs.config = {
  allowUnfree = true;
  packageOverrides = pkgs: {
    unstable = import <nixos-unstable> {
      config = config.nixpkgs.config;
    };
  };
};
environment = {
  systemPackages = with pkgs; [
    unstable.google-chrome
  ];
};
I would like to avoid the manual nix-channel --add and nix-channel --update steps, and instead be able to install my system from configuration.nix alone.
Is there a way to automate this from configuration.nix?

I was able to get this working with a suggestion by @EmmanuelRosa.
Here are the relevant parts of my /etc/nixos/configuration.nix:
{ config, pkgs, ... }:
let
  unstableTarball =
    fetchTarball
      https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz;
in
{
  imports =
    [ # Include the results of the hardware scan.
      /etc/nixos/hardware-configuration.nix
    ];
  nixpkgs.config = {
    packageOverrides = pkgs: {
      unstable = import unstableTarball {
        config = config.nixpkgs.config;
      };
    };
  };
  ...
}
This adds an unstable attribute that can be used in environment.systemPackages.
Here is an example of using it to install the htop package from nixos-unstable:
environment.systemPackages = with pkgs; [
  ...
  unstable.htop
];
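One caveat worth noting (not part of the original answer): the nixos-unstable tarball URL above is a moving target, so every evaluation may fetch a different revision. builtins.fetchTarball also accepts a pinned url plus sha256, which makes the unstable package set reproducible. A minimal sketch, with a hypothetical commit and hash that you would substitute:
let
  unstableTarball = fetchTarball {
    # Hypothetical pin: replace with a real nixpkgs commit and the hash that
    # nix-prefetch-url --unpack reports for that tarball.
    url = "https://github.com/NixOS/nixpkgs/archive/<commit-sha>.tar.gz";
    sha256 = "<hash printed by nix-prefetch-url --unpack>";
  };
in
...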

Related

Jenkinsfile Folder variables missing from env

I'm trying to use folder variables in a Jenkinsfile.
I tried the approach from "how to access folder variables across pipeline stages?" but I can't get it working.
On the Experimental folder I added two properties:
CCEBUILD_TOOLS_BRANCH
TEST
sh 'printenv' displays my folder variables, but they do not show up in env.getEnvironment().
Why is this not working?
def printParams() {
    env.getEnvironment().each { name, value -> println "In printParams Name: $name -> Value $value" }
}
node() {
    withFolderProperties {
        stage('Info') {
            sh 'printenv'
            println env.getEnvironment().collect({ environmentVariable -> "${environmentVariable.key} = ${environmentVariable.value}" }).join("\n")
            /* printParams() */
        }
    }
}
printenv output
_=/bin/printenv
BUILD_DISPLAY_NAME=#25
BUILD_ID=25
BUILD_NUMBER=25
BUILD_TAG=*****-*******-********-*****
BUILD_URL=https://hostname/job/Experimental/job/Linux%20info/25/
CCEBUILD_TOOLS_BRANCH=develop_staging
EXECUTOR_NUMBER=4
HOME=/var/lib/jenkins
HUDSON_COOKIE=*****-*******-********-*****
HUDSON_HOME=/var/lib/jenkins
HUDSON_SERVER_COOKIE=************
HUDSON_URL=https://hostname/
JENKINS_HOME=/var/lib/jenkins
JENKINS_NODE_COOKIE=*****-*******-********-*****
JENKINS_SERVER_COOKIE=durable-*****-*******-********-*****
JENKINS_URL=https://hostname/
JOB_BASE_NAME=Linux info
JOB_DISPLAY_URL=https://hostname/job/Experimental/job/Linux%20info/display/redirect
JOB_NAME=Experimental/Linux info
JOB_URL=https://hostname/job/Experimental/job/Linux%20info/
LANG=en_US.UTF-8
LOGNAME=jenkins
NODE_LABELS=developmenthost devlinux linux master releasehost rtbhost
NODE_NAME=master
PATH=/sbin:/usr/sbin:/bin:/usr/bin
PWD=/var/lib/jenkins/jobs/Experimental/jobs/Linux info/workspace
RUN_ARTIFACTS_DISPLAY_URL=https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=artifacts
RUN_CHANGES_DISPLAY_URL=https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=changes
RUN_DISPLAY_URL=https://hostname/job/Experimental/job/Linux%20info/25/display/redirect
RUN_TESTS_DISPLAY_URL=https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=tests
SHELL=/bin/bash
SHLVL=4
STAGE_NAME=Info
TEST=Ok
USER=jenkins
WORKSPACE=/var/lib/jenkins/jobs/Experimental/jobs/Linux info/workspace
env.getEnvironment() output
BUILD_DISPLAY_NAME = #25
BUILD_ID = 25
BUILD_NUMBER = 25
BUILD_TAG =*****-*******-********-*****
BUILD_URL = https://hostname/job/Experimental/job/Linux%20info/25/
CLASSPATH =
HUDSON_HOME = /var/lib/jenkins
HUDSON_SERVER_COOKIE = ************
HUDSON_URL = https://hostname/
JENKINS_HOME = /var/lib/jenkins
JENKINS_SERVER_COOKIE = ************
JENKINS_URL = https://hostname/
JOB_BASE_NAME = Linux info
JOB_DISPLAY_URL = https://hostname/job/Experimental/job/Linux%20info/display/redirect
JOB_NAME = Experimental/Linux info
JOB_URL = https://hostname/job/Experimental/job/Linux%20info/
RUN_ARTIFACTS_DISPLAY_URL = https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=artifacts
RUN_CHANGES_DISPLAY_URL = https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=changes
RUN_DISPLAY_URL = https://hostname/job/Experimental/job/Linux%20info/25/display/redirect
RUN_TESTS_DISPLAY_URL = https://hostname/job/Experimental/job/Linux%20info/25/display/redirect?page=tests
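A workaround sketch (not part of the original question, and untested here): since printenv shows the folder properties in the shell environment inside withFolderProperties, they can be pulled back into Groovy via sh with returnStdout, even though env.getEnvironment() does not list them:
node() {
    withFolderProperties {
        stage('Info') {
            // Read a single folder property from the step's shell environment (assumes a Unix agent).
            def toolsBranch = sh(script: 'echo "$CCEBUILD_TOOLS_BRANCH"', returnStdout: true).trim()
            echo "CCEBUILD_TOOLS_BRANCH = ${toolsBranch}"
        }
    }
}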

Jenkins pipeline: How to get the hostname of the slave

I have two different Linux servers (prod and dev) with different $HOSTNAME and different certificates, which are by default named after the hostname.
Now I want to determine within the Jenkins-Pipeline on which host I am and thus use the right certificate.
To do so I wrote the following test script:
def labels = []
labels.add('jenkins_<slavehost>_x86')
def builders = [:]
for (x in labels) {
    def label = x
    builders[label] = {
        ansiColor('xterm') {
            node(label) {
                stage('cleanup') {
                    deleteDir()
                }
                stage('test') {
                    sh """
                        echo $HOSTNAME
                    """
                }
            }
        }
    }
}
parallel builders
Which does not work since the $HOSTNAME is not defined.
groovy.lang.MissingPropertyException: No such property: HOSTNAME for class: groovy.lang.Binding
How can I get the hostname of the jenkins-slave within a sh in a pipeline?
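One detail worth calling out before the answers: inside the double-quoted Groovy string passed to sh, $HOSTNAME is interpolated by Groovy before the shell ever runs, which is exactly what produces the MissingPropertyException. Escaping the dollar sign defers expansion to the shell, as in this small sketch:
stage('test') {
    // \$HOSTNAME is expanded by the shell, so Groovy no longer looks for a HOSTNAME property.
    sh """
        echo \$HOSTNAME
    """
}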
Since you can name the node anything you like, you can't simply use NODE_NAME; it does not have to match the $HOSTNAME you would get from echo $HOSTNAME in a bash shell on the slave machine.
You can get the Jenkins master's hostname by parsing it out of the build URL:
def getJenkinsMaster() {
    return env.BUILD_URL.split('/')[2].split(':')[0]
}
Alternatively, you can add this shell command and print the node name from the environment variable (with the caveat above that it need not match the real hostname):
sh "echo ${env.NODE_NAME}"
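If what you actually need is the machine's real hostname rather than the node name, a minimal sketch using the standard sh step (not one of the original answers) would be:
// Capture the real hostname of the agent this block runs on (assumes a Unix agent).
def agentHostname = sh(script: 'hostname', returnStdout: true).trim()
echo "Running on ${agentHostname}"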
Not an sh step, but you can use Groovy:
import org.jenkinsci.plugins.workflow.cps.CpsThread
import hudson.FilePath
@NonCPS
def get_current_pipeline_node() {
    def thread = CpsThread.current()
    def cv = thread.contextVariables
    def fp
    try {
        fp = cv.get(FilePath, null, null)
    } catch (MissingMethodException e) {
        fp = cv.get(FilePath)
    }
    return fp?.toComputer()?.node
}
node = get_current_pipeline_node()
print(node.nodeName)

Nested deployment of AWS Lambda functions with Jenkins

I am learning how to deploy AWS Lambda functions from Jenkins.
I have the following folder structure:
src -> favorites -> findAll -> index.js
src -> favorites -> insert -> index.js
src -> movies -> findAll -> index.js
src -> movies -> findOne -> index.js
Essentially, four functions.
Here's part of the Jenkinsfile:
def functions = ['MoviesStoreListMovies', 'MoviesStoreSearchMovie', 'MoviesStoreViewFavorites', 'MoviesStoreAddToFavorites']
stage('Build') {
    sh """
        docker build -t ${imageName} .
        containerName=\$(docker run -d ${imageName})
        docker cp \$containerName:/app/node_modules node_modules
        docker rm -f \$containerName
        zip -r ${commitID()}.zip node_modules src
    """
}
stage('Push') {
    functions.each { function ->
        sh "aws s3 cp ${commitID()}.zip s3://${bucket}/${function}/"
    }
}
At the end I expect to have the same .zip uploaded under four different prefixes in the S3 bucket, one per function (i.e. all four folders/functions present in each archive).
Here now is my issue, the deploy stage:
stage('Deploy') {
    functions.each { function ->
        sh "aws lambda update-function-code --function-name ${function} --s3-bucket ${bucket} --s3-key ${function}/${commitID()}.zip --region ${region}"
    }
}
Since each zip has the same content, how can it be that the four functions are deployed as exactly four functions? Again, each of the four .zip files contains the same four folders/functions, so I would expect 4x4 = 16 functions eventually.
What am I missing?
Maybe that is my mistake: I forgot to mention that the Lambda functions are created with Terraform first. And indeed, Terraform creates the handlers pointing to the right src paths:
module "MoviesStoreListMovies" {
source = "./modules/function"
name = "MoviesStoreListMovies"
handler = "src/movies/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreSearchMovie" {
source = "./modules/function"
name = "MoviesStoreSearchMovie"
handler = "src/movies/findOne/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.movies.id
}
}
module "MoviesStoreViewFavorites" {
source = "./modules/function"
name = "MoviesStoreViewFavorites"
handler = "src/favorites/findAll/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
module "MoviesStoreAddToFavorites" {
source = "./modules/function"
name = "MoviesStoreAddToFavorites"
handler = "src/favorites/insert/index.handler"
runtime = "nodejs12.x"
environment = {
TABLE_NAME = aws_dynamodb_table.favorites.id
}
}
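For illustration, a hypothetical src/movies/findAll/index.js (not shown in the original post) clarifies why four identical archives still yield only four functions: the handler string tells Lambda which single file and export inside the zip to invoke, and the other folders are simply never loaded.
// Hypothetical src/movies/findAll/index.js
// The Terraform handler "src/movies/findAll/index.handler" makes Lambda load this
// file from the zip and call its exported "handler"; the rest of the archive is ignored.
exports.handler = async (event) => {
  return { statusCode: 200, body: JSON.stringify([]) };
};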

Gradle Docker plugin (bmuschko): splitting a build.gradle into two files gives an error

I am new to using Gradle scripts. I have a build.gradle file that I want to split into two files; after splitting, I end up with the following two files.
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'
jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}
repositories {
    mavenCentral()
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}
project.ext.imageName = 'myImage'
project.ext.tagName = 'myTag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()
apply from: 'dockerapp.gradle'
dockerapp.gradle
def gradleDockerVersion = '3.7.2'
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'com.bmuschko.docker-remote-api'
import com.bmuschko.gradle.docker.tasks.image.DockerBuildImage
import com.bmuschko.gradle.docker.tasks.image.DockerRemoveImage
import com.bmuschko.gradle.docker.tasks.image.Dockerfile
def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName
task createAppDockerfile(type: Dockerfile) {
    // Don't create dockerfile if file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}
task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        }
    }
}
task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
If I try to run the command ./gradlew createAppImage, I get an error. The other two tasks within the dockerapp.gradle file seem to work without issues, and if I place all my code within the build.gradle file, it works properly without giving any errors. What is the best way to split the files and execute createAppImage without running into errors?
I was able to resolve this with help from CDancy (maintainer of the plugin) as follows:
build.gradle
buildscript {
    ext {
        springBootVersion = '1.5.12.RELEASE'
        gradleDockerVersion = '3.2.7'
    }
    repositories {
        jcenter()
    }
    dependencies {
        classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
        classpath("com.bmuschko:gradle-docker-plugin:${gradleDockerVersion}")
    }
}
apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'idea'
apply plugin: 'org.springframework.boot'
apply plugin: 'io.spring.dependency-management'
jar {
    baseName = 'gs-spring-boot'
    version = '0.1.0'
}
repositories {
    mavenCentral()
}
sourceCompatibility = 1.8
targetCompatibility = 1.8
compileJava.options.encoding = 'UTF-8'
dependencies {
    compile("org.springframework.boot:spring-boot-starter-web")
    testCompile("junit:junit")
}
project.ext.imageName = 'myimage'
project.ext.tagName = 'mytag'
project.ext.jarName = (jar.baseName + '-' + jar.version).toLowerCase()
apply from: 'docker.gradle'
docker.gradle
buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-docker-plugin:3.2.7'
    }
}
repositories {
    jcenter()
}
// use fully qualified class name
apply plugin: com.bmuschko.gradle.docker.DockerRemoteApiPlugin
// import task classes
import com.bmuschko.gradle.docker.tasks.image.*
def imageName = project.ext.imageName
def tagName = project.ext.tagName
def jarName = project.ext.jarName
task createAppDockerfile(type: Dockerfile) {
    // Don't create dockerfile if file already exists
    onlyIf { !project.file('Dockerfile').exists() }
    group 'Docker'
    description 'Generate docker file for the application'
    dependsOn bootRepackage
    destFile = project.file('Dockerfile')
    String dockerProjFolder = project.projectDir.name
    from 'openjdk:8-jre-slim'
    runCommand("mkdir -p /app/springboot/${dockerProjFolder} && mkdir -p /app/springboot/${dockerProjFolder}/conf")
    addFile("./build/libs/${jarName}.jar", "/app/springboot/${dockerProjFolder}/")
    environmentVariable('CATALINA_BASE', "/app/springboot/${dockerProjFolder}")
    environmentVariable('CATALINA_HOME', "/app/springboot/${dockerProjFolder}")
    workingDir("/app/springboot/${dockerProjFolder}")
    if (System.properties.containsKey('debug')) {
        entryPoint('java', '-Xdebug', '-Xrunjdwp:server=y,transport=dt_socket,address=5005,suspend=n', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    } else {
        entryPoint('java', '-jar', "/app/springboot/${dockerProjFolder}/${jarName}.jar")
    }
}
task removeAppImage(type: DockerRemoveImage) {
    group 'Docker'
    description 'Remove the docker image using force'
    force = true
    targetImageId { imageName }
    onError { exception ->
        if (exception.message.contains('No such image')) {
            println 'Docker image not found for the current project.'
        } else {
            print exception
        }
    }
}
task createAppImage(type: DockerBuildImage) {
    group 'Docker'
    description 'Executes bootRepackage, generates a docker file and builds image from it'
    dependsOn(createAppDockerfile, removeAppImage)
    dockerFile = createAppDockerfile.destFile
    inputDir = dockerFile.parentFile
    if (tagName)
        tag = "${tagName}"
    else if (imageName)
        tag = "${imageName}"
    else
        tag = "${jarName}"
}
The change was basically in my docker.gradle file: I had to add a repositories section pointing to jcenter() and use the fully qualified plugin class name.
I wonder why you would ever want to use Docker plugins in your Gradle build. After playing for some time with the most maintained one, Muschko's Gradle Docker Plugin, I don't understand why one should pile plugin boilerplate on top of Docker boilerplate, each with its own syntax.
Containerizing from Gradle is comfortable for programmers, but DevOps people will say this is not how they work with Docker, and you should go their way if you need their support. I found a solution, shown below, that is short, pure Docker-style and functionally complete.
Keep a normal Dockerfile and a docker.sh shell script in the project structure (the build script below expects them to end up under build/resources/main, i.e. in the resources folder):
In your build.gradle file:
buildscript {
    repositories {
        gradlePluginPortal()
    }
}
plugins {
    id 'java'
    id 'application'
}
// Entrypoint:
jar {
    manifest {
        attributes 'Main-Class': 'com.example.Test'
    }
}
mainClassName = 'com.example.Test'
// Run in DOCKER-container:
task runInDockerContainer {
    doLast {
        exec {
            workingDir '.' // Relative to project's root folder.
            commandLine 'sh', './build/resources/main/docker.sh'
        }
        // exec {
        //     workingDir "."
        //     commandLine 'sh', './build/resources/main/other.sh'
        // }
    }
}
Do anything you want in a clean shell script following normal Docker docs:
# Everything is run relative to project's root folder:
# Put everything together into build/docker folder for DOCKER-build:
if [ -d "build/docker" ]; then rm -rf build/docker; fi;
mkdir build/docker;
cp build/libs/test.jar build/docker/test.jar;
cp build/resources/main/Dockerfile build/docker/Dockerfile;
# Build image from files in build/docker:
cd build/docker || exit;
docker build . -t test;
# Run container based on image:
echo $PWD
docker run test;
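One detail the answer glosses over: the task runs ./build/resources/main/docker.sh and the script copies build/libs/test.jar, so both the jar and the processed resources must already exist when the task runs. A minimal sketch of how that could be wired up (an assumption on my part, not in the original answer):
// Assumption: build the jar and process resources before docker.sh is executed.
runInDockerContainer.dependsOn 'build'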

Use files as input to Jenkins JobDSL

I am trying to use Jenkins' JobDSL plugin to programmatically create jobs. However, I want to be able to define the parameters in a file. According to the docs on distributed builds, this may not be possible. Does anyone have an idea how I can achieve this? I could use the readFileFromWorkspace method, but I would still need to iterate over all of the provided files and run the JobDSL x times. The JobDSL code is below; the important part I am struggling with is the first 15 lines or so.
#!groovy
import groovy.io.FileType
def list = []
hudson.FilePath workspace = hudson.model.Executor.currentExecutor().getCurrentWorkspace()
def dir = new File(workspace.getRemote() + "/pipeline/applications")
dir.eachFile(FileType.FILES) { file ->
    list << file
}
list.each {
    println(it.path)
    def properties = new Properties()
    this.getClass().getResource(it.path).withInputStream {
        properties.load(it)
    }
    def _git_key_id = 'jenkins'
    consumablesRoot = '//pipeline_test'
    application_folder = "${consumablesRoot}/" + properties._application_name
    // Create the branch_indexer
    def jobName = "${application_folder}/branch_indexer"
    folder(consumablesRoot) {
        description("Ensure consumables folder is in place")
    }
    folder(application_folder) {
        description("Ensure app folder in consumables spaces is in place.")
    }
    job(jobName) {
        println("in the branch_indexer: ${GIT_BRANCH}")
        label('master')
        /* environmentVariables(
            __pipeline_code_repo: properties."__pipeline_code_repo",
            __pipeline_code_branch: properties."__pipeline_code_branch",
            __pipeline_scripts_code_repo: properties."__pipeline_scripts_code_repo",
            __pipeline_scripts_code_branch: properties."__pipeline_scripts_code_branch",
            __gcp_template_code_repo: properties."__gcp_template_code_repo",
            __gcp_template_code_branch: properties."__gcp_template_code_branch",
            _git_key_id: _git_key_id,
            _application_id: properties."_application_id",
            _application_name: properties."_application_name",
            _business_mnemonic: properties."_business_mnemonic",
            _control_repo: properties."_control_repo",
            _project_name: properties."_project_name"
        ) */
        scm {
            git {
                remote {
                    url(control_repo)
                    name('control_repo')
                    credentials(_git_key_id)
                }
                remote {
                    url(pipeline_code_repo)
                    name('pipeline_pipelines')
                    credentials(_git_key_id)
                }
            }
        }
        triggers {
            scm('@daily')
        }
        steps {
            // ensure that the latest code from the pipeline source code repo has been pulled
            shell("git ls-remote --heads control_repo | cut -d'/' -f3 | sort > .branches")
            shell("git checkout -f pipeline_pipelines/" + properties."pipeline_code_branch")
            // get the last branch from the control_repo repo
            shell("""
                git for-each-ref --sort=-committerdate refs/remotes | grep -i control_repo | head -n 1 > .last_branch
            """)
            dsl(['pipeline/branch_indexer.groovy'])
        }
    }
    // Start the branch_indexer
    queue(jobName)
}
In case someone else ends up here in search of a simple method for reading only one parameter file, use readFileFromWorkspace (as mentioned by @CodyK):
def file = readFileFromWorkspace(relative_path_to_file)
If the file contains a parameter called your_param, you can read it using ConfigSlurper:
def config = new ConfigSlurper().parse(file)
def your_param = config.getProperty("your_param")
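For illustration, a hypothetical parameter file (not from the original post) that ConfigSlurper can parse is just a set of Groovy assignments:
// applications/myapp.groovy (hypothetical): ConfigSlurper reads Groovy syntax,
// so values are quoted strings rather than java.util.Properties key=value pairs.
your_param = "some-value"
_application_name = "myapp"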
I was able to get it working with this piece of code:
import hudson.FilePath
// Build a list of all config files ending in .properties
def cwd = hudson.model.Executor.currentExecutor().getCurrentWorkspace().absolutize()
def configFiles = new FilePath(cwd, 'pipeline/applications').list('*.properties')
configFiles.each { file ->
    def properties = new Properties()
    def content = readFileFromWorkspace(file.getRemote())
    properties.load(new StringReader(content))
}
