Deploying my NixOps machines takes a lot of time, as packages need to build. I want to do the building regularly on my trusted private Hydra instance.
My current approach involves this release.nix file, but it doesn't work out so well.
{ nixpkgs ? <nixpkgs>, onlySystem ? true, extraModules ? [] }:
let
  nixos = import "${nixpkgs}/nixos";
  buildEnv = conf: (nixos {
    configuration = conf;
  });
  buildTarget = m: let build = buildEnv (buildConf m); in
    if onlySystem then build.system else build.vm;
  buildConf = module: { ... }:
    {
      imports = [ module ] ++ extraModules;
    };
in
{
  machine1 = buildTarget ./machine1/configuration.nix;
  machine2 = buildTarget ./machine2/configuration.nix;
  machine3 = buildTarget ./machine3/configuration.nix;
  machine4 = buildTarget ./machine4/configuration.nix;
}
I don't really understand this code, as I copied it from here.
This builds fine if I run nix-build release.nix locally, but on Hydra I never get a full build. Sometimes builds don't dequeue (they just never get built), sometimes they fail with various error messages. Since none of the Hydra problems are reproducible (besides the fact that I never get a full build), I wonder what the best practice for building a NixOps deployment is.
Please note that I have unfree packages in my deployment. The option nixpkgs.config.allowUnfree = true; is set on the Hydra server.
This is not a question about my Hydra failures, but about what would be a good way to build a NixOps deployment with Hydra CI.
As far as I know, there is no way to make this super easy. Your code looks OK, but my method is slightly different: I only build the toplevel attribute, and I construct the NixOS configuration differently.
I build NixOS 'installations' from inside Nix using something like:
{ pkgs ? import <nixpkgs> { } }:

let
  modules = [ ./configuration.nix ];
  # eval-config.nix is the entry point NixOS itself uses to evaluate a
  # configuration; its result exposes config.system.build.toplevel.
  nixosSystem = import (pkgs.path + "/nixos/lib/eval-config.nix") {
    inherit (pkgs) system;
    inherit modules;
  };
in
nixosSystem.config.system.build.toplevel
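Applied to the release.nix layout from the question, that approach might look roughly like this (a sketch, untested; it assumes each machine's configuration.nix is an ordinary NixOS module and keeps the extraModules hook):

{ nixpkgs ? <nixpkgs>, extraModules ? [ ] }:

let
  pkgs = import nixpkgs { };
  # Evaluate one machine's NixOS configuration and expose only the
  # toplevel system derivation, which is the attribute Hydra should build.
  buildToplevel = module:
    (import (pkgs.path + "/nixos/lib/eval-config.nix") {
      inherit (pkgs) system;
      modules = [ module ] ++ extraModules;
    }).config.system.build.toplevel;
in
{
  machine1 = buildToplevel ./machine1/configuration.nix;
  machine2 = buildToplevel ./machine2/configuration.nix;
  machine3 = buildToplevel ./machine3/configuration.nix;
  machine4 = buildToplevel ./machine4/configuration.nix;
}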
I inherited a small Kotlin Multiplatform/Ktor application a little while ago and am moving on to deployment. I'm extremely new to this framework and to Gradle as a whole, and I'm having issues including the cross-compiled client-side JS inside the fat JAR or the Docker container. I'm not entirely sure whether it is supposed to go inside the fat JAR, or whether I should include the JS in the Docker container; I just know that it doesn't work like it is supposed to, and I can't find any good documentation or answers to these questions.
The specific error I get comes from this part of the HTML page:
<script>
Application.main();
</script>
Uncaught ReferenceError: Application is not defined
The page works normally and the function is called when running the project in IntelliJ IDEA, but when I create a Docker image out of the fat JAR, or run it outside of IDEA, it no longer works. Here is the Kotlin source file, just for posterity:
(In Application.kt)
fun main() {
    // ... body elided ...
}
Here's my Dockerfile:
FROM gradle:7-jdk11 AS build
COPY --chown=gradle:gradle . /home/gradle/src
WORKDIR /home/gradle/src
RUN gradle buildFatJar --no-daemon
FROM openjdk:11
EXPOSE 8080:8080
RUN mkdir /app
COPY --from=build /home/gradle/src/build/libs/MyApp.jar /app/MyApp.jar
ENTRYPOINT ["java","-jar","/app/MyApp.jar"]
Here's my build.gradle.kts:
import org.jetbrains.kotlin.gradle.targets.js.webpack.KotlinWebpack
val kotlinVersion = "1.7.20-Beta"
val ktorVersion = "2.0.3"
val kotlinxHtmlVersion = "0.8.0"
val kotlinxCoroutinesVersion = "1.3.8" //curr: 1.6.4
val kotlinWrappersVersion = "1.0.0-pre.354"
val logbackVersion = "1.2.3" //11
plugins {
id("io.ktor.plugin") version "2.1.3"
kotlin("multiplatform") version "1.7.20-Beta"
application
}
application {
mainClass.set("com.example.application.ServerKt")
}
ktor {
fatJar {
archiveFileName.set("MyApp.jar")
}
}
group = "com.example"
version = "1.0-SNAPSHOT"
repositories {
jcenter()
mavenCentral()
}
dependencies {
implementation("org.apache.commons:commons-email:1.5")
}
kotlin {
jvm {
compilations.all {
kotlinOptions.jvmTarget = "1.8"
}
withJava()
testRuns["test"].executionTask.configure {
useJUnitPlatform()
}
}
js {
browser {
binaries.executable()
dceTask {
keep("Application.columnIndex", "Application.prepareTable")
}
}
}
sourceSets {...} // excluded for brevity
}
// include JS artifacts in any JAR we generate
tasks.getByName<Jar>("jvmJar") {
val taskName = if (project.hasProperty("isProduction") || project.gradle.startParameter.taskNames.contains("installDist")) {
"jsBrowserProductionWebpack"
} else {
"jsBrowserDevelopmentWebpack"
}
val webpackTask = tasks.getByName<KotlinWebpack>(taskName)
dependsOn(webpackTask) // make sure JS gets compiled first
from(File(webpackTask.destinationDirectory, webpackTask.outputFileName)) // bring output file along into the JAR
}
tasks {
withType<org.jetbrains.kotlin.gradle.tasks.KotlinCompile> {
kotlinOptions {
jvmTarget = "1.8"
}
}
}
distributions {
main {
contents {
from("$buildDir/libs") {
rename("${rootProject.name}-jvm", rootProject.name)
into("lib")
}
}
}
}
// Alias "installDist" as "stage" (for cloud providers)
tasks.create("stage") {
dependsOn(tasks.getByName("installDist"))
}
tasks.getByName<JavaExec>("run") {
// so that the JS artifacts generated by `jvmJar` can be found and served
classpath(tasks.getByName<Jar>("jvmJar"))
}
After running the Dockerfile, which runs buildFatJar, it copies the created JAR into the Docker image; then I manually add static files and some .json files, and all of that works fine. However, I cannot for the life of me get the cross-compiled JS to work or be referenced in the application.
TL;DR: How can I include the cross-compiled JS either within the fat JAR itself or in the Docker container?
I've already tried this: https://stackoverflow.com/questions/61245847/create-fat-jar-from-ktor-kotlin-multiplatform-project-with-kotlin-gradle-dsl
and many variations of it, plus several other Stack Overflow answers, and nothing seems to work for me.
I have tried several different methods to get the cross-compiled JS into the fat JAR, and when that failed, I tried taking the contents of the build/js/ directory and copying them into the Docker container instead, to see if that would let it run; neither method worked for me.
I've been looking for answers on this all day and have found nothing, so any suggestions would be helpful, thanks!
Turns out the answer was actually pretty simple. The file called Application.js from /build/distributions needed to be moved into the project, somewhere the server could see and serve it, i.e. /static/script.
Leaving this up here because this isn't documented anywhere and I spent like 4 hours looking for a solution, so I hope this helps people in the future.
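If you'd rather have Gradle do that copy than move the file by hand, something like the following might work in build.gradle.kts (a sketch; the jsBrowserDistribution task name and the src/jvmMain/resources/static target directory are assumptions based on the default multiplatform layout, so adjust them to your project):

// Sketch: copy the bundled JS from build/distributions into the JVM
// resources so the server can serve it as static content.
tasks.register<Copy>("copyJsBundle") {
    dependsOn("jsBrowserDistribution") // writes the bundle to build/distributions
    from(layout.buildDirectory.dir("distributions")) { include("*.js") }
    into(layout.projectDirectory.dir("src/jvmMain/resources/static"))
}

tasks.named("jvmProcessResources") {
    dependsOn("copyJsBundle") // ensure the bundle exists before resources are packaged
}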
I'm trying to deploy a NixOS VM while storing its configuration on a private GitLab repository.
My configuration.nix looks like this (simplified to only include the relevant bits):
{ pkgs, ... }:
let
repo = pkgs.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
imports = [
./hardware-configuration.nix
"${repo}/my-server-name/host.nix"
];
}
but it is giving me this error:
error: infinite recursion encountered
at /nix/var/nix/profiles/per-user/root/channels/nixos/lib/modules.nix:496:28:
495| builtins.addErrorContext (context name)
496| (args.${name} or config._module.args.${name})
| ^
497| ) (lib.functionArgs f);
I do not understand where the recursion is happening.
It doesn't seem like it's even fetching the repo, as I can put any non-existent names in the args and get the same error.
I saw https://nixos.org/guides/installing-nixos-on-a-raspberry-pi.html doing something similar without issue:
imports = ["${fetchTarball "https://github.com/NixOS/nixos-hardware/archive/936e4649098d6a5e0762058cb7687be1b2d90550.tar.gz" }/raspberry-pi/4"];
And I can use that line on my VM and it will build fine.
What am I missing?
The recursion is as follows:

1. Compute the configuration
2. Compute the config fixpoint of all modules
3. Find all modules
4. Compute "${repo}/my-server-name/host.nix"
5. Compute repo (pkgs.fetch...)
6. Compute pkgs
7. Compute config._module.args.pkgs (Nixpkgs can be configured by NixOS options)
8. Compute the configuration (= 1)
You can break the recursion at 6 by using builtins.fetchTarball instead.
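For example (a sketch; the archive URL follows GitLab's /-/archive/<ref>/ convention with a placeholder ref, and since the repository is private, the fetch will also need credentials, e.g. via a netrc file):

{ pkgs, ... }:
let
  # builtins.fetchTarball is part of the evaluator itself, so it does not
  # depend on pkgs and cannot re-enter the module fixpoint.
  repo = builtins.fetchTarball
    "https://gitlab.com/hectorj/nix-fleet/-/archive/main/nix-fleet-main.tar.gz";
in {
  imports = [
    ./hardware-configuration.nix
    "${repo}/my-server-name/host.nix"
  ];
}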
Alternatively, you can break it around 7, by using a different "pkgs".
If you're using configuration.nix as part of a larger expression, you may be able to pass an invoked Nixpkgs to NixOS via specialArgs.pkgs2 = import nixpkgs { ... }. This creates a pkgs2 module argument that can't be configured by NixOS itself.
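For example (a sketch of the outer expression; <nixpkgs> stands in for however you pin Nixpkgs):

import <nixpkgs/nixos/lib/eval-config.nix> {
  system = "x86_64-linux";
  modules = [ ./configuration.nix ];
  # pkgs2 becomes a module argument that NixOS options cannot influence,
  # so using it cannot re-enter the fixpoint.
  specialArgs.pkgs2 = import <nixpkgs> { };
}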
Otherwise, you could define pkgs2 in a let binding.
{ pkgs, ... }:
let
# pkgs2: An independent Nixpkgs that can be constructed before the NixOS
# imports are resolved.
pkgs2 = import <nixpkgs> {};
repo = pkgs2.fetchFromGitLab { owner = "hectorj"; repo = "nix-fleet"; };
in {
imports = [
./hardware-configuration.nix
"${repo}/my-server-name/host.nix"
];
}
I am trying to use Bazel with Pybind, and it requires that I set the following variables:
"""Repository rule for Python autoconfiguration.
`python_configure` depends on the following environment variables:
* `PYTHON_BIN_PATH`: location of python binary.
* `PYTHON_LIB_PATH`: Location of python libraries.
"""
https://github.com/pybind/pybind11_bazel/blob/master/python_configure.bzl
I don't want to have to pass them in manually when building my libraries; how can I hardcode these env vars in my WORKSPACE?
To (always) set an environment variable for repository rule consumption, you can use the --repo_env command line option. And if you want to include it with every invocation in your workspace, you can add these flags to your .bazelrc file.
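For example (a sketch; the interpreter paths are placeholders for wherever Python actually lives on your build host):

# .bazelrc: applied to every bazel invocation in this workspace
build --repo_env=PYTHON_BIN_PATH=/usr/bin/python3
build --repo_env=PYTHON_LIB_PATH=/usr/lib/python3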
Now the wisdom of doing that could be questioned. If it's actually project (repo) configuration and not build host configuration, it would probably make more sense, and be more targeted and explicit, if it were an attribute of the given rule, checked in with the rest of the build configuration.
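To illustrate what that could look like (purely hypothetical: the attribute names below are invented, and whether python_configure actually exposes anything like them needs to be checked against its documentation):

# WORKSPACE (hypothetical sketch): these attributes are NOT the real
# python_configure API, just an illustration of checked-in configuration.
load("@pybind11_bazel//:python_configure.bzl", "python_configure")

python_configure(
    name = "local_config_python",
    python_bin_path = "/usr/bin/python3",  # hypothetical attribute
    python_lib_path = "/usr/lib/python3",  # hypothetical attribute
)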
And looking at the name, there may be another question about specifying the Python configuration (from outside the Bazel build) instead of actually using a correctly resolved Python toolchain (but I have no background in what the given rule is about or what it is trying to accomplish, so I cannot render judgment; this is just a general comment).
To address your comment: I don't know what other factors make it "not accept" the variables or what exactly that looks like, but if I have this mini-example:
.
├── BUILD
├── WORKSPACE
└── customrule.bzl
Where customrule.bzl reads:
def _run_me(repo_ctx):
repo_ctx.file(
"WORKSPACE",
'workspace(name = "{}")\n'.format(repo_ctx.name),
executable = False,
)
repo_ctx.file(
"BUILD",
'exports_files(["var.sh"], visibility=["//visibility:public"])',
executable = False,
)
repo_ctx.file(
"var.sh",
"echo {}\n".format(repo_ctx.os.environ.get("var1")),
executable = True,
)
wsrule = repository_rule(
implementation = _run_me,
environ = ["var1"],
)
The WORKSPACE is:
load(":customrule.bzl", "wsrule")
wsrule(
name = "extdep"
)
And BUILD:
sh_binary(
name = "tgt",
srcs = ["#extdep//:var.sh"],
)
Then I do get:
$ bazel run --repo_env var1=val1 tgt
val1
and:
$ bazel run --repo_env var1=val2 tgt
val2
I.e. this is a way to pass variables to a repo rule and it does (as such) work.
If you absolutely know you must call a build with some variable set to a certain value (which, as mentioned above, is itself a requirement worth closer examination) and you want it associated with the project/repo, you can always check in a build.sh or similar file that wraps your bazel call to be exactly what it must be. But again, this looks more likely not to be entirely "The Right Thing" to do or want.
So I am trying to convert a monorepo of microservices (C#, Go, NodeJS) to use Bazel. Just playing with it for now.
To get started, I focused on one Go service and isolated it as a WORKSPACE.
The Go service is a gRPC service that obviously uses protobuf, plus grpc-gateway with protoc-gen-swagger, and also protoc-gen-gorm (which does not support Bazel).
The code builds using a command like go build cmd/server/server.go.
I am hoping to get some guidance on how to get started building this project with all its dependencies.
I see several rules available for protobuf/Go, and I am not yet comfortable browsing through them or deciding which is better (I cannot get any of them to work, due to grpc-gateway or protoc-gen-gorm):
- https://github.com/stackb/rules_proto
- https://github.com/bazelbuild/rules_go
- https://github.com/stackb/rules_proto/tree/master/github.com/grpc-ecosystem/grpc-gateway
Code structure looks like this:
/repo
  svc1
  svc2
  svc3
    cmd/server
      BUILD.bazel
      server.go
    pkg
      (contains folders and some Go files, with a BUILD.bazel in each)
    proto
      BUILD.bazel
      test.proto
    WORKSPACE
    BUILD.bazel
Right now I only work on svc3. Later I will probably move the WORKSPACE to the parent folder.
My WORKSPACE looks like this:
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "io_bazel_rules_go",
sha256 = "96b1f81de5acc7658e1f5a86d7dc9e1b89bc935d83799b711363a748652c471a",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/rules_go/releases/download/0.19.2/rules_go-0.19.2.tar.gz",
"https://github.com/bazelbuild/rules_go/releases/download/0.19.2/rules_go-0.19.2.tar.gz",
],
)
load("#io_bazel_rules_go//go:deps.bzl", "go_register_toolchains", "go_rules_dependencies")
go_rules_dependencies()
go_register_toolchains()
http_archive(
name = "bazel_gazelle",
urls = [
"https://storage.googleapis.com/bazel-mirror/github.com/bazelbuild/bazel-gazelle/releases/download/0.18.1/bazel-gazelle-0.18.1.tar.gz",
"https://github.com/bazelbuild/bazel-gazelle/releases/download/0.18.1/bazel-gazelle-0.18.1.tar.gz",
],
sha256 = "be9296bfd64882e3c08e3283c58fcb461fa6dd3c171764fcc4cf322f60615a9b",
)
load("#bazel_gazelle//:deps.bzl", "gazelle_dependencies", "go_repository")
gazelle_dependencies()
load("#bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")
git_repository(
name = "com_google_protobuf",
commit = "09745575a923640154bcf307fba8aedff47f240a",
remote = "https://github.com/protocolbuffers/protobuf",
shallow_since = "1558721209 -0700",
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
+ a bunch of go_repository() created by Gazelle
Running Gazelle created a bunch of BUILD.bazel files for my Go project, one in each folder.
Next to the .proto file, I have this generated BUILD.bazel:
load("#io_bazel_rules_go//go:def.bzl", "go_library")
load("#io_bazel_rules_go//proto:def.bzl", "go_proto_library")
proto_library(
name = "svc_proto",
srcs = ["test.proto"],
visibility = ["//visibility:public"],
deps = [
# the two github below are referenced as go_repository
"#com_github_infobloxopen_protoc_gen_gorm//options:proto_library", # not sure what to put after the colon
"#com_github_grpc_ecosystem_grpc_gateway//protoc-gen-swagger/options:proto_library",
"#go_googleapis//google/api:annotations_proto",
],
)
go_proto_library(
name = "svc_go_proto",
compilers = ["#io_bazel_rules_go//proto:go_grpc"],
importpath = "src/test/proto/v1",
proto = ":svc_proto",
visibility = ["//visibility:public"],
deps = [
"//github.com/infobloxopen/protoc-gen-gorm/options:go_default_library",
"//github.com/grpc-ecosystem/grpc-gateway/protoc-gen-swagger/options:go_default_library",
"#go_googleapis//google/api:annotations_go_proto",
],
)
go_library(
name = "go_default_library",
embed = [":svc_go_proto"],
importpath = "src/test/proto/v1",
visibility = ["//visibility:public"],
)
Now the questions:
1. Not sure what to put to reference other proto files: "@com_github_infobloxopen_protoc_gen_gorm//options:proto_library"? And I am not sure this is the best way to reference other external libraries from Git.
2. If I build the above using bazel build //proto/v1:svc_proto, I get: no such target '@com_github_grpc_ecosystem_grpc_gateway//protoc-gen-swagger/options:proto_library': target 'proto_library' not declared in package 'protoc-gen-swagger/options'. Probably linked to 1.
3. I am not sure which rules to use. As I need grpc-gateway, I guess I need to exclusively use https://github.com/stackb/rules_proto/tree/master/github.com/grpc-ecosystem/grpc-gateway, but I can't make those work either.
4. I use statik (https://github.com/rakyll/statik) to package the swagger file in Go in order to serve it. Is there any alternative? If not, how can I call a custom bash command as part of the build process in the chain?
In summary, I am pretty sure my BUILD.bazel file for building the proto and library is structured wrong, and I would appreciate some up-to-date guidance (GitHub is full of repos that are outdated, use outdated rules, or simply don't work).
I'm in the process of learning how to use Nix/NixOS/NixOps, and I'm having trouble refactoring a simple NixOps deployment.
My starting point is this working vbox-all.nix file:
{
server =
{ config, pkgs, ... }:
{
# deployment-specific config
deployment.targetEnv = "virtualbox";
deployment.virtualbox.memorySize = 1024; # megabytes
deployment.virtualbox.vcpu = 2; # number of cpus
# postgres-specific config
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql96;
# htop-specific config
environment.systemPackages =
[
pkgs.htop
];
};
}
Running nixops create ./vbox-all.nix -d mydeployment and then nixops deploy -d mydeployment works perfectly: I get a VirtualBox machine with Postgres 9.6 running and htop installed.
Now, having all of this in one file does not seem to be a good idea for long term maintenance.
Here is the file layout I think I want:
.
├── configuration-all.nix # forms a NixOs config with htop, postgres, etc.
├── htop.nix # NixOs config of just htop
├── postgres.nix # NixOs config of just Postgres
└── vbox-all.nix # NixOps config for virtualbox with htop, postgres, etc.
The idea being that vbox-all.nix imports configuration-all.nix which imports all services/packages/conf I might want (currently postgres and htop).
That's what I cannot get to work.
Here is my configuration-all.nix:
{ config, pkgs, ... }:
{
imports = [ ./postgres.nix ./htop.nix ];
}
Here is ./postgres.nix:
{ config, pkgs, ... }:
{
services.postgresql.enable = true;
services.postgresql.package = pkgs.postgresql96;
}
I think you can guess the content of ./htop.nix, and it doesn't really matter anyway.
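For reference, it is presumably just the htop line from the original file moved into its own module:

{ config, pkgs, ... }:
{
  environment.systemPackages = [ pkgs.htop ];
}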
And finally, my modified vbox-all.nix:
{
server =
{ config, pkgs, ... }:
with (pkgs.callPackage ./configuration-all.nix { });
{
# deployment-specific config
deployment.targetEnv = "virtualbox";
deployment.virtualbox.memorySize = 1024; # megabytes
deployment.virtualbox.vcpu = 2; # number of cpus
};
}
When I re-run nixops deploy -d mydeployment, I don't get any errors, but the resulting VM has neither postgres nor htop.
I must be fundamentally misunderstanding either with or callPackage. To my mind, it should execute the function defined in ./configuration-all.nix (auto-filling all the args) and merge the resulting expression with my "deployment-specific config".
I tried a few things, like replacing pkgs.callPackage with import (still no error, but still no good) and using inherit (pkgs.callPackage ./configuration-all.nix { }) instead of with, but so far no dice.
I must be missing something small and probably obvious...
Here is my final working vbox-all.nix I figured out while writing my question.
{
server =
{
imports = [ ./configuration-all.nix ];
# deployment-specific config
deployment.targetEnv = "virtualbox";
deployment.virtualbox.memorySize = 1024; # megabytes
deployment.virtualbox.vcpu = 2; # number of cpus
};
}
Thanks SO, you're a good rubber duck.
I still need to understand why my other attempts with with and inherit did not work, so don't hesitate to comment or post an alternative answer. I have a lot to learn.
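Update, for future readers: as far as I can tell, the with attempt failed because with only brings a set's attributes into lexical scope for the expression that follows; it never merges them into the attribute set literal you write there, so the imported configuration was evaluated and then silently discarded. A minimal illustration:

# `with` affects name lookup only; this expression evaluates to
# { b = 2; }, not { a = 1; b = 2; }.
with { a = 1; }; { b = 2; }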