Merlin is not picking up dependencies in a nix flake shell

I have a flake containing a dev shell pinning all the dependencies required to develop a given OCaml program, including an actual OCaml compiler, merlin, findlib and the OCaml libraries. For instance, if the project only depended on Graphics, it could be:
devShell = pkgs.mkShell {
  buildInputs = with pkgs; [
    ocaml
  ] ++ (with ocamlPackages; [
    findlib
    merlin
    graphics
  ]);
};
In addition to that, I have direnv setup to use the flake with the following in .envrc:
use flake
And it works fine. If I go further and add dune (to help find out where the libraries are) and utop, like
buildInputs = with pkgs; [
  ocaml
  dune_3
] ++ (with ocamlPackages; [
  findlib
  merlin
  graphics
  utop
]);
I can run dune utop, then # open Graphics;; and it still works. After setting up a minimal dune project, dune build just works.
Direnv is also set up for Emacs, so when I edit an OCaml file, merlin-mode picks up the shell's merlin flawlessly. The same goes for the shell's utop. But if I add open Graphics, it says "Unbound module Graphics", as if merlin couldn't pick up the dependency.
I've tried following this article, which essentially adds ${merlin}/share/emacs/site-lisp to the load-path in Emacs. But that didn't work.
I've also tried moving to ocaml-lsp+lsp-mode instead of merlin, but that didn't help.
I also found this post describing someone getting the Emacs integration to work, but it gives no details beyond configuring ocamlformat, which isn't part of my minimal reproducible example, so I don't think it's pertinent.

Related

How can I search in Nixpkgs for a package expression?

From the manual:
Components are installed from a set of Nix expressions that tell Nix
how to build those packages, including, if necessary, their
dependencies. There is a collection of Nix expressions called the
Nixpkgs package collection that contains packages ranging from basic
development stuff such as GCC and Glibc, to end-user applications like
Mozilla Firefox.
Let's assume I want to search for the Nix expression of the package Go, for example. Where should I look in the repository to find the right file?
My tool of choice is nix repl. Often its tab completion is sufficient to find attribute names. Sometimes you might want to use https://search.nixos.org/packages.
For a convenient workflow, you could set EDITOR and use the :edit command in nix repl. Usually I open nixpkgs in VSCode and then run nix repl . in a VSCode terminal so I can Ctrl+click file locations as well.
I never use the directory structure, because search is so much more convenient.
[~/nixpkgs]$ export EDITOR=... # nano doesn't seem to work for this
[~/nixpkgs]$ nix repl .
nix-repl> :edit go
or
nix-repl> go.meta.position
"~/nixpkgs/pkgs/development/compilers/go/1.17.nix:278"
This generally gives you the location of a mkDerivation call, or a call to a similar function.
To get the location where an attribute is defined, use
nix-repl> builtins.unsafeGetAttrPos "go" pkgs
{ column = 3; file = "~/nixpkgs/pkgs/top-level/all-packages.nix"; line = 12753; }
And then there's the recursive directory search option (grep -R, IDE-integrated search, etc.). This generally works really well, as package names tend to be specific. Too bad go isn't. We generally don't do crazy code formatting in Nix, so a leading space and an equals sign do a pretty good job of finding definitions, even for go, if you ignore the matches in lib/.
[~/nixpkgs]$ git grep -n ' go ='
lib/attrsets.nix:125: go = prefixLength: hasValue: value: updates:
lib/debug.nix:234: go = x: generators.toPretty
lib/deprecated.nix:92: let go = xs: acc:
lib/filesystem.nix:29: let go = path:
lib/generators.nix:237: go = indent: v: with builtins;
lib/trivial.nix:500: go = i:
nixos/modules/security/apparmor/includes.nix:9: let go = { path ? null, mode ? "r", trail ? "" }:
pkgs/stdenv/booter.nix:63: go = pred: n:
pkgs/top-level/all-packages.nix:12753: go = go_1_17;
pkgs/top-level/all-packages.nix:21043: go = buildPackages.go_1_16;
pkgs/top-level/all-packages.nix:21046: go = buildPackages.go_1_17;
pkgs/top-level/all-packages.nix:21049: go = buildPackages.go_1_18;
pkgs/top-level/all-packages.nix:21055: go = buildPackages.go_1_16;
pkgs/top-level/all-packages.nix:21058: go = buildPackages.go_1_17;
pkgs/top-level/all-packages.nix:21061: go = buildPackages.go_1_18;
pkgs/top-level/all-packages.nix:26493: go = go_1_16;

How to integrate C/C++ analysis tooling in Bazel?

I have a code analysis tool that I'd like to run for each cc_library (and cc_binary, silently implied for the rest of the question). The tool has a CLI interface taking:
A tool project file
Compiler specifics, such as type sizes, built-ins, macros etc.
Files to analyze
File path, includes, defines
Rules to (not) apply
Files to add to the project
Options for synchronizing files with build data
JSON compilation database
Parse build log
Analyze and generate analysis report
I've been looking at how to integrate this in Bazel so that the files to analyze AND the associated includes and defines are updated automatically, and so that any analysis result is properly cached. Generating a JSON compilation database (using a third-party lib) or parsing the build log both require separate runs and updates to the source tree; for this question I consider those workarounds I'm trying to remove.
What I've tried so far is using aspects, adding an analysis aspect to any library. The general idea is having a base project file holding the library-invariant configuration, appending the cc_library files to analyze, and finally triggering an analysis that generates the report. But I'm having trouble executing on this, and I'm not sure it's even possible.
This is my aspect implementation so far, trying to iterate through cc_library attributes and target compilation context:
def _print_aspect_impl(target, ctx):
    # Make sure the rule has a srcs attribute
    if hasattr(ctx.rule.attr, 'srcs'):
        # Iterate through the files
        for src in ctx.rule.attr.srcs:
            for f in src.files.to_list():
                if f.path.endswith(".c"):
                    print("file: ")
                    print(f.path)
                    print("includes: ")
                    print(target[CcInfo].compilation_context.includes)
                    print("quote_includes: ")
                    print(target[CcInfo].compilation_context.quote_includes)
                    print("system_includes: ")
                    print(target[CcInfo].compilation_context.system_includes)
                    print("defines: ")
                    print(ctx.rule.attr.defines)
                    print("local_defines: ")
                    print(ctx.rule.attr.local_defines)
                    print("")  # empty line to separate file prints
    return []
What I cannot figure out is how to get ALL includes and defines used when compiling the library:
From libraries depended upon, recursively
copts, defines, includes
From the toolchain
features, cxx_builtin_include_directories
Questions:
How do I get the missing flags, continuing with the technique presented above?
Can I somehow retrieve the compile action command string?
Appended to analysis project using the build log API
Some other solution entirely?
Perhaps there is something one can do with cc_toolchain instead of aspects...
Aspects are the right tool for this. The information you're looking for is contained in the providers, fragments, and toolchains of the cc_* rules the aspect has access to. Specifically, CcInfo has the target-specific pieces, the cpp fragment has the pieces configured via command-line flags, and CcToolchainInfo has the parts coming from the toolchain.
CcInfo in target tells you if the current target has that provider, and target[CcInfo] accesses it.
The rules_cc my_c_compile example is where I usually look for pulling out a complete compiler command based on a CcInfo. Something like this should work from the aspect:
load("#rules_cc//cc:action_names.bzl", "C_COMPILE_ACTION_NAME")
load("#rules_cc//cc:toolchain_utils.bzl", "find_cpp_toolchain")
[in the impl]:
cc_toolchain = find_cpp_toolchain(ctx)
feature_configuration = cc_common.configure_features(
ctx = ctx,
cc_toolchain = cc_toolchain,
requested_features = ctx.features,
unsupported_features = ctx.disabled_features,
)
c_compiler_path = cc_common.get_tool_for_action(
feature_configuration = feature_configuration,
action_name = C_COMPILE_ACTION_NAME,
)
[in the loop]
c_compile_variables = cc_common.create_compile_variables(
feature_configuration = feature_configuration,
cc_toolchain = cc_toolchain,
user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.conlyopts,
source_file = src.path,
)
command_line = cc_common.get_memory_inefficient_command_line(
feature_configuration = feature_configuration,
action_name = C_COMPILE_ACTION_NAME,
variables = c_compile_variables,
)
env = cc_common.get_environment_variables(
feature_configuration = feature_configuration,
action_name = C_COMPILE_ACTION_NAME,
variables = c_compile_variables,
)
That example only handles C files (not C++); you'll have to change the action names and which parts of the fragment it uses accordingly.
You have to add toolchains = ["@bazel_tools//tools/cpp:toolchain_type"] and fragments = ["cpp"] to the aspect definition to use those. Also see the note in find_cc_toolchain.bzl about the _cc_toolchain attr if you're using legacy toolchain resolution.
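Putting those pieces together, a minimal sketch of the aspect definition might look like this (assuming the _print_aspect_impl from the question and legacy toolchain resolution via the implicit _cc_toolchain attribute):
print_aspect = aspect(
    implementation = _print_aspect_impl,
    # walk deps so libraries depended upon (recursively) are visited too
    attr_aspects = ["deps"],
    attrs = {
        # needed for legacy toolchain resolution; see find_cc_toolchain.bzl
        "_cc_toolchain": attr.label(
            default = Label("@bazel_tools//tools/cpp:current_cc_toolchain"),
        ),
    },
    fragments = ["cpp"],
    toolchains = ["@bazel_tools//tools/cpp:toolchain_type"],
)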
The information coming from the rules and the toolchain is already structured. Depending on what your analysis tool wants, it might make more sense to extract it directly instead of generating a full command line. Most of the provider, fragment, and toolchain is well-documented if you want to look at those directly.
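For instance, here is a sketch of collecting the structured data inside the loop instead of building a command line (it reuses the cc_toolchain variable from above; the dict layout is purely illustrative):
cc = target[CcInfo].compilation_context
analysis_inputs = {
    "defines": cc.defines.to_list() + cc.local_defines.to_list(),
    "includes": cc.includes.to_list(),
    "quote_includes": cc.quote_includes.to_list(),
    "system_includes": cc.system_includes.to_list(),
    # built-in include directories come from the toolchain, not CcInfo
    "builtin_includes": cc_toolchain.built_in_include_directories,
    # flags set on the command line live in the cpp fragment
    "copts": ctx.fragments.cpp.copts,
}
From there you could, for example, json.encode the dict and write it out with ctx.actions.write as the per-target part of your tool's project file.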
You might pass required_providers = [CcInfo] to aspect to limit propagation to rules which include it, depending on how you want to manage propagation of your aspect.
The Integrating with C++ Rules documentation page also has some more info.

Bazel - depend on generated outputs

I have a yaml file in a Bazel monorepo that has constants I'd like to use in several languages. This is kind of like how protobuffs are created and used.
How can I parse this yaml file at build time and depend on the outputs?
For instance:
item1: "hello"
item2: "world"
nested:
nested1: "I'm nested"
nested2: "I'm also nested"
I then need to parse this yaml file so it can be used in many different languages (e.g., Rust, TypeScript, Python, etc.). For instance, here's the desired output for TypeScript:
export default {
item1: "hello",
item2: "world",
nested: {
nested1: "I'm nested",
nested2: "I'm also nested",
}
}
Notice, I don't want TypeScript code that reads the yaml file and converts it into an object. That conversion should be done in the build process.
For the actual conversion, I'm thinking of writing that in Python, but it doesn't need to be. This would then mean the python also needs to run at build time.
P.S. I care mostly about the functionality, so I'm flexible with the exactly how it's done. I'm even fine using another file format aside from yaml.
Thanks to help from @oakad, I was able to figure it out. Essentially, you can use genrule to create outputs.
So, assuming you have some target named parse_config (a Python one, say) set up to generate the output, you can just do this:
genrule(
    name = "generated_output",
    srcs = [],
    outs = ["output.txt"],
    cmd = "./$(execpath :parse_config) > $@",
    exec_tools = [":parse_config"],
    visibility = ["//visibility:public"],
)
The generated file is output.txt. And you can access it via //lib/config:generated_output.
Note that the cmd essentially pipes stdout into the output file; in Python, that means anything printed will appear in the generated file.
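For completeness, the parse_config tool itself can be an ordinary py_binary wrapping a small converter script. Here is a sketch, assuming a hypothetical parse_config.py that takes the YAML path as its first argument and prints the TypeScript module to stdout (how PyYAML is provided depends on your workspace setup):
load("@rules_python//python:defs.bzl", "py_binary")

py_binary(
    name = "parse_config",
    srcs = ["parse_config.py"],
    deps = ["@pypi//pyyaml"],  # or however PyYAML is wired up in your setup
)

# parse_config.py -- hypothetical converter: YAML in, TypeScript module out on stdout
import json
import sys

import yaml

with open(sys.argv[1]) as f:
    data = yaml.safe_load(f)

# json.dumps output happens to be valid TypeScript object-literal syntax here
print("export default " + json.dumps(data, indent=2))
With that variant you would also list config.yaml in the genrule's srcs and pass $(location config.yaml) as an argument in cmd.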

Import Pcaml grammar to extend OCaml's printer using camlp5

I want to create a printer extension for OCaml using camlp5. My code would look like the example from this tutorial, but instead of creating my own grammar extension, I would like to use OCaml's grammar to parse a program.
For that, I would like to use the Pcaml module to parse the given string with OCaml's grammar. Unfortunately, each time I try to use it, I get the following error:
Required module 'Pcaml' is unavailable
This is the part of my code where I load and open modules, as well as part of the code that uses Pcaml:
#load "pa_extprint.cmo";;
#load "q_MLast.cmo";;
#load "pa_o.cmo";;
open Pcaml;;
open Pprintf;;
let pa_ocaml = Grammar.Entry.create Pcaml.gram "pcaml_gram";;
I tried multiple commands to run the program, for example:
ocamlc -pp camlp5o -I +camlp5 gramlib.cma <my_file>.ml
What do I need to be able to use Pcaml and Pcaml.gram?
I recommend using ocamlfind to build and link your programs. The only argument against it for a newcomer is that things can get buggy on Windows without WSL. A compilation command that works without errors is shown below:
ocamlfind c -syntax camlp5o -package camlp5 -linkpkg a.ml
#load "pa_extprint.cmo";;
#load "q_MLast.cmo";;
#load "pa_o.cmo";;
open Pcaml;;
open Pprintf;;
let pa_ocaml : int Grammar.Entry.e = Grammar.Entry.create Pcaml.gram "pcaml_gram";;
FYI, your #load commands can and should be replaced by specifying the right ocamlfind packages.

How can I use the JAR tool with Bazel v0.19+?

Starting with Bazel v0.19, if you have Starlark (formerly known as "Skylark") code that references @bazel_tools//tools/jdk:jar, you see messages like this at build time:
WARNING: <trimmed-path>/external/bazel_tools/tools/jdk/BUILD:79:1: in alias rule @bazel_tools//tools/jdk:jar: target '@bazel_tools//tools/jdk:jar' depends on deprecated target '@local_jdk//:jar': Don't depend on targets in the JDK workspace; use @bazel_tools//tools/jdk:current_java_runtime instead (see https://github.com/bazelbuild/bazel/issues/5594)
I think I could make things work with @bazel_tools//tools/jdk:current_java_runtime if I wanted access to the java command, but I'm not sure what I'd need to do to get the jar tool to work. The contents of the linked GitHub issue didn't seem to address this particular problem.
I stumbled across a commit to Bazel that makes a similar adjustment to the Starlark java rules. It uses the following pattern: (I've edited the code somewhat)
# in the rule attrs:
"_jdk": attr.label(
    default = Label("//tools/jdk:current_java_runtime"),
    providers = [java_common.JavaRuntimeInfo],
),

# then in the rule implementation:
java_runtime = ctx.attr._jdk[java_common.JavaRuntimeInfo]
jar_path = "%s/bin/jar" % java_runtime.java_home
ctx.action(
    inputs = ctx.files._jdk + other_inputs,
    outputs = [deploy_jar],
    command = "%s cmf %s" % (jar_path, input_files),
)
Additionally, java is available at str(java_runtime.java_executable_exec_path) and javac at "%s/bin/javac" % java_runtime.java_home.
See also, a pull request with a simpler example.
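For orientation, here is a sketch of a rule wiring in that implicit attribute from your own workspace (the rule and implementation names are hypothetical; outside the Bazel repository the default label points at @bazel_tools rather than //tools/jdk):
my_deploy_jar = rule(
    implementation = _my_deploy_jar_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        # implicit dependency on the JDK runtime, as in the pattern above
        "_jdk": attr.label(
            default = Label("@bazel_tools//tools/jdk:current_java_runtime"),
            providers = [java_common.JavaRuntimeInfo],
        ),
    },
)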
Because my reference to the jar tool is inside a genrule within a top-level macro, rather than a rule, I was unable to use the approach from Rodrigo's answer. I instead explicitly referenced the current_java_runtime toolchain and was then able to use the JAVABASE make variable as the base path for the jar tool.
native.genrule(
    name = genjar_rule,
    srcs = [<rules that create files being jar'd>],
    cmd = "some_script.sh $(JAVABASE)/bin/jar $@ $(SRCS)",
    tools = ["some_script.sh", "@bazel_tools//tools/jdk:current_java_runtime"],
    toolchains = ["@bazel_tools//tools/jdk:current_java_runtime"],
    outs = [<some outputs>],
)
