Is there an easy way to get hold of a path object so I can check whether a given label path exists? Say, for example, if path.exists("@external_project_name//:filethatmightexist.txt"):. I can see that the repository context has this, but then I would need a wrapping repository rule. Is it possible to do this in a macro or a Skylark native call instead?
Even with a repository_rule, I had a lot of trouble with this due to what you already pointed out:
if you create a Label with a path that doesn't exist, it will cause the build to fail
But if you're willing to do a repository rule, here's a possible solution...
In this example, my rule allows specification of a default configuration if a config file is not present. The config file can be listed in .gitignore and overridden by individual developers, while everything still works out of the box in the common case.
I think I now understand why the ctx.actions functions have sibling arguments; it is the same idea here. The trick is that config_file_location is a true label, while config_file is a plain string attribute. I chose BUILD arbitrarily, but since all workspaces have a top-level BUILD file that is public, that seemed legitimate enough.
WORKSPACE Definition
...
workspace(name = "E02_mysql_database")

json_datasource_configuration(
    name = "E02_datasources",
    config_file_location = "@E02_mysql_database//:BUILD",
    config_file = "database.json",
)
The definition for json_datasource_configuration looks like this:
json_datasource_configuration = repository_rule(
    attrs = {
        "config_file_location": attr.label(
            doc = """
            Path relative to the repository root for a datasource config file.
            """,
        ),
        "config_file": attr.string(
            doc = """
            Config file, maybe absent.
            """,
        ),
        "default_config": attr.string(
            # better way to do this?
            default = "None",
            doc = """
            If no config is at the path, then this will be the default config.
            Should look something like:
            {
                "datasource_name": {
                    "host": "<host>"
                    "port": <port>
                    "password": "<password>"
                    "username": "<username>"
                    "jdbc_connection_string": "<optional>"
                }
            }
            There can be more than one datasource configured... maybe, eventually.
            """,
        ),
    },
    local = True,
    implementation = _json_config_impl,
)
Then in the rule implementation I can test for the file's existence and, if it is not present, fall back to other logic.
def _json_config_impl(ctx):
    """
    Allows you to specify a file on disk to use for the data connection.
    If you pass a default_config, it is used when no file is found.
    """
    config_path = ctx.path(ctx.attr.config_file_location).dirname.get_child(ctx.attr.config_file)
    config = ""
    if config_path.exists:
        config = ctx.read(config_path)
    elif ctx.attr.default_config == "None":
        fail("Could not find config at %s; you must supply a default_config if this is intentional" % config_path)
    else:
        config = ctx.attr.default_config
    ...
Probably too late to help, but your question is the only thing I found referencing this goal. If someone knows a better way, I am still looking for other options; it is complicated to explain to other developers why the rule has to work the way it does.
Also note that if you change the config file, you have to run bazel clean to get the workspace to re-read the config. I haven't been able to figure out any way to fix that; glob() does not work in the WORKSPACE file.
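(A possible partial workaround, depending on your Bazel version: bazel sync can re-fetch an external repository without a full clean, for example:)

$ bazel sync --only=E02_datasources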
Imagine I have a java_binary target triggered by a custom rule that generates source code and places the generated sources under a directory; let's call it "root".
So after the code generation we will have something like this:
// bazel-bin/...../src/com/example/root
root:
 -> Foo.java
 -> Bar.java
 -> utils
     -> Baz.java
Now, I have another target, a java_library, that depends on the previously generated sources, so it depends on the custom rule.
My custom rule definition currently looks something like this:
def _code_generator(ctx):
    outputDir = ctx.actions.declare_directory("root")
    files = [
        ctx.actions.declare_file("root/Foo.java"),
        ctx.actions.declare_file("root/Bar.java"),
        ctx.actions.declare_file("root/utils/Baz.java"),
        # and many,
        # many other files
    ]
    outputs = []
    outputs.append(outputDir)
    outputs.extend(files)
    ctx.actions.run(
        executable = ...,  # executable pointing to the java_binary
        outputs = outputs,
        # ....
    )
This works. But as you can see, every anticipated file that is to be generated is hard-coded in the rule definition. This makes it very fragile, should the code generation produce a different set of files in the future (which it will).
(Without specifying each of the files, as shown above, Bazel will fail the build saying that the files have no generating action)
So I was wondering, is there a way to read the content of the root directory and automatically, somehow, declare each of the files as an output?
What I tried:
The documentation of declare_directory says:
The contents of the directory are not directly accessible from Starlark, but can be expanded in an action command with Args.add_all().
And add_all says:
[...] Each directory File item is replaced by all Files recursively contained in that directory.
This sounds like there could be a way to get access to the individual files in the directory, but I am not sure how.
I tried:
outputDir = ctx.actions.declare_directory("root")
# ...
args = ctx.actions.args()
args.add_all(outputDir)
with the intention of accessing the individual files later from args, but the build fails with: "Error in add_all: expected value of type sequence or depset for values, got File".
Any other ideas on how to implement the rule, so that I don't have to hard-code each and every file that will be generated?
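One way that may avoid hard-coding the file list entirely, as a minimal sketch: declare only the directory, so Bazel treats it as a tree artifact whose contents are discovered after the action runs. The _generator attribute and the //tools:generator label below are hypothetical stand-ins for your java_binary, and the sketch assumes the generator accepts the output directory path as an argument and writes all sources under it.

def _code_generator_impl(ctx):
    # Declare only the directory (a "tree artifact"); Bazel discovers the
    # individual files it contains after the action has executed.
    output_dir = ctx.actions.declare_directory("root")

    args = ctx.actions.args()
    # For the generating action itself, pass the directory's path so the
    # tool knows where to write.
    args.add(output_dir.path)

    ctx.actions.run(
        executable = ctx.executable._generator,  # hypothetical generator binary
        arguments = [args],
        outputs = [output_dir],
    )
    return [DefaultInfo(files = depset([output_dir]))]

code_generator = rule(
    implementation = _code_generator_impl,
    attrs = {
        # Hypothetical label; point this at the java_binary doing the generation.
        "_generator": attr.label(
            default = "//tools:generator",
            executable = True,
            cfg = "exec",
        ),
    },
)

As for the add_all error: it expects a sequence or depset, so it would have to be args.add_all([outputDir]); but the per-file expansion described in the documentation happens in actions that consume the directory, not in Starlark, so the file list never becomes visible to your rule code.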
Bazel has been working great for me recently, but I've stumbled upon a question for which I have yet to find a satisfactory answer:
How can one collect all files bearing a certain extension from the workspace?
Another way of phrasing the question: how could one obtain the functional equivalent of doing a glob() across a complete Bazel workspace?
Background
The goal in this particular case is to collect all markdown files to run some checks and generate a static site from them.
At first glance, glob() sounds like a good idea, but it will stop as soon as it runs into a BUILD file (globs do not cross package boundaries).
Current Approaches
The current approach is to run the collection/generation logic outside of the sandbox, but this is a bit dirty, and I'm wondering if there is a way that is both "proper" and easy (i.e., not requiring that each BUILD file explicitly expose its markdown files).
Is there any way to specify, in the workspace, some default rules that will be added to all BUILD files?
You could write an aspect for this to aggregate markdown files in a bottom-up manner and create actions on those files. There is an example of a file_collector aspect here. I modified the aspect's extensions for your use case. This aspect aggregates all .md and .markdown files across targets on the deps attribute edges.
FileCollector = provider(
    fields = {"files": "collected files"},
)

def _file_collector_aspect_impl(target, ctx):
    # This function is executed for each dependency the aspect visits.

    # Collect files from the srcs
    direct = [
        f
        for f in ctx.rule.files.srcs
        if ctx.attr.extension == f.extension
    ]

    # Combine direct files with the files from the dependencies.
    files = depset(
        direct = direct,
        transitive = [dep[FileCollector].files for dep in ctx.rule.attr.deps],
    )

    return [FileCollector(files = files)]

markdown_file_collector_aspect = aspect(
    implementation = _file_collector_aspect_impl,
    attr_aspects = ["deps"],
    attrs = {
        "extension": attr.string(values = ["md", "markdown"]),
    },
)
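To drive the aspect from a BUILD file, one option is to attach it to an attribute of a small collector rule; the md_collector rule below is a hypothetical sketch. Note that the rule needs its own extension attribute with the same name and type so the aspect's parameter can be filled in:

def _md_collector_impl(ctx):
    # Merge everything the aspect collected along each deps edge.
    files = depset(transitive = [dep[FileCollector].files for dep in ctx.attr.deps])
    return [DefaultInfo(files = files)]

md_collector = rule(
    implementation = _md_collector_impl,
    attrs = {
        "deps": attr.label_list(aspects = [markdown_file_collector_aspect]),
        # Must mirror the aspect's "extension" attribute.
        "extension": attr.string(default = "md", values = ["md", "markdown"]),
    },
)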
Another way is to do a query on file targets (input and output files known to the Bazel action graph), and process these files separately. Here's an example querying for .bzl files in the rules_jvm_external repo:
$ bazel query //...:* | grep -e "\.bzl$"
//migration:maven_jar_migrator_deps.bzl
//third_party/bazel_json/lib:json_parser.bzl
//settings:stamp_manifest.bzl
//private/rules:jvm_import.bzl
//private/rules:jetifier_maven_map.bzl
//private/rules:jetifier.bzl
//:specs.bzl
//:private/versions.bzl
//:private/proxy.bzl
//:private/dependency_tree_parser.bzl
//:private/coursier_utilities.bzl
//:coursier.bzl
//:defs.bzl
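If you prefer to keep the filtering inside the query itself, the kind operator can restrict the results to source files before grepping for the extension (assuming a reasonably recent Bazel):

$ bazel query 'kind("source file", //...:*)' | grep -e "\.bzl$"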
I want to get a path that leads to the NixOS /etc location (either /run/current-system/etc or /nix/store/hashhere-etc-1.0). I use this path to configure a pppd connect script, something like the following:
environment.etc."huawei" =
{ text = ''
/dev/ttyUSB0
38400
lock
crtscts
nodetach
noipdefault
# Below here what I've struggled
connect ${pkgs.etc}/${environment.etc."huawei-script".target}
'';
mode = "0777";
target = "ppp/peers/huawei"; };
I have tried writing ${pkgs.etc} or ${system.build.etc} or even ${environment.etc}, all resulting in errors.
The directory structure is actually relative, but I think it's safer to use an absolute path.
/nix/store/...etc.../ppp/peers
|- huawei
|- huawei.d
   |- huawei.sh
   |- huawei.chat
You can refer to the path of a file in /nix/store/...etc... like this:
{ config, pkgs, lib, ... }:
{
  environment.etc."test".text = "helo";
  environment.etc."test2".text = "${config.environment.etc."test".source.outPath}";
}
Now I have in /etc/test2:
$ cat /etc/test2
/nix/store/1igc2rf011jmrr3cprsgbdp3hhm5d4l0-etc-test
If I understand correctly, your problem is that you simply need to pass the string value of the target attribute to the connect directive in huawei.text. As per the description of the target attribute, the value is a path relative to /etc, so you should be able to either:
make the value of the connect directive the string literal connect /etc/ppp/peers/huawei, or
make the etc.huawei attribute set a recursive one so that the attributes can refer to each other, then do
environment.etc.huawei = rec {
  target = "ppp/peers/huawei";
  text = ''
    ...
    # Below here is what I've struggled with
    connect /etc/${target}
  '';
};
Sorry, I overlooked the fact that NixOS actually maps the files in /nix/store/...etc.../ into /etc itself.
So, to refer to a file, it is better to use /etc directly:
connect /etc/${config.environment.etc."huawei-script".target}
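Putting both entries together, a minimal sketch (the script contents and the huawei.d target path below are hypothetical; adjust them to your layout):

{ config, ... }:
{
  environment.etc."huawei-script" = {
    text = "...chat script here...";
    target = "ppp/peers/huawei.d/huawei.chat";
  };
  environment.etc."huawei" = {
    target = "ppp/peers/huawei";
    text = ''
      /dev/ttyUSB0
      38400
      connect /etc/${config.environment.etc."huawei-script".target}
    '';
  };
}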
I am trying to set up Weceem using the source from GitHub. It requires a physical path definition for the uploads directory, and for a directory that appears to be used for writing searchable indexes. The default setting for uploads is:
weceem.upload.dir = 'file:/var/www/weceem.org/uploads/'
I would like to define those using relative paths like WEB-INF/resources/uploads. I tried a methodology I have used previously for accessing directories with relative paths, like this:
File uploadDirectory = ApplicationHolder.application.parentContext.getResource("WEB-INF/resources/uploads").file
def absoluteUploadDirectory = uploadDirectory.absolutePath
weceem.upload.dir = 'file:'+absoluteUploadDirectory
However, 'parentContext' under ApplicationHolder.application is NULL. Can anyone offer a solution to this that would allow me to use relative paths?
Look at your Config.groovy; you should have this (maybe it is commented out):
// locations to search for config files that get merged into the main config
// config files can either be Java properties files or ConfigSlurper scripts
// "classpath:${appName}-config.properties", "classpath:${appName}-config.groovy",
grails.config.locations = [
    "file:${userHome}/.grails/${appName}-config.properties",
    "file:${userHome}/.grails/${appName}-config.groovy"
]
Create a config file on the deployment server:
"${userHome}/.grails/${appName}-config.properties"
and define your property (even a non-relative path) in that config file.
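For example, the server-specific properties file could define the upload path directly (the /srv/weceem location is only a hypothetical example):

# ${userHome}/.grails/${appName}-config.properties
weceem.upload.dir=file:/srv/weceem/uploads/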
To add to Aram Arabyan's response, which is correct, but lacks an explanation:
Grails apps don't have a "local" directory the way a PHP app would. For production they should be deployed in a servlet container, and the location of that content should not be considered writable, as it can get wiped out on the next deployment.
In short: think of your deployed application as a compiled binary.
Instead, choose a specific location somewhere on your server for the uploads to live, preferably outside the web server's path, so they can't be accessed directly. That's why Weceem defaults to a custom folder under /var/www/weceem.org/.
If you configure a path using the externalized configuration technique, you can then have a path specific to the server, and include a different path on your development machine.
In both cases, however, you should use absolute paths, or at least paths relative to known directories.
i.e.
String base = System.properties['base.dir']
println "config: ${base}/web-app/config/HookConfig.groovy"
String str = new File("${base}/web-app/config/HookConfig.groovy").text
return new ConfigSlurper().parse(str)
or
def grailsApplication

private getConfig() {
    String str = grailsApplication.parentContext.getResource("config/HookConfig.groovy").file.text
    return new ConfigSlurper().parse(str)
}
While trying to write a Grails plugin, I stumbled upon two problems:
How do I modify one of the configuration files like Config.groovy or DataSource.groovy from within the _Install.groovy script? It is easy to append something to those files, but how do I modify them in a clean way? text.replaceAll()? Or should I create a new config file?
How do I get the name of the current application into which the plugin will be installed? I tried to use app.name and appName, but neither works.
Is there a good tutorial on creating plugins somewhere that I haven't found yet?
Here is an example of editing configuration files from scripts/_Install.groovy.
My plugin copies three files to the target directory.
.hgignore is used for version control,
DataSource.groovy replaces the default version, and
SecurityConfig.groovy contains extra settings.
I prefer to edit the application's files as little as possible, especially because I expect to change the security setup a few years down the road. I also need to use properties from a jcc-server-config.properties file which is customized for each application server in our system.
Copying the files is easy.
println('* copying .hgignore')
ant.copy(file: "${pluginBasedir}/src/samples/.hgignore",
         todir: "${basedir}")

println('* copying SecurityConfig.groovy')
ant.copy(file: "${pluginBasedir}/src/samples/SecurityConfig.groovy",
         todir: "${basedir}/grails-app/conf")

println('* copying DataSource.groovy')
ant.copy(file: "${pluginBasedir}/src/samples/DataSource.groovy",
         todir: "${basedir}/grails-app/conf")
The hard part is getting Grails to pick up the new configuration file. To do this, I have to edit the application's grails-app/conf/Config.groovy. I will add two configuration files to be found on the classpath.
println('* Adding configuration files to grails.config.locations');

// Add configuration files to grails.config.locations.
def newConfigFiles = ["classpath:jcc-server-config.properties",
                      "classpath:SecurityConfig.groovy"]

// Get the application's Config.groovy file
def cfg = new File("${basedir}/grails-app/conf/Config.groovy");
def cfgText = cfg.text
def appendedText = new StringWriter()
appendedText.println ""
appendedText.println("// Added by edu-sunyjcc-addons plugin");

// Slurp the configuration so we can look at grails.config.locations.
def config = new ConfigSlurper().parse(cfg.toURL());

// If it isn't defined, create it as a list.
if (config.grails.config.locations.getClass() == groovy.util.ConfigObject) {
    appendedText.println('grails.config.locations = []');
} else {
    // Don't add configuration files that are already on the list.
    newConfigFiles = newConfigFiles.grep {
        !config.grails.config.locations.contains(it)
    };
}

// Add each surviving location to the list.
newConfigFiles.each {
    // The name will have quotes around it...
    appendedText.println "grails.config.locations << \"$it\"";
}

// Write the new configuration code to the end of Config.groovy.
cfg.append(appendedText.toString());
The only problem is adding SecurityConfig.groovy to the classpath. I found that you can do that by creating the following event in the plugin's /scripts/Events.groovy.
eventCompileEnd = {
    ant.copy(todir: classesDirPath) {
        fileset(file: "${basedir}/grails-app/conf/SecurityConfig.groovy")
    }
}
Ed.
You might try changing the configuration within the MyNiftyPlugin.groovy file (assuming that your plugin is named my-nifty). I've found that I can change the configuration values within the doWithApplicationContext closure. Here's an example.
def doWithApplicationContext = { applicationContext ->
    def config = application.config;
    config.edu.mycollege.server.name = 'http://localhost:8080'
    config.edu.mycollege.server.instance = 'pprd'
}
The values you enter here do show up in the grailsApplication.config variable at run time. If it works for you, it will be a neater solution, because it doesn't require changes to the client project.
I must qualify that with the fact that I wasn't able to get Spring Security to work by this technique. I believe that my plugin (which depends on Spring Security) was loaded after the security was initialized. I decided to add an extra file to the grails-app/conf directory.
HTH.
For modifying configuration files, you should use ConfigSlurper:
def configParser = new ConfigSlurper(grailsSettings.grailsEnv)
configParser.binding = [userHome: userHome]
def config = configParser.parse(new URL("file:./grails-app/conf/Config.groovy"))
If you need to get application name from script, try:
metadata.'app.name'