Overlapping roots across multiple bundles - open-policy-agent

I was trying to understand the significance of roots.
As per the docs,
The roots must not overlap (e.g., a/b/c and a/b overlap and will result in an error). Note: this is not enforced across multiple bundles, only within the same bundle manifest.
So, I loaded two bundles with the same .manifest file, expecting from the note above that OPA would not raise an initialization error. But it failed with:
error: initialization error: detected overlapping roots in bundle manifest with: [/var/folders/hl/7twvsdm52jx6qn3tgkh_4rzm0000gp/T/valid_roots.tar.gz /var/folders/hl/7twvsdm52jx6qn3tgkh_4rzm0000gp/T/duplicate_valid_roots.tar.gz]
Am I doing something wrong, have I understood the statement incorrectly, or does the documentation need an update?
Structure:
valid_roots.tar.gz & duplicate_valid_roots.tar.gz
./rule
./policy
./.manifest
./policy/policy_1.rego
./rule/rule_1.rego
.manifest
{"roots": ["rule/lob", "policy/consumers"]}
OPA run command
opa run -s -a 0.0.0.0:8191 -b /var/folders/hl/7twvsdm52jx6qn3tgkh_4rzm0000gp/T/valid_roots.tar.gz -b /var/folders/hl/7twvsdm52jx6qn3tgkh_4rzm0000gp/T/duplicate_valid_roots.tar.gz

valid_roots.tar.gz and duplicate_valid_roots.tar.gz are treated as two separate bundles, hence the detected overlapping roots error. If valid_roots.tar.gz has a manifest file with the content below, it means valid_roots.tar.gz owns those roots and no other bundle can write to those paths.
{"roots": ["rule/lob", "policy/consumers"]}
The statement you're referring to in the docs describes a check performed while reading a single bundle. If a bundle specifies its roots as a/b/c and a/b, OPA cannot determine whether the bundle owns everything under a/b (like a/b/foo and a/b/bar) or only a/b/c.
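For illustration, here is a hedged sketch of a setup that does load cleanly: give each bundle its own, disjoint roots (the second bundle's name and roots below are made up for the example).

valid_roots.tar.gz .manifest:
{"roots": ["rule/lob", "policy/consumers"]}

other_roots.tar.gz .manifest:
{"roots": ["rule/other_lob", "policy/providers"]}

opa run -s -a 0.0.0.0:8191 -b valid_roots.tar.gz -b other_roots.tar.gz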

Related

With Jenkins Job Builder (JJB) what's the preferred way to inject values into a static set of job configuration files?

I have a working set of JJB YAML files successfully creating jobs and folders.
I now want to make certain values I use inside those YAML files configurable, i.e. when running jenkins-jobs test|update -r jobfolder I want to set values for folder prefixes (so as not to damage existing production jobs), branch names, nodes, etc.
I don't want to use JJB's defaults approach for this, since I'm already using it for configuration elsewhere and it results in conflicts when used in projects and jobs together.
The ideal way of doing this that I can think of would be to call JJB like this:
jenkins-jobs test|update --define "folder-prefix=experimental/,node=test-node" -r jobfolder
That would give me variables I can use in the actual job definition files.
Since this option seemingly doesn't exist, I'm currently trying to provide files that contain those variables and somehow 'inject' them into my project.
Those are the approaches I can think of:
1 - having different configuration folders with YAML files inside, which I would use like this:
jenkins-jobs test -r experimental-config:jobfolder
jenkins-jobs test -r production-config:jobfolder
with experimental-config and production-config being folders with additional files containing my configuration I can switch between.
But unfortunately I don't know how I would reference values I've defined in different yaml files. Is that even possible?
2 - having include files as described in the documentation
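From what I can tell, the documented examples place !include as the value of a key inside a list item rather than at the top level of a document, roughly like this (job name made up):

- job:
    name: example-include-job
    builders:
      !include: include001.yaml.inc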
While that sounds promising, I didn't manage to actually make this work. I tried to turn the following 'configuration header' I'm already using:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: "experimental/"
    node: "test-node"
[Rest of the file making use of dynamic-config]
into something making use of the !include statement like this:
!include: dynamic-config.yaml.inc
[Rest of the file making use of stuff defined in dynamic-config.yaml.inc]
giving me a seemingly unrelated parser error:
yaml.parser.ParserError: expected '<document start>', but found '<block sequence start>'
in "/home/me/my/project.yml", line 11, column 1
so I tried this snippet, which looks more like the example by putting it inside an existing element:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    !include: dynamic-config.yaml.inc
giving me a different error but still an error:
yaml.scanner.ScannerError: while scanning a simple key
in "/home/me/my/project.yml", line 7, column 5
could not find expected ':'
in "/home/me/my/project.yml", line 8, column 5
In both cases it doesn't make a difference whether or not the specified include file exists, which makes me doubt you can just 'include' a file like this at all.
What am I doing wrong here? Is there a more obvious / straight forward way to customize a jenkins-jobs run?
Update:
I somehow managed to use the !include tag for individual items now, like this:
- dynamic-config: &dynamic-config
    name: "dynamic-config"
    folder-prefix: !include: job-configs/active/folder-prefix.inc
    branch-name: !include: job-configs/active/branch-name.inc
    node-name: !include: job-configs/active/node-name.inc
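(For context, each of those .inc files contains nothing but the bare value, which gets parsed as a YAML scalar; the branch name below is just an example.)

job-configs/active/folder-prefix.inc:
experimental/

job-configs/active/branch-name.inc:
develop

job-configs/active/node-name.inc:
test-node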
But I wasn't able to put the whole dynamic-config element (with the anchor) into an include file yet.
2nd update:
Looks like I'm trying something similar to what's being attempted in this question.
Can someone confirm, that this is currently still a problem? What's the JJB way of handling this?

Error with BiGSCAPE even after installing

I am not a coding person by any means, but I try my best to work through issues. I installed BiG-SCAPE to look at secondary metabolite clusters. I am running it in conda and it seems to be installed fine, as it reports the version number.
However, I keep getting the error below. I have tested it with the example data as well, and it returns the same results/error.
The version I have installed is
BiG-SCAPE 1.1.4 (2022-04-14)
(bigscape) Shaheens-MacBook-Pro:BIG-SCAPE shaheenbibi$ python bigscape.py -i /Downloads/gbks -o ResultsAndres
/Users/shaheenbibi/miniconda3/envs/bigscape/lib/python3.6/site-packages/Bio/SubsMat/__init__.py:131: BiopythonDeprecationWarning: Bio.SubsMat has been deprecated, and we intend to remove it in a future release of Biopython. As an alternative, please consider using Bio.Align.substitution_matrices as a replacement, and contact the Biopython developers if you still need the Bio.SubsMat module.
BiopythonDeprecationWarning,
Processing input files - -
Output folder already exists
Logs folder already exists
Cache folder already exists
BGC fastas folder already exists
Domtable folder already exists
Domains folder already exists
pfs folder already exists
pfd folder already exists
Including files with one or more of the following strings in their filename: 'cluster', 'region'
Skipping files with one or more of the following strings in their filename: 'final'
Importing GenBank files
Starting with 0 files
Files that had its sequence extracted: 0
Creating output directories
SVG folder already exists
Networks folder already exists
Trying threading on 4 cores
Predicting domains using hmmscan
All fasta files had already been processed
Finished generating domtable files.
Parsing hmmscan domtable files
All domtable files had already been processed
Finished generating pfs and pfd files.
Processing domains sequence files
Adding sequences to corresponding domains file
Reading the ordered list of domains from the pfs files
Creating arrower-like figures for each BGC
Parsing hmm file for domain information
Done
All SVG from the input files seem to be in the SVG folder
Finished creating figures
Calculating distance matrix - -
Performing multiple alignment of domain sequences
No domain fasta files found to align
Trying to read domain alignments (*.algn files)
No aligned sequences found in the domain folder (run without the --skip_ma parameter or point to the correct output folder)
Starting with 0 files seems to indicate that something is wrong with your input directory. You may need to put some .gbk files in /Downloads/gbks.
Also note that BiG-SCAPE puts a bunch of constraints on the names of the .gbk files: https://git.wageningenur.nl/medema-group/BiG-SCAPE/-/wikis/input. Perhaps your input .gbk files need to be renamed.
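A quick sanity check along these lines may help (a sketch, assuming the GenBank files are meant to live under your home directory; note that -i /Downloads/gbks with a leading slash points at the filesystem root rather than /Users/shaheenbibi/Downloads/gbks):

# list the candidate inputs; per the log above, BiG-SCAPE only includes files
# whose names contain "cluster" or "region" and skips names containing "final"
ls ~/Downloads/gbks/*.gbk | grep -E 'cluster|region' | grep -v final

# then point bigscape.py at the expanded path
python bigscape.py -i ~/Downloads/gbks -o ResultsAndres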

Organize GraphQL files in directories in Rails

Upon running rails g graphql:install, a set of useful base type files is created in app/graphql/types for us to extend when defining our own types. When running any of the generators, the new files are created in that same folder as well. I set about creating subdirectories, thinking I could add some sense to this giant catch-all directory, but couldn't get things to load properly.
Since there is a base file for each type (base_enum.rb, base_object.rb, etc.), I created a folder for extensions of each of these types (enum_types, object_types, etc.). This broke auto loading though and I had to explicitly import these files to be able to use these custom types. So, at the top of query_type.rb, mutation_type.rb and app/graphql/mutations/base_mutation.rb I added the following:
# Require every custom type file from the per-kind subdirectories.
['enum_types', 'input_object_types', 'interface_types', 'object_types', 'scalar_types', 'union_types'].each do |dir|
  Dir[File.dirname(__FILE__) + "/#{dir}/*.rb"].each { |file| require file }
end
This allowed things to run, but any change would break autoloading, so I would have to restart the server on each change. I started reading through this article about autoloading on the Rails site, but it was quite honestly a little over my head. It did lead me to believe I have to either find the correct names for my folders or properly namespace the objects defined in my type definition files to be able to do this.
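For example, if I understand the convention correctly, a type living in a subfolder would have to be namespaced to match its directory, something like this (UserType and its fields are just a made-up illustration):

# app/graphql/types/object_types/user_type.rb
# Zeitwerk expects the constant path to mirror the file path, so a file under
# types/object_types/ must define Types::ObjectTypes::UserType.
module Types
  module ObjectTypes
    class UserType < Types::BaseObject
      field :id, ID, null: false
      field :name, String, null: true
    end
  end
end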
Is there a sane way to organize these files in sub-directories which doesn't break auto loading? Do most projects just have a flat folder structure for these files?
Thank you!

Is there any way to include a file with a bang (!) in the path in a genrule?

I've got an iOS framework that has a dependency on the (presumably Google maintained) pod called '!ProtoCompiler'. In order to build my framework I'm going to need it in the sandbox. So, I have a genrule and can try to include it with
srcs = glob(['Pods/!ProtoCompiler/**/*']), but I get the following error:
ERROR: BUILD:2:1: //Foo:framework-debug: invalid label 'Pods/!ProtoCompiler/google/protobuf/any.proto' in element 1118 of attribute 'srcs' in 'genrule' rule: invalid target name 'Pods/!ProtoCompiler/google/protobuf/any.proto': target names may not contain '!'.
As is, this seems like a total blocker for me using Bazel to do this build, and as far as I can tell I don't have the ability to rename the pod directory. The ! prohibition is supposed to be for target labels, so is there any way I can specify that this is just a file, not a label? Or are those two concepts completely melded in Bazel?
(Also, if I get this to work, I'm worried about the fact that this produces a .framework directory, and it seems like rules are expected to produce files only. Maybe I'll zip it up and then unzip it as part of the build of the test harness.)
As far as I can tell, the ! prohibition is supposed to be for target labels, is there any way I can specify that this is just a file, not a label? Or are those two concepts completely melded in Bazel?
They are mostly melded.
Bazel associates a label with all source files in a package that appear in BUILD files, so you can write srcs=["foo.cc", "//bar:baz.cc"] in a build rule and it'll work regardless of whether foo.cc and baz.cc are a source file, a generated file, or the name of a build rule that produces files suitable for this particular srcs attribute.
That said, you can of course have any file in the package, but if its name won't let Bazel derive a label from it, then you can't reference it in the BUILD file. Since glob is evaluated during loading and expands to a list of labels, using glob won't work around this limitation.
(...) it seems like rules are expected to produce files only. Maybe I'll zip it up and then unzip it as part of the build of the test harness.
Yes, that's the usual approach.
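A minimal sketch of that approach, with made-up target, script, and file names (none of these come from the original BUILD file):

genrule(
    name = "framework_zip",
    srcs = glob(["FrameworkSources/**/*"]),  # only paths without '!' can be given labels
    outs = ["Foo.framework.zip"],
    tools = ["build_framework.sh"],  # hypothetical script that emits a Foo.framework/ directory
    cmd = "$(location build_framework.sh) $(SRCS) && zip -qXr $@ Foo.framework",
)

A downstream target (e.g. the test harness) can then depend on :framework_zip and unzip Foo.framework.zip to recover the .framework directory.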

Search for files that contain pattern

I have this search - I would like to print out the paths of files that contain the matching text:
grep -r "jasmine" .
and it yields results that look like this:
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:- [Jasmine Google Group](http://groups.google.com/group/jasmine-js)
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:- [Jasmine-dev Google Group](http://groups.google.com/group/jasmine-js-dev)
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:git clone git#github.com:yourUserName/jasmine.git # Clone your fork
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:cd jasmine # Change directory
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:git remote add upstream https://github.com/jasmine/jasmine.git # Assign original repository to a remote named 'upstream'
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:Note that Jasmine tests itself. The files in `lib` are loaded first, defining the reference `jasmine`. Then the files in `src` are loaded, defining the reference `j$`. So there are two copies of the code loaded under test.
./app-root/runtime/repo/node_modules/jasmine-core/.github/CONTRIBUTING.md:The tests should always use `j$` to refer to the objects and functions that are being tested. But the tests can use functions on `jasmine` as needed. _Be careful how you structure any new test code_. Copy the patterns you see in the existing code - this ensures that the code you're testing is not leaking into the `jasmine` reference and vice-versa.
But I just want the file names; I don't want to print out the matching contents. How can I do that?
The problem is that the matching text wraps around in the terminal and makes the results basically unreadable.
Did you use the -l flag from grep?
-l, --files-with-matches
Suppress normal output; instead print the name of each input file from which output would normally have been printed. The scanning will stop on the first match.
A simple search on my home directory,
grep -rl 'bash' .
./.bashrc
./.bash_history
./.bash_logout
./.bash_profile
./.profile
./.viminfo
As a matter of fact, -l is a POSIX-defined option for grep, so it should be available in almost all distros.
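Applied to the original search, that would be:

grep -rl "jasmine" .

which prints only the paths of the files under the current directory that contain "jasmine".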

Resources