I have seen on the OPA website that I can use the following to evaluate a policy on the command line:
./opa eval -i input.json -d example.rego
In my case I have multiple input files, so I have modified it to:
./opa eval -i /tmp/tfplan.json -d /tmp/example.rego -d /tmp/input1.json -d /tmp/input2.json
Rather than specifying the files individually as input1, input2 and so on, can I modify the command to read all the JSON files present in the /tmp folder?
Yes! The CLI will accept directories for the -d/--data and -b/--bundle parameters and recursively load files from them.
For example, with:
/tmp/foo/input1.json
/tmp/foo/input2.json
opa eval -d /tmp/foo/input1.json -d /tmp/foo/input2.json ..
is the same as
opa eval -d /tmp/foo ..
Keep in mind that -d/--data will attempt to load ANY files found in the directory, which can sometimes lead to conflicts (e.g., duplicate keys in the data document) or to loading incorrect files. So be careful about pointing at /tmp, as it's likely to include additional files you didn't necessarily want (note the example above used a sub-directory). Typically we recommend using -b/--bundle and providing files as data.json with a directory structure following the bundle spec https://www.openpolicyagent.org/docs/latest/management/#bundle-file-format which can help avoid most of those common problems.
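For example, a minimal bundle sketch (the layout and the query data.example.allow are placeholders for your own policy and data):
/tmp/bundle/example.rego
/tmp/bundle/data.json
opa eval -i /tmp/tfplan.json -b /tmp/bundle 'data.example.allow'
Here example.rego holds the policy and data.json holds the static data that gets merged into the data document.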
I'm currently looking into "migrating" some third-party dependency projects (typically old-style configure/make based) to Bazel using its foreign_cc rules.
One goal is to have output identical to before the migration, and alongside attributes like permissions and RPATH, I'm still struggling with symlinks being dereferenced, seemingly unconditionally.
So instead of libfoo.so -> libfoo.so.3, libfoo.so.3 -> libfoo.so.3.14, I now always get three regular files.
Inspecting the generated bazel-bin/external/foo/foo_foreign_cc/build_script.sh the last commands contain two invocations of cp -L with no variables modifying the behavior:
[configure command]
[make commands]
set +x
cp -L -r --no-target-directory "$BUILD_TMPDIR/$INSTALL_PREFIX" "$INSTALLDIR" && find "$INSTALLDIR" -type f -exec touch -r "$BUILD_TMPDIR/$INSTALL_PREFIX" "{}" \;
[content of #postfix_script]
replace_in_files $INSTALLDIR $BUILD_TMPDIR \${EXT_BUILD_DEPS}
replace_in_files $INSTALLDIR $EXT_BUILD_DEPS \${EXT_BUILD_DEPS}
replace_in_files $INSTALLDIR $EXT_BUILD_ROOT \${EXT_BUILD_ROOT}
mkdir -p $EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo
cp -L -r --no-target-directory "$INSTALLDIR" "$EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo" && find "$EXT_BUILD_ROOT/bazel-out/k8-fastbuild/bin/external/foo/copy_foo/foo" -type f -exec touch -r "$INSTALLDIR" "{}" \;
cd $EXT_BUILD_ROOT
So it looks quite obvious to me that for some reason configure_make doesn't even consider keeping symlinks, turning this into something I have to do outside the Bazel rule (while also possibly polluting the remote cache).
Is there a reason for this? I.e., why shouldn't I create a fork of rules_foreign_cc just to remove this -L flag, which someone seems to have added intentionally?
I'm one of the rules_foreign_cc maintainers.
The reason rules_foreign_cc dereferences the symlinks there is that, in general, the outputs being copied into named outputs may be dangling symlinks, as they may point at paths that are not themselves build outputs, and in Bazel 4, which is the minimum version we currently support, dangling symlinks are not allowed as build artifacts (this behaviour may have changed in later Bazel versions, but I'm not 100% sure).
What you likely want to actually consume is the output_group gendir. This can be accessed like so:
filegroup(
    name = "my_install_tree",
    srcs = [":cmake_target"],
    output_group = "gendir",
)
The gendir output group is the entire install directory as created by the build artifacts.
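If you want to sanity-check what actually lands in that tree (e.g., whether your symlinks survive), a genrule can list it. This is just a sketch building on the filegroup above; install_tree_listing is a made-up name:
genrule(
    name = "install_tree_listing",
    srcs = [":my_install_tree"],  # the gendir filegroup from above
    outs = ["install_tree_listing.txt"],
    # list every entry in the install tree, symlinks included
    cmd = "find $(SRCS) | sort > $@",
)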
Note that you wouldn't actually need to fork the rules to achieve what you were proposing either. The shell script is generated by a toolchain (whose type currently lives in a private package, so the right to change it is reserved), and thus you could provide your own implementation of the toolchain to override the behaviour.
I have a container that I want to export as a .tar file. I have been using podman run with a tar --exclude=/dir1 --exclude=/dir2 … command that writes to a file on a bind-mounted host directory. But recently this has been giving me some tar: .: file changed as we read it errors, which podman/docker export would avoid. Besides, I suppose the export is more efficient. So I'm trying to migrate to using the export, but the major obstacle is that I can't seem to find a way to exclude paths from the tar stream.
If possible, I'd like to avoid modifying a tar archive already saved on disk, and instead modify the stream before it gets saved to a file.
I've been banging my head for multiple hours, trying useless advice from ChatGPT, looking at cpio, and attempting to pipe podman export into a tar --exclude … command. With the last I did have some small success at one point, but I couldn't make tar save the result to a particular file name.
Any suggestions?
(Note: I make no distinction between docker and podman here, as their export commands behave identically, and it's useful for searchability.)
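One stream-level sketch I've been experimenting with, assuming GNU tar, whose --delete mode can act as a filter reading an archive from stdin and writing the filtered archive to stdout (mycontainer and the directory names are placeholders; note that paths inside an export stream have no leading slash):
podman export mycontainer | tar --delete -f - dir1 dir2 > container.tar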
Dipping my toes into Bash coding for the first time (I'm not the most experienced person with Linux either), and I'm trying to read the version from the version.php inside a container at:
/config/www/nextcloud/version.php
To do so, I run:
docker exec -it 1c8c05daba19 grep -eo "(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?" /config/www/nextcloud/version.php
This uses a semantic versioning RegEx pattern (I know, a bit overkill, but it works for now) to read and extract the version from the line:
$OC_VersionString = '20.0.1';
However, when I run the command, it tells me No such file or directory (I've confirmed the file does exist at that path inside the container) and then proceeds to spit out the entire contents of the file it just said doesn't exist:
grep: (0|[1-9]\d*).(0|[1-9]\d*).(0|[1-9]\d*)(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-])(?:.(?:0|[1-9]\d|\d*[a-zA-Z-][0-9a-zA-Z-]))))?(?:+([0-9a-zA-Z-]+(?:.[0-9a-zA-Z-]+)*))?: No such file or directory
/config/www/nextcloud/version.php:$OC_Version = array(20,0,1,1);
/config/www/nextcloud/version.php:$OC_VersionString = '20.0.1';
/config/www/nextcloud/version.php:$OC_Edition = '';
/config/www/nextcloud/version.php:$OC_VersionCanBeUpgradedFrom = array (
/config/www/nextcloud/version.php: 'nextcloud' =>
/config/www/nextcloud/version.php: 'owncloud' =>
/config/www/nextcloud/version.php:$vendor = 'nextcloud';
Anyone able to spot the problem?
Update 1:
For the sake of clarity, I'm trying to run this from a bash script. I just want to fetch the version number from that file, to use it in other areas of the script.
Update 2:
Responding to the comments: I tried logging in to the container first and then running the grep, and I still get the same result. Then I cat that file and it shows its contents no problem.
Many containers don't have the GNU versions of Unix tools and their various extensions. It's popular to base containers on Alpine Linux, which in turn uses a very lightweight single-binary tool called BusyBox to provide the base tools. Those tend to have the set of options required in the POSIX specs, and no more.
POSIX grep(1) in particular doesn't have an -o option. So the command you're running is
grep \
-eo \ # specify "o" as the regexp to match
"(regexps are write-only)" \ # a filename
/config/www/nextcloud/version.php # a second filename
Notice that the grep output in the interactive shell only contains lines with the letter "o", but not for example the line just containing array.
POSIX grep doesn't have an equivalent for GNU grep's -o option
Print only the matched (non-empty) parts of matching lines, with each such part on a separate output line. Output lines use the same delimiters as input....
but it's easy to do that with sed(1) instead. Ask it to match some stuff, the regexp in question, and some stuff, and replace it with the matched group.
sed -e 's/.*\(any regexp here\).*/\1/' input-file
(POSIX sed only accepts basic regular expressions, so you'll have to escape more of the parentheses.)
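Applied to your case, that might look like this. A sketch, assuming the line in version.php looks exactly like the $OC_VersionString assignment you quoted (note that -it is dropped, since a script doesn't need a TTY and one can mangle the output):
docker exec 1c8c05daba19 sed -n "s/^[$]OC_VersionString = '\([0-9.]*\)'.*/\1/p" /config/www/nextcloud/version.php
The -n flag plus the p modifier print only the line where the substitution succeeded, so the output is just 20.0.1.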
Well, for any potential future readers: I had no luck getting grep to do it (I'm sure it was my fault somehow and not grep's), but thanks to the help in this post I was able to use awk instead of grep, like so:
docker exec -it 1c8c05daba19 awk '/^\$OC_VersionString/ && match($0,/\047[0-9]+\.[0-9]+\.[0-9]+\047/){print substr($0,RSTART+1,RLENGTH-2)}' /config/www/nextcloud/version.php
That ended up doing exactly what I needed:
It logs into a docker container.
Scans and returns just the version number from the line I am looking for at: /config/www/nextcloud/version.php inside the container.
Exits stage left from the container with just the info I needed.
I can get right back to eating my Hot Cheetos.
The man page for grep says:
-r, --recursive
Read all files under each directory, recursively
OK, then how is this possible:
# grep -r BUILD_AP24_TRUE apache
# grep BUILD_AP24_TRUE apache/Makefile.in
#BUILD_AP24_TRUE#mod_shib_24_la_DEPENDENCIES = $(am__DEPENDENCIES_1) \
(...)
There are two likely causes of this type of problem:
grep is aliased to something that excludes the files you are interested in.
The file of interest is in a symlinked directory or is itself a symlink.
In your case, it appears to be the second cause. One solution is to use grep -R in place of grep -r. From man grep:
-r, --recursive
Read all files under each directory, recursively, following symbolic
links only if they are on the command line. Note that if no file
operand is given, grep searches the working directory.
This is equivalent to the -d recurse option.
-R, --dereference-recursive
Read all files under each directory, recursively.
Follow all symbolic links, unlike -r.
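You can reproduce the difference with a quick sketch (assuming GNU grep; the names are made up):
mkdir -p real apache
echo BUILD_AP24_TRUE > real/Makefile.in
ln -s ../real/Makefile.in apache/Makefile.in
grep -r BUILD_AP24_TRUE apache   # no output: the symlink inside apache is skipped
grep -R BUILD_AP24_TRUE apache   # prints apache/Makefile.in:BUILD_AP24_TRUE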
I am looking for a way of extracting all the localizable strings from .xib files and saving them all in a single file.
This probably involves ibtool, but I was not able to determine a way of merging them all into one translation dictionary (could be .strings, .plist or something else).
Open terminal and cd to the root directory of the project (or directory where you store all XIB files) and type in this command:
find . -name \*.xib | xargs -t -I '{}' ibtool --generate-strings-file '{}'.txt '{}'
The magic is the find and xargs commands working together. The -I option defines a placeholder. -t is just for verbose output (you see which commands have been generated and executed).
It generates .txt files with the same names as the .xib files, in the same directories.
This command could be improved to concatenate the output into one file, but it is still a good starting point.
Joining them together:
You can concatenate those freshly created files into one using a similar terminal command:
find . -name \*.xib.txt | xargs -t -I '{}' cat '{}' > ./xib-strings-concatenated.txt
This command will put all the strings into one file, xib-strings-concatenated.txt, in the root directory.
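One caveat worth checking on your setup: ibtool usually writes the generated .strings output as UTF-16, and concatenating UTF-16 files with cat produces a file (with embedded byte-order marks) that many tools won't read cleanly. If that bites you, converting each file to UTF-8 while concatenating is a possible workaround:
find . -name \*.xib.txt -exec iconv -f UTF-16 -t UTF-8 '{}' \; > ./xib-strings-concatenated.txt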
You can delete generated partial files (if you want) using find and xargs again:
find . -name \*.xib.txt | xargs -t -I '{}' rm -f '{}'
This is a lot easier now.
In Xcode, select your project (not a target).
Then use Editor > Export For Localization… from the menu.
Xcode will output an .xliff file with all the localizable strings from your entire project.