Conftest verify fixture data - open-policy-agent

I have been writing a few policies using Conftest and wish to verify my configuration with the conftest verify command. So far I have been able to successfully verify my policies like so:
test_deployment_with_security_context {
    no_violations with input as {
        ... json content ...
    }
}
However, the omitted JSON content above is rather large and clutters my policy tests. I want to put the JSON into a file and import it into the test. The conftest verify command takes a --data flag that allows files to be loaded as data and made available to the policies. For example, per the documentation, conftest verify --data policy will recursively load any YAML and JSON files it finds, so a file located at policy/examples/input.json is made available within the policies under import data.examples. My question is: how can I use this data in the tests?

There's an open issue around this - the docs currently reflect OPA's behavior of recursively reading data files from dirs and using directory names for namespacing. This behavior is currently not mirrored in conftest. I'd suggest tracking the ticket for progress on that. As a workaround until then you could always "namespace" the data yourself, so that your input.json looks something like this:
{
  "examples": {
    "actual_data": {
      ...
    }
  }
}
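With that wrapper in place, the fixture is reachable under data.examples, so a test can refer to it directly. A minimal sketch (package name assumed, reusing the no_violations rule from the question):

package main

test_deployment_with_security_context {
    # the fixture loaded via --data replaces the inline JSON from the question
    no_violations with input as data.examples.actual_data
}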

Vite+SvelteKit - Environment variables hyper-protection

I am building a POC and as such am using a really simple use case.
In it, I use a src/lib/db.ts which, for our purposes, contains this code
console.log(import.meta.env.MONGO_URI, import.meta.env.SSR);
giving
undefined true
Of course, my .env file contains a definition for MONGO_URI; when I tried VITE_MONGO_URI instead, I could see the value.
I know one way to expose it is to use VITE_MONGO_URI, but my point is precisely not to expose it on the client side.
I checked, and the file db.ts is not bundled with the client; import.meta.env.SSR being true even shows that the bundler knows this is happening on the server.
Question: how do I access my private environment variables server-side?
EDIT: As specified by Shriji Kondan, the API for this purpose has now been created: here
You could use dotenv on the server side. Assuming you are using the node adapter, you can have a file _constants.ts in your app:
import 'dotenv/config'; // loads .env into process.env at startup
export const MONGO_URI = process.env.MONGO_URI;
and then import this variable into your script.
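For example, in some server-only module (route path and $lib placement assumed for illustration):

// e.g. in src/routes/+page.server.ts
import { MONGO_URI } from '$lib/_constants';
console.log(MONGO_URI); // defined, and only ever evaluated on the server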
It's not a good idea to put secrets in client-side code. With SUPER_SECRET_API_KEY="$ecret#p1Key" in your .env file, you can either read it from a utility module such as src/lib/utilities/utility.js, as explained here:
import { SUPER_SECRET_API_KEY } from '$env/static/private';

export function performApiAction() {
    const apiInstance = initialiseApi({ key: SUPER_SECRET_API_KEY });
}
or from +page.server.ts via form actions, as stated here, which is the preferable way but more complex.
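A minimal sketch of that form-actions approach (callApi is a hypothetical helper standing in for whatever the action actually does):

// src/routes/+page.server.ts
import { SUPER_SECRET_API_KEY } from '$env/static/private';
import type { Actions } from './$types';

export const actions: Actions = {
    default: async ({ request }) => {
        const data = await request.formData();
        // the key is only ever read here, on the server; it never reaches the browser
        const result = await callApi(data.get('query'), SUPER_SECRET_API_KEY);
        return { result };
    }
};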

How can I pass a pointer to a file in helm upgrade command?

I have a truststore file (a binary file) that I need to provide during helm upgrade. This file is different for each target env (dev, qa, staging, or prod), so I can only provide it at deployment time. helm upgrade --set-file does not take a binary file. This seems to be the issue I found here: https://github.com/helm/helm/issues/3276. These truststore files are stored in the Jenkins credential store.
The flag itself is described as follows:
--set-file stringArray set values from respective files specified via the command line (can specify multiple or separate values with commas: key1=path1,key2=path2)
It is also important to know The Format and Limitations of --set.
The error you see: Error: failed parsing --set-file data... means that the file you are trying to use does not meet the requirements. See the example below:
--set-file key=filepath is another variant of --set. It reads the file and uses its content as a value. An example use case is injecting multi-line text into values without dealing with indentation in YAML. Say you want to create a brigade project with a certain value containing 5 lines of JavaScript code; you might write a values.yaml like:
defaultScript: |
  const { events, Job } = require("brigadier")
  function run(e, project) {
    console.log("hello default script")
  }
  events.on("run", run)
Being embedded in YAML makes it harder to use IDE features, testing frameworks, and other tooling that supports writing code. Instead, you can use --set-file defaultScript=brigade.js with brigade.js containing:
const { events, Job } = require("brigadier")
function run(e, project) {
    console.log("hello default script")
}
events.on("run", run)
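For the binary truststore in the question, a common workaround (a sketch, not something the quoted docs cover; file and release names assumed) is to base64-encode the file first so that --set-file only ever handles plain text:

# encode the binary truststore as text before handing it to helm
base64 -w0 truststore.jks > truststore.b64
helm upgrade my-release ./chart --set-file truststore=truststore.b64

Inside a Secret template the value can then be used as-is, since Secret data is base64-encoded anyway, or decoded with b64dec wherever the raw bytes are needed.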
I hope it helps.

Parsing NYC Transit/MTA historical GTFS data (not realtime)

I've been puzzling on this on and off for months and can't find a solution.
The MTA claims to provide historical data in form of daily dumps in GTFS format here:
http://web.mta.info/developers/MTA-Subway-Time-historical-data.html
See for yourself by downloading the example they provide, in this case Sep 17th, 2014:
https://datamine-history.s3.amazonaws.com/gtfs-2014-09-17-09-31
My problem? The file is gobbledygook. It does not follow GTFS specifications, has no extension, and when I open it using a text editor it looks like 7800 lines of this:
n
^C1.0^X �枪�^Eʞ>`
^C1.0^R^K
^A1^R^F^P����^E^R^K
^A2^R^F^P����^E^R^K
^A3^R^F^P����^E^R^K
^A4^R^F^P����^E^R^K
^A5^R^F^P����^E^R^K
^A6^R^F^P����^E^R^K
^AS^R^F^P����^E^R[
^F000001^ZQ
6
^N050400_1..S02R^Z^H20140917*^A1�>^V
^P01 0824 242/SFY^P^A^X^C^R^W^R^F^Pɚ��^E"^D140Sʚ>^F
^AA^R^AA^RR
^F000002"H
6
Per the MTA site (appears untrue)
All data is formatted in GTFS-realtime
Any idea on the steps necessary to transform this mystery file into usable GTFS data? Is there some encoding I am missing? I have looked for 10+ and been unable to come up with a solution.
Also, not to be a stickler, but I am NOT referring to the MTA's realtime data feed, which is correctly formatted and usable. I am specifically referring to the historical data dumps referenced above (I have received many "solutions" that refer only to the realtime feed).
The file you link to is in GTFS-realtime format, not GTFS, and the page you linked to does a very bad job of explaining which format their data is actually in (though it is mentioned in your quote).
GTFS is used to store schedule data, like routes and scheduled arrival times.
GTFS-realtime is generally used to transfer actual transit performance data in real time, like vehicle locations and expected or actual arrival times. It is a protobuf, a binary serialization format published by Google, which means you can't usefully read it in a text editor; you instead have to load it programmatically using the Google protobuf tools. It can be used as a historical data format in the way the MTA does here, by making daily dumps of the GTFS-rt feed publicly available. It's called GTFS-realtime because various data fields in the realtime feed, like route_id, trip_id, and stop_id, are designed to link to the published GTFS schedules.
I confirmed the validity of the data you linked to by decompiling it using the gtfs-realtime.proto specification and the Google protobuf tools for Python. It begins:
header {
  gtfs_realtime_version: "1.0"
  timestamp: 1410960621
}
entity {
  id: "000001"
  trip_update {
    trip {
      trip_id: "050400_1..S02R"
      start_date: "20140917"
      route_id: "1"
    }
    stop_time_update {
      arrival {
        time: 1410960713
      }
      stop_id: "140S"
    }
  }
}
...
and continues in that vein for a total of 55833 lines (in the default string output format).
EDIT: the Python script used to convert the protobuf into string representation is very simple:
import gtfs_realtime_pb2 as gtfs_rt

# read the raw binary dump downloaded from the MTA
with open('gtfs-rt.pb', 'rb') as f:
    raw_str = f.read()

# parse it into a FeedMessage and print the text representation
msg = gtfs_rt.FeedMessage()
msg.ParseFromString(raw_str)
print(msg)
This requires gtfs-realtime.proto to have been compiled into gtfs_realtime_pb2.py using protoc (following the instructions in the Python protobuf documentation under "Compiling Your Protocol Buffers") and placed in the same directory as the Python script. Furthermore, the binary protobuf downloaded from the MTA needs to be named gtfs-rt.pb and located in the same directory as the Python script.
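The compilation step is a one-liner, assuming protoc is installed and gtfs-realtime.proto is in the current directory:

# generates gtfs_realtime_pb2.py alongside the .proto file
protoc --python_out=. gtfs-realtime.proto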

Fuse File System - general input/output error while accessing office files

I have written a FUSE mirror file system using Fuse-JNA, which mirrors a local directory.
This mirror file system lets me open most types of files correctly with no issue, but it does not open office files, e.g. .doc, .xls, etc., and gives me the below error while opening any office file.
Note:
I thought it was a LibreOffice issue, so I removed it and installed OpenOffice, but I get the same issue.
Secondly, the error only pops up when I try to access an office file from my MirrorFileSystem. Office files open correctly if accessed normally via Ubuntu's default file system.
So it's something wrong with my file system.
Finally (I don't know whether it's related to the question or not, but) in my mirror file system, when I right-click on a file > Properties > Permissions, it shows all the fields disabled, as below
This is my getattr() method:
public int getattr(final String path, final StatWrapper stat)
{
    ....
    if (f.isFile())
    {
        stat.setMode(NodeType.FILE, true, true, true, true, true, true, true, true, true);
        stat.size(f.length());
        stat.atime(f.lastModified() / 1000L);
        stat.mtime(0);
        stat.nlink(1);
        stat.uid(0);
        stat.gid(0);
        stat.blocks((int) ((f.length() + 511L) / 512L));
        return 0;
    }
    ...
}
Please guide me on how to fix the general input/output error with office files.
Office files are not special. There is some other problem with your filesystem implementation, and you need to do more debugging work to find out precisely what the trigger and the cause is. It's very unlikely that the trigger truly is "the file is an office file", unless you have stuff in your filesystem code that operates differently based on the type of file it's dealing with (in which case you should look there). As a first debugging step, you could compare the sha1sum and stat output of the files from the fuse filesystem and from the root filesystem to see if they match. If they don't, adjust the filesystem code such that they do. You could also enable logging on your filesystem class and check if it's returning an I/O error code anywhere. The error message "general input/output error" makes it sound like that is the case.
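For example (paths assumed for illustration):

# compare content and metadata of the same file through both filesystems
sha1sum /mnt/mirror/report.doc /home/user/real/report.doc
stat /mnt/mirror/report.doc /home/user/real/report.doc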
As for the reason the permissions fields are disabled, that's because the file is owned by root, and you are not root so you can't change the permissions. The reason the file is owned by root is because you set stat.uid(0); and stat.gid(0); in getattr. UID 0 and GID 0 are for the root user and root group respectively. Fuse-JNA already puts the current UID and GID as default stat attributes in getattr, so if you want to use these then just don't call stat.uid(0); or stat.gid(0);.
Thanks for the answer.
I searched the web; many websites point to file locking as the reason, e.g. https://forum.openoffice.org/en/forum/viewtopic.php?f=10&t=2020 etc.
So in FUSE, I implemented the file lock function and simply return 0.
My problem is solved. Now I can open all types of office files.
But I do not know whether it is the perfect solution.
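For reference, the stub amounts to something like this (a sketch against Fuse-JNA's FuseFilesystem; the exact method signature may differ between versions):

@Override
public int lock(final String path, final FileInfoWrapper info, final FlockCommand command, final FlockWrapper flock)
{
    // report success for every lock request instead of surfacing an I/O error
    return 0;
}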

how to set the path to where aapt add command adds the file

I'm using the aapt tool to remove some files from different folders of my apk. This works fine.
But when I want to add files to the apk, the aapt add command doesn't let me specify the path where the file should be added, so I can only add files to the root folder of the apk.
This is strange, because surely developers would want to add files to a subfolder of the apk (the res folder, for example). Is this possible with aapt or any other method? Removing files from any folder works fine, but adding only works for the root folder of the apk; I can't use it for any other folder.
Thanks
The aapt tool retains the directory structure specified in the add command. If you want to add something to an existing folder in an apk, you simply must have a similar folder on your system and must specify each file to add with its full directory path. Example:
$ aapt list test.apk
res/drawable-hdpi/pic1.png
res/drawable-hdpi/pic2.png
AndroidManifest.xml
$ aapt remove test.apk res/drawable-hdpi/pic1.png
$ aapt add test.apk res/drawable-hdpi/pic1.png
The pic1.png that is added resides in the folder res/drawable-hdpi/ under the current working directory of the terminal. Hope this answered your question.
There is actually a bug in aapt that will make this randomly impossible. The way it is supposed to work is as the other answer claims: paths are kept, unless you pass -k. Let's see how this is implemented:
The flag that controls whether the path is ignored is mJunkPath:
bool mJunkPath;
This variable is in a class called Bundle, and is controlled by two accessors:
bool getJunkPath(void) const { return mJunkPath; }
void setJunkPath(bool val) { mJunkPath = val; }
If the user specified -k at the command line, it is set to true:
case 'k':
    bundle.setJunkPath(true);
    break;
And, when the data is being added to the file, it is checked:
if (bundle->getJunkPath()) {
    String8 storageName = String8(fileName).getPathLeaf();
    printf(" '%s' as '%s'...\n", fileName, storageName.string());
    result = zip->add(fileName, storageName.string(),
                      bundle->getCompressionMethod(), NULL);
} else {
    printf(" '%s'...\n", fileName);
    result = zip->add(fileName, bundle->getCompressionMethod(), NULL);
}
Unfortunately, the one instance of Bundle used by the application is allocated in main on the stack, and there is no initialization of mJunkPath in the constructor, so the value of the variable is random; without a way to explicitly set it to false, on my system I (seemingly deterministically) am unable to add files at specified paths.
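The trivial fix would just be to give the member a defined default in Bundle's constructor, e.g. (a sketch, not an actual submitted patch):

// in Bundle's constructor: make "keep paths" the defined default
Bundle()
    : mJunkPath(false) /* other members elided */
{
}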
However, you can also just use zip, as an APK is simply a Zip file, and the zip tool works fine.
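For example, zip preserves the path you give it, so the equivalent of the failing aapt add is simply:

$ zip test.apk res/drawable-hdpi/pic1.png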
(For the record, I have not submitted the trivial fix for this as a patch to Android yet, if someone else wants to the world would likely be a better place. My experience with the Android code submission process was having to put up with an incredibly complex submission mechanism that in the end took six months for someone to get back to me, in some cases with minor modifications that could have just been made on their end were their submission process not so horribly complex. Given that there is a really easy workaround to this problem, I do not consider it important enough to bother with all of that again.)
