How to Whitelist a Container using the Open Policy Agent Gatekeeper K8sPSPCapabilities Constraint Template - open-policy-agent

I'd like to whitelist a container in the K8sPSPCapabilities constraint template but am having some difficulty with the rego language. I'd like to disallow the NET_RAW capability for all containers except a specific container. Would appreciate it if someone could point me in the right direction.

I believe you're referring to this template from the Gatekeeper ConstraintTemplate library, which can look daunting. Unfortunately, as written, that ConstraintTemplate doesn't let you deny-list specific capabilities while exempting a particular container, so let's construct our own.
If you're just looking for the Rego, here it is in the Rego Playground.
Validating a Container
We'll start from the inside out. Let's assume we have a Container and want to ensure it does not have NET_RAW in .securityContext.capabilities.add. First, we'll collect the values of this list into a set.
capabilities := {c | c := container.securityContext.capabilities.add[_]}
This is called a set comprehension.
Since it's quite possible that you'll want to deny multiple capabilities, we'll assume you'll want to set this as a list parameter rather than having it hard-coded in your ConstraintTemplate. We need to convert this list from input.parameters into a set as well.
denied := {c | c := input.parameters.deniedCapabilities[_]}
What we want to do now is construct the set intersection between the capabilities in the Container and those in denied.
count(capabilities & denied) > 0
This returns true if there are any capabilities in both the container and in the list of denied capabilities.
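If it helps to see these two pieces in isolation, here is a tiny, self-contained sketch you could paste into the Rego Playground; the hard-coded values are purely illustrative and not part of the template:
package example

# Stand-ins for the container's add list and the deniedCapabilities parameter.
add_list := ["NET_RAW", "CHOWN"]
denied_list := ["NET_RAW", "SYS_ADMIN"]

capabilities := {c | c := add_list[_]}
denied := {c | c := denied_list[_]}

# capabilities & denied == {"NET_RAW"}, so the count is 1 and this rule succeeds.
has_overlap { count(capabilities & denied) > 0 }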
We must consider the case where the Container specifies '*', which would implicitly include NET_RAW but not be matched by our logic above. We check this case with:
capabilities["*"]
Note that every statement in a Rego rule body is implicitly AND-ed together. So the following would require both conditions to hold, but we want to match either:
count(capabilities & denied) > 0
capabilities["*"]
We can do this by providing two definitions for has_disallowed_capabilities:
has_disallowed_capabilities(container) {
  capabilities := {c | c := container.securityContext.capabilities.add[_]}
  denied := {c | c := input.parameters.deniedCapabilities[_]}
  count(capabilities & denied) > 0
}

has_disallowed_capabilities(container) {
  capabilities := {c | c := container.securityContext.capabilities.add[_]}
  capabilities["*"]
}
If we call has_disallowed_capabilities on a Container, Rego will automatically check both definitions and return true if at least one returns true.
Validating a Pod
Now we'll actually write the violation rules in the way that's idiomatic to Gatekeeper.
violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  has_disallowed_capabilities(container)
  msg := sprintf("container <%v> has a disallowed capability. Denied capabilities are %v", [container.name, input.parameters.deniedCapabilities])
}
This violation first creates an iterator, container, which will iterate over all Containers in the Pod. By calling a function on the iterator, we implicitly call that function on every Container in the iterator.
We must also check initContainers as those may also have the NET_RAW capability. It looks near-identical to the above:
violation[{"msg": msg}] {
  container := input.review.object.spec.initContainers[_]
  has_disallowed_capabilities(container)
  msg := sprintf("initContainer <%v> has a disallowed capability. Denied capabilities are %v", [container.name, input.parameters.deniedCapabilities])
}
Whitelisting a Container
Almost done. As you said above, we want to whitelist a specific container. Recall that all statements in a Rego rule body are AND-ed together, and the violation rule must succeed in order for the Container to fail validation. So if we add a statement that evaluates to false when it matches the Container we want to exempt, we're done!
Let's say we want to ignore Containers with the name: abc. We can match all other containers with:
container.name != "abc"
This just needs to go in both of our violation functions, and we'll be all set!
Putting everything together
Finally, we put all that we've created above into a ConstraintTemplate:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8spspdeniedcapabilities
  annotations:
    description: Denies Pod capabilities.
spec:
  crd:
    spec:
      names:
        kind: K8sPSPDeniedCapabilities
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            deniedCapabilities:
              type: array
              items:
                type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package capabilities

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          container.name != "abc"
          has_disallowed_capabilities(container)
          msg := sprintf("container <%v> has a disallowed capability. Denied capabilities are %v", [container.name, input.parameters.deniedCapabilities])
        }

        violation[{"msg": msg}] {
          container := input.review.object.spec.initContainers[_]
          container.name != "abc"
          has_disallowed_capabilities(container)
          msg := sprintf("initContainer <%v> has a disallowed capability. Denied capabilities are %v", [container.name, input.parameters.deniedCapabilities])
        }

        has_disallowed_capabilities(container) {
          capabilities := {c | c := container.securityContext.capabilities.add[_]}
          denied := {c | c := input.parameters.deniedCapabilities[_]}
          count(capabilities & denied) > 0
        }

        has_disallowed_capabilities(container) {
          capabilities := {c | c := container.securityContext.capabilities.add[_]}
          capabilities["*"]
        }
And a Constraint to instantiate it:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPDeniedCapabilities
metadata:
  name: denied-capabilities
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    deniedCapabilities: ["NET_RAW"]
If instead you want the whitelisted containers to be configurable, you would follow a similar process to the one we used to make deniedCapabilities.
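For example, here is a rough sketch of that approach; the exemptContainers parameter name and the is_exempt helper are illustrative choices of mine, not part of the library template. You would add an exemptContainers array of strings to the openAPIV3Schema and replace the hard-coded container.name != "abc" check with a lookup into that parameter:
is_exempt(container) {
  exempt := {c | c := input.parameters.exemptContainers[_]}
  exempt[container.name]
}

violation[{"msg": msg}] {
  container := input.review.object.spec.containers[_]
  not is_exempt(container)
  has_disallowed_capabilities(container)
  msg := sprintf("container <%v> has a disallowed capability. Denied capabilities are %v", [container.name, input.parameters.deniedCapabilities])
}
The initContainers rule would get the same not is_exempt(container) line, and the corresponding Constraint would then list something like exemptContainers: ["abc"] alongside deniedCapabilities.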

Related

rego check if item in list exists in item in another list

We have an undetermined list of items in a resource that need to be checked in case they are using one of the given deprecated parameters. In Gatekeeper, the constraint with the parameters looks like this:
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: SOMEKind
metadata:
  name: somename
spec:
  match:
    kinds:
      - apiGroups: ["networking.istio.io"]
        kinds: ["EnvoyFilter"]
  parameters:
    envoyfilters:
      - http_squash:
        canonical: "envoy.filters.http.squash"
        deprecated: "envoy.squash"
      - listener_http_inspector:
        canonical: "envoy.filters.listener.http_inspector"
        deprecated: "envoy.listener.http_inspector"
The resource to check looks like this:
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: envoyfilter
spec:
  configPatches:
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_OUTBOUND
        listener:
          filterChain:
            filter:
              name: envoy.listener.http_inspector
              subFilter:
                name: envoy.filters.http.router
We would need to make sure that for any .spec.configPatches[].match.listener.filterChain.filter.name that could exist in the analyzed resource, if that name matches any of the given "parameters.envoyfilters[].deprecated" values, a violation would happen.
Due to constraints, we are unable to update opa/gk at the moment, so we can't use "import future.keywords".
The reason for having the "canonical" in the list is that we would like to provide the "good" alternative filter to use in the msg of the violation.
We are trying different approaches but this is the latest (not working) one:
violation[{"msg": msg}] {
  config_patches := input.review.object.spec.configPatches[match][listener][filterchain][filter][_].name
  deprecated_envoyfilters := input.parameters.envoyfilters
  use_deprecated_envoyfilters(config_patches, deprecated_envoyfilters)
  msg := sprintf("REVIEW OBJECT: %v", [config_patches])
}

contains(filters, filter) {
  filters[_] = filter
}

use_deprecated_envoyfilters(config_patches, deprecated_envoyfilters) = true {
  counter := [ef | efs := config_patches; contains(efs, deprecated_envoyfilters[_].deprecated)]
  count(counter) > 0
}
First off, did you really mean these parameters?
parameters:
  envoyfilters:
    - http_squash:
      canonical: "envoy.filters.http.squash"
      deprecated: "envoy.squash"
    - listener_http_inspector:
      canonical: "envoy.filters.listener.http_inspector"
      deprecated: "envoy.listener.http_inspector"
because they translate to, for example,
{
  "http_squash": null,
  "canonical": "envoy.filters.http.squash",
  "deprecated": "envoy.squash"
},
So I think some key/val structure is nicer to work with here, let's go with:
envoyfilters:
  http_squash:
    canonical: "envoy.filters.http.squash"
    deprecated: "envoy.squash"
  listener_http_inspector:
    canonical: "envoy.filters.listener.http_inspector"
    deprecated: "envoy.listener.http_inspector"
Having converted your yaml to the GK input via kube-review, and adding the parameters to the input, I've put up this example on the playground: https://play.openpolicyagent.org/p/o8QrD3xj1y
It boils down to this:
package play

violation[{"msg": msg}] {
  config_patch := input.review.object.spec.configPatches[match][listener][filterchain][filter][_].name
  some depr_name # name of deprecation
  input.parameters.envoyfilters[depr_name].deprecated == config_patch
  better := input.parameters.envoyfilters[depr_name].canonical
  msg := sprintf("REVIEW OBJECT: %s, violation %s, use %s instead", [config_patch, depr_name, better])
}
Looping over the different config patches happens implicitly by Rego trying to satisfy
config_patch := input.review.object.spec.configPatches[match][listener][filterchain][filter][_].name
and then we need to look up in our parameters if one of the to-be-applied patches is deprecated:
some depr_name # name of deprecation
input.parameters.envoyfilters[depr_name].deprecated == config_patch
better := input.parameters.envoyfilters[depr_name].canonical
When you use a looping construct like deprecated := input.parameters.envoyfilters[_].deprecated, it does not return a list, but assigns the deprecated variable to each item matching the expression. Your policy could thus be simplified to just:
violation[{"msg": msg}] {
  config_patch := input.review.object.spec.configPatches[match][listener][filterchain][filter][_].name
  deprecated := input.parameters.envoyfilters[_].deprecated
  config_patch == deprecated
  msg := sprintf("deprecated config patch: %v", [config_patch])
}
Full example on the Rego Playground.

Using Redis as a Link Between Docker Containers

I have a docker-compose file with several containers in it, two of which are supposed to communicate via a Redis DB. Both containers have connections to Redis and I can read/write from both. However, I'd like a container to be triggered every time something is added by the other container. I thought I could accomplish this via Redis Pub/Sub, but when I run the code it never triggers anything, even when I can see I've added new items to the Redis queue.
From this I have two questions:
1. Is what I'm looking to do even possible? Can I do publish/subscribe in two separate docker containers and expect it to work as described above?
2. If it is possible, can someone please point out where I'm going wrong with these tools?
This is my function in Docker container 1 that adds new data to the Redis queue and then publishes the data:
func redisShare(key string, value string) {
    jobsQueue.Set(key, value, 0)  // setting in the queue
    jobsQueue.Publish(key, value) // publishing for the other docker container to notice
    fmt.Println("added ", key, "with a value of ", value, "to the redis queue")
}
I'm using this line in my other docker container to subscribe to the Redis queue and listen for changes:
redisdb.Subscribe()
I would expect that if something was added to the Redis queue it would share the data with the other container and I'd see the message received, but right now Docker container 2 just runs and then closes.
Thanks!
Just in case anyone else wonders about the answer: I ended up using a combination of both Aleksandrs' and sui's answers.
In my first Docker container I published the results to a specific channel:
redisdb.Publish("CHANNELNAME", payload) // payload holds the "UID:IP" string the subscriber expects
And then in my second Docker container, thanks to sui for the assistance on this part, I subscribed to the channel and pulled both the UID and the IP information like so:
pubsub := redisdb.Subscribe("CHANNELNAME")
ch := pubsub.Channel()
for msg := range ch {
    fmt.Println(msg.Payload)
    s := strings.Split(msg.Payload, ":")
    UID, IP := s[0], s[1]
    fmt.Println(UID, IP)
}
This is working great for me so far - thanks so much to sui and Aleksandrs for the assistance!
As from the go-redis docs:
receiver.go
package main

import (
    "fmt"

    "github.com/go-redis/redis"
)

func main() {
    c := redis.NewClient(&redis.Options{
        Addr: ":6379",
    })

    pubsub := c.Subscribe("mychannel1")

    // Wait for confirmation that subscription is created before publishing anything.
    _, err := pubsub.Receive()
    if err != nil {
        panic(err)
    }

    // Go channel which receives messages.
    ch := pubsub.Channel()

    // Consume messages.
    for msg := range ch {
        fmt.Println(msg.Channel, msg.Payload)
    }
}
sender.go
package main

import (
    "time"

    "github.com/go-redis/redis"
)

func main() {
    c := redis.NewClient(&redis.Options{
        Addr: ":6379",
    })

    // Publish a message.
    for range time.Tick(time.Second) {
        err := c.Publish("mychannel1", "hello").Err()
        if err != nil {
            panic(err)
        }
    }
}

XPath 2.0 Leading '/' cannot select the root node of the tree containing the context item: the context item is not a node

Trying to check that one value (once tokenized) doesn't match another value in my document:
/foo/bar/baz/tokenize(value,',')[not(. =(/foo/biz/value/string(),'bing'))]
Specifically here, checking that /foo/bar/baz/value (which is 'ding,dong,bing') doesn't match /foo/biz/value/string() or the value 'bong'.
But I'm getting "Leading '/' cannot select the root node of the tree containing the context item: the context item is not a node"
Is there any way that I can do this in XPath, or do I need to get out into XQuery and start to worry about variables?
Given that you're using Saxon, you can take advantage of the fact that XPath 3.0 allows you to bind variables:
let $foo := /foo return $foo/bar/baz/tokenize(value,',')
[not(. =($foo/biz/value/string(),'bing'))]
or you could pull the expression out of the predicate:
let $exceptions := (/foo/biz/value/string(),'bing')
return /foo/bar/baz/tokenize(value,',')[not(. = $exceptions)]
If you want pure XPath 2.0 you can achieve the same with an ugly "for" binding:
for $foo in /foo return $foo/bar/baz/tokenize(value,',')
[not(. =($foo/biz/value/string(),'bing'))]
If you're in XSLT, of course, you can use current().

Expand tilde to home directory

I have a program that accepts a destination folder where files will be created. My program should be able to handle absolute paths as well as relative paths. My problem is that I don't know how to expand ~ to the home directory.
My function to expand the destination looks like this. If the path given is absolute it does nothing; otherwise it joins the relative path with the current working directory.
import "path"
import "os"

// var destination *string is the user input
func expandPath() {
    if path.IsAbs(*destination) {
        return
    }
    cwd, err := os.Getwd()
    checkError(err)
    *destination = path.Join(cwd, *destination)
}
Since path.Join doesn't expand ~ it doesn't work if the user passes something like ~/Downloads as the destination.
How should I solve this in a cross platform way?
Go provides the package os/user, which allows you to get the current user, and for any user, their home directory:
usr, _ := user.Current()
dir := usr.HomeDir
Then, use path/filepath to combine both strings to a valid path:
if path == "~" {
    // In case of "~", which won't be caught by the "else if"
    path = dir
} else if strings.HasPrefix(path, "~/") {
    // Use strings.HasPrefix so we don't match paths like
    // "/something/~/something/"
    path = filepath.Join(dir, path[2:])
}
(Note that user.Current() is not implemented in the go playground (likely for security reasons), so I can't give an easily runnable example).
In general, the ~ is expanded by your shell before it gets to your program, but there are some limitations.
It is generally ill-advised to do the expansion manually in Go.
I had the same problem in a program of mine, and what I have understood is that if I pass the flag as --flag=~/myfile, it is not expanded. But if you run --flag ~/myfile it is expanded by the shell (the = is missing and the filename appears as a separate "word").
Normally, the ~ is expanded by the shell before your program sees it.
Adjust how your program acquires its arguments from the command line in a way compatible with the shell expansion mechanism.
One of the possible problems is using exec.Command like this:
cmd := exec.Command("some-binary", someArg) // say 'someArg' is "~/foo"
which will not get expanded. You can, for example use instead:
cmd := exec.Command("sh", "-c", fmt.Sprintf("'some-binary %q'", someArg))
which will get the standard ~ expansion from the shell.
EDIT: fixed the 'sh -c' example.
If you are expanding the tilde '~' for use with exec.Command() you should use the user's local shell for expansion.
// 'sh', 'bash' and 'zsh' all respect the '-c' argument
cmd := exec.Command(os.Getenv("SHELL"), "-c", "cat ~/.myrc")
cmd.Stdout = os.Stdout
if err := cmd.Run(); err != nil {
    fmt.Fprintln(os.Stderr, err)
}
However, when loading application config files such as ~/.myrc, this solution is not acceptable. The following has worked well for me across multiple platforms:
import "os/user"
import "path/filepath"
func expand(path string) (string, error) {
    if len(path) == 0 || path[0] != '~' {
        return path, nil
    }
    usr, err := user.Current()
    if err != nil {
        return "", err
    }
    return filepath.Join(usr.HomeDir, path[1:]), nil
}
NOTE: usr.HomeDir does not respect $HOME; instead it determines the home directory by reading the /etc/passwd file via the getpwuid_r syscall on macOS/Linux. On Windows it uses the OpenCurrentProcessToken syscall to determine the user's home directory.
I know this is an old question but there is another option now. You can use go-homedir to expand the tilde to the user's homedir:
myPath := "~/.ssh"
expanded, _ := homedir.Expand(myPath)
fmt.Printf("path: %s; with expansion: %s\n", myPath, expanded)
This works on go >= 1.12:
if strings.HasPrefix(path, "~/") {
    home, _ := os.UserHomeDir()
    path = filepath.Join(home, path[2:])
}

How to retain service settings through InstallShield upgrade install

I have an InstallScript project in IS2010. It has a handful of services that get installed. Some are C++ exes and use the "InstallShield Object for NT Services". Others are Java apps installed as services with Java Service Wrapper through LaunchAppAndWait command line calls. Tomcat is also being installed as a service through a call to its service.bat.
When the installer runs in upgrade mode, the services are reinstalled, and the settings (auto vs. manual startup, restart on fail, log-on account, etc.) are reverted to the defaults.
I would like to save the service settings before the file transfer and then repopulate them afterward, but I haven't been able to find a good mechanism to do this. How can I save and restore the service settings?
I got this working by reading the service information from the registry in OnUpdateUIBefore, storing it in a global variable, and writing the information back to the registry in OnUpdateUIAfter.
Code:
export prototype void LoadServiceSettings();

// Called from OnUpdateUIAfter: writes the Start values saved in the global
// sServiceSettings array back to the registry.
function void LoadServiceSettings()
    number i, nResult;
    string sServiceNameArray(11), sRegKey, sTemp;
    BOOL bEntryFound;
begin
    PopulateServiceNameList(sServiceNameArray);
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    // write service start values to the registry
    for i = 0 to 10
        if (ServiceExistsService(sServiceNameArray(i))) then
            sRegKey = "SYSTEM\\CurrentControlSet\\Services\\" + sServiceNameArray(i);
            nResult = RegDBSetKeyValueEx(sRegKey, "Start", REGDB_NUMBER, sServiceSettings(i), -1);
            if (nResult < 0) then
                MessageBox("Unable to save service settings: " + sServiceNameArray(i) + ".", SEVERE);
            endif;
        endif;
    endfor;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT); // set back to default
end;

export prototype void SaveServiceSettings();

// Called from OnUpdateUIBefore: reads each service's Start value from the
// registry into the global sServiceSettings array.
function void SaveServiceSettings()
    number i, nType, nSize, nResult;
    string sServiceNameArray(11), sRegKey, sKeyValue;
begin
    PopulateServiceNameList(sServiceNameArray);
    RegDBSetDefaultRoot(HKEY_LOCAL_MACHINE);
    for i = 0 to 10
        if (ServiceExistsService(sServiceNameArray(i))) then
            // get service start values from registry
            sRegKey = "SYSTEM\\CurrentControlSet\\Services\\" + sServiceNameArray(i);
            nResult = RegDBGetKeyValueEx(sRegKey, "Start", nType, sKeyValue, nSize);
            if (nResult < 0) then
                MessageBox("Unable to save service settings: " + sServiceNameArray(i) + ".", SEVERE);
            endif;
            sServiceSettings(i) = sKeyValue;
        endif;
    endfor;
    RegDBSetDefaultRoot(HKEY_CLASSES_ROOT); // set back to default
end;
