OPA policy to allow docker exec

I've deployed the OPA docker plugin as per the instructions, and everything was fine until I tried to create custom Docker API permissions for docker exec.
I've added following section to authz.rego file:
allow {
    user_id := input.Headers["Authz-User"]
    users[user_id].readOnly
    input.path[0] == "/v1.41/containers/busybox/exec"
    input.Method == "POST"
}
But it still gives me an error when I run the following command as the Bob test user from the instructions: docker exec -it busybox sh.
journalctl -u docker.service shows the following error:
level=error msg="AuthZRequest for POST /v1.41/containers/busybox/exec returned error: authorization denied by plugin openpolicyagent/opa-docker-authz-v2:0.4: request rejected by administrative policy"
The funny thing is that when I comment out the input.path line, the rule works and I get full RW access, so the rule itself is fine, but matching on the exact API path is not. Maybe I'm specifying it the wrong way?
I've tried different variations like:
input.path == ["/v1.41/containers/busybox/exec"]
input.path = ["/v1.41/containers/busybox/exec"]
input.path = ["/v1.41*"]
input.path = ["/v1.41/*"]
input.path = ["/v1.41%"]
input.path = ["/v1.41/%"]
I would also appreciate advice on how to allow exec operations for any container, not only the specified one.
Thanks in advance!

Looking at the input map provided to OPA, you should find input.Path, input.PathPlain and input.PathArr:
input := map[string]interface{}{
    "Headers":    r.RequestHeaders,
    "Path":       r.RequestURI,
    "PathPlain":  u.Path,
    "PathArr":    strings.Split(u.Path, "/"),
    "Query":      u.Query(),
    "Method":     r.RequestMethod,
    "Body":       body,
    "User":       r.User,
    "AuthMethod": r.UserAuthNMethod,
}
There's no lowercase input.path available, but using any of the other alternatives should work.
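For example, here is a sketch of your rule rewritten against input.PathArr (untested, and assuming the plugin builds the input exactly as above: since u.Path starts with "/", strings.Split puts an empty string at index 0, so the version segment sits at index 1). Leaving the container-name segment unconstrained also answers the follow-up about allowing exec for any container:
allow {
    user_id := input.Headers["Authz-User"]
    users[user_id].readOnly
    input.PathArr[1] == "v1.41"
    input.PathArr[2] == "containers"
    # PathArr[3] is the container name; it is deliberately left
    # unconstrained so exec is allowed on any container
    input.PathArr[4] == "exec"
    input.Method == "POST"
}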

Related

How can I move a transformation module to Memgraph running in Docker?

I'm trying to figure out how streaming with Kafka works in combination with Memgraph. I have Memgraph running in a Docker container. I've created a module called music.py using Visual Studio Code, but I can't save it into the Docker container.
import mgp
import json

# Kafka transformation: turns each incoming message into a Cypher query plus parameters
@mgp.transformation
def rating(messages: mgp.Messages
           ) -> mgp.Record(query=str, parameters=mgp.Nullable[mgp.Map]):
    result_queries = []
    for i in range(messages.total_messages()):
        message = messages.message_at(i)
        # each payload is a JSON document describing one album rating
        album_dict = json.loads(message.payload().decode('utf8'))
        result_queries.append(
            mgp.Record(
                query=("MERGE (u:User {id: $userId}) "
                       "MERGE (m:Album {id: $albumId, title: $title}) "
                       "WITH u, m "
                       "UNWIND $genres as genre "
                       "MERGE (m)-[:OF_GENRE]->(:Genre {name: genre}) "
                       "MERGE (u)-[r:RATED {rating: ToFloat($rating), timestamp: $timestamp}]->(m)"),
                parameters={
                    "userId": album_dict["userId"],
                    "albumId": album_dict["album"]["albumId"],
                    "title": album_dict["album"]["title"],
                    "genres": album_dict["album"]["genres"],
                    "rating": album_dict["rating"],
                    "timestamp": album_dict["timestamp"]}))
    return result_queries
Should I run vi inside the container and copy/paste the code into it, or is there another way?
You need to copy your music.py file into the Docker container. The procedure is as follows:
Open a command prompt/terminal and find the CONTAINER ID of the Memgraph Docker container (e.g. with docker ps).
Go to the folder/directory where music.py is saved and run the following command: docker cp ./music.py <CONTAINER ID>:/usr/lib/memgraph/query_modules/music.py
Don't forget to replace <CONTAINER ID> with the real CONTAINER ID from step one; it looks something like 38cd0e84f17b.
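Put together, the whole procedure might look like this (a sketch; the container ID reuses the example from step one, and the final reload step assumes a Memgraph version that ships the mg.load_all() procedure):
# Find the container ID of the running Memgraph container
docker ps
# Copy the module into Memgraph's query_modules directory
docker cp ./music.py 38cd0e84f17b:/usr/lib/memgraph/query_modules/music.py
# Then, inside mgconsole or Memgraph Lab, reload the modules:
#   CALL mg.load_all();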
You could also create the file using vi if you have it installed in your container. I've never tried that, but take a look at this question for instructions on how to do it.
For me personally, the easiest way to create a transformation module is using Memgraph Lab, but you didn't mention whether you have it.

How to create multiple branch restrictions using Bitbucket api?

I'm trying to automate the branch permissions setup using the Bitbucket API, but when I add multiple rules, an existing rule doesn't get overwritten. I'm creating 2 rules for one branch of a repository, but if I re-run the API call with a small change to a rule, it adds the changed rule instead of editing the current one.
I run this call:
curl -X POST -v -u "username:secret" -H "Content-Type: application/vnd.atl.bitbucket.bulk+json" https://bitbucket.example.com/rest/branch-permissions/2.0/projects/myproj/repos/myrepo/restrictions -d '[{ "type": "read-only","matcher": {"id": "master","displayId": "master","type": {"id":"PATTERN","name": "Pattern"}},"users": ["my.user"],"groups": ["StashAdmins"]},{ "type": "no-deletes","matcher": {"id": "master","displayId": "master","type": { "id":"PATTERN","name": "Pattern"}},"users": ["user.my"],"groups": []}]'
Then I wanted to overwrite the current branch permissions so I changed the first rule from read-only to pull-request-only, so I run :
curl -X POST -v -u "username:secret" -H "Content-Type: application/vnd.atl.bitbucket.bulk+json" https://bitbucket.example.com/rest/branch-permissions/2.0/projects/myproj/repos/myrepo/restrictions -d '[{ "type": "pull-request-only","matcher": {"id": "master","displayId": "master","type": {"id":"PATTERN","name": "Pattern"}},"users": ["my.user"],"groups": ["StashAdmins"]},{ "type": "no-deletes","matcher": {"id": "master","displayId": "master","type": { "id":"PATTERN","name": "Pattern"}},"users": ["user.my"],"groups": []}]'
but it added the new rule (pull-request-only) instead of replacing the existing one.
Does anyone know how to force overwriting a branch restriction rule?
With this REST API endpoint you can only create new restrictions, since a repository and/or project can have several of them.
See here for more: https://docs.atlassian.com/bitbucket-server/rest/6.4.0/bitbucket-ref-restriction-rest.html#idp1
You first need to delete all the restrictions that were created before, and then POST new ones. To get all restrictions for a repository, use this endpoint:
GET /rest/branch-permissions/2.0/projects/{projectKey}/repos/{repositorySlug}/restrictions
https://docs.atlassian.com/bitbucket-server/rest/6.4.0/bitbucket-ref-restriction-rest.html#idp3
And then you can delete them one by one with this one:
DELETE /rest/branch-permissions/2.0/projects/{projectKey}/repos/{repositorySlug}/restrictions/{id}
https://docs.atlassian.com/bitbucket-server/rest/6.4.0/bitbucket-ref-restriction-rest.html#idp6
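Put together, a sketch of the delete-then-recreate flow (it assumes jq is installed, that fewer than 100 restrictions exist, and it reuses the example host, credentials, project and repo from the question):
BASE="https://bitbucket.example.com/rest/branch-permissions/2.0/projects/myproj/repos/myrepo/restrictions"
# Delete every existing restriction one by one
for id in $(curl -s -u "username:secret" "$BASE?limit=100" | jq -r '.values[].id'); do
  curl -s -X DELETE -u "username:secret" "$BASE/$id"
done
# Now re-run the original POST with the updated rules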

How to get Task ID from within ECS container?

Hello, I am interested in retrieving the Task ID from within a running container that lives on an EC2 host machine.
The AWS ECS documentation states there is an environment variable, ECS_CONTAINER_METADATA_FILE, with the location of this data, but it will only be set/available if the ECS_ENABLE_CONTAINER_METADATA variable is set to true upon cluster/EC2 instance creation. I don't see where this can be done in the AWS console.
Also, the docs state that this can be done by setting the variable to true inside the host machine, but that would require restarting the Docker agent.
Is there any other way to do this without having to go inside the EC2 instance to set this and restart the Docker agent?
This doesn't work for newer Amazon ECS container agent versions anymore; in fact, it's now much simpler and enabled by default. Please refer to this documentation, but here's a TL;DR:
If you're using Amazon ECS container agent version 1.39.0 or higher, you can just do this inside the Docker container:
curl -s "$ECS_CONTAINER_METADATA_URI_V4/task" \
| jq -r ".TaskARN" \
| cut -d "/" -f 3
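The cut -d "/" -f 3 relies on the new (long) ARN format, where the cluster name is embedded in the ARN, e.g. arn:aws:ecs:us-east-1:<aws_account_id>:task/my-cluster/0cc43cdb3bee44079c26c0e6ea5bee84 (cluster name hypothetical); with the old format, which has no cluster segment, the task ID would be field 2 instead.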
Here's a list of container agent releases, but if you're using :latest, you're definitely fine.
The technique I'd use is to set the environment variable in the container definition.
If you're managing your tasks via CloudFormation, the relevant YAML looks like this:
Taskdef:
  Type: AWS::ECS::TaskDefinition
  Properties:
    ...
    ContainerDefinitions:
      - Name: some-name
        ...
        Environment:
          - Name: AWS_DEFAULT_REGION
            Value: !Ref AWS::Region
          - Name: ECS_ENABLE_CONTAINER_METADATA
            Value: 'true'
This technique helps you keep everything straightforward and reproducible.
If you need metadata programmatically and don't have access to the metadata file, you can query the agent's metadata endpoint:
curl http://localhost:51678/v1/metadata
Note that if you're getting this information as a running task, you may not be able to connect to the loopback device, but you can connect to the EC2 instance's own IP address.
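A sketch of that fallback, assuming the task can reach the EC2 instance metadata service and that nothing blocks port 51678 on the instance:
# Resolve the host's private IP from instance metadata,
# then query the ECS agent's introspection endpoint on it
HOST_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
curl -s "http://${HOST_IP}:51678/v1/metadata"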
We set it with so-called user data, which is executed when the machine starts. There are multiple ways to set it, for example: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html#user-data-console
It could look like this:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=ecs-staging
ECS_ENABLE_CONTAINER_METADATA=true
EOF
Important: Adjust the ECS_CLUSTER above to match your cluster name, otherwise the instance will not connect to that cluster.
The previous answers are correct; here is another way of doing it:
From the EC2 instance where the container is running, run this command:
curl http://localhost:51678/v1/tasks | python -mjson.tool | less
From the AWS ECS CLI documentation:
Command:
aws ecs list-tasks --cluster default
Output:
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84",
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/6b809ef6-c67e-4467-921f-ee261c15a0a1"
    ]
}
To list the tasks on a particular container instance
This example command lists the tasks of a specified container instance, using the container instance UUID as a filter.
Command:
aws ecs list-tasks --cluster default --container-instance f6bbb147-5370-4ace-8c73-c7181ded911f
Output:
{
    "taskArns": [
        "arn:aws:ecs:us-east-1:<aws_account_id>:task/0cc43cdb-3bee-4407-9c26-c0e6ea5bee84"
    ]
}
My ECS solution, as bash and Python snippets. Logging calls print debug output to sys.stderr, while print() is used to pass the value back to the shell script.
#!/bin/bash
TASK_ID=$(python3.8 get_ecs_task_id.py)
echo "TASK_ID: ${TASK_ID}"
Python script - get_ecs_task_id.py
import json
import logging
import os
import sys

import requests

# logging configuration
# file_handler = logging.FileHandler(filename='tmp.log')
# redirecting to stderr so I can pass back extracted task id in STDOUT
stdout_handler = logging.StreamHandler(stream=sys.stderr)
# handlers = [file_handler, stdout_handler]
handlers = [stdout_handler]

logging.basicConfig(
    level=logging.INFO,
    format="[%(asctime)s] {%(filename)s:%(lineno)d} %(levelname)s - %(message)s",
    handlers=handlers,
    datefmt="%Y-%m-%d %H:%M:%S",
)
logger = logging.getLogger(__name__)


def get_ecs_task_id(host):
    path = "/task"
    url = host + path
    headers = {"Content-Type": "application/json"}
    r = requests.get(url, headers=headers)
    logger.debug(f"r: {r}")
    d_r = json.loads(r.text)
    logger.debug(d_r)
    ecs_task_arn = d_r["TaskARN"]
    ecs_task_id = ecs_task_arn.split("/")[2]
    return ecs_task_id


def main():
    logger.debug("Extracting task ID from $ECS_CONTAINER_METADATA_URI_V4")
    logger.debug("Inside get_ecs_task_id.py, redirecting logs to stderr")
    logger.debug("so that I can pass the task id back in STDOUT")
    host = os.environ["ECS_CONTAINER_METADATA_URI_V4"]
    ecs_task_id = get_ecs_task_id(host)
    # This print statement passes the string back to the bash wrapper, don't remove
    logger.debug(ecs_task_id)
    print(ecs_task_id)


if __name__ == "__main__":
    main()

k6 using docker with mounted volume errors on run, with "accepts 1 arg(s), received 2"

I'm trying to run a perf test in my CI environment using the k6 Docker image, and a simple single script file works fine. However, I want to break my tests down into multiple JS files. To do this, I need to mount a volume in Docker so I can import local modules.
The volume seems to be mounting correctly, with my command:
docker run --env-file ./test/performance/env/perf.list -v \
`pwd`/test/performance:/perf -i loadimpact/k6 run - /perf/index.js
k6 seems to start, but immediately errors with
time="2018-01-17T13:04:17Z" level=error msg="accepts 1 arg(s), received 2"
Locally, my file system looks something like
/toychicken
/test
/performance
/env
- perf.list
- index.js
- something.js
And the index.js looks like this
import { check, sleep } from 'k6'
import http from 'k6/http'
import something from '/perf/something'

export default () => {
  const r = http.get(`https://${__ENV.DOMAIN}`)
  check(r, {
    'status is 200': r => r.status === 200
  })
  sleep(2)
  something()
}
You need to remove the "-" after run in the Docker command. The "-" instructs k6 to read the script from stdin, but in this case you want to load the main JS file from the file system. That's why it complains that it received two args: one is the "-" and the second is the path to index.js (the error message could definitely be more descriptive).
You'll also need to add .js to the '/perf/something' import.
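Putting both fixes together, with the paths exactly as in the question:
docker run --env-file ./test/performance/env/perf.list -v \
`pwd`/test/performance:/perf -i loadimpact/k6 run /perf/index.js
and in index.js the import becomes import something from '/perf/something.js'.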

How to check the blockchain height in hyperledger-fabric

I am playing with Hyperledger Fabric v1.0 and am actually a newbie. How can I check the chain height? Is there a command or something I can use to "ask" about the blockchain height? Thanks in advance.
Well, you have a few options for how to do it:
You can leverage the peer CLI tool to obtain the latest available block by running:
peer channel fetch newest -o ordererIP:7050 -c mychannel last.block
Next you can leverage configtxlator to decode the content of the block as follows:
curl -X POST --data-binary @last.block http://localhost:7059/protolator/decode/common.Block
(note you need to start configtxlator first)
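For example (a sketch; the port flag matches the curl above, but check configtxlator --help for your version):
configtxlator start --port=7059 &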
An alternative path assumes you are going to use one of the available SDKs to invoke QSCC (Query System ChainCode) with the GetChainInfo command. This will return the following structure:
type BlockchainInfo struct {
    Height            uint64 `protobuf:"varint,1,opt,name=height" json:"height,omitempty"`
    CurrentBlockHash  []byte `protobuf:"bytes,2,opt,name=currentBlockHash,proto3" json:"currentBlockHash,omitempty"`
    PreviousBlockHash []byte `protobuf:"bytes,3,opt,name=previousBlockHash,proto3" json:"previousBlockHash,omitempty"`
}
which has information about the current ledger height.
Another alternative: using the peer CLI (for example, docker exec -it cli bash) you can do:
peer channel getinfo -c mychannel
It seems that I found something - maybe cumbersome, but better than nothing:
Command:
docker logs -f peer0.org1.example.com 2>&1 | grep blockNo
Check for the "latest" line in the output, something like:
2017-07-18 19:40:39.586 UTC [historyleveldb] Commit -> DEBU b75b Channel [mychannel]: Updates committed to history database for blockNo [34]
So, if I am not wrong, in this case the block height is: 34
Thanks
You can use blockchain-explorer (a UI tool):
https://github.com/hyperledger/blockchain-explorer
You should also be able to use the Fabric CORE API (JSON/REST).
See the docs for the Blockchain GET /chain operation at:
https://github.com/hyperledger-archives/fabric/blob/master/docs/API/CoreAPI.md#rest-api
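For completeness, a hedged sketch against that archived 0.6-era REST API (the port depends on your deployment, and note this REST interface was removed in v1.0, so it only applies to 0.6 networks):
curl -s http://localhost:7050/chain
The JSON response includes a height field.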
