Create AWS Policies with the Serverless Framework

I am trying to create policies using the Serverless Framework. The idea is to grant access to S3 depending on the user's company.
I tried to deploy my serverless.yaml with this policy:
- PolicyName: IAM_AWS_S3
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: Allow
        Action: '*'
        Resource:
          - !Sub 'arn:aws:s3:${AWS::AccountId}-${aws:PrincipalTag/company}'
          - !Sub 'arn:aws:s3:${AWS::AccountId}-${aws:PrincipalTag/company}/*'
but I get this error:
CREATE_FAILED: AuthenticatedRole (AWS::IAM::Role) The policy failed
legacy parsing (Service: AmazonIdentityManagement; Status Code: 400;
Error Code: MalformedPolicyDocument; Request ID:
da38iiii; Proxy: null)
So, here is my question: is it possible to create a policy before I have a user? Can aws:PrincipalTag/company be null?
Thanks in advance

It was not possible to use PrincipalTag for this, because I needed to use it in DynamoDB too.
I just created the policies through a Lambda instead.
Take into account these answers:
IAM Policy with `aws:ResourceTag` not supported
Use tags inside IAM policy resource
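As an illustration only, here is a minimal sketch of what such a Lambda could look like with boto3; the bucket naming scheme, policy name, and role name are assumptions, not details from the original setup:

import json
import boto3

iam = boto3.client("iam")

def lambda_handler(event, context):
    # Hypothetical input: by the time the user exists, the company is known.
    company = event["company"]
    bucket = f"my-app-{company}"  # assumed bucket naming scheme
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{bucket}",    # note the arn:aws:s3::: prefix
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }
    # Attach as an inline policy on the authenticated users' role (name assumed).
    iam.put_role_policy(
        RoleName="AuthenticatedRole",
        PolicyName=f"IAM_AWS_S3_{company}",
        PolicyDocument=json.dumps(policy),
    )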


Not able to add ARN layer

I tried to add this layer to my lambda function:
arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1
Got this error:
Failed to load layer version details: User: arn:aws:iam::XXXX78362623:root is not authorized to perform: lambda:GetLayerVersion on resource:
arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1
because no resource-based policy allows the lambda:GetLayerVersion action
What is the correct way of adding a layer? I got this ARN from the official blog post:
https://lumigo.io/blog/lambda-telemetry-api-a-new-way-to-process-lambda-telemetry-data-in-real-time/
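For reference, attaching a layer by ARN is usually just a configuration update; a hedged boto3 sketch follows (the function name is a placeholder). Note, though, that the 403 above says no resource-based policy allows lambda:GetLayerVersion, so the blocker is access to the layer itself rather than how it is attached:

import boto3

client = boto3.client("lambda", region_name="us-east-1")

# Attach the layer by ARN; this call replaces the function's current layer list.
client.update_function_configuration(
    FunctionName="my-function",  # placeholder
    Layers=["arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1"],
)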

How to set the scope using Google Operators in Airflow

I have a task using the GCSToGoogleSheetsOperator in Airflow where I'm trying to add data to a sheet.
I have shared the sheet I want to edit with the service credential email, giving it editor privileges, and received this error:
googleapiclient.errors.HttpError:
<HttpError 403 when requesting
https://sheets.googleapis.com/v4/spreadsheets/<SHEET_ID>/values/Sheet1?valueInputOption=RAW&includeValuesInResponse=false&responseValueRenderOption=FORMATTED_VALUE&responseDateTimeRenderOption=SERIAL_NUMBER&alt=json
returned "Request had insufficient authentication scopes.".
Details: "[{
    '#type': 'type.googleapis.com/google.rpc.ErrorInfo',
    'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
    'domain': 'googleapis.com',
    'metadata': {
        'service': 'sheets.googleapis.com',
        'method': 'google.apps.sheets.v4.SpreadsheetsService.UpdateValues'
    }
}]>
I can't update the sheet, but the GCS and BigQuery operators work fine.
My connection configuration looks like the following:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json
I tried following the instructions to add the scope https://www.googleapis.com/auth/spreadsheets, which URL-encoded looks like:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
Now, operators that previously worked error out like this:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs?prettyPrint=false: Request had insufficient authentication scopes.
And the GCSToGoogleSheetsOperator still errors out like this:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/download/storage/v1/b/my-bucket/o/folder%2Fobject.csv?alt=media: Insufficient Permission: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
How can I set the permissions correctly to use the BigQuery, GCS, and Sheets operators together?
Adding a scope seems to make the connection ignore the IAM roles, so it's either one or the other.
The service account had the roles needed to access GCS and BigQuery, but by adding the scope https://www.googleapis.com/auth/spreadsheets, the service would ignore the privileges granted by the roles and look only at the ones specified by the scopes.
So, to recover them, you must add both the spreadsheets and cloud-platform scopes (or more restrictive scopes). cloud-platform provides access to GCS and BigQuery, and spreadsheets to the Google Sheets API.
If you set your connection using environment variables, you have to URL-encode the arguments. To create a GOOGLE_CLOUD connection, you will need something like this (shown unencoded first for readability):
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=/abs/path_to_file/credential.json&extra__google_cloud_platform__scope=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spreadsheets
The encoded version, which is the one you actually have to use, percent-encodes the slash, comma, and colon characters:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fabs%2Fpath_to_file%2Fcredentials%2Fgoshare-driver-c08e0904285b.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform%2Chttps%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
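If it helps, a small Python sketch of producing that encoded value with urllib (the key path is a placeholder):

from urllib.parse import quote

# Build the unencoded values, then percent-encode everything
# (safe='' also encodes slashes, commas, and colons).
key_path = "/abs/path_to_file/credential.json"  # placeholder
scopes = ",".join([
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/spreadsheets",
])

conn = (
    "google-cloud-platform://?"
    f"extra__google_cloud_platform__key_path={quote(key_path, safe='')}"
    f"&extra__google_cloud_platform__scope={quote(scopes, safe='')}"
)
print(conn)  # export this as AIRFLOW_CONN_GOOGLE_CLOUD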

AWS CDK: lambda permissions and EFS mount

I am trying to declare in my stack a lambda function with an EFS mount.
The lambda has a custom execution role with ARN
arn:aws:iam::ACCOUNTID:role/service-role/ROLENAME
i.e. it was created in the stack using
lambda_role = aws_iam.Role(..., path="/service-role/", ...)
The code snippet declaring the lambda is:
_lambda.Function(
    self,
    id="myId",
    runtime=_lambda.Runtime.PYTHON_3_8,
    code=_lambda.Code.asset('lambda'),
    handler='my_module.lambda_handler',
    role=lambda_role,
    function_name="function-name",
    timeout=core.Duration.seconds(30),
    vpc=vpc,
    filesystem=_lambda.FileSystem.from_efs_access_point(access_point, '/efs')
)
The deployment fails with this error:
API: iam:PutRolePolicy User: USERNAME is not authorized to perform: iam:PutRolePolicy on resource: role service-role with an explicit deny
That "role service-role" in the error message seemed weird, so I inspected the synthesized CF template, and noticed this section:
lambdarolePolicy2FC0B982:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      ** policy giving elasticfilesystem:ClientWrite and elasticfilesystem:ClientMount permissions **
    PolicyName: lambdarolePolicy2FC0B982
    Roles:
      - Fn::Select:
          - 1
          - Fn::Split:
              - /
              - Fn::Select:
                  - 5
                  - Fn::Split:
                      - ":"
                      - Fn::ImportValue: sgmkr-iam-arn-mr
That ImportValue maps to the ARN of the lambda execution role. The problem I see is with the string manipulation, which does not take into account the fact that the role has a path: the result of that chain of Select/Split is indeed "service-role", not the proper role name.
I have two questions:
Why is CDK trying to add extra permissions to a role I defined? I already added the needed permissions to the relevant role, and I really don't want anything added to it. Moreover, in my setup iam:PutRolePolicy calls are strictly regulated, so this would almost certainly fail; that's the very reason I pass my own role. How can I switch off this automatic policy generation?
Why is CDK ignoring the role path? Is this intended?
Thanks for your help,
Andrea.
The easiest workaround I could think of is to use the role name instead of the role ARN.
Instead of:
role_arn = iam.Role.from_role_arn(self, "Role", role_arn=<role_arn>)
use:
role_name = iam.Role.from_role_name(self, "Role", role_name=<role_name>)
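For the first question (switching off the automatic policy generation), importing the role as immutable may also help; a sketch, assuming the mutable flag of from_role_arn in CDK's Python API:

from aws_cdk import aws_iam as iam

# Import the existing execution role without letting CDK attach policies to it.
# With mutable=False, grant calls become no-ops instead of iam:PutRolePolicy updates.
lambda_role = iam.Role.from_role_arn(
    self, "LambdaRole",
    role_arn="arn:aws:iam::ACCOUNTID:role/service-role/ROLENAME",
    mutable=False,
)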

Failed to authenticate to Cloud IAP Backend from Cloud Tasks HTTP Request

I'm trying to use Cloud Tasks HTTP Requests to reach a Kubernetes endpoint behind an HTTPS Load Balancer protected by Cloud IAP.
The endpoint works with any GSuite company account, as it should, but when the Cloud Task executes, this is the Cloud Audit - Data Access log (only the important parts are displayed):
authenticationInfo: {
}
authorizationInfo: [
    0: {
        permission: "iap.webServiceVersions.accessViaIAP"
        resource: "projects/<PROJECT_NUMBER>/iap_web/compute/services/<SERVICE_NUMBER>/versions/bs_0"
        resourceAttributes: {
            service: "iap.googleapis.com"
            type: "iap.googleapis.com/WebServiceVersion"
        }
    }
]
status: {
    code: 7
    message: "PERMISSION_DENIED"
}
I'm using the compute-engine service account to create the task, so I've granted this account the appropriate permissions.
When I create the task I add the appropriate OIDC service account email to the HTTP request:
'oidc_token': {'service_account_email': '<PROJECT_NUMBER>-compute@developer.gserviceaccount.com'}
I also checked the Cloud Tasks HTTP Request on another endpoint and the Authentication Bearer token is present.
I really don't have any idea at this point on how to make it work.
Thanks for the help
I found the problem: the OIDC token needed a specific audience to work with Cloud IAP.
The audience needed is the IAP client ID, which can be found in APIs & Services > Credentials, under the OAuth 2.0 client IDs section, with a name starting with IAP.
Just as an example, here is the Python code to add a task that can be granted access by Cloud IAP:
# This is the important part: the 'audience' field is very important!
oidc_token = {
    'service_account_email': '<PROJECT_NUMBER>-compute@developer.gserviceaccount.com',
    'audience': '<PROJECT_NUMBER>-<NUMBER_GENERATED_AUTOMATICALLY_BY_IAP>.apps.googleusercontent.com',
}
http_request = {'http_method': 'POST', 'url': url, 'body': json.dumps(payload).encode(),
                'headers': headers, 'oidc_token': oidc_token}
task = {'http_request': http_request, 'schedule_time': timestamp}
created_task = client.create_task(parent, task)

ejabberd - Configuration of mod_http_api

I'm in the midst of testing mod_http_api to replace the existing usage of mod_rest in our implementation.
I can unrestrict access to some commands for a group of IP addresses by using the "admin_ip_access" option, and I can successfully execute some commands (e.g. change_password).
However, in some cases we may require a login as well, both as a user (own account) and as an admin (own and other users' accounts).
When I try to log in with Basic Auth, it is not successful; I keep getting the following. If my assumption is correct, this might be related to the configuration.
I would much appreciate it if someone could show me what the correct configuration should look like.
{
    "status": "error",
    "code": 31,
    "message": "Command need to be run with admin priviledge."
}
Current config:
modules:
  mod_http_api:
    admin_ip_access: admin_ip_access_rule
acl:
  admin_ip_acl:
    ip:
      - "xx.xx.xx.xx/32"
access:
  admin_ip_access_rule:
    admin_ip_acl:
      - all
EDIT
For testing purposes, I've enabled the following configuration:
commands_admin_access: configure
commands:
  - add_commands:
      - status
      - get_roster
      - change_password
      - register
      - unregister
      - registered_users
      - muc_online_rooms
      - oauth_issue_token
I am able to run both user and admin commands successfully for the commands listed inside the add_commands tag; it works as expected. However, I am still facing some issues, mostly related to the IP restriction: calling the API from a host that is not listed in admin_ip_acl also succeeds, where I would expect an error for a non-whitelisted host.
The API requires an OAuth token for authentication; you need to generate one with the correct scope. When a command is restricted to an admin, you also need to pass the HTTP header "X-Admin: true" to let ejabberd know that you would like to act as an admin.
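For example, a hedged sketch of calling an admin-restricted command with Python's requests (the host, port, token, and user values are placeholders):

import requests

# Call an admin-restricted mod_http_api command with an OAuth bearer token.
resp = requests.post(
    "https://xmpp.example.com:5443/api/change_password",
    json={"user": "bob", "host": "example.com", "newpass": "s3cret"},
    headers={
        "Authorization": "Bearer <OAUTH_TOKEN>",  # e.g. from oauth_issue_token
        "X-Admin": "true",  # ask ejabberd to treat this as an admin call
    },
)
print(resp.status_code, resp.json())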
