I tried to add this layer to my lambda function:
arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1
Got this error:
Failed to load layer version details: User: arn:aws:iam::XXXX78362623:root is not authorized to perform: lambda:GetLayerVersion on resource:
arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1
because no resource-based policy allows the lambda:GetLayerVersion action
What is the correct way of adding a layer? I got this ARN from the official blog post:
https://lumigo.io/blog/lambda-telemetry-api-a-new-way-to-process-lambda-telemetry-data-in-real-time/
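For reference, attaching a published layer is normally just a matter of listing its version ARN on the function; here is a minimal boto3 sketch (the function name is a placeholder). The error above, though, points at the layer owner's resource-based policy: it has to allow lambda:GetLayerVersion for your account before any attach method will work.

import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

# Attach the layer by listing its version ARN on the function.
# "my-function" is a hypothetical function name.
lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=["arn:aws:lambda:us-east-1:723663554526:layer:lumigo-telemetry-shipper:1"],
)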
I am trying to create policies using the Serverless Framework. The idea is to grant access to S3, depending on the user's company.
I tried to deploy my serverless.yaml with the policy:
- PolicyName: IAM_AWS_S3
  PolicyDocument:
    Version: "2012-10-17"
    Statement:
      - Effect: Allow
        Action: '*'
        Resource:
          - !Sub 'arn:aws:s3:${AWS::AccountId}-${aws:PrincipalTag/company}'
          - !Sub 'arn:aws:s3:${AWS::AccountId}-${aws:PrincipalTag/company}/*'
but I get this error:
CREATE_FAILED: AuthenticatedRole (AWS::IAM::Role) The policy failed
legacy parsing (Service: AmazonIdentityManagement; Status Code: 400;
Error Code: MalformedPolicyDocument; Request ID:
da38iiii; Proxy: null)
So, here is my question: is it possible to create a policy before I have a user? Can PrincipalTag/company be null?
Thanks in advance
It was not possible to use PrincipalTag for this, because I needed to use it in DynamoDB too.
I just create the policies through a Lambda instead.
Take into account these answers:
IAM Policy with `aws:ResourceTag` not supported
Use tags inside IAM policy resource
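For illustration, a minimal sketch of the Lambda-based approach (boto3); the role and bucket names below are placeholders, not values from the question:

import json
import boto3

iam = boto3.client("iam")

def lambda_handler(event, context):
    company = event["company"]  # assumed to be passed in the invocation payload
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:*",
                "Resource": [
                    f"arn:aws:s3:::my-prefix-{company}",    # hypothetical bucket name
                    f"arn:aws:s3:::my-prefix-{company}/*",
                ],
            }
        ],
    }
    # Attach the per-company policy to a per-company role (hypothetical name).
    iam.put_role_policy(
        RoleName=f"authenticated-role-{company}",
        PolicyName="IAM_AWS_S3",
        PolicyDocument=json.dumps(policy_document),
    )
    return {"statusCode": 200}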
I have a task using the GCSToGoogleSheetsOperator in Airflow where I'm trying to add data to a sheet.
I have added the service account's email to the sheet I want to edit with editor privileges, but received this error:
googleapiclient.errors.HttpError:
<HttpError 403 when requesting
https://sheets.googleapis.com/v4/spreadsheets/<SHEET_ID>/values/Sheet1?valueInputOption=RAW&includeValuesInResponse=false&responseValueRenderOption=FORMATTED_VALUE&responseDateTimeRenderOption=SERIAL_NUMBER&alt=json
returned "Request had insufficient authentication scopes.".
Details: "[{
'#type': 'type.googleapis.com/google.rpc.ErrorInfo',
'reason': 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
'domain': 'googleapis.com',
'metadata': {
'service': 'sheets.googleapis.com',
'method': 'google.apps.sheets.v4.SpreadsheetsService.UpdateValues'}
}]>
I can't update the sheet, but the GCS and BigQuery operators work fine.
My connection configuration looks like the following:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json
I tried following the instructions to add the scope https://www.googleapis.com/auth/spreadsheets, which URL-encoded looks like:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fopt%2Fairflow%2Fcredentials%2Fgoogle_credential.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
Now, operators which previously worked error out like this:
google.api_core.exceptions.Forbidden: 403 POST https://bigquery.googleapis.com/bigquery/v2/projects/my-project/jobs?prettyPrint=false: Request had insufficient authentication scopes.
And the GCSToGoogleSheetsOperator still errors out like this:
google.api_core.exceptions.Forbidden: 403 GET https://storage.googleapis.com/download/storage/v1/b/my-bucket/o/folder%2Fobject.csv?alt=media: Insufficient Permission: ('Request failed with status code', 403, 'Expected one of', <HTTPStatus.OK: 200>, <HTTPStatus.PARTIAL_CONTENT: 206>)
How can I set the permissions correctly to use the BigQuery, GCS and Sheets operators together?
Adding a scope seems to make the connection ignore the IAM roles, so it's either one or the other.
The service account had the roles needed to access GCS and BigQuery, but by adding the scope https://www.googleapis.com/auth/spreadsheets, it would ignore the privileges granted by the roles and look only at those specified by the scopes.
So, to recover access, you must add both the spreadsheets and cloud-platform scopes (or stricter ones): cloud-platform provides access to GCS and BigQuery, and spreadsheets to the Google Sheets API.
If you set your connection using environment variables, you have to URL-encode the arguments. To create a GOOGLE_CLOUD connection, you would start from something like this (not yet encoded):
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=/abs/path_to_file/credential.json&extra__google_cloud_platform__scope=https://www.googleapis.com/auth/cloud-platform,https://www.googleapis.com/auth/spreadsheets
To get the encoded version, which is the one you actually have to use, replace / with %2F, , with %2C, and : with %3A:
AIRFLOW_CONN_GOOGLE_CLOUD=google-cloud-platform://?extra__google_cloud_platform__key_path=%2Fabs%2Fpath_to_file%2Fcredentials%2Fgoshare-driver-c08e0904285b.json&extra__google_cloud_platform__scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform%2Chttps%3A%2F%2Fwww.googleapis.com%2Fauth%2Fspreadsheets
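If you prefer not to encode by hand, a small Python sketch with the standard library produces an equivalent encoded string (the path and scopes below follow the unencoded example above):

from urllib.parse import quote

key_path = "/abs/path_to_file/credential.json"
scopes = ",".join([
    "https://www.googleapis.com/auth/cloud-platform",
    "https://www.googleapis.com/auth/spreadsheets",
])

# safe='' forces /, : and , to be percent-encoded as well
conn_uri = (
    "google-cloud-platform://?"
    "extra__google_cloud_platform__key_path=" + quote(key_path, safe="") +
    "&extra__google_cloud_platform__scope=" + quote(scopes, safe="")
)
print(conn_uri)  # export the result as AIRFLOW_CONN_GOOGLE_CLOUD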
I am trying to declare in my stack a lambda function with an EFS mount.
The lambda has a custom execution role with arn
arn:aws:iam::ACCOUNTID:role/service-role/ROLENAME
i.e. it was created in the stack using
lambda_role = aws_iam.Role(..., path="/service-role/", ...)
The code snippet declaring the lambda is
_lambda.Function(
    self,
    id="myId",
    runtime=_lambda.Runtime.PYTHON_3_8,
    code=_lambda.Code.asset('lambda'),
    handler='my_module.lambda_handler',
    role=lambda_role,
    function_name="function-name",
    timeout=core.Duration.seconds(30),
    vpc=vpc,
    filesystem=_lambda.FileSystem.from_efs_access_point(access_point, '/efs')
)
The deployment fails with this error:
API: iam:PutRolePolicy User: USERNAME is not authorized to perform: iam:PutRolePolicy on resource: role service-role with an explicit deny
That "role service-role" in the error message seemed weird, so I inspected the synthesized CF template, and noticed this section:
lambdarolePolicy2FC0B982:
  Type: AWS::IAM::Policy
  Properties:
    PolicyDocument:
      ** policy giving elasticfilesystem:ClientWrite and elasticfilesystem:ClientMount permissions **
    PolicyName: lambdarolePolicy2FC0B982
    Roles:
      - Fn::Select:
          - 1
          - Fn::Split:
              - /
              - Fn::Select:
                  - 5
                  - Fn::Split:
                      - ":"
                      - Fn::ImportValue: sgmkr-iam-arn-mr
That ImportValue maps to the ARN of the Lambda execution role. The problem I see is with the string manipulation, which does not take into account the fact that the role has a path: the result of that chain of Select/Split is indeed "service-role", not the actual role name.
I have two questions:
Why is CDK trying to add extra permissions to a role I defined? I already added the needed permissions to the relevant role, and I really don't want anything added to it. Moreover, in my setup, iam:PutRolePolicy calls are strictly regulated, so this would almost certainly fail; that's the very reason I pass my own role. How can I switch off this automatic policy generation?
Why is CDK ignoring the role path? Is this intended?
Thanks for your help,
Andrea.
The easiest workaround I could think of would be using the role name instead of the role ARN.
Instead of:
role_arn = iam.Role.from_role_arn(self, "Role", role_arn=<role_arn>)
Use:
role_name = iam.Role.from_role_name(self, "Role", role_name=<role_name>)
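A minimal sketch of how that import could look in the original stack (names are placeholders; depending on your CDK version, from_role_arn with mutable=False is another way to tell CDK not to attach any policies to an imported role):

from aws_cdk import aws_iam as iam

# Import the execution role by name, so CDK does not have to parse the ARN
# (and therefore does not trip over the /service-role/ path):
lambda_role = iam.Role.from_role_name(self, "Role", role_name="ROLENAME")

# Alternative: keep the ARN import, but mark the role as immutable so CDK
# skips the automatic AWS::IAM::Policy attachment entirely.
# lambda_role = iam.Role.from_role_arn(
#     self, "Role",
#     role_arn="arn:aws:iam::ACCOUNTID:role/service-role/ROLENAME",
#     mutable=False,
# )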
We're making a forum where access is supposed to be denied to unregistered users. I want to show a proper error message - not an exception - to users that are not allowed there. How do I achieve this in Neos 2.0?
Both read and write access should be denied. Maybe it's easier to deny access to the node where the forum is? But wouldn't that require hard-coding the node path?
Current Policy.yaml:
privilegeTargets:
  'TYPO3\Flow\Security\Authorization\Privilege\Method\MethodPrivilege':
    'My.Package:PostControllerLoggedInActions':
      matcher: 'method(My\Package\PostController->(index|new|create)Action(.*))'

roles:
  'TYPO3.Flow:Everybody':
    privileges:
      -
        privilegeTarget: 'My.Package:PostControllerLoggedInActions'
        permission: DENY

  'My.Package:User':
    privileges:
      -
        privilegeTarget: 'My.Package:PostControllerLoggedInActions'
        permission: GRANT
Edit: Here are some slides about (among other things) how to create a custom 404 page: https://speakerdeck.com/aertmann/tasty-recipes-for-every-day-neos
Edit 2: Use Flow exception handler?
You can try to set it in your root Configuration/Settings.yaml. You can match by status code (as in the example below) or by exception class:
TYPO3:
  Flow:
    # you already have persistence
    # and maybe other stuff under Flow here;
    # just add this below them, still under TYPO3.Flow
    error:
      exceptionHandler:
        renderingGroups:
          accessRestricted:
            matchingStatusCodes: [401, 403]
            options:
              templatePathAndFilename: 'resource://TYPO3.Neos/Private/Templates/Error/Index.html'
              layoutRootPath: 'resource://TYPO3.Neos/Private/Layouts/'
              format: 'html'
              variables:
                errorTitle: 'Restricted Area'
                errorDescription: 'Go home boy.'
Based on this information:
http://www-10.lotus.com/ldd/appdevwiki.nsf/xpDocViewer.xsp?lookupName=API+Reference#action=openDocument&res_title=OpenSocial_Profiles_API_sbar&content=pdcontent
And a working URL for posting updates:
I created this one to try and find out to whom this access token belongs:
https://connections4.e-office.com/connections/opensocial/oauth/rest/people/#me/#self
But then I get Error 501: No service defined for path people/#me/#self
What should the URL be?
Apparently you don't need to include #self.
This is it:
connections/opensocial/oauth/rest/people/#me/
See also : http://www-10.lotus.com/ldd/appdevwiki.nsf/xsp/.ibmmodres/domino/OpenAttachment/ldd/appdevwiki.nsf/B49DB47061DA9DEB85257AC9006D5256/attach/AppDev_OpenSocial.pdf
You can use the Profiles REST API URLs:
http(s)://yourserver/profiles/admin/atom/profileEntry.do?email=mailaddress
or
http(s)://yourserver/profiles/admin/atom/profileEntry.do?uid=uid
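For completeness, a small Python sketch of calling that Profiles endpoint (the hostname, e-mail address and authentication header are placeholders; how you authenticate depends on your Connections setup):

import requests

BASE = "https://yourserver"  # placeholder host

response = requests.get(
    BASE + "/profiles/admin/atom/profileEntry.do",
    params={"email": "user@example.com"},                 # placeholder address
    headers={"Authorization": "Bearer <access token>"},   # placeholder credentials
)
response.raise_for_status()
print(response.text)  # Atom entry describing the profile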