When I try to deploy a seemingly simple CDK stack, it fails with a strange error. I don't get this same behavior when I create a different iam.ManagedPolicy in a different file, and that one has a much more complicated policy with several actions, etc. What am I doing wrong?
import aws_cdk.core as core
from aws_cdk import aws_iam as iam
from constructs import Construct
from master_payer import ( env, myenv )

class FromStack(core.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # myenv['pma'] = an account ID (12 digits)
        # env = 'dev'
        rolename = f"arn:aws:iam:{myenv['pma']}:role/CrossAccount{env.capitalize()}MpaAdminRole"
        mpname = f"{env.capitalize()}MpaAdminPolicy"
        pol = iam.ManagedPolicy(self, mpname, managed_policy_name=mpname,
            document=iam.PolicyDocument(statements=[
                iam.PolicyStatement(actions=["sts:AssumeRole"], effect=iam.Effect.ALLOW, resources=[rolename])
            ]))
        grp = iam.Group(self, f"{env.capitalize()}MpaAdminGroup", managed_policies=[pol])
The cdk deploy output:
FromStack: deploying...
FromStack: creating CloudFormation changeset...
2:19:52 AM | CREATE_FAILED | AWS::IAM::ManagedPolicy | DevMpaAdminPolicyREDACTED
The policy failed legacy parsing (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: REDACTED-GUID; Proxy: null)
new ManagedPolicy (/tmp/jsii-kernel-EfRyKw/node_modules/@aws-cdk/aws-iam/lib/managed-policy.js:39:26)
_ /tmp/tmpxl5zxf8k/lib/program.js:8432:58
_ Kernel._wrapSandboxCode (/tmp/tmpxl5zxf8k/lib/program.js:8860:24)
_ Kernel._create (/tmp/tmpxl5zxf8k/lib/program.js:8432:34)
_ Kernel.create (/tmp/tmpxl5zxf8k/lib/program.js:8173:29)
_ KernelHost.processRequest (/tmp/tmpxl5zxf8k/lib/program.js:9757:36)
_ KernelHost.run (/tmp/tmpxl5zxf8k/lib/program.js:9720:22)
_ Immediate._onImmediate (/tmp/tmpxl5zxf8k/lib/program.js:9721:46)
_ processImmediate (node:internal/timers:464:21)
❌ FromStack failed: Error: The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
at Object.waitForStackDeploy (/usr/local/lib/node_modules/aws-cdk/lib/api/util/cloudformation.ts:307:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at prepareAndExecuteChangeSet (/usr/local/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:351:26)
at CdkToolkit.deploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:194:24)
at initCommandLine (/usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:267:9)
The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
And the cdk synth output, which cfn-lint is happy with (no warnings, errors, or informational violations):
{
"Resources": {
"DevMpaAdminPolicyREDACTED": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Resource": "arn:aws:iam:REDACTED-ACCOUNT-ID:role/CrossAccountDevMpaAdminRole"
}
],
"Version": "2012-10-17"
},
"Description": "",
"ManagedPolicyName": "DevMpaAdminPolicy",
"Path": "/"
},
"Metadata": {
"aws:cdk:path": "FromStack/DevMpaAdminPolicy/Resource"
}
},
"DevMpaAdminGroupREDACTED": {
"Type": "AWS::IAM::Group",
"Properties": {
"ManagedPolicyArns": [
{
"Ref": "DevMpaAdminPolicyREDACTED"
}
]
},
"Metadata": {
"aws:cdk:path": "FromStack/DevMpaAdminGroup/Resource"
}
},
"CDKMetadata": {
"Type": "AWS::CDK::Metadata",
"Properties": {
"Analytics": "v2:deflate64:REDACTED-B64"
},
"Metadata": {
"aws:cdk:path": "FromStack/CDKMetadata/Default"
}
}
}
}
Environment Specs
$ cdk --version
2.2.0 (build 4f5c27c)
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
$ python --version
Python 3.6.8
$ node --version
v16.8.0
The role ARN rolename was incorrect; I was missing a colon after iam. So it's iam:: not iam:. I think I copied the single colon from a (wrong) example somewhere on the Internet. Gah...
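For reference, the corrected line from the stack above looks like this (IAM ARNs have an empty region segment, so iam is followed by two colons):
# Corrected role ARN: the region field is empty for IAM, hence "iam::" rather than "iam:".
rolename = f"arn:aws:iam::{myenv['pma']}:role/CrossAccount{env.capitalize()}MpaAdminRole"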
I have a generic repository "my_repo". I uploaded files to it from Jenkins, to paths like my_repo/branch_buildNumber/package.tar.gz, with a custom property "tag" such as "1.9.0", "1.10.0", etc. I want to get the item/file with the latest/newest tag.
I tried to modify Example 2 from this link ...
https://www.jfrog.com/confluence/display/JFROG/Using+File+Specs#UsingFileSpecs-Examples
... and add sorting and limit the way it was done here ...
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language#ArtifactoryQueryLanguage-limitDisplayLimitsandPagination
But I'm getting an "unknown property desc" error.
The Jenkins Artifactory Plugin, like most of the JFrog clients, supports File Specs for downloading and uploading generic files.
The File Specs schema is described here. When creating a File Spec for downloading files, you have the option of using the "pattern" property, which can include wildcards. For example, the following spec downloads all the zip files from the my-local-repo repository into the local froggy directory:
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"target": "froggy/"
}
]
}
Alternatively, you can use "aql" instead of "pattern". The following spec provides the same result as the previous one:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"target": "froggy/"
}
]
}
The allowed AQL syntax inside File Specs does not include everything the Artifactory Query Language allows. For example, you can't use the "include" or "sort" clauses. These limitations were put in place to make the response structure known and constant.
Sorting, however, is still available with File Specs, regardless of whether you choose to use "pattern" or "aql". It is supported through the "sortBy", "sortOrder", "limit" and "offset" File Spec properties.
For example, the following File Spec will download only the 3 largest zip files:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "froggy/"
}
]
}
And you can do the same with "pattern", instead of "aql":
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "local/output/"
}
]
}
You can read more about File Specs here.
(After answering this question here, we also updated the File Specs documentation with these examples).
After a lot of testing and experimenting, I found that there are many ways to solve my main problem (getting the latest version of a package), but each of them requires some feature that is only available in the paid version, like sort() in AQL or [RELEASE] in the REST API. I did find, though, that I can still get JSON with the full list of files and their properties, and that I can download each individual file. This led me to a solution based on a simple Python script. I can't publish the whole thing, only the core, which should be fairly obvious:
import requests, argparse
from packaging import version
...
# AQL query: find all files under the given repository/path and include their properties.
query = """
items.find({
    "type" : "file",
    "$and":[{
        "repo" : {"$match" : \"""" + args.repository + """\"},
        "path" : {"$match" : \"""" + args.path + """\"}
    }]
}).include("name","repo","path","size","property.*")
"""
auth = (args.username, args.password)

def clearVersion(ver: str):
    # Keep only digits and dots so packaging.version can parse the tag.
    new = ''
    for letter in ver:
        if letter.isnumeric() or letter == ".":
            new += letter
    return new

def lastestArtifact(response: requests.Response):
    # Pick the result whose "tag" property parses as the highest version.
    response = response.json()
    latestVer = "0.0.0"
    currentItemIndex = 0
    chosenItemIndex = 0
    for results in response["results"]:
        for prop in results['properties']:
            if prop["key"] == "tag":
                if version.parse(clearVersion(prop["value"])) > version.parse(clearVersion(latestVer)):
                    latestVer = prop["value"]
                    chosenItemIndex = currentItemIndex
        currentItemIndex += 1
    return response["results"][chosenItemIndex]

# url (the Artifactory AQL endpoint) and args come from the elided setup above.
req = requests.post(url, data=query, auth=auth)
if args.verbose:
    print(req.text)
latest = lastestArtifact(req)
...
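For completeness, here is how the chosen item could then be downloaded, continuing the script above. This is only a sketch under my own assumptions: base_url is a hypothetical variable pointing at the Artifactory instance, and the repo/path/name fields come from the .include() clause in the query above.
# Hypothetical download step: Artifactory serves a file at <base_url>/<repo>/<path>/<name>.
base_url = "https://artifactory.example.com/artifactory"  # assumption, not part of the original script
item = lastestArtifact(req)
download_url = f"{base_url}/{item['repo']}/{item['path']}/{item['name']}"
resp = requests.get(download_url, auth=auth)
resp.raise_for_status()
with open(item["name"], "wb") as f:
    f.write(resp.content)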
I just want to point out that THIS IS NOT a permanent solution. We just didn't want to buy a license yet because of one single problem, but if more problems like this come up, we will definitely buy the PRO subscription.
I'm trying to set up a Module which will interact with /dev/serial0 on a Raspberry Pi B+ running Raspbian Stretch. I've used dtoverlay=pi3-miniuart-bt in /boot/config.txt to restore UART0/ttyAMA0 to GPIOs 14 and 15 (which is what my Raspi-based HW needs me to do).
I have attempted to make that device accessible to the Module using the following Container Create Options:
{
"HostConfig": {
"PortBindings": {
"1880/tcp": [
{
"HostPort": "1880"
}
]
},
"Privileged": true,
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyAMA0",
"PathInContainer": "/dev/ttyAMA0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyS0",
"PathInContainer": "/dev/ttyS0",
"CgroupPermissions": "rwm"
}
]
}
}
I can see /dev/serial0 when I ssh in, but I can't see it from within the running Module:
pi@azure-iot-test:~ $ ls -l /dev/ser*
lrwxrwxrwx 1 root root 7 Sep 24 21:17 /dev/serial0 -> ttyAMA0
lrwxrwxrwx 1 root root 5 Sep 24 21:17 /dev/serial1 -> ttyS0
pi@azure-iot-test:~ $ sudo docker exec hub-nodered ls -l /dev/ser*
ls: /dev/serial0: No such file or directory
ls: /dev/serial1: No such file or directory
Any ideas?
Followup:
Additional things I have tried, following ideas gleaned from here:
Adding "User": "node-red" to the root of the Container Create Options
Adding "User": "root" to the root of the Container Create Options
Adding "GroupAdd": "dialout" to "HostConfig": {...} in the Container Create Options
Followup #2
While I still can't interact with /dev/serial0, I am able to interact with /dev/ttyAMA0 using the following Container Create Options:
{
"HostConfig": {
"GroupAdd": [
"dialout"
],
"PortBindings": {
"1880/tcp": [
{
"HostPort": "80"
}
]
},
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
},
{
"PathOnHost": "/dev/ttyAMA0",
"PathInContainer": "/dev/ttyAMA0",
"CgroupPermissions": "rwm"
}
]
}
}
The noteworthy items appear to be:
I didn't need "Privileged": true in "HostConfig"
I don't seem to need a "User" added
I needed "GroupAdd": ["dialout"] in "HostConfig"
So, while it's satisfying that I can interact with a serial device as I wanted to, it seems odd that I can't interact with /dev/serial0, which seems like it's "the recommended way" from the reading I've done.
Thanks to help and insight from Raymond Mouthaan over on the very helpful Node-RED Slack channel, I found my way to these Container Create Options, which give me access to /dev/serial0:
{
"User": "node-red:dialout",
"HostConfig": {
"PortBindings": {
"1880/tcp": [
{
"HostPort": "80"
}
]
},
"Devices": [
{
"PathOnHost": "/dev/serial0",
"PathInContainer": "/dev/serial0",
"CgroupPermissions": "rwm"
}
]
}
}
This is different than the partial solution I found in "Followup #2" above in that I now do get access to /dev/serial0 as desired.
UPDATE: I originally posted this answer using "User": "root:dialout" rather than the "User": "node-red:dialout" you currently see above.
The first working solution was with "User": "root:root", but it seemed good to me to constrain to just the devices that are called out in Devices, which root:dialout seemed to do.
But I wondered whether I should be concerned security-wise with running as root at all.
Then I tried using node-red:dialout and it seems to be working perfectly, so I've updated the Container Create Options above to be what I think is the best answer.
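As a final sanity check that the container can actually open /dev/serial0, a minimal Python sketch with pyserial can be run inside the container (this assumes pyserial is installed there; it is not part of the Node-RED module itself):
import serial  # pyserial
# Open the symlinked device with a short timeout and attempt a trivial read.
port = serial.Serial("/dev/serial0", baudrate=9600, timeout=1)
print("opened:", port.name)
data = port.read(16)  # returns b"" after the timeout if nothing is connected
print("read %d bytes" % len(data))
port.close()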
I tried following all the steps in the blog whose URL is mentioned below.
https://blogs.sap.com/2019/04/29/sap-cloud-platform-backend-service-tutorial-13-api-called-from-external-tool/
While I am getting the authentication token and the entire flow is running properly, I just cannot change the value of expires_in which is 43199 by default.
How do I change that to some other value, let's say 5 minutes (300 seconds) ?
You can include this in the UAA configuration in the xs-security.json, or manually update the UAA instance using cf update-service <uaa_instance_name> -c <json_file | inline-JSON object>:
"oauth2-configuration": {
"token-validity": 7200
}
For completeness, here's a sample UAA JSON:
{
"xsappname": "example_uaa",
"tenant-mode": "dedicated",
"description": "Security profile of called application",
"scopes": [
{
"name": "uaa.user",
"description": "UAA"
}
],
"oauth2-configuration":{
"token-validity": 7200
},
"role-templates": [
{
"name": "Token_Exchange",
"description": "UAA",
"scope-references": [
"uaa.user"
]
}
]
}
I am trying to get the VolumeId and State of the volumes attached to my machines using the AWS API.
Code
#!/usr/local/bin/ruby
require "aws-sdk"
require "rubygems"
list=Aws::EC2::Client.new(region: "us-east-1")
volume=list.describe_volumes()
volumes=%x( aws ec2 describe-volumes --region='us-east-1' )
puts volumes
Below is sample output of the command aws ec2 describe-volumes --region='us-east-1'.
Please help me get the VolumeId and State from it.
Sample output of the API (JSON):
{
"Volumes": [
{
"AvailabilityZone": "us-east-1d",
"Attachments": [
{
"AttachTime": "2015-02-02T07:31:36.000Z",
"InstanceId": "i-bca66353",
"VolumeId": "vol-892a2acd",
"State": "attached",
"DeleteOnTermination": true,
"Device": "/dev/sda1"
}
],
"Encrypted": false,
"VolumeType": "gp2",
"VolumeId": "vol-892a2acd",
"State": "in-use",
"Iops": 100,
"SnapshotId": "snap-df910966",
"CreateTime": "2015-02-02T07:31:36.380Z",
"Size": 8
}
]
}
For getting just the volume IDs ->
JSON.parse(volumes)['Volumes'].map{ |v| v["VolumeId"] }
For getting just the states (note the key is "State", capitalized, in the API output) ->
JSON.parse(volumes)['Volumes'].map{ |v| v["State"] }
For getting a hash/map with volume IDs as keys and their states as values ->
JSON.parse(volumes)['Volumes'].map{ |v| [v["VolumeId"], v["State"]] }.to_h
I'm having some trouble getting this WebComponents polyfill + native-shim to work right across all devices through webpack.
Some background on my setup:
* Webpack2 + babel-6
* app is written in ES6, transpiling to ES5
* imports a node_module package written in ES6, which defines/registers a CustomElement used in the app
So the relevant webpack dev config looks something like this:
const config = webpackMerge(baseConfig, {
entry: [
'webpack/hot/only-dev-server',
'@webcomponents/custom-elements/src/native-shim',
'@webcomponents/custom-elements',
'<module that uses CustomElements>/dist/src/main',
'./src/client',
],
output: {
path: path.resolve(__dirname, './../dist/assets/'),
filename: 'app.js',
},
module: {
rules: [
{
test: /\.js$/,
loader: 'babel-loader',
options: {
cacheDirectory: true,
},
include: [
path.join(NODE_MODULES_DIR, '<module that uses CustomElements>'),
path.join(__dirname, '../src'),
],
},
],
},
...
Key takeaways:
* I need the CustomElements polyfill loaded before <module that uses CustomElements>
* I need <module that uses CustomElements> loaded before my app source
* <module that uses CustomElements> is ES6, so we're transpiling it (thus the include in the babel-loader).
The above works as expected in modern ES6 browsers (i.e. desktop Chrome). HOWEVER, it does not work in older browsers. For example, on iOS 8 I get the following error:
SyntaxError: Unexpected token ')'
pointing to the opening anonymous function in the native-shim pollyfill:
(() => {
'use strict';
// Do nothing if `customElements` does not exist.
if (!window.customElements) return;
const NativeHTMLElement = window.HTMLElement;
const nativeDefine = window.customElements.define;
const nativeGet = window.customElements.get;
So it seems to me like the native-shim would need to be transpiled to ES5:
include: [
+ path.join(NODE_MODULES_DIR, '@webcomponents/custom-elements/src/native-shim'),
path.join(NODE_MODULES_DIR, '<module that uses CustomElements>'),
path.join(__dirname, '../src'),
],
...but doing so now breaks both Chrome and iOS 8 with the following error:
app.js:1 Uncaught TypeError: Failed to construct 'HTMLElement': Please use the 'new' operator, this DOM object constructor cannot be called as a function.
at new StandInElement (native-shim.js:122)
at HTMLDocument.createElement (<anonymous>:1:1545)
at ReactDOMComponent.mountComponent (ReactDOMComponent.js:504)
at Object.mountComponent (ReactReconciler.js:46)
at ReactCompositeComponentWrapper.performInitialMount (ReactCompositeComponent.js:371)
at ReactCompositeComponentWrapper.mountComponent (ReactCompositeComponent.js:258)
at Object.mountComponent (ReactReconciler.js:46)
at Object.updateChildren (ReactChildReconciler.js:121)
at ReactDOMComponent._reconcilerUpdateChildren (ReactMultiChild.js:208)
at ReactDOMComponent._updateChildren (ReactMultiChild.js:312)
.. which takes me to this constructor() line in the native-shim:
window.customElements.define = (tagname, elementClass) => {
const elementProto = elementClass.prototype;
const StandInElement = class extends NativeHTMLElement {
constructor() {
Phew. So it's very unclear to me how we actually include this in a webpack-based build, where the dependency using CustomElements is ES6 (and needs transpiling):
* Transpiling the native-shim to ES5 doesn't work.
* Using the native-shim as-is at the top of the bundle entry point doesn't work for iOS 8, but does for Chrome.
* Not including the native-shim breaks both Chrome and iOS.
I'm really quite frustrated with web components at this point. I just want to use this one dependency that happens to be built with web components. How can I get it to work properly in a webpack build, and work across all devices? Am I missing something obvious here?
My .babelrc config, for posterity's sake (the dev config is most relevant):
{
"presets": [
["es2015", { "modules": false }],
"react"
],
"plugins": [
"transform-custom-element-classes",
"transform-object-rest-spread",
"transform-object-assign",
"transform-exponentiation-operator"
],
"env": {
"test": {
"plugins": [
[ "babel-plugin-webpack-alias", { "config": "./cfg/test.js" } ]
]
},
"dev": {
"plugins": [
"react-hot-loader/babel",
[ "babel-plugin-webpack-alias", { "config": "./cfg/dev.js" } ]
]
},
"dist": {
"plugins": [
[ "babel-plugin-webpack-alias", { "config": "./cfg/dist.js" } ],
"transform-react-constant-elements",
"transform-react-remove-prop-types",
"minify-dead-code-elimination",
"minify-constant-folding"
]
},
"production": {
"plugins": [
[ "babel-plugin-webpack-alias", { "config": "./cfg/server.js" } ],
"transform-react-constant-elements",
"transform-react-remove-prop-types",
"minify-dead-code-elimination",
"minify-constant-folding"
]
}
}
}
I was able to achieve something similar with the .babelrc plugin pipeline below. It looks like the main difference is https://babeljs.io/docs/plugins/transform-es2015-classes/, but I honestly can't remember what problems these plugins were solving specifically:
{
"plugins": [
"transform-runtime",
["babel-plugin-transform-builtin-extend", {
"globals": ["Error", "Array"]
}],
"syntax-async-functions",
"transform-async-to-generator",
"transform-custom-element-classes",
"transform-es2015-classes"
]
}