We are creating ECR repositories using Terraform. I created the repos using for_each, and I am trying to attach a policy, but I am unable to reference the repo in the aws_ecr_repository_policy resource.

tfvars file
app_ecr_repo = [
{ name = "project-1" },
{ name = "project-2" }
]
Using for_each, we take the two repo names:
module "ecr" {
source = "../../modules/ecr"
# Common
default_tags = var.default_tags
# ECR
for_each = { for repos in var.app_ecr_repo : join("-", [repos.name]) => repos }
ecr_respositories = [
{
repo_name = each.value.name
lifecycle_policy_file = "ecr_policy_01_tagged.json"
image_tag_mutability = "IMMUTABLE"
image_scanning_enabled = true
}
]
}
How do I reference the ECR repository name here?
resource "aws_ecr_repository_policy" "repo_policy" {
repository = module.ecr.name
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "new policy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:DescribeRepositories",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:DeleteRepository",
"ecr:BatchDeleteImage",
"ecr:SetRepositoryPolicy",
"ecr:DeleteRepositoryPolicy"
]
}
]
}
EOF
}
This is the child module, where we pass the resources for aws_ecr_repository:
############################################################################################################
# Elastic Container Registry (ECR)
############################################################################################################
resource "aws_ecr_repository" "this" {
count = length(var.ecr_respositories) > 0 ? length(var.ecr_respositories) : 0
name = lookup(var.ecr_respositories[count.index], "repo_name", null)
image_tag_mutability = lookup(var.ecr_respositories[count.index], "image_tag_mutability", var.image_tag_mutability)
image_scanning_configuration {
scan_on_push = lookup(var.ecr_respositories[count.index], "image_scanning_enabled", var.image_scanning_enabled)
}
tags = merge(
{
"Name" = lookup(var.ecr_respositories[count.index], "repo_name", null)
},
var.tags,
var.default_tags
)
}
############################################################################################################
# ECR Lifecycle Policy
############################################################################################################
locals {
ecr_respositories_with_policy = [
for repo in var.ecr_respositories :
repo
if lookup(repo, "lifecycle_policy_file", null) != null
]
}
resource "aws_ecr_lifecycle_policy" "this" {
count = length(local.ecr_respositories_with_policy) > 0 ? length(local.ecr_respositories_with_policy) : 0
policy = file("${path.cwd}/ecr_lifecycle_policy/${local.ecr_respositories_with_policy[count.index].lifecycle_policy_file}")
repository = local.ecr_respositories_with_policy[count.index].repo_name
depends_on = [aws_ecr_repository.this]
}

In order to be able to access the attributes of the resources created using modules, the child module has to have an output defined [1]. Accessing the child module output [2] is a bit different compared to outputs defined without using modules. So, in the child module code, you would have to add the following:
output "ecr_name" {
description = "List of ECR repository names."
value = aws_ecr_repository.this[*].name
}
Note the splat expression: since the child module creates the repositories with count, the output is a list of names rather than a single name.
Since the module was invoked by using the for_each meta-argument, in the policy, you would say something like:
resource "aws_ecr_repository_policy" "repo_policy" {
for_each = { for repos in var.app_ecr_repo : join("-", [repos.name]) => repos }
repository = module.ecr[each.key].ecr_name
.
.
.
}
Referring to module instances is described in [3].
EDIT
The child module is using the count meta-argument, while the root module is using the for_each meta-argument. Because of that, it is hard to map the module's output to the input required by the aws_ecr_repository_policy resource in a dynamic way. The only ways this could work are:
a) Hardcoding the value of the key for the resource created with the module, e.g., repository = module.ecr["project-1"].ecr_name[count.index], along with the count meta-argument set to count = length(module.ecr["project-1"].ecr_name). This would have to be repeated for project-2.
b) Hardcoding the value of the index for the output and using the same for_each, i.e., for_each = { for repos in var.app_ecr_repo : join("-", [repos.name]) => repos } and the repository = module.ecr[each.key].ecr_name[0]
The second option is a bit better, but only because the module call currently passes a list with one element:
ecr_respositories = [
{
repo_name = each.value.name
image_tag_mutability = "IMMUTABLE"
image_scanning_enabled = true
}
]
If the number of elements were increased, the solution would not work, and there would have to be multiple instances of the aws_ecr_repository_policy resource. Alternatively, the resource could be added to the module itself, which avoids these headaches.
Solution 1
In the root module, add this:
resource "aws_ecr_repository_policy" "repo_policy" {
for_each = { for repos in var.app_ecr_repo : join("-", [repos.name]) => repos }
repository = module.ecr[each.key].ecr_name[0]
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "new policy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:DescribeRepositories",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:DeleteRepository",
"ecr:BatchDeleteImage",
"ecr:SetRepositoryPolicy",
"ecr:DeleteRepositoryPolicy"
]
}
]
}
EOF
}
Solution 2
In the child module, add the following code:
resource "aws_ecr_repository_policy" "repo_policy" {
count = length(var.ecr_respositories) > 0 ? length(var.ecr_respositories) : 0
repository = aws_ecr_repository.this[count.index].name
policy = <<EOF
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "new policy",
"Effect": "Allow",
"Principal": "*",
"Action": [
"ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage",
"ecr:BatchCheckLayerAvailability",
"ecr:PutImage",
"ecr:InitiateLayerUpload",
"ecr:UploadLayerPart",
"ecr:CompleteLayerUpload",
"ecr:DescribeRepositories",
"ecr:GetRepositoryPolicy",
"ecr:ListImages",
"ecr:DeleteRepository",
"ecr:BatchDeleteImage",
"ecr:SetRepositoryPolicy",
"ecr:DeleteRepositoryPolicy"
]
}
]
}
EOF
}
[1] https://www.terraform.io/language/values/outputs#declaring-an-output-value
[2] https://www.terraform.io/language/values/outputs#accessing-child-module-outputs
[3] https://www.terraform.io/language/meta-arguments/for_each#referring-to-instances

Related

AKS ARM template | Query

I am working on building an AKS cluster using an ARM template. I have a situation where, if I set the OS type to Windows, the template should populate "windowsProfile", and if I choose Linux, it should populate "linuxProfile".
Here is the template with the Linux profile chosen. If I provide the OSType parameter with the value Windows, how do I insert windowsProfile here?
"name": "[parameters('AKSClustername')]",
"type": "Microsoft.ContainerService/managedClusters",
"apiVersion": "2021-05-01",
"location": "[parameters('Region')]",
"properties": {
"kubernetesVersion": "[parameters('kubernetesversion')]",
"dnsPrefix": "dnsprefix",
"agentPoolProfiles": [
{
"name": "[parameters('agentPoolName')]",
"count": "[parameters('nodeCount')]",
"vmSize": "[parameters('vmSize')]",
"osType": "[parameters('OSType')]",
"storageProfile": "ManagedDisks",
"enableAutoScaling": "[parameters('autoscalepools')]",
// "availabilityZones": "[if(equals(parameters('availabilityZones'), bool('true')), variables('AVZone'), json('null'))]"
"availabilityZones": "[if(equals(parameters('availabilityZones'), or('1', '2', '3')), variables('AVZone'), json('null'))]"
}
],
"linuxProfile": {
"adminUsername": "adminUserName",
"ssh": {
"publicKeys": [
{
"keyData": "keyData"
}
]
}
},
To be honest, this won't be very readable or maintainable with an ARM template.
I would suggest having a look at Bicep.
It compiles to an ARM template but is more readable. Using Bicep, you would be able to do something like this:
//main.bicep
param AKSClustername string
param Region string
param kubernetesversion string
param agentPoolName string
param nodeCount int
param vmSize string
param OSType string
param autoscalepools bool
// Define common properties
var baseProperties = {
kubernetesVersion: kubernetesversion
dnsPrefix: 'dnsprefix'
agentPoolProfiles: [
{
name: agentPoolName
count: nodeCount
vmSize: vmSize
osType: OSType
storageProfile: 'ManagedDisks'
enableAutoScaling: autoscalepools
}
]
}
// Add profile based on OSType
var propertiesWithOsProfile = union(baseProperties, OSType == 'Linux' ? {
linuxProfile: {
adminUsername: 'adminUserName'
ssh: {
publicKeys: [
{
keyData: 'keyData'
}
]
}
}
} : {
windowsProfile: {
adminPassword: ''
adminUsername: ''
licenseType: 'Windows_Server'
}
}
)
// Create the cluster
resource aks 'Microsoft.ContainerService/managedClusters@2021-05-01' = {
name: AKSClustername
location: Region
properties: propertiesWithOsProfile
}
Both the Azure CLI and PowerShell support Bicep.
If you do need to generate an ARM template, you can run this command:
az bicep build --file main.bicep
It will generate the ARM template for you.
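For comparison, the conditional-union idea that Bicep's union() expresses can be sketched in Python, where dict unpacking plays the same role; all property values below are placeholders, not real cluster settings:

```python
# Merge common properties with an OS-specific profile chosen by a
# conditional -- the same shape as Bicep's union(base, cond ? a : b).
def build_properties(os_type: str) -> dict:
    base = {
        "kubernetesVersion": "1.21.2",  # placeholder value
        "dnsPrefix": "dnsprefix",
    }
    linux_profile = {"linuxProfile": {"adminUsername": "adminUserName"}}
    windows_profile = {"windowsProfile": {"adminUsername": "", "licenseType": "Windows_Server"}}
    # Dict unpacking merges the chosen profile into the base properties.
    return {**base, **(linux_profile if os_type == "Linux" else windows_profile)}

print("linuxProfile" in build_properties("Linux"))      # True
print("windowsProfile" in build_properties("Windows"))  # True
```

Exactly one of the two profile keys ends up in the result, which is what the ARM template cannot express cleanly inline.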

Creating Managed Policy in CDK errors with MalformedPolicy

When I try to deploy a seemingly simple CDK stack, it fails with a strange error. I don't get this same behavior when I create a different iam.ManagedPolicy in a different file, and that one has a much more complicated policy with several actions, etc. What am I doing wrong?
import aws_cdk.core as core
from aws_cdk import aws_iam as iam
from constructs import Construct
from master_payer import ( env, myenv )
class FromStack(core.Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
#myenv['pma'] = an account ID (12 digits)
#env = 'dev'
rolename = f"arn:aws:iam:{myenv['pma']}:role/CrossAccount{env.capitalize()}MpaAdminRole"
mpname = f"{env.capitalize()}MpaAdminPolicy"
pol = iam.ManagedPolicy(self, mpname, managed_policy_name = mpname,
document = iam.PolicyDocument(statements= [
iam.PolicyStatement(actions=["sts:AssumeRole"], effect=iam.Effect.ALLOW, resources=[rolename])
]))
grp = iam.Group(self, f"{env.capitalize()}MpaAdminGroup", managed_policies=[pol])
The cdk deploy output:
FromStack: deploying...
FromStack: creating CloudFormation changeset...
2:19:52 AM | CREATE_FAILED | AWS::IAM::ManagedPolicy | DevMpaAdminPolicyREDACTED
The policy failed legacy parsing (Service: AmazonIdentityManagement; Status Code: 400; Error Code: MalformedPolicyDocument; Request ID: REDACTED-GUID; Proxy: null)
new ManagedPolicy (/tmp/jsii-kernel-EfRyKw/node_modules/@aws-cdk/aws-iam/lib/managed-policy.js:39:26)
\_ /tmp/tmpxl5zxf8k/lib/program.js:8432:58
\_ Kernel._wrapSandboxCode (/tmp/tmpxl5zxf8k/lib/program.js:8860:24)
\_ Kernel._create (/tmp/tmpxl5zxf8k/lib/program.js:8432:34)
\_ Kernel.create (/tmp/tmpxl5zxf8k/lib/program.js:8173:29)
\_ KernelHost.processRequest (/tmp/tmpxl5zxf8k/lib/program.js:9757:36)
\_ KernelHost.run (/tmp/tmpxl5zxf8k/lib/program.js:9720:22)
\_ Immediate._onImmediate (/tmp/tmpxl5zxf8k/lib/program.js:9721:46)
\_ processImmediate (node:internal/timers:464:21)
❌ FromStack failed: Error: The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
at Object.waitForStackDeploy (/usr/local/lib/node_modules/aws-cdk/lib/api/util/cloudformation.ts:307:11)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at prepareAndExecuteChangeSet (/usr/local/lib/node_modules/aws-cdk/lib/api/deploy-stack.ts:351:26)
at CdkToolkit.deploy (/usr/local/lib/node_modules/aws-cdk/lib/cdk-toolkit.ts:194:24)
at initCommandLine (/usr/local/lib/node_modules/aws-cdk/bin/cdk.ts:267:9)
The stack named FromStack failed creation, it may need to be manually deleted from the AWS console: ROLLBACK_COMPLETE
And the cdk synth output, which cfn-lint is happy with (no warnings, errors, or informational violations):
{
"Resources": {
"DevMpaAdminPolicyREDACTED": {
"Type": "AWS::IAM::ManagedPolicy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Resource": "arn:aws:iam:REDACTED-ACCOUNT-ID:role/CrossAccountDevMpaAdminRole"
}
],
"Version": "2012-10-17"
},
"Description": "",
"ManagedPolicyName": "DevMpaAdminPolicy",
"Path": "/"
},
"Metadata": {
"aws:cdk:path": "FromStack/DevMpaAdminPolicy/Resource"
}
},
"DevMpaAdminGroupREDACTED": {
"Type": "AWS::IAM::Group",
"Properties": {
"ManagedPolicyArns": [
{
"Ref": "DevMpaAdminPolicyREDACTED"
}
]
},
"Metadata": {
"aws:cdk:path": "FromStack/DevMpaAdminGroup/Resource"
}
},
"CDKMetadata": {
"Type": "AWS::CDK::Metadata",
"Properties": {
"Analytics": "v2:deflate64:REDACTED-B64"
},
"Metadata": {
"aws:cdk:path": "FromStack/CDKMetadata/Default"
}
}
}
}
Environment Specs
$ cdk --version
2.2.0 (build 4f5c27c)
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.5 (Ootpa)
$ python --version
Python 3.6.8
$ node --version
v16.8.0
The role ARN rolename was incorrect; I was missing a colon after iam. So it's iam:: not iam:. I think I copied the single colon from a (wrong) example somewhere on the Internet. Gah...
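The double colon is easy to miss because IAM is a global service, so the region field of its ARNs is empty. A quick Python sketch (the account ID is illustrative):

```python
# ARN format: arn:partition:service:region:account-id:resource
# IAM is global, so the region segment is empty -- hence "iam::".
account_id = "123456789012"  # illustrative account ID
env = "dev"

# Wrong: a single colon after "iam" drops the empty region field (5 fields).
wrong = f"arn:aws:iam:{account_id}:role/CrossAccount{env.capitalize()}MpaAdminRole"
# Correct: the empty region sits between "iam" and the account ID (6 fields).
right = f"arn:aws:iam::{account_id}:role/CrossAccount{env.capitalize()}MpaAdminRole"

print(wrong.split(":"))  # 5 fields, no region slot
print(right.split(":"))  # 6 fields, empty string where the region goes
```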

Error using EFS in ECS, returns unknown filesystem type 'efs'

I'm using a Docker image for Jenkins (jenkins/jenkins:2.277.1-lts-alpine) in AWS ECS, and I want to persist the data using AWS EFS.
I created the EFS and got its ID (fs-7dcef848).
My terraform code looks like:
resource "aws_ecs_service" "jenkinsService" {
cluster = var.ECS_cluster
name = var.jenkins_name
task_definition = aws_ecs_task_definition.jenkinsService.arn
deployment_maximum_percent = "200"
deployment_minimum_healthy_percent = 50
desired_count = var.service_desired_count
tags = {
"ManagedBy" : "Terraform"
}
}
resource "aws_ecs_task_definition" "jenkinsService" {
family = "${var.jenkins_name}-task"
container_definitions = file("task-definitions/service.json")
volume {
name = var.EFS_name
efs_volume_configuration {
file_system_id = "fs-7dcef848"
}
}
tags = {
"ManagedBy" : "Terraform"
}
}
and the service.json
[
{
"name": "DevOps-jenkins",
"image": "jenkins/jenkins:2.284-alpine",
"cpu": 0,
"memoryReservation": 1024,
"essential": true,
"portMappings": [
{
"containerPort" : 8080,
"hostPort" : 80
}
],
"mountPoints": [
{
"sourceVolume" : "DevOps-Jenkins",
"containerPath" : "/var/jenkins_home"
}
]
}
]
The terraform apply works OK, but the task cannot start, returning:
Stopped reason Error response from daemon: create ecs-DevOps-jenkins-task-33-DevOps-Jekins-bcb381cd9dd0f7ae2700: VolumeDriver.Create: mounting volume failed: mount: unknown filesystem type 'efs'
Does anyone know what's happening?
Is there another way to persist data?
Thanks in advance.
Solved: my first attempt was to install the "amazon-efs-utils" package using a remote-exec provisioner.
But following the indications provided by @Oguzhan Aygun, I did it in the user data section instead, and it worked!
Thanks!

Artifactory and Jenkins - get file with newest/biggest custom property

I have a generic repository "my_repo". I uploaded files there from Jenkins to paths like my_repo/branch_buildNumber/package.tar.gz, with a custom property "tag" like "1.9.0", "1.10.0", etc. I want to get the item/file with the latest/newest tag.
I tried to modify Example 2 from this link ...
https://www.jfrog.com/confluence/display/JFROG/Using+File+Specs#UsingFileSpecs-Examples
... and add sorting and limit the way it was done here ...
https://www.jfrog.com/confluence/display/JFROG/Artifactory+Query+Language#ArtifactoryQueryLanguage-limitDisplayLimitsandPagination
But I'm getting an "unknown property desc" error.
The Jenkins Artifactory Plugin, like most of the JFrog clients, supports File Specs for downloading and uploading generic files.
The File Specs schema is described here. When creating a File Spec for downloading files, you have the option of using the "pattern" property, which can include wildcards. For example, the following spec downloads all the zip files from the my-local-repo repository into the local froggy directory:
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"target": "froggy/"
}
]
}
Alternatively, you can use "aql" instead of "pattern". The following spec provides the same result as the previous one:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"target": "froggy/"
}
]
}
The AQL syntax allowed inside File Specs does not include everything the Artifactory Query Language allows. For example, you can't use the "include" or "sort" clauses. These limitations were put in place to keep the response structure known and constant.
Sorting, however, is still available with File Specs, regardless of whether you choose to use "pattern" or "aql". It is supported through the "sortBy", "sortOrder", "limit" and "offset" File Spec properties.
For example, the following File Spec will download only the 3 largest zip files:
{
"files": [
{
"aql": {
"items.find": {
"repo": "my-local-repo",
"$or": [
{
"$and": [
{
"path": {
"$match": "*"
},
"name": {
"$match": "*.zip"
}
}
]
}
]
}
},
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "froggy/"
}
]
}
And you can do the same with "pattern", instead of "aql":
{
"files": [
{
"pattern": "my-local-repo/*.zip",
"sortBy": ["size"],
"sortOrder": "desc",
"limit": 3,
"target": "local/output/"
}
]
}
You can read more about File Specs here.
(After answering this question here, we also updated the File Specs documentation with these examples).
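If you generate specs from a script, the same download spec can also be composed programmatically; here is a minimal Python sketch (the repository and target names are illustrative, matching the examples above):

```python
import json

# Compose a download File Spec that fetches the 3 largest zip files.
spec = {
    "files": [
        {
            "pattern": "my-local-repo/*.zip",
            "sortBy": ["size"],    # sort key: artifact size
            "sortOrder": "desc",   # largest first
            "limit": 3,            # keep only the top 3
            "target": "froggy/",
        }
    ]
}

# Serialize for use with a JFrog client that accepts a spec file.
spec_json = json.dumps(spec, indent=2)
print(spec_json)
```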
After a lot of testing and experimenting, I found that there are many ways of solving my main problem (getting the latest version of a package), but each of them requires some feature that is only available in the paid version, like sort() in AQL or [RELEASE] in the REST API. However, I found that I can still get a JSON listing of all files and their properties, and I can also download each single file. This led me to a solution with a simple Python script. I can't publish the whole thing, only the core, but the rest should be fairly obvious:
import requests, argparse
from packaging import version
...
# AQL query: find every file under the given repository/path and
# include its name, size and custom properties in the response.
query = """
items.find({
    "type" : "file",
    "$and":[{
        "repo" : {"$match" : \"""" + args.repository + """\"},
        "path" : {"$match" : \"""" + args.path + """\"}
    }]
}).include("name","repo","path","size","property.*")
"""
auth = (args.username, args.password)

def clearVersion(ver: str):
    # Keep only digits and dots so the tag parses cleanly as a version.
    new = ''
    for letter in ver:
        if letter.isnumeric() or letter == ".":
            new += letter
    return new

def latestArtifact(response: requests.Response):
    # Scan all results and remember the index of the item whose
    # "tag" property holds the highest version.
    response = response.json()
    latestVer = "0.0.0"
    currentItemIndex = 0
    chosenItemIndex = 0
    for results in response["results"]:
        for prop in results['properties']:
            if prop["key"] == "tag":
                if version.parse(clearVersion(prop["value"])) > version.parse(clearVersion(latestVer)):
                    latestVer = prop["value"]
                    chosenItemIndex = currentItemIndex
        currentItemIndex += 1
    return response["results"][chosenItemIndex]

req = requests.post(url, data=query, auth=auth)
if args.verbose:
    print(req.text)
latest = latestArtifact(req)
...
I just want to point out that THIS IS NOT a permanent solution. We just didn't want to buy a license yet because of one single problem. But if there are more problems like this, then we will definitely buy the Pro subscription.
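For what it's worth, the reason packaging.version matters here is that plain string comparison would rank "1.9.0" above "1.10.0". A self-contained sketch with fabricated sample data in the shape of the AQL response above:

```python
from packaging import version

def clear_version(ver: str) -> str:
    # Keep only digits and dots, e.g. "v1.10.0-rc" -> "1.10.0"
    return "".join(ch for ch in ver if ch.isnumeric() or ch == ".")

# Fabricated results mimicking the structure the AQL query returns.
results = [
    {"name": "package.tar.gz", "path": "main_41", "properties": [{"key": "tag", "value": "1.9.0"}]},
    {"name": "package.tar.gz", "path": "main_57", "properties": [{"key": "tag", "value": "1.10.0"}]},
    {"name": "package.tar.gz", "path": "main_33", "properties": [{"key": "tag", "value": "1.2.3"}]},
]

def latest_artifact(items):
    # Pick the item whose "tag" property parses to the highest version.
    def tag_of(item):
        for prop in item["properties"]:
            if prop["key"] == "tag":
                return version.parse(clear_version(prop["value"]))
        return version.parse("0.0.0")
    return max(items, key=tag_of)

print(latest_artifact(results)["path"])  # -> main_57, since 1.10.0 > 1.9.0
```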

How to protect awsconfiguration.json data details in iOS app?

I'm using awsconfiguration.json for AWS Cognito in my iOS application written in Swift. But I'm worried about security, since awsconfiguration.json is stored in my local directory. How can I protect this JSON file against tampering or a man-in-the-middle attack?
Please see the similar GitHub issue https://github.com/aws-amplify/aws-sdk-ios/issues/1671
The comments talk about:
the file is non-sensitive data, so resources that should be accessed by authenticated users should be configured with the appropriate controls. Amplify CLI helps you with this, depending on the resources you are provisioning in AWS.
there is a way to configure it in-memory via AWSInfo.configureDefaultAWSInfo(awsConfiguration)
Configure your AWS dependencies using in-memory configuration instead of the configuration JSON file as suggested by AWS documentation.
Sample code:
import Amplify

func buildAuthConfiguration() -> [String:JSONValue] {
return [
"awsCognitoAuthPlugin": [
"IdentityManager": [
"Default": [:]
],
"Auth": [
"Default": [
"authenticationFlowType": "String"
]
],
"CognitoUserPool": [
"Default": [
"PoolId": "String",
"AppClientId": "String",
"Region": "String"
]
],
"CredentialsProvider": [
"CognitoIdentity": [
"Default": [
"PoolId": "String",
"Region": "String"
]
]
]
]
]
}
func buildAPIConfiguration() -> [String: JSONValue] {
return [
"awsAPIPlugin": [
"apiName" : [
"endpoint": "String",
"endpointType": "String",
"authorizationType": "String",
"region": "String"
]
]
]
}
func configureAmplify() {
let authConf = AuthCategoryConfiguration(plugins: buildAuthConfiguration())
let apiConf = APICategoryConfiguration(plugins: buildAPIConfiguration())
let config = AmplifyConfiguration(
analytics: nil,
api: apiConf,
auth: authConf,
dataStore: nil,
hub: nil,
logging: nil,
predictions: nil,
storage: nil
)
do {
try Amplify.configure(config)
} catch {
print("Failed to configure Amplify: \(error)")
}
// Rest of your code
}
Source: https://github.com/aws-amplify/amplify-ios/issues/1171#issuecomment-832988756
You can add data protection to your app's files by saving them in the app's file directory with encryption enabled.
The following documentation can help you achieve it:
https://developer.apple.com/documentation/uikit/protecting_the_user_s_privacy/encrypting_your_app_s_files
The fix adding a new constructor was released in version 2.13.6 of the SDK, to allow passing a JSONObject containing the configuration from the awsconfiguration.json file. You can store the JSONObject's contents in your own security mechanism and provide it at runtime through the constructor.
https://github.com/aws-amplify/aws-sdk-android/pull/1002
