Getting an error trying to use a custom certificate with Azure Bicep - azure-application-gateway

Here is my demo sandbox code showing how I deploy with Bicep. I'm using a custom certificate for this:
param profileName string = 'testresearchcdn'
@allowed([
'Standard_Verizon'
'Premium_Verizon'
'Custom_Verizon'
'Standard_Akamai'
'Standard_ChinaCdn'
'Standard_Microsoft'
'Premium_ChinaCdn'
'Standard_AzureFrontDoor'
'Premium_AzureFrontDoor'
'Standard_955BandWidth_ChinaCdn'
'Standard_AvgBandWidth_ChinaCdn'
'StandardPlus_ChinaCdn'
'StandardPlus_955BandWidth_ChinaCdn'
'StandardPlus_AvgBandWidth_ChinaCdn'
])
param sku string = 'Standard_Microsoft'
param endpointName string = 'testresearchcdn'
@description('Whether the HTTP traffic is allowed.')
param isHttpAllowed bool = true
@description('Whether the HTTPS traffic is allowed.')
param isHttpsAllowed bool = true
@description('Query string caching behavior.')
@allowed([
'IgnoreQueryString'
'BypassCaching'
'UseQueryString'
])
param queryStringCachingBehavior string = 'IgnoreQueryString'
@description('Content type that is compressed.')
param contentTypesToCompress array = [
'text/plain'
'text/html'
'text/css'
'application/x-javascript'
'text/javascript'
]
@description('Whether the compression is enabled.')
param isCompressionEnabled bool = true
@description('Location for all resources.')
param location string = 'global'
resource testresearchcdn 'Microsoft.Cdn/profiles@2020-09-01' = {
name: profileName
location: location
properties: {}
sku: {
name: sku
}
}
resource Microsoft_Cdn_profiles_endpoints_testresearchcdn 'Microsoft.Cdn/profiles/endpoints@2020-09-01' = {
name: endpointName
parent: testresearchcdn
location: location
properties: {
originHostHeader: 'testresearchcdn.blob.core.windows.net'
isHttpAllowed: isHttpAllowed
isHttpsAllowed: isHttpsAllowed
queryStringCachingBehavior: queryStringCachingBehavior
contentTypesToCompress: contentTypesToCompress
isCompressionEnabled: isCompressionEnabled
origins: [
{
name: 'testresearchcdn-blob-core-windows-net'
properties: {
hostName: 'testresearchcdn.blob.core.windows.net'
}
}
]
}
}
resource test_researchcdn_example_com 'Microsoft.Cdn/profiles/endpoints/customDomains@2016-04-02' = {
name: 'test-researchcdn-example-com'
parent: Microsoft_Cdn_profiles_endpoints_testresearchcdn
properties: {
hostName: 'test-researchcdn.example.com'
}
}
resource example_wildcard_2019 'Microsoft.Cdn/profiles/secrets@2020-09-01' = {
name: 'DDKeyVault1'
parent: testresearchcdn
properties: {
parameters: {
type: 'CustomerCertificate'
certificateAuthority: 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
secretSource: {
id: 'https://DDkeyvault1.vault.azure.net/certificates/example-wildcard-2019/xxxxxxxxxxxxxxxxxxxxx'
}
secretVersion: ''
subjectAlternativeNames: [
'*.example.com'
'example.com'
]
useLatestVersion: false
}
}
dependsOn: [
test_researchcdn_example_com
]
}
This is my error:
"code": "BadRequest",
"message": "SecretSource id is invalid."
I have tried the Certificate Identifier, the Secret Identifier, and the Key Vault resource ID where the secret is located for secretSource, but I'm getting the same error. What am I missing?

You are defining the secretSource id in the wrong way. In an ARM/Bicep template, you cannot specify the id as the Key Vault URL (https://<vaultName>.vault.azure.net/certificates/<certificateName>); instead, you have to specify the full resource ID: /subscriptions/<SubscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.KeyVault/vaults/<KeyvaultName>/certificates/<CertificateName>
So in your code, instead of the below:
secretSource: {
id: 'https://DDkeyvault1.vault.azure.net/certificates/example-wildcard-2019/xxxxxxxxxxxxxxxxxxxxx'
}
you have to use this:
secretSource: {
id: '/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-KEYVAULT-RESOURCE-GROUP-NAME>/providers/Microsoft.KeyVault/vaults/DDkeyvault1/certificates/example-wildcard-2019/xxxxxxxxxxxxxxxxxxxxx'
}
Note: Before running the above, please make sure you have granted Azure CDN access to your Key Vault.
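As a sketch, the same fix can be written with Bicep's resourceId() function instead of a hand-built string. This follows the ID shape from the answer above; the resource-group name kv-rg is a placeholder, and it assumes the vault is in the same subscription as the deployment:

```bicep
// Sketch only: 'kv-rg' is a placeholder for the resource group containing the vault.
secretSource: {
  id: resourceId('kv-rg', 'Microsoft.KeyVault/vaults/certificates', 'DDkeyvault1', 'example-wildcard-2019')
}
```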

Related

cdk watch command forces full deploy with unrelated error on file change

I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some lambda code deployed in the lambda/ folder, and the frontend is stored in a bucket populated from the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders only trigger updating the related bucket content or the related lambda?
Here is the stack.ts for quick reference; just in case, you can also take a look at the repo.
export class CdkAuthWebappStack extends Stack {
constructor(scope: Construct, id: string, props?: StackProps) {
super(scope, id, props);
const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
type: 'String',
description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
default: process.env.DOMAIN_NAME || ''
})
const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_ID || ''
})
const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
type: 'String',
description: 'From google project',
noEcho: true,
default: process.env.GOOGLE_CLIENT_SECRET || ''
})
if(!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value){
throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
}
const s3frontend = new s3.Bucket(this, 'Bucket', {
bucketName: domainPrefixParam.valueAsString+'-frontend-bucket',
blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
encryption: s3.BucketEncryption.S3_MANAGED,
enforceSSL: true,
versioned: false,
removalPolicy: cdk.RemovalPolicy.DESTROY,
autoDeleteObjects: true,
websiteIndexDocument: "index.html",
});
//TODO: make this origin access identity not legacy when deploying
const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
comment: "Used to give bucket read to cloudfront"
})
const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
originConfigs: [
{
s3OriginSource: {
s3BucketSource: s3frontend,
originAccessIdentity: cfdistributionoriginaccessidentity
},
behaviors: [{
isDefaultBehavior: true,
allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
forwardedValues: {
queryString: true,
cookies: { forward: 'all' }
},
minTtl: cdk.Duration.seconds(0),
defaultTtl: cdk.Duration.seconds(3600),
maxTtl: cdk.Duration.seconds(86400)
}]
}
]
})
s3frontend.grantRead(cfdistributionoriginaccessidentity)
const cfdistributionpolicy = new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['cloudfront:CreateInvalidation'],
resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
});
const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
userPoolName: 'web-app-user-pool',
selfSignUpEnabled: false
})
const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
clientId: googleClientIdParam.valueAsString,
clientSecret: googleClientSecretParam.valueAsString,
userPool: userpool,
attributeMapping: {
email: cognito.ProviderAttribute.GOOGLE_EMAIL
},
scopes: [ 'email' ]
})
// this is used to make the hostedui reachable
userpool.addDomain('Domain', {
cognitoDomain: {
domainPrefix: domainPrefixParam.valueAsString
}
})
const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`
const client = userpool.addClient('Client', {
oAuth: {
flows: {
authorizationCodeGrant: true
},
callbackUrls: [
CLOUDFRONT_PUBLIC_URL
],
logoutUrls: [
CLOUDFRONT_PUBLIC_URL
],
scopes: [
cognito.OAuthScope.EMAIL,
cognito.OAuthScope.OPENID,
cognito.OAuthScope.PHONE
]
},
supportedIdentityProviders: [
cognito.UserPoolClientIdentityProvider.GOOGLE
]
})
client.node.addDependency(userpoolidentityprovidergoogle)
// defines an AWS Lambda resource
const securedlambda = new lambda.Function(this, 'AuhtorizedRequestsHandler', {
runtime: lambda.Runtime.NODEJS_14_X,
code: lambda.Code.fromAsset('lambda'),
handler: 'secured.handler'
});
const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)
const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
restApiName: domainPrefixParam.valueAsString,
defaultCorsPreflightOptions: {
"allowOrigins": apigw.Cors.ALL_ORIGINS,
"allowMethods": apigw.Cors.ALL_METHODS,
}
})
const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
cognitoUserPools: [userpool]
})
const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
authorizer: backendapiauthorizer,
authorizationType: apigw.AuthorizationType.COGNITO
})
const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
sources: [
s3deployment.Source.asset('./frontend'),
s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
],
destinationBucket: s3frontend,
distribution: cfdistribution
})
new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
value: CLOUDFRONT_PUBLIC_URL,
description: 'Navigate to the URL to access your deployed application'
})
}
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys via SDK commands instead.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
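A minimal sketch of the workaround, assuming the values are available at synth time (the function and variable names here are illustrative, not from the original repo): read the settings from process.env in plain TypeScript instead of declaring CfnParameters, so the synthesized template contains no parameters for --hotswap to resolve.

```typescript
// Sketch: resolve deploy-time settings at synth time from the environment
// instead of CloudFormation parameters, so `cdk watch --hotswap` never has
// to evaluate a template parameter.
function getRequiredEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Make sure you initialized ${name} in your environment`);
  }
  return value;
}

process.env.DOMAIN_PREFIX = "demo-prefix"; // example value for illustration
const domainPrefix = getRequiredEnv("DOMAIN_PREFIX");
// domainPrefix is now a plain string, usable directly in constructs,
// e.g. bucketName: `${domainPrefix}-frontend-bucket`
```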

Ruby: make a PUT / POST HTTP call with an array of objects

I have this HTTP call code; the request type is form:
param = {
form: {
"creatives[]" => [
{
is_visible: params[:creative_banner_is_visible],
type: "banner",
value_translations: {
id: params[:creative_banner_value_id],
en: params[:creative_banner_value_en]
}
},
{
is_visible: params[:creative_video_is_visible],
type: "video",
value_translations: {
id: params[:creative_video_value_id],
en: params[:creative_video_value_en]
}
}
]
}
}
http = HTTP.headers(headers)
http.put(base_url, param)
but somehow it gets translated to this on the target server:
"creatives"=>[
"{:is_visible=>\"true\", :type=>\"banner\", :value_translations=>{:id=>\"Banner URL ID\", :en=>\"Banner URL EN\"}}",
"{:is_visible=>\"true\", :type=>\"video\", :value_translations=>{:id=>\"12345ID\", :en=>\"12345EN\"}}"
]
Do you know how to make this HTTP call without the objects being stringified? I used the same schema in Postman and it works just fine:
"creatives": [
{
"is_visible": true,
"type": "banner",
"value_translations": {
"id": "http://schroeder.info/elinore",
"en": "http://wehner.info/dusti"
}
},
{
"is_visible": true,
"type": "video",
"value_translations": {
"id": "85177e87-6b53-4268-9a3c-b7f1c206e002",
"en": "5134f3ca-ead7-4ab1-986f-a695e69ace96"
}
}
]
I'm using this gem: https://github.com/httprb/http
EDIT
First, replace your "creatives[]" => [ ... with creatives: [ ..., so the end result should be the following:
creatives = [
{
is_visible: params[:creative_banner_is_visible],
type: "banner",
value_translations: {
id: params[:creative_banner_value_id],
en: params[:creative_banner_value_en]
}
},
{
is_visible: params[:creative_video_is_visible],
type: "video",
value_translations: {
id: params[:creative_video_value_id],
en: params[:creative_video_value_en]
}
}
]
http = HTTP.headers(headers)
http.put(base_url, creatives.to_json)
Second, I don't see any problem with what you get on your target server; you just have to parse the body as JSON, so if you also have a Rails app there, use JSON.parse on the body.
Somehow this approach fixed the issue:
create_params = {}.compare_by_identity
create_params["creatives[][is_visible]"] = params[:creative_banner_is_visible]
create_params["creatives[][type]"] = 'banner'
create_params["creatives[][value_translations][id]"] = params[:creative_banner_value_id]
create_params["creatives[][value_translations][en]"] = params[:creative_banner_value_en]
create_params["creatives[][is_visible]"] = params[:creative_video_is_visible]
create_params["creatives[][type]"] = 'video'
create_params["creatives[][value_translations][id]"] = params[:creative_video_value_id]
create_params["creatives[][value_translations][en]"] = params[:creative_video_value_en]
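For context on why the form version arrives stringified: a form body can only carry flat string values, so each nested hash gets coerced with #to_s, while a JSON body preserves the structure. A small sketch using only Ruby's standard json library (the field values are placeholders):

```ruby
require "json"

creatives = [
  { is_visible: true, type: "banner", value_translations: { id: "id-url", en: "en-url" } },
  { is_visible: true, type: "video",  value_translations: { id: "id-url", en: "en-url" } }
]

# Form encoding carries only strings, so each hash is coerced with #to_s,
# which matches the inspect-style strings seen on the target server:
form_values = creatives.map(&:to_s)

# A JSON body keeps the nesting intact:
json_body = JSON.generate(creatives: creatives)
parsed = JSON.parse(json_body)
```

With the http gem, `http.put(base_url, json: { creatives: creatives })` sends a JSON body (the gem's `json:` option handles serialization and the Content-Type header), which avoids the form coercion entirely.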

"Client network socket disconnected before secure TLS connection was established" - Neo4j/GraphQL

Starting up NestJS & GraphQL with yarn start:dev, listening via await app.listen(3200);. When trying to connect to my Neo4j Desktop, I get this error when running my queries at localhost:3200/graphql:
{
"errors": [
{
"message": "Client network socket disconnected before secure TLS connection was established",
"locations": [
{
"line": 2,
"column": 3
}
],
"path": [
"getMovies"
],
"extensions": {
"code": "INTERNAL_SERVER_ERROR",
"exception": {
"code": "ServiceUnavailable",
"name": "Neo4jError"
}
}
}
],
"data": null
}
So I figured my local Neo4j Desktop graph is not running correctly, but I can't seem to find any answer on how to solve it. Currently I have a config.ts file which has:
export const HOSTNAME = 'localhost';
export const NEO4J_USER = 'neo4j';
export const NEO4J_PASSWORD = '123';
and a file neogql.resolver.ts:
import {
Resolver,
Query,
Args,
ResolveProperty,
Parent,
} from '@nestjs/graphql';
import { HOSTNAME, NEO4J_USER, NEO4J_PASSWORD } from '../config';
import { Movie } from '../graphql';
import { Connection, relation, node } from 'cypher-query-builder';
import { NotFoundException } from '@nestjs/common';
const db = new Connection(`bolt://${HOSTNAME}`, {
username: NEO4J_USER,
password: NEO4J_PASSWORD,
});
@Resolver('Movie')
export class NeogqlResolver {
@Query()
async getMovies(): Promise<Movie> {
const movies = (await db
.matchNode('movies', 'Movie')
.return([
{
movies: [{ id: 'id', title: 'title', year: 'year' }],
},
])
.run()) as any;
return movies;
}
@Query('movie')
async getMovieById(
@Args('id')
id: string,
): Promise<any> {
const movie = (await db
.matchNode('movie', 'Movie')
.where({ 'movie.id': id })
.return([
{
movie: [{ id: 'id', title: 'title', year: 'year' }],
},
])
.run<any>()) as any;
if (movie.length === 0) {
throw new NotFoundException(
`Movie id '${id}' does not exist in database `,
);
}
return movie[0];
}
@ResolveProperty()
async actors(@Parent() movie: any) {
const { id } = movie;
return (await db
.match([node('actors', 'Actor'), relation('in'), node('movie', 'Movie')])
.where({ 'movie.id': id })
.return([
{
actors: [
{
id: 'id',
name: 'name',
born: 'born',
},
],
},
])
.run()) as any;
}
}
Be sure to pass the config object like this:
var hostname = this.configService.get<string>('NEO4J_URL');
var username = this.configService.get<string>('NEO4J_USERNAME');
var password = this.configService.get<string>('NEO4J_PASSWORD');
db = new Connection(`${hostname}`, {
username: username,
password: password,
}, {
driverConfig: { encrypted: "ENCRYPTION_OFF" }
});
I had the same problem with GRANDstack when running against a Neo4j version 4 server. According to Will Lyon, this is due to mismatched encryption defaults between the driver and the database: https://community.neo4j.com/t/migrating-an-old-grandstack-project-to-neo4j-4/16911/2
So passing a config object with
{ encrypted: "ENCRYPTION_OFF"}
to the Connection constructor should do the trick.

Falcor - HttpDataSource to post JSON

Is it possible to post a JSON file using the falcor.browser model? I have used its get method. Below is what I require, but it is not working.
<script src="./js/falcor.browser.js"></script>
function registerUser() {
var dataSource = new falcor.HttpDataSource("http://localhost/registerUser.json");
var model = new falcor.Model({
source: dataSource
});
var userJson = {"name":"John","age":"35","email":"john@abc.com"};
model.
set(userJson).
then(function(done){
console.log(done);
});
}
This is the server.js code:
app.use('/registerUser.json', falcorExpress.dataSourceRoute(function (req, res) {
return new Router([
{
route: "rating",
get: function() {
// Post call to external Api goes here
}
}
]);
}));
A few things:
The Model's set() method takes one or more pathValues, so reformat your userJson object literal into a set of pathValues. Something like:
model.
set(
{ path: ['users', 'id1', 'name'], value: 'John' },
{ path: ['users', 'id1', 'age'], value: 35 },
{ path: ['users', 'id1', 'email'], value: 'john@abc.com' }
).
then(function(done){
console.log(done);
});
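When the number of fields grows, the pathValues can be built mechanically from a plain object. The toPathValues helper below is hypothetical (not part of Falcor's API), just a convenience sketch:

```javascript
// Hypothetical helper: flatten a one-level object into Falcor-style pathValues.
function toPathValues(basePath, obj) {
  return Object.keys(obj).map(function (key) {
    return { path: basePath.concat(key), value: obj[key] };
  });
}

var pathValues = toPathValues(['users', 'id1'], {
  name: 'John',
  age: 35,
  email: 'john@abc.com'
});
// pathValues[0] is { path: ['users', 'id1', 'name'], value: 'John' }
```

Since set() takes the pathValues as separate arguments, they can be spread with model.set.apply(model, pathValues).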
Second, your router must implement set handlers to correspond to the paths you are trying to set. These handlers should also return pathValues:
new Router([
{
route: 'users[{keys:ids}]["name", "age", "email"]',
set: function(jsonGraph) {
// jsonGraph looks like { users: { id1: { name: "John", age: 35, email: "john@abc.com" } } }
// make request to update name/age/email fields and return updated pathValues, e.g.
return [
{ path: ['users', 'id1', 'name'], value: 'John' },
{ path: ['users', 'id1', 'age'], value: 35 },
{ path: ['users', 'id1', 'email'], value: 'john@abc.com' },
];
}
}
]);
Given that your DB request is likely asynchronous, your route set handler will have to return a promise or observable. But the above should work as a demonstration.
Edit
You can also use route pattern matching on the third path key if the number of fields gets large, as was demonstrated above on the second (id) key:
{
route: 'users[{keys:ids}][{keys:fields}]',
set: function(jsonGraph) {
/* jsonGraph looks like
{
users: {
id1: { field1: "xxx", field2: "yyy", ... },
id2: { field1: "xxx", field2: "yyy", ... },
...
}
}
*/
}
}

Sencha Model.save sends /0.json to server

I've got the following code, which is supposed to create a new item. The proxy type is REST.
var inst = Ext.ModelMgr.create({
title: values.title
}, "EntriesModel");
inst.save({
success: function(model) {
console.log(model);
}
});
After save(), I see that the request is sent to http://localhost:3000/entries/0.json, while I assume it should have been sent to http://localhost:3000/entries
The Entries model looks like this:
Ext.regModel("EntriesModel", {
fields: [
{name: "id", type: "int"},
{name: "title", type: "string"},
{name: "list_id", type:"int"},
{name: "bought", type: "boolean"}
],
proxy: {
type: 'rest',
url: '/entries',
format: 'json',
noCache: true,
reader: {
type: 'json',
root: 'data'
},
writer: {
type: 'json'
},
listeners: {
exception: function (proxy, response, operation) {
console.log(proxy, response, operation);
}
}
}
});
The backend is Rails.
Look at how the Rest proxy builds the URL:
buildUrl: function(request) {
var records = request.operation.records || [],
record = records[0],
format = this.format,
url = request.url || this.url;
if (this.appendId && record) {
if (!url.match(/\/$/)) {
url += '/';
}
url += record.getId();
}
if (format) {
if (!url.match(/\.$/)) {
url += '.';
}
url += format;
}
request.url = url;
return Ext.data.RestProxy.superclass.buildUrl.apply(this, arguments);
}
Override this to provide further customization, but remember to call the superclass buildUrl.
Also see the RestProxy docs: http://dev.sencha.com/deploy/touch/docs/?class=Ext.data.RestProxy
An example:
new Ext.data.RestProxy({
url: '/users',
format: 'json'
});
// Collection url: /users.json
// Instance url : /users/123.json
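The /entries/0.json URL comes from that appendId logic: a new record whose id field is typed int defaults to 0, so buildUrl happily appends it. A standalone sketch of the idea (plain JavaScript; using a phantom flag for unsaved records, mirroring how Ext marks new records) shows how skipping the id for new records yields the collection URL:

```javascript
// Sketch of Rest-proxy URL building; `phantom` (true for unsaved records)
// decides whether the default id 0 gets appended on create.
function buildRestUrl(baseUrl, format, record) {
  var url = baseUrl;
  if (record && !record.phantom) {          // only persisted records get /<id>
    if (!/\/$/.test(url)) url += '/';
    url += record.id;
  }
  if (format) {
    if (!/\.$/.test(url)) url += '.';
    url += format;
  }
  return url;
}

buildRestUrl('/entries', 'json', { id: 0, phantom: true });    // '/entries.json'
buildRestUrl('/entries', 'json', { id: 123, phantom: false }); // '/entries/123.json'
```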
