aws-cdk add SQS eventSource to existing Lambda

I'm creating multiple SQS queues, and I want to add a Lambda trigger after the queues are created. I'm getting an error (see below) when I do the cdk synth command.
I'm using version 1.130 of aws-cdk, and the packages are all the same version (1.130.0).
Error with build: TypeError: Cannot read property 'scopes' of undefined
Looking at the stack trace, the error occurs at the lambdaFunction.addEventSource call.
I'm following the CDK documentation (https://docs.aws.amazon.com/cdk/api/v1/docs/aws-lambda-event-sources-readme.html) and based on that I think I'm following the right steps.
Below is the code I'm using:
const cdk = require(`@aws-cdk/core`);
const sqs = require(`@aws-cdk/aws-sqs`);
const { SqsEventSource } = require(`@aws-cdk/aws-lambda-event-sources`);
const lambda = require(`@aws-cdk/aws-lambda`);
const fs = require(`fs`);
const { env } = require(`process`);

class PVInfraSQSTopic extends cdk.Construct {
  constructor(scope, id, props) {
    super(scope, id, props);
    const buildEnvironment = JSON.parse(fs.readFileSync(`./config/` + JSON.parse(env.CDK_CONTEXT_JSON).config + `.json`, `utf8`));
    const sqsDLQ = buildEnvironment.sqsDeadLetterQueue;
    const lambdas = buildEnvironment.sqsLambdaTrigger;
    const sqsQueues = buildEnvironment.sqsQueues;
    const alias = buildEnvironment.alias;
    const region = buildEnvironment.region;
    const awsAccount = buildEnvironment.awsAccount;
    const queueName = `sqs-queue`;

    // Create Dead Letter Queue.
    const dlq = new sqs.Queue(this, `SQSBuild`, {
      queueName: sqsDLQ
    });

    // Create queues and configure dead letter queue for said queues.
    sqsQueues.map((item) => {
      new sqs.Queue(this, `queue-${item}`, {
        queueName: `${item}`,
        deadLetterQueue: {
          maxReceiveCount: 3,
          queue: dlq
        }
      });
    });

    // Add SqsEventSource (Lambda Triggers) to new SQS queues
    const lambdaFunction = lambda.Function.fromFunctionAttributes(this, `function`, {
      functionArn: `arn:aws:lambda:${region}:${awsAccount}:function:${lambdas}:${alias}`
    });
    lambdaFunction.addEventSource(new SqsEventSource(queueName, {
      batchSize: 10
    }));
  }
}

module.exports = { PVInfraSQSTopic };
The lambda already exists, which is why I'm not creating it as part of this stack.

Your first problem is that you are passing the SqsEventSource constructor a string (queueName), when it requires an IQueue.
It still won't synth, though. You also need to give CDK more information about your existing lambda, namely the lambda's IAM role.
Here's a minimal working example. I am importing existing lambda resource ARNs that were exported as StackOutputs in the existing Lambda stack, but this is an implementation detail.
import * as cdk from '@aws-cdk/core';
import { Construct } from '@aws-cdk/core';
import * as sqs from '@aws-cdk/aws-sqs';
import * as lambda from '@aws-cdk/aws-lambda';
import * as iam from '@aws-cdk/aws-iam';
import * as sources from '@aws-cdk/aws-lambda-event-sources';

export class SqsExistingEventSourceStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const q = new sqs.Queue(this, 'MyQueue');
    const lambdaFunction = lambda.Function.fromFunctionAttributes(this, `function`, {
      functionArn: cdk.Fn.importValue('MinimalLambdaArn'),
      role: iam.Role.fromRoleArn(this, 'importedRole', cdk.Fn.importValue('MinimalLambdaRoleArn')),
    });

    lambdaFunction.addEventSource(new sources.SqsEventSource(q, { batchSize: 10 }));
  }
}
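For reference, a sketch of how those values might be exported from the existing Lambda stack (the construct IDs, export names, and the myFunction variable here are assumptions, not the OP's code):
// Inside the existing Lambda stack's constructor; myFunction is the
// existing lambda.Function (hypothetical name).
new cdk.CfnOutput(this, 'MinimalLambdaArnOutput', {
  value: myFunction.functionArn,
  exportName: 'MinimalLambdaArn',
});
new cdk.CfnOutput(this, 'MinimalLambdaRoleArnOutput', {
  value: myFunction.role.roleArn,
  exportName: 'MinimalLambdaRoleArn',
});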
The OP does not show how you are adding the PVInfraSQSTopic construct to a stack, which may also be a source of "scope" errors.
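For completeness, a minimal sketch of that wiring (the file path and stack name are assumptions):
const cdk = require(`@aws-cdk/core`);
// Path is an assumption; point it at wherever PVInfraSQSTopic is defined.
const { PVInfraSQSTopic } = require(`./lib/pv-infra-sqs-topic`);

const app = new cdk.App();
const stack = new cdk.Stack(app, `PVInfraStack`);
// The construct needs a defined scope (the stack); a missing or wrong scope
// is one way to end up with "Cannot read property 'scopes' of undefined".
new PVInfraSQSTopic(stack, `PVInfraSQSTopic`);
app.synth();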

Related

TypeError: (0, graphql_1.isDefinitionNode) is not a function

I'm coding along with a book (Pro MERN Stack, 2nd Edition) and I get the error:
TypeError: (0, graphql_1.isDefinitionNode) is not a function
in my Git CMD.
I have tried a fix from another Stack Overflow question, but since I'm using apollo-server-express I think my case is a bit different.
Also tried installing older and newer package versions of the following:
"apollo-server-express": "^3.11.1"
and
"graphql": "^0.13.2",
The code looks like this:
const express = require('express');
const { ApolloServer } = require('apollo-server-express');

let aboutMessage = "Issue Tracker API v1.0";

const typeDefs = `
  type Query {
    about: String!
  }
  type Mutation {
    setAboutMessage(message: String!): String
  }
`;

const resolvers = {
  Query: {
    about: () => aboutMessage,
  },
  Mutation: {
    setAboutMessage,
  },
};

function setAboutMessage(_, { message }) {
  return aboutMessage = message;
}

const server = new ApolloServer({
  typeDefs,
  resolvers
});

const app = express();
app.use(express.static('public'));

server.applyMiddleware({ app, path: '/graphql' });

app.listen(3000, function () {
  console.log('App started on port 3000');
});
This part alone:
const express = require('express');

const app = express();
app.use(express.static('public'));

app.listen(3000, function () {
  console.log('App started on port 3000');
});
worked perfectly fine, but once I added the rest of it, it returned the error. :/
I updated my graphql to "graphql": "^16.6.0" and got a different error: You must `await server.start()` before calling `server.applyMiddleware()`. For that, I needed to create an async function that creates the new server with the typeDefs and resolvers, awaits myserver.start(), and then calls myserver.applyMiddleware({ app, path: '/graphql' }). I don't know if that's well explained, but here are the changes I made for the code to work:
const express = require('express');
const { ApolloServer } = require('apollo-server-express');

let aboutMessage = "Issue Tracker API v1.0";

const typeDefs = `
  type Query {
    about: String!
  }
  type Mutation {
    setAboutMessage(message: String!): String
  }
`;

const resolvers = {
  Query: {
    about: () => aboutMessage,
  },
  Mutation: {
    setAboutMessage,
  },
};

function setAboutMessage(_, { message }) {
  return aboutMessage = message;
}

async function startServer() {
  const server = new ApolloServer({
    typeDefs,
    resolvers,
  });
  await server.start();
  // By the time this runs (after the await), `app` below has been created.
  server.applyMiddleware({ app, path: '/graphql' });
}
startServer();

const app = express();
app.use(express.static('public'));

app.listen(3000, function () {
  console.log('App started on port 3000');
});
I didn't figure it out on my own, actually; I saw a post on Qiita about the same issue: Qiita same error URL.

Solana keeps showing "baseAccount not provided" error. What's missing?

I'm trying to call my Solana contract from my frontend app.
Even though I added the baseAccount, Solana complains it's not provided. What's going on?
const baseAccount = Keypair.generate()

async function createNFT({ nftPrice, nftRoyalty, nftInfo }) {
  // console.log('image', image)
  // const added = await client.add(image)
  // console.log(
  //   '`https://ipfs.infura.io/ipfs/${added.path}`',
  //   `https://ipfs.infura.io/ipfs/${added.path}`
  // )
  const provider = await getProvider()
  const program = new Program(idl, programID, provider)
  const price = new BN(nftPrice * 10 ** 9)
  const result = await program.rpc.mintNft(
    wallet.publicKey,
    price,
    nftRoyalty,
    wallet.publicKey,
    nftInfo,
    {
      accounts: {
        baseAccount: baseAccount.publicKey,
        user: provider.wallet.publicKey,
        systemProgram: SystemProgram.programId,
      },
      signers: [baseAccount],
    }
  )
  console.log('result', result)
}

Control Electron instances

I want to check how many instances are running and to control the number of instances running from one exe Electron bundle. Say I want to allow only three instances of the one exe bundle to run; I am not able to do this.
Current behavior:
Either only one instance runs and the rest are blocked, or any number of instances can open. I need to allow up to three running instances, and no more.
Example:
const { app } = require('electron')

let myWindow = null

const gotTheLock = app.requestSingleInstanceLock()

if (!gotTheLock) {
  app.quit()
} else {
  app.on('second-instance', (event, commandLine, workingDirectory) => {
    // Someone tried to run a second instance, we should focus our window.
    if (myWindow) {
      if (myWindow.isMinimized()) myWindow.restore()
      myWindow.focus()
    }
  })

  // Create myWindow, load the rest of the app, etc...
  app.on('ready', () => {
  })
}
You can use the following code to find out how many windows have been opened:
const count = BrowserWindow.getAllWindows().length;
To count only the visible windows, you can use:
let count = BrowserWindow.getAllWindows()
  .filter(b => {
    return b.isVisible()
  }).length
Once you have the number of windows, apply your condition, i.e. if it is more than 3, quit using app.quit().
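Putting those two pieces together, a minimal sketch of the check (note that BrowserWindow.getAllWindows() only sees windows created by the current process, so this counts windows rather than separate OS-level instances):
const { app, BrowserWindow } = require('electron')

const MAX_WINDOWS = 3

app.whenReady().then(() => {
  // Count the windows this process has opened and bail out over the limit.
  const count = BrowserWindow.getAllWindows().length
  if (count > MAX_WINDOWS) {
    app.quit()
  }
})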
You can make each instance write to a file when it starts (increment a counter, for example) and when it exits (decrement the counter). Check that file to see whether the maximum number of instances is already running:
import { app } from "electron";
import path from "path";
import fs from "fs";

const MAX_APP_INSTANCES = 3;

const INSTANCE_COUNT_FILE_PATH = path.join(
  app.getPath("userData"),
  "numOfInstances"
);

// utils to read/write number of instances to a file
const instanceCountFileExists = () => fs.existsSync(INSTANCE_COUNT_FILE_PATH);
const readInstanceCountFile = () =>
  parseInt(fs.readFileSync(INSTANCE_COUNT_FILE_PATH, "utf-8"));
const writeInstanceCountFile = (value) =>
  fs.writeFileSync(INSTANCE_COUNT_FILE_PATH, value);
const incInstanceCountFile = () => {
  const value = readInstanceCountFile() + 1;
  writeInstanceCountFile(value.toString());
};
const decInstanceCountFile = () => {
  const value = readInstanceCountFile() - 1;
  writeInstanceCountFile(value.toString());
};

// logic needed to only allow a certain number of instances to be active
if (instanceCountFileExists() && readInstanceCountFile() >= MAX_APP_INSTANCES) {
  app.quit();
} else {
  if (!instanceCountFileExists()) {
    writeInstanceCountFile("1");
  } else {
    incInstanceCountFile();
  }
  app.on("quit", () => decInstanceCountFile());
}
Note: this solution is somewhat hacky. For example, the quit event is not guaranteed to fire when the Electron app exits, in which case the counter is never decremented.
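One way to reduce (though not eliminate) that drift is to decrement on both quit and process exit, guarded so the counter only goes down once per instance. A sketch building on the helpers above:
import { app } from "electron";
// Assumes decInstanceCountFile from the snippet above is in scope.

let decremented = false;
const safeDecrement = () => {
  if (decremented) return;
  decremented = true;
  decInstanceCountFile();
};

// Best effort: decrement once, whichever of these fires first.
app.on("quit", safeDecrement);
process.on("exit", safeDecrement); // 'exit' handlers must be synchronous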

How to use/consume an event stream from wolkenkit-eventstore

I want to use wolkenkit's eventstore and was trying to set up a quick example. But I'm not able to simply output an event stream.
Simplified example:
const eventstore = require("wolkenkit-eventstore/inmemory");
const Stream = require("stream");
const uuidv4 = require("uuid/v4");
const Event = require("commands-events/dist/Event");

const main = async () => {
  await eventstore.initialize();

  const aggregateId = uuidv4();
  const event = new Event({ ... });
  event.metadata.revision = 1;
  await eventstore.saveEvents({ events: event });

  const writableStream = new Stream.Writable();
  writableStream._write = (chunk, encoding, next) => {
    console.log(chunk.toString());
    next();
  };

  const readableStream = eventstore.getUnpublishedEventStream();
  readableStream.pipe(writableStream);
};

main();
As far as I understand, getUnpublishedEventStream returns a readable stream. I followed these instructions, but it didn't work as expected.
All I get is the following error:
(node:10988) UnhandledPromiseRejectionWarning: TypeError: readableStream.pipe is not a function
According to the documentation of wolkenkit-eventstore, getUnpublishedEventStream is an async function, i.e. you have to call it with await. Otherwise, you don't get a stream back, but a promise (and a promise doesn't have a pipe function).
So, this line
const readableStream = eventstore.getUnpublishedEventStream();
should be:
const readableStream = await eventstore.getUnpublishedEventStream();
I have not taken a closer look at your code apart from this, but this is the reason why you get the current error message.
PS: Please note that I am one of the core developers of wolkenkit, so please take my answer with a grain of salt.

How do I do a simple HTTP request against the dataflow API on gcloud with node?

I want to monitor my Dataflow jobs with an application. The application I'm developing is a Node.js application, and ideally there would be a package like @google-cloud/bigquery, but for Dataflow instead. I'm fully aware that I might not be able to start a job if it is not a template job, but there should be an easy way to list jobs or get job information.
Update:
I found this spec, https://dataflow.googleapis.com/$discovery/rest?version=v1b3, but I don't understand what location is for in the list operation. The spec was linked from this page: https://cloud.google.com/dataflow/docs/reference/rest/
I did find the solution myself. There is a repo that basically has all the APIs for gcloud out there: https://github.com/google/google-api-nodejs-client
After I found that, I could easily do what I wanted:
'use strict';

var google = require('googleapis');
var dataflow = google.dataflow('v1b3');

google.auth.getApplicationDefault(function (err, authClient, projectId) {
  if (err) {
    throw err;
  }

  // The createScopedRequired method returns true when running on GAE or a local developer
  // machine. In that case, the desired scopes must be passed in manually. When the code is
  // running in GCE or a Managed VM, the scopes are pulled from the GCE metadata server.
  // See https://cloud.google.com/compute/docs/authentication for more information.
  if (authClient.createScopedRequired && authClient.createScopedRequired()) {
    // Scopes can be specified either as an array or as a single, space-delimited string.
    authClient = authClient.createScoped([
      'https://www.googleapis.com/auth/compute'
    ]);
  }

  // List the Dataflow jobs in the project.
  dataflow.projects.jobs.list({
    'projectId': projectId,
    'auth': authClient
  }, function (err, result) {
    console.log(err, result);
  });
});
For posterity... there is a way to do this without a client library, but it requires generating a JWT from service account credentials and exchanging the JWT for an access token to execute a Dataflow template. This example uses the Cloud_Bigtable_to_GCS_Avro template:
import axios from "axios";
import jwt from "jsonwebtoken";
import mem from "mem";

const loadCredentials = mem(function() {
  // This is a string containing service account credentials
  const serviceAccountJson = process.env.GOOGLE_APPLICATION_CREDENTIALS;
  if (!serviceAccountJson) {
    throw new Error("Missing GCP Credentials");
  }

  const credentials = JSON.parse(serviceAccountJson.replace(/\n/g, "\\n").replace(/\r/g, "\\r").replace(/\t/g, "\\t"));

  return {
    projectId: credentials.project_id,
    privateKeyId: credentials.private_key_id,
    privateKey: credentials.private_key,
    clientEmail: credentials.client_email,
  };
});

interface ProjectCredentials {
  projectId: string;
  privateKeyId: string;
  privateKey: string;
  clientEmail: string;
}

function generateJWT(params: ProjectCredentials) {
  const scope = "https://www.googleapis.com/auth/cloud-platform";
  const authUrl = "https://www.googleapis.com/oauth2/v4/token";
  const issued = new Date().getTime() / 1000;
  const expires = issued + 60;

  const payload = {
    iss: params.clientEmail,
    sub: params.clientEmail,
    aud: authUrl,
    iat: issued,
    exp: expires,
    scope: scope,
  };

  const options = {
    keyid: params.privateKeyId,
    algorithm: "RS256" as const,
  };

  return jwt.sign(payload, params.privateKey, options);
}

async function getAccessToken(credentials: ProjectCredentials): Promise<string> {
  // Named signedJwt to avoid shadowing the imported jwt module.
  const signedJwt = generateJWT(credentials);
  const authUrl = "https://www.googleapis.com/oauth2/v4/token";

  const params = {
    grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
    assertion: signedJwt,
  };

  try {
    const response = await axios.post(authUrl, params);
    return response.data.access_token;
  } catch (error) {
    console.error("Failed to get access token", error);
    throw error;
  }
}

function buildTemplateParams(projectId: string, table: string) {
  return {
    jobName: `[job-name]`,
    parameters: {
      bigtableProjectId: projectId,
      bigtableInstanceId: "[table-instance]",
      bigtableTableId: table,
      outputDirectory: `[gs://your-instance]`,
      filenamePrefix: `${table}-`,
    },
    environment: {
      zone: "us-west1-a", // omit or define your own
      tempLocation: `[gs://your-instance/temp]`,
    },
  };
}

async function backupTable(table: string) {
  console.info(`Executing backup template for table=${table}`);
  const credentials = loadCredentials();
  const { projectId } = credentials;
  const accessToken = await getAccessToken(credentials);

  const baseUrl = "https://dataflow.googleapis.com/v1b3/projects";
  const templatePath = "gs://dataflow-templates/latest/Cloud_Bigtable_to_GCS_Avro";
  const url = `${baseUrl}/${projectId}/templates:launch?gcsPath=${templatePath}`;
  const template = buildTemplateParams(projectId, table);

  try {
    const response = await axios.post(url, template, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    console.log("GCP Response", response.data);
  } catch (error) {
    console.error(`Failed to execute template for ${table}`, error.message);
  }
}

async function run() {
  await backupTable("my-table");
}

// run() is async, so catch rejections on the promise rather than with try/catch.
run().catch(() => process.exit(1));
