Why are my Vercel serverless functions not working in production? - serverless

Hello, I have a problem when I deploy my application to Vercel. I am creating a serverless function that fetches the data of a collection in MongoDB, using Mongoose. Locally everything works fine, but when I deploy it, the function gives me this error:
504: GATEWAY_TIMEOUT
Code: FUNCTION_INVOCATION_TIMEOUT
ID: gru1 :: 594vg-1620811347355-66f311d5c992
I understand that Vercel's free plan limits serverless functions to a maximum execution time of 10 seconds; is this why I get the error? Or it may be for one of these reasons: https://vercel.com/support/articles/why-does-my-serverless-function-work-locally-but-not-when-deployed
Serverless function:
import mongoose from "mongoose";

export default function handler(req, res) {
  const Schema = mongoose.Schema;
  const inmuebleSchema = new Schema({
    user_id: String, //{type: Schema.Types.ObjectId, ref: 'User'}
    desc: String,
    tipo_propiedad: { type: String, enum: ["casa", "quinta", "departamento"] },
    amb: Number,
    estado: { type: String, enum: ["venta", "alquiler"] },
    banios: Number,
    dorm: Number,
    coch: Number,
    direc: String,
    precio: String,
    images: { type: Array, default: [] },
    uploadedDate: { type: Date, default: Date.now } // pass the function itself, not Date.now(), so each document gets its own timestamp
  });
  // reuse the model if it is already registered; re-registering throws an OverwriteModelError on warm invocations
  const Inmueble = mongoose.models.inmueble || mongoose.model("inmueble", inmuebleSchema);
  mongoose
    .connect(process.env.MONGODB_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true
    })
    .then(() => {
      Inmueble.find()
        .exec()
        .then(inmuebles => res.json({ inmuebles }))
        .catch(err => res.json({ status: "error" }));
    })
    .catch(err => {
      // send a body here; res.status(500) alone never ends the request and the function times out
      res.status(500).json({ status: "connection error" });
    });
}

The problem was that I wasn't adding my site's IP to my MongoDB cluster's whitelist.
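Beyond the whitelist, opening a fresh Mongoose connection on every invocation can also eat into the 10-second limit. Here is a minimal sketch of the usual connection-caching pattern for serverless, assuming the same MONGODB_URI environment variable and the inmueble model from the question:

import mongoose from "mongoose";

// model registered once at module scope; the guard avoids an OverwriteModelError in warm containers
const inmuebleSchema = new mongoose.Schema({
  desc: String
  // ...rest of the fields from the question
});
const Inmueble = mongoose.models.inmueble || mongoose.model("inmueble", inmuebleSchema);

let conn = null; // module scope survives between warm invocations

export default async function handler(req, res) {
  try {
    conn = conn || await mongoose.connect(process.env.MONGODB_URI, {
      useNewUrlParser: true,
      useUnifiedTopology: true
    });
    const inmuebles = await Inmueble.find().exec();
    res.json({ inmuebles });
  } catch (err) {
    res.status(500).json({ status: "error" });
  }
}

The second request in a warm container skips the connect entirely, so only cold starts pay the connection cost.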

Related

cdk watch command forces full deploy with unrelated error on file change

I'm developing a little CDKv2 script to instantiate a few AWS services.
I have some Lambda code in the lambda/ folder and the frontend stored in a bucket populated from the frontend/ folder in the source.
I've noticed that whenever I make a change to any of the files inside these two folders, cdk watch returns the following error and falls back to performing a full redeploy (which is significantly slower).
Could not perform a hotswap deployment, because the CloudFormation template could not be resolved: Parameter or resource 'DomainPrefix' could not be found for evaluation
Falling back to doing a full deployment
Is there any way to make changes in these folders trigger only an update of the related bucket content or the related Lambda?
Following is the stack.ts for quick reference; just in case, you can take a look at the repo.
// imports for the CDK v2 constructs used below
import * as cdk from 'aws-cdk-lib';
import { Stack, StackProps, CfnParameter } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as s3 from 'aws-cdk-lib/aws-s3';
import * as cloudfront from 'aws-cdk-lib/aws-cloudfront';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as cognito from 'aws-cdk-lib/aws-cognito';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as s3deployment from 'aws-cdk-lib/aws-s3-deployment';

export class CdkAuthWebappStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);
    const domainPrefixParam = new CfnParameter(this, 'DomainPrefix', {
      type: 'String',
      description: 'You have to set it in google cloud as well', //(TODO: add link to explain properly)
      default: process.env.DOMAIN_NAME || ''
    })
    const googleClientIdParam = new CfnParameter(this, 'GoogleClientId', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_ID || ''
    })
    const googleClientSecretParam = new CfnParameter(this, 'GoogleClientSecret', {
      type: 'String',
      description: 'From google project',
      noEcho: true,
      default: process.env.GOOGLE_CLIENT_SECRET || ''
    })
    if(!domainPrefixParam.value || !googleClientIdParam.value || !googleClientSecretParam.value){
      throw new Error('Make sure you initialized DomainPrefix, GoogleClientId and GoogleClientSecret in the stack parameters')
    }
    const s3frontend = new s3.Bucket(this, 'Bucket', {
      bucketName: domainPrefixParam.valueAsString+'-frontend-bucket',
      blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,
      encryption: s3.BucketEncryption.S3_MANAGED,
      enforceSSL: true,
      versioned: false,
      removalPolicy: cdk.RemovalPolicy.DESTROY,
      autoDeleteObjects: true,
      websiteIndexDocument: "index.html",
    });
    //TODO: make this origin access identity non-legacy when deploying
    const cfdistributionoriginaccessidentity = new cloudfront.OriginAccessIdentity(this, 'CFOriginAccessIdentity', {
      comment: "Used to give bucket read to cloudfront"
    })
    const cfdistribution = new cloudfront.CloudFrontWebDistribution(this, 'CFDistributionFrontend', {
      originConfigs: [
        {
          s3OriginSource: {
            s3BucketSource: s3frontend,
            originAccessIdentity: cfdistributionoriginaccessidentity
          },
          behaviors: [{
            isDefaultBehavior: true,
            allowedMethods: cloudfront.CloudFrontAllowedMethods.GET_HEAD_OPTIONS,
            forwardedValues: {
              queryString: true,
              cookies: { forward: 'all' }
            },
            minTtl: cdk.Duration.seconds(0),
            defaultTtl: cdk.Duration.seconds(3600),
            maxTtl: cdk.Duration.seconds(86400)
          }]
        }
      ]
    })
    s3frontend.grantRead(cfdistributionoriginaccessidentity)
    const cfdistributionpolicy = new iam.PolicyStatement({
      effect: iam.Effect.ALLOW,
      actions: ['cloudfront:CreateInvalidation'],
      resources: [`arn:aws:cloudfront::${this.account}:distribution/${cfdistribution.distributionId}`]
    });
    const userpool = new cognito.UserPool(this, 'WebAppUserPool', {
      userPoolName: 'web-app-user-pool',
      selfSignUpEnabled: false
    })
    const userpoolidentityprovidergoogle = new cognito.UserPoolIdentityProviderGoogle(this, 'WebAppUserPoolIdentityGoogle', {
      clientId: googleClientIdParam.valueAsString,
      clientSecret: googleClientSecretParam.valueAsString,
      userPool: userpool,
      attributeMapping: {
        email: cognito.ProviderAttribute.GOOGLE_EMAIL
      },
      scopes: [ 'email' ]
    })
    // this is used to make the hostedui reachable
    userpool.addDomain('Domain', {
      cognitoDomain: {
        domainPrefix: domainPrefixParam.valueAsString
      }
    })
    const CLOUDFRONT_PUBLIC_URL = `https://${cfdistribution.distributionDomainName}/`
    const client = userpool.addClient('Client', {
      oAuth: {
        flows: {
          authorizationCodeGrant: true
        },
        callbackUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        logoutUrls: [
          CLOUDFRONT_PUBLIC_URL
        ],
        scopes: [
          cognito.OAuthScope.EMAIL,
          cognito.OAuthScope.OPENID,
          cognito.OAuthScope.PHONE
        ]
      },
      supportedIdentityProviders: [
        cognito.UserPoolClientIdentityProvider.GOOGLE
      ]
    })
    client.node.addDependency(userpoolidentityprovidergoogle)
    // defines an AWS Lambda resource
    const securedlambda = new lambda.Function(this, 'AuhtorizedRequestsHandler', {
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset('lambda'),
      handler: 'secured.handler'
    });
    const lambdaapiintegration = new apigw.LambdaIntegration(securedlambda)
    const backendapigw = new apigw.RestApi(this, 'AuthorizedRequestAPI', {
      restApiName: domainPrefixParam.valueAsString,
      defaultCorsPreflightOptions: {
        "allowOrigins": apigw.Cors.ALL_ORIGINS,
        "allowMethods": apigw.Cors.ALL_METHODS,
      }
    })
    const backendapiauthorizer = new apigw.CognitoUserPoolsAuthorizer(this, 'BackendAPIAuthorizer', {
      cognitoUserPools: [userpool]
    })
    const authorizedresource = backendapigw.root.addMethod('GET', lambdaapiintegration, {
      authorizer: backendapiauthorizer,
      authorizationType: apigw.AuthorizationType.COGNITO
    })
    const s3deploymentfrontend = new s3deployment.BucketDeployment(this, 'DeployFrontEnd', {
      sources: [
        s3deployment.Source.asset('./frontend'),
        s3deployment.Source.data('constants.js', `const constants = {domainPrefix:'${domainPrefixParam.valueAsString}', region:'${this.region}', cognito_client_id:'${client.userPoolClientId}', apigw_id:'${backendapigw.restApiId}'}`)
      ],
      destinationBucket: s3frontend,
      distribution: cfdistribution
    })
    new cdk.CfnOutput(this, 'YourPublicCloudFrontURL', {
      value: CLOUDFRONT_PUBLIC_URL,
      description: 'Navigate to the URL to access your deployed application'
    })
  }
}
Recording the solution from the comments:
Cause:
cdk watch apparently does not work with template parameters. I guess this is because the default --hotswap option bypasses CloudFormation and deploys via SDK commands instead.
Solution:
Remove the CfnParameters from the template. CDK recommends not using parameters in any case.
Perhaps cdk watch --no-hotswap would also work?
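If the values still need to be configurable, a minimal sketch of the parameter-free version (assuming the same environment variables the CfnParameter defaults already read) resolves them at synth time inside the constructor:

// plain strings resolved at synth time, so hotswap has nothing to ask CloudFormation about
const domainPrefix = process.env.DOMAIN_NAME ?? '';
const googleClientId = process.env.GOOGLE_CLIENT_ID ?? '';
const googleClientSecret = process.env.GOOGLE_CLIENT_SECRET ?? '';

if (!domainPrefix || !googleClientId || !googleClientSecret) {
  // this guard now actually works: a CfnParameter's .value is an unresolved
  // token and is always truthy, so the original check could never fire
  throw new Error('Set DOMAIN_NAME, GOOGLE_CLIENT_ID and GOOGLE_CLIENT_SECRET');
}

Everywhere the stack used domainPrefixParam.valueAsString (the bucket name, the Cognito domain, constants.js) the plain domainPrefix string can be passed instead.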

Upload in GraphQL with NestJs and ValidationPipe

I'm trying to implement file upload in my API using this strategy: https://stephen-knutter.github.io/2020-02-07-nestjs-graphql-file-upload/.
Without the ValidationPipe it works, but when I enable the ValidationPipe I get this error from class-transformer:
TypeError: Promise resolver undefined is not a function
at new Promise (<anonymous>)
at TransformOperationExecutor.transform (/Users/victorassis/Workspace/barreiroclub/api/node_modules/class-transformer/TransformOperationExecutor.js:117:32)
at _loop_1 (/Users/victorassis/Workspace/barreiroclub/api/node_modules/class-transformer/TransformOperationExecutor.js:235:45)
at TransformOperationExecutor.transform (/Users/victorassis/Workspace/barreiroclub/api/node_modules/class-transformer/TransformOperationExecutor.js:260:17)
at ClassTransformer.plainToClass (/Users/victorassis/Workspace/barreiroclub/api/node_modules/class-transformer/ClassTransformer.js:17:25)
at Object.plainToClass (/Users/victorassis/Workspace/barreiroclub/api/node_modules/class-transformer/index.js:20:29)
at ValidationPipe.transform (/Users/victorassis/Workspace/barreiroclub/api/node_modules/@nestjs/common/pipes/validation.pipe.js:40:39)
at /Users/victorassis/Workspace/barreiroclub/api/node_modules/@nestjs/core/pipes/pipes-consumer.js:15:33
at processTicksAndRejections (internal/process/task_queues.js:97:5)
I searched a lot, but it seems like class-transformer is abandoned, and the answers said not to use ValidationPipe with uploads.
Has anyone been through this and found a solution?
I tried to follow the example you posted above, then enabled class transformation, and I got no error as you mentioned. But I have met this error before, when I was passing the wrong type of argument in the resolver.
Below is where I set up my app bootstrap:
async function bootstrap() {
  const app = await NestFactory.create(AppModule);
  app.enableCors({
    origin: extractOrigins(app.get(ConfigService).get('CORS_ORIGINS')),
  });
  app.useGlobalPipes(
    new ValidationPipe({
      transform: true,
    }),
  );
  app.use(graphqlUploadExpress());
  await app.listen(app.get(ConfigService).get('PORT') ?? 3000);
  logScaffoldApp(app);
}
bootstrap();
The resolver code:
@Mutation(() => Boolean, { nullable: true })
async uploadVocabularies(
  @Args({
    name: 'file',
    type: () => GraphQLUpload,
  })
  { createReadStream, filename }: FileUpload,
) {
  console.log('attachment:', filename);
  const stream = createReadStream();
  stream.on('data', (chunk: Buffer) => {
    console.log(chunk);
  });
}
And I did get the error when I tried to follow another tutorial and made the argument a Promise, so class-transformer hit the same error:
@Mutation(() => Boolean, { nullable: true })
async uploadVocabularies(
  @Args({
    name: 'file',
    type: () => GraphQLUpload,
  })
  attachment: Promise<FileUpload>,
) {
  const { filename, createReadStream } = await attachment;
  console.log('attachment:', filename);
  const stream = createReadStream();
  stream.on('data', (chunk: Buffer) => {
    console.log(chunk);
  });
}
I hope this can help you and other people who viewed this post ^^

Ngrx effect doesn't send payload in action on iOS

For some time I have been trying to find a solution to my problem, but nothing has worked so far. I'm working on an Ionic 4 application with Angular 8 and NgRx. I created an @Effect that calls a service, which calls an HTTP service, and then I need to dispatch two actions. One of them also has a payload.
Everything works fine in development (browsers); I've tried Chrome, Firefox, and Safari. The problem appears when I try it on an iPhone: the payload sent to the action is an empty object {} instead of an object with the proper fields.
I've tried building in non-production mode and disabling aot, build-optimizer, and optimization.
Store init:
StoreModule.forFeature('rental', reducer),
EffectsModule.forFeature([RentalServiceEffect]),
Store:
export interface Contract {
  address: string;
  identity: string;
  endRentSignature?: string;
}
export interface RentalStoreState {
  status: RentStatus;
  contract?: Contract;
  metadata?: RentalMetadata;
  summary?: RentalSummary;
  carState?: CarState;
}
export const initialState: RentalStoreState = {
  status: RentStatus.NOT_STARTED,
  contract: {
    address: null,
    identity: null,
    endRentSignature: null,
  },
};
Action:
export const rentVerified = createAction(
  '[RENTAL] RENT_VERIFIED',
  (payload: Contract) => ({ payload })
);
Reducer:
const rentalReducer = createReducer(
  initialState,
  on(RentActions.rentVerified, (state, { payload }) => ({
    ...state,
    contract: payload,
    status: RentStatus.RENT_VERIFIED
  })));
export function reducer(state: RentalStoreState | undefined, action: Action) {
  return rentalReducer(state, action);
}
Method from a service:
public startRentalProcedure(
  vehicle: Vehicle,
  loading: any
): Observable<IRentalStartResponse> {
  loading.present();
  return new Observable(observe => {
    const id = '';
    const key = this.walletService.getActiveAccountId();
    this.fleetNodeSrv
      .startRent(id, key, vehicle.id)
      .subscribe(
        res => {
          loading.dismiss();
          observe.next(res);
          observe.complete();
        },
        err => {
          loading.dismiss();
          observe.error(err);
          observe.complete();
        }
      );
  });
}
Problematic effect:
@Effect()
public startRentalProcedure$ = this.actions$.pipe(
  ofType(RentalActions.startRentVerifying),
  switchMap(action => {
    return this.rentalSrv
      .startRentalProcedure(action.vehicle, action.loading)
      .pipe(
        mergeMap(response => {
          return [
            RentalActions.rentVerified({
              address: response.address,
              identity: response.identity
            }),
            MainActions.rentalProcedureStarted()
          ];
        }),
        catchError(err => {
          this.showConfirmationError(err);
          return of({ type: '[RENTAL] START_RENTAL_FAILED' });
        })
      );
  })
);
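One way to narrow down where the payload disappears is a logging meta-reducer, a standard NgRx mechanism; it shows whether the payload is already {} by the time the action reaches the store on the device. A small sketch (the function name is just illustrative):

import { ActionReducer, MetaReducer } from '@ngrx/store';

// logs every action, payload included, before it hits the reducers
export function logActions(reducer: ActionReducer<any>): ActionReducer<any> {
  return (state, action) => {
    console.log('dispatched', JSON.stringify(action));
    return reducer(state, action);
  };
}

export const metaReducers: MetaReducer<any>[] = [logActions];

// register it at store setup, e.g. StoreModule.forRoot(reducers, { metaReducers })

If the log already shows {} on the iPhone, the problem lies in the effect or the HTTP response mapping rather than in the reducer.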

MongoDB and Node.js

I am new to Node.js, and I am trying to build a quiz engine app using Node.js and MongoDB. I am not sure what schema I need to make for a quiz engine, so can anyone help me?
Here is an example of a User Schema...
var userSchema = new Schema({
  name: {
    type: String,
    unique: true,
    required: true
  },
  password: {
    type: String,
    required: true
  }
});
But like the comment stated, you will have to be more specific.
As far as I can guess, a quiz will be taken by a user and it will have questions. So, you can make two entities:
i) User entity
ii) Quiz/Questions entity
User entity schema:
module.exports = {
  attributes: {
    name: {
      type: String,
      required: true
    },
    password: {
      type: String,
      required: true
    }
  }
};
Question entity schema:
module.exports = {
  attributes: {
    questionLabel: {
      type: 'String',
      required: true
    },
    choices: {
      type: 'Array',
      required: true
    }
  }
};
Hello, this is my schema:
var mongoose = require("mongoose");
var Schema = mongoose.Schema;
var img_schema = new Schema({
  title: { type: String, required: true },
  creator: { type: Schema.Types.ObjectId, ref: "User" },
  extension: { type: String, required: true },
  foto: { type: String, required: true },
  uso: { type: String, required: true }
});
var Imagen = mongoose.model("Imagen", img_schema);
module.exports = Imagen;
This is an example of a user schema; you can adapt it to your requirements.
// User Schema
var UserSchema = mongoose.Schema({
  username: {
    type: String,
    index: true
  },
  password: {
    type: String
  },
  email: {
    type: String
  },
  name: {
    type: String
  },
  profileimage: {
    type: String
  }
});
var User = module.exports = mongoose.model('User', UserSchema);
I suggest you use Mongoose to define your MongoDB collection schemas; Mongoose facilitates a lot of the interaction between Node.js and MongoDB.
You can install Mongoose using the following command:
npm i mongoose
Then create a schema like this:
import mongoose from 'mongoose';
const { Schema } = mongoose; // pulling Schema out of the mongoose object
const QuizEngineSchema = new Schema({
  name: String,
  phoneNumber: Number,
  // other data that you need to save in your model
}, {
  timestamps: true,
  id: false // schema options belong in a single second argument
});
// plugging the schema into the model
const QuizEngine = mongoose.model('QuizEngine', QuizEngineSchema);
export default QuizEngine;
Hope this helps!
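Since the question asks specifically about a quiz engine, here is a minimal sketch of a quiz schema that embeds its questions directly; every field name is just an assumption about your requirements:

var mongoose = require("mongoose");
var Schema = mongoose.Schema;

// hypothetical quiz layout: one document per quiz, questions embedded
var quizSchema = new Schema({
  title: { type: String, required: true },
  createdBy: { type: Schema.Types.ObjectId, ref: "User" },
  questions: [{
    questionLabel: { type: String, required: true },
    choices: { type: [String], required: true }, // the possible answers
    correctChoice: Number // index into choices
  }]
}, { timestamps: true });

module.exports = mongoose.model("Quiz", quizSchema);

Embedding keeps a quiz and its questions in one read; if questions need to be shared between quizzes, a separate Question collection with ObjectId references (as in the two-entity answer above) is the alternative.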

Jaydata web api with OData provider -- provider fallback failed error

I'm developing an application with JayData, OData, and Web API. The source code is given below:
$(document).ready(function () {
  $data.Entity.extend('$org.types.Student', {
    Name: { type: 'Edm.String', nullable: false, required: true, maxLength: 40 },
    Id: { key: true, type: 'Edm.Int32', nullable: false, computed: false, required: true },
    Gender: { type: 'Edm.String', nullable: false, required: true, maxLength: 40 },
    Age: { type: 'Edm.Int32', nullable: false, required: true, maxLength: 40 }
  });
  $data.EntityContext.extend("$org.types.OrgContext", {
    Students: { type: $data.EntitySet, elementType: $org.types.Student },
  });
  var context = new $org.types.OrgContext({ name: 'OData', oDataServiceHost: '/api/students' });
  context.onReady(function () {
    console.log('context initialized.');
  });
});
In the above JavaScript code, I defined an entity named Student. In the context.onReady() method, I'm getting the following error:
Provider fallback failed! jaydata.min.js:100
Any idea how I could get rid of this error?
As per the suggested solution, I tried to change the key from required to computed, but sadly it's still giving the same error. The modified code is given below.
$(document).ready(function () {
  $data.Entity.extend('Student', {
    Id: { key: true, type: 'int', computed: true },
    Name: { type: 'string', required: true }
  });
  $data.EntityContext.extend("$org.types.OrgContext", {
    Students: { type: $data.EntitySet, elementType: Student },
  });
  var context = new $org.types.OrgContext({
    name: 'OData',
    oDataServiceHost: '/api/students'
  });
  context.onReady(function () {
    console.log('context initialized.');
  });
});
I think the issue is with the OData provider, because I tried the same code with the IndexedDB provider and it works properly.
The issue is caused by the value of the oDataServiceHost parameter. You should configure it with the service host, not with a particular collection of the service. I don't know whether the provider name is case-sensitive or not, but 'oData' is 100% sure to work.
For WebAPI + OData endpoints the configuration should look like this:
var context = new $org.types.OrgContext({
  name: 'oData',
  oDataServiceHost: '/odata'
});
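Once the context is pointed at the service root, the entity sets can be queried in the usual JayData way. A small sketch reusing the Students set from the question, assuming the Web API OData endpoint really is mounted at /odata:

var context = new $org.types.OrgContext({
  name: 'oData',
  oDataServiceHost: '/odata' // service root, not a collection like /odata/Students
});
context.onReady(function () {
  // toArray issues GET /odata/Students and hands back the results
  context.Students.toArray(function (students) {
    console.log(students);
  });
});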
