I have a question about using promises in TypeScript. I'm writing an e2e testing framework with Protractor and TypeScript, and I would like to query the database so I can use the retrieved data to fill forms or make validations.
I've created a new class, "UserService", and the idea is to use some static methods to return data. I've installed the TypeORM library to handle it.
The problem is I cannot find a way to convert the promise results to strings. How can I do it?
Take a look at the code:
import "reflect-metadata";
import { User } from "./entities/user";
import { ConnectionOptions, Connection, Driver, createConnection } from "typeorm";

const connectionOptions: ConnectionOptions = {
    driver: {
        type: "mysql",
        host: "localhost",
        port: 3306,
        username: "root",
        password: "admin123",
        database: "user"
    },
    entities: [User],
    autoSchemaSync: false
};

export class UserService {
    static getUserName(userId: number): string {
        let us = createConnection(connectionOptions).then(connection => {
            return connection.getRepository(User).findOne({ Id: userId });
        }).then(user => user.name);
        return us; // it returns a Promise<string>, not a string
    }
}
In the "step" classes, the above class would be used, for example, like this:
let name: string = UserService.getUserName(1);
txtUsername.sendKeys(name);
Use async/await:
let name = await UserService.getUserName(1);
txtUsername.sendKeys(name);
This has to be inside a function declared as async, and you'll probably want to wrap it in a try/catch. It still won't be synchronous, but it's the easiest way to get at the value.
And have no doubts about promises... they are super awesome.
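To make the flow concrete, here is a minimal sketch with the service typed to return `Promise<string>` explicitly and awaited inside an async function. The `UserService` here is stubbed with an in-memory lookup instead of a real TypeORM connection, purely to illustrate the async flow; the name `fillLoginForm` is hypothetical.

```typescript
// Sketch only: UserService is stubbed with an in-memory map instead of a
// real TypeORM connection, to show the Promise<string> -> string flow.
class UserService {
  // Declare the real return type: a Promise<string>, not a string.
  static async getUserName(userId: number): Promise<string> {
    const users: Record<number, string> = { 1: "alice" };
    return users[userId] ?? "unknown";
  }
}

// A hypothetical "step" function; in a real Protractor step you would
// call txtUsername.sendKeys(name) instead of returning it.
async function fillLoginForm(): Promise<string> {
  try {
    const name = await UserService.getUserName(1); // name is a plain string here
    return name;
  } catch (err) {
    throw new Error(`Could not load user: ${err}`);
  }
}
```

Inside the async function, `name` is an ordinary string; only callers that don't await see a promise.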
I am facing a problem where my DTO types are named one thing, but I want them to appear with a different name in the OpenAPI doc page.
For example, I have a UserDto class that I use in my controller, but wanted it to appear as simply "User" in the schemas section (and everywhere else this applies). Is that possible? Is there any decorator I can use?
I know I can simply modify the class name, but there is already a different user class used elsewhere.
I have searched everywhere, to no avail.
BTW, I am using TypeScript and NestJS.
Any help will be appreciated, thanks!
Out of the box, NestJS doesn't yet offer a ready-made solution. There is an open pull request, https://github.com/nestjs/swagger/pull/983, but when it will be merged is unknown.
You can change the DTO name in schemas using one of the following approaches:
Add a static name property to your DTO.
class UserDto {
    static name = 'User'; // <- here

    @ApiProperty()
    firstName: string;

    // ...
}
But in strict mode, TypeScript will show an error like:
Static property 'name' conflicts with built-in property 'Function.name' of constructor function 'UserDto'.
Write a decorator with an interface as suggested in the pull request and use it until the desired functionality appears in Nest.js.
The decorator adds the name property with the needed value to the wrapper class for the DTO.
type Constructor<T = object> = new (...args: any[]) => T;
type Wrapper<T = object> = { new (): T & any; prototype: T };
type DecoratorOptions = { name: string };
type ApiSchemaDecorator = <T extends Constructor>(options: DecoratorOptions) => (constructor: T) => Wrapper<T>;

const ApiSchema: ApiSchemaDecorator = ({ name }) => {
    return (constructor) => {
        const wrapper = class extends constructor {};

        Object.defineProperty(wrapper, 'name', {
            value: name,
            writable: false,
        });

        return wrapper;
    };
};
Use as suggested in the proposal:
@ApiSchema({ name: 'User' }) // <- here
class UserDto {
    @ApiProperty()
    firstName: string;

    // ...
}
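As a quick self-contained check of the renaming trick, the snippet below re-declares the pieces from the decorator above and calls it as a plain function (equivalent to applying `@ApiSchema`), so you can see the overridden `name`:

```typescript
// Minimal, self-contained version of the wrapper-class trick: a subclass
// whose built-in `name` property is overridden via Object.defineProperty.
type Constructor<T = object> = new (...args: any[]) => T;

const ApiSchema = ({ name }: { name: string }) => {
  return <T extends Constructor>(constructor: T) => {
    const wrapper = class extends constructor {};
    Object.defineProperty(wrapper, "name", { value: name, writable: false });
    return wrapper;
  };
};

class UserDto {
  firstName = "";
}

// Called as a plain function -- equivalent to decorating with @ApiSchema:
const Renamed = ApiSchema({ name: "User" })(UserDto);
// Renamed.name is now "User", and instances remain UserDto instances.
```

Because the wrapper extends the original class, validation pipes and `instanceof` checks keep working; only the reflected class name changes.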
And don't forget that in TypeScript 5 the decorator API will change to something close to the implementation in JavaScript 😉
I solved it in my case using @ApiModel, like this:
@ApiModel(value = "MeuLindoDto")
public class NameOriginalClassResponseDto ...
I am trying to generate mock data using relay for storybook.
My query is
const QUERY_LIST = graphql`
    query modelControllerAllUsersQuery @relay_test_operation {
        allUsers {
            pageInfo {
                hasNextPage
            }
            edges {
                node {
                    id
                    firstName
                    lastName
                }
            }
        }
    }
`
and provided RelayEnvironmentProvider as a decorator to the story. I'm trying to return some default values to my query using custom mock resolvers.
const customMockResolvers = {
    ...mockResolvers,
    allUsers: () => ({
        pageInfo: {
            hasNextPage: false,
        },
        edges: [
            {
                node: {
                    id: 'id',
                    firstName: 'fname',
                    lastName: 'lname',
                },
            },
        ],
    }),
};
and calling it as
(operation) => MockPayloadGenerator.generate(operation, customMockResolvers)
I don't seem to be able to get the default values returned.
Currently, it is returning
{"allUsers":{"pageInfo":{"hasNextPage":false},"edges":[{"node":{"id":"<UserNode-mock-id-1>","firstName":"<mock-value-for-field-\"firstName\">","lastName":"<mock-value-for-field-\"lastName\">"}}]}}
What am I doing wrong?
When using the @relay_test_operation directive, the keys within your customMockResolvers object must match the type names of the fields, which can be different from the field names themselves.
For example, you could have the following in your schema:
type Foo {
id: ID!
name: String!
}
and the following query:
query FooQuery @relay_test_operation {
    foo {
        id
        name
    }
}
Then the customMockResolvers object would look like this:
const customMockResolvers = {
    Foo: () => ({
        id: "fooId",
        name: "fooName"
    })
}
Notice that I'm passing in Foo as the key instead of foo.
You can check your schema to see what the type name of allUsers is. I suspect it would be something like AllUsers or allUsersConnection, or something similar.
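Applying that to the question: the mock output above hints that the node type is named UserNode, so the connection type might be something like UserNodeConnection. Assuming those names (verify against your actual schema before copying), the resolvers would be keyed by type name rather than by the allUsers field:

```typescript
// Hypothetical type names -- check your schema's actual type names
// (e.g. via the generated schema.graphql) before using these keys.
const customMockResolvers = {
  UserNodeConnection: () => ({
    pageInfo: { hasNextPage: false },
  }),
  UserNode: () => ({
    id: "id",
    firstName: "fname",
    lastName: "lname",
  }),
};
```

Each resolver returns the mock values for any field of that type in the operation, so nested fields like edges/node don't need to be spelled out by hand.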
Also, if you're interested in creating Storybook stories for Relay components, I created a NPM package just for that: https://www.npmjs.com/package/use-relay-mock-environment
It doesn't require adding the @relay_test_operation directive to your query, and instead relies only on resolving the String type (which is the default for all scalar properties). You can of course still add the @relay_test_operation directive and also extend the resolvers by providing customResolvers in the config.
You can also extend the String resolver, by providing extendStringResolver in the config.
Feel free to review the source code here if you want to implement something similar: https://github.com/richardguerre/use-relay-mock-environment.
Note: it's still in its early days, so some things might change, but would love some feedback!
I had set logging: true in createConnection() from TypeORM, which works fine most of the time.
But there is a situation where one of my data fields in a specific query/mutation may contain a long string (50,000 - 300,000 characters depending on the input; it could be more). When TypeORM tries to log the content in the VS Code terminal, it can crash VS Code. I wonder if there is any way TypeORM can hide such a long string instead of completely disabling all query logging.
My ideal approach would be something like truncating the string with a text ellipsis (long...), or just applying a custom logger for specific queries if necessary.
That way it would still log, indicating that the bit of code is running, with minimal info.
From the docs, it seemed like I could only add additional information instead of modifying it.
https://github.com/typeorm/typeorm/blob/master/docs/logging.md
=============== Update ==============
Based on the accepted solution, it seems I did not understand the docs properly. We can customize the logger based on our needs. Since I want to cut off the parameters, I can do the following.
import { AdvancedConsoleLogger, Logger, LoggerOptions, QueryRunner } from "typeorm";

export class CustomLogger extends AdvancedConsoleLogger implements Logger {
    constructor(options?: LoggerOptions) {
        super(options);
    }

    // override logQuery
    logQuery(query: string, parameters?: any[], queryRunner?: QueryRunner) {
        const limit = 100;
        const paramTextEllipsis = parameters?.map((param) => {
            // only cut off strings longer than 100 characters
            if (typeof param === "string" && param.length > limit) {
                return param.substring(0, limit) + "...";
            }
            return param;
        });
        super.logQuery(query, paramTextEllipsis, queryRunner);
    }
}
The result from the terminal. Note that the JSON string originally has more than 100k characters, which can easily crash the editor.
It is explained under changing default logger on the TypeORM logging page.
You need to implement your own logger. This is simple, as you can subclass one of the standard loggers provided by TypeORM, check whether the query string being logged is too long, and then call the inherited log method.
For example, to subclass the "advanced-console" logger, which is the default logger TypeORM uses:
import { Logger, QueryRunner, AdvancedConsoleLogger, LoggerOptions } from "typeorm";

export class MyCustomLogger extends AdvancedConsoleLogger implements Logger {
    constructor(options?: LoggerOptions) {
        super(options);
    }

    logQuery(query: string, parameters?: any[], queryRunner?: QueryRunner) {
        let logText = query;
        // Truncate the log text if it's too long:
        if (logText.length > 100) {
            logText = logText.substring(0, 100) + "...";
        }
        super.logQuery(logText, parameters, queryRunner);
    }
}
There are several more methods you can override, but you can omit them from your implementation and the inherited methods will be used:
logQueryError(error: string, query: string, parameters?: any[], queryRunner?: QueryRunner)
logQuerySlow(time: number, query: string, parameters?: any[], queryRunner?: QueryRunner)
logSchemaBuild(message: string, queryRunner?: QueryRunner)
logMigration(message: string, queryRunner?: QueryRunner)
log(level: "log" | "info" | "warn", message: any, queryRunner?: QueryRunner)
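If, as in the question's update, you want to clip long parameter values rather than the query text, the clipping rule can be factored into a standalone helper and reused from logQuery, logQueryError, and so on. A minimal sketch of just that helper, with no TypeORM dependency (the name clipParams is illustrative):

```typescript
// Shorten only string parameters that exceed the limit; leave everything
// else (numbers, objects, undefined) untouched.
function clipParams(params: unknown[] | undefined, limit = 100): unknown[] | undefined {
  return params?.map((p) =>
    typeof p === "string" && p.length > limit ? p.substring(0, limit) + "..." : p
  );
}

const clipped = clipParams(["x".repeat(300), 42], 5);
// clipped is ["xxxxx...", 42]
```

Inside a subclassed logger you would then call, for example, `super.logQuery(query, clipParams(parameters), queryRunner)`.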
Now that you have created your custom logger, how to use it is explained under using custom logger.
You need to change the examples on the page slightly to add the logging option(s) you want; logging options are explained under logging options, e.g. (true), ("all"), or (["query", "error"]).
import { createConnection } from "typeorm";
import { MyCustomLogger } from "./logger/MyCustomLogger";

createConnection({
    name: "mysql",
    type: "mysql",
    host: "localhost",
    port: 3306,
    username: "test",
    password: "test",
    database: "test",
    logger: new MyCustomLogger(true) // Logging option "true" = enable logging
});
Or when using the ormconfig.json configuration file:
import { createConnection, getConnectionOptions } from "typeorm";
import { MyCustomLogger } from "./logger/MyCustomLogger";

// getConnectionOptions will read options from your ormconfig file
// and return them in a connectionOptions object;
// then you can simply append additional properties to it
getConnectionOptions().then(connectionOptions => {
    return createConnection(Object.assign(connectionOptions, {
        logger: new MyCustomLogger(connectionOptions.logging) // pass in logging options specified in the ormconfig.json file
    }));
});
If you are using async/await instead of promises, you can rewrite the latter code more clearly:
// getConnectionOptions reads options from your ormconfig file
const options = await getConnectionOptions();
// append MyCustomLogger to the connection options
await createConnection({ ...options, logger: new MyCustomLogger(options.logging) });
I am new to nest.js and I have a question.
I have a Roles Guard like this
import { CanActivate, ExecutionContext, Injectable } from '@nestjs/common';
import { Observable } from 'rxjs';
import { Reflector } from '@nestjs/core';

@Injectable()
export class RolesGuard implements CanActivate {
    constructor(private readonly reflector: Reflector) {
    }

    canActivate(context: ExecutionContext): boolean | Promise<boolean> | Observable<boolean> {
        const roles = this.reflector.get<string[]>('roles', context.getHandler());
        if (!roles) {
            return true;
        }
        const request = context.switchToHttp().getRequest();
        const user = request.user;
        return user.role.some(role => !!roles.find(item => item === role));
    }
}
Now I want to use this guard as a global guard like this
app.useGlobalGuards(new RolesGuard())
But it says that I need to pass an argument (the reflector) to the guard, as declared in the constructor. Would it be okay to initialize the reflector like this?
const reflector:Reflector = new Reflector();
app.useGlobalGuards(new RolesGuard(reflector))
Or is there a better way to do this?
In the official NestJS fundamentals course, in lecture 54, "Protect Routes with Guards", the instructor notes that it is not best practice to create an instance of the reflector yourself.
A better way to resolve dependencies is to create a common module and register your guard there. That way, the reflector instance is resolved by the Nest runtime, and you can also specify an imports array for any other dependencies.
import { Module } from '@nestjs/common';
import { APP_GUARD } from '@nestjs/core';
import { AuthTokenGuard } from './guards/auth-token.guard';
import { ConfigModule } from '@nestjs/config';

@Module({
    imports: [ConfigModule],
    providers: [
        {
            provide: APP_GUARD,
            useClass: AuthTokenGuard,
        },
    ],
})
export class CommonModule {}
app.useGlobalGuards(new RolesGuard(new Reflector()));
This also works. I could not find any better solution.
Although my answer might not add much value, I just want to reiterate that this is the intended way to get the reflector. This is a quote from NestJS's creator, kamilmysliwiec:
When you create instance manually, you can create Reflector by
yourself:
new RoleGuard(new Reflector());
Source: https://github.com/nestjs/nest/issues/396#issuecomment-363111707
2023, NestJS 9, related problem:
If you inject a request-scoped dependency into a globally registered guard, the reflector will be undefined.
You can solve this by resolving such dependencies with ContextIdFactory and moduleRef.resolve() instead of injecting them normally:
const req = context.switchToHttp().getRequest();
const contextId = ContextIdFactory.getByRequest(req);
this.moduleRef.registerRequestByContextId(req, contextId);

this.authorizationService = await this.moduleRef.resolve(
    RequestScopedService,
    contextId,
);
References:
https://docs.nestjs.com/fundamentals/module-ref
code example: https://discord.com/channels/520622812742811698/1060904277607985172
Is there a way to tell makeExecutableSchema from graphql-tools to ignore certain directives?
I want to query my neo4j database with GraphQL. I also want to be able to specify subtypes in GraphQL. There is a library called graphql-s2s which adds subtypes to GraphQL. The library neo4j-graphql-js uses custom directives (@cypher and @relation) to build an augmented schema. It takes typeDefs or an executable schema from graphql-tools. graphql-s2s lets me create an executable schema out of a schema containing subtypes. My hope was that it would be easy to just pipe the different schema outputs into each other, as in a decorator pattern.
Unfortunately, this is apparently not how it works, as I get a lot of parser error messages which are not really descriptive.
Unfortunately I haven't found any GRANDstack documentation showing how to use augmentSchema() on an executable schema with relations and cypher in it.
Is there a way to do this?
Below is my naive approach:
const { transpileSchema } = require('graphql-s2s').graphqls2s;
const { augmentSchema } = require('neo4j-graphql-js');
const { makeExecutableSchema } = require('graphql-tools');
const { ApolloServer } = require('apollo-server');
const driver = require('./neo4j-setup');

/** The @relation and @cypher directives don't make any sense here; they are
    just to illustrate having directives that matter to
    'augmentSchema' and not to 'makeExecutableSchema' **/
const typeDefs = `
type Node {
    id: ID!
}

type Person inherits Node {
    firstname: String
    lastname: String @relation(name: "SOUNDS_LIKE", direction: "OUT")
}

type Student inherits Person {
    nickname: String @cypher(
        statement: """ MATCH (n:Color)...some weird cypher query"""
    )
}

type Query {
    students: [Student]
}
`

const resolver = {
    Query: {
        students(root, args, context) {
            // Some dummy code
            return [{ id: 1, firstname: "Carry", lastname: "Connor", nickname: "Cannie" }]
        }
    }
};

const executableSchema = makeExecutableSchema({
    typeDefs: [transpileSchema(typeDefs)],
    resolvers: resolver
})

const schema = augmentSchema(executableSchema)

const server = new ApolloServer({ schema, context: { driver } });
server.listen(3003, '0.0.0.0').then(({ url }) => {
    console.log(`GraphQL API ready at ${url}`);
});