Is it possible to use the Connection class as a provider, like here?
import { Connection, createConnection } from 'typeorm';

export const databaseProviders = [{
  provide: Connection,
  useFactory: async () => await createConnection({
    type: 'postgres',
    host: 'localhost',
    port: 5432,
    username: 'postgres',
    password: 'postgres',
    database: 'testo',
    entities: [
      __dirname + '/../**/*.entity{.ts,.js}',
    ],
    logging: true,
    synchronize: true,
  }),
}];
So that constructor injection works like this:
constructor(
  private connection: Connection,
) {
  this.repository = connection.getRepository(Project);
}
In that case Nest can't resolve the dependency. I guess the problem is in TypeORM: the Connection class is compiled to a plain ES5 function. But maybe there is a solution for this?
repository to reproduce
UPDATE:
I found an acceptable solution, the nestjs typeorm module, but I don't understand why my Connection class did not work while string tokens work fine. Hope @kamil-myśliwiec will help to understand.
modules: [
  TypeOrmModule.forRoot(
    [
      Build,
      Project,
    ],
    {
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username: 'postgres',
      password: 'postgres',
      database: 'testo',
      entities: [
        Build,
      ],
      logging: true,
      synchronize: true,
    }),
],
// And then inject by entity name like this
@InjectRepository(Build) private repository: Repository<Build>,
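For comparison, the older Nest database recipe registers the connection under a string token rather than the Connection class itself. A minimal sketch of that approach, assuming the Project entity from the question; the 'DATABASE_CONNECTION' token name and ProjectService are my own, and the connection options are the ones from the original snippet:
import { Inject, Injectable } from '@nestjs/common';
import { Connection, Repository, createConnection } from 'typeorm';

// provider registered under a string token instead of the Connection class
export const databaseProviders = [{
  provide: 'DATABASE_CONNECTION',
  useFactory: async () =>
    createConnection({
      type: 'postgres',
      host: 'localhost',
      port: 5432,
      username: 'postgres',
      password: 'postgres',
      database: 'testo',
      entities: [__dirname + '/../**/*.entity{.ts,.js}'],
      synchronize: true,
    }),
}];

@Injectable()
export class ProjectService {
  private repository: Repository<Project>;

  // the string token resolves even though Connection compiles to a plain function
  constructor(@Inject('DATABASE_CONNECTION') private readonly connection: Connection) {
    this.repository = connection.getRepository(Project);
  }
}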
Related
I am trying to create a Fargate service that connects to an Aurora Postgres DB through the CDK, but I am unable to. I get a connection error. This should be pretty straightforward though. Does anybody have any resources?
export class myStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    const DATABASE_NAME = "myDatabase";

    const vpc = new ec2.Vpc(this, "myVpc");
    const cluster = new ecs.Cluster(this, "myCluster", {
      vpc: vpc
    });

    const databaseAdminSecret = new Secret(this, 'myCredentialsSecret', {
      secretName: 'database-secret',
      description: 'Database auto-generated user password',
      generateSecretString: {
        secretStringTemplate: JSON.stringify({ username: 'boss' }),
        generateStringKey: 'password',
        passwordLength: 30,
        excludeCharacters: "\"#/\\",
        excludePunctuation: true,
      }
    });

    const database = new rds.DatabaseCluster(this, 'myDatabase', {
      engine: rds.DatabaseClusterEngine.auroraPostgres({
        version: rds.AuroraPostgresEngineVersion.VER_14_5
      }),
      credentials: rds.Credentials.fromSecret(databaseAdminSecret),
      instanceProps: {
        vpc,
      },
      defaultDatabaseName: DATABASE_NAME,
      port: 5432,
    });

    // Create a load-balanced Fargate service and make it public
    const service = new ecs_patterns.ApplicationLoadBalancedFargateService(this, "myService", {
      cluster: cluster, // Required
      taskImageOptions: {
        image: ecs.ContainerImage.fromRegistry('my-custom-image'),
        environment: {
          DB_URL: `jdbc:postgresql://${database.clusterEndpoint.socketAddress}/${DATABASE_NAME}`,
          DB_USERNAME: databaseAdminSecret.secretValueFromJson('username').unsafeUnwrap().toString(),
          DB_PASSWORD: databaseAdminSecret.secretValueFromJson('password').unsafeUnwrap().toString(),
        },
      },
    });

    // Allow the service to connect to the database
    database.connections.allowDefaultPortFrom(service.service);
  }
}
When I spin up this stack with cdk deploy, my Fargate service ends up dying because "Connection was refused".
What am I doing wrong?
Thanks,
Is it possible to "require SSL" on an Aurora database created using the AWS CDK? We've enabled encryption, but that is only "at rest", and we're also required to encrypt "in transit" and are being flagged by a security monitor because the database does not "require SSL".
Here is the code we use to set up the database:
const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
  defaultDatabaseName: dbName,
  engine: rds.DatabaseClusterEngine.auroraPostgres({ version: rds.AuroraPostgresEngineVersion.VER_13_7 }),
  credentials: {
    username: dbUser,
    password: pgPasswordSecret.secretValue,
  },
  instanceProps: {
    securityGroups: [securityGroup],
    instanceType: primaryPostgresInstanceType(),
    vpcSubnets: {
      subnetType: ec2.SubnetType.PRIVATE_WITH_NAT,
    },
    vpc,
  },
  storageEncrypted: true,
  backup: {
    retention: Duration.days(15),
  },
})
The solution (from https://gitter.im/awslabs/aws-cdk?at=5e2ab552f196225bd64b7581) is to pass a parameterGroup when creating the database cluster, setting rds.force_ssl to '1':
const postgresVersion = rds.AuroraPostgresEngineVersion.VER_13_7

const parameterGroup = new rds.ParameterGroup(scope, 'ClusterParameterGroup', {
  engine: rds.DatabaseClusterEngine.auroraPostgres({ version: postgresVersion }),
  parameters: {
    'rds.force_ssl': '1',
  },
})

const cluster = new rds.DatabaseCluster(scope, 'TheDB', {
  ...
  parameterGroup,
  ...
})
I am working on a database migration project and it requires me to use Sequelize. I initialized Sequelize's CLI (using npx sequelize-cli init), which added the config.json file:
config, contains config file, which tells CLI how to connect with database
The config.json file has this object:
"production": {
"username": "root",
"password": null,
"database": "database_production",
"host": "127.0.0.1",
"dialect": "mysql"
}
But I don't want to save my password in a config.json file. I want to use an environment variable instead. What can I do?
Rename config.json to config.js, install dotenv, and use the code below:
require('dotenv').config();

module.exports = {
  development: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  },
  test: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  },
  production: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  }
};
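With dotenv in place, the values come from a .env file in the project root. A minimal example; the variable names match the config above, the values are placeholders:
# .env (do not commit this file)
DB_USER=root
DB_PASSWORD=changeme
DB_NAME=database_production
DB_HOST=127.0.0.1
DB_DRIVER=mysql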
First, you have to create a .sequelizerc file in the project root. Then add the following code there:
var path = require('path');

module.exports = {
  'config': path.resolve('config', 'config.js'),
}
Note: Change the path as per your project.
Then you can rename the config.json file to config.js and add the below code there. (As in Tunde's answer.)
require('dotenv').config();

module.exports = {
  development: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  },
  test: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  },
  production: {
    username: process.env.DB_USER,
    password: process.env.DB_PASSWORD,
    database: process.env.DB_NAME,
    host: process.env.DB_HOST,
    dialect: process.env.DB_DRIVER
  }
};
I have multiple connections created with TypeORM's createConnections, as follows:
await createConnections([
  {
    name: 'main',
    type: 'mysql',
    host: process.env.DB_HOST,
    port: process.env.DB_PORT,
    username: process.env.DB_USERNAME,
    password: process.env.DB_PASS,
    database: process.env.DB,
    logging: true,
    entities: [Property],
  },
  {
    name: 'favourites',
    type: 'postgres',
    url: process.env.DATABASE_URL,
    logging: false,
    synchronize: true,
    migrations: [path.join(__dirname, './migrations/*')],
    entities: [User],
  },
]);
In my Property entity, which is registered with the first connection (named main), I have the following, stating the database the entity is to use:
import { ObjectType, Field } from 'type-graphql';
import { Entity, PrimaryGeneratedColumn, Column, BaseEntity } from 'typeorm';

@ObjectType()
@Entity({ database: 'main', name: 'property' })
class Property extends BaseEntity {
  @Field()
  @PrimaryGeneratedColumn()
  id!: number;

  @Field(() => String)
  @Column({ name: 'account_number' })
  accountNumber!: string;
}

export default Property;
However, in a GraphQL resolver, when I try
await Property.findOne(1234)
I get the following error
ConnectionNotFoundError: Connection "default" was not found.
    at ConnectionNotFoundError.TypeORMError [as constructor]
However, when I remove the name property from the first connection in createConnections and remove the database property from the Property entity's @Entity options, it works.
So, how do I get this to work on a named connection?
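If I read the behaviour right, the active-record call (Property.findOne) goes through the connection named "default", which no longer exists once the first connection is named main. A sketch of one possible workaround, fetching the repository from the named connection explicitly (not verified against this exact setup):
import { getConnection } from 'typeorm';

// inside the resolver: look up the connection registered under the name 'main'
// and query the Property entity through it instead of the active-record call
const propertyRepository = getConnection('main').getRepository(Property);
const property = await propertyRepository.findOne(1234);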
I'm developing a Django+Vue app using VSCode devcontainers (Docker).
I have recently migrated from Vue CLI v4 to Vue CLI v5 following the migration guide.
After the migration, the HMR of the dev-server stopped working.
This was my vue.config.js before the migration:
const BundleTracker = require("webpack-bundle-tracker");

module.exports = {
  publicPath: process.env.NODE_ENV === "development" ? "http://localhost:8080/" : "/static/",
  devServer: {
    host: "0.0.0.0",
    port: 8080,
    public: "0.0.0.0:8080",
    https: false,
    headers: { "Access-Control-Allow-Origin": ["*"] },
    hotOnly: true,
    watchOptions: {
      ignored: "./node_modules/",
      aggregateTimeout: 300,
      poll: 1000,
    },
  },
  transpileDependencies: ["vuetify"],
  css: {
    sourceMap: true,
  },
  chainWebpack: (config) => {
    config.plugin("BundleTracker").use(BundleTracker, [
      {
        filename: `./config/webpack-stats-${process.env.NODE_ENV}.json`,
      },
    ]);
    config.resolve.alias.set("__STATIC__", "static");
  },
};
And after:
const { defineConfig } = require("@vue/cli-service");
const BundleTracker = require("webpack-bundle-tracker");

module.exports = defineConfig({
  publicPath: process.env.NODE_ENV === "development" ? "http://localhost:8080/" : "/static/",
  devServer: {
    host: "0.0.0.0",
    port: 8080,
    client: {
      webSocketURL: "auto://0.0.0.0:8080/ws",
    },
    https: false,
    headers: { "Access-Control-Allow-Origin": ["*"] },
    hot: "only",
    static: {
      watch: {
        ignored: "./node_modules/",
        aggregateTimeout: 300,
        poll: 1000,
      },
    },
  },
  transpileDependencies: ["vuetify"],
  css: {
    sourceMap: true,
  },
  chainWebpack: (config) => {
    config.plugin("BundleTracker").use(BundleTracker, [
      {
        filename: `./config/webpack-stats-${process.env.NODE_ENV}.json`,
      },
    ]);
    config.resolve.alias.set("__STATIC__", "static");
  },
});
After the migration, a new warning shows every time I run npm run serve (but devServer.public has been removed in v5!):
App running at:
- Local: http://localhost:8080/
It seems you are running Vue CLI inside a container.
Since you are using a non-root publicPath, the hot-reload socket
will not be able to infer the correct URL to connect. You should
explicitly specify the URL via devServer.public.
Access the dev server via http://localhost:<your container's external mapped port>
Note that the development build is not optimized.
To create a production build, run npm run build.
Any ideas?
Thanks in advance!
My team had similar issues to what you are describing. We fixed it by adding the optimization object to our webpack config (vue.config.js).
const { defineConfig } = require('@vue/cli-service');

module.exports = defineConfig({
  /* your config */
  configureWebpack: {
    optimization: {
      runtimeChunk: 'single',
    },
  },
  devServer: {
    // we need this for apollo to work properly
    proxy: `https://${process.env.SANDBOX_HOSTNAME}/`,
    host: '0.0.0.0',
    allowedHosts: 'all',
  },
});
The optimization part fixed the HMR for us. However, if you're using Firefox, your console might still be spammed by error messages like these:
The connection to wss://SANDBOX_HOSTNAME:8080/ws was interrupted while
the page was loading.
This had been a blocker for Vue 3 for us, so I hope it helps. ✌️