Using connect-pg-simple with Heroku PostgreSQL tries to connect without SSL

Getting this error when trying to connect to a Heroku-hosted PostgreSQL database:
error: no pg_hba.conf entry for host "IP_ADDRESS", user "USER",
database "DBNAME", SSL off
Here's my code...
import dotenv from 'dotenv'
import pg from 'pg'
import session from 'express-session'
import ConnectPgSimple from 'connect-pg-simple'

const result = dotenv.config()
const { Client, native } = pg

const pgNativePool = new native.Pool({
  max: 10, // default
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: false
  }
})

// This is the block of code causing the error
const pgSession = new ConnectPgSimple(session);
const store = new pgSession({
  pool: pgNativePool,
  tableName: 'sessions'
})

export default {
  query: (sql, params) => pgNativePool.query(sql, params),
  store // used by express-session
}
I am using connect-pg-simple to store sessions in the database. If I cut that part out, the errors do not occur. I would have thought connect-pg-simple would be reusing existing connections in native.Pool.
Any thoughts as to why it would try to reconnect without SSL?

If you install the Heroku Postgres add-on, it should attach an environment variable to your application containing the PostgreSQL connection string.
Their official example is here:
https://devcenter.heroku.com/articles/heroku-postgresql#connecting-in-node-js
So I think this code should work:
import { Pool, Client } from 'pg';

const pool = new Pool({
  max: 10,
  connectionString: process.env.DATABASE_URL,
  ssl: {
    rejectUnauthorized: true
  }
});

export default {
  query: (sql, params) => pool.query(sql, params)
};

In local development, you need to build and pass the connection string yourself.
When deploying to Heroku, however, Heroku provides the connection string via process.env.DATABASE_URL.
You may try the following code and see if it works.
const isProduction = process.env.NODE_ENV === 'production';

const connectionString = `postgresql://${process.env.DB_USER}:${process.env.DB_PASSWORD}@${process.env.DB_HOST}:${process.env.DB_PORT}/${process.env.DB_DATABASE}`;

const pool = new Pool({
  connectionString: isProduction ? process.env.DATABASE_URL : connectionString,
  ssl: {
    rejectUnauthorized: false,
  },
});

I left out the pool property name when defining the pgSession (corrected above).
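To illustrate why that property name matters, here is a minimal stand-in for connect-pg-simple's option handling (a reconstruction for illustration, not the library's actual source): the store only reuses a pool passed under the `pool` key; with any other shape it falls back to creating its own pg pool from environment defaults, which won't carry your ssl settings.

```javascript
// Simplified stand-in (an assumption about connect-pg-simple's behaviour):
// reuse the caller's pool only when it arrives under the `pool` key.
function resolvePool(options) {
  if (options.pool) return options.pool;            // reuse the caller's pool
  return 'fresh pool from PG* env vars (no ssl)';   // fallback path
}

const pgNativePool = { query: () => {} };           // stand-in for the real pool

const withKey = resolvePool({ pool: pgNativePool, tableName: 'sessions' });
const withoutKey = resolvePool({ pgNativePool, tableName: 'sessions' });

console.log(withKey === pgNativePool);    // true: the SSL-enabled pool is reused
console.log(withoutKey === pgNativePool); // false: the store built its own pool
```

With the shorthand `{ pgNativePool }`, the pool lands under the key `pgNativePool` rather than `pool`, so the store never sees it and connects on its own, without the SSL options.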

Related

HonoJS CORS with cloudflare worker

I'm starting out with Cloudflare Workers and used the recommended routing framework HonoJS.
The documented way of implementing CORS functionality doesn't work for me on my development machine (npm run dev). I didn't test it in production, since I need it to work in the development environment.
The problem is: the OPTIONS request gets a 404 returned.
How do I set a global CORS configuration?
My code is currently this:
import { Hono } from 'hono'
import { cors } from 'hono/cors'
import { basicAuth } from 'hono/basic-auth'
import { default as register } from './register.js'
const app = new Hono()
app.use('*', cors())
const user = new Hono()
// also tried: user.use('/*', cors())
user.post('/register', register)
// Register route groups
app.route('/user', user)
export default app
Also tried the following cors call:
cors({
  origin: 'http://localhost:5173',
  allowHeaders: ['X-Custom-Header', 'Upgrade-Insecure-Requests'],
  allowMethods: ['POST', 'GET', 'OPTIONS'],
  exposeHeaders: ['Content-Length', 'X-Kuma-Revision'],
  maxAge: 600,
  credentials: true,
})
Thank you very much for your time!
I fixed it by adding a wildcard for options.
app.use('*', cors({
  origin: 'http://localhost:5173',
  allowHeaders: ['Content-Type', 'Authorization'],
  allowMethods: ['POST', 'GET', 'OPTIONS'],
  exposeHeaders: ['Content-Length'],
  maxAge: 600,
  credentials: true,
}))

app.options('*', (c) => {
  return c.text('', 204)
})

504 timeout when querying Firebase Realtime Database in Next JS / Vercel

I'm facing a 504 error (timeout) when making a request to a Firebase Realtime database
import admin from "firebase-admin";

if (!admin.apps.length) {
  admin.initializeApp({
    credential: admin.credential.cert({
      project_id: process.env.FIREBASE_PROJECT_ID,
      private_key: process.env.FIREBASE_PRIVATE_KEY.replace(/\\n/g, '\n'),
      client_email: process.env.FIREBASE_CLIENT_EMAIL,
    }),
    databaseURL: process.env.FIREBASE_DATABASE,
  });
}

const db = admin.database();

export default async (req, res) => {
  if (req.method === "GET") {
    try {
      const snapshot = await db
        .ref("properties")
        .child(req.query.slug)
        .once("value");
      const properties = snapshot.val();
      return res.status(200).json({ total: properties });
    } catch (e) {
      console.error(e);
    }
  }
};
The issue happens here
const snapshot = await db
  .ref("properties")
  .child(req.query.slug)
  .once("value");
I think the Vercel app domain (i.e. the staging domain) needs to be added in GCP as an authorised domain, but I can't seem to find out where.
The request works locally (on localhost:3000) so I assume localhost is allowed in GCP.
Any ideas on how to let a Next JS app on Vercel talk to a Firebase Realtime database without a 504 timeout?
Ended up using the Firebase REST API to query the Realtime Database.
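As a rough sketch of that fallback: the Realtime Database exposes every path as JSON if you append `.json` to its URL, optionally with an `auth` query parameter. The base URL, path, and token handling below are assumptions to adjust for your database, not the exact code the answerer used.

```javascript
// Build a Realtime Database REST URL for a given path.
// `token` (a database secret or ID token) is optional.
function rtdbUrl(baseUrl, path, token) {
  const url = new URL(`${path}.json`, baseUrl);
  if (token) url.searchParams.set('auth', token);
  return url.toString();
}

// e.g. inside the Next.js API route (sketch):
// const res = await fetch(rtdbUrl(process.env.FIREBASE_DATABASE, `properties/${req.query.slug}`, token));
// const properties = await res.json();
```

Because this goes over plain HTTPS, it sidesteps whatever was stalling the Admin SDK's persistent connection in the Vercel serverless environment.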

Manage Rails credentials on lib installed with Yarn

I have installed Algolia Places in Rails with Yarn. I am trying to move the API key into the Rails credentials.
I am importing it into application.js via a places.js file:
'use strict';

import places from 'places.js';
import placesAutocompleteDataset from 'places.js/autocompleteDataset';

$(document).on('turbolinks:load', function() {
  var placesAutocomplete = places({
    appId: 'xxxxxxxx',
    apiKey: 'xxxxxxxx',
    container: document.querySelector('#user_street_address'),
    templates: {
      value: function(suggestion) {
        return suggestion.name;
      }
    }
  }).configure({
    type: 'address'
  });

  placesAutocomplete.on('change', function resultSelected(e) {
    document.querySelector('#user_state').value = e.suggestion.administrative || '';
    document.querySelector('#user_city').value = e.suggestion.city || '';
    document.querySelector('#user_zipcode').value = e.suggestion.postcode || '';
    document.querySelector('#user_country').value = e.suggestion.country || '';
  });
});
I have tried adding an initializer in /config/initializers/algoliasearch.rb:
AlgoliaSearch.configuration = {
  application_id: Rails.application.credentials.algolia[:applicationId],
  api_key: Rails.application.credentials.algolia[:apiKey],
  # uncomment to use backend pagination
  # pagination_backend: :will_paginate
}
but I receive an uninitialized constant error.
How can I secure the credentials?
You would have to use ERB interpolation if you want to use the Rails encrypted credentials in your JavaScript.
But it's really a pointless endeavor, as the credentials are delivered to the client in cleartext anyway; there are no secrets in the browser.
Instead, you can just pass them as ENV variables to webpack.
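A sketch of the ENV-variable approach, assuming Webpacker (which inlines process.env.* values at build time via webpack's DefinePlugin). The variable names ALGOLIA_APP_ID and ALGOLIA_API_KEY are assumptions, not something Algolia or Rails defines:

```javascript
// Read the Algolia credentials from environment variables instead of
// hard-coding them in the source. With Webpacker these are substituted
// at compile time, so they must be set when assets are built.
const algoliaConfig = {
  appId: process.env.ALGOLIA_APP_ID || '',
  apiKey: process.env.ALGOLIA_API_KEY || '',
};

// places.js would then be initialised with:
// places({ appId: algoliaConfig.appId, apiKey: algoliaConfig.apiKey, container: ... })
```

Note this only keeps the keys out of version control; as said above, whatever ends up in the bundle is still visible to every visitor, so only use keys that are safe to expose (e.g. a search-only key).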

Neo4jError: The client is unauthorized due to authentication failure

I am a newbie to Neo4j. I am using the JavaScript driver and need to know what I am doing wrong. I am attaching a code snippet. Am I missing something?
Please guide me.
app.js
var express = require('express');
var bodyParser = require('body-parser');
var neo4j = require('neo4j-driver').v1;

var app = express();
app.use(bodyParser.json());

var driver = neo4j.driver('bolt://localhost', neo4j.auth.basic("neo4j", "neo4j"));
var session = driver.session();

session
  .run('MERGE (alice:Person {name : {nameParam} }) RETURN alice.name AS name', {nameParam: 'Alice'})
  .subscribe({
    onNext: function (record) {
      console.log(record.get('name'));
    },
    onCompleted: function () {
      session.close();
    },
    onError: function (error) {
      console.log(error);
    }
  });

/* Start the express App and listen on port 8080 */
var initServer = function () {
  var server = app.listen(8080);
  console.log('info', '*********** Application Server is listening on Port 8080 ***********');
};

initServer();
error :
I briefly bumped into the same problem earlier today. But then I remembered that when I started the neo4j server earlier, I had navigated to http://localhost:7474 from the browser and signed in using the default credentials username=neo4j and password=neo4j, which then prompted me to create a new password before I could proceed.
My hunch is that you probably have not changed the default password, which you need to do. After that, you should have no problem with authentication. Use the short program below to check if you are good. Create a file index.js and add this:
const neo4j = require('neo4j-driver').v1;

const driver = neo4j.driver("bolt://localhost", neo4j.auth.basic("neo4j", "YOUR_NEW_PASSWORD"));
const session = driver.session();
const personName = 'Alice';

session.run(
  'CREATE (a:Person {name: $name}) RETURN a',
  {name: personName})
  .then(result => {
    session.close();
    const singleRecord = result.records[0];
    const node = singleRecord.get(0);
    console.log(node.properties.name);
    // on application exit:
    driver.close();
  })
  .catch(error => console.log(error));
Then, using Node.js, simply execute this from the command prompt:
node index.js
You should see the output "Alice" on the command line.

React-Native + Apollo-Link-State + AWS Appsync : Defaults are not stored in cache

I've been struggling with this for days.
I successfully configured Apollo-Link-State with AppSync a month ago and started adding defaults and resolvers like this:
const cache = new InMemoryCache() // create the cache to be shared by appsync and apollo-link-state

const appSyncAtrributes = {
  //...
  // setup appSync
}

const stateLink = createLinkWithCache(() => withClientState({
  cache,
  resolvers: {
    Mutation: {
      updateCurrentUser: (_, { username, thumbnailUrl }, { cache }) => {
        // ...
        // working mutation that stores the login information when the user authenticates.
      }
    }
  },
  defaults: {
    currentUser: {
      __typename: 'CurrentUser',
      username: '',
      thumbnailUrl: ''
    }
  }
}))

const appSyncLink = createAppSyncLink({
  url: appSyncAtrributes.graphqlEndpoint,
  region: appSyncAtrributes.region,
  auth: appSyncAtrributes.auth,
  complexObjectsCredentials: () => Auth.currentCredentials()
})

const link = ApolloLink.from([stateLink, appSyncLink])
const client = new AWSAppSyncClient({}, { link })
So far it worked (I'm calling the #client mutations and queries around my app).
But now I'm trying to add other data to my linked state, like this (everything else stayed the same):
defaults: {
  currentUser: {
    __typename: 'CurrentUser',
    username: '',
    thumbnailUrl: '',
    foo: 'bar',
  },
  hello: 'world',
  userSettings: {
    __typename: 'userSettings',
    isLeftHanded: true
  }
}
And my cache doesn't update. I mean:
currentUser still contains `__typename: 'CurrentUser', username: '', thumbnailUrl: ''`, but it doesn't contain `foo: 'bar'`. And the cache doesn't contain `hello: 'world'` or `userSettings` at all.
More confusing is the fact that if I give a value to username or thumbnailUrl, like `username: 'joe'`, the cache actually reflects that change (while ignoring all my other modifications)!
I tried all variations of this experiment and cleared all caches (even running it on a fresh colleague's computer to be sure there was no dirty cache involved).
I'm completely clueless.
Context :
It happens in the iOS simulator
I'm debugging watching the redux cache of appsync
apollo-cache-inmemory: 1.2.8
apollo-client: 2.3.5
apollo-link: 1.2.2
apollo-link-state: 0.4.1
aws-appsync: 1.3.2
Update: Actually, currentUser is not stored from the defaults either. It only gets into the cache when the mutation is called.
OK, my issue wasn't an issue after all (2 days wasted).
The lack of proper debug tools (I had to watch the cache evolving through redux) was the real issue.
The cache is actually written correctly; it just doesn't show anywhere.
As soon as I started querying that cache, everything worked.
Can't wait for react-native-debugger to integrate a proper apollo/graphql analyser.
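For the record, "querying that cache" means running a client-only query against the defaults. A sketch of what that looks like, written as a plain string here (with apollo-client you would wrap it with gql`...` from graphql-tag before passing it to client.query()):

```javascript
// Client-only query that reads the link-state defaults out of the cache.
// The @client directive routes these fields to apollo-link-state instead
// of the AppSync endpoint.
const GET_LOCAL_STATE = `
  query LocalState {
    hello @client
    userSettings @client {
      isLeftHanded
    }
  }
`;

// Sketch of usage with the client from above:
// client.query({ query: gql(GET_LOCAL_STATE), fetchPolicy: 'cache-only' })
//   .then(({ data }) => console.log(data.hello, data.userSettings.isLeftHanded));
```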
