Is passing environment variables to Sapper's client side secure with Rollup Replace?

I am using replace in my rollup configuration for sapper and sapper-environment to pass environment variables to the client side in sapper - is this secure? Is there a better/safer way to approach this?
Using this config below:
rollup.config.js
const sapperEnv = require('sapper-environment');
export default {
  client: {
    input: config.client.input(),
    output: config.client.output(),
    plugins: [
      replace({
        ...sapperEnv(),
        'process.browser': true,
        'process.env.NODE_ENV': JSON.stringify(mode)
      })
      ...
And then this allows me to use the variables in stores.js:
import { writable } from 'svelte/store';
import Client from 'shopify-buy';
const key = process.env.SAPPER_APP_SHOPIFY_KEY;
const domain = process.env.SAPPER_APP_SHOPIFY_DOMAIN;
// Initialize a client
const client = Client.buildClient({
  domain: domain,
  storefrontAccessToken: key
});
export { key, domain, client };
I have tried running this in server.js and passing the variables through the session data, but on the client side, no matter what I do, they always seem to return 'undefined'.

There are two questions here — a) is it secure, and b) why are the values undefined?
The answer to the first question is 'no'. Any time you include credentials in JavaScript that gets served to the client (or in session data), you're making those credentials available to anyone who knows how to look for them. If you need to avoid that, you'll need your server (or another server) to make requests on behalf of authenticated clients.
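For the Shopify case above, a minimal sketch of what that could look like with a Sapper server route that proxies the request so the Storefront token never reaches the browser. The route path, env var names, and response shape below are assumptions, not the asker's actual code:
// src/routes/api/products.js (hypothetical route name)
import Client from 'shopify-buy';

// These env vars are only read on the server, so they are never
// bundled into the client-side JavaScript.
const client = Client.buildClient({
  domain: process.env.SHOPIFY_DOMAIN,
  storefrontAccessToken: process.env.SHOPIFY_KEY
});

export async function get(req, res) {
  try {
    const products = await client.product.fetchAll();
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify(products));
  } catch (err) {
    res.writeHead(500, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: err.message }));
  }
}
The client then fetches /api/products (or whatever the route is called) and never sees the key or domain.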
As for the second part, it's very hard to tell without a reproduction unfortunately!

Related

Autodesk Simple Viewer - "Could not list models"

I'm trying to implement the code example in this repo:
https://github.com/autodesk-platform-services/aps-simple-viewer-dotnet
While launching in debugging mode, I get an error in AuthController.cs that says:
Could not list models. See the console for more details
I didn't make any significant changes to the original code; I only changed the env vars (client id, secret, etc.).
The error comes from the function below:
async function setupModelSelection(viewer, selectedUrn) {
    const dropdown = document.getElementById('models');
    dropdown.innerHTML = '';
    try {
        const resp = await fetch('/api/models');
        if (!resp.ok) {
            throw new Error(await resp.text());
        }
        const models = await resp.json();
        dropdown.innerHTML = models.map(model => `<option value=${model.urn} ${model.urn === selectedUrn ? 'selected' : ''}>${model.name}</option>`).join('\n');
        dropdown.onchange = () => onModelSelected(viewer, dropdown.value);
        if (dropdown.value) {
            onModelSelected(viewer, dropdown.value);
        }
    } catch (err) {
        alert('Could not list models. See the console for more details.');
        console.error(err);
    }
}
I get an access token, so my client id and secret are probably correct. I also added the app to the cloud hub. What could be the problem? Why can't the app find the projects in the hub?
I can only repeat what AlexAR said - the given sample is not for accessing files from user hubs like ACC/BIM 360 Docs. For that, follow this tutorial: https://tutorials.autodesk.io/tutorials/hubs-browser/
To address the specific error: one way I can reproduce it is by setting the APS_BUCKET variable to something simple that has likely been used by someone else already, e.g. "mybucket". I then get an error when trying to access the files in it, since it's not my bucket. Bucket names need to be globally unique. If you don't want to come up with a unique name yourself, simply do not declare the APS_BUCKET environment variable and the sample will generate a bucket name for you based on your app's client id.
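As a purely hypothetical illustration (this is not the sample's actual code, and the env var names are assumptions), deriving the bucket key from the client id is what keeps it globally unique:
// Hypothetical illustration only - not the sample's actual code.
const clientId = process.env.APS_CLIENT_ID || '';
// Bucket keys must be lowercase and globally unique across all APS apps,
// so falling back to a name derived from the client id avoids collisions.
const bucketKey = process.env.APS_BUCKET || `${clientId.toLowerCase()}-basic-app`;
console.log('Using bucket:', bucketKey);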

How to use a Google Secret in a deployed Cloud Run Service (managed)?

I have a running cloud run service user-service. For test purposes I passed client secrets via environment variables as plain text. Now since everything is working fine I'd like to use a secret instead.
In the "Variables" tab of the "Edit Revision" option I can declare environment variables but I have no idea how to pass in a secret? Do I just need to pass the secret name like ${my-secret-id} in the value field of the variable? There is not documentation on how to use secrets in this tab only a hint at the top:
Store and consume secrets using Secret Manager
Which is not very helpful in this case.
You can now read secrets from Secret Manager as environment variables in Cloud Run. This means you can audit your secrets, set permissions per secret, version secrets, etc, and your code doesn't have to change.
You can point to the secrets through the Cloud Console GUI (console.cloud.google.com) or make the configuration when you deploy your Cloud Run service from the command-line:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=ENV_VAR_NAME=SECRET_NAME:VERSION
Six-minute video overview: https://youtu.be/JIE89dneaGo
Detailed docs: https://cloud.google.com/run/docs/configuring/secrets
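Once the secret is wired up this way, application code reads it like any other environment variable, which is why nothing has to change. A tiny Node sketch, using the ENV_VAR_NAME placeholder from the deploy command above:
// Reads the secret that Cloud Run injected as an environment variable.
const secretValue = process.env.ENV_VAR_NAME;
if (!secretValue) {
  throw new Error('ENV_VAR_NAME is not set - check the --update-secrets flag');
}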
UPDATE 2021: There is now a Cloud Run preview for loading secrets to an environment variable or a volume. https://cloud.google.com/run/docs/configuring/secrets
The question is now answered, however I have been experiencing a similar problem using Cloud Run with Java & Quarkus and a native image created using GraalVM.
While Cloud Run is a really interesting technology, at the time of writing it lacks the ability to load secrets through the Cloud Run configuration. This has certainly added complexity to my app when doing local development.
Additionally, Google's documentation is really quite poor. The quick-start lacks a clear Java example of getting a secret[1] without it being set in the same method - I'd expect this to have been the most common use case!
The javadoc itself seems to be largely autogenerated, with protobuf language everywhere. There are various similarly named methods like getSecret, getSecretVersion and accessSecretVersion.
I'd really like to see some improvement from Google around this. I don't think it is asking too much for dedicated teams to make libraries for common languages with proper documentation.
Here is a snippet that I'm using to load this information. It requires the GCP Secret library and also the GCP Cloud Core library for loading the project ID.
public String getSecret(final String secretName) {
    LOGGER.info("Going to load secret {}", secretName);
    // SecretManagerServiceClient should be closed after request
    try (SecretManagerServiceClient client = buildClient()) {
        // Latest is an alias to the latest version of a secret
        final SecretVersionName name = SecretVersionName.of(getProjectId(), secretName, "latest");
        return client.accessSecretVersion(name).getPayload().getData().toStringUtf8();
    }
}

private String getProjectId() {
    if (projectId == null) {
        projectId = ServiceOptions.getDefaultProjectId();
    }
    return projectId;
}

private SecretManagerServiceClient buildClient() {
    try {
        return SecretManagerServiceClient.create();
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
[1] - https://cloud.google.com/secret-manager/docs/reference/libraries
Google has documentation for the Secret Manager client libraries that you can use in your API.
This should help you do what you want:
https://cloud.google.com/secret-manager/docs/reference/libraries
Since you haven't specified a language, here is a Node.js example of how to access the latest version of your secret using your project id and secret name. The reason I add this is because the documentation is not clear about the string you need to provide as the name.
const [version] = await this.secretClient.accessSecretVersion({
  name: `projects/${process.env.project_id}/secrets/${secretName}/versions/latest`,
});
return version.payload.data.toString();
Be sure to allow secret manager access in your IAM settings for the service account that your api uses within GCP.
I kinda found a way to use secrets as environment variables.
The following doc (https://cloud.google.com/sdk/gcloud/reference/run/deploy) states:
Specify secrets to mount or provide as environment variables. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables. The values associated with each of these should be in the form SECRET_NAME:KEY_IN_SECRET; you may omit the key within the secret to specify a mount of all keys within the secret.
For example: '--update-secrets=/my/path=mysecret,ENV=othersecret:key.json' will create a volume with secret 'mysecret' and mount that volume at '/my/path'. Because no secret key was specified, all keys in 'mysecret' will be included. An environment variable named ENV will also be created whose value is the value of 'key.json' in 'othersecret'. At most one of these may be specified.
Here is a snippet of Java code to get all secrets of your Cloud Run project. It requires the com.google.cloud/google-cloud-secretmanager artifact.
Map<String, String> secrets = new HashMap<>();
String projectId;
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) (new URL(url).openConnection());
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}

Set<String> names = new HashSet<>();
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
    ProjectName projectName = ProjectName.of(projectId);
    ListSecretsPagedResponse pagedResponse = client.listSecrets(projectName);
    pagedResponse
        .iterateAll()
        .forEach(secret -> { names.add(secret.getName()); });
    for (String secretName : names) {
        String name = secretName.substring(secretName.lastIndexOf("/") + 1);
        SecretVersionName nameParam = SecretVersionName.of(projectId, name, "latest");
        String secretValue = client.accessSecretVersion(nameParam).getPayload().getData().toStringUtf8();
        secrets.put(secretName, secretValue);
    }
}
Cloud Run support for referencing Secret Manager Secrets is now at general availability (GA).
https://cloud.google.com/run/docs/release-notes#November_09_2021

How to fetch SSM Parameters from two different accounts using AWS CDK

I have a scenario where I'm using CodePipeline to deploy my cdk project from a tools account to several environment accounts.
The way my pipeline is deploying is by running cdk deploy from within a CodeBuild job.
My team has decided to use SSM Parameter Store to store configuration, and we ended up with some parameters living in the environment account, for example the VPC_ID (resources/vpc/id), which I can read at deployment time with ssm.StringParameter.valueForStringParameter.
However, other parameters are living in the tools account, such as the Account Ids from my environment accounts (environment/nonprod/account/id) and other Global Config. I'm having trouble fetching those values.
At the moment, the only way I could think of was adding a previous step that reads all those values and loads them into the context values.
Is there a more elegant approach for this problem? I was hoping I could specify in which account to get the SSM values from. Any ideas?
Thank you.
As you already stated, there is no native support for that. I am also using CodePipeline in cross-account deployments, so all the automation parameters or product-specific parameters are stored in a secured account and CodePipeline deploys the resources using CloudFormation as an action provider.
Cross-account resolution of SSM parameters isn't supported, so in the end I added an extra step (stage) to my CodePipeline, which is nothing else but a CodeBuild project that runs a script in a containerized environment; the script then "syncs" the parameters from the automation account to the destination account.
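A minimal sketch of what such a sync script could look like in Node with the AWS SDK v2. The role ARN, parameter path, and session name are assumptions; the real script will differ:
// Hypothetical sync script: copy SSM parameters from the automation (tools)
// account into the destination account by assuming a role there.
const AWS = require('aws-sdk');

async function syncParameters() {
  // Read the parameters from the account this CodeBuild job runs in (the tools account).
  const sourceSsm = new AWS.SSM();
  const source = await sourceSsm.getParametersByPath({
    Path: '/environment/nonprod/',   // assumed path
    Recursive: true,
    WithDecryption: true,
  }).promise();

  // Assume a role in the destination account (role name is an assumption).
  const sts = new AWS.STS();
  const assumed = await sts.assumeRole({
    RoleArn: 'arn:aws:iam::111111111111:role/ParameterSyncRole',
    RoleSessionName: 'parameter-sync',
  }).promise();

  const destSsm = new AWS.SSM({
    accessKeyId: assumed.Credentials.AccessKeyId,
    secretAccessKey: assumed.Credentials.SecretAccessKey,
    sessionToken: assumed.Credentials.SessionToken,
  });

  // Write each parameter into the destination account under the same name.
  for (const param of source.Parameters) {
    await destSsm.putParameter({
      Name: param.Name,
      Value: param.Value,
      Type: param.Type,
      Overwrite: true,
    }).promise();
  }
}

syncParameters().catch(err => { console.error(err); process.exit(1); });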
As part of your pipeline, I would add a preliminary step to execute a Lambda. That Lambda can then execute whatever queries you wish to obtain whatever metadata/config that is required. The output from that Lambda can then be passed in to the CodeBuild step.
e.g. within the Lambda:
// Imports assumed: the aws-sdk (v2) client and the aws-lambda type definitions.
import * as AWS from 'aws-sdk';
import { CodePipelineEvent, Context } from 'aws-lambda';

export class ConfigFetcher {
  codepipeline = new AWS.CodePipeline();

  async fetchConfig(event: CodePipelineEvent, context: Context): Promise<void> {
    // Retrieve the Job ID from the Lambda action
    const jobId = event['CodePipeline.job'].id;
    // now get your config by executing whatever queries you need, even cross-account, via the SDK
    // we assume that the answer is in the variable someValue
    const params = {
      jobId: jobId,
      outputVariables: {
        MY_CONFIG: someValue,
      },
    };
    // now tell CodePipeline you're done
    await this.codepipeline.putJobSuccessResult(params).promise().catch(err => {
      console.error('Error reporting build success to CodePipeline: ' + err);
      throw err;
    });
    // make sure you have some sort of catch wrapping the above to post a failure to CodePipeline
    // ...
  }
}

const configFetcher = new ConfigFetcher();

exports.handler = async function fetchConfigMetadata(event: CodePipelineEvent, context: Context): Promise<void> {
  return configFetcher.fetchConfig(event, context);
};
Assuming that you create your pipeline using CDK, then your Lambda step will be created using something like this:
const fetcherAction = new LambdaInvokeAction({
  actionName: 'FetchConfigMetadata',
  lambda: configFetcher,
  variablesNamespace: 'ConfigMetadata',
});
Note the use of variablesNamespace: we need to refer to this later in order to retrieve the values from the Lambda's output and insert them as env variables into the CodeBuild environment.
Now our CodeBuild definition, again assuming we create using CDK:
new CodeBuildAction({
  // ...
  environmentVariables: {
    MY_CONFIG: {
      type: BuildEnvironmentVariableType.PLAINTEXT,
      value: '#{ConfigMetadata.MY_CONFIG}',
    },
  },
});
We can call the variable whatever we want within CodeBuild, but note that ConfigMetadata.MY_CONFIG needs to match the namespace and output value of the Lambda.
You can have your lambda do anything you want to retrieve whatever data it needs - it's just going to need to be given appropriate permissions to reach across into other AWS accounts if required, which you can do using role assumption. Using a Lambda as a pipeline step will be a LOT faster than using a CodeBuild step in the pipeline, plus it's easier to change: if you write your Lambda code in Typescript/JS or Python, you can even use the AWS console to do in-place edits whilst you test that it executes correctly.
AFAIK there is no native way to achieve what you described. If there is a way, I'd like to know too. I believe you can use a CloudFormation custom resource backed by a Lambda for this purpose.
You can pass parameters in the Lambda request and get information back from the Lambda response.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources-lambda.html, https://www.2ndwatch.com/blog/a-step-by-step-guide-on-using-aws-lambda-backed-custom-resources-with-amazon-cfts/ and https://docs.aws.amazon.com/cdk/api/latest/docs/custom-resources-readme.html for more information.
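If you go that route, the CDK's custom-resources module can save you from writing the Lambda yourself. A hedged sketch only: package names assume CDK v1, the role ARN and parameter name are made up, and assumedRoleArn support depends on your CDK version.
// Hypothetical: inside your Stack constructor, read an SSM parameter via an
// AwsCustomResource, optionally assuming a role in the tools account.
const cr = require('@aws-cdk/custom-resources');

const getToolsParam = new cr.AwsCustomResource(this, 'GetToolsAccountId', {
  onUpdate: {
    service: 'SSM',
    action: 'getParameter',
    parameters: {
      Name: '/environment/nonprod/account/id', // assumed parameter name
      WithDecryption: true,
    },
    // Cross-account read: assume a role that lives in the tools account (assumed ARN).
    assumedRoleArn: 'arn:aws:iam::222222222222:role/ConfigReadRole',
    physicalResourceId: cr.PhysicalResourceId.of('GetToolsAccountId'),
  },
  policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
    resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
  }),
});

// The resolved value can then be used elsewhere in the stack.
const toolsAccountId = getToolsParam.getResponseField('Parameter.Value');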
This question is a year old, but a simpler method I found for retrieving parameters from your tools/deployment account is to specify them as env variables in your buildspec file. CodeBuild will always pull these from whatever account your job is running in (which in this question's scenario would be the tools account).
To pull parameters from your target environment accounts, it's best to use the CDK SSM approach suggested by the question author.
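For reference, the same idea expressed in CDK rather than a raw buildspec (the variable and parameter names below are made up): CodeBuild resolves PARAMETER_STORE environment variables from the account the build runs in, i.e. the tools account.
// Hypothetical CDK snippet, mirroring the CodeBuildAction shown earlier:
// CodeBuild pulls this value from Parameter Store in the account the job runs in.
new CodeBuildAction({
  // ...
  environmentVariables: {
    GLOBAL_CONFIG: {
      type: BuildEnvironmentVariableType.PARAMETER_STORE,
      value: '/environment/nonprod/account/id', // assumed parameter name
    },
  },
});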

Is it possible to create product keys for my electron application?

I want to build a desktop application and be able to publish product keys or serial numbers. Before the user can use the application, they will be requested to enter the product key/serial number.
Similar to Microsoft Office when they provide keys like XXXX-XXXX-XXXX-XXXX
The idea I have is to sell the app based on licenses, and providing a product key for every device seems more professional than accounts (usernames and passwords).
So my questions are:
1) Is it possible to accomplish this with Electron?
2) Can you advise me whether I should go for serial numbers (if it is doable) or accounts? Or are there better options?
3) If you answered the second question, please state why.
Edit for 2021: I'd like to revise this answer, as it has generated a lot of inquiries on the comparison I made between license keys and user accounts. I previously almost always recommended utilizing user accounts for licensing Electron apps, but I've since changed my position to be a little more nuanced. For most Electron apps, license keys will do just fine.
Adding license key (synonymous with product key) validation to an Electron app can be pretty straightforward. First, you would want to somehow generate a license key for each user. This can be done using cryptography, or it can be done by generating a 'random' license key string, storing it in a database, and then building a CRUD licensing server that can verify that a given license key is "valid."
For cryptographic license keys, you can take some information from the customer, e.g. their order number or an email address, and create a 'signature' of it using RSA cryptography. Using Node, that would look something like this:
const crypto = require('crypto')

// Generate a new keypair
const { privateKey, publicKey } = crypto.generateKeyPairSync('rsa', {
  // Using a larger key size, such as 2048, would be more secure
  // but will result in longer signatures.
  modulusLength: 512,
  privateKeyEncoding: { type: 'pkcs1', format: 'pem' },
  publicKeyEncoding: { type: 'pkcs1', format: 'pem' },
})

// Some data we're going to use to generate a license key from
const data = 'user#store.example'

// Create a RSA signer
const signer = crypto.createSign('rsa-sha256')
signer.update(data)

// Encode the original data
const encoded = Buffer.from(data).toString('base64')

// Generate a signature for the data
const signature = signer.sign(privateKey, 'hex')

// Combine the encoded data and signature to create a license key
const licenseKey = `${encoded}.${signature}`

console.log({ privateKey, publicKey, licenseKey })
Then, to validate the license key within your Electron app, you would want to cryptographically 'verify' the key's authenticity by embedding the public (not the private!) key generated above into your application code base:
// Split the license key's data and the signature
const [encoded, signature] = licenseKey.split('.')
const data = Buffer.from(encoded, 'base64').toString()

// Create an RSA verifier
const verifier = crypto.createVerify('rsa-sha256')
verifier.update(data)

// Verify the signature for the data using the public key
const valid = verifier.verify(publicKey, signature, 'hex')

console.log({ valid, data })
Generating and verifying the authenticity of cryptographically signed license keys like this will work great for a lot of simple licensing needs. They're relatively simple, and they work great offline, but sometimes verifying that a license key is 'valid' isn't enough. Sometimes requirements dictate that license keys are not perpetual (i.e. 'valid' forever), or they call for more complicated licensing systems, such as one where only a limited number of devices (or seats) can use the app at one time. Or perhaps the license key needs a renewable expiration date. That's where a license server can come in.
A license server can help manage a license's activation, expirations, among other things, such as user accounts used to associate multiple licenses or feature-licenses with a single user or team. I don't recommend user accounts unless you have a specific need for them, e.g. you need additional user profile information, or you need to associate multiple licenses with a single user.
But in case you aren't particularly keen on writing and maintaining your own in-house licensing system, or you just don't want to deal with writing your own license key generator like the one above, I’m the founder of a software licensing API called Keygen which can help you get up and running quickly without having to write and host your own license server. :)
Keygen is a typical HTTP JSON API service (i.e. there’s no software that you need to package with your app). It can be used in any programming language and with frameworks like Electron.
In its simplest form, validating a license key with Keygen is as easy as hitting a single JSON API endpoint (feel free to run this in a terminal):
curl -X POST https://api.keygen.sh/v1/accounts/demo/licenses/actions/validate-key \
  -d '{
    "meta": {
      "key": "C1B6DE-39A6E3-DE1529-8559A0-4AF593-V3"
    }
  }'
I recently put together an example of adding license key validation, as well as device activation and management, to an Electron app. You can check out that repo on GitHub: https://github.com/keygen-sh/example-electron-license-activation.
I hope that answers your question and gives you a few insights. Happy to answer any other questions you have, as I've implemented licensing a few times now for Electron apps. :)
YES, but concerning the software registration mechanism, IT IS HARD and it needs a lot of testing too.
If 90% of your users have internet access, you should definitely go with user accounts or some OAuth 2.0 plug-and-play thing (login with Facebook/Gmail/whatever).
I built a software licensing architecture from scratch using crypto and the fs module, and it was quite a journey (a year)!
Making a good registration mechanism for your software from scratch is not recommended; Electron makes it harder because the source code is relatively exposed.
That being said, if you really want to go that way, bcrypt is good at this (hashes). You need a unique user identifier to hash, you also need some kind of persistence (preferably a file) where you can store the user license, and you need to hide the salt that you are using for hashing, either by hashing the hash... or by storing small bits of it in separate files.
This will make a good starting point for licensing, but it's far from being fully secured.
Hope it helps!
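A very rough sketch of that idea, assuming the bcrypt npm package, a made-up license file path, and a caller-supplied unique identifier (e.g. email plus machine id). It is illustrative only and, as noted above, far from fully secure:
// Hypothetical sketch: derive a license token from a unique user/device id,
// persist it to a file, and check it on startup. Names and paths are made up.
const bcrypt = require('bcrypt');
const fs = require('fs');
const path = require('path');

const LICENSE_FILE = path.join(process.cwd(), 'license.dat');

// Issue a license for a given unique identifier.
function issueLicense(userIdentifier) {
  const hash = bcrypt.hashSync(userIdentifier, 10);
  fs.writeFileSync(LICENSE_FILE, hash, 'utf8');
}

// Verify the stored license against the identifier of the current user/device.
function isLicensed(userIdentifier) {
  if (!fs.existsSync(LICENSE_FILE)) return false;
  const hash = fs.readFileSync(LICENSE_FILE, 'utf8');
  return bcrypt.compareSync(userIdentifier, hash);
}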
There are many services out there which help you add license key based software licensing to your app. And to ensure your customers don't reuse the key, you would need a strong device fingerprinting algorithm.
You can try out Cryptlex. It offers a very robust licensing solution with advanced device fingerprinting algo. You can check out the Node.js example on Github to add licensing to your electron app.
Yes, it is possible.
I myself desired this feature, and I found related solutions such as paid video tutorials, online solutions [with Keygen], and other random hacks, but I wanted something that is offline and free, so I created my own repository for myself/others to use. Here's how it works.
Overview
Install secure-electron-license-keys-cli. (ie. npm i -g secure-electron-license-keys-cli).
Create a license key by running secure-electron-license-keys-cli. This generates a public.key, private.key and license.data.
Keep private.key safe, but stick public.key and license.data in the root of your Electron app.
Install secure-electron-license-keys. (ie. npm i secure-electron-license-keys).
In your main.js file, review this sample code and add the necessary binding.
const { app, BrowserWindow, ipcMain } = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");
const path = require("path");
const fs = require("fs");
const crypto = require("crypto");

// Keep a global reference of the window object, if you don't, the window will
// be closed automatically when the JavaScript object is garbage collected.
let win;

async function createWindow() {
  // Create the browser window.
  win = new BrowserWindow({
    width: 800,
    height: 600,
    title: "App title",
    webPreferences: {
      preload: path.join(__dirname, "preload.js")
    },
  });

  // Setup bindings for offline license verification
  SecureElectronLicenseKeys.mainBindings(ipcMain, win, fs, crypto, {
    root: process.cwd(),
    version: app.getVersion(),
  });

  // Load app
  win.loadURL("index.html");

  // Emitted when the window is closed.
  win.on("closed", () => {
    // Dereference the window object, usually you would store windows
    // in an array if your app supports multi windows, this is the time
    // when you should delete the corresponding element.
    win = null;
  });
}

// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
// Some APIs can only be used after this event occurs.
app.on("ready", createWindow);

// Quit when all windows are closed.
app.on("window-all-closed", () => {
  // On macOS it is common for applications and their menu bar
  // to stay active until the user quits explicitly with Cmd + Q
  if (process.platform !== "darwin") {
    app.quit();
  } else {
    SecureElectronLicenseKeys.clearMainBindings(ipcMain);
  }
});
In your preload.js file, review the sample code and add the supporting code.
const { contextBridge, ipcRenderer } = require("electron");
const SecureElectronLicenseKeys = require("secure-electron-license-keys");

// Expose protected methods that allow the renderer process to use
// the ipcRenderer without exposing the entire object
contextBridge.exposeInMainWorld("api", {
  licenseKeys: SecureElectronLicenseKeys.preloadBindings(ipcRenderer)
});
Review the sample React component to see how you can verify the validity of your license and act accordingly within your app.
import React from "react";
import {
  validateLicenseRequest,
  validateLicenseResponse,
} from "secure-electron-license-keys";

class Component extends React.Component {
  constructor(props) {
    super(props);
    this.checkLicense = this.checkLicense.bind(this);
  }

  componentWillUnmount() {
    window.api.licenseKeys.clearRendererBindings();
  }

  componentDidMount() {
    // Set up binding to listen when the license key is
    // validated by the main process
    const _ = this;

    window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
      console.log("License response:");
      console.log(data);
    });
  }

  // Fire event to check the validity of our license
  checkLicense(event) {
    window.api.licenseKeys.send(validateLicenseRequest);
  }

  render() {
    return (
      <div>
        <button onClick={this.checkLicense}>Check license</button>
      </div>
    );
  }
}

export default Component;
You are done!
Further detail
To explain further, the license is validated by a request from the client (ie. front-end) page. The client sends an IPC request to the main (ie. backend) process via this call (window.api.licenseKeys.send(validateLicenseRequest)).
Once this call is received by the backend process (which was hooked up when we set up SecureElectronLicenseKeys.mainBindings), the library code tries to decrypt license.data with public.key. Regardless of whether this succeeds, the success status is sent back to the client page (via IPC).
How to limit license keys by version
What I've explained is quite limited because it doesn't limit the versions of an app you might give to a particular user. secure-electron-license-keys-cli includes flags you may pass when generating the license key to set particular major/minor/patch/expire values for a license.
If you wanted to allow major versions up to 7, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7"
If you wanted to allow major versions up to 7 and expire on 2022-12-31, you could run the command to generate a license file like so:
secure-electron-license-keys-cli --major "7" --expire "2022-12-31"
If you do run these commands, you will need to update your client page in order to compare against them, ie:
window.api.licenseKeys.onReceive(validateLicenseResponse, function (data) {
  // If the license key/data is valid
  if (data.success) {
    if (data.appVersion.major <= data.major &&
        new Date() <= Date.parse(data.expire)) {
      // User is able to use app
    } else {
      // License has expired
    }
  } else {
    // License isn't valid
  }
});
The repository page has more details of options but this should give you the gist of what you'll have to do.
Limitations
This isn't perfect, but will likely handle 90% of your users. This doesn't protect against:
Someone decompiling your app and making their own license to use/removing license code entirely
Someone copying a license and giving it to another person
There's also the concern of how to use this library if you are packaging multiple or automated .exes, since these license files need to be included in the source. I'll leave that up to your creativity to figure out.
Extra resources / disclaimers
I built all of the secure-electron-* repositories mentioned in this question, and I also maintain secure-electron-template which has the setup for license keys already pre-baked into the solution if you need something turn-key.

Is there a simple way to share session data stored in Redis between Rails and Node.js application?

I have a Rails 3.2 application that uses Redis as its session store. Now I'm about to write part of the new functionality in Node.js, and I want to be able to share session information between the two apps.
What I can do manually is read the _session_id cookie and then read from a Redis key named rack:session:session_id, but this feels kind of like a hackish solution.
Is there a better way to share sessions between Node.js and Rails?
I have done this, but it does require making your own forks of things.
Firstly, you need to make the session key the same name. That's the easiest job.
Next, I created a fork of the redis-store gem and modified how the marshalling is done. I needed to talk JSON on both sides, because finding a Ruby-style Marshal module for JavaScript is not easy. The file where I alter the marshalling is in that fork.
I also needed to replace the session middleware portion of Connect. The hash that is created is very specific and doesn't match the one Rails creates. I will need to leave this to you to work out, because there might be a nicer way. I could have forked Connect, but instead I extracted a copy of connect > middleware > session and required my own version.
You'll notice how the original adds in a base variable which isn't present in the Rails version. Plus, you need to handle the case when Rails has created a session instead of Node, which is what the generateCookie function does.
/***** ORIGINAL *****/
// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha256', secret)
    .update(base + fingerprint(req))
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
  var base = utils.uid(24);
  var sessionID = base + '.' + store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

/***** MODIFIED *****/
// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha1', secret)
    .update(base)
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req){
  var base = utils.uid(24);
  var sessionID = store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

// generate a new cookie for a pre-existing session from rails without session.cookie
// it must not be a Cookie object (it breaks the merging of cookies)
store.generateCookie = function(sess){
  newBlankCookie = new Cookie(cookie);
  sess.cookie = newBlankCookie.toJSON();
};

//... at the end of the session.js file
// populate req.session
} else {
  if ('undefined' == typeof sess.cookie) store.generateCookie(sess);
  store.createSession(req, sess);
  next();
}
I hope this works for you. It took me quite a bit of digging around to make them talk the same.
I also found an issue with flash messages being stored in JSON. Hopefully you don't run into that one. Flash messages have a special object structure that JSON blows away when serializing, so when a flash message is restored from the session you might not have a proper flash object. I needed to patch for this too.
This may be completely unhelpful if you're not planning on using this, but all of my session experience with Node is through using Connect. You could use the Connect session middleware and change the key id:
http://www.senchalabs.org/connect/session.html#session
and use this module to use redis as your session store:
https://github.com/visionmedia/connect-redis
I've never set up something like what you're describing though; there may be some necessary hacking.
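As a rough sketch of that setup, assuming Express with the express-session and connect-redis modules (the exact API differs between versions, and you would still need to bridge the serialization differences described in the other answer):
// Hypothetical Node side: reuse the Rails session cookie name and the
// "rack:session:" key prefix so both apps look at the same Redis keys.
const express = require('express');
const session = require('express-session');
const RedisStore = require('connect-redis')(session);
const { createClient } = require('redis');

const redisClient = createClient(); // node-redis v4+ also needs: await redisClient.connect()
const app = express();

app.use(session({
  name: '_session_id',          // same cookie name Rails uses
  secret: 'shared-secret',      // placeholder
  store: new RedisStore({
    client: redisClient,
    prefix: 'rack:session:',    // match Rails' redis-store key prefix
  }),
  resave: false,
  saveUninitialized: false,
}));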
