Maintaining OAuth callback URLs - oauth-2.0

I'm developing a Loopback-based NodeJS app that uses GitHub Passport auth. For my development, I use localhost in my callbackURL setting in providers.json, but I have to change it to the published URL every time I deploy. At the same time, I have to change the same setting on GitHub.
How do you handle such scenarios? Is it possible to put a setting in providers.json? Is it possible to use two applications on GitHub and switch between them?

You are probably loading the providers.json file in your server.js as shown in the documentation (https://loopback.io/doc/en/lb3/Configuring-providers.json.html):
var config = {};
try {
  config = require('../providers.json');
} catch (err) {
  console.trace(err);
  process.exit(1); // fatal
}
So you can create two separate providers.json files (e.g. providers.dev.json and providers.prod.json) and load the proper one according to, for example, the NODE_ENV environment variable.
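A minimal sketch of that switch might look like this (the file names match the examples above; the 'production' check is just an illustration):
var env = process.env.NODE_ENV === 'production' ? 'prod' : 'dev';
var config = {};
try {
  config = require('../providers.' + env + '.json');
} catch (err) {
  console.trace(err);
  process.exit(1); // fatal
}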

Related

How to use a Google Secret in a deployed Cloud Run Service (managed)?

I have a running cloud run service user-service. For test purposes I passed client secrets via environment variables as plain text. Now since everything is working fine I'd like to use a secret instead.
In the "Variables" tab of the "Edit Revision" option I can declare environment variables but I have no idea how to pass in a secret? Do I just need to pass the secret name like ${my-secret-id} in the value field of the variable? There is not documentation on how to use secrets in this tab only a hint at the top:
Store and consume secrets using Secret Manager
Which is not very helpful in this case.
You can now read secrets from Secret Manager as environment variables in Cloud Run. This means you can audit your secrets, set permissions per secret, version secrets, etc, and your code doesn't have to change.
You can point to the secrets through the Cloud Console GUI (console.cloud.google.com) or make the configuration when you deploy your Cloud Run service from the command-line:
gcloud beta run deploy SERVICE --image IMAGE_URL --update-secrets=ENV_VAR_NAME=SECRET_NAME:VERSION
Six-minute video overview: https://youtu.be/JIE89dneaGo
Detailed docs: https://cloud.google.com/run/docs/configuring/secrets
UPDATE 2021: There is now a Cloud Run preview for loading secrets to an environment variable or a volume. https://cloud.google.com/run/docs/configuring/secrets
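Once configured this way, the secret shows up inside the container as an ordinary environment variable, which is why no code change is needed; for example, in Node.js (ENV_VAR_NAME is whatever name you chose in the command above):
// Cloud Run injects the secret version's payload at container startup
const secretValue = process.env.ENV_VAR_NAME;
if (!secretValue) throw new Error('ENV_VAR_NAME is not set');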
The question is now answered; however, I have been experiencing a similar problem using Cloud Run with Java & Quarkus and a native image created using GraalVM.
While Cloud Run is a really interesting technology, at the time of writing it lacked the ability to load secrets through the Cloud Run configuration. This has certainly added complexity to my app when doing local development.
Additionally, Google's documentation is really quite poor. The quick-start lacks a clear Java example for getting a secret[1] without it being set in the same method - I'd expect this to be the most common use case!
The Javadoc itself seems to be largely autogenerated, with protobuf language everywhere. There are various similarly named methods like getSecret, getSecretVersion and accessSecretVersion.
I'd really like to see some improvement from Google around this. I don't think it is asking too much for dedicated teams to make libraries for common languages with proper documentation.
Here is a snippet that I'm using to load this information. It requires the GCP Secret Manager library and also the GCP Cloud Core library for loading the project ID.
// Requires com.google.cloud:google-cloud-secretmanager and com.google.cloud:google-cloud-core
import java.io.IOException;

import com.google.cloud.ServiceOptions;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretVersionName;

private String projectId; // cached so the lookup only happens once

public String getSecret(final String secretName) {
    LOGGER.info("Going to load secret {}", secretName); // LOGGER is the enclosing class's logger (e.g. SLF4J)
    // SecretManagerServiceClient should be closed after the request
    try (SecretManagerServiceClient client = buildClient()) {
        // "latest" is an alias for the latest version of a secret
        final SecretVersionName name = SecretVersionName.of(getProjectId(), secretName, "latest");
        return client.accessSecretVersion(name).getPayload().getData().toStringUtf8();
    }
}

private String getProjectId() {
    if (projectId == null) {
        projectId = ServiceOptions.getDefaultProjectId();
    }
    return projectId;
}

private SecretManagerServiceClient buildClient() {
    try {
        return SecretManagerServiceClient.create();
    } catch (final IOException e) {
        throw new RuntimeException(e);
    }
}
[1] - https://cloud.google.com/secret-manager/docs/reference/libraries
Google has documentation for the Secret Manager client libraries that you can use in your API.
This should help you do what you want:
https://cloud.google.com/secret-manager/docs/reference/libraries
Since you haven't specified a language, here is a Node.js example of how to access the latest version of a secret using your project ID and secret name. I'm adding this because the documentation is not clear about the string you need to provide as the name.
// this.secretClient is a SecretManagerServiceClient instance
// from the @google-cloud/secret-manager package
const [version] = await this.secretClient.accessSecretVersion({
  name: `projects/${process.env.project_id}/secrets/${secretName}/versions/latest`,
});
return version.payload.data.toString();
Be sure to allow Secret Manager access in your IAM settings for the service account that your API uses within GCP.
I kinda found a way to use secrets as environment variables.
The following doc (https://cloud.google.com/sdk/gcloud/reference/run/deploy) states:
Specify secrets to mount or provide as environment variables. Keys starting with a forward slash '/' are mount paths. All other keys correspond to environment variables. The values associated with each of these should be in the form SECRET_NAME:KEY_IN_SECRET; you may omit the key within the secret to specify a mount of all keys within the secret. For example: '--update-secrets=/my/path=mysecret,ENV=othersecret:key.json' will create a volume with secret 'mysecret' and mount that volume at '/my/path'. Because no secret key was specified, all keys in 'mysecret' will be included. An environment variable named ENV will also be created whose value is the value of 'key.json' in 'othersecret'. At most one of these may be specified
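In practice that means the service reads the secrets back like any other environment variable or file; here is a rough Node.js sketch using the names from the example above (the key file name under /my/path is hypothetical):
const fs = require('fs');

// Environment-variable style: ENV holds the value of 'key.json' in 'othersecret'
const envSecret = process.env.ENV;

// Volume style: each key of 'mysecret' appears as a file under /my/path
const fileSecret = fs.readFileSync('/my/path/some-key', 'utf8');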
Here is a snippet of Java code to get all secrets of your Cloud Run project. It requires the com.google.cloud/google-cloud-secretmanager artifact.
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import com.google.cloud.secretmanager.v1.ProjectName;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient;
import com.google.cloud.secretmanager.v1.SecretManagerServiceClient.ListSecretsPagedResponse;
import com.google.cloud.secretmanager.v1.SecretVersionName;

Map<String, String> secrets = new HashMap<>();

// Resolve the project ID from the metadata server (available inside Cloud Run);
// note this code throws IOException, so run it where that can propagate or be caught
String projectId;
String url = "http://metadata.google.internal/computeMetadata/v1/project/project-id";
HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
conn.setRequestProperty("Metadata-Flavor", "Google");
try {
    InputStream in = conn.getInputStream();
    projectId = new String(in.readAllBytes(), StandardCharsets.UTF_8);
} finally {
    conn.disconnect();
}

// List all secrets in the project, then fetch the latest version of each
Set<String> names = new HashSet<>();
try (SecretManagerServiceClient client = SecretManagerServiceClient.create()) {
    ProjectName projectName = ProjectName.of(projectId);
    ListSecretsPagedResponse pagedResponse = client.listSecrets(projectName);
    pagedResponse.iterateAll().forEach(secret -> names.add(secret.getName()));
    for (String secretName : names) {
        // listSecrets returns full resource names; the short ID is the last path segment
        String name = secretName.substring(secretName.lastIndexOf("/") + 1);
        SecretVersionName nameParam = SecretVersionName.of(projectId, name, "latest");
        String secretValue = client.accessSecretVersion(nameParam).getPayload().getData().toStringUtf8();
        secrets.put(secretName, secretValue);
    }
}
Cloud Run support for referencing Secret Manager Secrets is now at general availability (GA).
https://cloud.google.com/run/docs/release-notes#November_09_2021

Setting GOOGLE_APPLICATION_CREDENTIALS for an MVC site hosted on Azure

Title says it all pretty much.
I tried uploading the JSON file to Azure Storage and referenced its URL when setting the GOOGLE_APPLICATION_CREDENTIALS environment variable under app settings, but when remotely debugging the site, apparently the URL/directory was not in an acceptable format. I can't store the JSON file locally either, because the website doesn't have any idea about my C drive directories.
Where should I store this file so that I can set the GOOGLE_APPLICATION_CREDENTIALS environment variable for my Azure site to the directory of the JSON file?
The ToChannelCredentials() approach does not seem to work anymore, so I came up with another solution that works on Azure. I create a text file in the /bin folder of my Azure server with the credentials and then point the environment variable to this file. The Google Cloud API will use this for the default credentials.
string json = #"{
'type': 'service_account',
'project_id': 'xxx',
'private_key_id': 'xx',
'private_key': 'xxx',
...
}"; // this is the content of the json-credentials file from Google
// Create text file in projects bin-folder
var binDirectory = Path.GetDirectoryName(Assembly.GetCallingAssembly().CodeBase);
string fullPath = Path.Combine(binDirectory, "credentials.json").Replace("file:\\","");
using (StreamWriter outputFile = new StreamWriter(fullPath, false)) {
outputFile.WriteLine(json);
}
// Set environment variabel to the full file path
Environment.SetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS", fullPath);
// Now you can call the service and it will pick up your credentials
TranslationServiceClient client = TranslationServiceClient.Create();
If anyone is wondering how to handle Google's credentials smoothly in .NET applications, instead of the strange way of using a file on the server, this is how I solved it for the Translation Service. Other services should follow the same principle:
store the content of the Google credentials JSON file as an environment variable in settings.json/Azure configuration for your app (using ' ' instead of " " for the inner text):
"GOOGLE_APPLICATION_CREDENTIALS": "{'type': 'service_account','project_id': ...}"
create and return the client:
var credential = GoogleCredential.FromJson(Environment.GetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS"));
var channelCredentials = credential.ToChannelCredentials();
var channel = new Channel(TranslationServiceClient.DefaultEndpoint.ToString(), channelCredentials);
return TranslationServiceClient.Create(channel);
Took a while for me to figure it out. Hope it helps.
I use the .json file in my local environment (because of the environment variable length limit in Windows), and on Azure I use an "Application setting" to set an environment variable. This code handles both cases:
string? json;
var filename = Environment.GetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS");
if (filename != null)
{
    json = System.IO.File.ReadAllText(filename);
}
else
{
    json = Environment.GetEnvironmentVariable("GOOGLE_APPLICATION_CREDENTIALS_STRING");
    if (json == null)
    {
        throw new Exception(
            "GOOGLE_APPLICATION_CREDENTIALS_STRING environment variable with JSON is not set");
    }
}
var credential = GoogleCredential.FromJson(json).ToChannelCredentials();
var grpcChannel = new Channel("firestore.googleapis.com", credential);
var grpcClient = new Firestore.FirestoreClient(grpcChannel);
var firestoreClient = new FirestoreClientImpl(grpcClient, FirestoreSettings.GetDefault());
return await FirestoreDb.CreateAsync(FirebaseProjectId, firestoreClient);
I was looking for how to set GOOGLE_APPLICATION_CREDENTIALS in Azure App Service. The answers here didn't help me. My solution is very simple, without any code change.
In the configuration of the App Service, go to Path Mappings
Add a New Azure Storage Mount, e.g. /mounts/config
Add the credentials.json file to the file share
In the application settings, add GOOGLE_APPLICATION_CREDENTIALS and set the value to /mounts/config/credentials.json
That is all.
In the Azure app on the Azure portal, go to application settings and add the credentials under the application settings tab.
Then you can reference them in your code as if they were in your web.config file.

Postman: How to make multiple requests at the same time

I want to POST data from the Postman Google Chrome extension.
I want to make 10 requests with different data, and they should run at the same time.
Is it possible to do such in Postman?
If yes, can anyone explain to me how can this be achieved?
I guess there's no such feature in Postman as running concurrent tests.
If I were you, I would consider Apache JMeter, which is used exactly for such scenarios.
Regarding Postman, the only thing that could more or less meet your needs is the Postman Runner.
There you can specify the details:
number of iterations,
upload a CSV file with data for different test runs, etc.
The runs won't be concurrent, only consecutive.
Do consider JMeter (you might like it).
Postman doesn't do that, but you can run multiple curl requests asynchronously in Bash:
curl url1 & curl url2 & curl url3 & ...
Remember to add an & after each request, which means each request will run as an asynchronous (background) job.
Postman, however, can generate a curl snippet for your request: https://learning.getpostman.com/docs/postman/sending_api_requests/generate_code_snippets/
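If you'd rather stay in Node.js than Bash, the same fan-out can be done with Promise.all; here is a small sketch (the URLs and payload are placeholders), using the global fetch available in Node 18+:
const urls = Array.from({ length: 10 }, (_, i) => `http://localhost:3000/items/${i}`);

// Fire all ten POSTs at once and collect the status codes
Promise.all(urls.map(url => fetch(url, { method: 'POST', body: 'example payload' })))
  .then(responses => console.log(responses.map(r => r.status)))
  .catch(err => console.error(err));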
I don't know if this question is still relevant, but there is such a possibility in Postman now. They added it a few months ago.
All you need to do is create a simple .js file and run it via Node.js. It looks like this:
var path = require('path'),
    async = require('async'), // https://www.npmjs.com/package/async
    newman = require('newman'),
    parametersForTestRun = {
        collection: path.join(__dirname, 'postman_collection.json'), // your collection
        environment: path.join(__dirname, 'postman_environment.json'), // your env
    };

var parallelCollectionRun = function(done) {
    newman.run(parametersForTestRun, done);
};

// Runs the Postman sample collection thrice, in parallel.
async.parallel([
    parallelCollectionRun,
    parallelCollectionRun,
    parallelCollectionRun
],
function(err, results) {
    err && console.error(err);
    results.forEach(function(result) {
        var failures = result.run.failures;
        console.info(failures.length ? JSON.stringify(failures, null, 2) :
            `${result.collection.name} ran successfully.`);
    });
});
Then just run this .js file ('node fileName.js' in cmd).
Not sure if people are still looking for simple solutions to this, but you are able to run multiple instances of the "Collection Runner" in Postman. Just create a runner with some requests and click the "Run" button multiple times to bring up multiple instances.
Run all collections in a folder in parallel:
'use strict';

global.Promise = require('bluebird');
const path = require('path');
const newman = Promise.promisifyAll(require('newman'));
const fs = Promise.promisifyAll(require('fs'));

const environment = 'postman_environment.json';
const FOLDER = path.join(__dirname, 'Collections_Folder');

let files = fs.readdirSync(FOLDER);
files = files.map(file => path.join(FOLDER, file));
console.log(files);

Promise.map(files, file => {
    return newman.runAsync({
        collection: file, // your collection
        environment: path.join(__dirname, environment), // your env
        reporters: ['cli']
    });
}, {
    concurrency: 2
});
In Postman's Collection Runner you can't make simultaneous asynchronous requests, so use Apache JMeter instead. It allows you to add multiple threads and add a Synchronizing Timer to them.
If you are only doing GET requests and you need another simple solution from within your Chrome browser, just install the "Open Multiple URLs" extension:
https://chrome.google.com/webstore/detail/open-multiple-urls/oifijhaokejakekmnjmphonojcfkpbbh?hl=en
I've just run 1500 URLs at once; it lagged the browser a bit, but it works.
The Runner option is now on the lower right side of the panel.
If you need to generate many consecutive requests (instead of quickly clicking the SEND button), you can use the Runner. Please note it is not a true "parallel request" generator.
File -> New Runner Tab
Now you can drag and drop your requests from a Collection, keep checked only the requests you would like the Runner to generate, set the iterations to 10 (to generate 10 requests), and set the delay to, for example, 0 (to make it as fast as possible).
The easiest way is to get the Google Chrome "Talend API Tester" extension.
Go to Help and type in "Create Scenario",
or just go to this link => https://help.talend.com/r/en-US/Cloud/api-tester-user-guide/creating-scenario
I was able to send several POST API calls simultaneously.
You can use Fiddler with traffic capture started to record manual queries from Postman, then select as many of them as you want in Fiddler's sessions list and replay them (press the R key) - they will run in parallel.
https://docs.telerik.com/fiddler/generate-traffic/tasks/resendrequest
You can run multiple instances of the Postman Runner and run the same collection with different data files in each instance.
Open multiple instances of Postman; they will run concurrently.

How can I make a custom session provider work with ASP.NET 5? Specifically, use Redis for sessions

In ASP.NET MVC 4, to switch out the default session provider for a custom one, you had to let the app know via web.config, which is now gone in ASP.NET 5.
I've tried to use Microsoft.Web.RedisSessionStateProvider (which is based on StackExchange.Redis), but I'm not sure how to proceed beyond getting it via NuGet. It simply doesn't work.
What am I missing?
Following is a Session sample where you can use Redis cache as a store for it:
https://github.com/aspnet/Session/blob/dev/samples/SessionSample/Startup.cs#L32-L39
You can configure Redis cache options like below:
services.Configure<RedisCacheOptions>(redisOptions =>
{
    redisOptions.Configuration = "localhost";
    redisOptions.InstanceName = "SampleInstance";
});
At one point a couple months ago I had an app (similar to https://github.com/aspnet/Session/blob/dev/samples/SessionSample/Startup.cs) connecting to a remote Redis server like this:
app.UseDistributedSession(new RedisCache(new RedisCacheOptions()
{
Configuration = "ip:port,password=xxx"
}));
It's probably different now in beta7, but hopefully that helps.

Is there a simple way to share session data stored in Redis between Rails and Node.js applications?

I have a Rails 3.2 application that uses Redis as its session store. Now I'm about to write part of the new functionality in Node.js, and I want to be able to share session information between the two apps.
What I can do manually is read the _session_id cookie and then read from a Redis key named rack:session:session_id, but this feels like kind of a hack-ish solution.
Is there a better way to share sessions between Node.js and Rails?
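For reference, the manual approach described in the question might look roughly like this in Node (a sketch using the 'redis' package; cookie parsing is simplified, and the value comes back in Rails' Marshal format rather than JSON):
var redis = require('redis');
var client = redis.createClient();

function loadRailsSession(req, callback) {
  var match = /_session_id=([^;]+)/.exec(req.headers.cookie || '');
  if (!match) return callback(null, null);
  // Rails (with redis-store) keeps the session under rack:session:<session id>
  client.get('rack:session:' + match[1], callback);
}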
I have done this, but it does require making your own forks of things.
Firstly you need to make the session key the same name. That's the easiest job.
Next I created a fork of the redis-store gem and modified where the marshalling happens. I needed to talk JSON on both sides, because finding a Ruby-style Marshal module for JavaScript is not easy. The change lives in the file where redis-store does its marshalling.
I also needed to replace the session middleware portion of Connect. The hash that is created is very specific and doesn't match the one Rails creates. I will need to leave this to you to work out, because there might be a nicer way. I could have forked Connect, but instead I extracted a copy of connect > middleware > session and required my own version in.
You'll notice how the original adds in a base variable which isn't present in the Rails version. Plus you need to handle the case where Rails has created the session instead of Node; that is what the generateCookie function does.
/***** ORIGINAL *****/
// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha256', secret)
    .update(base + fingerprint(req))
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req) {
  var base = utils.uid(24);
  var sessionID = base + '.' + store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

/***** MODIFIED *****/
// session hashing function
store.hash = function(req, base) {
  return crypto
    .createHmac('sha1', secret)
    .update(base)
    .digest('base64')
    .replace(/=*$/, '');
};

// generates the new session
store.generate = function(req) {
  var base = utils.uid(24);
  var sessionID = store.hash(req, base);
  req.sessionID = sessionID;
  req.session = new Session(req);
  req.session.cookie = new Cookie(cookie);
};

// generate a new cookie for a pre-existing session from Rails without session.cookie
// it must not be a Cookie object (it breaks the merging of cookies)
store.generateCookie = function(sess) {
  var newBlankCookie = new Cookie(cookie);
  sess.cookie = newBlankCookie.toJSON();
};

//... at the end of the session.js file
// populate req.session
} else {
  if ('undefined' == typeof sess.cookie) store.generateCookie(sess);
  store.createSession(req, sess);
  next();
}
I hope this works for you. It took me quite a bit of digging around to make them talk the same.
I found an issue as well with flash messages being stored in JSON. Hopefully you don't run into that one. Flash messages have a special object structure that JSON serialization blows away; when a flash message is restored from the session, you might not have a proper flash object. I needed to patch for this too.
This may be completely unhelpful if you're not planning on using it, but all of my session experience with Node is through using Connect. You could use the Connect session middleware and change the key id:
http://www.senchalabs.org/connect/session.html#session
and use this module to use Redis as your session store:
https://github.com/visionmedia/connect-redis
I've never set up something like what you're describing though, so there may be some necessary hacking.
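For what it's worth, a rough sketch of that setup, assuming Connect 2.x with connect-redis (the cookie name and key prefix are guesses aimed at matching the Rails side, and the serialization mismatch described in the answer above still applies):
var connect = require('connect');
var RedisStore = require('connect-redis')(connect);

connect()
  .use(connect.cookieParser())
  .use(connect.session({
    key: '_session_id',                                // match the Rails cookie name
    secret: 'shared-secret',
    store: new RedisStore({ prefix: 'rack:session:' }) // match the rack:session: prefix
  }))
  .listen(3000);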
