Create React App: inject environment variable

I'm new to React and started with the create-react-app template.
I am developing a SPA that consumes a REST API.
For development purposes, I need to authenticate against this API using OAuth2 (access token).
In the production environment, I don't need to authenticate (as it runs on the same machine).
To get this access token in the dev environment, I need to make a POST request to the authentication server (with client_id and client_secret); the response contains the access token I need for further requests.
As the authentication server does not support CORS, I cannot make this POST request from within the React app.
My solution was to write a Node script that makes this POST request and injects the token into the client app using environment variables.
In the package.json (I did an eject) I inserted this script (gettoken):
"start": "npm-run-all -p watch-css gettoken-js start-js"
In the gettoken.js file I make the POST request to get the access token and, in the callback function, set:
process.env.REACT_APP_CRM_API_ACCESS_TOKEN = response.access_token;
Now I want to access this variable in the React app, but process.env.REACT_APP_CRM_API_ACCESS_TOKEN is always undefined.
What am I doing wrong?
Is there another way to inject the access_token to the client app?
Here is the getToken script:
var request = require('request');

const endpoint = "https://xxxx/oauth2/token";
const resource = "https://xxxyyy.com";

const params = [
  { name: 'userName', value: 'myuser#user.com' },
  { name: 'password', value: '123' },
  { name: 'grant_type', value: 'password' },
  { name: 'client_secret', value: '11231' },
  { name: 'client_id', value: '123' },
  { name: 'resource', value: resource }
];

// Build the x-www-form-urlencoded request body from the name/value pairs.
const encodedParams = params.map((param) => {
  return param.name + '=' + encodeURIComponent(param.value);
}).join('&');

request(
  {
    method: 'POST',
    uri: endpoint,
    // `request` expects headers as an object, not an array of name/value pairs.
    headers: {
      'content-type': 'application/x-www-form-urlencoded'
    },
    body: encodedParams
  },
  function (error, response, body) {
    if (error) {
      console.log('error', "ERROR GETTING ACCESS TOKEN FROM API: " + error);
    } else {
      // Renamed to avoid shadowing the `response` callback argument.
      const payload = JSON.parse(body);
      process.env.REACT_APP_CRM_API_ACCESS_TOKEN = payload.access_token;
      process.env.REACT_APP_CRM_API_ENDPOINT = resource;
      console.log(process.env.REACT_APP_CRM_API_ACCESS_TOKEN);
    }
  }
);

You're bumping into a common issue with configuration in a Create React App.
REACT_APP_-prefixed variables are read when the build (or dev server) process starts and inlined into the bundle. Is REACT_APP_CRM_API_ACCESS_TOKEN actually in that process's environment? Assigning to process.env in a separate, parallel Node script does not propagate to the dev server process, so the app never sees it.
If you're happy to have the token in the JS bundle then go with that.
Depending on how you plan to promote your build through various environments, you'll likely bump into other issues.
Here's one possible pipeline and the issue you'll bump into.
You have staging and production.
You build your app and end up with staging env variables built into the bundle.
You promote that bundle to production and the staging env vars are still in the bundle.
Two ways around it:
Rebuild on production so the prod vars are put into the bundle.
Use the web server to inject the vars from the environment into one of the files in your SPA.
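The second option can be sketched like this. It's an illustrative pattern, not a CRA API: the web server renders a small env.js from its environment at request time, index.html loads it before the bundle, and the app reads window.__ENV__ instead of process.env. The variable names and the window.__ENV__ convention are assumptions.

```javascript
// Render an env.js payload from the server's environment. Only expose
// values that are safe to ship to the browser.
function renderEnvJs(env) {
  const clientEnv = {
    CRM_API_ENDPOINT: env.REACT_APP_CRM_API_ENDPOINT || ''
  };
  return `window.__ENV__ = ${JSON.stringify(clientEnv)};`;
}

// A web server handler would send this string with Content-Type
// application/javascript; the same bundle then works in every environment
// without a rebuild, because the values are resolved at request time.
```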
I've gone through this four times now, tweaking my solution slightly each time. Each time was on an ejected CRA app, and none of the solutions were very DRY.
I'm now trying to solve it for a non-ejected CRA, and again, finding a DRY solution is proving tricky.
I'll update this answer if I find a nicer way.
Edit: because you have ejected the app, you can change config/env.js to do whatever you need, including how the REACT_APP_ prefix works.

Related

Neo4j GraphQL auth directive always giving forbidden

I'm trying to implement the @auth directive in GraphQL for use with Neo4j, as documented here:
https://neo4j.com/docs/graphql-manual/current/auth/auth-directive/
I'm using a JWT taken from Firebase, which should have all of the necessary fields, including admin roles.
The problem is that whenever I try to use one of the generated queries with the admin bearer token, it returns "forbidden" if the @auth directive is attached.
The full discussion of this issue with ChatGPT, including the extensive trial and error done before writing this question, logs, code snippets, etc., can be found here for reference:
https://firebasestorage.googleapis.com/v0/b/awtsmoos.appspot.com/o/ai%2FBH_1674546675630.html?alt=media&token=17b653c4-5db2-4bca-8bc1-3cc3625e5a6c
Just to summarize some key code parts, I'm trying to follow the setup example like this essentially:
const neoSch = new graphqlNeo.Neo4jGraphQL({
  typeDefs: typeDefs,
  driver: dr,
  resolvers: rez,
  plugins: {
    auth: new neoGraphQLAuth.Neo4jGraphQLAuthJWTPlugin({
      secret: secret.private_key.toString()
    })
  }
});
Here's the schema it suggested after I gave it some samples; it doesn't work:
type Homework {
  id: ID @id
  title: String!
  steps: [Step!]! @relationship(type: "HAS", direction: OUT)
}

extend type Homework @auth(
  rules: [
    {
      allow: CREATE,
      roles: ["ADMIN"]
    }
  ]
)

extend type Homework @auth(
  rules: [
    {
      allow: READ
    }
  ]
)
The token itself is being passed in correctly, as discussed at length in the ChatGPT session; I'm not sure what else it could be.
The secret property is my JSON service account taken from Firebase. In my Apollo context I've tried lots of trial and error with no success, but here is one implementation I tried:
app.use(
  "/etsem",
  cors(),
  bodyp.json(),
  exp4.expressMiddleware(serv, {
    context: async info => {
      var rz = {
        req: info.req,
        headers: info.req.headers
      };
      return rz;
    }
  })
);
But I still got the forbidden error.

Proper way to proxy all calls to an external API through the Next.js server? (Make SSR components work with client auth cookies)

I have a Rails API running in the same cluster as my Next.js 13 server. The Rails API uses auth cookies to track the session.
I can log in from a client-side component and start making authenticated API calls based on the set-cookie header I receive from the Rails API. However, when using an SSR component, e.g....
export default async function MeTestPage() {
  try {
    let allCookies = cookies().getAll().map(c => `${c.name}=${c.value}`).join("; ");
    console.log(allCookies);
    let result = await fetch("http://0.0.0.0:3000/users/me", {
      "headers": {
        "accept": "*/*",
        "accept-language": "en-US,en;q=0.9",
        "sec-ch-ua": "\"Google Chrome\";v=\"107\", \"Chromium\";v=\"107\", \"Not=A?Brand\";v=\"24\"",
        "sec-ch-ua-mobile": "?0",
        "sec-ch-ua-platform": "\"macOS\"",
        "sec-fetch-dest": "empty",
        "sec-fetch-mode": "cors",
        "sec-fetch-site": "same-origin",
        "cookie": allCookies
      },
      "referrerPolicy": "strict-origin-when-cross-origin",
      "body": null,
      "method": "GET",
      "mode": "cors",
      "credentials": "include"
    });
    let resultJson = await result.json();
    return <p>{JSON.stringify(resultJson)}</p>;
  } catch (e: any) {
    return <p>{e.toString()}</p>;
  }
}
The request goes through and Rails receives the right cookies, but Rails doesn't connect them to the session. I suspect it's because the request comes from a different IP address, though I haven't been able to confirm this.
I feel like one good solution would be to proxy all client-side requests through the Next server, so that the Next server acts as the sole API client for Rails and keeps the IP consistent, but I'm not sure of the best way to do that. I've tried both setting rewrites in next.config.js and copying the request method/route/headers/body to a new request from an /api/[...path].ts endpoint (but had a very frustrating time debugging why it wasn't sending the body).
I'm just getting into Next.js and can't believe this is such a struggle. I figure there must be some canonical way of handling this very common need: accessing a cookie-protected API from both environments.
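The catch-all proxy idea can be sketched as below. This is an illustrative sketch, not a definitive implementation: RAILS_ORIGIN, the /api prefix, and the route location (e.g. app/api/[...path]/route.js) are assumptions. Note that Node's built-in fetch (undici) requires duplex: "half" when streaming a request body, which is a common reason a proxied body silently never arrives.

```javascript
// Forward method, headers, and body to the Rails API so Rails always sees
// the Next server as the client (consistent IP, consistent cookies).
const RAILS_ORIGIN = "http://0.0.0.0:3000";

function rewriteTarget(pathname, search) {
  // /api/users/me -> http://0.0.0.0:3000/users/me
  return RAILS_ORIGIN + pathname.replace(/^\/api/, "") + search;
}

async function proxy(req) {
  const url = new URL(req.url);
  const headers = new Headers(req.headers);
  headers.delete("host"); // let fetch set the correct Host for the target

  return fetch(rewriteTarget(url.pathname, url.search), {
    method: req.method,
    headers,
    body: req.method === "GET" || req.method === "HEAD" ? undefined : req.body,
    redirect: "manual",
    // Required by Node's fetch when the request body is a stream.
    duplex: "half",
  });
}

// In a route handler file you would export this once per verb, e.g.:
// export { proxy as GET, proxy as POST, proxy as PUT, proxy as PATCH, proxy as DELETE };
```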

connect from web to iot core using a custom authorizer

I'm trying to use a custom authorizer to authenticate a web client.
I have successfully created a dedicated Lambda and a custom authorizer. If I run aws iot describe-authorizer --authorizer-name <authorizer-name> I can see:
{
  "authorizerDescription": {
    "authorizerName": "<authorizer-name>",
    "authorizerArn": "...",
    "authorizerFunctionArn": "...",
    "tokenKeyName": "<token-key-name>",
    "tokenSigningPublicKeys": {
      "<public-key-name>": "-----BEGIN PUBLIC KEY-----\n<public-key-content>\n-----END PUBLIC KEY-----"
    },
    "status": "ACTIVE",
    "creationDate": "...",
    "lastModifiedDate": "...",
    "signingDisabled": false,
    "enableCachingForHttp": false
  }
}
Moreover, I can test it successfully:
$ aws iot test-invoke-authorizer --authorizer-name '<authorizer-name>' --token '<public-key-name>' --token-signature '<private-key-content>'
{
  "isAuthenticated": true,
  "principalId": "...",
  "policyDocuments": [ "..." ],
  "refreshAfterInSeconds": 600,
  "disconnectAfterInSeconds": 3600
}
$
But I cannot connect from the browser.
I'm using aws-iot-device-sdk, and according to the SDK documentation I should set customAuthHeaders and/or customAuthQueryString (my understanding is that the latter should be used in a web environment, due to a browser limitation) with the headers/query params X-Amz-CustomAuthorizer-Name, X-Amz-CustomAuthorizer-Signature and TestAuthorizerToken. But no matter what combination I set for these values, the IoT endpoint always closes the connection (I see a 1000 / 1005 close code).
What I've written so far is
const CUSTOM_AUTHORIZER_NAME = '<authorizer-name>';
const CUSTOM_AUTHORIZER_SIGNATURE = '<private-key-content>';
const TOKEN_KEY_NAME = 'TestAuthorizerToken';
const TEST_AUTHORIZER_TOKEN = '<public-key-name>';

function f(k: string, v?: string, p: string = '&'): string {
  if (!v)
    return '';
  return `${p}${encodeURIComponent(k)}=${encodeURIComponent(v)}`;
}

const client = new device({
  region: '...',
  clientId: '...',
  protocol: 'wss-custom-auth' as any,
  host: '...',
  debug: true,
  // customAuthHeaders: {
  //   'X-Amz-CustomAuthorizer-Name': CUSTOM_AUTHORIZER_NAME,
  //   'X-Amz-CustomAuthorizer-Signature': CUSTOM_AUTHORIZER_SIGNATURE,
  //   [TOKEN_KEY_NAME]: TEST_AUTHORIZER_TOKEN
  // },
  customAuthQueryString: `${f('X-Amz-CustomAuthorizer-Name', CUSTOM_AUTHORIZER_NAME, '?')}${f('X-Amz-CustomAuthorizer-Signature', CUSTOM_AUTHORIZER_SIGNATURE)}${f(TOKEN_KEY_NAME, TEST_AUTHORIZER_TOKEN)}`,
} as any);
As you can see, I've also started having doubts about the header names!
After running my code I see that the client does a GET to the host with the query string I built.
I also see that IoT Core responds with 101 Switching Protocols, that my client then sends the CONNECT command to IoT via websocket, and that my browser sends another packet to the backend system.
Then the connection is closed by IoT.
Looking at CloudWatch I cannot see any interaction with the Lambda; it's as if the request is blocked.
My doubts are:
First of all, is it possible to connect via MQTT over WSS using only a custom authorizer, without Cognito/certificates? Keep in mind that I am able to use a Cognito identity pool without errors, but I need to remove it.
Is it correct that I just need to set the customAuthQueryString parameter? My understanding is that this is what should be used on the web.
What values should I set for the various headers/query params? X-Amz-CustomAuthorizer-Name is self-explanatory, but I'm not sure about X-Amz-CustomAuthorizer-Signature (is it correct to fill it with the content of my private key?). I'm also unsure about TestAuthorizerToken: is that the correct key to use?
I've also tried running the custom_authorizer_connect sample from SDK v2, but it's still not working, and I've run out of ideas.
Turns out the problem was in the permissions set on the backend systems.

Twitter authentication working in local, but in Vercel server doesn't (Next.JS) [duplicate]

This question already has an answer here:
How to properly set environment variables in Next.js app deployed to Vercel?
(1 answer)
Closed 11 months ago.
I'm making a Next.js website and using the twitter-lite library to access the Twitter API. For some reason the code runs normally on my local server, but when I send it to Vercel to publish it, it doesn't work. It returns:
502: BAD_GATEWAY
Code: NO_RESPONSE_FROM_FUNCTION
ID: gru1::dzfhn-1612303490166-f7e971564512
The code is:
import Twitter from "twitter-lite";

export default async function getUserTweets(request, response) {
  var amount = 10;
  const client = new Twitter({
    subdomain: "api",
    consumer_key: process.env.TWITTER_API_KEY,
    consumer_secret: process.env.TWITTER_SECRET_API_KEY,
    access_token_key: process.env.TWITTER_TOKEN,
    access_token_secret: process.env.TWITTER_TOKEN_SECRET,
    bearer_token: process.env.TWITTER_BEARER_TOKEN
  });
  let timeline = await client.get("statuses/user_timeline", {
    screen_name: "nerat0",
    exclude_replies: true,
    include_rts: false,
    tweet_mode: "extended",
    count: amount + 2
  });
  response.json(timeline);
}
Note: the Vercel function log says: errors: [ { code: 32, message: 'Could not authenticate you.' } ]. But how can it run normally locally with the same authentication?
Can anyone help me with a link or an explanation? Thanks :)
Basically the problem is with the .env file and variables. On your own (local) server, Next.js reads the environment variables from your .env file, but on Vercel the server doesn't read that file; the variables have to be configured in Vercel's own environment settings. You can read more about it here:
https://nextjs.org/docs/basic-features/environment-variables

Is passing environment variables to sapper's client side secure with Rollup Replace?

I am using replace in my Rollup configuration for Sapper, together with sapper-environment, to pass environment variables to the client side. Is this secure? Is there a better/safer way to approach this?
I'm using the config below:
rollup.config.js
const sapperEnv = require('sapper-environment');

export default {
  client: {
    input: config.client.input(),
    output: config.client.output(),
    plugins: [
      replace({
        ...sapperEnv(),
        'process.browser': true,
        'process.env.NODE_ENV': JSON.stringify(mode)
      })
      ...
This then allows me to use the variables in stores.js:
import { writable } from 'svelte/store';
import Client from 'shopify-buy';

const key = process.env.SAPPER_APP_SHOPIFY_KEY;
const domain = process.env.SAPPER_APP_SHOPIFY_DOMAIN;

// Initialize a client
const client = Client.buildClient({
  domain: domain,
  storefrontAccessToken: key
});

export { key, domain, client };
I have tried running this in server.js and passing the variables through the session data, but on the client side, no matter what I do, they always return 'undefined'.
There are two questions here: a) is it secure, and b) why are the values undefined?
The answer to the first question is 'no'. Any time you include credentials in JavaScript that gets served to the client (or in session data), you're making those credentials available to anyone who knows how to look for them. If you need to avoid that, you'll need your server (or another server) to make requests on behalf of authenticated clients.
As for the second part, it's very hard to tell without a reproduction, unfortunately!
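The "server makes requests on behalf of clients" approach can be sketched as a Sapper server route. This is a sketch under assumptions: the route path (e.g. src/routes/api/shop.json.js), the SHOPIFY_KEY variable, and fetchProducts are illustrative stand-ins, not real APIs.

```javascript
// Keep the Shopify key in a server-only variable (no SAPPER_APP_ prefix,
// so it is never replaced into the client bundle) and return only public data.
const key = process.env.SHOPIFY_KEY; // never shipped to the browser

// In the real route file this would be `export async function get(req, res)`.
async function get(req, res) {
  const data = await fetchProducts(key); // stand-in for the shopify-buy call
  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify(data));
}

// Stand-in so the sketch is self-contained; replace with a real API call.
async function fetchProducts(accessToken) {
  return { products: [], authenticated: Boolean(accessToken) };
}
```

The client-side store then fetches /api/shop.json instead of holding the credentials itself.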
