I have a ReactJS app deployed on AWS Amplify, and the environment variables I need in my JS code are set in Amplify. How can I use them from my code? How do I access them?
Assuming you want to access these in the front end application:
During the build process, environment variables can be accessed via ${VARIABLE_NAME}, so you can set a React environment variable at build time. If you are developing your app with a frontend framework that supports its own environment variables, it is important to understand that these are not the same as the environment variables you configure in the Amplify console. For example, React (prefixed REACT_APP_) and Gatsby (prefixed GATSBY_) let you create environment variables that those frameworks automatically bundle into your frontend production build. To understand the effects of using these environment variables to store values, refer to the documentation for the frontend framework you are using.
Assuming you want to access these in the back end application:
You can access environment variables in React through process.env.VARIABLE_NAME (in Create React App, only variables prefixed with REACT_APP_ are exposed this way) and pass them through to the backend.
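As a sketch of the build-time approach (the variable names API_URL and REACT_APP_API_URL are illustrative, and in a real setup API_URL would be defined in the Amplify console rather than exported by hand), a build command in amplify.yml can copy a console variable into a .env entry that Create React App then bundles:

```shell
# In amplify.yml this would be a build command; API_URL is set in the
# Amplify console, and we export it here only to make the sketch runnable.
export API_URL="https://api.example.com"

# Write the value into .env with the REACT_APP_ prefix so Create React App
# bundles it; the app then reads process.env.REACT_APP_API_URL.
echo "REACT_APP_API_URL=${API_URL}" > .env
cat .env
```

The key point is that the prefixed variable is baked into the production bundle at build time, not read at runtime.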
Amazon provides a comprehensive overview of how to store and access Amplify environment variables:
https://docs.aws.amazon.com/amplify/latest/userguide/environment-variables.html
Related
I've built myself a web api that uses a sql database. I've used Visual Studio to create this project, and have the ability to right click and "manage user secrets" on my project file.
It's in user secrets that I've stored my connection string, and I don't want to add it to my GitHub (private) repo.
The user secret is a json file.
How do I now include these secrets? Do I include them in the project, making them a part of the image? Or do I do something fancy with the running instance?
There are many ways to go about doing this, but typically you either:
Extract your secrets from your codebase (Git repo), inject them through environment variables at container startup, and then access them from your application code like any other environment variable. This is not the most secure option, but at least your secrets won't be in your VCS anymore.
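A minimal sketch of the first option (the variable name CONNECTION_STRING and its value are illustrative); with Docker the same injection would be done via `docker run -e CONNECTION_STRING=... myimage`:

```shell
# The secret is supplied by the environment at startup and never stored in
# the repo; the child process stands in for the application reading it.
CONNECTION_STRING='Server=db;Database=app;User Id=app;' \
  sh -c 'echo "app sees: $CONNECTION_STRING"'
```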
Pull the secrets from some type of secrets manager (e.g. AWS Secrets Manager) straight from your application code. This is more secure than the first option, but requires more code changes and creates a dependency between your application and your secrets manager.
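A sketch of the second option, fetching the secret at startup and exposing it to the process environment. The secret name prod/api/connection-string is illustrative, and the real AWS CLI call is left commented out so the sketch runs without AWS credentials:

```shell
# Real call (requires AWS credentials and the secret to exist):
# CONN=$(aws secretsmanager get-secret-value \
#          --secret-id prod/api/connection-string \
#          --query SecretString --output text)

# Stand-in value so the sketch is self-contained:
CONN='Server=db;Database=app;'
export CONN
printenv CONN
```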
Trying to set up our .NET Framework application in Windows AKS and need an elegant way to pass in ApplicationSettings.config & ConnectionStrings.config per environment... trying to use lifecycle hooks & init containers but no luck so far...
any recommendations?
Thanks
When delivering applications in a containerized format, such as a Docker image on a k8s cluster, a common pattern is that configuration is read from environment variables.
When drawing configuration from environment variables at runtime, you can set up your pods so that data stored in ConfigMaps is injected into environment variables.
If you want to avoid a code change, here is an excellent article on generating your config files based on environment variables using a startup script.
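As a sketch of the startup-script idea (the file names and the API_URL variable are illustrative): an entrypoint renders a config file from environment variables before starting the server, so the same image works in every environment:

```shell
# Value injected per environment (e.g. via the pod spec or docker -e).
export API_URL="https://staging.example.com"

# Template shipped in the image; the placeholder is replaced at startup.
echo 'window.apiUrl = "__API_URL__";' > config.template.js
sed "s|__API_URL__|${API_URL}|" config.template.js > config.js
cat config.js   # the web server then serves config.js next to the app
```

The application code only ever reads config.js, so no code change is needed when the environment changes.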
We have built an app with Flutter web and want to deploy it on different servers (staging and prod) with Docker Swarm as part of the backend. The same image should be executable in both environments, as we have to change the URLs we use. I'm looking for a way to set an environment variable in my docker-compose file which can then be read within the Flutter app at runtime. By googling I only found solutions with static files or Gradle, but we use neither.
We use the method in the first code block in Java, but I don't see a corresponding method in the Rails documentation, only the second code block:
Storage storage = StorageOptions.getDefaultInstance().getService();
storage = Google::Cloud::Storage.new(
  project: "my-todo-project",
  keyfile: "/path/to/keyfile.json"
)
If we use an application-specific service account in the Kubernetes cluster, how do we configure the Rails application to work both in the local developer environment and in the k8s cluster?
Also, I would prefer not to initialize with a project_id and a keyfile, since I would have to manage multiple such JSON files across the dev, QA, staging, and production environments.
I would recommend initializing without arguments and using the default discovery of credentials as discussed in the Authentication guide.
When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run, the credentials will be discovered automatically.
For the local developer environment, we always use environment variables together with no-argument initialization and default credential discovery.
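As a sketch of that local setup (the key file path is illustrative): point the standard GOOGLE_APPLICATION_CREDENTIALS variable at a development key file, and the no-argument initializer picks it up through default discovery, while on GKE no variable is needed at all:

```shell
# Local development only; on GKE credentials are discovered from the
# cluster's service account and this variable is not set.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/dev-keyfile.json"

# In Rails, initialization then needs no project/keyfile arguments:
#   storage = Google::Cloud::Storage.new
printenv GOOGLE_APPLICATION_CREDENTIALS
```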
Before moving your app to multiple environments, you should set up your deployment pipeline which will handle how your app is configured for different environments, including configuration of service accounts.
Below are two official Google Cloud documents on how to do this, plus one example on GitLab, so you can follow whichever suits you better.
Continuous deployment to Google Kubernetes Engine using Jenkins
Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
GitLab - continuous-deployment-on-kubernetes
Also, regarding the parameters used to instantiate the cloud storage object: as you can see in the same documentation you referenced in your question, the project parameter is the identifier of your storage project in the cloud, so if you do not set it your app will not be able to find it. As for the keyfile, it is what allows your service account to authenticate, so you can't make it work without it either.
I hope this information helps you.
I have a front-end (React) application. I want to build it and deploy it to 3 environments - dev, test and production. Like every front-end app, it needs to call some APIs. API addresses will vary between the environments, so they should be stored as environment variables.
I use the S2I OpenShift build strategy to create the image. The image should be built and, in a sense, sealed against changes; then, before deployment to each particular environment, the variables should be injected.
So I believe the proper solution is a chained, two-stage build: a first S2I stage which compiles the sources and puts them into an Nginx/Apache/other container, and a second stage which takes the result of the first, adds the environment variables, and produces the final images to be deployed to dev, test and production.
Is it correct approach or maybe simpler solution exists?
I would not bake your environment information into your runtime container image. One of the major benefits of containerization is using the same runtime image in all of your environments. Generating a different image for each environment increases the chance that your production deployments behave differently than those you tested in your lower environments.
For non-sensitive information the typical approach for parameterizing your runtime image is to use one or more of:
ConfigMaps
Secrets
Pod Environment Variables
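For example, a Deployment fragment along these lines keeps one image for all environments and injects the API address from a per-environment ConfigMap (all names here are illustrative):

```yaml
spec:
  containers:
  - name: frontend
    image: myapp:1.0            # identical image in dev, test and production
    env:
    - name: API_URL             # read by the app at runtime
      valueFrom:
        configMapKeyRef:
          name: frontend-config # a different ConfigMap per environment
          key: api-url
```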
For sensitive information the typical approach is to use:
Secrets (which are not very secret, as anyone with root access on the hosts or cluster-admin in the cluster RBAC can read them)
A vault solution like HashiCorp Vault or CyberArk
A custom solution you develop in-house that meets your security needs
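For completeness, a minimal Secret consumed as an environment variable might look like this (names and value are illustrative; note that Secret values are only base64-encoded, not encrypted, which is why the caveat above applies):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-keys
type: Opaque
stringData:                # stringData avoids manual base64 encoding
  API_TOKEN: replace-me
---
# In the pod spec the value is then referenced as:
#   env:
#   - name: API_TOKEN
#     valueFrom:
#       secretKeyRef:
#         name: api-keys
#         key: API_TOKEN
```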