how to load "NEXT_PUBLIC" env variables from AWS ELB - environment-variables

How can I load "NEXT_PUBLIC" env variables from AWS ELB?
I deployed my project to AWS ELB and set the env variables in its settings.
Other env keys load fine via process.env.SOMETHING, but keys whose names start with "NEXT_PUBLIC" are not available on the Next.js client side.
I'd appreciate any help solving this.
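One likely cause, not stated in the question: Next.js inlines NEXT_PUBLIC_ variables into the client bundle when `next build` runs, so setting them only in the runtime environment is not enough. A minimal sketch, assuming the app is built where the variable is visible (the variable name is illustrative):

```shell
# NEXT_PUBLIC_ variables must be present at build time to reach client code
export NEXT_PUBLIC_API_URL="https://example.com/api"  # hypothetical variable
# npx next build   <- the value is inlined into the client bundle during this step
echo "$NEXT_PUBLIC_API_URL"
```

If the image or bundle is built before the EB environment variables exist, the client code will only ever see the values present at build time.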

Related

How can I parse a JSON object from an ENV variable and set a new environment variable in a dockerfile?

I'm trying to set up Ory Kratos on ECS.
Their documentation says that you can run migrations with the following command...
docker run -e DSN="engine://username:password@host:port/dbname" oryd/kratos:v0.10 migrate sql -e
I'm trying to recreate this for an ECS task, and the Dockerfile so far looks like this...
# syntax=docker/dockerfile:1
FROM oryd/kratos:v0.10
COPY kratos /kratos
CMD ["-c", "/kratos/kratos.yml", "migrate", "sql", "-e", "--yes"]
It uses the base oryd/kratos:v0.10 image, copies across a directory with some config and runs the migration command.
What I'm missing is a way to construct the -e DSN="engine://username:password#host:port/dbname". I'm able to supply my database secret from AWS Secrets Manager directly to the ECS task, however the secret is a JSON object in a string containing the engine, username, password, host, port and dbname properties.
How can I securely construct the required DSN environment variable?
Please see the ECS documentation on injecting Secrets Manager secrets. You can inject specific values from a JSON secret as individual environment variables; search for "Example referencing a specific key within a secret" on that page. So the easiest way to accomplish this, without adding a JSON parser to your Docker image and writing a shell script to parse the JSON inside the container, is to have ECS inject each value as a separate environment variable.
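As a sketch, a container definition that pulls individual keys out of a JSON secret might look like this (the ARN, container name, and key names are placeholders, not taken from the question):

```json
{
  "containerDefinitions": [{
    "name": "kratos-migrate",
    "secrets": [
      {"name": "DB_USERNAME", "valueFrom": "arn:aws:secretsmanager:region:123456789012:secret:mydb:username::"},
      {"name": "DB_PASSWORD", "valueFrom": "arn:aws:secretsmanager:region:123456789012:secret:mydb:password::"}
    ]
  }]
}
```

The `:json-key:version-stage:version-id` suffix on the ARN is how ECS selects a single key from a JSON secret; leaving the last two fields empty uses the latest version. A small entrypoint script can then assemble the DSN from these variables inside the container.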

Where should I place a .env for production

I have a web API built with Flask, and I am using AWS Elastic Beanstalk to serve my app.
I was integrating Jenkins for CI/CD, and this is what my pipeline does:
fetch the code
build the Docker image
push the Docker image to Docker Hub
deploy the Docker image to AWS (Docker Hub to AWS).
All the steps above work as expected, but I have a question about the .env vars.
If I wanted to have different environments (production/development), where should I place the .env file that my web API uses? For development everyone can have their own .env file, but for production not everyone should have access to these variables. That said, where could I place this .env file so that my pipeline can pick up these vars at deploy time?
Thanks.
You will have to configure the environment variables in the AWS Elastic Beanstalk dashboard for your app. You should see Environment variables under the software configuration section.
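Alternatively, non-secret settings can live in version control via an `.ebextensions` config file (a sketch; the variable name and value are placeholders, and real secrets should still come from the dashboard or a secrets store rather than the repo):

```yaml
# .ebextensions/env.config (sketch): set environment properties at deploy time
option_settings:
  aws:elasticbeanstalk:application:environment:
    FLASK_ENV: production
```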

Dockerized Vue SPA with different API urls depending on where it is running

I got this docker image of my Vue.js app that fetches data from an api running in a java backend. In production the api is running under app.example.com/api, in staging it will run under staging.example.com/api and when running it on my local computer the api will be running at localhost:8081. When running the frontend on my computer I might be using vue cli serve in the project folder, or it might be started as a docker image using docker-compose. The backend is always running as a docker image.
I would like to be able to use the same docker image for local docker-compose, deploy to staging and deploy to production, but using a different url to the backend api. As a bonus it would be nice to be able to use vue-cli serve.
How can this be achieved?
You can use an environment variable containing the API url and then use the environment variable in your Vue app.
The Vue cli supports environment variables and allows you to use environment variables that start with VUE_APP_ in your client-side code. So if you create an environment variable called VUE_APP_API_URL in the environment you're running the Vue CLI in (whether it is Docker or on your host machine) you should be able to use process.env.VUE_APP_API_URL in your Vue code.
If you're running Vue CLI locally, you can just run export VUE_APP_API_URL="localhost:8081" before running vue cli serve.
Docker also supports environment variables. For example, if your SPA Docker service is called "frontend", you can add an environment variable to your Docker Compose file like this:
frontend:
  environment:
    - VUE_APP_API_URL
If you have the VUE_APP_API_URL environment variable set on the host you're running Docker from, it will be passed on to your "frontend" Docker container. So, for example, if you're running it locally you can run export VUE_APP_API_URL="localhost:8081" before running Docker Compose.
You can also pass through environment variables using an .env file. You can read more about environment variables in Docker Compose files here if you're interested.
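A compose fragment with a local fallback might look like this (a sketch; the default URL is an assumption based on the question's localhost:8081 backend):

```yaml
services:
  frontend:
    environment:
      # use the host's value if set, otherwise default to the local backend
      - VUE_APP_API_URL=${VUE_APP_API_URL:-localhost:8081}
```

With this, the same image works locally with no setup, while staging and production override the variable at deploy time.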
You can create .env files, one per environment (development, production, ...).
Check this out for more details:
https://dev.to/ratracegrad/how-to-use-environment-variables-in-vue-js-4ko7

kubernetes set root environmental variables

I have a Rails app that runs apache2 as root, with database.yml config values set by environment variables passed in via a Kubernetes ConfigMap.
However, since apache2 is a root process, it doesn't see the passed-in environment values. How do I set the environment values for root from the Kubernetes ConfigMap?
since apache2 is a root process, it doesn't have the passed in environmental values.
If "Use ConfigMap-defined environment variables" is not possible, you could add the ConfigMap data to a volume, which can then be read by a wrapper around the apache2 runner.
That wrapper can:
read the values in the ConfigMap-based volume
set the right environment variables
launch apache2
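Such a wrapper might look like this (a sketch; /etc/config is an assumed mount path for the ConfigMap volume, and the final exec line stands in for whatever normally starts apache2):

```shell
#!/bin/sh
# wrapper.sh (sketch): export every file in the ConfigMap volume as an env var
CONFIG_DIR="${CONFIG_DIR:-/etc/config}"   # assumed ConfigMap mount path
for f in "$CONFIG_DIR"/*; do
  [ -f "$f" ] || continue
  # file name becomes the variable name, file contents the value
  export "$(basename "$f")=$(cat "$f")"
done
exec "$@"   # e.g. wrapper.sh apachectl -D FOREGROUND
```

Each ConfigMap key mounted as a file becomes one environment variable, and because the wrapper execs the real command, apache2 inherits those variables regardless of which user it runs as.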

AWS credentials during Docker build process

As part of the process to build my Docker container I need to pull some files from an S3 bucket, but I keep getting fatal error: Unable to locate credentials, even though for now I am setting the credentials as ENV vars (though I would like to know of a better way to do this).
So when building the container I run
docker build -t my-container --build-arg AWS_DEFAULT_REGION="region" --build-arg AWS_ACCESS_KEY="key" --build-arg AWS_SECRET_ACCESS_KEY="key" . --squash
And in my Dockerfile I have
ARG AWS_DEFAULT_REGION
ENV AWS_DEFAULT_REGION=$AWS_DEFAULT_REGION
ARG AWS_ACCESS_KEY
ENV AWS_ACCESS_KEY=$AWS_ACCESS_KEY
ARG AWS_SECRET_ACCESS_KEY
ENV AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY
RUN /bin/bash -l -c "aws s3 cp s3://path/to/folder/ /my/folder --recursive"
Does anyone know how I can solve this? (I know there is an option to add a config file, but that seems an unnecessary extra step, as I should be able to read from ENV.)
The name of the environment variable is AWS_ACCESS_KEY_ID, not AWS_ACCESS_KEY.
You can review the full list in the Amazon docs.
The following variables are supported by the AWS CLI:
AWS_ACCESS_KEY_ID – AWS access key.
AWS_SECRET_ACCESS_KEY – AWS secret key. The access and secret key variables override credentials stored in credential and config files.
AWS_SESSION_TOKEN – Session token. A session token is only required if you are using temporary security credentials.
AWS_DEFAULT_REGION – AWS region. This variable overrides the default region of the in-use profile, if set.
AWS_DEFAULT_PROFILE – Name of the CLI profile to use. This can be the name of a profile stored in a credential or config file, or default to use the default profile.
AWS_CONFIG_FILE – Path to a CLI config file.
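Applied to the Dockerfile in the question, the fix is just the variable name (a sketch of the changed lines only):

```dockerfile
# AWS_ACCESS_KEY_ID is the name the AWS CLI actually reads
ARG AWS_ACCESS_KEY_ID
ENV AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID
```

and pass --build-arg AWS_ACCESS_KEY_ID="key" on the docker build command line to match. Keep in mind that build args and ENV values end up in the image layer history, so long-lived credentials can leak from the image even when using --squash.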
