Deploying web services with different environment variables in the frontend - docker

I have a web service which consists of a backend and a frontend, and in the frontend I use an API URI which can change depending on the environment the service is deployed to.
Using webpack's EnvironmentPlugin I can build the source code with different environment variables. The plugin allows me to use process.env in JavaScript, which is convenient during development, but once the frontend's code is bundled, process.env is frozen with whatever environment variables were present at build time.
The issue is that in the CI pipeline I build a Docker image for the web service, but I don't know the API URI until I deploy it later on.
How can I effectively change the API URI based on environment variables?

You have two options for passing environment variables. One is via a file:
docker run --env-file ./env.list ubuntu bash
The other is via the command line, with the -e option to the docker run command. You can stack the -e option to pass more than one environment variable.
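For example, stacking -e (the image name and variable values here are just placeholders):
docker run -e API_URI=https://api.example.com -e LOG_LEVEL=info my-web-service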
One of the things you have in your Dockerfile is the ability to declare the ENTRYPOINT. With that you can do something like:
set environment data via the docker run command line (via the info above)
in the script, read the environment info
finally, use it in the script to modify whatever file contains the URI data, using something like sed
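A minimal sketch of such an entrypoint script, assuming the bundle was built with a placeholder value __API_URI__ and the static files live in /usr/share/nginx/html (both of these are assumptions; adjust to your setup):
#!/bin/sh
# entrypoint.sh -- patch the runtime API URI into the pre-built bundle
set -e
# Fail fast if the variable wasn't passed via `docker run -e API_URI=...`
: "${API_URI:?API_URI must be set}"
# Replace the build-time placeholder in every bundled JS file
sed -i "s|__API_URI__|${API_URI}|g" /usr/share/nginx/html/*.js
# Hand control over to the container's main process
exec "$@"
In the Dockerfile you would then declare ENTRYPOINT ["/entrypoint.sh"] followed by your usual CMD.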

Related

How to access Cloud Composer system files

I am working on a migration task from an on-premises system to Cloud Composer. The thing is that Cloud Composer is a fully managed version of Airflow, which restricts access to the file system behind it. On my on-premises system I have a lot of environment variables for paths, saved like /opt/application/folder_1/subfolder_2/....
Looking at the Cloud Composer documentation, it says that you can access and save your data in the data folder, which is mapped to /home/airflow/gcs/data/. This implies that if I move forward with that mapping, I will have to change my env variable values to something like /home/airflow/gcs/data/application/folder_1/folder_2, which could be a bit painful, knowing that I'm running many bash scripts that rely on those values.
Is there any approach to solve such a problem?
You can specify your env variables during Composer creation/update process [1]. These vars are then stored in the YAML files that create the GKE cluster where Composer is hosted. If you SSH into a VM running the Composer GKE cluster, then enter one of the worker containers and run env, you can see the env variables you specified.
[1] https://cloud.google.com/composer/docs/how-to/managing/environment-variables
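For example, with the gcloud CLI (the environment name, location, and variable here are placeholders; check the linked docs for the exact flag, but I believe it is --update-env-variables):
gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables=APP_BASE_PATH=/home/airflow/gcs/data/application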

aspnet core 2.2 web app environment variables not changing in docker

I've got an ASP.NET Core 2.2 web app that I have enabled Docker support for. I have created a test app for review here.
I am running it in VS with Docker locally. I want to add environment variables/secrets in order to override the values in the appsettings.json file. To do this locally, I have tried changing values in:
launchsettings.json
Dockerfile
However, for both of these, when I attach to my Docker instance and printenv the variable values, I find that the variable ASPNETCORE_ENVIRONMENT still shows up as Development.
I am attaching to the running container like this:
docker exec -t -i 4c05 /bin/bash
I have searched all files in my solution. I can't find ASPNETCORE_ENVIRONMENT being set to Development anywhere in the solution. However, somehow, the environment variable is still being set with that value.
What could be going wrong? I want that variable to change. Once that's working, what I really want to do is add a connection string secret to the environment variables so that it can be used locally via the appsettings.json file, or via a Docker secret environment variable if the ASP.NET Core web app is running in a container. I think I've got this code working; it's just that the variables are not being deployed as expected to the running container.
thanks
Mmm - it seems there is a problem with Dockerfile support in VS. However, when I use Container Orchestration Support with docker-compose, the functionality works as expected, so I'm answering the question myself :-)
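For anyone hitting the same issue, the override that worked for me looks roughly like this in docker-compose.override.yml (a sketch; the service name webapp is a placeholder):
version: '3.4'
services:
  webapp:
    environment:
      - ASPNETCORE_ENVIRONMENT=Staging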

Put application's public URL in its Docker Compose environment

I have a Python API that has to know its public address to properly create links to itself (needed when doing paging and other HATEOAS stuff) in the responses it creates. The address is given to the application as an environment variable.
In production it's handled by Terraform, but I also have extensive local tests that make use of Docker Compose. In tests for paging I need to be aware that I'm running locally, and I need to replace the placeholder address I put in the app's env with http://localhost:<apps_bound_port> in order to follow the links.
I don't want to do that. I'd like to have a way to put the port assigned by Docker into the app's environment variables. The problem wouldn't exist if I were using fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I can have multiple instances of Compose running at once, which wouldn't work with fixed ports.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.
The only solution I have for my problem now is to find a free port before Compose runs, and then pass it as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while setting up the port mapping like this in docker-compose.yml:
ports:
- "${API_PORT}:8000"
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to put the logic for getting the port into an env variable in both places.
Is there something I'm missing, or should I create a feature request for Docker Compose?

.Net Core and Docker (Manage Application Settings)

I have just started with Docker. After spending a lot of time on Docker videos and tutorials, I am finally able to create my first Docker image (and push it to Docker Hub). I am going to use that image for my dev environment shortly.
My question is:
I have a few application configuration values in my appsettings.json file. Those configurations are different for different environments. When I pull the Docker image in my dev environment, those configurations need to change according to the dev environment. I am not sure how to manage this. Does anyone have an idea?
Some useful information:
I have a .NET Core 2.0 application.
I am using Docker for Windows (as a requirement).
I'll host the container either in a VM or on Azure App Service.
Have you already taken a look at this? Using ConfigurationBuilder:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json")
.AddJsonFile($"appsettings.{env.EnvironmentName}.json")
.AddEnvironmentVariables()
.Build();
In your Startup.cs, if everything is intact you should have this:
var configuration = new ConfigurationBuilder()
.AddJsonFile("appsettings.json")
.AddJsonFile($"appsettings.{env.EnvironmentName}.json")
.AddEnvironmentVariables()
.Build();
You have two environment names other than Production. Those are:
- Development
- Staging
ASP.NET Core understands which environment you want from the value of the environment variable ASPNETCORE_ENVIRONMENT, which should be Production, Development, or Staging.
The environment you are in is important because ASP.NET Core picks the right settings JSON file based on your environment name. The code I shared above does this:
Load settings from appsettings.json
Find the environment-specific settings file:
For Production -> appsettings.production.json (note: this is usually not used)
For Development -> appsettings.development.json
For Staging -> appsettings.staging.json
Override setting values based on the environment-specific settings JSON file
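For example (hypothetical values): if appsettings.json contains { "ApiUri": "https://prod.example.com" } and appsettings.development.json contains { "ApiUri": "http://localhost:5000" }, then running with ASPNETCORE_ENVIRONMENT=Development makes configuration["ApiUri"] return the localhost value.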
So the solution to your problem is:
Make sure your appsettings.json file is filled for production values.
Add settings files for appsettings.development.json and appsettings.staging.json. Make the necessary modifications in the contents of these files.
When you run your Docker container, override the environment variable ASPNETCORE_ENVIRONMENT based on the environment settings you would like to keep as I stated above.
When you docker run you can supply it a list of environment variables or a file with environment settings. This allows you to set the variable you need modified when you start up your container.
Using the -e flag:
docker run -it -e ASPNETCORE_ENVIRONMENT=Prod...
Using the --env-file flag:
docker run -it --env-file ./Prod.env ...
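where a Prod.env file might look like this (a sketch; the connection-string key is a placeholder, and ASP.NET Core maps __ in variable names to the : separator used in configuration keys):
ASPNETCORE_ENVIRONMENT=Production
ConnectionStrings__Default=Server=prod-db;Database=app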
For reference: https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e-env-env-file

Transmit Heroku environment variables to Docker instance

I am building a RoR app on Heroku that must run inside a Docker container. To do so I use the official Dockerfile. As is very common with Heroku, I need a few add-ons to make this app fully operational. In production the variable DATABASE_URL is available within my app, but if I try some other add-ons that use environment variables (Mailtrap in my case), the variables aren't copied into the instance at runtime.
So my question is simple: how can I make docker instances aware of the environment variables when executed on Heroku?
Before you ask: I already know that we can specify an environment directive right in docker-compose.yml. I would like to avoid that in order to be able to share this file through the project repository.
I didn't find any documentation about it, but it appears that Heroku very recently changed the way it handles config vars in Docker containers: they are now replicated automatically (values from docker-compose.yml are simply ignored).
The workaround to avoid committing sensitive config files would be to create a docker-compose.yml.example with empty fields and commit it, then add docker-compose.yml to .gitignore.
Since that's not very practical on Heroku, you can use the --env docker switch to add any variable to the container's environment.
Like this: docker run --env "MY_VAR=yolo" my_image:my_tag
You could also serve a private docker-config.yml from a secure site that Heroku has access to (that would be my preferred solution in your case).
