How to evaluate Kubernetes environment variables

I have a web application that has a server side UI component that needs to talk to another component which exposes a REST interface. The UI needs to know the address of the endpoint that the REST component exposes.
When the UI starts, I set an environment variable ( ADDRESS_SERVICE_URI ) that contains the REST endpoint host and port.
I now want to deploy this application into Kubernetes, but in a way that does not introduce any dependency on Kubernetes in the application code. I was hoping to use the environment variables that Kubernetes exposes for service discovery, so I have the following in my UI's deployment description:
env:
- name: ADDRESS_SERVICE_URI
value: http://${REST_SERVICE_HOST}:${REST_SERVICE_PORT}
I was hoping that the environment variables would be evaluated by Kubernetes, but they appear to be passed through "as is" to my application code, as I get the following exception when the code executes.
java.lang.IllegalArgumentException: Illegal character in authority at index 7: http://${REST_SERVICE_HOST}:${REST_SERVICE_PORT}/addresses/postcode/WA11
java.net.URI.create(URI.java:852)
com.sun.jersey.api.client.Client.resource(Client.java:434)
uk.gov.dwp.digital.addresslookup.dao.impl.PostCodeDAOImpl.byPostCode(PostCodeDAOImpl.java:44)
uk.gov.dwp.digital.addresslookup.service.impl.PostCodeServiceImpl.byPostcode(PostCodeServiceImpl.java:17)
uk.gov.dwp.digital.addresslookup.controllers.PostCodeController.processSearchRequest(PostCodeController.java:83)
uk.gov.dwp.digital.addresslookup.controllers.PostCodeController.executeSearch(PostCodeController.java:59)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Is it possible to evaluate the Kubernetes environment variables, or do I need to alter my code to expect the variables to be presented to it as two separate variables, with names that Kubernetes dictates?

Since env only appears to support key:value pairs, your best bet is to use an ENTRYPOINT script to pre-populate your ENV before launching the app.
Dockerfile
FROM yourbaseimage
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
entrypoint.sh
#!/bin/bash
export ADDRESS_SERVICE_URI="http://${REST_SERVICE_HOST}:${REST_SERVICE_PORT}"
exec "$@"
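A quick way to sanity-check the entrypoint's expansion locally (the values are hypothetical, standing in for what Kubernetes would inject for a Service named rest):

```shell
# Simulate the service-discovery variables Kubernetes injects
export REST_SERVICE_HOST=10.0.0.5
export REST_SERVICE_PORT=8080

# The same expansion the entrypoint performs
export ADDRESS_SERVICE_URI="http://${REST_SERVICE_HOST}:${REST_SERVICE_PORT}"
echo "$ADDRESS_SERVICE_URI"    # prints http://10.0.0.5:8080
```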

Related

How to access container environment variables with kubernetes env?

I'm trying to run my application on Kubernetes. My Docker container has environment variables such as PATH and LD_LIBRARY_PATH, which are set in the Dockerfile. I tried to change them in the YAML file like this:
env:
- name: LD_LIBRARY_PATH
value: "foo:$(LD_LIBRARY_PATH)"
The above configuration doesn't work; I just see LD_LIBRARY_PATH=foo:$(LD_LIBRARY_PATH) in the pod. This method seems to work for Kubernetes env variables such as KUBERNETES_PORT_443_TCP_PROTO, but not for Docker env variables.
My questions are:
I think the env settings in the YAML are injected into Docker before the container runs, so Kubernetes cannot read the value of LD_LIBRARY_PATH and therefore can't change the variable. Do I understand that right?
How can I change container environment variables with Kubernetes env? I know I can set env variables in the command field of the YAML file, but that doesn't seem clean; are there other ways to do it?
If Kubernetes can't change existing envs, does that mean the env field in the YAML file is designed only to add new envs?
Thank you!
The Kubernetes variable expansion syntax only works on things Kubernetes directly knows about. Inside a container an environment variable could come from a couple of places (the Dockerfile ENV directive, the base container environment itself, setup in an entrypoint script) and Kubernetes doesn't consider any of these; it only considers things in the same container spec. The API definition of EnvVar hints at this:
Variable references $(VAR_NAME) are expanded using the previously defined environment variables in the container and any service environment variables.
You can't use this Kubernetes syntax to change environment variables in the way you're describing. You can only refer to other things in the same env: block (which may come from ConfigMaps or Secrets) and the implicit variables that come from other Services.
(Changing path-type variables at the Kubernetes level doesn't make a lot of sense. Since an image is self-contained, it already contains all of the commands and libraries it would need. It's difficult in Kubernetes to inject more tools or libraries; it'd be better to install them directly in your image, ideally in /usr/lib or /usr/local/lib, but failing that you can update ENV in a Dockerfile similar to how you suggest here.)
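As a sketch of what the expansion can do (variable names are hypothetical; note the $(...) syntax rather than shell-style ${...}):

```yaml
env:
- name: REST_HOST              # hypothetical; could also come from a ConfigMap
  value: "rest.default.svc"
- name: REST_PORT
  value: "8080"
- name: ADDRESS_SERVICE_URI    # sees only the variables defined above it
  value: "http://$(REST_HOST):$(REST_PORT)"
```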

LXC environment variables

I'm new to LXC containers and am using LXC v2.0. I want to pass settings to the processes running inside my container (specifically, command line parameters for their systemd service files).
I'm thinking of passing environment variables to the container via the config file: lxc.environment = ABC=DEF (I intend to use SaltStack to manipulate these variables). Do I have to manually parse /proc/1/environ to access these variables, or is there a better way I'm missing?
The documentation says:
If you want to pass environment variables into the container (that is, environment variables which will be available to init and all of its descendents), you can use lxc.environment parameters to do so.
I would assume that, since all processes - including the shell - are descendants of the init process, the environment should be available in every shell. Unfortunately, this seems not to be true. In a discussion on linuxcontainers.org, someone states:
That’s not how this works unfortunately. Those environment variables are passed to anything you lxc exec and is passed to the container’s init system.
Unfortunately init systems usually don’t care much for those environment variables and never propagate them to their children, meaning that they’re effectively just present in lxc exec sessions or to scripts which directly look at PID 1’s environment.
So yes, parsing /proc/1/environ appears to be the only option here.
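A minimal sketch of that parsing (ABC=DEF is the example from the question; the file is NUL-separated, hence the tr, and the helper name is hypothetical):

```shell
# Read one variable from a process's environment.
# /proc/PID/environ is a NUL-separated list of NAME=VALUE entries.
read_env() {   # usage: read_env <pid> <name>
  tr '\0' '\n' < "/proc/$1/environ" | grep "^$2=" | cut -d= -f2-
}

# Inside the container, PID 1 is init, so the lxc.environment value
# from the question would be read with:
#   read_env 1 ABC
```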

Put applications's public URL in its Docker Compose environment

I have a Python API that has to know its public address to properly create links to itself (needed when doing paging and other HATEOAS stuff) in the responses it creates. The address is given to the application as an environment variable.
In production it's handled by Terraform, but I also have extensive local tests that use Docker Compose. In the paging tests I need to be aware that I'm running locally and replace the placeholder address in the app's env with http://localhost:<apps_bound_port> before following the links.
I don't want to do that. I'd like a way to put the port assigned by Docker into the app's environment variables. The problem would go away if I used fixed ports (then I could just put something like http://localhost:8000 in the public address variable), but I can have multiple instances of Compose running at once, so fixed ports won't work.
I know I can pass environment variables from the shell running docker-compose to the containers, but I don't know of a way to insert the generated port using this approach.
The only solution I have for my problem now is to find a free port before Compose runs and pass it in as an environment variable (API_PORT=<FREE_PORT> docker-compose up), while mapping the port like this in docker-compose.yml:
ports:
- "${API_PORT}:8000"
This isn't ideal, because I run Compose both from the shell (with make) and from Python tests, so I'd need to put the logic for getting the port into an env variable in both places.
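The free-port lookup described above can be sketched like this (helper name hypothetical; it asks the OS for an unused port by binding to port 0):

```shell
# Ask the kernel for a currently free TCP port
free_port() {
  python3 - <<'EOF'
import socket
s = socket.socket()
s.bind(("", 0))               # port 0: let the OS choose a free port
print(s.getsockname()[1])
s.close()
EOF
}

API_PORT=$(free_port)
echo "API_PORT=$API_PORT"
# API_PORT="$API_PORT" docker-compose up
```

Note the small race: the port is released again before Compose binds it, so another process could in principle grab it in between.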
Is there something I'm missing, or should I create a feature request for Docker Compose?

Deploying web services with different environment variables in the frontend

I have a web service which consists of a backend and a frontend, and in the frontend I use an API URI which can change depending on the environment the service is being deployed to.
Using webpack's EnvironmentPlugin I can build the source code with different environment variables. The plugin lets me use process.env in JavaScript, which is convenient during development, but after bundling, process.env is frozen to whatever environment variables were given at build time.
The issue is that on the CI pipelines I build a docker image for the web service but I don't know the API uri until deploying it later on.
How can I effectively change the API uri based on environment variables?
You have two options for passing environment variables. One is via a file:
docker run --env-file ./env.list ubuntu bash
The other is via the command line, using the -e option to docker run; you can repeat -e to pass more than one environment variable.
Your Dockerfile also lets you declare an ENTRYPOINT. With that you can do something like:
set environment data via the docker run command line (as above)
read the environment in the entrypoint script
finally, use it in the script to modify whatever file contains the URI data, using something like sed
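A sketch of the last step, assuming a __API_URI__ placeholder was baked into the bundle at build time (the placeholder, file path, and function name are all hypothetical):

```shell
# Rewrite the URI placeholder in the built bundle at container start.
inject_uri() {   # usage: inject_uri <bundle-file> <uri>
  sed -i "s|__API_URI__|$2|g" "$1"
}

# In an entrypoint script you would then run something like:
#   inject_uri /usr/share/nginx/html/bundle.js "$API_URI"
#   exec "$@"
```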

Is it possible to specify a Docker image build argument at pod creation time in Kubernetes?

I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via ENV FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting having hard-coded variables to the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant of a CMD serving the application. You can override this command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
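Putting the pieces together, the container spec might look like this (the image name is from the question; the env values are hypothetical in-cluster addresses):

```yaml
containers:
- name: my-foo-app
  image: residentmario/my_foo_app:latest
  env:
  - name: FOO
    value: "api-one.default.svc:9000"
  - name: BAR
    value: "api-two.default.svc:3000"
  command: ["/bin/sh"]
  args: ["./scripts/build.sh"]
```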