Tutor loading environment values from helm values file - openedx

Currently I'm running Open edX on Kubernetes with Tutor. I want to package the application as a Helm chart, and I want to pass configuration values to the Open edX platform from my values.yaml file.
I'm wondering if this is possible, and how to tell Tutor to read values from my Helm values at startup instead of from its defaults.yml file.
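For illustration only, the generic Helm pattern for this kind of thing is to render chart values into a ConfigMap and mount it wherever the application expects its configuration. A minimal sketch follows; the ConfigMap name, the values layout, and the way the file is consumed are assumptions for the sketch, not anything the Tutor project documents.

# templates/openedx-config.yaml (hypothetical template inside the chart)
apiVersion: v1
kind: ConfigMap
metadata:
  name: openedx-config
data:
  config.yml: |
    LMS_HOST: {{ .Values.openedx.lmsHost | quote }}
    CMS_HOST: {{ .Values.openedx.cmsHost | quote }}

# values.yaml (hypothetical keys)
openedx:
  lmsHost: lms.example.com
  cmsHost: studio.example.com

The ConfigMap would then have to be mounted over whatever file Tutor actually reads its settings from at startup, which is the part that needs confirming against Tutor's own documentation.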

Related

Kubernetes Access Windows Environment Variable

How can I access or read a Windows environment variable in Kubernetes? I achieved the same thing in a Docker Compose file.
How can I do the same in Kubernetes, given that I am unable to read the Windows environment variables?
Nothing in the standard Kubernetes ecosystem can be configured using host environment variables.
If you're using the core kubectl tool, the YAML files you'd feed into kubectl apply are self-contained manifests; they cannot depend on host files or environment variables. This can be wrapped in a second tool, Kustomize, which can apply some modifications, but that explicitly does not support host environment variables. Helm lets you build Kubernetes manifests using a templating language, but that also specifically does not use host environment variables.
You'd need to somehow inject the environment variable value into one of these deployment systems. With all three of these tools, you could include those in a file (a Kubernetes YAML manifest, a Kustomize overlay, a Helm values file) that could be checked into source control; you may also be able to retrieve these values from some sort of external storage. But just relaying host environment variables into a container isn't an option in Kubernetes.
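For example (all names here are made up for illustration), you can let your local shell do the expansion when you invoke Helm, so the value reaches the chart as an ordinary setting rather than as a host environment variable:

# bash: the shell substitutes $MY_WIN_VALUE before Helm ever runs
helm install my-release ./my-chart --set app.someSetting="$MY_WIN_VALUE"

# PowerShell equivalent on Windows
helm install my-release ./my-chart --set app.someSetting="$Env:MY_WIN_VALUE"

A template in the chart would then reference .Values.app.someSetting, for instance as an env entry on the container, so the value is baked into the rendered manifest at install time.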

Create a google bigquery connection from Airflow UI (Dockerized)

I am running an Airflow instance using Docker. I am able to access the Airflow UI at http://localhost:8080/, and I am able to execute a sample DAG using PythonOperator. Using PythonOperator I am able to query a BigQuery table in a GCP environment. The service account key JSON file is added in my docker-compose YAML file.
This works perfectly.
Now I want to use BigQueryOperator and BigQueryCheckOperator, for which I need a connection ID. This connection ID comes from Airflow connections, which are created through the Airflow UI.
But when I try to create a new Google BigQuery connection, I get errors. Could anyone please help me fix this?
In your docker compose file, can you set the environment variable GOOGLE_APPLICATION_CREDENTIALS to /opt/airflow/configs/kairos-aggs-airflow-local-bq-connection.json? This might be enough to fix your first screenshot.
Looking at the docs and comparing your second screenshot, I think you could try selecting 'Google Cloud Platform' as the connection type and adding a project ID and Scopes to the form.
The answers to this question may also be helpful.
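A minimal docker-compose sketch of that first suggestion (the service name and host directory are assumptions; the same environment block would also need to go on the scheduler/worker services):

services:
  airflow-webserver:                       # hypothetical service name from your compose file
    environment:
      GOOGLE_APPLICATION_CREDENTIALS: /opt/airflow/configs/kairos-aggs-airflow-local-bq-connection.json
    volumes:
      - ./configs:/opt/airflow/configs     # assumes the key file sits in ./configs on the host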

DevOps and environment variables with Docker images and containers

I am a newbie with Docker and want to understand how to deal with environment variables in images/containers and how to configure CI/CD pipelines.
In the first instance I need the big picture before deep-diving into commands. I searched a lot on the Internet, but in most cases I only found the detailed commands for creating, building, and publishing images.
I have a .NET Core web application. As all of you know, there are appsettings.json files for each environment, like appsettings.development.json or appsettings.production.json.
During the build you can supply the environment information so .NET can build the application with the specified environment variables, like connection strings.
I can define the same steps in the Dockerfile and pass the environment as a parameter or define it as variables. That part works fine.
My question is: do I have to create separate images for each of my environments? If not, how can I create one image and use it to create containers for all of my environments? What is the best practice?
I hope I am understanding the question correctly. If the environments run the same framework, then no. In each project, import the necessary files for Docker and then update the project's docker-compose.yml - it will then create an image for that project. Using Docker Desktop (if you prefer it over the CLI) you can start and stop your containers.
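A minimal sketch of the single-image approach for an ASP.NET Core app, where the same image is reused and only the runtime environment variables change (image name, tag, and values are placeholders):

services:
  web:
    image: mywebapp:1.0.0                            # same image for every environment
    environment:
      ASPNETCORE_ENVIRONMENT: Production             # Development / Staging / Production selects appsettings.<env>.json
      ConnectionStrings__Default: "Server=prod-db;Database=app;User Id=app;Password=changeme"   # double underscore maps to ConnectionStrings:Default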

During the deployment process, how do you get your existing app data into an application created by a public Helm chart for a LAMP stack?

Take bitnami/wordpress or bitnami/drupal for example. There are millions of articles on how to run two lines of code (helm repo add / helm install my-release chart) and have a fully working new version of an app in 30 seconds. But I cannot find ANY information about how to get my existing data into that deployment.
In my development workflow, I use two docker images. One is for the app files and the other is for the database. Locally, it's easy enough to get my data into these images. Using MariaDB's docker instructions, I mount a local directory containing my db.sql file into /docker-entrypoint-initdb.d. The same goes for my files - pull them down into a local directory that's then mounted into the container's /var/www folder. Voila! Instant running app with all existing data.
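For reference, that local workflow as a compose sketch (image tags and host paths are placeholders):

services:
  db:
    image: mariadb:10.6
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      - ./db:/docker-entrypoint-initdb.d   # db.sql placed here is imported on first start
  app:
    image: php:8.1-apache
    volumes:
      - ./web:/var/www                     # existing site files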
So how do I do this with a public Helm chart?
Scenario: I get local copies of my db.sql and web files. I make my changes. I want to use bitnami/drupal to install this into a cluster (so a colleague can see it, UAT, etc). So how do I do that? If this is a values.yaml issue, how do I configure that file to point to the database file I want to initialize with? Or, how do I use Helm install with --set to do it?
If getting a new app up and running is as easy as
helm install my-release bitnami/drupal
then shouldn't it be just as easy to run something like
helm install --set mariadb.docker-entrypoint-initdb.d.file=db.sql --set volume.www.initial.data=/local/web/files new-feature-ticket bitnami/drupal
I know that's pseudo code, but that's exactly the type of answer I'm looking for. I want to be able to deploy this as quickly as I do a new app, but initialized with my existing data, with the bare minimum config needed to do so, whether that's via values.yaml or --set.
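Not a definitive answer, but for the database half the Bitnami MariaDB subchart used by these charts has an initdbScripts value (and an initdbScriptsConfigMap variant) that plays the same role as /docker-entrypoint-initdb.d; the exact key names should be verified against the values reference of the chart version you deploy. There is no equally simple key for the web files, which usually means a pre-populated persistent volume or an init container. A hedged values.yaml sketch:

# values.yaml (sketch; verify key names against the chart's documented values)
mariadb:
  initdbScripts:
    db.sql: |
      -- contents of your dump, inlined or templated in
      CREATE TABLE IF NOT EXISTS example (id INT);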

How to access Cloud Composer system files

I am working on a migration task from an on-premises system to Cloud Composer. The thing is that Cloud Composer is a fully managed version of Airflow, which restricts access to the file system behind it. On my on-premises system I have a lot of environment variables for paths, saved like /opt/application/folder_1/subfolder_2/....
Looking at the Cloud Composer documentation, it says you can access and save your data in the data folder, which is mapped to /home/airflow/gcs/data/. This implies that if I go with that mapping, I will have to change my environment variable values to something like /home/airflow/gcs/data/application/folder_1/folder_2, which could be a bit painful, given that I'm running many bash scripts that rely on those values.
Is there any approach to solve this problem?
You can specify your env variables during the Composer creation/update process [1]. These variables are then stored in the YAML files that create the GKE cluster where Composer is hosted. If you SSH into a VM in the Composer GKE cluster, then enter one of the worker containers and run env, you can see the env variables you specified.
[1] https://cloud.google.com/composer/docs/how-to/managing/environment-variables
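For example, with the gcloud CLI (the environment name, location, and the variable itself are placeholders; see [1] for the exact flags):

gcloud composer environments update my-composer-env \
    --location us-central1 \
    --update-env-variables=APP_BASE_PATH=/home/airflow/gcs/data/application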
