Export environment variables to JupyterHub users, without using Docker?

JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create like this (or, for that matter, those set in my .bashrc) before running JupyterHub do not get set within the user's JupyterLab session. They are available in the console, with or without the pipenv, and within Python itself via os.getenv().
However, in JupyterHub's spawned JupyterLab for my user (me), the environment variable myname is not available, even if I export it in a bash session from within JupyterLab.
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?

In the jupyterhub_config.py file, you may want to add the environment variables you need to the c.Spawner.env_keep list:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
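Applied to the question, a minimal sketch (assuming the exported variable is literally called myname) would simply include that variable in the list; env_keep only controls which variables from the Hub's own process environment are passed through to the spawned single-user servers, so the variable still has to be exported before JupyterHub is started:
# jupyterhub_config.py -- sketch: pass selected Hub environment variables
# through to spawned single-user servers; 'myname' is the variable from the question
c.Spawner.env_keep = [
    'PATH', 'LANG', 'LC_ALL',
    'myname',
]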
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html

Unfortunately, unlike a single-user Jupyter Notebook/Lab, JupyterHub targets a multi-user environment, and customization and security settings are not a neatly defined area. It ships with some defaults and a ton of ways to customize behavior, but only a handful of examples. You need to dig into the documentation, look for similarities to your use case, and adjust through trial and error.
Fortunately, besides the configuration files for JupyterHub and the Jupyter notebook servers, namely jupyter_notebook_config.py and jupyterhub_config.py, we can use environment-file-reading packages per user. This flexibility comes from running a full programming-language kernel.
But this requires being able to install new packages, having them already installed, or asking an admin to install them for the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name that shows it is an environment file. You can have as many different files as you need. Most production setups use the name .env, but Jupyter will not list dot files in its file view, so avoid that here. Also, be careful with quotes: sometimes you need them and sometimes they cause errors, depending on which library you use and where you use the values.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment-file reader, then read from the file(s) you want. You can use `%pip install` in the notebook when needed; just use it cautiously.
test.ipynb
# package already installed, so installation commented out
# %pip install python-environ
import environ

# Read test.env (or the file named in an ENV_PATH variable, if one is set)
env = environ.Env()
env.read_env(env.str('ENV_PATH', 'test.env'))

NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
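If installing a reader package is not an option (the situation mentioned above), a rough fallback is to parse simple KEY=VALUE lines by hand; this sketch ignores quoting rules beyond stripping surrounding double quotes:
import os

# Naive parser for simple KEY=VALUE files such as test.env above;
# blank lines and comments are skipped, surrounding double quotes removed.
with open('test.env') as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith('#') or '=' not in line:
            continue
        key, _, value = line.partition('=')
        os.environ[key.strip()] = value.strip().strip('"')

print(os.environ.get('NAME'), ':', os.environ.get('TEST'))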
If you are an admin of the hub, be aware that some libraries can be used in ways that break your restrictions, so keep an eye on what permissions you give your users. If you use custom Docker images, though, there should not be any leakage, since they are designed to be isolated from your system.

Related

How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing but apparently there's no simple parameter to enable caching in ArangoDB official image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("#arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output with a different driver (journals, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to see if it works.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON
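For what it's worth, one way to hand that flag to the server with Compose might be via the service command rather than an environment variable. This is an untested sketch; the service name, image tag, and password are placeholders:
# docker-compose.yml (fragment) -- untested sketch, placeholders throughout
services:
  arangodb:
    image: arangodb:3.11
    environment:
      - ARANGO_ROOT_PASSWORD=changeme
    command: ["arangod", "--query.cache-mode", "on"]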
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can view and alter the AQL query cache configuration in the Web UI under Support -> Rest API -> AQL.
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.

Using Environment Variables for Per Environment Configuration in iOS

I am working on an iOS application that uses a pretty normal multi-environment deployment model. We have a QA, Prod, and "Dev" version of the app that all talk to their own corresponding backends. I am new to iOS development but am familiar with Node, Java, and a few other development environments.
The first thing I reached for to solve this problem was environment variables. I saw that Xcode had a way to set environment variables in a Scheme and that they could be read pretty easily. So I used 4 environment variables per environment to configure a few needed backend hosts. Everything seemed to be going fine until I realized that those environment variables seem to ONLY be available when running the app through Xcode. Is that correct? Is there no way to configure environment variables that "bundle up" with an app? If so, the ability to configure environment variables at all seems like a footgun.
What I mean is, in a NodeJS or Java app, I can set a number of useful "necessary" configs like backend hosts and use some approach to provide those values when running the app for real. It seems like in iOS / Swift, environment variables are only useful for development-time debugging settings? The asymmetry between what's available in Xcode vs a "real" shipped app seems odd.
Is there a similar standard way that I can configure my app for multiple different environments that works on shipped applications and ideally just involves reading some value at runtime rather than using conditionals and/or using compiler flags or something?
You are correct. The Environment Variables are only meaningful when executing the Scheme in Xcode. Their primary use in that context is to activate debugging features that are not on all the time. For example, if you try to create a bitmap Core Graphics context (CGContext) with an invalid set of parameters, the debugger may tell you to set an environment variable to see additional debugging output from Core Graphics. Or you can set environment variables to turn on memory management debugging features.
When you are running an application on a server, the Unix framework in which the application is running is part of the "user experience". In that context it makes sense for the application to use things like environment variables.
As developers we understand that a mobile app is running inside a Unix process, but that Unix environment is mostly unavailable to us. A similar feature that is common to Unix apps is command line arguments. An iOS application on startup receives command line arguments (argc and argv), but there is no way to specify those when the app is launched either.
There are a number of places you could include configuration information like that which you describe in your application. The most common that I can think of is to include the setting in the application's Info.plist. At runtime you can access the contents of the property list by fetching the main bundle and asking for its infoDictionary:
let infoBundle = Bundle.main.infoDictionary
let mySetting = infoBundle?["SomeSetting"] as? String
When the application's Info.plist is created, it DOES have access to the environment variables declared in the Scheme, so you could put the environment variables in the scheme, reference them in the Info.plist, and retrieve them at runtime from the main bundle.
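A small wrapper can make those lookups safer at call sites. This is just a sketch, and the "BackendBaseURL" key is a hypothetical name you would add to your Info.plist yourself:
import Foundation

enum AppConfig {
    // Reads a required string from Info.plist and fails loudly if it is missing.
    static func string(forKey key: String) -> String {
        guard let value = Bundle.main.object(forInfoDictionaryKey: key) as? String,
              !value.isEmpty else {
            fatalError("Missing Info.plist entry for \(key)")
        }
        return value
    }
}

let backendHost = AppConfig.string(forKey: "BackendBaseURL")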
Try using FeatureFlags; maybe it will help you. Check this:
https://medium.com/@rwbutler/feature-flags-a-b-testing-mvt-on-ios-718339ac7aa1

defining a group of services to run with docker compose

I'm wondering if there is a method to define, and launch, a group of services configured in a docker-compose.yml file.
To give a real-world example, I'm working with laradock: it has a lot of services configured (I think more than 50), and you have to "select" which ones to run every time.
In fact to run a normal php + apache + mysql stack, you can use:
docker compose up workspace apache2 mysql
The final question is: can those three services be grouped under an alias like "amp", and can that alias be used to launch them with:
docker compose up amp ?
What I have tried already
I thought about duplicating the docker-compose.yml into a simpler one, where only the required services are present.
Anyway, the configuration I'm using (laradock) is quite complex; being able to define an alias would make it much easier to handle.
Imagine the case where you need to add one more service to the group: instead of cutting and pasting its configuration(s), you just add its name, and nothing else.
Is this possible somehow?
Thank you
One simple way that might do what you want is to create an alias for your shell/terminal. To do that in a "permanent" way you might need to edit the ~/.bashrc or ~/.bash_profile files. For reference you can check, for example, this link here.
That way if you want to change the services of a "group" (which will be defined by the alias), you will just need to edit the line for that alias in the file.
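For the laradock example above, the alias would be something like this (sketch; put it in ~/.bashrc or ~/.bash_profile and reload the shell):
# ~/.bashrc -- "amp" groups the three services from the question
alias amp='docker compose up workspace apache2 mysql'
After reloading (source ~/.bashrc), running amp starts exactly those three services; adding a service to the group means editing only that one line.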

How do I know when to use which hyperledger fabric docker container env variables

I have played around with Hyperledger Fabric lately and I'm not able to find a good, exhaustive description of ALL the environment variables one can set on the Hyperledger Fabric Docker containers (fabric-orderer, fabric-peer, fabric-ca, fabric-tools, fabric-kafka, ...)
Is there such documentation? I find very little about the possible variables, what their different values do, and when one would choose which value, even in the official documentation.
Can anyone provide such a list with explanation? Or can we collect information to create such a list?
Ideally, I would like to have something like the following:
fabric-orderer
ORDERER_GENERAL_GENESISMETHOD
values: file, provisional (default)
file is used when you want to provide the genesis block as a file to the container (see ORDERER_GENERAL_GENESISFILE)
provisional is used when ...
ORDERER_GENERAL_GENESISFILE
value(s): path to the genesis file
fabric-peer
some env var
... explanation ...
Here's also a sample list of some env variables I've seen other people use without knowing why, what they mean, or whether they even work:
ORDERER_GENERAL_LEDGERTYPE
ORDERER_GENERAL_BATCHTIMEOUT
ORDERER_GENERAL_MAXWINDOWSIZE
CONFIGTX_ORDERER_KAFKA_BROKERS
ORDERER_GENERAL_LISTENADDRESS
ORDERER_GENERAL_PORT
ORDERER_GENERAL_HOST
...
I hope asking this question here is ok (it's my first).
Thanks a lot for your help!
This is a great question, and would indeed make a good addition to the docs. It is not currently explicitly documented, but I can explain at least how you can determine what the variables are.
We use viper for managing configuration. We ship a sample configuration with the distribution of the Docker images and binaries. There are three configuration YAML files: configtx.yaml, core.yaml, and orderer.yaml. For each configuration parameter in a YAML file, you can derive an environment variable that can be used to override the value in the config file used at startup. The environment variable name is derived from the file name (e.g. CORE for core.yaml) plus the underscore-separated, capitalized path of the nested properties in the config (e.g. CORE_LOGGING_LEVEL).
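As an illustration of that derivation rule (the keys shown are a small, illustrative fragment; check the sample orderer.yaml shipped with your Fabric version for the actual values):
# orderer.yaml (fragment, illustrative only)
General:
  ListenAddress: 0.0.0.0
  GenesisMethod: file
# Derived overrides: prefix = file name, nested keys joined with "_" and upper-cased
#   ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
#   ORDERER_GENERAL_GENESISMETHOD=file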
The sample apps provided contain docker-compose YAML configurations that use most of the properties you might consider setting for your own purposes.
Meanwhile, I have created a JIRA to track this and invite contributions to help us flesh out an addition to our documentation that provides a useful reference.

dropwizard get on demand jdbi connection

I have a simple CRUD application with backend code in Dropwizard. The entire app just comprises simple resource classes and CRUD operations, except for one case where some business logic is involved.
I am trying to extract this into a service instead of putting it in the resource class itself. But for that my service would need an on-demand JDBI connection to access data and do its thing.
All my connection strings and config values are in the YML file. Since this app will be running on different servers with different YML files, I don't want to hardcode the YML file name just to read it again to get the connection strings.
How do I achieve this?
Can you detect what environment you are on?
If so, can you do something like ${environment}.yml?
There is a Configuration project at Apache which might help.
Otherwise, is it a case of in dev you want to run
java -jar app.jar server dev.yml
and in prod you want to run java -jar app.jar server prod.yml? I imagine you have separate daemons in each environment, so those environments will pick up the right configuration, if you've configured them that way.
Otherwise, if the property names are the same, but their values differ, and you pick up the right yml in the right environment, things should work.
If I haven't addressed your question, can you please elaborate your problem a little more?
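For the JDBI part specifically, a common pattern (just a sketch, assuming Dropwizard 2.x with the dropwizard-jdbi3 module; all class names here are made up) is to build the Jdbi instance once in run() from whichever YML was passed on the command line and hand it to the service, so nothing outside run() ever touches the file:
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Application;
import io.dropwizard.Configuration;
import io.dropwizard.db.DataSourceFactory;
import io.dropwizard.jdbi3.JdbiFactory;
import io.dropwizard.setup.Environment;
import org.jdbi.v3.core.Jdbi;

public class CrudApplication extends Application<CrudConfiguration> {
    @Override
    public void run(CrudConfiguration config, Environment environment) {
        // config is already populated from dev.yml / prod.yml given on the command line
        Jdbi jdbi = new JdbiFactory().build(environment, config.getDataSourceFactory(), "db");
        BusinessService service = new BusinessService(jdbi);
        // register resources that take `service` in their constructors here
    }

    public static void main(String[] args) throws Exception {
        new CrudApplication().run(args);
    }
}

class CrudConfiguration extends Configuration {
    @JsonProperty("database")
    private DataSourceFactory database = new DataSourceFactory();

    public DataSourceFactory getDataSourceFactory() {
        return database;
    }
}

class BusinessService {
    private final Jdbi jdbi;

    BusinessService(Jdbi jdbi) {
        this.jdbi = jdbi; // use jdbi.withHandle(...) for on-demand connections
    }
}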
