How do I know when to use which Hyperledger Fabric Docker container env variables?

I have been playing around with Hyperledger Fabric lately and I'm not able to find a good, exhaustive description of ALL the environment variables one can set on the Hyperledger Fabric Docker containers (fabric-orderer, fabric-peer, fabric-ca, fabric-tools, fabric-kafka, ...).
Does such documentation exist? I have found very little about the possible variables, what their different values do, and when one would choose which value, even in the official documentation.
Can anyone provide such a list with explanation? Or can we collect information to create such a list?
Ideally, I would like to have something like the following:
fabric-orderer
ORDERER_GENERAL_GENESISMETHOD
values: file, provisional (default)
file is used when you want to provide the genesis block as a file to the container (see ORDERER_GENERAL_GENESISFILE)
provisional is used when ...
ORDERER_GENERAL_GENESISFILE
value(s): path to the genesis file
fabric-peer
some env var
... explanation ...
Here's also a sample list of env variables I've seen other people use without knowing why, what they mean, or whether they even work:
ORDERER_GENERAL_LEDGERTYPE
ORDERER_GENERAL_BATCHTIMEOUT
ORDERER_GENERAL_MAXWINDOWSIZE
CONFIGTX_ORDERER_KAFKA_BROKERS
ORDERER_GENERAL_LISTENADDRESS
ORDERER_GENERAL_PORT
ORDERER_GENERAL_HOST
...
I hope asking this question here is ok (it's my first).
Thanks a lot for your help!

This is a great question, and it would indeed make a good addition to the docs. It is not currently explicitly documented, but I can at least explain how to determine what the variables are.
We use viper for managing configuration, and we ship a sample configuration with the distribution of the Docker images and binaries. As you can see, there are three configuration YAML files: configtx.yaml, core.yaml and orderer.yaml. For each configuration parameter in a YAML file, you can derive an environment variable that overrides the value from the config file used at startup. The environment variable name is derived from the file name (e.g. CORE for core.yaml) plus the underscore-separated, capitalized path of the nested property (e.g. CORE_LOGGING_LEVEL for logging.level).
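To make the naming rule concrete, here is a minimal sketch in Python (my own illustration, not Fabric code); the key paths are examples taken from orderer.yaml and core.yaml:

def fabric_env_var(config_file, key_path):
    """Derive the override variable for a nested YAML key, e.g.
    fabric_env_var("orderer.yaml", "General.ListenAddress")
    -> "ORDERER_GENERAL_LISTENADDRESS"
    """
    prefix = config_file.rsplit(".", 1)[0]        # "orderer.yaml" -> "orderer"
    parts = [prefix] + key_path.split(".")        # ["orderer", "General", "ListenAddress"]
    return "_".join(part.upper() for part in parts)

print(fabric_env_var("core.yaml", "logging.level"))  # CORE_LOGGING_LEVEL

By the same rule, ORDERER_GENERAL_LISTENADDRESS from your list should override General.ListenAddress in orderer.yaml, and CONFIGTX_ORDERER_KAFKA_BROKERS should override Orderer.Kafka.Brokers in configtx.yaml.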
The sample apps provided contain docker-compose YAML configurations that exercise most of the properties you might consider using for your own purposes.
Meanwhile, I have created a JIRA to track this and invite contributions to help us flesh out an addition to our documentation that provides a useful reference.

Related

Is it possible to create a QueueProcessingFargateService with read-only root filesystem with cdk?

AWS Foundational Security Best Practices v1.0.0 includes a high-severity check, [ECS.5]: ECS containers should be limited to read-only access to root filesystems. The remediation explains how to change this in the console; however, I haven't found a way to do it for a QueueProcessingFargateService using CDK.
If a QueueProcessingFargateService could be created without an image, this could have been solved by calling add_container on the task definition, but image is mandatory, so that doesn't work.
Does anyone know if it is possible to create a QueueProcessingFargateService with read-only root filesystem and if so, how?
(I use CDK in Python, but a solution in any other CDK language will be just as useful)
As this isn't a property directly supported on the construct, you'll need to use escape hatches to set it:
https://docs.aws.amazon.com/cdk/v2/guide/cfn_layer.html#cfn_layer_resource
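For example, you can reach the underlying CfnTaskDefinition through node.default_child and override the flag there. A minimal sketch in Python (the stack name and image are placeholders, and the override path assumes the container you want to lock down is the first one in the task definition):

import aws_cdk as cdk
from aws_cdk import aws_ecs as ecs
from aws_cdk import aws_ecs_patterns as ecs_patterns

app = cdk.App()
stack = cdk.Stack(app, "WorkerStack")

# Placeholder service; your real queue, image and sizing go here
service = ecs_patterns.QueueProcessingFargateService(
    stack, "Worker",
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
)

# Escape hatch: grab the L1 CfnTaskDefinition behind the construct and
# override ReadonlyRootFilesystem on the first container definition
cfn_task_def = service.task_definition.node.default_child
cfn_task_def.add_property_override(
    "ContainerDefinitions.0.ReadonlyRootFilesystem", True
)

app.synth()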

Export environment variables to JupyterHub users, without using Docker?

JupyterHub has various authentication methods; the one I am using is the PAMAuthenticator, which basically means you log into JupyterHub with your Linux user ID and password.
However, environment variables that I export before running JupyterHub (or, for that matter, those set in my .bashrc) do not get set within the user's JupyterLab session, even though they are available in the console, with or without the pipenv, and within Python itself via os.getenv().
In the JupyterLab session that JupyterHub spawns for my user (me), however, the environment variable myname is not available, not even if I export it in a bash session from within JupyterLab.
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables you need to the c.Spawner.env_keep list:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
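For example, assuming the variable from the question is called myname, a minimal jupyterhub_config.py could either pass it through or set it explicitly:

# jupyterhub_config.py -- a minimal sketch; 'myname' is a placeholder
# Pass the variable through from the environment JupyterHub was started in:
c.Spawner.env_keep = ['myname']

# Or set a value explicitly for every spawned single-user server:
c.Spawner.environment = {'myname': 'some value'}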
Unfortunately, unlike a single-user Jupyter Notebook/Lab, JupyterHub is a multi-user environment, and customization and security settings are not a cut-and-dried area. It provides sensible defaults and a ton of ways to customize usage, but only a handful of examples. You need to dig into the documentation, look for similarities to your use case, and make adjustments through trial and error.
Fortunately, besides the configuration files used to configure JupyterHub and the Jupyter notebook servers (jupyterhub_config.py and jupyter_notebook_config.py), we can use environment-file-reading packages per user; this flexibility comes from the programming-language kernel itself.
But this requires being able to install new packages, having them already installed, or asking the admins to install them on the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name so it is recognizable as an environment file. You can have as many files as you need. Most production setups use the name .env, but Jupyter does not list dotfiles in its file view, so avoid that here. Also, be careful about quotes; sometimes you need them and sometimes they cause errors, depending on which library you use and where you use it.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment-file reader, then read from the file(s) you want. You can use pip install in the notebook when needed; just use it cautiously.
test.ipynb
# package already installed, so installation commented out
# %pip install python-environ
import environ

# read test.env (or the file pointed to by ENV_PATH, if that variable is set)
env = environ.Env()
env.read_env(env.str('ENV_PATH', 'test.env'))

NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
If you are an admin of the hub, be aware that some libraries may be able to break your restrictions, so keep an eye on what permissions you give your users. If you use custom Docker images, though, there should be no leakage, as they are already designed to be isolated from your system.

Map a file in Docker using Docker Volume [duplicate]

Change a config.properties file in a jar/war file at runtime and hot-deploy the changes?
My requirement is as follows: we have a config.properties file in a jar/war file. I have to open the file through a webpage, and after the user has made the necessary changes, update the config.properties in the jar/war file and hot-deploy it. Can we achieve this feat? If so, can you please point me to relevant sites/documents so that I can jumpstart on this?
I strongly recommend that your architect rethink this solution. What you describe should be done through JNDI or a similar technique, not through reloading properties.
Deployments should be considered static: that any given web container allows for magic trickery should not be depended on, and it WILL break some day (most likely at the most inconvenient time).
You've got a couple of problems, off the top of my head:
ensuring that nothing holds a static reference to a java.util.Properties object that has previously loaded your config.properties file;
most servlet engines will unpack your war to a working directory, so the properties file you load won't be the one in the war, it will be the unpacked one. This means your changes will be overwritten when you restart the servlet engine, because that is typically one of the points at which the war is unpacked.
While these problems aren't insurmountable, I've always found it much easier to implement this sort of behavior by storing the properties in JNDI (as Thorbjørn suggests) or in a database (while being careful about the static references I mentioned in point 1).
The JNDI/database solution has the nice side effect of easing deployment into multiple environments, because each environment typically has its own registry/database.
Even though I agree with the comments above, I can suggest one solution:
The Apache Commons Configuration extension gives you the possibility to do something like:
PropertiesConfiguration config = new PropertiesConfiguration("config.properties");
config.setReloadingStrategy(new FileChangedReloadingStrategy());
That could do the trick: the configuration file is reloaded at runtime with virtually no code at all.
However, as with JNDI and other methods of web application configuration, security is a concern. Be careful about which parameters you can/must be able to configure.

Dropwizard: get an on-demand JDBI connection

I have a simple CRUD application with backend code in Dropwizard. The entire app comprises simple resource classes and CRUD operations, except for one case where some business logic is involved.
I am trying to extract this into a service instead of putting it in the resource class itself. But for that, my service would need an on-demand JDBI connection to access the data and do its thing.
All my connection strings and config values are in the YML file. Since this app will run on different servers with different YML files, I don't want to hardcode the YML file name in order to read it again to get the connection strings and do it that way.
How do I achieve this?
Can you detect what environment you are on?
If so, can you do something like ${environment}.yml?
There is the Configuration project at Apache which might help.
Otherwise, is it a case of wanting to run
java -jar app.jar server dev.yml
in dev and java -jar app.jar server prod.yml in prod? I imagine you have separate daemons in each environment, so those environments will pick up the right configuration if you've configured them that way.
Otherwise, if the property names are the same but their values differ, and you pick up the right YML in the right environment, things should work.
If I haven't addressed your question, could you please elaborate on your problem a little more?

How can I see all available reltool overlay template variables?

I have a fairly standard OTP setup with rebar and reltool. I've set up reltool to use a vars.config to swap in overlay template variables with {overlay_vars, "files/vars.config"}. I've noticed that variables other than those I have listed in vars.config also work as overlay template variables, the most obvious of which is {{erts_vsn}}.
I assume there are other built-in variables; how do I find out what they are? I've combed the reltool docs and come up with nothing.
I believe that your answer is in Rebar's rebar_reltool module.
There you will find the definitions for:
erts_vsn
rel_vsn
target_dir
