Is it possible to create a QueueProcessingFargateService with a read-only root filesystem using CDK? - aws-cdk

AWS Foundational Security Best Practices v1.0.0 includes a high-severity check: [ECS.5] ECS containers should be limited to read-only access to root filesystems. The remediation section explains how to change this in the console. However, I haven't found a way to do this for a QueueProcessingFargateService using CDK.
If a QueueProcessingFargateService could be created without an image, this could have been solved by calling add_container on the task definition, but image is mandatory, so that doesn't work.
Does anyone know if it is possible to create a QueueProcessingFargateService with read-only root filesystem and if so, how?
(I use CDK in Python, but a solution in any other CDK language will be just as useful)

As this isn't a property directly supported on the construct, you'll need to use an escape hatch to set it:
https://docs.aws.amazon.com/cdk/v2/guide/cfn_layer.html#cfn_layer_resource
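For example, in Python it would look something like this (a minimal sketch; service here stands for your QueueProcessingFargateService instance, and the pattern creates a single container definition at index 0):
from aws_cdk import aws_ecs as ecs

# The pattern's generated AWS::ECS::TaskDefinition is the construct's default child.
cfn_task_def = service.task_definition.node.default_child
assert isinstance(cfn_task_def, ecs.CfnTaskDefinition)

# Override the raw CloudFormation property on the generated container definition.
cfn_task_def.add_property_override(
    "ContainerDefinitions.0.ReadonlyRootFilesystem", True
)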

Related

How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a .js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!
According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output to a different driver (journald, syslog, etc.) to see what's going on. Make sure to run the command via arangosh to check whether it worked.
If that's a bust, you might want to see if there is a way to pass parameters at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you direct advice here, but try something like -e QUERY.CACHE-MODE=ON
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
And don't forget about the REST API methods for system management. You can view and alter the AQL cache configuration in the Web UI by clicking on Support -> REST API -> AQL.
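If you'd rather script it than click through the UI, the same setting is exposed over HTTP via PUT /_api/query-cache/properties; a quick Python sketch (the host, port, and root credentials here are assumptions, adjust them to your setup):
import requests

# Enable the AQL query results cache on the _system database.
resp = requests.put(
    "http://localhost:8529/_db/_system/_api/query-cache/properties",
    json={"mode": "on"},
    auth=("root", ""),  # assumed credentials
)
resp.raise_for_status()
print(resp.json())  # the server echoes back the effective properties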
One thing to keep in mind - I'm not sure if the caching settings are global or tied to a specific database. View the configuration on multiple databases (including _system) to test the settings.

Export environment variables to JupyterHub users, without using Docker?

JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create like this (or, for that matter, those set in my .bashrc) before running JupyterHub do not get set within the user's JupyterLab session. As you can see, they're available in the console, with or without the pipenv, and within Python itself via os.getenv().
However in JupyterHub's spawned JupyterLab for my user (me):
This environment variable myname is not available even if I export it in a bash session from within JupyterLab as follows:
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables you need using the c.Spawner.env_keep setting:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
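Note that env_keep only whitelists variables that are already present in JupyterHub's own process environment. If you also want to set explicit values for every spawned single-user server, there is the c.Spawner.environment dictionary; a small sketch (using the myname variable from the question):
# jupyterhub_config.py
# Pass selected variables through from JupyterHub's own environment ...
c.Spawner.env_keep = ['PATH', 'LANG', 'LC_ALL']
# ... and/or set explicit values for every spawned single-user server.
c.Spawner.environment = {'myname': 'me'}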
Unfortunately, unlike a single-user Jupyter notebook/lab, JupyterHub is built for a multi-user environment, and customization and security settings are not a cut-and-dried area. It provides some default settings and a ton of ways to customize their use, alas with only a handful of examples. You need to dig into the documents, check for similarities to your use case, and make adjustments by trial and error.
Fortunately, other than the configuration files used to configure JupyterHub and the Jupyter notebook servers (jupyterhub_config.py and jupyter_notebook_config.py, respectively), you can use environment-reading packages per user. This flexibility comes from the use of a programming language kernel.
But this requires being able to install new packages, having them already installed, or asking an admin to install them on the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name that shows it is an environment file. You can have as many different files as you need. Most production setups use the name .env, but Jupyter will not list dotfiles in its file view, so avoid that here. Also, be careful with quotes; sometimes you need them and sometimes you get errors, depending on which library you use and where you use the values.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment file reader, then read from the file(s) you want. You can use %pip install in the notebook when needed; just use it cautiously.
test.ipynb
#package already installed, so installation commented out
#%pip install python-environ
import environ
env = environ.Env()
env.read_env(env.str('ENV_PATH', 'test.env'))
NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
If you are an admin of the hub, beware that some libraries may break your restrictions, so keep an eye on which permissions you give to your users. If you use custom Docker images, though, there should be no leakage, as they are already designed to be isolated from your system.

Map a file in Docker using Docker Volume [duplicate]

Change a config.properties file in a jar/war file at runtime and hot deploy the changes?
My requirement is as follows: we have a config.properties file in a jar/war file. I have to open the file through a web page, and after the user has made the necessary changes to it, I have to update config.properties in the jar/war file and hot deploy it. Can we achieve this feat? If so, can you please point me to relevant sites/documents so that I can jumpstart on this?
I strongly recommend your architect rethink this solution. What you describe should be done through JNDI or a similar technique, not through reloading properties.
Deployments should be considered static - that any given web container allows for magic trickery should not be depended on, and WILL break some day (most likely at the most inconvenient time).
You've got a couple of problems off the top of my head:
ensuring that nothing is holding static references to a java.util.Properties that has previously loaded your config.properties file.
most servlet engines will unpack your war to a working directory, so the properties file you load won't be the one in the war, it will be the unpacked one. This means your changes will be overwritten when you restart the servlet engine, because that is typically one of the points at which the war is unpacked.
While these problems aren't insurmountable I've always found it much easier to implement this sort of behavior by storing the properties in JNDI (as Thorbjørn suggests) or a database (while being careful about the static references I mentioned in point 1).
The JNDI/database solution has the nice side effect of easing deployment into multiple environments, because each typically has its own registry/database.
Even though I agree with the comments above, I can suggest one solution:
The Apache Commons Configuration extension gives you the possibility to do something like:
config.setReloadingStrategy(new FileChangedReloadingStrategy());
That could do the trick, changing the configuration file at runtime with no extra code at all.
However, as with JNDI and other methods of web application configuration, security is a concern. Be careful about which parameters can and must be configurable.

Better approach to docker images

I'm new to Docker, so I want to know the better approach for using it. I have a project that needs three components to work:
JBoss server application
PostgreSQL
A Spring Boot application
So, based on this, my questions are:
1) Should I have one Docker image for each component mentioned above? If so, why not just put them all together? My idea of Docker is to simplify deploying an application, so putting everything together would make it easy to install the app in another environment, right?
2) If so (one Docker image per component), Spring Boot is just a "java -jar" command; is it really necessary to have a Docker image for it?
3) In the case of PostgreSQL, should I have an image with all my database structure and data, or just vanilla PostgreSQL without anything?
To answer your questions:
1) Should I have one Docker image for each component mentioned above? If so, why not just put them all together?
It is best to keep them as separate components so that:
You can isolate cases (which will help you in debugging)
You can selectively scale (horizontally) specific stateless components when you run on k8s or Docker Swarm
You can set hardware limits (RAM, CPU, etc.) per component
You can have different base images (which might be useful for optimizations)
You can build & test your components independently
The list goes on
2) If so (one Docker image per component), is it really necessary to have a Docker image for Spring Boot, which is just a "java -jar" command?
Please check the list above (on why it's best to separate) to see if it fits your use case. Note that bundling it into an existing component will affect your scaling strategy.
Example: if you run 3 instances of the JBoss component with the Spring Boot app bundled in, you will spawn 3 instances of both, which you might not want.
3) In the case of PostgreSQL, should I have an image with all my database structure and data, or just vanilla PostgreSQL?
I would recommend using vanilla postgres and mounting your structure & data on a host volume, so that they don't get lost when the container is restarted.
I hope this helps you in some way.

How do I know when to use which hyperledger fabric docker container env variables

I've been playing around with Hyperledger Fabric lately and I'm not able to find a good and exhaustive description of ALL the environment variables one can set on the Hyperledger Fabric Docker containers (fabric-orderer, fabric-peer, fabric-ca, fabric-tools, fabric-kafka, ...).
Is there such documentation? I find very little about the possible variables, what their different values do, and when one would choose which value, even in the official documentation.
Can anyone provide such a list with explanation? Or can we collect information to create such a list?
Ideally, I would like to have something like the following:
fabric-orderer
ORDERER_GENERAL_GENESISMETHOD
values: file, provisional (default)
file is used when you want to provide the genesis block as a file to the container (see ORDERER_GENERAL_GENESISFILE)
provisional is used when ...
ORDERER_GENERAL_GENESISFILE
value(s): path to the genesis file
fabric-peer
some env var
... explanation ...
Here's also a sample list of some env variables I've seen other people using without knowing why, what they mean, or whether they even work:
ORDERER_GENERAL_LEDGERTYPE
ORDERER_GENERAL_BATCHTIMEOUT
ORDERER_GENERAL_MAXWINDOWSIZE
CONFIGTX_ORDERER_KAFKA_BROKERS
ORDERER_GENERAL_LISTENADDRESS
ORDERER_GENERAL_PORT
ORDERER_GENERAL_HOST
...
I hope asking this question here is ok (it's my first).
Thanks a lot for your help!
This is a great question, and would indeed make a good addition to the docs. It is not currently explicitly documented, but I can explain at least how you can determine what the variables are.
We use viper for managing configuration. We ship a sample configuration with the distribution of the Docker images and binaries. As you can see, there are three configuration yaml files: configtx.yaml, core.yaml and orderer.yaml. For each configuration parameter in a yaml file, you can derive an environment variable that can be used to override the value in the config file used at startup. The environment variable name is derived from the file name (e.g. the CORE prefix for core.yaml) plus the underscore-separated, capitalized path of the nested property in the config (e.g. CORE_LOGGING_LEVEL for the logging.level property).
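To illustrate the derivation rule (this little helper is just an illustration, not part of Fabric):
def fabric_env_var(config_file, *keys):
    """Derive the env var that overrides a nested yaml property."""
    prefix = config_file.rsplit(".", 1)[0].upper()
    return "_".join([prefix] + [k.upper() for k in keys])

# logging.level in core.yaml -> CORE_LOGGING_LEVEL
assert fabric_env_var("core.yaml", "logging", "level") == "CORE_LOGGING_LEVEL"
# General.GenesisMethod in orderer.yaml -> ORDERER_GENERAL_GENESISMETHOD
assert fabric_env_var("orderer.yaml", "General", "GenesisMethod") == "ORDERER_GENERAL_GENESISMETHOD"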
The sample apps provided contain docker-compose yaml configurations that leverage most of the properties you might consider leveraging for your own purposes.
Meanwhile, I have created a JIRA to track this and invite contributions to help us flesh out an addition to our documentation that provides a useful reference.
