How to enable caching in ArangoDB via Docker or arangojs?

I would like to enable caching in ArangoDB automatically when my app starts.
I'm using docker-compose to start the whole thing, but apparently there's no simple parameter to enable caching in the official ArangoDB image.
According to the docs, all the files in /docker-entrypoint-initdb.d/ are executed at container start. So I added a js file with this code:
require('@arangodb/aql/cache').properties({mode: 'on'});
It is indeed executed but caching doesn't seem to be enabled (from what I see with arangosh within the container).
My app is a JS app using arangojs, so if I can do it this way, I'd be happy too.
Thanks!

According to the performance and server config docs, you can enable caching in several ways.
Your method of adding require("@arangodb/aql/cache").properties({ mode: "on" }); to a .js file in the /docker-entrypoint-initdb.d/ directory should work, but keep an eye on the logs. You may need to redirect log output to a different logging driver (journald, syslog, etc.) to see what's going on. Also run the same command manually via arangosh to confirm it works there.
If that's a bust, see whether you can pass the startup option at runtime (such as --query.cache-mode on). Unfortunately, I don't use Docker Compose, so I can't give you tested advice here; an environment variable along the lines of -e QUERY.CACHE-MODE=ON may or may not be honored by the official image, so passing the option through the service's command: entry (sketched below) is probably the safer bet.
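A minimal compose sketch, assuming the official image's entrypoint forwards extra command arguments to arangod (the service name and root password are placeholders):
services:
  arangodb:
    image: arangodb
    environment:
      - ARANGO_ROOT_PASSWORD=changeme
    # assumption: extra command arguments are passed through to arangod by the entrypoint
    command: ["arangod", "--query.cache-mode", "on"]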
If there isn't a way to pass params, then you could modify the config file: /etc/arangodb3/arangod.conf.
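The startup option maps onto a section/key pair in arangod.conf; if I read the option name right, it would look like this:
[query]
cache-mode = on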
And don't forget about the REST API methods for system management. You can view and alter the AQL query cache configuration in the Web UI under Support -> Rest API -> AQL.
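The same configuration is exposed over HTTP at /_api/query-cache/properties, so you could also flip it from a script or from your arangojs app via its low-level request helpers. A curl sketch (host and credentials are placeholders, and I would double-check whether a setting applied this way survives a server restart):
curl -u root:changeme -X PUT http://localhost:8529/_api/query-cache/properties -d '{"mode": "on"}'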
One thing to keep in mind: I'm not sure whether the caching settings are global or tied to a specific database. Check the configuration on multiple databases (including _system) to verify.

Related

Export environment variables to JupyterHub users, without using Docker?

JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create before running JupyterHub, like this (or, for that matter, those set in my .bashrc), do not get set within the user's JupyterLab session. As you can see, they're available in the console, with or without the pipenv, and within Python itself via os.getenv().
However, in JupyterHub's spawned JupyterLab for my user (me):
The environment variable myname is not available, even if I export it in a bash session from within JupyterLab as follows:
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables you need using the c.Spawner.env_keep variable:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
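If the variable is not present in JupyterHub's own process environment (so env_keep has nothing to forward), you can also set values explicitly with c.Spawner.environment. A minimal sketch for jupyterhub_config.py; the variable names and values here are placeholders:
import os

# set variables explicitly for every spawned single-user server
c.Spawner.environment = {
    'MYNAME': os.environ.get('MYNAME', 'some-default'),
    'MY_APP_SETTING': 'value-visible-in-every-user-session',
}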
Unfortunately, unlike a single-user Jupyter Notebook/Lab, JupyterHub is built for a multi-user environment, and customization and security settings are not a neatly defined area. It ships with some defaults and a ton of ways to customize behaviour, but only a handful of examples. You need to dig into the documentation, look for cases similar to yours, and make adjustments through trial and error.
Fortunately, besides the configuration files used to configure JupyterHub and the Jupyter notebook servers (jupyterhub_config.py and jupyter_notebook_config.py), we can read environment files per user from within the kernel itself. This flexibility comes from the fact that the kernel is a full programming language.
But that requires being able to install new packages, having them already installed, or asking an admin to install them for the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name that shows it is an environment file. You can have as many different files as you need. Most production setups use the .env name, but Jupyter will not list dot files in its file view, so avoid that here. Also be careful with quotes; sometimes you need them and sometimes they cause errors, depending on which library you use and where the values are consumed.
test.env:
NAME="My Name"
TEST=This is test 42
Install and use your preferred environment-file reader, then read from the file(s) you want. You can run %pip install from inside the notebook when needed; just use it cautiously.
test.ipynb
# package already installed, so installation commented out
# %pip install python-environ
import environ

env = environ.Env()
# read the env file; the ENV_PATH variable can override the default location
env.read_env(env.str('ENV_PATH', 'test.env'))
NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
If you are an admin of the hub, be aware that some libraries may let users work around restrictions you have put in place, so keep an eye on what permissions you give your users. If you use custom Docker images, though, there should not be any leakage, as the containers are already designed to be isolated from your system.

Add a URL path prefix to artifactory installation (Docker)

I'm running Artifactory CPP CE 7.7.3 and Traefik v2.2 using docker-compose. The service is only available at http://localhost/ui/. What I need is an option that allows me to add a URL path prefix (e.g. http://localhost/artifactory/ui).
My Setup
I followed the setup process described in the Artifactory Docs.
My docker-compose.yaml is the official one extracted from jfrog-artifactory-cpp-ce-7.7.3-compose.tar.gz: ./templates/docker-compose.yaml.
I'm using a reverse proxy (Traefik). For this, I've added the necessary Traefik configuration lines to the docker-compose file. Here is a small extract of what I've added:
[...]
labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/ui`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"
With this, I managed to access Artifactory at http://localhost/ui/.
Problem:
I have multiple small services running on my server, and each of these services is accessible via http://localhost/<service-name>. This is very convenient, and it makes clear which URL belongs to which service on my production server.
Because of this, I want a URL like http://localhost/artifactory/ui/... instead of http://localhost/ui/...
I struggled to get Artifactory set up that way. I already managed to get a redirect from, e.g., http://localhost/artifactory/ to http://localhost/ui/, but that is not what I want on my production server.
What I did
Went through the documentation hoping to find an option I could simply pass to Artifactory to add a prefix (not successful).
Spent two full days trying to configure Traefik to alter headers so that the response points to http://localhost/artifactory/ui/... (only partially successful; the redirect didn't work afterwards).
Tried to find the configuration responsible for this in $JFROG_HOME/artifactory/var/etc (not successful).
Is this even possible? Help is highly appreciated.
This example (even though it is not a Traefik example) gives you a direction for implementing it. There are certain routes already used within the product; you need to add a context path on top of them to ensure everything comes in via the new context path.
https://jfrog.com/knowledge-base/how-to-remove-artifactory-from-the-context-url-in-artifactory-7/
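For the Traefik side, a rough sketch of what routing under /artifactory could look like is below. Note that stripping the prefix alone is usually not enough, because the UI will still generate links against /ui unless Artifactory itself is made aware of the new context path as the article above describes, so treat this as a starting point rather than a verified setup:
labels:
  - "traefik.http.routers.artifactory.rule=Host(`localhost`) && PathPrefix(`/artifactory`)"
  - "traefik.http.routers.artifactory.middlewares=artifactory-stripprefix"
  - "traefik.http.middlewares.artifactory-stripprefix.stripprefix.prefixes=/artifactory"
  - "traefik.http.services.artifactory.loadbalancer.server.port=8082"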

How do I make a simple public read-only WebDAV server with SabreDAV?

I recently began looking into WebDAV, as I found it to be an option for letting me play a Blu-ray folder remotely - i.e. without requiring the viewer to download the whole 24 GB ISO first.
Add a WebDAV source in Kodi v18 to a Blu-ray folder - and it actually plays! Very awesome.
The server can also be mounted on Windows with
net use m: http://example.com/webdavfolder/
or in Linux with
sudo mount -t davfs http://example.com/webdavfolder/ /mnt/mywebdav
- and should then (in theory) play with any software media player that supports Blu-ray Disc Java (BD-J), such as PowerDVD and VLC.
vlc bluray:///mnt/mywebdav --bluray-menu
PowerDVD.exe AUTOPLAY BD m:
(Unless of course time-out values have been set too low, which seems to be the case for VLC at the moment.)
Anyway, all this is great, except I can't figure out how to make my WebDAV server read-only. Currently anyone can delete files as they wish, and that's of course not optimal.
So far I've only experimented with SabreDAV, because AFAIK that's the only option I have if I want to keep using my existing web host. I've been trying very minimal setups, because I've read that a minimal setup should default to a read-only solution. It just doesn't seem to happen.
I initially used the setup from http://sabre.io/dav/gettingstarted/ and tried removing some lines. I also tried running chmod 0444 MainFolder -R on the webserver, and I can see that everything does get a read-only attribute. But it changes nothing; it's still possible to delete whatever I want. :-(
What am I missing?
Maybe I'm using the wrong technology for what I want to do? Is there some other/better way of offering a Blu-ray folder for remote viewing? (One that includes the whole experience - i.e. full Java menus etc).
I should probably mention that all of this is of course perfectly legal. It is my own Blu-ray project - not copyright material.
Also: Difficult to decide if this belongs on StackOverflow or SuperUser. I ended up posting it on StackOverflow because SabreDAV is about coding, and because there's no sabredav tag on SuperUser.
You have two options:
Create your own file/directory classes for sabre/dav that simply throw an error when a delete is attempted. You can basically start with a copy of Sabre\DAV\FS\Directory and Sabre\DAV\FS\File and change the methods that do the writing.
Since you're considering just using Linux file permissions, the key thing you are missing is that deleting is not controlled by permissions on the file or directory you're trying to delete. To delete a file or directory on Unix, all you need is write permission on the parent directory. However, I wouldn't recommend going this route, because it will just cause an unexpected error in sabre/dav - a 500 rather than the expected 403 - which might leave clients in a confused state.
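To illustrate the second point, here is a quick experiment you can run as an unprivileged user on the server; it shows that a 0444 file mode does not stop deletion, while removing write permission on the parent directory does:
mkdir -p demo/sub
touch demo/sub/file.txt
chmod 0444 demo/sub/file.txt   # read-only file, but still deletable while the parent is writable
chmod a-w demo/sub             # now remove write permission on the parent directory
rm -f demo/sub/file.txt        # fails: Permission denied
chmod u+w demo/sub             # restore write permission to clean up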

dropwizard get on demand jdbi connection

I have a simple CRUD application with backend code in Dropwizard. The entire app just consists of simple resource classes and CRUD operations, except for one case where some business logic is involved.
I am trying to extract this into a service instead of putting it in the resource class itself. But for that, my service would need an on-demand JDBI connection to access data and do its thing.
All my connect strings and config values are in a YML file. Since this app will be running on different servers with different YML files, I don't want to hardcode the YML file name in order to read it again to get the connect strings and do it that way.
How do I achieve this?
Can you detect what environment you are on?
If so, can you do something like ${environment}.yml?
There is the Commons Configuration project at Apache which might help.
Otherwise, is it a case where in dev you want to run
java -jar app.jar server dev.yml
and in prod you want to run java -jar app.jar server prod.yml? I imagine you have separate daemons in each environment, so those environments will pick up the right configuration if you've configured them that way.
Otherwise, if the property names are the same, but their values differ, and you pick up the right yml in the right environment, things should work.
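For example, two files with the same structure but environment-specific values, assuming your Configuration class exposes a DataSourceFactory under a database key (users, passwords and URLs below are placeholders):
# dev.yml
database:
  driverClass: org.postgresql.Driver
  user: dev_user
  password: dev_secret
  url: jdbc:postgresql://localhost:5432/myapp_dev

# prod.yml
database:
  driverClass: org.postgresql.Driver
  user: prod_user
  password: prod_secret
  url: jdbc:postgresql://db.internal:5432/myapp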
If I haven't addressed your question, can you please elaborate your problem a little more?

SELinux type getting set incorrectly for files uploaded VIA a Rails application

So I have a web application running on CentOS 6.5.
The application is a Ruby/Rails app, but the images are served by Apache HTTPD.
The application folder is in a user home folder, but I've granted HTTPD the correct permissions, and have enabled httpd_enable_home_dirs within SELinux. All static images are working just fine.
The problem I am seeing is that when an end user uploads an image (a profile icon), the SELinux context for the file gets set to unconfined_u:object_r:user_tmp_t:s0 instead of unconfined_u:object_r:usr_t:s0.
If I manually run restorecon on the file, the context gets fixed and the image works. Any idea how I can make sure the file gets created with the correct context? I've looked into using restorecond, but it looks like it won't recursively check subdirectories, and the subdirectory structure is not predictable.
Any help is appreciated.
Most likely your application is moving ('mv') the object from /tmp or /var/tmp to the destination location.
By default, when an object is moved with 'mv', so is its security metadata. Thus the object ends up at the destination with old and inaccurate security metadata. Running 'restorecon' on the destination objects resets the contexts to what the policy thinks they should be.
There are various ways you can deal with this: either allow your webapp to read objects with the inaccurate context, or tell your webapp to use 'mv' with the -Z option, or to use 'cp' instead. (The 'cp' command copies the object, and as a consequence the target object ends up with the appropriate security metadata, usually inherited from the target's parent directory.)
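To illustrate the difference (the paths are placeholders, and the -Z flag for mv depends on your coreutils version, so check that it exists on CentOS 6 before relying on it):
# a plain mv keeps the tmp label on the destination file
mv /tmp/upload.123 /srv/myapp/public/uploads/avatar.png
ls -Z /srv/myapp/public/uploads/avatar.png     # still user_tmp_t

# cp creates a new inode, which inherits the label of the destination directory
cp /tmp/upload.123 /srv/myapp/public/uploads/avatar.png

# or, where supported, ask mv to apply the default label for the destination
mv -Z /tmp/upload.123 /srv/myapp/public/uploads/avatar.png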
So apparently SELinux suppresses some error messages...
In order to debug this I had to run
semodule -DB
This rebuilds and reloads the local policy with the 'dontaudit' rules disabled. Once those suppressions are disabled, the error messages show up in the audit log and you can add a new policy using the regular:
sealert -a /var/log/audit/audit.log
Then find the audit2allow command for the error in question.
You can set your logging back to normal afterwards by running
semodule -B
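If sealert points at a denial you decide to allow, the usual audit2allow workflow looks roughly like this (the module name is arbitrary):
grep denied /var/log/audit/audit.log | audit2allow -M myapp_local
semodule -i myapp_local.pp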
