When I download models through Torch Hub, they are automatically saved to /home/me/.cache/torch.
How can I change this behavior?
According to the official documentation, there are several ways to modify this path, in priority order:
1. Calling hub.set_dir()
2. $TORCH_HOME/hub, if environment variable TORCH_HOME is set.
3. $XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.
4. ~/.cache/torch/hub
So I just had to do:
export TORCH_HUB=/my/path/
Edit: TORCH_HUB appears to be deprecated; use TORCH_HOME instead.
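For example, from Python (the path below is just a placeholder):

import os
import torch

# Option 1: point TORCH_HOME at a new cache root before anything is downloaded;
# hub downloads then go to $TORCH_HOME/hub.
os.environ["TORCH_HOME"] = "/my/path"

# Option 2: set the hub directory directly through the API.
torch.hub.set_dir("/my/path/hub")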
I'm sorry, but I'm trying to find out how to free up space, and these answers don't make sense to me.
Where do you type "TORCH_HOME"?
Typing python TORCH_HOME in cmd brings up an error on Windows 10,
and both Torch and TORCH_HOME give "is not recognized as an internal or external command,
operable program or batch file."
Are you answering this for Linux? Because cmd uses a > and not a $.
JupyterHub has various authentication methods, and the one I am using is the PAMAuthenticator, which basically means you log into the JupyterHub with your Linux userid and password.
However, environment variables that I create before running JupyterHub (whether exported in the shell or set in my .bashrc) do not get set within the user's JupyterLab session. They are available in the console, with or without the pipenv, and within Python itself via os.getenv().
But in JupyterHub's spawned JupyterLab for my user (me), the environment variable myname is not available, even if I export it in a bash session from within JupyterLab.
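For instance, checking from a notebook cell in the spawned session:

import os
print(os.getenv("myname"))  # prints None, even though the variable was exported before starting JupyterHub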
Now the documentation says I can customize user environments using a Docker container for each user, but this seems unnecessarily heavyweight. Is there an easier way of doing this?
If not, what is the easiest way to do this via Docker?
In the jupyterhub_config.py file, you may want to add the environment variables you need to the c.Spawner.env_keep list:
c.Spawner.env_keep = ['PATH', 'PYTHONPATH', 'CONDA_ROOT', 'CONDA_DEFAULT_ENV', 'VIRTUAL_ENV', 'LANG', 'LC_ALL', 'JUPYTERHUB_SINGLEUSER_APP']
Additional information on all the different configuration options is available at https://jupyterhub.readthedocs.io/en/stable/reference/config-reference.html
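If you also need to define new variables for every spawned single-user server (rather than only whitelisting inherited ones), here is a minimal sketch using the c.Spawner.environment option; the variable name is just the one from the question:

# jupyterhub_config.py
# Keep selected variables inherited from the environment JupyterHub was started in.
c.Spawner.env_keep = ['PATH', 'LANG']
# Explicitly set variables for every user's single-user server.
c.Spawner.environment = {'myname': 'xyz'}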
Unfortunately, unlike a single-user Jupyter Notebook/Lab, JupyterHub targets a multi-user environment, and customization and security settings are not a cleanly delimited area. It ships with some default settings and a great many ways to customize them, but only a handful of examples; you need to dig into the documents, look for similarities to your use case, and make adjustments by trial and error.
Fortunately, besides the configuration files used to configure JupyterHub and the Jupyter notebook servers (jupyterhub_config.py and jupyter_notebook_config.py), we can use an environment-reading package per user. This flexibility comes from having a programming-language kernel at our disposal.
This does, however, require being able to install new packages, having them already installed, or asking an admin to install them on the current kernel.
Here is one way to use customized environment variables in the current workspace.
Create a new file and give it a clear name that shows it is an environment file. You can have as many different files as you need. Most production setups use the .env name, but Jupyter will not list dotfiles in its file view, so avoid that here. Also, be careful with quotes: sometimes you need them and sometimes they cause errors, depending on which library you use and where you use the values.
test.env:
NAME="My Name"
TEST=This is test 42
Install your preferred environment-file reader and read from the file(s) you want. You can use `pip install` in the notebook when needed; just use it cautiously.
test.ipynb
# Package already installed here, so the installation line is commented out.
# %pip install python-environ
import environ

# Read variables from test.env (or from the file named by ENV_PATH, if that is set).
env = environ.Env()
env.read_env(env.str('ENV_PATH', 'test.env'))

NAME = env("NAME")
TEST = env("TEST")
print(NAME, " : ", TEST)
If you are an admin of the hub, beware that some libraries may let users break the restrictions you have set, so keep an eye on which permissions you give your users. If you use custom Docker images, though, there should be no leakage, as they are already designed to be isolated from your system.
I have been playing around with Hyperledger Fabric lately, and I'm not able to find a good and exhaustive description of ALL the environment variables one can set on the Hyperledger Fabric Docker containers (fabric-orderer, fabric-peer, fabric-ca, fabric-tools, fabric-kafka, ...).
Is there such documentation? I have found very little about the possible variables, what their different values do, and when one would choose which value, even in the official documentation.
Can anyone provide such a list with explanations? Or can we collect information to create such a list?
Ideally, I would like to have something like the following:
fabric-orderer
ORDERER_GENERAL_GENESISMETHOD
values: file, provisional (default)
file is used when you want to provide the genesis block as a file to the container (see ORDERER_GENERAL_GENESISFILE)
provisional is used when ...
ORDERER_GENERAL_GENESISFILE
value(s): path to the genesis file
fabric-peer
some env var
... explanation ...
Here is also a sample list of some env variables I've seen other people using without knowing why, what they mean, or whether they even work:
ORDERER_GENERAL_LEDGERTYPE
ORDERER_GENERAL_BATCHTIMEOUT
ORDERER_GENERAL_MAXWINDOWSIZE
CONFIGTX_ORDERER_KAFKA_BROKERS
ORDERER_GENERAL_LISTENADDRESS
ORDERER_GENERAL_PORT
ORDERER_GENERAL_HOST
...
I hope asking this question here is ok (it's my first).
Thanks a lot for your help!
This is a great question, and would indeed make a good addition to the docs. It is not currently explicitly documented, but I can explain at least how you can determine what the variables are.
We use viper for managing configuration. We ship a sample configuration with the distribution of the Docker images and binaries; there are three configuration YAML files: configtx.yaml, core.yaml and orderer.yaml. For each configuration parameter in a YAML file, you can derive an environment variable that can be used to override the value in the config file used at startup. The environment variable name is derived from the file name (e.g. CORE for core.yaml) plus the underscore-separated, capitalized nesting of the property names in the config (e.g. CORE_LOGGING_LEVEL).
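As a toy illustration of that naming convention (a hypothetical helper, just to show the rule, not code Fabric itself uses):

def env_var_name(config_file, *path):
    # e.g. env_var_name("core.yaml", "logging", "level") -> "CORE_LOGGING_LEVEL"
    prefix = config_file.split(".")[0]
    return "_".join([prefix, *path]).upper()

print(env_var_name("orderer.yaml", "general", "listenaddress"))
# -> ORDERER_GENERAL_LISTENADDRESS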
The sample apps provided contain docker-compose YAML configurations that exercise most of the properties you might want to use for your own purposes.
Meanwhile, I have created a JIRA to track this and invite contributions to help us flesh out an addition to our documentation that provides a useful reference.
I am trying to develop my own GtkPrintBackend, taking help from here:
https://mail.gnome.org/archives/desktop-devel-list/2006-December/msg00069.html
I want to test my print backend by making the print dialog use it instead of the default. How do I do that?
Answering my own question here, since I figured out a workaround:
I installed jhbuild and built the gtk+ module using jhbuild.
The source code of the corresponding module is downloaded to ~/jhbuild/checkout/<module-name>.
Modify the print backends under the ~/jhbuild/checkout/gtk+/gtk/modules/printbackends/ directory and rebuild (the jhbuild documentation has instructions for that).
Now when you launch a GTK application from the jhbuild shell, it will use the modified backend instead of the system default one.
I have been using ndisgen to try to generate a .ko kernel module for an rtl8192se driver for my FreeBSD 9 netbook, following instructions found on several different dev blogs.
Somehow, I've just not been able to generate a file with the .ko extension. Instead, I keep getting a .kmod file.
Question is, what is the difference between these?
I have also attempted kldload with this .kmod file. When I check via kldstat, OK, I see it there, but when I then check with dmesg and pciconf -lv, my Realtek card is still not hooked up.
So I reckon I really need to generate the .ko file in the first place, but what am I doing wrong or missing, such that only a .kmod is generated?
Any pointers would be appreciated! Thanks! :)
Update:
There was a message I had ignored. My bad!
The message after conversion was:
"...Cleaning up... rm: machine: is a directory. cleanup failed. Exiting"
That's all because I had pasted a copy of the /usr/include/machine folder, with all the headers I thought were required, into the path where I was converting the driver.
But I ignored it, thinking that since ndisgen had already created a .kmod file (which I had assumed was also a kernel module, just not in .ko form), it was all right.
So finally, since it was complaining that machine is a directory and can't be cleaned up, I created a symbolic link to that folder instead.
Et voila! The cleanup was successful, and now I have the .ko file! :D
The ndisgen script renames the .ko file to .kmod temporarily to do some cleanup.
If that cleanup works, it should rename it back to a .ko file. See the drvgen function in /usr/src/usr.sbin/ndiscvt/ndisgen.sh.
I'm assuming that something goes wrong between the two renames. Do you get any error messages?
Keep in mind that if you load the driver, it should show up as the ndis0 device!
Looks like you are getting a NetBSD kernel module, not a FreeBSD one. See these posts:
hubertf's NetBSD Blog
Modern NetBSD kernel module
Is the source code that you are using publicly available, for us to try to follow your steps?
I'm trying to run QA-C 7.2 on a nightly build that I drive using Python.
It does run, but the problem I encounter is that I cannot save the license file configuration settings because I'm not a 'license admin'. Hence, every time I run QA-C it requires me to browse to the license.dat.
Does anyone know a way around this, for example passing the license file configuration (e.g. the path to license.dat) as a parameter when I call the exe? Or somehow saving this information?
I found out that it works if I set the environment variable LM_LICENSE_FILE to the path of license.dat.
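For example, from the Python script that drives the nightly build (the paths and executable name below are placeholders, not QA-C's real install layout):

import os
import subprocess

# Set the license environment variable for the child process only.
env = os.environ.copy()
env["LM_LICENSE_FILE"] = r"C:\licenses\license.dat"  # placeholder path

subprocess.run([r"C:\tools\qac\qac.exe"], env=env, check=True)  # placeholder executable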