How to show Ruby garbage collection settings configured via environment variables - ruby-on-rails

Ruby has several environment variables for configuring garbage collection.
However, I do not know whether these environment variables actually affect the Ruby runtime.
How can I confirm that they do?
As I understand it, GC.stat shows statistics, but its values do not obviously correspond to the environment variable values.
My goal is to trigger GC more frequently, because I want to reduce memory usage.
module GC: https://docs.ruby-lang.org/en/2.3.0/GC.html
e.g.:
RUBY_GC_HEAP_INIT_SLOTS
RUBY_GC_HEAP_FREE_SLOTS
RUBY_GC_HEAP_GROWTH_FACTOR
RUBY_GC_HEAP_GROWTH_MAX_SLOTS
RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR
RUBY_GC_MALLOC_LIMIT
RUBY_GC_MALLOC_LIMIT_MAX
RUBY_GC_MALLOC_LIMIT_GROWTH_FACTOR
RUBY_GC_OLDMALLOC_LIMIT
RUBY_GC_OLDMALLOC_LIMIT_MAX
RUBY_GC_OLDMALLOC_LIMIT_GROWTH_FACTOR

You can inspect ENV in the running process.
Beyond that, I don't know how you would verify that the env vars you set actually did anything.
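One hedged way to cross-check at least some of the knobs (assuming Ruby 2.1 or later; the exact GC.stat key names vary between Ruby versions) is that GC.stat exposes the limits the runtime is currently using, and its counters show whether collections really become more frequent:
# Show which tuning variables were exported (nil means the variable is unset).
%w[RUBY_GC_MALLOC_LIMIT RUBY_GC_HEAP_GROWTH_FACTOR RUBY_GC_HEAP_FREE_SLOTS].each do |name|
  puts "#{name}=#{ENV[name].inspect}"
end

# GC.stat reflects the limits the runtime is actually using; for example,
# RUBY_GC_MALLOC_LIMIT should show up as malloc_increase_bytes_limit right after boot.
stats = GC.stat
puts "malloc_increase_bytes_limit:    #{stats[:malloc_increase_bytes_limit]}"
puts "oldmalloc_increase_bytes_limit: #{stats[:oldmalloc_increase_bytes_limit]}"

# These counters are not the tuning knobs themselves, but rising minor/major
# counts over time tell you whether GC really runs more often.
puts "minor_gc_count: #{stats[:minor_gc_count]}  major_gc_count: #{stats[:major_gc_count]}"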

Related

Detect environment variable change in Tcl

Is there any way to detect an environment variable change DURING the execution of a Tcl script (I use Tk, so the execution can be long)?
For instance, if I define an environment variable MYVAR=1, I can access it from Tcl by writing $ENV(MYVAR). Let's say that during the execution of the Tcl program, I switch MYVAR to 2. Is there a way, or maybe a command, that scans every environment variable again so that I get 2 when I call $ENV(MYVAR)?
First off, other processes will not see changes to the environment variables of any process. Children get a copy of the current environment when they are created, and that's it.
Secondly, to see a change in the environment variables, put a trace on the ::env variable (but tracing an individual variable is not recommended). I can't remember if this works reliably between threads, but within a thread it's pretty good provided you don't have C code modifying the variables behind your back.
proc detectChange {name1 name2 op} {
    # We know what name1 and op are in this case; I'll ignore them
    if {$name2 eq "MYVAR"} {
        puts "MYVAR changed to $::env(MYVAR)"
    }
}
trace add variable ::env write detectChange
Note that Tk internally uses traces a lot (but the C API for them, not the Tcl language API for them).

How to use environment variables in CloudFlare Worker in local development environment

I have a CloudFlare Worker where I have environment variables set in the CF Settings > Environment Variables interface. I also have this wrangler.toml
In my worker's index.js I have code reading the variable REGISTRATION_API_URL. If the code is running in a deployed environment then it injects the value from the CF Settings into REGISTRATION_API_URL just fine.
But if I run
wrangler dev
or
wrangler dev --env local
then REGISTRATION_API_URL is undefined.
Originally I expected that the variable would be populated with the values from the CF Settings, but it isn't. So I tried the two vars settings in the wrangler.toml I show here, but it made no difference. And I have spent a lot of time searching the docs and the wider web.
Are environment variables supported in a local dev environment? Any workarounds that people have come up with? Currently I am looking for undefined and defining the variable with a hard-coded value, but this is not a great answer.
Using wrangler 1.16.0
Thanks.
The docs could be clearer, but if you are using the newer module syntax, the variables will not be available as global variables.
Environmental variables with module workers
When deploying a Module Worker, any bindings will not be available as global runtime variables. Instead, they are passed to the handler as a parameter – refer to the FetchEvent documentation for further comparisons and examples.
Here's an example.
export default {
  async fetch(request, env, context) {
    return new Response(env.MY_VAR);
  },
};
KV namespaces are also available in the same object.
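For example (MY_KV is a hypothetical binding name; it would have to be declared as a KV namespace binding for this Worker):
export default {
  async fetch(request, env, context) {
    // KV bindings are looked up on the same env object as plain variables.
    const value = await env.MY_KV.get("some-key");
    return new Response(value ?? "key not found");
  },
};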
Maybe a bit late, but no, I don't think you can.
But you can always use self["YOUR_ENV_VARIABLE"] to get the value and then go from there (unfortunately the docs don't mention that).
Here is what I personally do in my Workers Site project to get the release version (usually inserted via a pipeline/action and then injected via HTMLRewriter into the index.html):
const releaseVersion = self["RELEASE_VERSION"] || 'unknown'

Set environment variables all over processes in Julia

I'm currently working with Julia (1.0) to run some parallel code on an HPC cluster. The HPC is managed with PBS. I'm trying to find a way to broadcast environment variables to all processes, i.e. a way to automatically broadcast a specific list of environment variables so that they are accessible in every Julia worker.
#!/bin/bash
#PBS ...
export TOTO=toto
julia --machine-file=$PBS_NODEFILE my_script.jl
In this example, I am not able to access the variable TOTO in each Julia worker (via ENV["TOTO"]).
The only way I found to do what I want is to set the variables in my .bashrc, but I want this to be script-specific. Another way is to put this in my startup.jl file:
@everywhere ENV["TOTO"] = $(ENV["TOTO"])
But that is not script-specific either, because I have to know in advance which variables I want to send. If I loop over the ENV keys, I'll broadcast all the variables and override ones I don't want to override.
I tried to use DotEnv.jl but it doesn't work.
Thanks for your time.
The obvious way is to set the variables first thing in my_script.jl. You can also put the initialization in a separate file, e.g. environment.jl, and load that on all processes with the -L flag:
julia --machine-file=$PBS_NODEFILE -L environment.jl my_script.jl
where environment.jl would, in this case, contain
ENV["TOTO"] = "toto"
etc.

Get all environment variables in Dlang

https://dlang.org/library/std/process/environment.html allows getting a particular environment variable.
But I see no way to get all environment variables or the list of all environment variable names.
What is the right way to retrieve the full environment in D?
In fact, I want to pass some environment variables to a child process. What is the right way to do it?
There is no need to get all the variables just to pass them to a child process; inheriting the parent's environment is the default. If you are using the std.process library, you can pass null for the environment to keep the existing one entirely, or pass only the keys and values you want to change; those are changed and the rest is inherited.
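That said, if you do want the full list, std.process also offers environment.toAA(). A minimal sketch of both (the variable name MY_FLAG and the Unix env command are only illustrative):
import std.process : environment, spawnProcess, wait;
import std.stdio : writefln;

void main()
{
    // environment.toAA() copies the whole environment into a string[string].
    foreach (name, value; environment.toAA())
        writefln("%s=%s", name, value);

    // By default the child inherits the parent's environment; the associative
    // array only adds or overrides the entries given here.
    auto pid = spawnProcess(["env"], ["MY_FLAG": "1"]);
    wait(pid);
}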

Passing environment variables to XMonad spawn

It is my understanding that when invoking spawn "string command" in xmonad, the argument "string command" is actually passed to /bin/sh.
Is there a way to change this behavior ?
More specifically, is it possible to make the instance of the interpreter called by spawn aware of some predefined environment variables (typically, SSH_AUTH_SOCK and SSH_AGENT_PID)?
Of course, it is always possible to resort to spawn "VARIABLE=stuff; export VARIABLE; string command", but it bothers me that the variable has to be created and exported each time.
Strictly answering your first question, the safeSpawn function in XMonad.Util.Run (in xmonad-contrib) will run a command without passing it to a shell.
However, that shouldn't make much of a difference as far as environment variables are concerned. In both cases, the spawned command should inherit the environment of the XMonad process (which the shell's startup/rc files could tweak in the case of spawn).
It's possible to set the environment of the started process with general Haskell facilities, e.g. System.Posix.Process.executeFile (and System.Environment.getEnvironment if you want to make a modified copy of the XMonad process' environment).
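A minimal sketch of that last approach, assuming it lives in xmonad.hs (spawnWithEnv is a hypothetical helper, and the socket path in the comment is only a placeholder):
import System.Environment (getEnvironment)
import System.Posix.Process (executeFile, forkProcess)
import XMonad

-- Fork a child and exec a program with a few extra environment variables
-- layered on top of the XMonad process' own environment.
spawnWithEnv :: [(String, String)] -> FilePath -> [String] -> X ()
spawnWithEnv extra prog args = io $ do
    env <- getEnvironment
    let kept = filter (\(k, _) -> k `notElem` map fst extra) env
    _ <- forkProcess $ executeFile prog True args (Just (extra ++ kept))
    return ()

-- Example use in a key binding (the value is a placeholder):
-- spawnWithEnv [("SSH_AUTH_SOCK", "/run/user/1000/ssh-agent.socket")] "xterm" []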
