How to set bash environment variables using Lua

I am new to Lua scripting.
I tried using
os.execute("export MY_VAR=10")
io.popen("export MY_VAR=10")
from a Lua script.
After the Lua script has executed, I try reading MY_VAR from the shell with echo $MY_VAR, but I do not see MY_VAR set to 10.
How do I set an environment variable from a Lua script?

Your problem isn't a Lua problem; it's a misunderstanding of how process environments work.
Every time you run os.execute or io.popen you start a new process with a new environment.
So while you may be correctly setting MY_VAR in that process's environment (and it would affect any processes run as children of that process), the setting does not survive beyond the death of the launched process, so it cannot be seen by any other process.
If you want to affect the Lua process's own environment (which would then, in turn, affect the environments of processes launched by Lua), you need a binding to the setenv system function, which Lua itself doesn't provide because setenv isn't part of the portable ANSI C that Lua restricts its standard library to.
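To make the direction of inheritance concrete, here is a small shell-only sketch of the same behaviour (MY_VAR is simply the variable from the question; this is an illustration, not part of the setenv suggestion above):
bash -c 'export MY_VAR=10'          # the export happens only inside that child bash process
echo "${MY_VAR:-unset}"             # prints "unset": the child's environment died with it
MY_VAR=10 bash -c 'echo $MY_VAR'    # a parent can pass variables down to its children: prints 10
A third-party binding such as luaposix's setenv lets the Lua process change its own environment, which its future child processes then inherit, but nothing can push a variable back up into the shell that started the Lua script.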

Related

Implement 'Entrypoint'-like functionality in a Cloud Native Buildpack

I have a multi-process web app. The processes are contributed by different buildpacks. The default process will start the web application. I have a use case in which a given shell script should be executed before the default process invocation.
I have tried the following approach:
Create a custom buildpack.
Create a script that needs to be executed and invoke the web process from it.
Create a new process based on the above shell script by specifying it in the launch.toml definition.
Make the buildpack launchable.
The entrypoint.sh
#!/usr/bin/env bash
# Some fancy stuff..
#Invoke the web process
/cnb/process/web
Create launch.toml from the build script of the custom buildpack and make the entrypoint process the default one:
cat > "$layers_dir/launch.toml" << EOL
[[processes]]
type = "entrypoint"
command = "bash"
args = ["$scriptlayer/bin/entrypoint.sh"]
default = true
EOL
echo -e '[types]\nlaunch = true' > "$layers_dir/assembly-scripts.toml"
Truncated pack inspect-image output
Processes:
TYPE SHELL COMMAND ARGS
entrypoint (default) bash bash /layers/gw_assembly-scripts/assembly-scripts/bin/entrypoint.sh
task bash catalina.sh run
tomcat bash catalina.sh run
web bash catalina.sh run
Is there any better CNB native approach to achieve this use case?
You have a couple of options here:
The simplest option would be to add a .profile script to the root of your application. It's a bash script, so anything you can write in bash can be done there; however, it's primarily intended for initializing your app and setting additional env variables.
This file runs prior to the command in your process type. I looked for documentation on this behavior, but only found it briefly mentioned in the buildpacks spec.
As an example, if I put .profile in the root of my application and inside that file I write echo 'Hello World!', I'll see Hello World! printed before any of my process types execute.
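For illustration, a minimal .profile sketch (APP_MODE is a made-up example variable):
echo 'Hello World!'            # printed before the selected process type starts
export APP_MODE=production     # made-up variable; env vars set here are visible to the app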
If you want to create a buildpack, you can achieve something similar to the .profile script by having your buildpack include an exec.d binary.
This is a binary that's part of your launch image and gets run prior to any of your process types. It allows you to take actions to initialize an application and set additional environment variables dynamically before your application starts.
This mechanism is often used by buildpack authors to provide dynamic behavior at runtime based on changes to environment variables or Kubernetes service bindings. For example, turning on/off features like APM tools, debugging, and metrics.
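A rough sketch of such a binary written in bash (the layer path and the ENABLE_APM toggle are made up; the exec.d contract is that the executable reports environment variables as TOML name/value pairs on file descriptor 3):
#!/usr/bin/env bash
# placed by the buildpack at <layer>/exec.d/enable-apm and run by the launcher at startup
set -euo pipefail
if [[ "${ENABLE_APM:-false}" == "true" ]]; then
  # hand APM_AGENT_PATH to the launcher, which adds it to the app's environment
  printf 'APM_AGENT_PATH = "/layers/example-buildpack/apm/agent.jar"\n' >&3
fi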
A few other miscellaneous notes.
Neither of the options above allows you to change the actual process type. The process type that will be executed is selected prior to these options (.profile and exec.d) running and you cannot influence that from within. You can only use them to run things prior to the process type running.
The buildpack spec does not allow for a buildpack to modify the process types for another buildpack. So you cannot create a buildpack that wraps or modifies process types set by another buildpack. That said, a buildpack can override the process types set by another buildpack. Buildpacks that are later in the order group will override earlier buildpacks.
From the spec: A combined processes list derived from all launch.toml files such that process types from later buildpacks override identical process types from earlier buildpacks.
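So if the goal is to wrap the web process itself, a later buildpack can simply redefine the web type instead of adding a separate entrypoint type. A sketch, reusing the variables from the build script in the question:
cat > "$layers_dir/launch.toml" << EOL
[[processes]]
type = "web"
command = "bash"
args = ["$scriptlayer/bin/entrypoint.sh"]
default = true
EOL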
With buildpacks, the entrypoint is always the launcher. The launcher is a process that runs and implements the application side of the buildpack specification. It runs .profile and the exec.d binaries, sets up buildpack-provided environment variables, and eventually launches the specified process type.
If you override the entrypoint for a container then the launcher won't run and none of the things it is supposed to do will happen. Sometimes this is desired, like if you're troubleshooting, but usually you want the launcher to be the entrypoint.
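For example, with a hypothetical image name (sketch only):
docker run --rm my-cnb-app                              # entrypoint is the launcher: .profile, exec.d, env vars, then the default process
docker run --rm -it --entrypoint /bin/bash my-cnb-app   # bypasses the launcher entirely; useful for troubleshooting only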

How to pass command-line arguments to a Mix release

I am trying to pass command-line arguments to my Elixir release. I have built the release using
MIX_ENV=prod mix release
Now I am not able to pass any command-line arguments with the start command.
_build/prod/rel/prod/bin/prod start arg1 arg2
Using eval I have achieved passing the arguments, but it stops after a while.
_build/prod/rel/prod/bin/prod eval "Hello.nodes([3, :node1])"
Is there any way that I can pass the args through the start flag?
Using eval I have achieved passing the arguments, but it stops after a while.
_build/prod/rel/prod/bin/prod eval "Hello.nodes([3, :node1])"
From the docs:
The eval command starts its own instance of the VM but without starting any of the applications in the release and without starting distribution. For example, if you need to do some prep work before running the actual system, like migrating your database, eval can be a good fit. Just keep in mind any application you may use during eval has to be explicitly loaded and/or started.
I'm guessing that your eval tries to use an application that you didn't explicitly load before executing the eval.
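If that is the case, one way around it is to start the application explicitly inside the eval before calling the function. A sketch, assuming the OTP application is called :hello to match the Hello module (substitute your real application name):
_build/prod/rel/prod/bin/prod eval \
  'Application.ensure_all_started(:hello); Hello.nodes([3, :node1])'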
Is there any way that I can pass the args through the start flag?
It's not documented, but there are various ways to configure a release:
https://elixir-lang.org/getting-started/mix-otp/config-and-releases.html
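One pattern along those lines is to pass values through environment variables rather than positional arguments and read them at runtime with System.get_env or in your runtime configuration (the variable names below are made up):
NODES_COUNT=3 TARGET_NODE=node1 _build/prod/rel/prod/bin/prod start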
Maybe an escript would be a better fit?
escript provides support for running short Erlang programs without having to compile them first, and an easy way to retrieve the command-line arguments.
It is possible to bundle escript(s) with an Erlang runtime system to make it self-sufficient and relocatable.
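A sketch of what that looks like from the shell, assuming you add an escript: [main_module: ...] entry to mix.exs (my_app stands in for whatever your escript is named):
MIX_ENV=prod mix escript.build
./my_app arg1 arg2     # the configured main/1 receives ["arg1", "arg2"]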

activating conda env vs calling python interpreter from conda env

What exactly is the difference between these two operations?
source activate python3_env && python my_script.py
and
~/anaconda3/envs/python3_env/bin/python my_script.py ?
It appears that activating the environment adds some variables to $PATH, but the second method seems to access all the modules installed in python3_env. Is there anything else going on under the hood?
You are correct, activating the environment adds some directories to the PATH environment variable. In particular, this will allow any binaries or scripts installed in the environment to be run first, instead of the ones in the base environment. For instance, if you have installed IPython into your environment, activating the environment allows you to write
ipython
to start IPython in the environment, rather than
/path/to/env/bin/ipython
In addition, environments may have scripts that add or edit other environment variables that are executed when the environment is activated (see the conda docs). These scripts can make arbitrary changes to the shell environment, including even changing the PYTHONPATH to change where packages are loaded from.
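As a sketch of that mechanism (the paths follow the conda documentation on saving environment variables; MY_API_KEY is a made-up example):
mkdir -p ~/anaconda3/envs/python3_env/etc/conda/activate.d
cat > ~/anaconda3/envs/python3_env/etc/conda/activate.d/env_vars.sh << 'EOF'
export MY_API_KEY=example-value   # runs on every activation, never when calling the interpreter directly
EOF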
Finally, I wrote a very detailed answer of what exactly is happening in the code in Conda: what happens when you activate an environment? That may or may not still be up-to-date, though. The relevant part of the answer is:
...the build_activate method adds the prefix to the PATH via the _add_prefix_to_path method. Finally, the build_activate method returns a dictionary of commands that need to be run to "activate" the environment.
And another step deeper... The dictionary returned from the build_activate method gets processed into shell commands by the _yield_commands method, which are passed into the _finalize method. The activate method returns the value from running the _finalize method which returns the name of a temp file. The temp file has the commands required to set all of the appropriate environment variables.
Now, stepping back out, in the activate.main function, the return value of the execute method (i.e., the name of the temp file) is printed to stdout. This temp file name gets stored in the Bash variable ask_conda back in the _conda_activate Bash function, and finally, the temp file is executed by the eval Bash function.
So you can see, depending on the environment, running conda activate python3_env && python my_script.py and ~/anaconda3/envs/python3_env/bin/python my_script.py may give very different results.
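One quick way to see a difference for yourself (a sketch; the exact output depends on your conda setup):
source activate python3_env && python -c 'import os, sys; print(sys.executable, os.environ.get("CONDA_PREFIX"))'
~/anaconda3/envs/python3_env/bin/python -c 'import os, sys; print(sys.executable, os.environ.get("CONDA_PREFIX"))'
Both lines run the environment's interpreter, but only the activated shell has CONDA_PREFIX (and any activate.d changes) in its environment.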

$_SERVER and $_ENV not available if running php from shell

I have set up a cron job to run a script, cron/cron.php, once an hour.
This script simply reads a table to check which scripts should run at a given time.
So far no problem.
I just noticed that $_SERVER['DOCUMENT_ROOT'] and $_SERVER['SERVER_NAME'] are empty. The same goes for $_ENV['HOSTNAME'].
What can be the reason? I would prefer to keep my cron.php portable, so I am searching for a solution which should work on every server.
Thanks in advance for any tips!
When the cron script is run, it's most likely executed by the php-cli binary and not the webserver.
$_SERVER entries are set by the webserver, here is the quote from $_SERVER page in the PHP manual:
$_SERVER is an array containing information such as headers, paths, and script locations. The entries in this array are created by the web server.
As there is no webserver involved with your cron script, these are not set. You can try this yourself by executing php on the command line:
php -r 'var_dump($_SERVER);'
It will output all settings in $_SERVER in your command-line environment; "DOCUMENT_ROOT" will most likely be an empty string and "SERVER_NAME" will not be set at all.
The $_ENV superglobal does contain the environment variables of the system; it's just that "HOSTNAME" is not set as an environment variable by cron.
Further Considerations
I normally suggest not only creating the PHP cron script (as you did with cron/cron.php) but also creating a shell script that invokes it, and then using that shell script in the crontab. This allows you to modify the environment easily without reconfiguring the crontab or touching cron.php too often: you can set environment variables within the shell script, change the working directory, and so on.
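A minimal sketch of such a wrapper (all paths and variable names here are examples):
#!/usr/bin/env bash
# cron/cron.sh -- referenced from the crontab instead of calling php directly,
# e.g.:  0 * * * * /var/www/example/cron/cron.sh
set -euo pipefail
export DOCUMENT_ROOT=/var/www/example   # example value; read it in PHP with getenv('DOCUMENT_ROOT')
export HOSTNAME="$(hostname)"
cd /var/www/example                     # fixed working directory for the PHP script
php cron/cron.php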
If you want to make your cron.php script more portable, figure out what its environment dependencies are (e.g. the document root you have) and make those configurable, e.g. via variables or a parameter object. Then create a section in your script where those values are populated, so the rest of your script can run based on them in an injected manner. This confines configuration changes to a very limited part of your script and will allow you to write more reusable code.

Environment variable in 64-bit OS not recognized without reboot

I have an InstallShield script which first defines CATALINA_HOME as an environment variable. The same script then executes the batch file service.bat, which uses CATALINA_HOME. When executed, this file displays the error that CATALINA_HOME is not defined correctly, even though the variable is defined as an environment variable and points to the Tomcat directory properly. I think the system requires a reboot to recognize environment variables. Is there any way to define an environment variable that works immediately, without a reboot? I am using 64-bit Windows 7.
I may be wrong, but the script that you're running loads the environment variables once when it starts, so it won't see any new environment variables added during its runtime.
And in your script, if you just execute the batch file, it will use the same outdated environment variables the script started with.
What I do is run cmd /k service.bat. This starts a new shell (with the updated environment variables), runs the batch file, and terminates afterwards.
You shouldn't need to reboot in the middle of your install.
