I want to set a system environment variable from within a Chef recipe.
I am using the following code:
env 'DEF_ADDR' do
  value "http://#{node['ipaddress']}"
end
However, I am getting the following error when executing the recipe:
ERROR: Cannot find a resource for env on redhat version 6.6
The env resource seems to be only for Windows environments:
Use the env resource to manage environment keys in Microsoft Windows.
If you want to define an environment variable only for the duration of the Chef run, you can use Ruby:
ENV['DEF_ADDR'] = "http://#{node['ipaddress']}"
But this will only be accessible during the Chef run.
If you want to define a system-wide environment variable, maybe the etc_environment cookbook could help you with that:
node.default['etc_environment']['DEF_ADDR'] = "http://#{node['ipaddress']}"
There is no consistent way to set global environment variables on Unix. Some distros support global-level shell includes via mechanisms such as /etc/profile.d, but these have no effect on anything run outside a login shell, such as commands executed directly over SSH or processes running as a service.
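As a rough sketch of that profile.d approach in Chef (the file name def_addr.sh and the mode are my own choices, not from the question), something like this would make the variable visible to future login shells:

# Sketch: write a profile.d script so future login shells export DEF_ADDR.
file '/etc/profile.d/def_addr.sh' do
  content "export DEF_ADDR=http://#{node['ipaddress']}\n"
  mode '0644'
  owner 'root'
  group 'root'
end

The caveat above still applies: services and commands run outside a login shell will not see it.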
Related
I'm trying to use the modules program to configure my Linux computer's environment variables. I added the following command to my .bashrc to set the environment variables:
module load gcc/5.5
I expect this command to add gcc5.5/bin to $PATH. If I open a new terminal, gcc5.5/bin is in $PATH, but if I open a window through tmux, it is not added.
When I use vim to update my environment variables (in ~/.bashrc), PyCharm does not pick up the updates right away. I have to shut down the program, source ~/.bashrc again, and re-open PyCharm.
Is there any way to have PyCharm source the changes automatically (or without shutting down)?
When a process is created, it inherits the environment variables from its parent process (the OS session itself, in your case). If you change the environment variables at the parent level, the child process is not aware of the change.
PyCharm allows you to change the environment variables from the Run/Debug Configuration window:
Run > Edit Configurations > Environment variables
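To see the inheritance behavior concretely, here is a minimal Python sketch (the variable name is made up): a child process gets a copy of the parent's environment as it existed at spawn time, so changes made in the parent afterwards never reach children that are already running.

import os
import subprocess

# Set a variable in the parent process, then spawn a child:
os.environ["MY_VAR"] = "before"
subprocess.run(["python3", "-c", "import os; print(os.environ['MY_VAR'])"])  # prints: before

# A child spawned earlier (like a running PyCharm) would still see "before";
# only processes spawned after the change inherit the new value.
os.environ["MY_VAR"] = "after"
subprocess.run(["python3", "-c", "import os; print(os.environ['MY_VAR'])"])  # prints: after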
In my case, PyCharm does not pick up env variables from .bashrc even after restarting.
PyCharm maintains its own set of environment variables, and those aren't sourced from the shell.
It seems that if PyCharm is launched from a virtualenv or from a shell containing said variables, it will load with them; however, it is not dynamic.
The answer linked below has a settings.py script for the virtualenv that updates and maintains the settings. I'm not sure whether this completely solves your question:
Pycharm: set environment variable for run manage.py Task
I recently discovered a workaround on Windows. Close PyCharm, copy the command that the shortcut uses to launch PyCharm, and rerun it in a new terminal window: cmd, cmder, etc.
C:\
λ "C:\Program Files\JetBrains\PyCharm 2017.2.1\bin\pycharm64.exe"
I know this is very late, but I encountered this issue as well and found the accepted answer tedious, as I had a lot of saved configurations already.
The solution a co-worker suggested is to add the environment variables to ~/.profile instead. I then had to restart my Linux machine, and PyCharm picked up the new values. (On OS X, I only needed to source ~/.profile and restart PyCharm completely.)
One thing to be aware of: another coworker said that PyCharm may look at ~/.bash_profile instead, so if you have that file, the environment variables need to be added there.
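For reference, the ~/.profile entry is just an ordinary export line (the name and value here are placeholders):

# in ~/.profile
export MY_VAR="some value"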
In case you are using the "sudo python" technique, be aware that it does not by default convey the environment variables.
To correctly pass on the environment variables defined in the PyCharm launch configuration, use the -E switch:
sudo -E /path/to/python/executable "$@"
This is simply how environment variables work. If you change them, you have to re-source your .bashrc (or whatever file the environment variables are defined in).
from dotenv import load_dotenv
load_dotenv(override=True)
Python-dotenv can interpolate variables using POSIX variable expansion.
With load_dotenv(override=True) or dotenv_values(), the value of a variable is the first of the values defined in the following list:
1. Value of that variable in the .env file.
2. Value of that variable in the environment.
3. Default value, if provided.
4. Empty string.
With load_dotenv(override=False), the value of a variable is the first of the values defined in the following list:
1. Value of that variable in the environment.
2. Value of that variable in the .env file.
3. Default value, if provided.
4. Empty string.
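A minimal sketch of the difference, assuming a .env file in the working directory containing MY_VAR=from-dotenv (the name and values are made up):

import os
from dotenv import load_dotenv

os.environ["MY_VAR"] = "from-environment"

load_dotenv()                 # override=False is the default
print(os.environ["MY_VAR"])   # from-environment: the existing environment wins

load_dotenv(override=True)
print(os.environ["MY_VAR"])   # from-dotenv: the .env file wins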
I am writing Ansible playbooks to set up and install our applications on Solaris servers.
The problem is that the (bash) scripts which I need to execute all assume that a certain directory is on the PATH, namely /data/bin. This would normally not be a problem, were it not for Ansible ignoring all the .profile and .bashrc configuration.
Now, I know that you can specify the environment for shell tasks via the environment flag, for example like this:
- shell: printenv
  environment:
    PATH: /usr/bin:/usr/sbin:/data/bin
This correctly puts /data/bin on the PATH, and the printenv command displays it (and my bash scripts run correctly).
There are two problems, however:
First, it is very annoying to have to specify the environment over and over again. I know that you can define the environment in some playbook base file variable and then reference that, but you still have to set environment: ... on every single shell task.
Second, the above example does not allow me to specify the path dynamically, e.g. as PATH: $PATH:/data/bin, because Ansible executes the task in a way which does not resolve $PATH, so the command fails catastrophically. Essentially, this overrides any other changes to PATH.
I am looking for a solution where
the additional PATH entry should only be added once
the additional PATH entry should not override entries added by other tasks
P.S. I found this nice explanation on how to do this on Linux, but it makes use of /etc/environment which does not exist on Solaris. (And /etc/profile is once again ignored by Ansible.)
Try adding -o SendEnv=PATH to ssh_args in ansible.cfg. This requires that:
the shell in which you run Ansible has /data/bin in its PATH (or however Ansible allows you to modify the current/local PATH variable), and
the remote machine has AcceptEnv set correctly in its sshd configuration.
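In ansible.cfg that would look roughly like this (the other ssh_args values are common defaults, shown only for context):

[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o SendEnv=PATH

A common alternative that avoids touching the SSH setup, sketched here under the assumption that fact gathering is enabled, is to set environment once at play level and append to the PATH reported by the gathered facts, which also avoids overriding entries added elsewhere:

- hosts: solaris
  environment:
    PATH: "{{ ansible_env.PATH }}:/data/bin"
  tasks:
    - shell: printenv PATH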
So I've been looking around for an example of how I can specify environment variables for my Docker container from the AWS EB web interface. Typically in EB you can add environment properties which are available at runtime. I was using these for my previous deployment before I switched to Docker, but it appears as though Docker has some different rules with regard to how the environment properties are handled, is that correct? According to this article [1], ONLY the AWS credentials and PARAM1-PARAM5 will be present in the environment variables, but no custom properties will be present. That's what it sounds like to me, especially considering that the containers which do support custom environment properties say so explicitly, like the Python one shown here [2]. Does anyone have any experience with this software combination? All I need to specify is a single environment variable that tells me whether the application is in "staging" or "production" mode; all my environment-specific configuration is then set up by the application itself.
[1] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-docker
[2] http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python
Custom environment variables are supported with the AWS Elastic Beanstalk Docker container. Looks like a miss in the documentation. You can define custom environment variables for your environment and expect that they will be passed along to the docker container.
I needed to pass an environment variable at the moment of docker run using Elastic Beanstalk, but it is not allowed to put this information in Dockerrun.aws.json.
Below are the steps to resolve this scenario:
Create a folder .ebextensions
Create a .config file in the folder
Fill the .config file:
option_settings:
  - option_name: VARIABLE_NAME
    value: VARIABLE_VALUE
Zip the .ebextensions folder along with the Dockerrun.aws.json and the Dockerfile, and upload it to Beanstalk
To see the result, inside the EC2 instance, execute the command "docker inspect CONTAINER_ID" and you will see the environment variable.
At least for me the environment variables that I set in the EB console were not being populated into the Docker container. I found the following link helpful though: https://aws.amazon.com/premiumsupport/knowledge-center/elastic-beanstalk-env-variables-shell/
I used a slightly different approach where instead of exporting the vars to the shell, I used the ebextension to create a .env file which I then loaded from Python within my container.
The steps would be as follows:
Create a directory called '.ebextensions' in your app root dir
Create a file in this directory called 'load-env-vars.config'
Enter the following contents:
commands:
  setvars:
    command: /opt/elasticbeanstalk/bin/get-config environment | jq -r 'to_entries | .[] | "\(.key)=\"\(.value)\""' > /var/app/current/.env
packages:
  yum:
    jq: []
This will create a .env file in /var/app/current, which is where your code should be within the EB instance.
Use a package like python-dotenv to load the .env file or something similar if you aren't using Python. Note that this solution should be generic to any language/framework that you're using within your container.
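For the Python case, loading the generated file is then a short sketch (the path matches the .config above):

from dotenv import load_dotenv

# Load the .env file written by the ebextensions command above.
load_dotenv("/var/app/current/.env")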
I don't think the docs are a miss, as Rohit Banga's answer suggests. That said, I agree that "you can define custom environment variables for your environment and expect that they will be passed along to the docker container".
The Docker container portion of the docs says, "No Docker-specific configuration options are provided by Elastic Beanstalk" ... which doesn't necessarily mean that no environment variables are passed to the Docker container.
For example, for the Ruby container the Ruby-specific variables that are always passed are ... RAILS_SKIP_MIGRATIONS, RAILS_SKIP_ASSET_COMPILATION, BUNDLE_WITHOUT, RACK_ENV, RAILS_ENV. And so on. For the Ruby container, the assumption is you are running a Ruby app, hence setting some sensible defaults to make sure they are always available.
On the other hand, for the Docker container it seems it's open. You specify whatever variables you want ... they make no assumptions as to what you are running, Rails (Ruby), Django (Python) etc ... because it could be anything. They don't know before hand what you want to run and that makes it difficult to set sensible defaults.
I just installed java using chef cookbook and updated PATH environment variable for all users (added new file to /etc/profile.d/).
Is it possible to tell chef to reload PATH variable?
When I do something like this:
execute "java_check" do
command "java -version"
end
It says that java could not be found.
It works fine when I log out, log in again, and then run the Chef recipe.
I'm not 100% sure you can update the PATH variable for future Chef runs, but you can set it manually using the environment attribute within the execute stanza. This can be used on other resources as well. See: http://docs.opscode.com/chef/resources.html#execute
From the Chef Docs,
environment
A hash of environment variables: {"ENV_VARIABLE"=>"VALUE"}.
(These environment variables must exist for a command to execute successfully.)
Default value: nil.
Run a command which requires an environment variable
execute "slapadd" do
command "slapadd < /tmp/something.ldif"
creates "/var/lib/slapd/uid.bdb"
action :run
environment ({'HOME' => '/home/myhome'})
end
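Applied to the java_check from the question, that would look something like this; the JVM path is a placeholder, so point it at wherever your cookbook actually installed Java:

execute "java_check" do
  command "java -version"
  # Placeholder path; use your actual Java bin directory.
  environment ({ 'PATH' => '/usr/lib/jvm/java/bin:/usr/bin:/bin' })
end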
I found that it is not possible to update ENV variables permanently (so that they remain available after Chef finishes), but it is possible to update them for future commands within the current Chef run.
ruby_block "set-env-" do
block { ENV[variable_name] = variable_value }
not_if { ENV[variable_name] == variable_value }
end
execute "run_updated_bash" do
command "bash /etc/profile.d/myscript.sh"
end
Have you tried something like this? It could be run after you place your file in /etc/profile.d/