I would like to find a single place where I can set an environment variable that will be used by both systemd units and bash scripts. So far I've been 50% successful: I can add a systemd-visible environment variable to /etc/systemd/system.conf as:
DefaultEnvironment=APPLICATION=user1 ID=1
and then use it in a systemd service script:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin $APPLICATION --noclear %I 38400 linux
This works. However, I would like the same environment variables to then be visible to the bash environment started by this getty. I have tried several variations of:
[Service]
Environment=$APPLICATION
PassEnvironment=APPLICATION ID
ExecStart=
ExecStart=-/sbin/agetty --autologin $APPLICATION --noclear %I 38400 linux
with and without the "Environment" declaration, etc. ad nauseam, but a subsequent
echo $APPLICATION
in the spawned shell shows it is unset.
I know that I can set the environment variable again in /etc/bashrc but, again, I would like to set this in only one place and have it be usable by both systemd and bash (I need to make this a bit idiot-proof).
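One variation I have considered but not verified: PAM already reads /etc/environment for login shells, and systemd units can load the same file with EnvironmentFile=, so keeping everything in /etc/environment might work (sketch, untested):
# /etc/environment
APPLICATION=user1
ID=1
# drop-in for the getty unit, e.g. /etc/systemd/system/getty@.service.d/override.conf
[Service]
EnvironmentFile=/etc/environment
ExecStart=
ExecStart=-/sbin/agetty --autologin $APPLICATION --noclear %I 38400 linux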
What am I missing here?
A program that I have no control over (it's actually PyCharm) launches
C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 -- wget <link>
The download fails because my system runs behind a proxy. Within WSL, I have set the environment variables http_proxy and https_proxy in /etc/profile, /etc/environment and /etc/bash.bashrc. They would get picked up if the program ran the command
C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 -- /bin/bash -lc "wget <link>"
because this launches bash in login-shell mode, which reads bashrc etc. However, that does not happen, and I need to make it work with only wsl.exe.
How can I set environment variables that are picked up when launching wsl.exe (without bash -lc)?
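For reference, the entries themselves look like this in /etc/environment (proxy host and port replaced with a placeholder):
http_proxy=http://proxy.example.com:8080
https_proxy=http://proxy.example.com:8080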
While I did not find out how to add environment variables that are used on launch of wsl.exe, I found out how to instruct wget to use a proxy by default.
Add to /etc/wgetrc the following lines:
use_proxy=yes
http_proxy=<proxy:port>
https_proxy=<proxy:port>
curl can also be instructed to use a proxy by default; just add the following line to /etc/curlrc:
proxy=<proxy:port>
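To sanity-check that wget really picks up the proxy from /etc/wgetrc (placeholder URL), run it once by hand from within WSL and look for the proxy host in the connection messages:
wget http://example.com 2>&1 | grep -i connecting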
I have a project which is set up with Docker Compose. When I or anyone on my team is working on the project, getting everything running is just a docker-compose up away. We also have a Docker Machine ("default") associated with the production environment. So we just have to connect to the machine via:
eval "$(docker-machine env default)"
and now deploying is exactly the same as getting everything running locally. Just a docker-compose up. I love this!
However, the Compose configuration is now split into three files:
docker-compose.yml: general stuff. Applies for both production and local environment.
docker-compose.override.yml: only applies in local environment.
docker-compose.production.yml: only applies in production environment.
Conveniently, docker-compose automatically reads both docker-compose.yml and docker-compose.override.yml so in the local environment we can still run docker-compose up without additional arguments. In the production environment we need to be explicit though:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up
This is much more verbose, and it's easy to forget the additional arguments when you are used to simply running docker-compose up. I wish I had the seamless experience from before: connect to the machine with a single command and then use the same commands as in the local environment.
I found out that you can make docker-compose default to a different set of configuration files by setting the environment variable COMPOSE_FILE. So my go-to solution for now is to run two commands when connecting to the machine:
eval $(docker-machine env default)
export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"
This works! I can run docker-compose up in the production environment just like before.
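As an aside: docker-compose also reads COMPOSE_FILE from a .env file in the project directory, so a one-line file like
COMPOSE_FILE=docker-compose.yml:docker-compose.production.yml
would work too. But since the same checkout is used for local development, a committed .env can't differ per target, so it doesn't solve my problem.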
Since eval $(docker-machine env default) likewise does nothing but set environment variables, I was wondering if it's possible to permanently add this line:
export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"
into the output of docker-machine env default so I'm back to running a single command when I want to connect to the machine.
If this is not possible (I couldn't find a source), is there a different approach to this problem?
Of course I could write a shell script which simply includes both commands but I would prefer an idiomatic solution.
If you just run the docker-machine env command without the eval wrapper, it will write out a series of shell commands. You can wrap this with another script that runs docker-machine env and also adds your other settings:
#!/bin/sh
# I am docker_production
docker-machine env default
echo 'export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"'
echo 'PS1="PRODUCTION $PS1"'
If you run this script
./docker_production
it will print out shell commands that set up the environment; to actually have it take effect you can
eval "$(./docker_production)"
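The evaluated output looks roughly like this (values depend on your machine):
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/you/.docker/machine/machines/default"
export DOCKER_MACHINE_NAME="default"
# Run this command to configure your shell:
# eval $(docker-machine env default)
export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"
PS1="PRODUCTION $PS1"
Note the comment lines docker-machine env emits in the middle: they are why the eval needs the double quotes. Without them, the newlines collapse into one line and the # comments out everything that follows.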
In the same way you can write a shell script that outputs the commands to undo this:
#!/bin/sh
# I am docker_development
docker-machine env -u
echo 'unset COMPOSE_FILE'
echo 'PS1="${PS1#PRODUCTION }"'
eval "$(./docker_development)"
I installed Docker and Docker Compose on an Ubuntu 18.10 server. When I execute a docker-compose command from the terminal it works, but when I configure a crontab to execute a command with docker-compose, I get this error: "The USER variable is not set. Defaulting to a blank string. the input device is not a TTY"
The USER error comes from docker-compose.yml, which uses ${USER}.
How can I fix the issue?
PS: It was working normally on Ubuntu 18.04 server.
The cron daemon was designed in such a way that it does NOT execute commands within your normal shell environment. This means you cannot use bare commands in cron the way you would from the SSH shell command line. This is because the PATH environment variable is /usr/bin:/bin, and the SHELL environment variable is set to /bin/sh.
So you may have to specify shell environment variables directly in the crontab, like:
USER=xxx
* * * * * /bin/echo ${USER}
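Putting that together for docker-compose, an example crontab might look like this (paths, user, and schedule are placeholders); if your job uses docker-compose exec, also add -T so Compose doesn't try to allocate a TTY:
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
USER=youruser
*/5 * * * * cd /home/youruser/project && docker-compose up -d >> /tmp/compose-cron.log 2>&1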
While working behind a corporate proxy:
Why can't Docker read the proxy-specific values from environment variables
(http_proxy, https_proxy, ...)?
Usually you get a timeout while pulling an image, even if the proxy URL is set in an environment variable.
I have to set the value (hard-coding the same value again) by creating config files in the /etc/systemd/system/docker.service.d folder.
If we change the proxy URL, we have to make changes in several places. Is there any way to refer to the value from an environment variable?
I have tried docker run -e env_proxy_variable=proxy_url but got the same timeout.
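For reference, this is roughly what I have to maintain in the drop-in today (proxy URL is a placeholder):
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:8080"
Environment="HTTPS_PROXY=http://proxy.example.com:8080"
followed by systemctl daemon-reload and systemctl restart docker.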
Consider using the below instead:
export HTTP_PROXY=http://xxx:port/
export HTTPS_PROXY=http://xxx:port/
export FTP_PROXY=http://xxx:port/
You can hardcode these variables in the /etc/default/docker file so that they are exported whenever Docker is started. Note, though, that on systemd-based systems the stock Docker unit typically ignores /etc/default/docker, in which case you still need the systemd drop-in approach mentioned in the question.
You can check whether an environment variable has been exported with echo $name_of_var. For example, after running
docker run --env HTTP_PROXY="123.45.21.32" -it ubuntu_latest /bin/bash
type
echo $HTTP_PROXY
It is likely that your DNS server isn't configured. Try
cat /etc/resolv.conf
if you see something like:
nameserver 8.8.8.8
then it's likely that the DNS server is inaccessible from behind the firewall. You can pass a DNS server address along with the docker run command like so:
docker run --env HTTP_PROXY="123.45.21.32" --dns=172.10.18.0 -it ubuntu_latest /bin/bash
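If you'd rather not pass --dns on every docker run, a persistent alternative (assuming you can edit the daemon configuration) is to set it in /etc/docker/daemon.json and restart the Docker daemon:
{
  "dns": ["172.10.18.0", "8.8.8.8"]
}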
Before I can use my docker container (using Boot2Docker on OSX) I always have to remember to enter
export DOCKER_HOST=tcp://$(boot2docker ip 2>/dev/null):2375
in my terminal, and naturally I often forget this.
So I figured I'd just add that line to my ~/.bashrc file but when I've done this and check the value of DOCKER_HOST it's tcp://192.168.42.43:4243 instead of tcp://192.168.42.43:2375.
Breaking it down:
boot2docker ip => "The VM's Host only interface IP address is: 192.168.59.103"
boot2docker ip 2 => "The VM's Host only interface IP address is: 192.168.59.103"
boot2docker ip 2>/dev/null => "192.168.59.103" (okay, I sort of get that, but I have no idea how it works, and I have no idea where the :4243 is coming from.)
What's actually going on here and why is the port different?
If you want your DOCKER_HOST environment variable to be set automatically for every terminal you open, use the "boot2docker shellinit" command. You can add this line to your .bash_profile to take care of business:
eval "$(boot2docker shellinit)"
Unfortunately, this will give you an annoying error message if your boot2docker VM is not running when you open the terminal ("error in run: VM "boot2docker-vm" is not running.") Put this in your .bash_profile instead to suppress the error message:
eval "$(boot2docker shellinit 2>/dev/null)"
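For context, the shellinit output being evaluated looks something like this (paths and IP will differ on your machine):
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://192.168.59.103:2376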
More details on GitHub.
NOTE: if you're using Docker Machine instead to manage boot2docker, the equivalent command is
eval "$(docker-machine env MACHINE-NAME)"
where MACHINE-NAME is the name of your boot2docker machine.
Something else is setting that environment variable for you. Why are you dumping stderr from that command to /dev/null? Is some extra info coming to stderr?
I would do
export DOCKER_HOST="tcp://$(boot2docker ip 2>/dev/null):2375"
echo "Docker Host is set to ${DOCKER_HOST}"
to get some debugging output as it is set; then query the value at a later stage to see if something else is messing with it.
I think boot2docker's socket command is more useful here; we can also simplify things a bit.
If you've not set this up before just run this in your terminal:
echo export DOCKER_HOST=\`boot2docker socket 2\>/dev/null\` >> ~/.bashrc
If you've already been messing with this, just change your line in .bashrc to this:
export DOCKER_HOST=`boot2docker socket 2>/dev/null`
Now, to test that it has worked, open a new terminal and run docker run hello-world.