How do I add an environment variable to wsl.exe? - environment-variables

A program that I have no control over (it's actually PyCharm) launches
C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 -- wget <link>
The download fails because my system runs behind a proxy. Within WSL, I have set the environment variables http_proxy and https_proxy in /etc/profile, /etc/environment and /etc/bash.bashrc. They would get picked up if the program ran the command
C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 -- /bin/bash -lc "wget <link>"
because this launches bash as a login shell, which reads /etc/profile, bashrc, and so on. However, that does not happen, and I need to make it work with only wsl.exe.
How can I set environment variables that are picked up when launching wsl.exe (without bash -lc)?

While I did not find out how to set environment variables that are picked up when wsl.exe launches, I did find out how to make wget use a proxy by default.
Add the following lines to /etc/wgetrc:
use_proxy=yes
http_proxy=<proxy:port>
https_proxy=<proxy:port>
curl can also be instructed to use a proxy by default; just add the following line to /etc/curlrc:
proxy=<proxy:port>
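A quick way to verify the wgetrc change is to launch wget exactly the way PyCharm does (the distribution name and <link> are just the ones from the question); when the proxy is actually used, wget should report "Proxy request sent, awaiting response..." instead of connecting directly:
C:\Windows\system32\wsl.exe --distribution Ubuntu-20.04 -- wget <link>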

Related

How to default to Docker Compose production configuration when connected to dedicated Docker Machine?

I have a project that is set up with Docker Compose. When I or anyone on my team is working on the project, getting everything running is just a docker-compose up away. We also have a Docker Machine ("default") associated with the production environment, so we just have to connect to the machine via:
eval "$(docker-machine env default)"
and now deploying is exactly the same as getting everything running locally. Just a docker-compose up. I love this!
However, the Compose configuration is now split into three files:
docker-compose.yml: general stuff. Applies for both production and local environment.
docker-compose.override.yml: only applies in local environment.
docker-compose.production.yml: only applies in production environment.
Conveniently, docker-compose automatically reads both docker-compose.yml and docker-compose.override.yml, so in the local environment we can still run docker-compose up without additional arguments. In the production environment, however, we need to be explicit:
docker-compose -f docker-compose.yml -f docker-compose.production.yml up
This is much more verbose, and it's easy to forget the additional arguments when you are used to simply running docker-compose up. I wish I had the seamless experience from before: connect to the machine with a single command and then use the same commands as if you were in the local environment.
I found out that you can make docker-compose default to a different set of configuration files by setting the environment variable COMPOSE_FILE. So my go-to solution for now is running two commands when connecting to the machine:
eval $(docker-machine env default)
export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"
This works! I can run docker-compose up in the production environment just like before.
Since eval $(docker-machine env default) also does nothing but set environment variables, I was wondering if it's possible to permanently add this line:
export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"
into the output of docker-machine env default so I'm back to running a single command when I want to connect to the machine.
If this is not possible (I couldn't find a source), is there a different approach to this problem?
Of course I could write a shell script which simply includes both commands but I would prefer an idiomatic solution.
If you just run the docker-machine env command without the eval wrapper, it will write out a series of shell commands. You can wrap this in another script that runs docker-machine env default and also adds your other settings:
#!/bin/sh
# I am docker_production
docker-machine env default
echo 'export COMPOSE_FILE="docker-compose.yml:docker-compose.production.yml"'
echo 'PS1="PRODUCTION $PS1"'
If you run this script
./docker_production
it will print out shell commands that set up the environment; to actually have them take effect, you can run
eval $(./docker_production)
In the same way, you can write a shell script that outputs the commands to undo this:
#!/bin/sh
# I am docker_development
docker-machine env -u
echo 'unset COMPOSE_FILE'
echo 'PS1="${PS1#PRODUCTION }"'
eval $(./docker_development)
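Assuming both files are saved next to the Compose files as docker_production and docker_development and marked executable, a typical round trip looks like this (the chmod step is only needed once):
chmod +x docker_production docker_development
eval $(./docker_production)
docker-compose up -d
eval $(./docker_development)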

Detect Docker runtime on host using environment variables

I would like to run tests verifying the correct execution of Flyway migrations using TestContainers.
Using JUnit 5, I would like to enable these tests only on hosts that have a Docker daemon running (@EnabledIfSystemProperty(named = "docker...", matches = "")): https://junit.org/junit5/docs/current/user-guide/#writing-tests-conditional-execution-system-properties.
My question is: how can I check that a Docker daemon is available on host using environment variables?
PS: I don't have any access to the CI host.
If you can run bash before that, you can run:
export IS_DOCKER_RUNNING=`cat /var/run/docker.pid`
and check whether the environment variable is empty or contains a PID.
There are several variables involved with this ("does the calling user have permissions?" is an important check; "is the Docker I have access to actually local?" is another interesting question), and there isn't going to be a magic environment variable that tells you this.
I'd probably try running a throwaway container; something along the lines of
docker run --rm busybox /bin/true
and if it succeeds move forward with other Docker-based end-to-end tests.
Building on @NinaHashemi's answer, if it must be an environment variable, and you can run a shell script before/around your tests (any POSIX shell, not necessarily bash), then you can run
if docker run --rm busybox /bin/true >/dev/null 2>&1; then
  export IS_DOCKER_RUNNING=yes
fi
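If the variable must then be consumed from JUnit 5, the environment-variable counterpart of the condition mentioned in the question would be @EnabledIfEnvironmentVariable(named = "IS_DOCKER_RUNNING", matches = "yes").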

Creating environment variables for systemd and spawned processes (bash)

I would like to find a single place where I can set an environment variable that will be used by both systemd scripts and bash scripts. I've been 50% successful. I can add a systemd-visible environment variable to /etc/systemd/system.conf as:
DefaultEnvironment=APPLICATION=user1 ID=1
and then use it in a systemd service script:
[Service]
ExecStart=
ExecStart=-/sbin/agetty --autologin $APPLICATION --noclear %I 38400 linux
This works. However, I would like the same environment variables to then be visible to the bash environment started by this getty. I have tried several variations of:
[Service]
Environment=$APPLICATION
PassEnvironment=APPLICATION ID
ExecStart=
ExecStart=-/sbin/agetty --autologin $APPLICATION --noclear %I 38400 linux
with and without the "Environment" declaration, and so on ad nauseam, but a subsequent
echo $APPLICATION
in the spawned shell shows it is unset.
I know that I can set the environment variable again in /etc/bashrc but, again, I would like to be able to set this in only one place and have it be usable by both systemd and bash (I need to make this a bit idiot-proof).
What am I missing here?

Docker Proxy Setup using environment variable

While working behind a corporate proxy:
Why can't Docker pick up the proxy settings from the environment variables
(http_proxy, https_proxy, ...)?
Usually you get a timeout while pulling an image, even if the proxy URL is set in an environment variable.
I have to set the value again (hard-code the same value) by creating config files in the /etc/systemd/system/docker.service.d folder.
If we change the proxy URL, we have to make changes in several places. Is there any way to reference the value from an environment variable?
I have tried docker run -e env_proxy_variable=proxy_url but got the same timeout.
Consider using the following instead:
export HTTP_PROXY=http://xxx:port/
export HTTPS_PROXY=http://xxx:port/
export FTP_PROXY=http://xxx:port/
You can hardcode these variables in the /etc/default/docker file so that they are exported whenever docker is started.
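For comparison, the systemd drop-in mentioned in the question typically looks something like this (the proxy host and port are placeholders), and the daemon only picks it up after a reload:
/etc/systemd/system/docker.service.d/http-proxy.conf:
[Service]
Environment="HTTP_PROXY=http://xxx:port/"
Environment="HTTPS_PROXY=http://xxx:port/"
Then reload and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker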
You can check whether the environment variable has been exported by typing echo $name_of_var. For example, after running
docker run --env HTTP_PROXY="123.45.21.32" -it ubuntu_latest /bin/bash
type
echo $HTTP_PROXY
It is likely that your DNS server isn't configured. Try
cat /etc/resolv.conf
if you see something like:
nameserver 8.8.8.8
then it's likely that the DNS server is unreachable from behind the firewall. You can pass a DNS server address along with the docker run command like so:
docker run --env HTTP_PROXY="123.45.21.32" --dns=172.10.18.0 -it ubuntu_latest /bin/bash

Unable to use a docker-container as the php interpreter in PhpStorm on Windows

I use docker-tools to run Docker on Windows 7. I have docker-machine set up, and the commands I run from the command line work fine.
I'm now trying to integrate this in PhpStorm. Here are the 2 things I tried:
1. As a local interpreter
I made php.bat with the following content
@echo off
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=C:\Users\phillaf\.docker\machine\machines\default
SET DOCKER_MACHINE_NAME=default
docker run -it --rm -v /c/Users/phillaf/app:/app -w /app php php %*
I can use php.bat to run php scripts successfully from the windows command line. In PhpStorm I tried adding php.bat as a local PHP interpreter, but it's not recognized as an interpreter.
2. As a remote interpreter
I can connect successfully to the docker-machine with PuTTY using docker@192.168.99.100, password tcuser. However, PhpStorm seems unable to connect with the same information. I tried setting it up as a remote interpreter.
Under Settings > Languages & Frameworks > PHP > Interpreters > Remote I add the SSH info and then click browse. I get the error "failed to send channel request".

Resources