VSCode not passing environment variables to docker-compose.yml and container - docker

I'm trying to set up a development environment in Docker under WSL2 (Windows 11) with VSCode and its Remote Containers extension. Building and running mostly work, but I am unable to use or pass environment variables from my WSL environment to the docker compose build step and subsequently the container. This originated from my wanting to forward my SSH agent by adding the following to docker-compose.yml:
environment:
  - SSH_AUTH_SOCK=/ssh-agent
...
volumes:
  - ${SSH_AUTH_SOCK}:/ssh-agent
This build step fails in VSCode if I include the volume line, because the variable SSH_AUTH_SOCK evaluates to an empty string and so the docker compose command fails. If I manually run docker compose up -d from the WSL command line (provided, of course, that I have an SSH_AUTH_SOCK variable from a running ssh-agent), the build succeeds and I can attach VSCode to the container. However, even if I do that, VSCode overrides the container's SSH_AUTH_SOCK with something like SSH_AUTH_SOCK=/tmp/vscode-ssh-auth-xxxx.sock (although I could of course manually export SSH_AUTH_SOCK=/ssh-agent). This happens even if I have disabled automatically starting ssh-agent.
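For what it's worth, Compose's ${VAR:-default} syntax at least stops the empty-string failure (the fallback path below is just a placeholder), though it obviously doesn't make a working socket appear in the container:
volumes:
  - ${SSH_AUTH_SOCK:-/tmp/ssh-agent.sock}:/ssh-agent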
More generally speaking, other environment variables I try to pass through never get set either, even though I have explicitly enabled the VSCode settings that pass through WSL environment variables.
This essentially means I cannot use my WSL ssh-agent socket in containers. Is there a solution to this that I'm missing?

Related

How to set environment variable in docker desktop in windows?

I ran the command "docker pull mysql:5.7.28", which showed the image and container correctly in Docker Desktop, but when I tried to run the container it exited, and the error was that MYSQL_ROOT_PASSWORD is required.
So I need to set MYSQL_ROOT_PASSWORD to resolve this issue.
Now the problem is simple: I have not used a docker-compose file to set up the container, and I am unable to find an option in Docker Desktop to set this variable.
You can set the environment variable when you run the container with docker run - see, e.g. "Start a mysql server instance" on https://hub.docker.com/_/mysql.
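For example, the run command from that page (container name, password and tag are of course placeholders):
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql:5.7.28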
An alternative would be to create a docker-compose.yml and set the environment variable there (the reference for what you can put in Compose files is here).
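A minimal sketch of such a file (service name and password are again placeholders):
version: "3"
services:
  db:
    image: mysql:5.7.28
    environment:
      - MYSQL_ROOT_PASSWORD=my-secret-pw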
There might be a way to set environment variables in Docker Desktop, but I don't use it, so I don't know. The documentation should tell you, though.

how to configure docker containers proxy?

How do I configure a proxy for Docker containers?
First of all, I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the docker daemon, but it doesn't work for docker containers; it seems to only take effect for commands like 'docker pull'.
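Per the linked docs, that drop-in looks roughly like this (with my proxy address substituted):
[Service]
Environment="HTTP_PROXY=http://HostIP:8118"
Environment="HTTPS_PROXY=http://HostIP:8118"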
Secondly, I have a lot of docker containers, and I don't want to run 'docker run -e http_proxy=xxx...' every time I start one.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v: Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel it may be related to my docker version, or to docker being started by the systemd service, so that ~/.docker/config.json does not take effect.
Finally, I just hope that modifying configuration files will let all my containers automatically get these environment variables when they start (that is, automatically set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, the host's proxy is working properly.
But I was wrong: I tried running a container and setting http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 by hand, but the command 'wget facebook.com' failed with 'Connecting to HostIP:8118... failed: No route to host.' The host machine (CentOS 7) can execute the wget successfully, and I can ping the host from inside the container. I don't know why; it might be related to firewalls and the 8118 port.
That's it, OMG... I have no other ideas; can anyone help me?
==============================
ps: As you can see from the screenshots below, I actually want to install goa and goagen but get an error, probably for network reasons; I wanted to enable the proxy to try again, which is how I ran into the problem above.
1. My go docker container: [screenshot: go docker wget]
2. My host: [screenshot: my host wget]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented from docker:
https://docs.docker.com/network/proxy/
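With a supported version, a ~/.docker/config.json along these lines (using the proxy address from the question) makes the docker CLI inject the variables into every container it starts:
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}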

Running testcontainers inside a Docker container for Windows

As stated in the documentation, if I want to run Testcontainers inside a Docker container I have to consider the following points:
The docker socket must be available via a volume mount
The 'local' source code directory must be volume mounted at the same path inside the container that Testcontainers runs in, so that Testcontainers is able to set up the correct volume mounts for the containers it spawns.
How do I comply with the 2nd point, specifically the -v $PWD:$PWD part, if I use Docker for Windows?
On Windows, Docker uses named pipes instead of sockets.
docker run -v \\.\pipe\docker_engine:\\.\pipe\docker_engine
But you need Windows v1709 and a special version of Docker for Windows, since this feature is experimental.
More info:
https://blog.docker.com/2017/09/docker-windows-server-1709/
As for $PWD: in Windows cmd you can use the %CD% variable, which does the same job. PowerShell also has a $pwd, same as on Linux. Unfortunately, they do not work with docker-compose, because they are not true environment variables.
I think the easiest would be to run a short script on Windows that creates a .env file with PWD set to the current directory:
echo PWD=%cd% > .env
and then you can use $PWD in docker-compose the same as on Linux.
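A minimal sketch of how that looks in the compose file (service and image names are placeholders):
services:
  tests:
    image: my-test-image
    working_dir: ${PWD}
    volumes:
      - ${PWD}:${PWD}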

Reference env variable from host at runtime in "env-file" to be passed to docker image

Is there a syntax to reference an environment variable from the host in a Docker env-file?
Specifically, I'd like to do something like DOCKER_HOST=${HOSTNAME}, where HOSTNAME would come from the environment of the machine hosting the docker image.
The above doesn't get any attempt at replacement whatsoever and gets passed into the Docker image literally as ${HOSTNAME}.
This is generally not done at the image level, but at runtime, on docker run:
See "How to get the hostname of the docker host from inside a docker container on that host without env vars"
docker run .. -e HOST_HOSTNAME=$(hostname) ..
That does use an environment variable.
You can also do it without an environment variable, using -h:
docker run .. -h "$(hostname)" ..
But that does not work when your docker run is part of a docker compose. See issue 3840.
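If you are using Compose anyway, a similar effect is possible because Compose interpolates ${...} from the shell that runs it (a sketch; note that HOSTNAME must actually be exported, which not every shell does by default):
environment:
  - HOST_HOSTNAME=${HOSTNAME}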

Using docker within vagrant behind a proxy

I want to run apt from within a docker container, within a vagrant machine (running on virtualbox), but this fails because I'm behind a proxy.
I use vagrant-proxyconf to allow the vagrant machine itself to connect to the internet, which works fine:
if Vagrant.has_plugin?("vagrant-proxyconf")
  config.proxy.http = ...
  config.proxy.https = ...
  config.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
end
However, these settings aren't carried through to docker containers started within the vagrant machine. When I start a debian-based docker container with
docker run -it debian /bin/bash
and within the bash I run
apt-get update
then apt can't establish a connection. I can fix this problem by adding the following to my Dockerfile
ENV http_proxy <myproxy>
but adjusting all Dockerfiles would be cumbersome, and I'd prefer not to hardcode my proxy into the Dockerfiles themselves, as those are also used in a different setup.
I've also tried telling docker which proxy to use via systemd, following https://docs.docker.com/engine/admin/systemd/
However, this appears not to have any effect on the proxy that apt uses within the docker container.
Is there a way to pass the http_proxy environment variable to all docker containers started within my machine by default? Alternatively, would it be possible to configure vagrant / virtualbox to "emulate" a "proxyless" internet connection so that I don't have to reach the proxy settings down through all the virtualization layers?
You can pass the variables as arguments to the docker build command. That way it will work, and the proxy IP won't be in the Dockerfile. Like this:
docker build -t yourImage --build-arg http_proxy="http://yourIp" .
Then, in the Dockerfile, you must declare the variable as a build argument:
ARG http_proxy
The variable is then automatically available and can be used like this:
RUN echo ${http_proxy}
But in your case you don't need to use it explicitly; just passing the proxy variable is enough for it to be used during the build.
This technique can also be very useful for avoiding hardcoded passwords in Dockerfiles.
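If those images are built through docker-compose, the same argument can be supplied in the compose file (a sketch; the build context is a placeholder):
build:
  context: .
  args:
    - http_proxy=http://yourIp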
Hope it helps
