vscode dev container environment variables not exposing to docker-compose - docker

My docker-compose.yml looks like this:
version: "3.8"
services:
  vscode:
    volumes:
      - ..:/workspace:cached
      - $SSH_AUTH_SOCK:/ssh-agent
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - SSH_AUTH_SOCK=/ssh-agent
The problem is that VS Code doesn't offer any equivalent of docker-compose run --env ..., so I'm left with:
WARNING: The SSH_AUTH_SOCK variable is not set. Defaulting to a blank string.
Is there any way for me to expose my variables from my host to the dev container without using an .env file or anything like that?

I opened an issue on GitHub yesterday. The outcome is that if you use WSL, you need to export this variable through your ~/.profile or ~/.bash_profile.
We are using a non-interactive login shell to probe the environment variables in WSL and use these as the "local" variables.
https://github.com/microsoft/vscode-remote-release/issues/5806
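A minimal sketch of that workaround, assuming you want ssh-agent's socket exported from ~/.profile (the file the non-interactive login shell reads in WSL):
# ~/.profile (WSL): make sure an ssh-agent is running and SSH_AUTH_SOCK is exported
if [ -z "$SSH_AUTH_SOCK" ]; then
  eval "$(ssh-agent -s)" > /dev/null
fi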

I just had a similar issue so this may help.
I would run the following:
export VARIABLE=VALUE
sudo docker-compose up
The problem was that I was exporting the environment variable as my user and then running docker-compose as sudo so it wouldn't have the same environment variables.
This can be solved either by adding yourself to the docker group so you can run without sudo,
or by exporting the variable as root (not recommended).
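A sketch of the first option (you need to log out and back in, or run newgrp docker, before the group change takes effect):
sudo usermod -aG docker "$USER"
# after re-logging in:
export VARIABLE=VALUE
docker-compose up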

Docker compose refuses to apply environment variables

UPDATE
It appears the problem is specifically related to the RUN command in the Dockerfile. If I remove it, the build works fine and the environment variables are clearly being picked up, since the password gets applied and I can connect using it. I'm not sure why the login fails in the RUN command; I've seen many examples using similar code.
I'm working on a very basic docker compose file to set up a dev environment for an app, and I started with the database server, which is MS SQL. Here's what the docker-compose.yml file looks like:
version: '3.8'
services:
  mssql:
    build:
      context: .
      dockerfile: docker/mssql/Dockerfile
    ports:
      - '1434:1433'
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"
    volumes:
      - mssql-data:/var/opt/mssql
As you can see from the dockerfile path, it lives in a sub-path and looks like this:
FROM mcr.microsoft.com/mssql/server:2019-latest
COPY ./docker/mssql/TESTDB.bak /var/opt/mssql/backup/TESTDB.bak
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U SA -P "YourStrong!Passw0rd" -Q 'RESTORE DATABASE TESTDB FROM DISK = "/var/opt/mssql/backup/TESTDB.bak" WITH MOVE "TESTDB_Data" to "/var/opt/mssql/data/TESTDB.mdf", MOVE "TESTDB_Log" to "/var/opt/mssql/data/TESTDB_log.ldf"'
(Yes, I realize that the password in the RUN command is redundant; I had tried to use a variable there earlier and, since it wasn't working, I hard-coded it.)
When I run docker-compose up -d, I always get this error: Login failed for user 'SA'
I wasted way too much time thinking there was actually something wrong with the password until I realized that if I add the environment variables directly in the Dockerfile, it works. So in my Dockerfile, above the RUN command, I can just do this:
ENV ACCEPT_EULA=Y
ENV SA_PASSWORD=YourStrong!Passw0rd
So I concluded that my environment variables simply aren't being read. I tried with quotes, without quotes, and using env_file instead; nothing seems to work. I also tried the following format, with no luck:
environment:
  - ACCEPT_EULA=Y
  - SA_PASSWORD=YourStrong!Passw0rd
I also tried using MSSQL_SA_PASSWORD instead of SA_PASSWORD, as well as having both in there. I assumed that was unlikely to be the problem though given SA_PASSWORD works fine. Lastly, I tried using a 2017 image in case it was image specific, that didn't work either.
I'm assuming it must be something silly I'm missing. I saw a lot of talk about the .env file in the root behaving differently, but if I understood correctly people go wrong with that when they try to use environment values inside the docker-compose.yml file itself, which is not what I'm doing here. So I'm about ready to lose my mind over this, as it seems like such a simple, basic thing.
I think you're confusing the ENV statement in the Dockerfile with the environment variables set when running an image. The key is in the details of the docs: they note that compose's environment entries are the same as saying docker run -e, not docker build.
Adding to the confusion, when you use ENV you are setting defaults for when the image runs later:
https://docs.docker.com/engine/reference/builder/#env
If you haven't yet, I very much recommend getting familiar with building and running your image with docker run and docker build before moving on to compose; it's much less confusing that way.
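A rough illustration of that distinction (the image name here is just a placeholder):
docker build -t myimage .                                   # build time: compose's environment: / docker run -e do not apply here
docker run -e SA_PASSWORD='YourStrong!Passw0rd' myimage     # run time: this is what compose's environment: corresponds to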
The issue with your build here stems from a confusion between the build-time and run-time environment variables: with the environment or env_file properties you specify the environment variables to be set for the service container.
But the RUN command in your Dockerfile is executed at the build-time of the image! To pass variables when building a new image you should use build args instead, as you already mentioned in your comment:
services:
  mssql:
    build:
      context: .
      dockerfile: docker/mssql/Dockerfile
      args:
        SA_PASSWORD: "YourStrong!Passw0rd"
    # ...
With this you can use the SA_PASSWORD as a build ARG:
FROM mcr.microsoft.com/mssql/server:2019-latest
COPY ./docker/mssql/TESTDB.bak /var/opt/mssql/backup/TESTDB.bak
ARG SA_PASSWORD
RUN ( /opt/mssql/bin/sqlservr --accept-eula & ) | grep -q "Service Broker manager has started" && /opt/mssql-tools/bin/sqlcmd -S localhost,1433 -U SA -P "$SA_PASSWORD" -Q 'RESTORE DATABASE TESTDB FROM DISK = "/var/opt/mssql/backup/TESTDB.bak" WITH MOVE "TESTDB_Data" to "/var/opt/mssql/data/TESTDB.mdf", MOVE "TESTDB_Log" to "/var/opt/mssql/data/TESTDB_log.ldf"'
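For reference, when building the image directly rather than through compose, the same build arg can be passed on the command line (a sketch; the image tag is assumed):
docker build --build-arg SA_PASSWORD='YourStrong!Passw0rd' -f docker/mssql/Dockerfile -t mssql-dev .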
If you want to move the actual password to a .env file you can use variable substitution in the compose.yml:
services:
  mssql:
    build:
      # ...
      args:
        SA_PASSWORD: "$SA_PASSWORD"
    # ...
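The .env file referenced here would then just hold the value (a sketch; it sits next to the compose file so compose picks it up for substitution):
SA_PASSWORD=YourStrong!Passw0rd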
In your docker-compose.yml, have you tried:
- ACCEPT_EULA=Y
- SA_PASSWORD=YourStrong!Passw0rd
Both responses above are fine, just a few more things:
SA_PASSWORD is deprecated; use MSSQL_SA_PASSWORD instead.
It is always nice to define env files with the variables, for instance:
sapassword.env
MSSQL_SA_PASSWORD=YourStrong!Passw0rd
sqlserver.env
ACCEPT_EULA=Y
MSSQL_DATA_DIR=/var/opt/sqlserver/data
MSSQL_LOG_DIR=/var/opt/sqlserver/log
MSSQL_BACKUP_DIR=/var/opt/sqlserver/backup
And in docker-compose.yml reference the env files the following way:
env_file:
  - sqlserver.env
  - sapassword.env

Passing env variables from docker-compose.yml to the client-side Next.js

This is quite silly, but I can't successfully pass my environment vars into my Next.js service (run with docker-compose up). Can anyone see the bug?
docker-compose.yml
services:
  ...
  nextjs-client:
    image: nextjs-client
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_API_HOST=192.168.0.9:8080
In my nextjs-client source code I try to access it with process.env.NEXT_PUBLIC_API_HOST, but it's undefined.
Your syntax looks fine.
Try to exec into the container and run printenv to see if the variable exists.
Needless to say, your code should be running in the same container.
If it exists, it may be a spelling issue; also check the process.env reference against the docker environment declaration.
Also try docker-compose down to remove the container; it might be a caching issue.
It might be easier to maintain the environment variables with an .env file and docker compose --env-file .env, but that is not the problem here, just a tip.
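The check suggested above, as a one-liner (assuming the service is named nextjs-client as in the compose file):
docker-compose exec nextjs-client printenv NEXT_PUBLIC_API_HOST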

Setting env variables for all users is not working in docker

I am setting an env variable in the Dockerfile, but it is not reflected for all users when I start the deployment. Only when I go to the terminal and run source /etc/profile.d/ip.sh does it take effect for the user. Can we set this value for all users without this manual step? If yes, how do we achieve it?
I created a shell script and added it as /etc/profile.d/ip.sh:
IP="1.1.1.1"
export IPADDR=$IP
Dockerfile:
COPY ip.sh /etc/profile.d
RUN chmod 644 /etc/profile.d/ip.sh
RUN . ./etc/profile.d/ip.sh
Please read the environment variables section of the Docker manual.
The reason you do not get what you want is that Docker does not behave the same as a full OS, therefore the profile scripts are not run at startup.
Here are some examples from there:
Set environment variables in containers
You can set environment variables in a service’s containers with the ‘environment’ key, just like with docker run -e VARIABLE=VALUE ...:
web:
  environment:
    - DEBUG=1
The “env_file” configuration option
You can pass multiple environment variables from an external file through to a service’s containers with the ‘env_file’ option, just like with docker run --env-file=FILE ...:
web:
  env_file:
    - web-variables.env
The “.env” file
You can set default values for any environment variables referenced in the Compose file, or used to configure Compose, in an environment file named .env:
$ cat .env
TAG=v1.5
$ cat docker-compose.yml
version: '3'
services:
  web:
    image: "webapp:${TAG}"

How to pass environment variables to docker-compose's applications

I want to pass environment variables that are readable by the applications spun up by docker-compose up.
What is the proper way of using docker-compose up with varying configuration settings?
I don't want to use an .env file or the environment: config, as the environment variables change frequently and it is insecure to save tokens in a file.
docker-compose run -e does work a bit, but it loses a lot:
It does not map the ports that are defined for the services in docker-compose.yml.
Also, multiple services are defined in docker-compose.yml, and I don't want to use depends_on just because docker-compose up doesn't work.
Let's say I define a service in docker-compose.yml:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
In my serviceA.js, I simply use the environment variable:
console.log("This is ", process.env.KEY, "running in service A");
When I run docker-compose run -e KEY=DockerComposeRun service-a
I do get the environment variable KEY read by serviceA.js
This is DockerComposeRun running in service A
However, I can only get one single service running this way.
I could have used environment: in docker-compose.yml:
environment:
  - KEY=DockerComposeUp
But in my use case, each docker-compose run would have different environment variable values, meaning I would need to edit the file each time before running docker-compose.
Also, more than one service would use the same environment variable; a .env file would even do a better job of that, but it is not desired.
There doesn't seem to be a way to do the same for docker-compose up
I have tried KEY=DockerComposeUp docker-compose up,
but what I get is undefined.
Exporting the variable doesn't work for me either; it seems those approaches are all about using environment variables in docker-compose.yml rather than in the applications inside the containers.
To safely pass sensitive configuration data to your containers you can use Docker secrets. Everything passed through Secrets is encrypted.
You can create and manage secrets using the commands below:
docker secret create
docker secret inspect
docker secret ls
docker secret rm
And use them in your docker-compose file, either by referring to existing secrets (external) or by using a file:
secrets:
  my_first_secret:
    file: ./secret_data
  my_second_secret:
    external: true
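A sketch of the service side (names taken from the block above; each secret is mounted at /run/secrets/<name> inside the container):
services:
  service-a:
    secrets:
      - my_first_secret
      - my_second_secret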
You can use environment like this:
service-a:
  build:
    context: .
    dockerfile: DockerfileA
  command: node serviceA.js
  environment:
    - KEY=DockerComposeRun
Refer to: https://docs.docker.com/compose/environment-variables/

Variable substitution in docker-compose.yml file when running docker-compose with sudo

I'm currently trying to use variable substitution in a docker-compose.yml file. This file contains the following:
jenkins:
  image: "jenkins:${JENKINS_VERSION}"
  external_links:
    - mongodb:mongo
  ports:
    - 8000:8080
When I try to start everything up, docker-compose shows a warning saying that the variable is not set. I suspect this is caused by the use of sudo to start docker-compose. My setup (a Jenkins docker container which has access to docker and docker-compose via volume mounts) currently requires the use of sudo. Would it be better to stop docker requiring sudo, or is there another way to fix this without changing the current setup?
sudo -E preserves the user environment when running the command. It should do what you want.
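A short sketch of that fix (the version value is just an example):
export JENKINS_VERSION=lts
sudo -E docker-compose up -d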
