How to set environment variable in docker desktop in windows? - docker

I used the command "docker pull mysql:5.7.28", which showed the image and container correctly in Docker Desktop, but when I try to run the container it exits with the error that MYSQL_ROOT_PASSWORD is required.
So I need to set MYSQL_ROOT_PASSWORD to resolve this issue.
The problem is simple: I have not used a docker-compose file to set up the container, and I am unable to find an option in Docker Desktop to set this variable.

You can set the environment variable when you run the container with docker run - see, e.g. "Start a mysql server instance" on https://hub.docker.com/_/mysql.
An alternative would be to create a docker-compose.yml and set the environment variable there (the reference for what you can put in Compose files is here).
There may be a way to set environment variables in Docker Desktop itself, but I don't use it, so I can't say; the documentation should tell you, though.
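As a concrete sketch of the docker run approach, assuming the official mysql image from the linked page (the container name and password below are placeholders, not from the question):

```shell
# Pass the required variable with -e at container creation time.
# "my-secret-pw" is only a placeholder - pick your own password.
docker run --name some-mysql \
  -e MYSQL_ROOT_PASSWORD=my-secret-pw \
  -d mysql:5.7.28
```

The same variable can be set under environment: in a docker-compose.yml if you go the Compose route.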

Related

VSCode not passing environment variables to docker-compose.yml and container

I'm trying to set up a development environment in Docker under WSL2 (Windows 11) with VSCode and its Remote Containers extension. Building and running mostly work, but I am unable to use or pass environment variables from my WSL environment to the docker compose build step and subsequently to the container. This originated from my wanting to forward my SSH agent by adding the following to docker-compose.yml:
environment:
- SSH_AUTH_SOCK=/ssh-agent
...
volumes:
- ${SSH_AUTH_SOCK}:/ssh-agent
This build step fails in VSCode if I include the volume line, because the variable SSH_AUTH_SOCK is evaluated as an empty string and thus the docker compose command fails. If I manually run docker compose up -d from the WSL command line (and of course provided I have an SSH_AUTH_SOCK variable from a running ssh-agent), the build succeeds and I can attach VSCode to this container. However, even if I do that, VSCode overrides the container's SSH_AUTH_SOCK with something like SSH_AUTH_SOCK=/tmp/vscode-ssh-auth-xxxx.sock (although I could of course manually export SSH_AUTH_SOCK=/ssh-agent). This happens even if I have disabled automatically starting ssh-agent.
More generally speaking, if I try to pass through other environment variables, they never get set, even if I explicitly have the settings in VSCode enabled to pass through WSL environment variables.
This essentially means I cannot use my WSL ssh-agent socket in containers. Is there a solution to this that I'm missing?
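One hedged workaround, relying on Compose's documented behaviour of reading a .env file from the project directory for variable substitution: give SSH_AUTH_SOCK a fallback value there so it is never empty when VSCode invokes Compose (the socket path below is purely illustrative):

```
# .env, next to docker-compose.yml - read automatically by docker compose.
# The path is an example; point it at your real agent socket.
SSH_AUTH_SOCK=/run/user/1000/ssh-agent.sock
```

Values exported in the invoking shell take precedence over the .env file, so running docker compose from a WSL shell with a live agent still uses the real socket.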

Need to pass arguments to docker entrypoint.sh during each docker start (not docker run). Is something like this possible?

I have a Dockerfile; I can give environment variables and arguments during 'docker run', and they persist across docker start/stop/restart.
But sometimes I may need to change them, which requires me to make a new container every time.
Is there a solution to this?
There are many properties of a container that can only be set at creation time, and the environment variables and command line are among those. You must delete and recreate the container to change these. There isn't a workaround.
If you're just concerned about the length of the docker run command, consider either packaging that command in a shell script or looking at an orchestration tool like Docker Compose. If you change a setting in a docker-compose.yml file and re-run docker-compose up -d, it will make the minimal change required for that (which could include deleting and recreating the container, but it won't touch containers whose current settings are fine).
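As a sketch of the shell-script option, assuming a hypothetical image and settings (none of the names or values below come from the question):

```shell
#!/bin/sh
# recreate.sh - tear down and recreate the container whenever a
# creation-time setting (env var, command line) has to change.
docker rm -f myapp 2>/dev/null || true
docker run -d --name myapp \
  -e APP_MODE=production \
  -p 8080:80 \
  myimage:latest
```

Editing this script and re-running it then plays the same role as editing docker-compose.yml and re-running docker-compose up -d.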

how to configure docker containers proxy?

how to configure docker containers proxy?
First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy). It really works for the Docker daemon, but it doesn't work for docker containers; it seems to take effect only for commands like 'docker pull'.
Secondly,
I have a lot of docker containers, and I don't want to use a 'docker run -e http_proxy=xxx...' command every time I start a container.
So I guessed there might be a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(my host machine system is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, so ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying configuration files will let all my containers automatically configure environment variables when they start (that is, auto-set the environment variables 'http_proxy=http://HostIP:8118 https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV parameter). I want to know if there is such a way? If it can be realised, I can make the containers use the host's proxy; after all, my host's proxy is working properly.
But I was wrong. I tried to run a container, then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' Yet the host machine (CentOS 7) can successfully execute the wget, and I can successfully ping the host from within the container. I don't know why; it might be related to the firewall and the 8118 port.
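A hedged way to check the firewall theory, assuming the proxy listens on port 8118 and the CentOS 7 host runs firewalld (adjust the port if yours differs):

```shell
# 1. Confirm the proxy listens on all interfaces (0.0.0.0), not only
#    127.0.0.1; a loopback-only listener is unreachable from the docker bridge.
ss -tlnp | grep 8118
# 2. Open the port so containers can reach the host (needs root):
firewall-cmd --permanent --add-port=8118/tcp
firewall-cmd --reload
```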
That's all.
OMG.. I have no other way; can anyone help me?
==============================
ps:
You can see from the screenshots below that I actually want to install goa and goagen but get an error, maybe for network reasons, so I wanted to enable the proxy to try again... hence the problem above.
1. my go docker container (screenshot: wget failing inside the container)
2. my host (screenshot: wget succeeding on the host)
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
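On 17.07+, the per-container proxy settings from that page go in ~/.docker/config.json on the machine running the docker CLI; a minimal sketch (replace HostIP with your host's real address):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```

With this in place, containers created afterwards get http_proxy/https_proxy set automatically, without -e on every docker run.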

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker run with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar, load it on a different system and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But at the moment this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it keep the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
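For instance, to pull just the environment and the command out of an existing container with inspect's Go-template --format flag (the container name here is a placeholder):

```shell
# Print only the environment variables the container was created with
docker container inspect --format '{{json .Config.Env}}' mycontainer
# Print the command it runs
docker container inspect --format '{{json .Config.Cmd}}' mycontainer
```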
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error prone user entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
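A minimal docker-compose.yml capturing such settings might look like the following (service name, image, and values are illustrative, not taken from the question):

```yaml
version: "3"
services:
  myapp:
    image: myimage:latest
    environment:
      - APP_MODE=production
    ports:
      - "8080:80"
    volumes:
      - ./data:/var/lib/myapp
```

Running docker-compose up -d against this file recreates the container with exactly these settings on any host.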
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the road of saving the instance's current status, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the contents of volumes anyway; for those, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/

Reference env variable from host at runtime in "env-file" to be passed to docker image

Is there a syntax to reference an environment variable from the host in a Docker env-file?
Specifically, I'd like to do something like DOCKER_HOST=${HOSTNAME}, where HOSTNAME would come from the environment of the machine hosting the Docker image.
The above gets no attempt at replacement whatsoever and is passed into the Docker image literally as ${HOSTNAME}.
This is generally not done at the image level, but at runtime, on docker run:
See "How to get the hostname of the docker host from inside a docker container on that host without env vars"
docker run .. -e HOST_HOSTNAME=$(hostname) ..
That does use an environment variable.
You can do so without environment variables, using -h
docker run -h=$(hostname)
But that does not work when your docker run is part of a docker compose. See issue 3840.
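Since the env-file format itself performs no variable substitution, one hedged workaround is to render the file from the host environment just before docker run (the file name app.env and the variable are illustrative):

```shell
# env-file syntax has no ${VAR} expansion, so expand the value on the
# host first, then hand the finished file to docker run.
printf 'DOCKER_HOST=%s\n' "$(hostname)" > app.env
cat app.env
# afterwards: docker run --env-file app.env myimage
```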
