As the title says, what is the difference between running a command against a docker-machine as
eval $(docker-machine env mycontainer)
docker run httpd
And
docker $(docker-machine config mycontainer) run httpd
as both run an httpd container under the "mycontainer" IP, but with the second, there is no container shown by "docker ps"
In the first form, you are first evaluating a series of env vars (DOCKER_HOST, DOCKER_CERT_PATH, DOCKER_TLS_VERIFY, DOCKER_MACHINE_NAME) that configure your current shell, so that any docker command you launch later will talk to that same docker daemon.
In the second form, you are passing the parameters (--tlsverify, --tlscacert, --tlscert, --tlskey, -H) directly to the docker command. These override the values already in your env, or the defaults (i.e. connecting to the local daemon).
In this latter case, if you want to see the container you just launched while making sure you are talking to the correct server, you have to use the same command-line parameters again: docker $(docker-machine config mycontainer) ps
To summarize:
config is better suited for one-off commands,
env is more convenient for a full session against the same server.
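To see what each subcommand actually emits, you can run them without eval or command substitution (values below are illustrative; paths and the IP depend on your setup):
$ docker-machine env mycontainer
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.docker/machine/machines/mycontainer"
export DOCKER_MACHINE_NAME="mycontainer"
$ docker-machine config mycontainer
--tlsverify --tlscacert="/home/user/.docker/machine/machines/mycontainer/ca.pem" --tlscert="/home/user/.docker/machine/machines/mycontainer/cert.pem" --tlskey="/home/user/.docker/machine/machines/mycontainer/key.pem" -H=tcp://192.168.99.100:2376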
I am trying to change the value of somaxconn in a running docker container. I am using this command:
docker run --net=container:redis --sysctl net.core.somaxconn=65535 bash
I have checked that this command changes the value of somaxconn from 128 to 65535 in the running docker container.
Since the docker run command creates and starts a new container, does this command recreate and start my existing container? Does it restart my running container with the somaxconn change, or does it change the value in the already-running container?
One thing I would like to add: after running the above command, the value of somaxconn was changed, and when I did docker ps it showed that the same container had been up and running since long before. So does that mean it changed the value in an already running container?
docker run will always start a new container; if you already have a container running, it will not be affected by the docker run command. (What you observed is likely because --net=container:redis makes the new container share the redis container's network namespace, and net.core.somaxconn is a per-network-namespace setting, so setting it in the new container also changes it for the running redis container.)
You could look at the docker exec command. This allows you to execute a command in an existing container. However, this will not allow you to change the parameters with which the original container was started (e.g. changing the --sysctl param). To change these kinds of parameters, you must remove and recreate the container with the desired parameters.
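For example, a minimal sketch of that remove-and-recreate cycle (assuming the target container is named redis and runs the stock redis image; carry over whatever other options you originally used):
docker stop redis
docker rm redis
docker run -d --name redis --sysctl net.core.somaxconn=65535 redis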
See the docs for docker run and docker exec here:
https://docs.docker.com/engine/reference/commandline/run/
https://docs.docker.com/engine/reference/commandline/exec/
docker run command will always create a new container.
So you can do one thing: execute the following commands to change the value in the running container.
#>docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
80eb3dc8438b abcxyz "docker-entrypoint.s…" 22 hours ago Up 22 hours abcxyzfriendlyname
Look for the container name or ID (e.g. 80eb3dc8438b).
Then launch the following command (make sure your container is running at this moment):
#>docker exec 80eb3dc8438b sysctl net.core.somaxconn=65535
I am not sure whether this change persists in the container or is only available for the current session.
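You can at least read the value back to check it (sysctl with just a key prints the current setting):
#>docker exec 80eb3dc8438b sysctl net.core.somaxconn
net.core.somaxconn = 65535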
I am new to Selenium on Docker. I want to create a Chrome/Firefox node with capabilities (Selenium Grid). How do I add capabilities when I add a Selenium node docker container?
I found this command so far...
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0
but I don't know how to add capabilities to it. I already tried this command, but it is not working:
docker run -d --link selenium-hub:hub selenium/node-firefox:2.53.0 -browser browserName=firefox,version=3.6,maxInstances=5,platform=LINUX
Solved... adding SE_OPTS will help you set capabilities:
docker run -d -e SE_OPTS="-browser browserName=chromeku,version=56.0,maxInstances=3,platform=WINDOWS" --link selenium-hub:hub selenium/node-chrome:2.53.0
There are multiple ways of doing this, and SE_OPTS is one of them; however, for me it complicated what I was trying to accomplish. Using SE_OPTS forced me to set capabilities I didn't want to change; otherwise they would be reset to blank/null.
I wanted to do:
SE_OPTS=-browser applicationName=Testing123
but I was forced to do:
SE_OPTS=-browser applicationName=Testing123,browserName=firefox,maxInstances=1,version=59.0.1
Another way to set capabilities is to supply your own config.json
-nodeConfig /path/config.json
You can find a default config.json, or you can start the node container and copy the current one from it:
docker cp <containerId>:/opt/selenium/config.json /host/path/target
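Since config.json is generated only when you don't supply one (see below), one way to feed the node your own file is to mount it over the default path. This is a sketch; the host path is illustrative:
docker run -d --link selenium-hub:hub -v /host/path/config.json:/opt/selenium/config.json selenium/node-firefox:2.53.0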
You can also take a look at entry_point.sh, either on GitHub or in the running container:
/opt/bin/entry_point.sh
You can run bash on the node container via:
sudo docker exec -i -t <container> bash
This will let you see how SE_OPTS is used and how config.json is generated. Note config.json is generated only if you don't supply one.
/opt/bin/generate_config
By examining generate_config you can see quite a few ENV vars such as:
FIREFOX_VERSION, NODE_MAX_INSTANCES, NODE_APPLICATION_NAME etc.
This leads to the third way to set capabilities, which is to set the environment variables used by generate_config; in my case, NODE_APPLICATION_NAME:
docker run -d -e "NODE_APPLICATION_NAME=Testing123" --link selenium-hub:hub selenium/node-chrome:2.53.0
Finally, when using SE_OPTS, be careful not to accidentally change values, specifically the browser version. You can see by looking at entry_point.sh that the browser version is calculated:
FIREFOX_VERSION=$( firefox -version | cut -d " " -f 3 )
If you change it to something else you will not get the results you are looking for.
If I have a docker container that I started a while back, what is the best way to set an environment variable in that running container? I set an environment variable initially when I ran the run command.
$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
but now that it has been running for a while I want to add another VIRTUAL_HOST to the environment variable. I do not want to delete the container and then just re-run it with the environment variable that I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads in it that I don't want to lose.
I would just like to change the value of VIRTUAL_HOST environment variable.
There are generally two options, because docker doesn't support this feature yet:
Create your own script, which will act as a runner for your command. For example:
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
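You could then copy the script into the container and invoke it there, e.g. (script name and target path are illustrative):
docker cp runner.sh CONTAINER_ID:/usr/local/bin/runner.sh
docker exec -i CONTAINER_ID /bin/bash /usr/local/bin/runner.sh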
Or run your command directly, the following way:
docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"
Docker doesn't offer this feature.
There is an issue: "How to set an enviroment variable on an existing container? #8838"
Also from "Allow docker start to take environment variables #7561":
Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.
For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:
You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)
This solution updates the environment variables without the need to delete and re-run the container, migrate volumes, or remember the parameters it was run with.
However, this requires a restart of the docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.
To set up many env vars in one step, and to avoid exposing them in the shell history (as happens with the '-e' option when passing credentials/API tokens!), you can use the --env-file option:
docker run --env-file key_value_file.txt $INSTANCE_ID
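The file is plain VAR=value pairs, one per line; lines starting with # are treated as comments. For example (values illustrative):
# key_value_file.txt
DB_HOST=db.internal
DB_PASSWORD=s3cret
API_TOKEN=abc123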
Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8
Live Restore
First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.
sudo vim /etc/docker/daemon.json
My file looks like this:
{
"default-runtime": "nvidia",
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"live-restore": true
}
Taken from here.
Get the Container ID
Save the ID of the container you want to edit for easier access to the files.
export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`
Edit Container Configuration
Edit the configuration file, go to the "Env" section, and add your key.
sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json
My file looks like this:
...,"Env":["TEST=1",...
Stop and Start Docker
I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.
sudo systemctl stop docker
sudo systemctl start docker
Because of live-restore, your containers should stay up.
Verify That It Worked
docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'
Single quotes are important here: with double quotes, your host shell would expand $TEST before the command ever reaches the container.
You can also verify that the uptime of your container hasn't changed:
docker ps
You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined, or you specified them on the command line with the -v switch.
You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:
$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.
A well-done container is always stateless: its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while.
In that case it is not dangerous to re-create the container with the same parameters (in your case, slightly modified ones), assuming you create it from exactly the same image the old one was created from and you re-use the same volumes with the above-mentioned switch.
After the new container has started successfully and you have verified that everything runs correctly, you can delete the old wordpress container. The old volumes are then referenced by the new container and will not be deleted.
If you are running the container as a service using docker swarm, you can do:
docker service update --env-add <your environment variable> <service_name>
You can also remove a variable using --env-rm.
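For example, to add the VIRTUAL_HOST variable from the question (assuming a service named my-wordpress):
docker service update --env-add VIRTUAL_HOST=new.domain.example my-wordpress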
To make sure it's added as you wanted, just run:
docker exec -it <container id> env
1. Enter your running container:
sudo docker exec -it <container_name> /bin/bash
2. Dump the environment variables available to your session (excluding no_proxy) into /etc/environment, so they are available to any user session that needs to run commands:
printenv | grep -v "no_proxy" >> /etc/environment
3. Stop and Start the container
sudo docker stop <container_name>
sudo docker start <container_name>
Firstly, you can set an env var inside the container the same way as you do on a Linux box.
Secondly, you can do it by modifying the config file of your docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note that you need to restart the docker service for the change to take effect. This way you can also change other things, like port mappings, etc.
Here is how to update a docker container's config permanently:
Stop the container: docker stop <container name>
Edit the container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
Restart docker
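Inside config.v2.json, the variables live in the "Env" array, as the live-restore answer above showed; for example (values illustrative):
"Env":["VIRTUAL_HOST=domain.example","TEST=1"]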
I solved this problem with docker commit: after making some modifications in the base container, you only need to tag the new image and start that one.
docs.docker.com/engine/reference/commandline/commit
docker commit [container-id] [tag]
docker commit b0e71de98cb9 stack-overflow:0.0.1
Then you can pass environment variables or an env file:
docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1
The quick working hack would be:
Get into the running container:
docker exec -it <container_name> bash
Set the env variable. Install vim if it is not already in the container:
apt-get install vim
Then vi ~/.profile and, at the end of the file, add: export MAPPING_FILENAME=p_07302021
source ~/.profile
Check whether it has been set: echo $MAPPING_FILENAME (make sure you have not come out of the container).
Now you can run, from inside the container, whatever you were running outside of it.
Note: in case you're worried that you might lose your work if your login session gets disconnected, you can always use screen even before starting step 1. That way, if the session with your running container gets disconnected by chance, you can log back in.
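A minimal screen workflow for that (session name illustrative):
screen -S docker-work    # start a named session, then do steps 1-3 inside it
screen -r docker-work    # reattach later if you got disconnected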
First, understand that docker runs an image constructed with a Dockerfile, and that the only way to change the image itself is to build another one, stop everything, and run everything again.
So the easy way to "set an environment variable in a running docker container" is to read how the container starts (with docker inspect) [1].
In the example [1] we can see that docker starts with /usr/local/bin/docker-php-entrypoint; since docker-php-entrypoint is a POSIX script, we could edit it with vi and add one line, such as export myvar=myvalue.
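For illustration, the edited entrypoint might end up looking roughly like this (a minimal sketch; the real docker-php-entrypoint contains more logic, so edit your container's actual copy rather than replacing it):
#!/bin/sh
set -e
export myvar=myvalue   # the line added by hand
exec "$@"              # hand off to the container's CMD (e.g. php-fpm)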
If you can change the Dockerfile, you can instead add a call to a script [2], for example /usr/local/bin/mystart.sh, and set your environment var in that file.
Of course, after changing the scripts you need to restart the container [3].
[1]
$ docker inspect 011aa33ba92b
[{
. . .
"ContainerConfig": {
"Cmd": [
"php-fpm"
],
"WorkingDir": "/app",
"Entrypoint": [
"docker-php-entrypoint"
],
. . .
}]
[2]
/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
[3]
docker restart dev-php (container name)
The hack of editing docker's internal configs and then restarting the docker daemon was unsuitable for my case.
There is a way to recreate a container with new environment settings and use it for some time:
1. Create a new image from the running container:
docker commit my-service
a1b2c3d4e5f6032165497
Docker created a new image and answered with its ID. Note that the image doesn't include mounts and networks.
2. Stop and rename the original container:
docker stop my-service
docker rename my-service my-service-original
3. Create and start a new container with the modified environment:
docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla \
--env OLD_ENV=zzz \
a1b2c3d4e5f6032165497
Here, I did the following:
created a new temporary container from the image built in step 1; it will show its output on the terminal, exit on Ctrl+C, and be deleted after that
configured its mounts and networks
added my custom environment configuration
4. After you have worked with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:
docker rename my-service-original my-service
docker start my-service
How to set environment variable in a running docker container as a development environment
Basically, you can do it like on a normal Linux box, by adding export MY_VAR="value" to the ~/.bashrc file.
Instructions
Using VSCode, attach to your running container
Then, with VSCode, open the ~/.bashrc file
Export your variable by adding this line at the end of the file:
export MY_VAR="value"
Finally, reload .bashrc using the source command:
source ~/.bashrc
You could set an environment variable in a running Docker container with:
docker exec -it -e YOUR_KEY="your new value" <container> /bin/bash
Verify it using the command below:
printenv
This will update your key with the new value provided.
Note: this will revert to the old value if docker gets restarted.
Use export VAR=Value, then type printenv in the terminal to validate that it is set correctly.