How to set an environment variable in a running Docker container

If I have a docker container that I started a while back, what is the best way to set an environment variable in that running container? I set an environment variable initially when I ran the run command.
$ docker run --name my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
but now that it has been running for a while I want to add another VIRTUAL_HOST to the environment variable. I do not want to delete the container and then just re-run it with the environment variable that I want, because then I would have to migrate the old volumes to the new container; it has theme files and uploads in it that I don't want to lose.
I would just like to change the value of VIRTUAL_HOST environment variable.

There are generally two options, because Docker doesn't support this feature yet:
Create your own script, which will act as a runner for your command. For example:
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
Or run your command the following way:
docker exec -i CONTAINER_ID /bin/bash -c "export VAR1=VAL1 && export VAR2=VAL2 && your_cmd"

Docker doesn't offer this feature.
There is an issue: "How to set an environment variable on an existing container? #8838"
Also from "Allow docker start to take environment variables #7561":
Right now Docker can't change the configuration of the container once it's created, and generally this is OK because it's trivial to create a new container.

For a somewhat narrow use case, docker issue 8838 mentions this sort-of-hack:
You just stop docker daemon and change container config in /var/lib/docker/containers/[container-id]/config.json (sic)
This solution updates the environment variables without the need to delete and re-run the container, having to migrate volumes and remembering parameters to run.
However, this requires a restart of the Docker daemon. And, until issue 2658 is addressed, this includes a restart of all containers.

To set up many environment variables in one step, and to avoid exposing them in your 'sh' history the way the '-e' option does (think credentials/API tokens!), you can use the --env-file key_value_file.txt option:
docker run --env-file key_value_file.txt $INSTANCE_ID
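For reference, the env file is plain text with one VAR=value pair per line (lines starting with # are treated as comments); something like this, with placeholder values:
# key_value_file.txt
VIRTUAL_HOST=domain.example
DB_PASSWORD=changeme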

Here's how you can modify a running container to update its environment variables. This assumes you're running on Linux. I tested it with Docker 19.03.8
Live Restore
First, ensure that your Docker daemon is set to leave containers running when it's shut down. Edit your /etc/docker/daemon.json, and add "live-restore": true as a top-level key.
sudo vim /etc/docker/daemon.json
My file looks like this:
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    },
    "live-restore": true
}
Get the Container ID
Save the ID of the container you want to edit for easier access to the files.
export CONTAINER_ID=`docker inspect --format="{{.Id}}" <YOUR CONTAINER NAME>`
Edit Container Configuration
Edit the configuration file, go to the "Env" section, and add your key.
sudo vim /var/lib/docker/containers/$CONTAINER_ID/config.v2.json
My file looks like this:
...,"Env":["TEST=1",...
Stop and Start Docker
I found that restarting Docker didn't work; I had to stop and then start Docker with two separate commands.
sudo systemctl stop docker
sudo systemctl start docker
Because of live-restore, your containers should stay up.
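If you want to double-check that live-restore is actually in effect, docker info should report it (this ought to print true):
docker info --format '{{.LiveRestoreEnabled}}'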
Verify That It Worked
docker exec <YOUR CONTAINER NAME> bash -c 'echo $TEST'
Single quotes are important here.
You can also verify that the uptime of your container hasn't changed:
docker ps

You wrote that you do not want to migrate the old volumes. So I assume either the Dockerfile that you used to build the spencercooley/wordpress image has VOLUMEs defined or you specified them on command line with the -v switch.
You could simply start a new container which imports the volumes from the old one with the --volumes-from switch like:
$ docker run --name my-new-wordpress --volumes-from my-wordpress -e VIRTUAL_HOST=domain.example --link my-mysql:mysql -d spencercooley/wordpress
So you will have a fresh container, but you do not lose the old data. You do not even need to touch or migrate it.
A well-done container is always stateless. That means its process is supposed to add or modify only files on defined volumes. That can be verified with a simple docker diff <containerId> after the container has run for a while.
In that case it is not dangerous when you re-create the container with the same parameters (in your case slightly modified ones), assuming you create it from exactly the same image from which the old one was created and re-use the same volumes with the above-mentioned switch.
After the new container has started successfully and you verified that everything runs correctly you can delete the old wordpress container. The old volumes are then referred from the new container and will not be deleted.
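If you are unsure which volumes the old container actually uses before recreating it, docker inspect can list them (a quick check, not strictly required):
docker inspect --format '{{json .Mounts}}' my-wordpress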

If you are running the container as a service using docker swarm, you can do:
docker service update --env-add <your environment variable> <service_name>
You can also remove one using --env-rm.
To make sure it's added as you wanted, just run:
docker exec -it <container id> env
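For example, applied to the question's VIRTUAL_HOST variable (the service name my-wordpress here is hypothetical), the add and remove commands would look roughly like:
docker service update --env-add VIRTUAL_HOST=domain.example my-wordpress
docker service update --env-rm VIRTUAL_HOST my-wordpress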

1. Enter your running container:
sudo docker exec -it <container_name> /bin/bash
2. Dump the environment variables available in your current session (except no_proxy) into /etc/environment, so they become available to any user session that needs to run the commands:
printenv | grep -v "no_proxy" >> /etc/environment
3. Stop and Start the container
sudo docker stop <container_name>
sudo docker start <container_name>
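To confirm that the variables actually landed in /etc/environment after the restart, a quick sanity check from the host:
sudo docker exec <container_name> cat /etc/environment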

Firstly, you can set env vars inside the container the same way as you do on a Linux box.
Secondly, you can do it by modifying the config file of your Docker container (/var/lib/docker/containers/xxxx/config.v2.json). Note that you need to restart the Docker service for it to take effect. This way you can also change some other things like port mappings, etc.

Here is how to update a Docker container's config permanently:
Stop the container: docker stop <container name>
Edit the container config: docker run -it -v /var/lib/docker:/var/lib/docker alpine vi $(docker inspect --format='/var/lib/docker/containers/{{.Id}}/config.v2.json' <container name>)
Restart Docker.

I solved this problem with docker commit: after some modifications in the base container, we only need to tag the new image and start that one.
docs.docker.com/engine/reference/commandline/commit
docker commit [container-id] [tag]
docker commit b0e71de98cb9 stack-overflow:0.0.1
Then you can pass environment vars or an env file:
docker run --env AWS_ACCESS_KEY_ID --env AWS_SECRET_ACCESS_KEY --env AWS_SESSION_TOKEN --env-file env.local -p 8093:8093 stack-overflow:0.0.1

The quick working hack would be:
Get into the running container:
docker exec -it <container_name> bash
Set the env variable. Install vim if it is not already installed in the container:
apt-get install vim
Open vi ~/.profile and at the end of the file add export MAPPING_FILENAME=p_07302021, then run
source ~/.profile
Check whether it has been set: echo $MAPPING_FILENAME (make sure of this before you come out of the container).
Now, you can run whatever you were running outside of the container from inside the container.
Note: in case you're worried that you might lose your work if the session you logged in with gets logged off, you can always use screen even before starting step 1. That way, if your session inside the running container happens to get logged off, you can log back in.

First, understand that Docker runs an image constructed with a Dockerfile, and that the only way to change the image is to build another one, stop everything, and run everything again.
So the easy way to "set an environment variable in a running docker container" is to read the Dockerfile [1] (with docker inspect) and understand how Docker starts the container [1].
In the example [1] we can see that Docker starts with /usr/local/bin/docker-php-entrypoint, and we could edit it with vi and add one line with export myvar=myvalue, since /usr/local/bin/docker-php-entrypoint is a POSIX shell script.
If you can change the Dockerfile, you can add a call to a script [2], for example /usr/local/bin/mystart.sh, and set your environment variable in that file.
Of course, after changing the scripts you need to restart the container [3].
[1]
$ docker inspect 011aa33ba92b
[{
    . . .
    "ContainerConfig": {
        "Cmd": [
            "php-fpm"
        ],
        "WorkingDir": "/app",
        "Entrypoint": [
            "docker-php-entrypoint"
        ],
    . . .
}]
[2]
/usr/local/bin/mystart.sh
#!/bin/bash
export VAR1=VAL1
export VAR2=VAL2
your_cmd
[3]
docker restart dev-php (container name)
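If you prefer not to edit the entrypoint interactively, and assuming GNU sed is available inside the container, you could insert the export non-interactively and then restart (a sketch; myvar=myvalue is just a placeholder):
docker exec dev-php sed -i '2i export myvar=myvalue' /usr/local/bin/docker-php-entrypoint
docker restart dev-php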

The hack of editing Docker's internal configs and then restarting the Docker daemon was unsuitable for my case.
There is a way to recreate the container with new environment settings and use it for some time.
1. Create a new image from the running container:
docker commit my-service
a1b2c3d4e5f6032165497
Docker created a new image and answered with its ID. Note that the image doesn't include mounts and networks.
2. Stop and rename the original container:
docker stop my-service
docker rename my-service my-service-original
3. Create and start a new container with the modified environment:
docker run \
-it --rm \
--name my-service \
--network=required-network \
--mount type=bind,source=/host/path,target=/inside/path,readonly \
--env MY_NEW_ENV_VAR=blablabla \
--env OLD_ENV=zzz \
a1b2c3d4e5f6032165497
Here, I did the following:
created a new temporary container from the image built in step 1, which will show its output on the terminal, exit on Ctrl+C, and be deleted after that
configured its mounts and networks
added my custom environment configuration
4. After you have finished working with the temporary container, press Ctrl+C to stop and remove it, and then bring the old container back:
docker rename my-service-original my-service
docker start my-service

How to set an environment variable in a running Docker container as a development environment
Basically, you can do it like on a normal Linux system, by adding export MY_VAR="value" to the ~/.bashrc file.
Instructions
Using VS Code, attach to your running container
Then, with VS Code, open the ~/.bashrc file
Export your variable by adding this line at the end of the file:
export MY_VAR="value"
Finally, reload .bashrc using the source command:
source ~/.bashrc
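The same idea works without VS Code, assuming the container has bash: append the export from the host with docker exec, and any new shell in the container will pick it up.
docker exec <container_name> bash -c 'echo "export MY_VAR=value" >> ~/.bashrc'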

You can set an environment variable for a running Docker container with
docker exec -it -e "your environment Key"="your new value" <container> /bin/bash
Verify it using the command below:
printenv
This will update your key with the new value provided.
Note: this will get reverted back to the old one if Docker gets restarted.

Use export VAR=Value
Then type printenv in terminal to validate it is set correctly.

Related

How to start an existing image as container so it runs in the background [duplicate]

How to get docker container id when starting container

When writing a bash script that starts a docker container, it is useful to refer to the started docker container. How do you get the specific container id of a docker container when you start it?
P.S. I know that I can use --name to name the container, which I can use to filter the list of containers using docker ps -aqf "name=containername", but this will fail if I ever start the script twice. And then there's the possibility of name conflicts. Besides, what's the point of container IDs if you can't use them?
When you start a detached container, it returns the container ID. e.g.:
$ docker run -d ubuntu:18.04
71329cf6a02d89cf5f211072dd37716fe212787315ce4503eaee722da6ddf18f
In bash, you can define a new variable from the output like this:
CID=$(docker run -d ubuntu:18.04)
Then, later you can use this variable to refer to your container like this:
docker stop $CID
docker rm $CID
In the documentation for docker run under "capture container id", they advise using the --cidfile flag for this purpose.
--cidfile takes a file name as an argument and will write the long ID of the container to that location. E.g.,
docker run --cidfile /tmp/hello-world.cid hello-world && cat /tmp/hello-world.cid
This is useful when you don't want to run the image in a detached state, but still want access to the ID.
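You can then read the ID back into a shell variable, just like with the detached-mode capture above:
CID=$(cat /tmp/hello-world.cid)
docker rm $CID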

Inject configuration into volume before Docker container starts

I am looking for a way to create a Docker volume and put some data on it just before a specific container is started - which needs the configuration on startup.
I do not want to modify the container. I would like to use a vanilla container straight from the Docker Hub.
Any ideas?
Update
I did not mention that all this has to be done in a compose file. If I did it manually, I could wait for the configuration-injecting container to finish.
Absolutely! Just create your volume beforehand, attach it to any container (A base OS like Ubuntu would work great), add your data, and you're good to go!
Create the volume:
docker volume create test_volume
Attach it to an instance where you can add data:
docker run --rm -it --name ubuntu_1 -v test_volume:/app ubuntu /bin/sh
Add some data:
Do this within the container, which you are in from the previous command.
touch /app/my_file
Exit the container:
exit
Attach the volume to your new container:
Of course, replace ubuntu with your real image name.
docker run --rm -it --name ubuntu_2 -v test_volume:/app ubuntu /bin/sh
Verify the data is there:
~> ls app/
my_file
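For the Compose part of the update, one option (a rough sketch; the image and setting here are made up, and it assumes a Compose version that supports the service_completed_successfully condition) is a one-shot seed service that writes into the shared volume, with the main service waiting for it via depends_on:
services:
  seed-config:
    image: alpine
    volumes:
      - test_volume:/app
    command: sh -c 'echo "my-setting=1" > /app/my_config'
  app:
    image: ubuntu
    command: sleep infinity
    volumes:
      - test_volume:/app
    depends_on:
      seed-config:
        condition: service_completed_successfully
volumes:
  test_volume: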

Docker - Name is already in use by container

Running the docker registry with below command always throws an error:
dev:tmp me$ docker run \
-d --name registry-v1 \
-e SETTINGS_FLAVOR=local \
-e STORAGE_PATH=/registry \
-e SEARCH_BACKEND=sqlalchemy \
-e LOGLEVEL=DEBUG \
-p 5000:5000 \
registry:0.9.1
Error response from daemon: Conflict. The name "registry-v1" is already in use by container f9e5798a82e0. You have to delete (or rename) that container to be able to reuse that name.
How can I prevent this error ?
I got confused by this also. There are two commands relevant here:
docker run # Run a command in a **new** container
docker start # Start one or more stopped containers
That means you have already started a container in the past with the parameter
docker run --name registry-v1 ...
You need to delete that first before you can re-create a container with the same name with
docker rm registry-v1
When that container is still running you need to stop it first before you can delete it with
docker stop registry-v1
Or simply choose a different name for the new container.
To get a list of existing containers and their names simply invoke
docker ps -a
Here is what I did, and it works fine.
Step 1 (it lists Docker containers with their names):
docker ps -a
Step 2:
docker rm name_of_the_docker_container
When you are building a new image you often want to run a new container each time and with the same name. I found the easiest way was to start the container with the --rm option:
--rm Automatically remove the container when it exits
e.g.
docker run --name my-micro-service --rm <image>
Sadly it's used almost randomly in the examples from the docs
Edit: Read Lepe's comment below.
Just to explain what others are saying (it took me some time to understand): simply put, when you see this error, it means you already have a container, and what you have to do is run it. While intuitively docker run is supposed to run it, it doesn't. The docker run command is only used to START a container for the very first time. To run an existing container, what you need is docker start $container-name. So much for asking developers to create meaningful/intuitive commands.
You have 2 options to fix this...
Remove previous container using that name, with the command docker rm $(docker ps -aq --filter name=myContainerName)
OR
Rename the current container to a different name, i.e. change this portion --name registry-v1 to something like --name myAnotherContainerName
You are getting this error because that container name (i.e. registry-v1) was used by another container in the past... even though that container may have exited (i.e. is currently not in use).
Cause
A container with the same name still exists.
Solution
To reuse the same container name, delete the existing container by:
docker rm <container name>
Explanation
Containers can exist in the following states, during which the container name can't be used for another container:
created
restarting
running
paused
exited
dead
You can see containers in the running state by using:
docker ps
To show containers in all states and find out if a container name is taken, use:
docker ps -a
Here is how I solved this on Ubuntu 18:
$ sudo docker ps -a
Copy the container ID.
For each container do:
$ sudo docker stop container_ID
$ sudo docker rm container_ID
Removing all the exited containers:
docker rm $(docker ps -a -f status=exited -q)
The problem: you are trying to create a new container while a container with the same name is already running in the background, and this causes a conflict.
The error would be something like:
Cannot create container for service X: Conflict. The name X is already in use by container abc123xyz. You have to remove or delete (or rename) that container to be able to reuse that name.
Solution: rename the service in docker-compose.yml, or delete the running container and rebuild it again (this solution relates to Unix/Linux/macOS systems):
Get all containers: sudo docker ps -a
Get the specific container ID
Stop and remove the duplicated container / force remove it
sudo docker stop <container_id>
sudo docker rm <container_id>
or
sudo docker rm --force <container_id>
You can remove it with the command sudo docker rm YOUR_CONTAINER_ID, then run a new container with sudo docker run ...;
or restart an existing container with sudo docker start YOUR_CONTAINER_ID
I was running into the issue that when I ran docker rm (which usually works) I would get:
Error: No such image
The easiest solution to this is removing all stopped containers by running:
docker container prune
I solved the issue by doing the following steps, and I hope it helps.
Type docker ps -a to list all the containers on your system.
Check the NAMES column for the container you have initialized.
Then type docker rm --force name_of_container
Install the Docker container as you wish.
I had a problem using NiFi and I removed and reinstalled it using Docker. Good luck.
TL;DR:
List all containers:
docker ps -a
Remove the concerned container by id:
docker container rm <container_id>
The OP's problem is the error. Deleting state isn't the only solution - or even a good one. The problem is docker run isn't re-entrant, and docker start is impotent w/o run. So we have to combine them.
For example, to run Postgres without destroying previous state, try this:
docker start postgres || docker run -d -p 5432:5432 --name postgres -e POSTGRES_PASSWORD=password postgres:13-alpine
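Applied to the registry command from the question, that pattern would look something like this:
docker start registry-v1 || docker run \
 -d --name registry-v1 \
 -e SETTINGS_FLAVOR=local \
 -e STORAGE_PATH=/registry \
 -e SEARCH_BACKEND=sqlalchemy \
 -e LOGLEVEL=DEBUG \
 -p 5000:5000 \
 registry:0.9.1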
I'm just learning docker and this got me as well. I stopped the container with that name already and therefore I thought I could run a new container with that name.
Not the case. Just because the container is stopped doesn't mean it can't be started again, and it keeps all the same parameters that it was created with (including the name).
When I ran docker ps -a, that's when I saw all the dummy test containers I created while I was playing around.
No problem; since I don't want those any more, I just did docker rm containername, at which point my new container was allowed to run with the old name.
Ah, and now that I've finished writing this answer, I see Slawosz's comment on Walt Howard's answer above suggesting the use of docker ps -a.
Ok, so I didn't understand either, then I left my pc, went to do other things, and upon my return, it clicked :D
You download a docker image file. docker pull *image-name* will just pull the image from docker hub without running it.
Now, you use docker run, and give it a name (e.g. newWebServer).
docker run -d -p 8080:8080 -v volume --name newWebServer image-name/version
You perhaps only need docker run --name *name* *image*, but the other stuff will become useful quickly.
-d (detached) - means the container will exit when the root process used to run the container exits.
-p (port) - specify the container port and the host port. Kind of the internal and external port. The internal one being the port the container uses, and the external one is the port you use outside of it and probably the one you need to put in your web browser if that's how you access your app.
--name (what you want to call this instance of the container) - you could have several instances of the same container all with different names, which is useful when you're trying to test something.
image-name/version is the actual image you want to create the container from. You can see a list of all the images on your system with docker images -a. You may have more than one version, so make sure you choose the correct one/tag.
-v (volume) - perhaps not needed initially, but soon you'll want to persist data after your container exits.
OK. So now, docker run just created a container from your image. If it isn't running, you can now start it with its name:
docker start newWebServer
You can check all your containers (they may or may not be running) with
docker ps -a
You can stop and start them (or pause them) with their name or the container ID (or just the first few characters of it) from the CONTAINER ID column, e.g.:
docker stop newWebServer
docker start c3028a89462c
And list all your images, with
docker images -a
In a nutshell, download an image; docker run creates a container from it; start it with docker start (name or container id); stop it with docker stop (name or container id).
I had this issue because I had two or more containers with the same container_name in the docker-compose.yml file.
Simple solution: go to your Docker folder on the system and delete the large .raw file or Docker archive.
For me, the issue was that I used an image name more than once in the dockerfile.
This happened to me on the docker tutorial! The port I tried to use was taken, but docker still created.. an image? A process to run docker? I'll find out soon. Anyways, to choose a different port, I had to remove the older image, and then docker run again.
Sometimes a tutorial can be too terse. What you want is concise, not terse, or even succinct.

Why are env variables not created automatically?

I am referring to this site to link containers.
When two containers are linked, Docker will set some environment variables in the target container to enable programmatic discovery of information related to the source container.
This is the line specified in the documentation. When I look at /etc/hosts, I can see entries for both containers. But when I run the env command, I don't see any of the port mappings specified on that Docker site.
Works fine for me:
$ docker run -d --name redis1 redis
0b869d9f5a43e24976beec6c292839ea2c67983012e50893f0b557cd8bc0c3b4
$ docker run --link redis1:redis1 debian env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=c23a30b8618f
REDIS1_PORT=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP=tcp://172.17.0.3:6379
REDIS1_PORT_6379_TCP_ADDR=172.17.0.3
REDIS1_PORT_6379_TCP_PORT=6379
REDIS1_PORT_6379_TCP_PROTO=tcp
REDIS1_NAME=/berserk_nobel/redis1
REDIS1_ENV_REDIS_VERSION=2.8.19
REDIS1_ENV_REDIS_DOWNLOAD_URL=http://download.redis.io/releases/redis-2.8.19.tar.gz
REDIS1_ENV_REDIS_DOWNLOAD_SHA1=3e362f4770ac2fdbdce58a5aa951c1967e0facc8
HOME=/root
If you're still having trouble, you need to provide a way we can recreate your problem.
