EDIT: Specifically, I'm asking whether I should change /Users/bob to my own user name, or what happens if I leave it as bob when there is no user bob.
I've installed Docker on Windows 10 with WSL and was told to run this line in the command prompt:
docker run -p 8888:8888 -v /Users/bob/myJupyter:/home/jovyan/work bob22/data_mining1
I know -p is the port number for Jupyter and -v is volume, but what does everything after -v do?
Let's say my Windows user name is dave, but I ran the above command as-is with Users/bob/...
-p is short for --publish, which is the port as you state (port 8888 on the host maps to port 8888 inside the container).
-v is short for --volume, which maps a host path to the container path, so in your case /Users/bob/myJupyter on your host will map to /home/jovyan/work inside the container.
The last part bob22/data_mining1 is the name of the image to use for the container, which may be pulled from a repository (like docker-hub) if not found on your local machine already.
Since I don't see bob22/data_mining1 on docker-hub, I suspect they expect you to create an image on your local machine first (or perhaps pull from another repository).
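To address the edit directly: the part before the colon must be a real path on your machine. If you leave /Users/bob as-is and there is no such directory, Docker will typically just create an empty folder at that path (or refuse to start the container if it can't), so nothing you save in /home/jovyan/work would end up in your own user folder. A sketch of what you would run instead, assuming your Windows user name is dave and you keep the folder name myJupyter (both just examples):
docker run -p 8888:8888 -v C:\Users\dave\myJupyter:/home/jovyan/work bob22/data_mining1
In Git Bash or WSL you would write the host path as /c/Users/dave/myJupyter instead; the part after the colon (/home/jovyan/work) stays the same because it refers to a path inside the container.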
I'm trying to use the docker client from inside WSL, connecting to the docker engine on Windows. I've exposed the docker engine on Windows on port 2375, and after setting the DOCKER_HOST environment variable in WSL, I can verify this works by running docker ps.
The problem comes when I attempt to mount directories into docker containers from WSL. For example:
I create a directory and file inside my home folder on WSL (mkdir ~/dockertest && touch ~/dockertest/example.txt)
ls ~/dockertest shows my file has been created
I now start a docker container, mounting my docker test folder (docker run -it --rm -v ~/dockertest:/data alpine ls /data)
I would expect to see 'example.txt' in the docker container, but this does not seem to be happening.
Any ideas what I might be missing?
There are great instructions for Docker setup in WSL at https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work - solved most of my problems. The biggest trick for me was with bind-mounted directories; you have to use a path in WSL that the Docker daemon will be able to translate, for example /c/Users/rfay/myproject.
I don't think you need to change the mount point as the link suggests. Rather, if you use pwd and sed in combination, you should get the effect you need.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/folder -w "/var/folder" alpine
pwd returns the working folder in the format '/mnt/c/code/folder'. Piping this to sed and replacing '/mnt' with an empty string leaves you with a path such as '/c/code/folder', which is correct for Docker for Windows.
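For example, a quick way to see what the substitution produces (assuming your working folder is under /mnt/c):
$ pwd
/mnt/c/code/folder
$ pwd | sed 's/^\/mnt//'
/c/code/folder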
Anyone stumbling here over this issue follow this: Docker Desktop WSL2 Backend and make sure you are running the version 2 of the WSL in PowerShell:
> wsl -l -v
  NAME                   STATE           VERSION
* docker-desktop         Running         2
  Ubuntu                 Running         2
  docker-desktop-data    Running         2
If your Ubuntu VERSION doesn't say 2, you need to update it according to the guide above.
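For reference, the conversion itself is usually a single PowerShell command (a sketch, assuming your distribution is named Ubuntu as in the listing above; the conversion can take a few minutes):
> wsl --set-version Ubuntu 2
Re-run wsl -l -v afterwards to confirm the VERSION column now shows 2.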
After that, you will be able to mount your Linux directory to the Docker Container directly like:
docker run -v ~/my-project:/sources <my-image>
Specific to WSL 1
Ran into the same issue. One thing to remember is that the docker run command does not execute a container then and there in a command shell. It sends the run arguments to a Docker daemon that does not interpret WSL paths correctly. Hence, you need to pass a Windows-formatted path in quotes, with backslashes escaped.
Your Windows path is
\\wsl$\Ubuntu\home\username\dockertest
The docker command after escaping will probably look like this:
docker run -it --rm -v "\\\\wsl\$\\Ubuntu\\home\\username\\dockertest":/data alpine ls /data
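If you would rather not build that string by hand, recent WSL builds ship a wslpath utility that can print the Windows form of a WSL path for you (a sketch; whether paths outside /mnt are translated depends on your WSL version):
wslpath -w ~/dockertest
which should print something like \\wsl$\Ubuntu\home\username\dockertest, ready to be escaped as above.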
Try
docker run -v /:{path} exe
Hope this helps you.
I'm brand new to both TeamCity and Docker. I'm struggling to get a Docker container with TeamCity running and usable on my local machine. I've tried several things, to no avail:
I installed Docker for Mac per instructions here. I then tried to run the following command, documented here, for setting up teamcity in docker:
docker run -it --name teamcity-server-instance \
-v c:\docker\data:/data/teamcity_server/datadir \
-v c:\docker\logs:/opt/teamcity/logs \
-p 8111:8111 \
jetbrains/teamcity-server
That returned the following error: docker: Error response from daemon: Invalid bind mount spec "c:dockerdata:/data/teamcity_server/datadir": invalid mode: /data/teamcity_server/datadir.
Taking a different tack, I tried to follow the instructions here - I tried running the following command:
docker run -it --name teamcity -p 8111:8111 sjoerdmulder/teamcity
The terminal indicated that it was starting up a web server, but I can't browse to it at localhost, nor at localhost:8111 (error ERR_SOCKET_NOT_CONNECTED without the port, and ERR_CONNECTION_REFUSED with the port).
Since the website with the docker run command says to install Docker via Docker Toolbox, I then installed that at the location they pointed to (here). I then tried the
docker-machine ip default
command they suggested, but it didn't work, error "Host does not exist: "default"". That makes sense, since the website said the "default" vm would be created by running Docker Quickstart and I didn't do that, but they don't provide any link to Docker Quickstart, so I don't know what they are talking about.
To try to get the IP address the container was running on, I tried this command
docker inspect --format='{{.Name}} - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq)
That listed the names of the running containers, each followed by a hyphen, then nothing. I also tried
docker ps -a
That listed running containers also, but didn't give the IP. Also, the port is blank, and the status says "exited (130) 4 minutes ago", so it doesn't seem like the container stayed alive after starting.
I also tried again with port 80, hoping that would make the site show at localhost:
docker run -it --name teamcity2 -p 80:80 sjoerdmulder/teamcity
So at this point, I'm completely puzzled and blocked - I can't start the server at all following the instructions on hub.docker.com, and I can't figure out how to browse to the site that does start up with the other instructions.
I'll be very grateful for any assistance!
JetBrains now provides official docker images for TeamCity. I would recommend starting with those.
The example command in their TeamCity server image looks like this
docker run -it --name teamcity-server-instance \
-v <path to data directory>:/data/teamcity_server/datadir \
-v <path to logs directory>:/opt/teamcity/logs \
-p <port on host>:8111 \
jetbrains/teamcity-server
That looks a lot like your first attempt. However, c:\docker\data is a Windows file path. You said you're running this on a mac, so that's definitely not going to work.
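On a Mac the host side of each -v has to be a macOS path that Docker for Mac is allowed to share (paths under /Users are shared by default). A sketch, assuming you are happy to keep the data under ~/teamcity (the directory names are just an example):
docker run -it --name teamcity-server-instance \
-v ~/teamcity/data:/data/teamcity_server/datadir \
-v ~/teamcity/logs:/opt/teamcity/logs \
-p 8111:8111 \
jetbrains/teamcity-server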
Once TeamCity starts, it should be available on port 8111. That's what the -p 8111:8111 part of the command does. It maps port 8111 on your machine to port 8111 in the VM Docker for Mac creates to run your containers. ERR_CONNECTION_REFUSED could be caused by several things. The two most likely possibilities are:
TeamCity could take a little while to start up and maybe you didn't give it enough time. Solution is to wait.
-it would start the TeamCity container in interactive mode. If you exit out of the terminal window where you ran the command, the container will also probably terminate and will be inaccessible. Solution is to not close the window or run the container in detached mode.
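For the second point, running detached is the same command with -d instead of -it (a sketch, reusing the example paths from above), and docker logs lets you watch the startup output until the server reports it is ready:
docker run -d --name teamcity-server-instance -v ~/teamcity/data:/data/teamcity_server/datadir -v ~/teamcity/logs:/opt/teamcity/logs -p 8111:8111 jetbrains/teamcity-server
docker logs -f teamcity-server-instance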
There is a good overview of the differences between Docker for Mac and Docker Toolbox here: Docker for Mac vs. Docker Toolbox. You don't need both, and for most cases you'll want to use Docker for Mac for testing stuff out locally.
I've installed the tensorflow docker container on an ubuntu machine. The tensorflow docker setup instructions specify:
docker run -it b.gcr.io/tensorflow/tensorflow
This puts me into the docker container terminal, and I can run python and execute the Hello World example. I can also manually run ./run_jupyter.sh to start the jupyter notebook. However, I can't reach the notebook from the host.
How do I start the jupyter notebook such that I can use the notebook from the host machine? Ideally I would like to use docker to launch the container and start jupyter in a single command.
For a Linux host Robert Graves' answer will work, but for Mac OS X or Windows there is more to be done because docker runs in a virtual machine.
So to begin launch the docker shell (or any shell if you are using Linux) and run the following command to launch a new TensorFlow container:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh
Then for Mac OS X and Windows you need to do the following only once:
Open VirtualBox
Click on the docker vm (mine was automatically named "default")
Open the settings by clicking settings
In the network settings open the port forwarding dialog
Click the + symbol to add another port and connect a port from your mac to the VM by filling in the dialog with the host and guest ports. In this example I chose host port 8810 (because I run other notebooks using port 8888) forwarding to guest port 8888.
Then open a browser and connect to http://localhost:8810 (or whichever port you set in the host port section).
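If you prefer the command line to the VirtualBox GUI, the same forwarding rule can be added with VBoxManage while the VM is running (a sketch, assuming the docker VM is named "default" and you pick host port 8810 for guest port 8888 as above):
VBoxManage controlvm "default" natpf1 "jupyter,tcp,127.0.0.1,8810,,8888"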
Make your fancy pants machine learning app!
My simple yet efficient workflow:
TL;DR version:
Open Docker Quickstart Terminal. If it is already open, run $ cd
Run this once: $ docker run -it -p 8888:8888 -p 6006:6006 -v /$(pwd)/tensorflow:/notebooks --name tf b.gcr.io/tensorflow/tensorflow
To start every time: $ docker start -i tf
If you are not on windows, you should probably change /$(pwd) to $(pwd)
You will get an empty folder named tensorflow in your home directory for use as a persistent storage of project files such as Ipython Notebooks and datasets.
Explanation:
cd for making sure you are in your home directory.
params:
-it stands for interactive, so you can interact with the container in the terminal environment.
-v host_folder:container_folder enables sharing a folder between the host and the container. The host folder should be inside your home directory. /$(pwd) translates to //c/Users/YOUR_USER_DIR in Windows 10. This folder appears as the notebooks directory in the container, which is used by IPython/Jupyter Notebook.
--name tf assigns the name tf to the container.
-p 8888:8888 -p 6006:6006 maps container ports to host ports; the first pair is for the Jupyter notebook, the second for TensorBoard.
-i stands for interactive
Running TensorFlow on the cloud
After further reading of docker documentation I have a solution that works for me:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh
The -p 8888:8888 and -p 6006:6006 expose the container ports to the host on the same port number. If you just use -p 8888, a random port on the host will be assigned.
The ./run_jupyter.sh tells docker what to execute within the container.
With this command, I can use a browser on the host machine to connect to http://localhost:8888/ and access the jupyter notebook.
UPDATE:
After wrestling with docker on Windows I switched back to an Ubuntu machine with docker. My notebook was being erased between docker sessions, which makes sense after reading more docker documentation. Here is an updated command which also mounts a host directory within the container and starts jupyter pointing to that mounted directory. Now my notebook is saved on the host and will be available the next time I start up tensorflow.
docker run -p 8888:8888 -p 6006:6006 -v /home/rob/notebook:/notebook b.gcr.io/tensorflow/tensorflow sh -c "jupyter notebook /notebook"
Jupyter now has a ready to run Docker image for TensorFlow:
docker run -d -v $(pwd):/home/jovyan/work -p 8888:8888 jupyter/tensorflow-notebook
These steps worked for me; they should help if you are a total docker noob using a Windows machine.
Versions: Windows 8.1, docker 1.10.3, tensorflow r0.7
Run Docker Quickstart Terminal
After it is loaded, note the IP address. If you can't find it, run docker-machine ip and make a note of the result. Let's call it 'ip address'. It will look something like this: 192.168.99.104 (I made up this IP address).
Paste this command on the docker terminal:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow
If you are running this for the first time, it will download and install the image on this light weight vm. Then it should say 'The Jupyter notebook is running at ....' -> This is a good sign!
Open your browser at: <your ip address (see above)>:8888. Eg. 192.168.99.104:8888/
Hopefully you can see your ipython files.
To get this to run under Hyper-V, perform the following steps:
1) Create a docker virtual machine using https://blogs.msdn.microsoft.com/scicoria/2014/10/09/getting-docker-running-on-hyper-v-8-1-2012-r2/ ; this will get you a working docker VM. You can connect to it via the console or via SSH. I'd give it at least 8 GB of memory, since I'm sure this will use a lot of memory.
2) run "ifconfig" to determine the IP address of the Docker VM
3) On the docker shell prompt type:
docker run -p 8888:8888 -p 6006:6006 -it b.gcr.io/tensorflow/tensorflow
4) Connect to the Jupyter Workbench using http://[ifconfig address]:8888/
To tidy things up a little bit, I want to give some additional explanation, because I also suffered a lot setting up docker with tensorflow. For this I refer to this video, which is unfortunately not self-explanatory in all cases.
I assume you have already installed docker. The really interesting general part of the video starts at minute 0:44, where he finally starts docker. Until then he only downloads the tensorflow repo into the folder that he then mounts into the container. You can of course put anything else into the container and access it later in the docker VM.
First he runs the long docker command docker run -dit -v /c/Users/Jay/:/media/disk -p 8000 -p 8888 -p 6006 b.gcr.io/tensorflow/tensorflow. The "run" command starts containers. In this case it starts a container from the image "b.gcr.io/tensorflow/tensorflow", whose address is provided within the tensorflow docker installation tutorial. The image will be downloaded by docker if it is not already locally available.
Then he gives two additional kinds of arguments: he mounts a folder of the host system at the given path into the container. DO NOT forget to give the partition at the beginning (e.g. "/c/").
Additionally, he declares ports that will be available later from the host machine with the -p params.
From this command you get back the [CONTAINER_ID] of this container execution!
You can always see the currently running containers by running “docker ps” in the docker console. Your container created above should appear in this list with the same id.
Next step: with your container running, you now want to execute something in it, in our case jupyter notebook or tensorflow or whatever. To do this you make docker execute a bash on the newly created container: docker exec -ti [CONTAINER_ID] bash. This command now starts a bash shell in your container. You see this because the "$" now changes to root@[CONTAINER_ID]:. From here there is no way back. If you want to get back to the docker terminal, you have to start another fresh docker console like he is doing in minute 1:10. Now with a bash shell running in the container you can do whatever you want and execute Jupyter or tensorflow or whatever. The folder of the host system that you gave in the run command should now be available under "/media/disk".
Last step: accessing the VM output. It still did not want to work out for me and I could not access my notebook. You still have to find the correct IP and port to access the launched notebook, tensorboard session or whatever. First find out the main IP by using docker-machine ls. In this list you get the URL. (If it is your only machine it is called default.) You can leave out the port given here. Then from docker ps you get the list of forwarded ports. When there is 0.0.0.0:32776->6006/tcp written in the list, you can access it from the host machine by using the port given in the first place (awkward). So in my case the executed tensorboard in the container said "launched on port 6006". Then from my host machine I needed to enter http://192.168.99.100:32776/ to access it.
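A small shortcut for that last lookup: docker port prints the host binding for a given container port directly, so with the [CONTAINER_ID] from the run command above you could ask for the tensorboard port:
docker port [CONTAINER_ID] 6006
which in this example would print something like 0.0.0.0:32776.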
-> And that’s it! It ran for me like this!
The following gives you the terminal prompt:
FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd vdocker') DO %i
docker run -it tensorflow/tensorflow:r0.9-devel
or
FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd vdocker') DO %i
docker run -it b.gcr.io/tensorflow/tensorflow:latest-devel
You should have a docker-machine named 'vdocker', or change vdocker to 'default'.
For some reason I ran into one additional problem that I needed to overcome beyond the examples provided, using the --ip flag:
nvidia-docker run --rm \
-p 8888:8888 -p 6006:6006 \
-v `pwd`:/root \
-it tensorflow/tensorflow:latest-devel-gpu-py3 sh -c "jupyter notebook --ip 0.0.0.0 ."
And then I can access via http://localhost:8888 from my machine. In some ways this makes sense; within the container you bind to 0.0.0.0 which represents all available addresses. But whether I need to do this seems to vary (e.g I've started notebooks using jupyter/scipy-notebook without having to do this).
In any case, the above command works for me, might be of use to others.
As an alternative to the official TensorFlow image, you can also use the ML Workspace Docker image. The ML Workspace is an open-source web IDE that combines Jupyter, VS Code, TensorFlow, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port and integrated into the Jupyter UI. You can find the documentation here.
How can I share a folder between my Windows files and a docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, because that answer is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is an extension.
In Windows 7, I used Docker Toolbox. It used VirtualBox.
Open VirtualBox
Select the machine (in my case default).
Right-click and select the Settings option
Go to Shared Folders
Include a new machine folder.
For example, in my case I have included:
Name: c/dev
Path: c:\dev
Click and close
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, following these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the user owner of the folder. You could use "ls -all" and verify that the owner will be "docker"
You will see something like this:
docker@default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue of the project, and there are 2 working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly into the docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for the solution that will resolve all the Windows issues and make it work on the Windows OS in the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and thinking process). I've also written an article about using Docker and dealing with docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you have it already installed along with the docker tools)
Create virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install Ubuntu LTS which is older than 1 year
Install docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu) while still being able to work in Windows
Done
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that it works https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it, realize that it doesn't make much sense in your case.
Read entire page here https://github.com/docker/toolbox/issues/607 and try all solutions listed on page
Find this page (the one you are reading now) and try all the solutions from other comments
Find somewhere information that setting COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue.
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you have it already installed along with the docker tools)
Create a virtual machine with a second ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install Ubuntu LTS which is very recent (not older than a few months)
Notice that the automounting is not really working and the integration is broken (like clipboard sharing etc.)
Delete virtual machine
Go out and have a drink
Rent expensive car and go with high speed on highway
Destroy the car and die
Respawn in front of your PC
Install Ubuntu LTS which is older than 1 year
Try to run docker
Notice it’s not installed
Install docker by apt-get install docker
Install suggested docker.io
Try to run docker-compose
Notice it’s not installed
apt-get install docker-compose
Try to run your project with docker-compose
Notice that it’s old version
Check your power level (it should be over 9000)
Search how to install latest version of docker and find the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows now uses Hyper-V directly instead of VirtualBox, so I think it is a little different. First share the drive in settings, then use the C: drive letter format, but use forward slashes. For instance I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format (drive letter and a colon, then forward slashes for the path separator) worked both from the Windows command prompt and from Git Bash.
I found this question while googling for an answer, but I couldn't find anything that worked. Things would seem to work with no errors being thrown, but I just couldn't see the data on the host (or vice versa). Finally I checked out the settings closely and tried the format they show:
So first, you have to share the whole drive to the docker vm in settings here, I think that gives the 'docker-machine' vm running in hyper-v access to that drive. Then you have to use the format shown there, which seems to only exist in this one image and in no documentation or questions I could find on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in your Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error is due to a typo. You will get an empty folder named ubuntu in your user directory. You will see this folder with the name windows in your ubuntu container.
For those using VirtualBox who prefer a command-line approach
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the sharing Windows <-> docker-machine
Windows command prompt:
(Modify following to fit your scenario. I feed my Apache httpd container from directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence that port exposing.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
Sharing folders between VirtualBox, Docker Toolbox on Windows 7, and a Node.js image container
using...
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
let's start...
[QST] open Docker Quickstart Terminal
[QST] stop virtual-machine
$ docker-machine stop
[WE] open a windows explorer
[WE] go to the virtualBox installation dir
[WE] open a cmd and execute...
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name
"/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in the Oracle VirtualBox VM that the new shared folder has appeared.
[QST] start virtual-machine
$ docker-machine start
[QST] run container nodejs
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] open bash to the container
docker exec -i -t nodejs /bin/bash
[QST] execute dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest
I've been wondering: if the linked container shuts down and starts again, does the container that is linked to it restore the --link connection?
Fire up 2 containers:
docker run -d --name fluentd millisami/fluentd
docker run -d --name railsapp --link fluentd:fluentd millisami/rails
Now if the fluentd container is stopped and restarted, will the railsapp container restore the link and linked ENV vars automatically?
UPDATE:
As of Docker 1.3 (or probably even from an earlier version, not sure), the /etc/hosts file will be updated with the new IP of a linked container if it restarts. This means that if you access it via its name from the /etc/hosts entry, as opposed to the environment variable, your link will still work even after a restart.
Example:
When you start two containers, app_image and mysql, and link them like this:
$ docker run --name mysql mysql:5.6.20
$ docker run -d --link mysql:mysql app_image
you'll get an entry in your /etc/hosts within app_image:
# /etc/hosts example
172.17.0.12    mysql
and that ip will be refreshed in case mysql crashes and is restarted.
So don't use environment variables when referring to your linked container:
$ ping $MYSQL_PORT_3306_TCP_ADDR # --> don't use! won't work after the linked container restarts
$ ping mysql # --> instead, access it by the name as in /etc/hosts
Old answer:
Nope, it won't. In the face of a crashing-containers scenario, links are as good as dead. I think it is pretty much an open problem, i.e., there are many candidate solutions, but none has yet been crowned the standard approach.
You might want to take a look at http://jasonwilder.com/blog/2014/07/15/docker-service-discovery/