How to connect to local MySQL server through Docker?

This is more a general question about how to connect to local services from inside Docker. There's a similar question in a GitHub issue here that doesn't seem to have any resolution. What I'm really looking for is to be able to develop locally against my local development MySQL server, then, once I'm ready to deploy, to test locally against a newly created deploy-candidate Docker image.
Ideally, both get their settings from the same place as well, so I could put something like mysql_server: host_ip. This seems like a typical use case. Is anything like this currently possible?
I'm using Boot2Docker specifically, with the MySQL server running on my Mac host (OS X Yosemite), NOT in a container. It would be cool to have a more general answer for future readers, though.

The Docker CLI docs give this solution (which assumes you are running on a Linux host):
Sometimes you need to connect to the Docker host from within your container. To enable this, pass the Docker host’s IP address to the container using the --add-host flag. To find the host’s address, use the ip addr show command.
The flags you pass to ip addr show depend on whether you are using IPv4 or IPv6 networking in your containers. Use the following flags for IPv4 address retrieval for a network device named eth0:
$ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1`
$ docker run --add-host=docker:${HOSTIP} --rm -it debian
Then the name docker inside the container will map to the host's IP address. For your case, you could use docker run --add-host=mysql_server:${HOSTIP} ...
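Applied to the MySQL case, a minimal sketch (my-app and the credentials are placeholders; it assumes mysqld on the host listens on the eth0 address, not just 127.0.0.1):
$ HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print \$2}' | cut -d / -f 1`
$ docker run --add-host=mysql_server:${HOSTIP} --rm -it my-app
# inside the container, the host's MySQL is then reachable as:
# mysql -h mysql_server -P 3306 -u myuser -p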
If using Boot2Docker, it sets up a mapping to the host at a predefined address, so on that platform the equivalent to the above is just the one command:
$ docker run --add-host=docker:192.168.59.3 --rm -it debian

To connect to the local MySQL you can use --network="host" in your docker run command.
Then 127.0.0.1 or localhost in your Docker container will point to your Docker host.
docker run --network="host" <your-docker-image>
(Note that -p port publishing is ignored in host-network mode, since the container shares the host's network stack directly.)
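For example, a minimal sketch (my-app and the credentials are placeholders; assumes mysqld listens on the host's port 3306):
$ docker run --network="host" my-app
# inside the container, the host's MySQL is reachable directly:
# mysql -h 127.0.0.1 -P 3306 -u myuser -p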

To help with several of the additional questions, as well as the main post, I'd like to link to a repo I maintain for my local development. I have stopped trying to run any services for development directly on OS X and use Docker containers instead, since they run exactly the same as in production and my environments can be matched and streamlined.
This repo consists of a web server, a database server, and a data container to load the MySQL databases.
I have supported, and will continue to support, this repo, and I recently upgraded the documentation to make it turnkey for other developers.
Docker Repo on GitHub

On a Mac with Boot2Docker, you can use Homebrew's default mysql/mariadb settings by adding the Mac OS host as described above.
This worked for me (with what I believe are the default settings).
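If the VM can reach the Mac but MySQL refuses the connection, the usual culprits are mysqld binding only to 127.0.0.1 and missing user grants. A hedged sketch (dev and secret are placeholders; 192.168.59.% is Boot2Docker's default host-only subnet, verify yours with boot2docker ip):
# in my.cnf, make mysqld listen on all interfaces: bind-address = 0.0.0.0
$ mysql -u root -e "GRANT ALL PRIVILEGES ON *.* TO 'dev'@'192.168.59.%' IDENTIFIED BY 'secret';"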

Related

SSH out from Docker Container to server on LAN

I have an app that generates image files running in a Docker container.
Once the image is generated I want to copy it to another server on my LAN.
I'm trying to use SCP to a static IP on my LAN, but the container can't see it. How can I expose the LAN IP to my container?
Posting my solution here in case it helps someone else.
My problem was how to copy a generated file from a Dockerized app to a LAN machine.
The solution I found was to use the Samba Docker image here:
https://hub.docker.com/r/dperson/samba/
My app shares a volume with the Samba container, and my LAN machine connects to the shared Samba directory. It's much more robust than using ssh, and seems much less complicated than using a bridge network, IMO.
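A sketch of that setup (image and path names are placeholders; the -s share syntax is taken from the dperson/samba README, so double-check it against the image's current docs):
$ docker run -d --name app -v /shared my-generator-image
$ docker run -d --name samba -p 139:139 -p 445:445 --volumes-from app dperson/samba -s "shared;/shared"
# the LAN machine then mounts \\<docker-host-ip>\shared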
You will need to bind the ports using the -p flag with docker. If the port is already in use, try a different port. So:
docker run -p 22:24
And if that complains again, what you need to do is see what's running on port 22. You can do that with:
sudo lsof -i -P -n | grep LISTEN
or, for specific ports:
sudo lsof -i:22

Access host machine in Docker Container on Ubuntu 18.04

I am looking for a way to connect to my host machine in a Docker Container (in my case, access to a specific port for using a proxy in the application container).
I tried network_mode: "host" (or docker run --network="host"); it worked for accessing the local machine, but it caused some other problems related to changing the network driver to host:
SQLSTATE[HY000] [2002] php_network_getaddresses: getaddrinfo failed: Name or service not known.
Also, I cannot use ifconfig to define a network alias, since I'm using Ubuntu 18.04.
What should I do?
UPDATE: Since the docker-host (https://github.com/qoomon/docker-host) image was published a few months ago, you can use it without any manual configuration. Easy peasy!
After struggling for a day, I finally found the solution. It can be done with the --add-host flag in the docker run command, or with extra_hosts in a docker-compose.yml file, by creating an alias for the local (lo, 127.0.0.1) network interface.
So here are the instructions:
First, create an alias for the lo interface. As you may know, the ifconfig command is not installed by default on Ubuntu 18.04, so this is how we do it:
sudo ip addr add 192.168.0.20/24 dev lo label lo:1
Then, put this in your docker-compose.yml file:
extra_hosts:
- "otherhost:192.168.0.20"
If you are not using Docker Compose, you can add a host to a container with the --add-host flag. Something like:
docker run --add-host="otherhost:192.168.0.20" your-image-name
Finally, when you're done with the above steps, restart your containers with docker-compose down && docker-compose up -d, or docker-compose restart.
Now you can log in to your container (docker-compose exec service-name bash) and test it.
NOTE: Make sure your working port is open using telnet [interface-ip] [port] command.
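Putting the steps together, a minimal end-to-end sketch (service and image names are placeholders; note that the ip addr alias does not survive a reboot, so you may want to make it persistent):
$ sudo ip addr add 192.168.0.20/24 dev lo label lo:1
$ cat > docker-compose.yml <<'EOF'
version: "3"
services:
  app:
    image: your-image-name
    extra_hosts:
      - "otherhost:192.168.0.20"
EOF
$ docker-compose up -d
$ docker-compose exec app getent hosts otherhost   # should print 192.168.0.20, if getent exists in the image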
You can use extra_hosts in your docker-compose file, which is what you discovered yourself. I just wanted to add another way for when you are working in your local environment.
In docker-for-mac and docker-for-windows, within a container the DNS name host.docker.internal resolves to an IP address allowing network access to the host.
Here's the related description, extracted from the documentation:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker for Windows.
There's an open issue on GitHub concerning the implementation of this feature for docker-for-linux.
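A quick way to check the name from inside a container on Docker for Mac/Windows (the port is an assumption for whatever service runs on your host):
$ docker run --rm alpine nslookup host.docker.internal
$ docker run --rm alpine wget -qO- http://host.docker.internal:8000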

Unable to run docker project in localhost

I have installed an image from Docker Hub and ran it using the following command:
docker run -d --name searx -p $PORT:8888 wonderfall/searx
The container was also successfully created, but while accessing it in my browser I get the following error:
dial tcp [::1]:8888: connectex: No connection could be made because the target machine actively refused it.
Does anyone know why this error occurs? I use a Windows 10 system.
I just installed Docker Toolbox.
That means you cannot use localhost directly without declaring a port-forwarding rule in VirtualBox.
First, test your service using the IP of your VM (see the docker-machine ip default output):
http://<ip>:8888
Then, declare a port-forwarding rule:
either directly in your VirtualBox graphical interface: see "How do I configure docker compose to expose ports correctly?"
or with VBoxManage controlvm commands: see "Not able to access tomcat application on Docker VM with host(windows) IP while using docker toolbox"
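For reference, a VBoxManage sketch for the searx case (assumes the Docker Toolbox VM is named default, which it is out of the box):
$ VBoxManage controlvm "default" natpf1 "searx,tcp,127.0.0.1,8888,,8888"
# now http://localhost:8888 on Windows forwards to port 8888 in the VM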

Why can't I curl one docker container from another via the host

I really don't understand what's going on here. I simply want to perform an HTTP request from inside one Docker container to another Docker container, via the host, using the host's public IP, on a published port.
Here is my setup. I have my dev machine. And I have a docker host machine with two containers. CONT_A listens and publishes a web service on port 3000.
DEV-MACHINE
HOST (Public IP = 111.222.333.444)
CONT_A (Publish 3000)
CONT_B
On my dev machine (a completely different machine)
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I SSH into the HOST
I can curl without any problems
curl http://111.222.333.444:3000 --> OK
When I execute inside CONT_B
Not possible, just timeout. Ping is fine though...
docker exec -it CONT_B bash
$ curl http://111.222.333.444:3000 --> TIMEOUT
$ ping 111.222.333.444 --> OK
Why?
Ubuntu 16.04, Docker 1.12.3 (default network setup)
I know this isn't strictly an answer to the question, but there's a more Docker-ish way of solving your problem. I would forget about publishing the port for inter-container communication altogether. Instead, create an overlay network using Docker Swarm. You can find the full guide here, but in essence you do the following:
//create network
docker network create --driver overlay --subnet=10.0.9.0/24 my-net
//Start Container A
docker run -d --name=A --network=my-net producer:latest
//Start Container B
docker run -d --name=B --network=my-net consumer:latest
//Magic has occured
docker exec -it B /bin/bash
> curl A:3000 //MIND BLOWN!
Then inside container B you can just curl hostname A and it will resolve for you (even when you start doing scaling, etc.).
If you're not keen on using Docker Swarm, you can still use Docker legacy links as well:
docker run -d --name B --link A:A consumer:latest
which would link any exposed (not published) ports in your A container.
And finally, if you start moving to production...forget about links & overlay networks altogether...use Kubernetes :-) It's a bit more difficult to set up initially, but it introduces a bunch of concepts & tools that make linking & scaling clusters of containers a lot easier! But that's just my personal opinion.
By running your container B with the --network host argument, you can simply access your container A using localhost; no public IP needed.
> docker run -d --name containerB --network host yourimagename:version
After you run container B with the above command, you can try to curl container A from container B like this:
> docker exec -it containerB /bin/bash
> curl http://localhost:3000
None of the current answers explain why the Docker containers behave as described in the question, nor how to fix the problem without Docker networks.
Docker is there to provide lightweight isolation of the host's resources for one or several containers.
The Docker network is by default isolated from the host network, and uses a bridge network (again, by default; you can also have overlay networks) for inter-container communication.
As for how to fix the problem without Docker networks:
From "How to connect to the Docker host from inside a Docker container?"
As of Docker version 18.03, you can use the host.docker.internal hostname to connect to your Docker host from inside a Docker container.
This works fine on Docker for Mac and Docker for Windows, but unfortunately it was not supported on Linux until Docker 20.10.0 was released in December 2020.
Starting from version 20.10, the Docker Engine also supports communicating with the Docker host via host.docker.internal on Linux.
Unfortunately, this won't work out of the box on Linux, because you need to add the extra --add-host run flag:
--add-host=host.docker.internal:host-gateway
This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows/Mac.
That way, you don't have to change your network driver to --network=host, and you can still access the host through host.docker.internal.
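A minimal sketch on Linux (Docker 20.10+), assuming something on the host listens on port 3000:
$ docker run --rm --add-host=host.docker.internal:host-gateway alpine \
    wget -qO- http://host.docker.internal:3000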
I had a similar problem: I have an nginx server in one container (let's call it web) with several server blocks, and cron installed in another container (let's call it cron). I use Docker Compose. I wanted to use curl from cron to web from time to time, to execute some PHP script in one of the applications. It should look as follows:
curl http://app1.example.com/some_maintance.php
But I was always getting host unreachable after some time.
The first solution was to update /etc/hosts in the cron container, adding:
1.2.3.4 app1.example.com
where 1.2.3.4 is the IP of the web container. It worked, but this is a hack; as far as I know, such manual updates are not encouraged. You should use extra_hosts in docker compose instead, which requires an explicit IP address rather than a container name.
I tried to use the custom networks solution, which as I have seen is the correct way to deal with this, but I never succeeded. If I ever learn how to do it, I promise to update this answer.
Finally, I used curl's ability to connect to a specified server, passing the domain name as a Host header in a separate parameter:
curl -H'Host: app1.example.com' web/some_maintance.php
Not very beautiful, but it does work.
(Here web is the name of my nginx container.)
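For reference, the custom-networks approach mentioned above usually boils down to a network-scoped alias, so the right Host header is sent automatically. A hedged sketch (mynet is a placeholder; web and cron are the container names from above):
$ docker network create mynet
$ docker network connect --alias app1.example.com mynet web
$ docker network connect mynet cron
$ docker exec cron curl http://app1.example.com/some_maintance.php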

Obtaining the IP address of a Docker container

I have an Ubuntu machine, which is a VM, where I have installed Docker. I am using this machine from my local Windows machine over ssh, opening a terminal to the Ubuntu machine.
Now, I am going to take a Docker image which contains all the necessary software (e.g. Apache) installed in it. Later I am going to deploy a sample application (which is a web application) onto it and save it.
Now, I am confused about how to check whether the deployed application is running properly, i.e., what would be the address of the container which contains the deployed application?
For example, if I type http://127.x.x.x, which is the address of the Ubuntu machine, I just get a timeout.
Can anyone tell me how to verify the deployed application? Also, printing the output of the program on the console works seamlessly fine; the only thing I have some doubts about is the web application.
There are some possibilities to check whether your app is running.
Remote API
As JimiDini said, one possibility is the Docker remote API. You can use it to see all running containers (which would be your use case, right?), inspect a certain container, or start and stop containers. The API is a REST API with several bindings for programming languages (at https://docs.docker.io/reference/api/remote_api_client_libraries/). Some of them are very outdated. To use the Docker remote API from another machine, I needed to open it explicitly:
docker -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock -d &
Note that the API is open to the world now! In a real scenario you would need to secure it in some way (e.g. see the example at http://java.dzone.com/articles/securing-docker%E2%80%99s-remote-api).
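Once the API is exposed, listing the running containers from another machine is a plain HTTP call (a sketch; replace <docker-host> with your VM's address):
$ curl http://<docker-host>:4243/containers/json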
Docker PS
To see all running containers, run docker ps on your host. If you do not see your app, it is not running. The listing also shows you the ports your app is exposing. You can also do this via the remote API.
Logs
You can also check the logs. You can run docker attach <container id> to attach to a certain container and see its stdout. You can also run docker logs <container id> to receive the Docker logs. What I prefer is to write the logs to a certain directory, e.g. all logs to /var/log, and mount this folder to my host machine. Then all your logs will end up in /home/ubuntu/docker-logs on your host:
docker run -p 80:8080 -v /home/ubuntu/docker-logs:/var/log:rw my/application
A word on ports and IPs
Every container gets its own IP address. You can check this IP address via the remote API or via Docker on the host machine directly. You can also specify a certain hostname for the container (by passing --hostname="test42" to the run command). However, you usually do not need that.
To access the application in the container, you need to open the port in the container and bind to a port on the host.
In your Dockerfile you need to EXPOSE the port your app runs on:
FROM ubuntu
...
EXPOSE 8080
CMD run-my-app.sh
When you start your container, you need to bind this port to a port of the host:
docker run -p 80:8080 my/application
Now you can access your app on http://localhost:80 or http://127.0.0.1:80.
If your app does not respond, check whether the container is running by typing docker ps or via the remote API. If it is not running, check the logs for the reason.
(Note: if you run your Ubuntu VM in something like VirtualBox and you try to access it from your Windows machine, make sure you opened the ports in VirtualBox, too!)
A Docker container has a separate IP address. By default it is private (accessible only from the host machine).
Docker provides all metadata (including IP address) via its API:
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#inspect-a-container
https://docs.docker.io/reference/api/docker_remote_api_v1.10/#monitor-docker-s-events
You can also take a look at a little tool called docker-gen for inspiration. It monitors Docker events and creates configuration files on the host machine using templates.
To obtain the IP address of a Docker container, if you know its ID (a long hex string) or if you named it:
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id-or-name>
Docker is running its own network and to get information about it you can run the following commands:
docker network ls
docker network inspect <network name>
docker inspect <container id>
In the output, you should be able to find the IP.
But there are also a couple of things you need to be aware of regarding the Dockerfile and the docker run command:
when you EXPOSE a port in the Dockerfile, the service in the container is not accessible from outside Docker, only from inside other Docker containers
when you EXPOSE the port and use the docker run -p ... flag, the service in the container is accessible from anywhere, even outside Docker
So for example, if your Apache is running on port 8080, you should expose it in the Dockerfile, and then you can run it as:
docker run -d -p 8080:8080 <image name>
You should then be able to access it from your host at http://localhost:8080.
It is an old question/answer but it might help somebody else ;)
Working as of 2020:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_name_or_id
