Docker inter-container communication painfully slow (Mac OS X)

I've got an AngularJS application running on nginx in container1. When it hits "/api" it gets proxied over to Docker container2 (a PHP API running on Apache). The PHP API accesses my database that is running locally on Mac OS X. Docker containers are Ubuntu.
Communication between containers is very slow. Any idea why?
boot2docker start
docker run -d -h docker --name container2 -v ~/container2dir/:/var/www -p 5000:80 seanbollin/image2
docker run -d -h docker --link name:api -v ~/somefolder/:/var/www/anotherfolder/ -p 5001:80 seanbollin/image1

This could be due to the size of the directory shared via the "File Sharing" section.
I had a folder of around 50GB shared, containing all my projects.
I shared the parent folder just to save some time, but it caused the slowness in HTTP responses.
Share only the required folder, and use a .dockerignore for any folders you don't want included, e.g. "venv" etc.
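As a concrete illustration of that last point (a hedged sketch; it assumes images are built from ~/container2dir, and the excluded folders are only examples of typically large directories), a .dockerignore next to the Dockerfile keeps bulky directories out of the build context:
cat > ~/container2dir/.dockerignore <<'EOF'
venv/
node_modules/
.git/
EOF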

Related

Problem with static files using phpmyadmin:fpm-alpine behind Apache2

I'm trying to migrate most of my server apps to Docker. I basically have Apache2 running on my host and some PHP-based web apps as Docker containers using FPM.
As far as I know, only the *.php files are served through the Docker container, so this configuration must be added to Apache2:
ProxyPassMatch "^/phpMyAdmin/(.*\.php)$" "fcgi://localhost:9000/var/www/html/$1"
So you can't serve any static files (CSS, JavaScript) through the fpm container. Therefore I usually mount a host directory into the container like so:
docker run -v /var/www/phpMyAdmin:/var/www/html -p 9000:9000 -d phpmyadmin:fpm-alpine
But as soon as I add the mount (-v), the container's "/var/www/html" directory is empty. I checked with:
docker exec -it phpmyadmin /bin/sh
It seems like the whole phpMyAdmin installation wasn't extracted or got deleted/overwritten. This approach did work for other containers (postfixadmin, roundcube), so I have no idea what is going on or what I'm doing wrong.
How am I supposed to serve the static files from the fpm Docker container through Apache2 on my host? I only found examples that use nginx as the server or Docker Compose.
Best regards,
Billie
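For illustration (a hedged sketch of one possible workaround, not taken from the thread): a bind mount of an empty host directory covers whatever the image ships in /var/www/html, which is why the directory appears empty. One option is to copy the static assets out of a running container once, so the host Apache can serve them from /var/www/phpMyAdmin while PHP requests still go to the container:
docker run -d --name phpmyadmin -p 9000:9000 phpmyadmin:fpm-alpine
# copy the image's document root (including CSS/JS) to the host directory
docker cp phpmyadmin:/var/www/html/. /var/www/phpMyAdmin/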

Docker access container logs from the host machine

I have a Docker container running on an AWS EC2 instance, and would like to know if it's possible to mount a directory to a container log directory so I can access the files on the host.
The service I'm running has some log files I would like to look at without accessing the container each time.
The command I tried:
docker run -d -v $(pwd)/datalogs:/etc/tmp/logdir -p 8000:8000 -p 9000:9000 -p 2181:2181 --name burcon gmantmp/imagecon
It does create a directory on the host, but it is empty. Is it possible to do this, and if so, where am I going wrong?
Your run command looks fine: it mounts $(pwd)/datalogs from the host over /etc/tmp/logdir in the container, so anything the container writes there will show up in the host's datalogs directory, as you want. If there's no output, you'll need to confirm the app is actually writing to /etc/tmp/logdir.
As a side note, if you can configure the app in your container to write to stdout instead, then you can use docker logs to see what's happening in the container. You can then also use different logging drivers, which gives you a lot of flexibility.
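A quick hedged way to verify where the output is going (the container name is taken from the run command above):
# check that the app is writing files inside the container
docker exec burcon ls -l /etc/tmp/logdir
# the same files should appear in the mounted host directory
ls -l "$(pwd)/datalogs"
# if the app logs to stdout instead, this shows its output
docker logs burcon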

Docker container cannot connect to linked containers services

I'm using Docker version 1.9.1 build a34a1d5 on an Ubuntu 14.04 server host and I have 4 containers: redis (based on alpine linux 3.2), mongodb (based on alpine linux 3.2), postgres (based on ubuntu 14.04) and the one that will run the application that connects to these other containers (based on alpine linux 3.2). All of the db containers expose their corresponding ports in the Dockerfile.
I modified the database containers so their services bind to all addresses instead of just the localhost IP. This way I should be able to connect to all of them from the app container.
For the sake of testing, I first ran the database containers and then the app one with a command like the following:
docker run --rm --name app_container --link mongodb_container --link redis_container --link postgres_container -t localhost:5000/app_image
I enter the terminal of the app container and verify that its /etc/hosts file contains the IPs and names of the other containers, and I am able to ping all the db containers. But I cannot connect to the ports of any of the db containers.
A simple telnet mongodb_container 27017 just sits and waits forever, and the same happens when I try to connect to the other db containers. If I run the application, it also complains that it cannot connect to the specified db services.
Important note: I am able to telnet the corresponding ports of all the db containers from the host.
What might be happening?
EDIT: I'll include the run commands for the db containers:
docker run --rm --name mongodb_container -t localhost:5000/mongodb_image
docker run --rm --name redis_container -t localhost:5000/redis_image
docker run --rm --name postgres_container -t localhost:5000/postgres_image
Well, the problem with telnet seems to be related to the telnet client on Alpine Linux, since the following two commands showed me that the ports on the containers were open:
nmap -p27017 172.17.0.3
nc -vz 172.17.0.3 27017
Because I was focused on the telnet command I issued, I assumed the problem was that the ports were closed, and I overlooked the configuration file the app was using to connect to the services (it had the wrong filename). My bad.
All works fine now.
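For reference, the same reachability check can also be run against the link aliases instead of the container IP (a hedged sketch; it assumes the default ports for each database):
nc -vz mongodb_container 27017
nc -vz redis_container 6379
nc -vz postgres_container 5432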

Docker in Docker: Port Mapping

I have found a similar thread, but failed to get it to work. So, the use case is:
I start a container on my Linux host
docker run -i -t --privileged -p 8080:2375 mattgruter/doubledocker
When in that container, I want to start another one with GAE SDK devserver running.
On top of that, I need to access the running app from the host system's browser.
When I start a container in the container as
docker run -i -t -p 2375:8080 image/name
I get an error saying that port 2375 is in use. I start the app and can curl 0.0.0.0:8080 from inside both containers (when using another mapping, 8080:8080 for example), but I cannot preview the app from the host system, since localhost:8080 on the host maps to port 2375 in the first container, and that port cannot be reused when launching the second container.
I'm able to do that using the image jpetazzo/dind. The test I did, which worked (as an example):
From my host machine I run the container with docker installed:
docker run --privileged -t -i --rm -e LOG=file -p 18080:8080 jpetazzo/dind
Then inside the container I pulled the nginx image and ran it with
docker run -d -p 8080:80 nginx
And from the host environment I can browse the nginx welcome page with http://localhost:18080
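To double-check the chained mapping (host 18080 -> outer container 8080 -> nginx container 80), a quick hedged test from the host:
curl -I http://localhost:18080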
With the image you were using (mattgruter/doubledocker) I had some problems running it (something related to attaching logs).

Multiple docker containers as web server on a single IP

I have multiple Docker containers on a single machine. Each container runs a process and a web server that provides an API for that process.
My question is, how can I access the API from my browser when the default port is 80? To be able to access the web server inside docker container I do the following:
sudo docker run -p 80:80 -t -i <yourname>/<imagename>
This way I can do the following from my computer's terminal:
curl http://hostIP:80/foobar
But how to handle this with multiple containers and multiple web servers?
You can either expose multiple ports, e.g.
docker run -p 8080:80 -t -i <yourname>/<imagename>
docker run -p 8081:80 -t -i <yourname1>/<imagename1>
or put a proxy (nginx, Apache, Varnish, etc.) in front of your API containers.
Update:
The easiest way to do a proxy would be to link it to the API containers, e.g. with an Apache config like
RewriteRule ^api1/(.*)$ http://api1/$1 [proxy]
RewriteRule ^api2/(.*)$ http://api2/$1 [proxy]
you may run your containers like this:
docker run --name api1 <yourname>/<imagename>
docker run --name api2 <yourname1>/<imagename1>
docker run --link api1:api1 --link api2:api2 -p 80:80 <my_proxy_container>
This might be somewhat cumbersome, though, if you need to restart the API containers, as the proxy container would have to be restarted as well (links are fairly static in Docker at the moment). If this becomes a problem, you might look at approaches like fig or an auto-updated proxy configuration: http://jasonwilder.com/blog/2014/03/25/automated-nginx-reverse-proxy-for-docker/ . The linked article also shows proxying with nginx.
Update II:
In more modern versions of Docker it is possible to use a user-defined network instead of the links shown above, which avoids some of the inconveniences of the deprecated link mechanism.
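A minimal sketch of that approach (the network name is only an example; the image placeholders are the same as above): containers on the same user-defined network can reach each other by container name.
docker network create api_net
docker run -d --network api_net --name api1 <yourname>/<imagename>
docker run -d --network api_net --name api2 <yourname1>/<imagename1>
docker run -d --network api_net -p 80:80 <my_proxy_container>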
Only a single process is allowed to be bound to a given host port at a time, so running multiple containers means each will be exposed on a different port number. Docker can do this automatically for you by using the "-P" flag.
sudo docker run -P -t -i <yourname>/<imagename>
You can use the "docker port" and "docker inspect" commands to see the actual port number allocated to each container.
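A short hedged usage example (the container name is only an illustration): run with -P, then ask Docker which host ports were assigned.
docker run -P -d --name api1 <yourname>/<imagename>
# show the host port(s) mapped to the container's exposed ports
docker port api1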