Replace Docker container with external service

I'm just getting started with Docker, and I read a ton of documentation and tutorials yesterday, but I can't find where I read about replacing an external service using a linked container, and I'm not even sure which terminology to search for.
Say there is an apache container and a mysql container, where apache was run with a link to mysql, and has access to its ports and such. Now instead of MySQL running on the container instance, we move it to AWS RDS, for example. How do you modify the mysql container so that apache continues to run as expected? To clarify, apache would still be run with a link to a container with the alias mysql, but the mysql container would take care of getting traffic on that port sent to AWS.
Alternatively, maybe there is a container running a MySQL service, but that container is on another host. I have a vague feeling that the pattern I'm referring to would be able to handle that scenario as well. Does this sound familiar to anyone?

If the container is on another host, why not just hit the host directly and have docker be transparent, with 3306 (or whatever port you're running mysql on) forwarding requests to the container? I can't think of any reason you'd want to link containers unless they're actually on the same host. Docker is great at being transparent, so clients can run things against a service in Docker from another machine as if the service were running directly on the machine without Docker.
If you really have to have both containers on the same machine (even though the mysql container is calling out to RDS or another host), you should be able to make a simple stand-in mysql image that has the mysql client installed and simply accepts requests and forwards them to RDS.
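Something like the following might serve as a minimal sketch of that forwarding container, assuming the community alpine/socat image from Docker Hub; the RDS endpoint and the apache image name are placeholders:

# stand-in "mysql" container that just relays TCP 3306 to the (placeholder) RDS endpoint
docker run -d --name mysql \
  alpine/socat TCP-LISTEN:3306,fork,reuseaddr \
  TCP:mydb.abc123.us-east-1.rds.amazonaws.com:3306

# apache linked against it keeps working unchanged
docker run -d --name apache --link mysql:mysql my-apache-image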

Related

Can (Should) I Run a Docker Container with Same host name as the Docker Host?

I have a server application (that I cannot change) that, when you connect as a client, will give you other URLs to interact with. Those URLs are also part of the same server so the URL advertised uses the hostname of a docker container.
We are running in a mixed economy (some docker containers, some regular applications). We need a setup where the server runs as a docker application on a single VM, and that server is accessed by non-docker clients (as well as docker clients not running on the same docker network).
So you have a server hostname (the docker container) and a docker hostname (the hostname of the VM running docker).
The client's initial connection is to dockerhostname:1234, but when the server sends URLs to the client, it sends serverhostname:5678 ... which is not resolvable by the client. So far we've addressed this by adding "serverhostname" to the client's /etc/hosts file, but this is a pain to maintain.
I have also set the --hostname of the server docker container to the same name as the docker host, and it has mostly worked, but I've seen a docker container running on the same docker network as the server have issues connecting to the server.
I realize this is not an ideal docker setup. We're migrating from a history of delivering RPMs to delivering containers, but it's a slow process. Our company has lots of applications.
I'm really curious if anyone has advice/lessons learned with this situation. What is the best solution to my URL problem? (I'm guessing it is the /etc/hosts we're already doing)
You can do port mapping with -p 8080:80.
How do you build and run your container? With a shell command, a Dockerfile, or a yml file?
Check the mapping with:
docker port <container>
Then call this and it will work:
[SERVERIP]:[PORT FROM DOCKERHOST]
To work with hostnames you need DNS, or you can use the hosts file.
The hosts file solution is not a good idea; that's how the early internet handled name resolution ^^
If something changes, you have to update the hosts file on every client!
Or use a static IP for your container:
# list existing networks
docker network ls
# create a user-defined network; give it a subnet if you want to hand out fixed IPs
docker network create my-network
docker network create --subnet=172.18.0.0/16 mynet123
# run the container on that network with a static address
docker run --net mynet123 --ip 172.18.0.22 -it ubuntu bash
Assign static IP to Docker container
You're describing a situation that requires a ton of work. The shortest path to success is your "adding things to /etc/hosts file" process. You can use configuration management like ansible/chef/puppet, so you only have to update one location and distribute it out.
But at that point, you should look into something called "service discovery." There are a ton of ways to skin this cat, but the short of it is this. You need some place (lazy mode is DNS) that stores a database of your different machines/services. When a machine needs to connect to another machine for a service, it asks that database. Hence the "service discovery" part.
Now, implementing the database is the hardest part of this; there are a bunch of different approaches, and you'll need to spend some time with your team to figure out which one fits best.
Normally running an internal DNS server like dnsmasq or bind should get you most of the way, but if you need something like consul that's a whole other conversation. There are a lot of options, and the best thing to do is research, and audit what you actually need for your situation.
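As a hedged sketch of the "lazy mode" DNS route with dnsmasq (the hostname comes from the question, the IP is a placeholder; address= is standard dnsmasq syntax):

# /etc/dnsmasq.d/docker.conf on your internal DNS host:
# answer queries for the advertised server name with the docker VM's address
address=/serverhostname/192.168.1.50

# reload, then point clients' resolvers at this server instead of editing /etc/hosts everywhere
sudo systemctl restart dnsmasq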

How to run phpBB in Docker with NGINX, php installed on a different host than the database?

How can I run a Docker image with NGINX and phpBB (and all the required stuff like PHP installed), with persistence (changes on the board shouldn't be lost) and with the database on another, already existing host? So, let's assume I have the following: MySQL running on 192.168.2.233 (local address) on port 3307. Now I want to create a Docker image with Alpine Linux (probably the smallest), NGINX and phpBB, where the board runs on the NGINX webserver and connects to that database. Changes on the board (e.g. changing the webserver settings) should be persisted within the container. How can I do that?
EDIT:
The database on server 192.168.2.233 already exists! So no, I don't need two or more Docker containers. I need one Docker container with phpBB running on the NGINX webserver, connecting to the database on another host in the same network. The container should use persistence (volumes) to save the settings made in phpBB.
I tried to use the following Dockerfile and modified it:
https://gitlab.com/boxedcode/alpine-nginx-php-fpm/blob/master/Dockerfile --> https://drive.google.com/open?id=1CW68OFCJE9RjIe8_RBC8q5Fa6juRtxmR
Together with the owner of another repository I've now found a solution (that uses Apache however) here: https://github.com/blueimp/phpbb/issues/1. After a few errors and problems from my side, I figured it out. The solution I'm using now is placed here: https://github.com/SeppPenner/DockerApacheSSLphpBB
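For reference, a rough sketch of building and running such a single container against the already-existing database (the tag, volume name, and container path are placeholders; docker build can clone a git URL directly):

# build straight from the linked repository
docker build -t phpbb-board https://github.com/SeppPenner/DockerApacheSSLphpBB.git
# run with a named volume so board settings survive container rebuilds
docker run -d --name phpbb -p 80:80 -p 443:443 -v phpbb_data:/var/www/html phpbb-board
# the external database is then configured in phpBB's config.php:
# $dbhost = '192.168.2.233'; $dbport = '3307';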

Easy, straightforward, robust way to make host port available to Docker container?

It is really easy to mount directories into a docker container. How can I just as easily "mount a port into" a docker container?
Example:
I have a MySQL server running on my local machine. To connect to it from a docker container I can mount the mysql.sock socket file into the container. But let's say for some reason (like intending to run a MySQL slave instance) I cannot use mysql.sock to connect and need to use TCP.
How can I accomplish this most easily?
Things to consider:
I may be running Docker natively if I'm using Linux, but I may also be running it in a VM if I'm on Mac or Windows, through Docker Machine or Docker for Mac/Windows (Beta). The answer should handle both scenarios seamlessly, without me as the user having to decide which solution is right depending on my specific Docker setup.
Simply assigning the container to the host network is often not an option, so that's unfortunately not a proper solution.
Potential solution directions:
1) I understand that setting up proper local DNS and making the Docker container (network) talk to it might be a proper, robust solution. If there is such a DNS service that can be set up with 1, max 2 commands and then "just work", that might be something.
2) Essentially what's needed here is that something will listen on a port inside the container and like a sort of proxy route traffic between the TCP/IP participants. There's been discussion on this closed Docker GH issue that shows some ip route command-line magic, but that's a bit too much of a requirement for many people, myself included. But if there was something akin to this that was fully automated while understanding Docker and, again, possible to get up and running with 1-2 commands, that'd be an acceptable solution.
I think you can run your container with the --net=host option. In this case the container binds to the host's network and is able to access all the ports on your local machine.
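Two hedged variants of that, depending on platform (the mysql:5.7 image here just provides a client; --net=host only works on native Linux, and the host.docker.internal name requires a reasonably recent Docker for Mac/Windows):

# native Linux: share the host's network stack, so 127.0.0.1:3306 is the host's MySQL
docker run --rm -it --net=host mysql:5.7 mysql -h 127.0.0.1 -P 3306 -u root -p

# Docker for Mac/Windows: the host is reachable under a special DNS name
docker run --rm -it mysql:5.7 mysql -h host.docker.internal -P 3306 -u root -p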

How to design Docker routing and database Layer?

I have a few Django apps that I want to host on a single docker host running CentOS. I want to have 3 layers:
network
application
database
network: I want to have an nginx container in the network layer routing requests to the different containers in the application layer. I plan to use a 1:1 port mapping on this container to expose port 80 on the host. Nginx will direct requests to the appropriate app in the application layer, running on ports 8001-8010.
application: I'll have several containers, each running a separate django app with Gunicorn on one of the ports 8001-8010.
database: one container running MySQL with a different database for each app. The MySQL container will have a data volume attached for persistence.
I understand you can link containers, but as far as I can tell linking relies on the order in which the containers are started, i.e. how can you link nginx to several containers when they haven't been started yet?
So my questions are:
How do I connect the network layer to the application layer when the number/names of containers in the application layer are always changing, i.e. I might bring a new application online/offline? How would I update the nginx config, and what would the addressing look like?
How do I connect the application layer to the database layer? Do I have to use Docker linking? In my Django application code I need a hostname for the database to connect to. What would I use as the hostname of the database container? Would it resolve?
Is there a reference architecture I could leverage?
Docker does not support dynamic linking, but there are some tools that can do this for you; see this SO question.
2.) You could start your database container first and then link all application containers to it. Docker writes the container's hosts file at startup (statically; if your database container restarts and gets another IP, you need dynamic links, see above). When you link a container like this:
--link db:db
you can reach the linked container under the hostname db.
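A minimal sketch of that flow (image names and the volume path are placeholders):

# start the database first, with a data volume for persistence
docker run -d --name db -e MYSQL_ROOT_PASSWORD=secret -v /srv/mysql:/var/lib/mysql mysql:5.7
# link each app container against it; "db" then resolves inside the container
docker run -d --name app1 --link db:db my-django-app
# Django's settings.py can then simply use 'HOST': 'db', 'PORT': '3306'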
I ended up using this solution:
https://github.com/blalor/docker-hosts
It allows you to refer to other containers on the same host by hostname. It is also dynamic, as the /etc/hosts file in the containers gets updated as containers go up and down.
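On the nginx side of the first question, once the app containers are resolvable by hostname (via linking or docker-hosts), the routing config stays static per app; a hypothetical fragment:

# /etc/nginx/conf.d/apps.conf - hypothetical; app1/app2 are container hostnames
upstream app1 { server app1:8001; }
upstream app2 { server app2:8002; }
server {
    listen 80;
    location /app1/ { proxy_pass http://app1; }
    location /app2/ { proxy_pass http://app2; }
}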

Has anyone successfully run Apigee Edge as a Docker container?

We're starting to go down the containerization route with Docker and have created Docker versions of some of our infrastructure and applications.
Apigee is proving a little more of a struggle... we're doing a standalone install inside our Dockerfile and that works great. Once the install has finished and the container is started you can hit the UI and the management API just fine from the machine running the container.
The problem appears to be the virtualhost. Inside the container it is fine - if you enter the container (nsenter has been massively useful) you can run the /test/test1-sa.sh script with no problems. From outside the container that virtualhost port is not accessible, even when you use the EXPOSE command inside your Dockerfile.
The only thing I maybe have to go on is the value for all the hostname entries inside our silent installation file. It is pointing to 127.0.0.1, which the Apigee docs seem to warn against.
Many thanks
Michael
Make sure you set your hostname to your external IP address in /etc/hosts (as Docker runs on Ubuntu -- I believe it's in /etc/sysconfig/network if you're running CentOS). It should look something like this at a minimum:
127.0.0.1 localhost
172.56.12.67 MyApigeeInstance
Then running hostname -i should give you the outside IP address, and the individual components will know how to find each other. Otherwise all components are registered as 127.0.0.1 and the machines can't find each other.
You might also want to take a look at what ports are open for your docker image. The install doc for Apigee lists a TON of ports you need open for the various components.
I don't know if you have to do this as part of the docker image or if there is a way to configure the underlying Ubuntu settings.
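One more hedged pointer on the unreachable virtualhost: EXPOSE in a Dockerfile is only documentation, it does not publish anything to the host. The virtualhost port still has to be mapped when the container is started (9001 and the image name are placeholders):

# EXPOSE alone doesn't open the port to the outside; -p actually publishes it
docker run -d -p 9001:9001 my-apigee-image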
