Docker Compose setting Hostname

I have the basic docker-compose.yml shown below for an Apache server. I was wondering whether there is a way to configure this Apache server, which is currently accessible at 0.0.0.0:8889 or localhost:8889, so that it is also reachable through a custom hostname, such as local.foobar.dev for example?
version: '3'
services:
  snappyweb:
    image: php:7.0-apache
    ports:
      - "8889:80"
    volumes:
      - ./:/var/www/html

I assume you want this to work only in your local dev environment.
The easiest and safest way is to use an application that essentially tricks your local browser into resolving a URL of your choice to a certain IP address. That IP can simply be 127.0.0.1 (localhost); note that a hosts-style mapping covers only the IP, so you would still browse to local.foobar.dev:8889.
For this I use GasMask (macOS), or you can use Host File Manager (Windows).

If you need to reach your container from your host using a custom name, you will have to map local.foobar.dev to the desired IP yourself by adding the following line to /etc/hosts:
127.0.0.1 local.foobar.dev
For communication between containers, you can create a Docker network and attach the containers to it. Containers on the same network can then reach each other using their container names, as sketched below.
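A minimal sketch (the service and network names here are illustrative, not from the question):

version: '3'
services:
  web:
    image: php:7.0-apache
    networks:
      - appnet
  api:
    image: php:7.0-apache
    networks:
      - appnet
networks:
  appnet:

On appnet, the web container can reach its neighbour at http://api even if no ports are published.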

How to map extra host in docker-compose to container gateway?

I have a case where I need to call an external API from a Docker container, but I can only reach it by its URL. To make that work I'm mapping this URL to the container's gateway, which works, but I need it to be dynamic, because I need to run this docker-compose on different devices and, from what I see, the gateways differ.
version: '3'
services:
  pdf-service:
    image: $IMAGE:latest
    container_name: pdf-$LOCALE
    environment:
      - NODE_ENV=production
      - LOCALE=${LOCALE}
      - API_URL=${API_URL}
    extra_hosts:
      - ${API_HOST}:172.24.0.1
    tty: true
    restart: always
    ports:
      - ${PORT}:8124
At the moment I've hardcoded it, as you can see: 172.24.0.1 is the container's gateway. I've found out about something like host-gateway, but have no idea how to use it correctly. I've also read that it doesn't work in production? My production environment is Debian 10 with Docker v18.09.1 and docker-compose v1.21.0.
The IP 172.24.0.1 is internal to Docker. When you add an extra host, you need to map the name to an IP that is actually reachable, such as a public one.
From your host, run ping <api_host> to get the public IP, and use that IP in your docker-compose.yml instead of 172.24.0.1.
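Since the gateways differ per device, one option is to resolve the IP on the host at startup and pass it in through the environment rather than hardcoding it. A minimal sketch; API_IP is a variable introduced here for illustration, not one from the thread, and it assumes the API host has a resolvable DNS name:

# on the host, before starting the stack
export API_IP=$(getent hosts "$API_HOST" | awk '{ print $1 }')
docker-compose up -d

with the compose file then using:

extra_hosts:
  - ${API_HOST}:${API_IP}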
If the service you want to reach also runs in Docker, it must publish a port on the host for you to reach it. In that case the IP you need is your host's local network IP (or its public IP, if for some reason you need that).
If you had to reach a Docker-internal IP, you wouldn't declare an extra_host at all; you would just put the containers/services on the same network and have them refer to each other by name.
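Regarding host-gateway: that special value was only added in Docker Engine 20.10, so it will not work on the Docker v18.09.1 production host mentioned in the question. On a new enough engine the entry looks like this, and makes host.docker.internal resolve to the host from inside the container:

extra_hosts:
  - "host.docker.internal:host-gateway"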
I think you can use networks: a container can join many networks, and if two containers are on the same network they can find each other by hostname (the service name).

Correct IP address to access web application in apache docker container

I have a simple Apache Docker setup defined in a docker-compose.yml:
services:
  apache:
    image: php:7.4-apache
    command: /bin/bash -c "/var/www/html/startup.sh && exec 'apache2-foreground'"
    volumes:
      - ./:/var/www/html
      - /c/Windows/System32/drivers/etc/hosts:/tmp/hostsfile
    ports:
      - "80:80"
From the startup.sh script I want to modify the host OS's hosts file through the volume, dynamically adding an entry that resolves the hostname test.local to the IP address of the dockerized web application, like so:
<ip-address> test.local
This way I should be able to open the application at http://test.local in my local browser.
Before writing the startup.sh script, I tried to open the application manually at http://172.19.0.2, the container's IP address, which I got from:
docker inspect apache_test
But the page won't open: ERR_CONNECTION_TIMED_OUT.
Shouldn't I be able to access the application at that IP? What am I missing? Am I using the wrong IP address?
BTW: I am using Docker Desktop for Windows with the Hyper-V backend
Edit: I can access the application via http://localhost, but since I want to add a new entry to the hosts file, this is not the solution.
Apparently, in newer versions of Docker Desktop for Windows you can't reach (ping) your Linux containers by IP.
There is a really dirty workaround for this that involves changing a .ps1 file of your Docker installation to bring back the DockerNAT interface on Windows. After that you need to add a new route to the Windows routing table, as described here:
route /P add <container-ip> MASK 255.255.0.0 <ip-found-in-docker-desktop-settings>
Then you might be able to ping your Docker container from the Windows host. I didn't test it, though.
I found a solution to my original issue (resolving test.local to the container via the hosts file) in one of the threads linked above.
It involves binding the published port to a free loopback IP in the 127.0.0.0/8 range in the ports section of docker-compose.yml:
ports:
  - "127.55.0.1:80:80"
After that you add the following to your hosts file:
127.55.0.1 test.local
And you can open your application at http://test.local.
To do the same for other dockerized applications, just choose another free loopback address, as sketched below.
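For example, two applications side by side (a sketch; the service names and images are illustrative):

services:
  app1:
    image: php:7.4-apache
    ports:
      - "127.55.0.1:80:80"
  app2:
    image: php:7.4-apache
    ports:
      - "127.55.0.2:80:80"

with the matching hosts file entries:

127.55.0.1 test.local
127.55.0.2 other.local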

How do I access Mopidy running in Docker container from another container

To start, I am more familiar with running Docker through Portainer than through the console.
What I'm Doing:
Currently, I'm running Mopidy in a container, which other machines access through the default Mopidy port. In another container, I am running a Slack bot that uses the Limbo repo as a base. Both of them run on Alpine Linux.
What I Need:
What I want is for my Slack bot to be able to issue MPC commands, such as muting the volume. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could SSH into the other container to send a command, but that doesn't make sense since they're both running on the same server machine.
The best way to connect a group of containers is to define a service stack in a docker-compose.yml file and launch all of them with docker-compose up. That way all the containers are connected via a single user-defined bridge network, which makes all their ports reachable from each other without you explicitly publishing them, and also lets the containers discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
  service1:
    image: image1
    ports:
      # only necessary to reach the port from the host machine
      - "host_port:container_port"
  service2:
    image: image2
In the above example, any application in the service2 container can reach a port on service1 simply by using the address service1:port.
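As a quick check (a sketch; it assumes ping and wget exist in the images, and container_port stands for whatever port the application in service1 actually listens on):

# resolve and reach service1 from inside service2
docker-compose exec service2 ping -c 1 service1
docker-compose exec service2 wget -qO- http://service1:container_port/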

Couldn't connect containers using docker-compose.yaml file

I created two Dockerfiles to run the frontend and backend of a web application. When I run the docker-compose.yaml file, the front-end opens in the web browser, but I cannot log in to the system. I think there is a problem with the connection between those containers. The following is my docker-compose.yaml file. What can I do to resolve this problem?
version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
  python:
    build: '.'
    ports:
      - "5000:5000"
You need to use links to enable communication between containers, and you should use their DNS network alias, like http://python:5000.
Containers within a docker-compose file are part of one network by default, and one container can reach another using its hostname.
The hostname can be set in the docker-compose file with the hostname key; if it is not set, the service name is used as the hostname.
Internally, Docker containers can talk to each other by referring to one another by hostname. In your case, gulp can access python at http://python:5000, and that would work even if you did not declare ports, because it all happens inside the Docker network.
From outside, if you want to connect to any of the services, you define ports, as you did, and then access those services on the published port numbers.
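As a sketch, assuming curl is available in the gulp image and the backend exposes a /login endpoint (both are assumptions, not from the question):

# run from the host; curl executes inside the gulp container
docker-compose exec gulp curl http://python:5000/login

Note that this uses the internal container port (5000), not the published host port.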

It's possible to tie a domain to the docker container when building it?

Currently, in the company where I work, there is a central development server with a LAMP environment. Each developer accesses the application as developer_username.domain.com. The application we're working on uses licenses, and the licenses are generated per domain and tied to that domain only, meaning I can't use another developer's license.
The following example will give you an idea:
developer_1.domain.com ==> license1
developer_2.domain.com ==> license2
developer_n.domain.com ==> licenseN
I am trying to dockerize this environment, at least having PHP and Apache in a container, and I was able to create everything I need and it works. Take a look at this docker-compose.yml:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reypm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
      - "9001:9001"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
That builds what I need, but the problem comes when I try to access the server: I am using http://localhost, so the license won't work and I won't be able to use the application.
The idea is to access it as developer_username.domain.com, so my question is: is this something that should be done in the Dockerfile or in Docker Compose (i.e. at image/container level, say by setting an ENV var), or is this a job for /etc/hosts on the host running Docker?
tl;dr
No! Docker doesn't do that for you.
Long answer:
What you want is a custom hostname on the machine hosting Docker mapped to a container in the Docker Compose network, right?
Let's take a step back and look at how networking in Docker works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This network is not the same as your host network, and without explicitly publishing ports (for a specific container) you have no access to it. All publishing does is this:
The exposed port is accessible on the host and the ports are available to any client that can reach the host.
From there you can put a reverse proxy (like nginx) in front, or edit /etc/hosts to define how clients can reach the host (i.e. the Docker host, the machine running Docker Compose).
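For example, each client machine could carry hosts entries like these (with <docker-host-ip> standing in for the Docker host's actual address), so every per-developer name lands on the Docker host:

<docker-host-ip> developer_1.domain.com
<docker-host-ip> developer_2.domain.com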
The hostname is defined when you start the container, overriding anything you attempt to bake into the image.
At a high level, I'd recommend a mix of a custom docker-compose.yml and a volume per developer, with everyone running an identical image. The docker-compose.yml can include the hostname and domain settings. Everything on the filesystem that needs to be hostname-specific, including the license itself, should then point to files on the volume.
Lastly, include an entrypoint that does the right thing when a new hostname starts with a default or empty volume: populate it with the new hostname's data and prompt for the license.
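A minimal sketch of that layout (the volume name is illustrative; hostname and domainname are the compose keys doing the work):

version: '2'
services:
  php-apache:
    image: reypm/php55-dev
    hostname: developer_1
    domainname: domain.com
    ports:
      - "80:80"
    volumes:
      - developer_1_www:/var/www
volumes:
  developer_1_www:

Each developer keeps an otherwise identical file, changing only the hostname and the volume name.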
