Couldn't connect containers using docker-compose.yaml file - docker

I created two Dockerfiles to run the frontend and backend of a web application. When I run the docker-compose.yaml file, the application's front end opens in the web browser, but I cannot log in to the system. I think there is a problem with the connection between the containers. Following is my docker-compose.yaml file. What can I do to resolve this problem?
version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
  python:
    build: '.'
    ports:
      - "5000:5000"

You don't need --links for this; Compose already places the services on a shared network. Have the frontend talk to the backend via its DNS network alias, like http://python:5000.

Containers within a docker-compose file are part of one network by default, and one container can access another using its hostname.
The hostname can be defined in the docker-compose file using hostname; if it is not defined, the service name is used as the hostname.
Internally, Docker containers can talk to each other by referring to one another by hostname. In your case, gulp can access python at http://python:5000, and that would work even if you did not declare ports, because the traffic stays inside the Docker network.
From outside, if you want to connect to any of the services, you define ports, as you did, and then access those services on the published port numbers.
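For example, a minimal sketch (assuming the UI reads its API base URL from an environment variable, here called API_URL, which is an assumption and not shown in the question):

version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
    environment:
      # the service name "python" resolves inside the Compose network
      - API_URL=http://python:5000
  python:
    build: '.'
    ports:
      - "5000:5000"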

Related

How can I make all containers work together on the localhost domain

I have 8 frontend apps and 12 backend servers. The frontends are Vue.js or AngularJS; the backends are ASP.NET Core 3.1, plus SQL Server, Redis, and other services.
All services have similar Docker container configs, except for logging, ports, and so on. They all run in the same named network, mynetwork:
abcservice:
  image: ${DOCKER_REGISTRY-}abcservice
  container_name: abcServer
  hostname: abcservice
  build:
    context: .
    dockerfile: abcService/Dockerfile
  networks:
    - mynetwork
But I have to use http://host.docker.internal:{portnumber} so that all containers can work together. How can I make all apps work together on http://localhost:{portnumber}?
Let's say I have a simple ASP.NET Core app. If it is started WITHOUT Docker, it can access SQL Server (running in Docker) and Redis (running in Docker) at http://localhost:port. But once I start it with Docker, I have to access the app via http://host.docker.internal:port, otherwise it cannot reach SQL Server and Redis, because inside a container localhost means the container itself. I need some config that lets a container reach other containers using localhost and the specified ports.
I'd appreciate any help.
Option 1: Environment variables
You can use ports for all services and environment variables in a .env file to switch between hostnames. The .env file works out of the box with Docker Compose, see the docs.
Using ports:
ports:
  - 6379:6379
Sample .env file:
REDIS_HOST=redis
REDIS_PORT=6379
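Putting it together, a sketch (the service names app and redis are assumptions, and the application is assumed to read REDIS_HOST and REDIS_PORT from its environment):

version: '3'
services:
  app:
    build: .
    environment:
      # values come from the .env file via Docker Compose substitution
      - REDIS_HOST=${REDIS_HOST}
      - REDIS_PORT=${REDIS_PORT}
  redis:
    image: redis
    ports:
      - "${REDIS_PORT}:6379"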
Option 2: using network_mode host
Another option is to apply the host's network settings to each service using network_mode, so the service shares the host's network stack instead of running in an isolated one.
network_mode: host
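Applied to one of the services above, a sketch (note that this works on Linux hosts, and that ports: mappings are ignored in host mode because the container uses the host's network stack directly):

services:
  abcservice:
    build:
      context: .
      dockerfile: abcService/Dockerfile
    # no ports: or networks: needed; the service binds directly on the host
    network_mode: host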

docker compose communication with container

I am trying to create an example with two Web APIs, containerize them, and have them communicate with each other.
I would like to see the sidecar design pattern in action; I found an example on GitHub that I am trying to run.
https://github.com/cesaroll/dotnet-sidecar
In the above example, HelloAPI makes a call to the HelloSideCar API, which is a different project. I am trying to run it locally using Docker Compose.
When I try to hit the sidecar API from the HelloAPI project (localhost:8080/FromSidecar), I see a 404 error; the request is not reaching the other container.
Below is my Docker Compose:
# docker-compose up -d
# docker-compose stop
# docker-compose rm -f
version: '3.8'
services:
  hello-sidecar-api:
    image: hello-sidecar-api:latest
    container_name: hello-sidecar-api
    ports:
      - "8180:8080"
  hello-api:
    image: helloapi:latest
    container_name: hello-api
    environment:
      - SIDERCAR_URL=http://localhost:8180/
    depends_on:
      - hello-sidecar-api
    ports:
      - "8080:8080"
You can either change SIDECAR_URL to http://hello-sidecar-api:8080/ or place both APIs in the same container. Note that I also changed the port from 8180 to 8080: 8180 is the port mapped on the host, but inside your Docker network the API is accessible to other containers on 8080.
Your containers are separate network entities with their own IPs, so when you call http://localhost:8180 from inside a container, you're not calling the host but the very container the request originates from (assuming you're not using the host network driver).
What you are trying to do here resembles the behavior of pods in Kubernetes (where the term sidecar is widely used). In Kubernetes you could put these two containers in one pod, and then they could call each other on localhost.
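A corrected service definition would look like this (a sketch; the variable keeps the SIDERCAR_URL spelling used in the original file):

hello-api:
  image: helloapi:latest
  container_name: hello-api
  environment:
    # service DNS name plus the container port, not the host-mapped port
    - SIDERCAR_URL=http://hello-sidecar-api:8080/
  depends_on:
    - hello-sidecar-api
  ports:
    - "8080:8080"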

Docker Compose setting Hostname

I have the basic docker-compose.yml shown below for an Apache server, and I was wondering if there is a way to configure this server, which is currently accessible at 0.0.0.0:8889 or localhost:8889, so that it is reachable at a custom hostname such as local.foobar.dev?
version: '3'
services:
  snappyweb:
    image: php:7.0-apache
    ports:
      - "8889:80"
    volumes:
      - ./:/var/www/html
I assume you want this to work only in your local dev environment.
The easiest and safest way is to use an application that essentially tricks your local browser into resolving a URL of your choice to a certain address, e.g. localhost:8889 in your case.
For this I use GasMask (macOS); on Windows you can use Host File Manager.
If you need to reach your container from your host using a custom name, you will need to map local.foobar.dev to the desired IP manually by adding the following line to /etc/hosts:
127.0.0.1 local.foobar.dev
For communication between containers, you can create a docker network and add the containers to this network. Then containers can reach each other using container names.
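A sketch of that container-to-container setup (the network name appnet and the client service are assumptions for illustration):

version: '3'
services:
  snappyweb:
    image: php:7.0-apache
    ports:
      - "8889:80"
    networks:
      - appnet
  client:
    image: alpine
    # the service name "snappyweb" resolves on the shared network
    command: ping snappyweb
    networks:
      - appnet
networks:
  appnet: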

Is it possible to tie a domain to a Docker container when building it?

Currently the company I work for has a central development server with a LAMP environment. Each developer accesses the application as developer_username.domain.com. The application we're working on uses licenses, and the licenses are generated per domain and tied to that domain only, meaning I can't use another developer's license.
The following example will give you an idea:
developer_1.domain.com ==> license1
developer_2.domain.com ==> license2
developer_n.domain.com ==> licenseN
I am trying to dockerize this environment, at least having PHP and Apache in a container, and I was able to create everything I need and it works. Take a look at this docker-compose.yml:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reypm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
      - "9001:9001"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
That builds what I need, but the problem comes when I try to access the server: because I am using http://localhost, the license won't work and I won't be able to use the application.
The idea is to access it as developer_username.domain.com, so my question is: is this work that should be done in the Dockerfile or in Docker Compose, i.e. at the image/container level, say by setting an ENV var, or is this a job for /etc/hosts on the host running Docker?
tl;dr
No! Docker doesn't do that for you.
Long answer:
What you want is to have a custom hostname on the machine hosting Docker mapped to a container in the Docker Compose network, right?
Let's take a step back and see how networking in Docker works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This network is not the same as your host network, and without explicitly publishing ports for a specific container you have no access to it. All publishing a port does is this:
The exposed port is accessible on the host and the ports are available to any client that can reach the host.
From there you can put a reverse proxy (like nginx) in front, or edit /etc/hosts to define how clients reach the host (i.e. the Docker host, the machine running Docker Compose).
The hostname is defined when you start the container, overriding anything you attempt to bake into the image. At a high level, I'd recommend doing this with a mix of a custom docker-compose.yml and a volume per developer, with everyone running an identical image. The docker-compose.yml can include the hostname and domain settings. Everything else that needs to be hostname-specific on the filesystem, including the license itself, should point to files on the volume. Lastly, include an entrypoint that does the right thing when a new hostname is started with a default or empty volume, populating it with the new hostname's data and prompting for the license.
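A per-developer sketch of that idea (the hostname, domainname, and volume name here are assumptions):

version: '2'
services:
  php-apache:
    image: reypm/php55-dev
    hostname: developer_1
    # the container then sees itself as developer_1.domain.com
    domainname: domain.com
    volumes:
      # per-developer volume holding the license and hostname-specific files
      - dev1_data:/var/www
volumes:
  dev1_data: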

Link Running External Docker to docker-compose services

I assume there is a way to link via one or a combination of the following: links, external_links, and networking.
Any ideas? I have come up empty-handed so far.
Here is an example snippet of a docker-compose.yml which is started from within a separate Ubuntu container:
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
  redis:
    image: redis
I want to be able to connect to the Redis port from the container that launched the docker-compose.
I do not want to bind the ports on the host, as that would mean I can't start multiple docker-compose stacks from the same model.
-- context --
I am attempting to run docker-compose from within a Jenkins Maven build container so that I can run tests, but I cannot for the life of me get the original container to access the exposed ports of the docker-compose services.
Reference the machines by hostname; the v2 format automatically connects the services by hostname on a private network by default. You'll be able to ping "web" and "redis" from within each container. If you want to access the machines from your host, include a "ports" definition for each service in your yml.
The v1 links were removed from the v2 compose syntax since they are now implicit. From the Docker Compose file documentation:
links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself...
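For the cross-stack case in the question, one way to reach redis without binding host ports is to have the outer container join the Compose project's network as an external network (a sketch; the network name myproj_default depends on the Compose project name and is an assumption):

# compose file for the outer (Jenkins) container
version: '2'
services:
  jenkins:
    image: jenkins/jenkins
    networks:
      - myproj_default
networks:
  myproj_default:
    # join the default network created by the other docker-compose project
    external: true

Once attached, the outer container can resolve "redis" by name on that network.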
