Link Running External Docker to docker-compose services

I assume that there is a way to link via one or a combination of the following: links, external_links and networking.
Any ideas? I have come up empty handed so far.
Here is an example snippet of a docker-compose.yml which is started from within a separate Ubuntu container:
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
  redis:
    image: redis
I want to be able to connect to the redis port from the container that launched the docker-compose.
I do not want to bind the ports on the host, as that would mean I can't start multiple copies of the same docker-compose stack.
-- context --
I am attempting to run a docker-compose stack from within a Jenkins Maven build container so that I can run tests. But I cannot, for the life of me, get the original container to access the exposed ports of the docker-compose services.

Reference the machines by hostname; the v2 format automatically connects the containers by hostname on a private network by default. You'll be able to ping "web" and "redis" from within each container. If you want to access the machines from your host, include a "ports" definition for each service in your yml.
The v1 links were removed from the v2 compose syntax since they are now implicit. From the docker compose file documentation:
links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself...
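If publishing ports on the host is off the table, one option (a sketch, not covered by the answer above) is to attach the outer container to the network Compose creates for the project. Compose v2 names it <project>_default, where the project defaults to the directory name; the names myproject and jenkins below are assumptions:
# attach the Jenkins build container to the compose network
docker network connect myproject_default jenkins
# from inside the jenkins container, "redis" now resolves (if redis-cli is installed):
redis-cli -h redis -p 6379 ping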

Related

docker compose communication with container

I am trying to create an example with two Web APIs, containerize them, and have them communicate with each other.
I would like to try the sidecar design pattern; I found an example on GitHub that I am trying to run.
https://github.com/cesaroll/dotnet-sidecar
In the above example, HelloAPI makes a call to the HelloSideCar API, which is a separate project. I am trying to run this locally using Docker Compose.
When I hit the SideCarAPI from the HelloAPI project (localhost:8080/FromSidecar), I get a 404 error; the request is not reaching the other container.
Below is my Docker Compose:
# docker-compose up -d
# docker-compose stop
# docker-compose rm -f
version: '3.8'
services:
  hello-sidecar-api:
    image: hello-sidecar-api:latest
    container_name: hello-sidecar-api
    ports:
      - "8180:8080"
  hello-api:
    image: helloapi:latest
    container_name: hello-api
    environment:
      - SIDECAR_URL=http://localhost:8180/
    depends_on:
      - hello-sidecar-api
    ports:
      - "8080:8080"
You can either change SIDECAR_URL to http://hello-sidecar-api:8080/ or place both APIs in the same container. Note that I also changed the port from 8180 to 8080, because 8180 is a port mapped on the host, but inside your Docker network your API is accessible by other containers on 8080.
Your containers are separate network entities with their own IPs, so when you call http://localhost:8180 from inside a container, you're not calling the host but the very container the request originates from (assuming you're not using the host network driver).
What you are trying to do here resembles the behavior of pods in Kubernetes (where the sidecar term is widely used). In Kubernetes you could put these two containers in one pod and then they could call each other on localhost.
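Putting that fix into the compose file from the question, only the environment line changes (just the relevant lines shown):
  hello-api:
    image: helloapi:latest
    environment:
      - SIDECAR_URL=http://hello-sidecar-api:8080/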

How should I connect to a container using the host rather than the service name?

I'd like to be able to connect to localstack using the host rather than the service name. I have added the localstack image to my docker-compose file and set network_mode: "host". I can connect to http://localhost:8080 from my other containers, but I cannot connect to http://localhost:8080 from my host machine. How can I connect to a container using localhost rather than the service name? I'm not sure if I have misunderstood what network_mode: "host" does.
version: "3"
services:
localstack:
image: localstack/localstack:latest
network_mode: "host"
ports:
- "4567-4584:4567-4584"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- AWS_REGION=us-east-1
- SERVICES=sqs
The problem is that I'm using CircleCI to run some component tests, and it seems that in CircleCI you can only reference other services on localhost, not via the service name. This means there are some differences between my local environment and test environment configs. I tried running docker-compose in CircleCI, but it seems to freak out when doing that, so I wanted to see if I can reference localhost between services in docker-compose.
This happens because Docker for Mac runs inside a virtual machine using the xhyve hypervisor, not natively on macOS.
When you run a container with net=host you are actually using the network of that VM, not the one of your local machine.
This is a known limitation of Docker for Mac given the nature of how it works.
The only way to access a container is port mapping, so if you remove network_mode: "host" from your docker-compose file it should work, as you are already mapping ports.
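A sketch of the same file with that fix applied (network_mode removed, everything else unchanged from the question):
version: "3"
services:
  localstack:
    image: localstack/localstack:latest
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - AWS_REGION=us-east-1
      - SERVICES=sqs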

How do I access Mopidy running in Docker container from another container

To start, I am more familiar with running Docker through Portainer than through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want is for my Slack bot to be able to run mpc commands, such as muting the volume. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could ssh into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them with docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
service1:
image: image1
ports:
# the following only necessary to access port from host machine
- "host_port:container_port"
service2:
image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the service1:port address.
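Applied to the Mopidy question: assuming the two services are named mopidy and slackbot (the names are illustrative) and Mopidy's MPD frontend listens on its default port 6600, the bot container can drive the player by service name:
# run from inside the slackbot container; "mopidy" resolves via the compose network
mpc --host mopidy --port 6600 volume 0
mpc --host mopidy --port 6600 pause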

Couldn't connect containers using docker-compose.yaml file

I created two Dockerfiles to run the frontend and backend of a web application. When I run the docker-compose.yaml file, the web application's front-end opens in the web browser, but I cannot log in to the system. I think there is a problem with the connection between the containers. The following is my docker-compose.yaml file. What can I do to resolve this problem?
version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
  python:
    build: '.'
    ports:
      - "5000:5000"
You don't need links to enable communication between containers; use the service's DNS network alias, like http://python:5000.
Containers within a docker-compose file are part of one network by default, and one container can access another using its hostname.
The hostname can be defined in the docker-compose file using hostname; if hostname is not defined, the service name is used as the hostname.
Internally, Docker containers can talk to each other by referring to each other by hostname. In your case, gulp can access python at http://python:5000, and that would be possible even if you did not declare ports. This all works because it is internal to the Docker network.
From outside, if you want to connect to any of the services, you can define ports, as you did, and then access those services at the defined port numbers.
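A quick way to verify this from the host (assuming curl exists in the gulp image; the /login path is only an illustration):
# runs curl inside the gulp container against the python service
docker-compose exec gulp curl http://python:5000/login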

Is it possible to tie a domain to a Docker container when building it?

Currently, in the company where I am working, there is a central development server which contains a LAMP environment. Each developer accesses the application as developer_username.domain.com. The application we're working on uses licenses; the licenses are generated under each domain and are tied to that domain only, meaning I can't use a license from another developer.
The following example will give you an idea:
developer_1.domain.com ==> license1
developer_2.domain.com ==> license2
developer_n.domain.com ==> licenseN
I am trying to dockerize this environment, at least having PHP and Apache in a container, and I was able to create everything I need and it works. Take a look at this docker-compose.yml:
version: '2'
services:
  php-apache:
    env_file:
      - dev_variables.env
    image: reypm/php55-dev
    build:
      context: .
      args:
        - PUID=1000
        - PGID=1000
    ports:
      - "80:80"
      - "9001:9001"
    extra_hosts:
      - "dockerhost:xxx.xxx.xxx.xxx"
    volumes:
      - ~/var/www:/var/www
That will build what I need, but the problem comes when I try to access the server: I am using http://localhost, so the license won't work and I won't be able to use the application.
The idea is to access it as developer_username.domain.com, so my question is: is this work that should be done in the Dockerfile or in Docker Compose, i.e. at the image/container level, say by setting an ENV var, or is this a job for /etc/hosts on the host running Docker?
tl;dr
No! Docker doesn't do that for you.
Long answer:
What you want is a custom hostname on the machine hosting Docker mapped to a container in the Docker Compose network, right?
Let's take a step back and see how networking in docker works:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
This network is not equal to your host network, and without explicitly publishing ports (for a specific container) you have no access to it. All publishing does is this:
The exposed port is accessible on the host and the ports are available to any client that can reach the host.
From here you can either put a reverse proxy (like nginx) in front, or edit /etc/hosts to define how clients can reach the host (i.e. the Docker host, the machine running Docker Compose).
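For local development the /etc/hosts route is the simplest. A sketch, assuming the compose file above publishes port 80 on the local machine:
# /etc/hosts on the machine running Docker
127.0.0.1   developer_1.domain.com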
The hostname is defined when you start the container, overwriting anything you attempt to put inside the image. At a high level, I'd recommend doing this with a custom docker-compose.yml and a volume per developer, with everyone running an identical image. The docker-compose.yml can include the hostname and domain settings. Everything else that needs to be hostname-specific on the filesystem, including the license itself, should point to files on the volume. Lastly, include an entrypoint that does the right thing if a new hostname is started with a default or empty volume: populate it with the new hostname's data and prompt for the license.
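A sketch of such a per-developer file (hostname and domainname are standard compose keys; the service and image names follow the question, the volume name is illustrative):
version: '2'
services:
  php-apache:
    image: reypm/php55-dev
    hostname: developer_1
    domainname: domain.com
    ports:
      - "80:80"
    volumes:
      # hostname-specific files and the license live on the per-developer volume
      - developer_1_www:/var/www
volumes:
  developer_1_www: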
