Connect two Docker containers on localhost with existing port mappings - docker

I have two Docker containers that need to work together. One container (instance1) is connected to the client, and the other (instance2) needs to be reachable from instance1. When the client sends a request to instance1, instance1 should invoke a service on instance2, get the response, and pass it back to the client.
Currently, I'm using the following docker commands to run the images:
instance1
docker run --name instance1 -d -p 8290:8290 composite-service
This instance has a service at http://localhost:8290/composite.
This service invokes the service in instance2.
instance2
docker run --name instance2 -d -p 8291:8290 service-backend
This instance has a service at http://localhost:8291/service. When called, this service returns some data in response. (Actually, the service is started on port 8290 inside the container, but it is exposed externally on port 8291.)
The problem is that when the client calls the service in instance1 (http://localhost:8290/composite), it shows an error:
Connection refused or failed for : localhost/127.0.0.1:8291
How can I resolve this? I want to connect these two containers while keeping the existing port mappings passed in the docker run commands.
I tried the --link and --net options to connect the two containers, but the result was the same.

Building on bellackn's answer, the easiest way in my opinion is Docker Compose.
The compose file would look something like this:
version: "3.7"
services:
service:
image: composite-service:latest
ports:
- 8290:8290
backend:
image: service-backend:latest
expose:
- 8290
Now, instead of using docker run …, you save the code above in a file called docker-compose.yml and run docker-compose up from the folder that file is saved in.
Your composite-service should then no longer call http://localhost:8291/service, but instead use something like http://backend:8290/service.
You can read more about compose-files in the official documentation: https://docs.docker.com/compose/compose-file/
An added benefit is that this way the service-backend is only accessible within the Docker Compose network, not from your host.
If you do want access to your backend from the host, remove the expose section and add a ports mapping instead, as sketched below.
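A minimal sketch of that variant, assuming you want the backend published on host port 8291 as in the original docker run command:
  backend:
    image: service-backend:latest
    ports:
      - 8291:8290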

localhost inside a container always refers to the container itself (unless you start it with --net host).
If you want containers to communicate, I recommend Docker Compose. The containers can then refer to each other using their service names (if you run them in the same network, of course, but that's the default), i.e. instance1 could reach instance2 via http://instance2:8290/service. Note the container-internal port 8290 here: published host ports such as 8291 don't apply to container-to-container traffic.
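If you'd rather keep the existing docker run commands instead of Compose, a user-defined bridge network gives you the same name-based resolution; a sketch (the network name app-net is arbitrary):
docker network create app-net
docker run --name instance1 --network app-net -d -p 8290:8290 composite-service
docker run --name instance2 --network app-net -d -p 8291:8290 service-backend
instance1 can then call http://instance2:8290/service, while the host keeps using the published ports.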

Related

docker containers choosing different networks despite never setting any

I'm having a horrible time setting up the Docker configuration for my Go service. Below is an overview of my setup:
go_binary(
    name = "main_arm64",
    embed = [":server_lib"],
    goarch = "arm64",
    goos = "linux",
    visibility = ["//visibility:public"],
)
container_image(
    name = "ww_server_image",
    base = "@go_image_static_arm64//image",
    entrypoint = ["/main_arm64"],
    files = [":main_arm64"],
    ports = [
        "8080",
        "3306",
    ],
)
I have a GraphQL Playground (HTTP) running on http://localhost:8080, and despite the port supposedly being exposed, I can't access the Playground UI.
All I'm trying to do is:
Be able to access the GraphQL Playground and any other APIs running on other ports within the container
Be able to make requests from my Dockerized Go app to a separate MySQL container (I can't figure out how to put them on the same network with rules_docker)
docker exec -it ... /bin/bash into my container (this hasn't been working because bash isn't installed, and I have no idea how to install bash via this container_image rule)
Here is the error:
OCI runtime exec failed: exec failed: unable to start container process: exec: "bash": executable file not found in $PATH: unknown
If I take the generated Docker image ID and run docker run -p 8080:8080 IMAGE_ID, I'm able to access the GraphQL Playground, but the app can't communicate with the MySQL container.
If I change the network, as in docker run --network=host -p 8080:8080 IMAGE_ID, the Dockerized Go app can successfully communicate with the MySQL container, but then the GraphQL Playground becomes inaccessible. The Playground is only accessible if I keep --network=bridge. I'm not sure why MySQL isn't using bridge as well, since I never specified a network when starting it. This is how I started the MySQL container:
docker run -p 3306:3306 --name my-db -e MYSQL_ROOT_PASSWORD=testing -d mysql:8.0.31
So, you have several problems here:
First of all, you can most likely get a shell in containers that don't have bash installed through docker exec -it container_name /bin/sh, as most containers at least come with sh.
Second, your host machine can only have one service per port, so when you assign the host network to a container, you bypass the port mappings of the other containers. That is why your GraphQL Playground became unreachable after you started the Go app with --network=host: both use port 8080.
Third, when you use the default bridge network, your containers can only communicate by IP, not by container name.
Also, you don't need a port mapping for containers to communicate with each other. Port mapping is only required when something outside the Docker network needs access.
Your best option is to create a network with docker network create network_name and then assign it to all containers with --network network_name in the docker run command.
You don't necessarily need a port mapping to get your application running, but when you want to access a service from outside, e.g. from your host's browser, make sure to use a unique host port for each container. The outside port doesn't have to match the container's internal port; you can map, for instance, -p 8081:8080.
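For this particular setup, that could look something like the following sketch (the network name ww-net is arbitrary; IMAGE_ID stands for the image built from your container_image rule):
docker network create ww-net
docker run -d --name my-db --network ww-net -e MYSQL_ROOT_PASSWORD=testing mysql:8.0.31
docker run -d --name go-app --network ww-net -p 8080:8080 IMAGE_ID
The Go app can then reach MySQL at my-db:3306, while the Playground stays reachable from the host at localhost:8080.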
Since all containers belong to one app, you might also want to check whether Docker Compose is the better alternative, as it lets you manage all your containers with one config file.
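A rough sketch of what such a compose file might look like here (the image tag and credentials are assumptions based on the question; adjust to your build):
version: "3.7"
services:
  app:
    image: ww_server_image:latest  # assumed tag for the rules_docker image
    ports:
      - 8080:8080
    depends_on:
      - db
  db:
    image: mysql:8.0.31
    environment:
      - MYSQL_ROOT_PASSWORD=testing
The app would then connect to MySQL at db:3306.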
The answer was here:
Unable to connect to mysql server with go and docker - dial tcp 127.0.0.1:3306: connect: connection refused
It turns out I need to access MySQL using the following address, since Docker on Mac runs containers inside a Linux VM:
docker.for.mac.localhost:3306
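For illustration, with the MySQL container from the question, that would mean starting the Go app with something like the following (DB_HOST is a hypothetical environment variable your app would have to read):
docker run -p 8080:8080 -e DB_HOST=docker.for.mac.localhost:3306 IMAGE_ID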

Connect to docker-compose container from docker image

I've got a Docker image which accepts a DATABASE_URL env var, and I start the container with
docker run -p 3000:3000 -e DATABASE_URL=mysql://root:foobar@localhost:3309/app app_image
On startup the container should run migrations against a database bootstrapped from a docker-compose.yml file:
version: '3'
services:
  database:
    image: mysql:8.0
    environment:
      - MYSQL_DATABASE=app
      - MYSQL_ROOT_PASSWORD=foobar
    ports:
      - "3309:3306"
    volumes:
      - db:/var/lib/mysql
volumes:
  db:
Unfortunately, I always get Can't reach database at localhost:3309. I assume it has something to do with the network settings, but how do I configure them to make this work?
I've tried many different configurations (e.g. database, 127.0.0.1, etc. instead of localhost) but couldn't make it work, and I'm honestly running out of ideas.
Thanks!
Some clarifications:
Unless you specifically bind containers to the host network (e.g. --network host) or link them together in Docker Compose using links (see the links documentation), container A will not be able to reach anything on container B via localhost.
Docker Compose, unless told otherwise in the docker-compose.yaml file, automatically creates a separate network for all of the containers managed by that compose file; its name is usually based on the folder the file is in.
You can view it using docker network ls.
That means any container not managed by that compose file is not part of the same network by default.
One easy option (among many) is the following:
Decide on a network name which the compose file and your container will agree on.
Create this network beforehand (before starting the standalone container and the compose file); it will typically be a bridge network (see the docker network docs):
docker network create -d bridge my-bridge-network
Add it to the compose file so that Docker Compose uses it instead of the auto-created one; the top-level networks entry marks it as external, i.e. pre-created:
services:
  database:
    # ...
    networks:
      - my-bridge-network
networks:
  my-bridge-network:
    external: true
When starting the standalone container, specify which network to attach it to:
docker run -p 3000:3000 -e DATABASE_URL=mysql://root:foobar@database:3306/app --network my-bridge-network app_image
Notice that the port changed too: from inside the network you connect to the container port 3306, not the published host port 3309.
Also notice that the hostname for the database container is the service name from the compose file; you can give it additional names via network aliases under the service's networks section in the yaml.
All containers should now be on the same network; each one has its own IP and hostname with this method (so you can't use localhost for inter-container communication).
Alternative option:
Use network mode host for both the compose file and the standalone container; they can then talk to each other using localhost.
I don't recommend this option, and I can't find a good enough reason to use it over the others.

Docker container not able to call another running docker container from localhost?

I have two running Docker containers. One container calls the other, but the call fails and the application breaks. When I put my machine's hostname inside the application, it works.
This is a real dependency: if I deploy these two containers elsewhere, I again have to find that machine's hostname and put it inside my application. Is there any way to remove this dependency?
This URL is consumed by my Docker container and is failing:
http://localhost:8080/userData
The same call works when I update it with my hostname:
http://nl55443lldsfa:8080/userData
But this is really a dependency I cannot keep changing inside my application. Is there any workaround for this?
You should use docker-compose to run both containers and link them using the links property in your yaml file.
This might be a good example:
web:
  image: nginx:latest
  ports:
    - "8080:8080"
  links:
    - php
php:
  image: php
Then the IP of each container will be associated with its service name in the /etc/hosts file of both containers, and you will be able to access them from inside the containers just by using that hostname.
Also be sure to map the ports correctly; using http://localhost:8080 shouldn't fail if the ports are mapped correctly and the service is running.
Put the two containers inside the same network when running them. Only then can you use hostnames for inter-container communication.
Edit: And of course name your containers so you don't get a random container name each time.
Edit 2: The commands are:
$ docker network create -d bridge my-bridge-network
$ docker run -d \
--name webserver \
--network=my-bridge-network \
nginx:latest
$ docker run -d \
--name dbserver \
--network=my-bridge-network \
mysql:5.7
Containers started with both a name and a common network can use those names internally to communicate with each other.
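A quick way to verify the name resolution (assuming the Debian-based nginx image, which ships getent):
docker exec webserver getent hosts dbserver
This should print the internal IP that dbserver resolves to inside my-bridge-network.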

accessing a docker container from another container

I created two Docker containers based on two different images, one for a DB and another for a webserver. Both containers are running on my Mac OS X machine.
I can access the DB container from the host machine, and likewise can access the webserver from the host machine.
However, how do I access the DB from the webserver?
The way I started the DB container is
docker run --name oracle-db -p 1521:1521 -p 5501:5500 oracle/database:12.1.0.2-ee
I started the WLS container as
docker run --name oracle-wls -p 7001:7001 wls-image:latest
I can access the DB on the host by connecting to
sqlplus scott/welcome1@//localhost:1521/ORCLCDB
I can access WLS on the host at
http://localhost:7001/console
It's easy.
If you have two or more running containers, complete the following steps:
docker network create myNetwork
docker network connect myNetwork web1
docker network connect myNetwork web2
Now you can connect from web1 to the web2 container, or the other way round.
Use the internal network IP addresses which you can find by running:
docker network inspect myNetwork
Note that only internal IP addresses and ports are accessible to the containers connected by the network bridge.
So, for example, assuming that the web1 container was started with docker run -p 80:8888 web1 (meaning that its server is running on port 8888 internally), and inspecting myNetwork shows that web1's IP is 172.0.0.2, you can connect from web2 to web1 using curl 172.0.0.2:8888.
The easiest way is to use --link; however, newer versions of Docker are moving away from that, and in fact that switch will be removed soon.
The link below offers a nice how-to on connecting two containers. You can skip the attach portion, since that is just a useful how-to on adding items to images.
https://web.archive.org/web/20160310072132/https://deis.com/blog/2016/connecting-docker-containers-1/
The part you are interested in is the communication between two containers. The easiest way is to refer to the DB container by name from the webserver container.
Example:
Say you named the DB container db1 and the webserver container web0. The containers should both be on the bridge network, which means the web container should be able to connect to the DB container by referring to its name.
So if you have a web config file for your app, then for the DB host you would use the name db1.
If you are using an older version of Docker, you should use --link.
Example:
Step 1: docker run --name db1 oracle/database:12.1.0.2-ee
then when you start the web app. use:
Step 2: docker run --name web0 --link db1 webapp/webapp:3.0
and the web app will be linked to the DB. However, as I said, the --link switch will be removed soon.
I'd use Docker Compose instead, which will build a network for you. However, you will need to download Docker Compose for your system: https://docs.docker.com/compose/install/#prerequisites
An example setup looks like this:
The file name is base.yml:
version: "2"
services:
webserver:
image: moodlehq/moodle-php-apache:7.1
depends_on:
- db
volumes:
- "/var/www/html:/var/www/html"
- "/home/some_user/web/apache2_faildumps.conf:/etc/apache2/conf-enabled/apache2_faildumps.conf"
environment:
MOODLE_DOCKER_DBTYPE: pgsql
MOODLE_DOCKER_DBNAME: moodle
MOODLE_DOCKER_DBUSER: moodle
MOODLE_DOCKER_DBPASS: "m#0dl3ing"
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
db:
image: postgres:9
environment:
POSTGRES_USER: moodle
POSTGRES_PASSWORD: "m#0dl3ing"
POSTGRES_DB: moodle
HTTP_PROXY: "${HTTP_PROXY}"
HTTPS_PROXY: "${HTTPS_PROXY}"
NO_PROXY: "${NO_PROXY}"
This will give the network a generic name (I can't remember off the top of my head what that name is) unless you use the -p/--project-name switch, e.g.
docker-compose -p setup1 -f base.yml up
NOTE: if you use the project-name switch, you will need to use it whenever calling docker-compose, e.g. docker-compose -p setup1 -f base.yml down. This is so you can have more than one instance of webserver and db, and so docker-compose knows which instance you want to run commands against; it also lets you have more than one instance running at once. Great for CI/CD if you are running tests in parallel on the same server.
docker-compose also mirrors docker's commands, so: docker-compose -p setup1 -f base.yml exec webserver do_some_command
The best part is, if you want to change DBs or something like that for unit tests, you can include an additional .yml file in the up command and it will overwrite any items with similar names; I think of it as a key=>value replacement.
Example:
db.yml
version: "2"
services:
webserver:
environment:
MOODLE_DOCKER_DBTYPE: oci
MOODLE_DOCKER_DBNAME: XE
db:
image: moodlehq/moodle-db-oracle
Then call docker-compose -p setup1 -f base.yml -f db.yml up
This will override the db service with a different setup. When you need to connect to these services from each container, you use the names set under services, in this case webserver and db.
I think this might actually be a more useful setup in your case, since you can set all the variables you need in the yml files and just run the docker-compose command when you need the containers started. A start-it-and-forget-it setup.
NOTE: I did not use the ports option, since exposing ports is not needed for container-to-container communication. It is needed only if you want the host, or an application outside of the host, to connect to a container. If you expose a port, it is open to all communication that the host allows; exposing web on port 80 is the same as starting a webserver on the physical host and will allow outside connections, if the host allows it. Also, if you want to run more than one web app at once, for whatever reason, exposing port 80 will prevent you from running additional web apps on that port. So, for CI/CD it is best not to expose ports at all, and if you use docker-compose with the project-name switch, each set of containers will be on its own network, so they won't collide. You pretty much get a container of containers.
UPDATE: After using these features further, and seeing how others have done it for CI/CD programs like Jenkins, a plain Docker network is also a viable solution.
Example:
docker network create test_network
The above command creates a "test_network" to which you can attach other containers, which is made easy with the --network switch.
Example:
docker run \
--detach \
--name db1 \
--network test_network \
-e MYSQL_ROOT_PASSWORD="${DBPASS}" \
-e MYSQL_DATABASE="${DBNAME}" \
-e MYSQL_USER="${DBUSER}" \
-e MYSQL_PASSWORD="${DBPASS}" \
--tmpfs /var/lib/mysql:rw \
mysql:5
Of course, if you have proxy network settings, you should still pass those into the containers using the -e or --env-file switches, so the containers can communicate with the internet. Docker says the proxy settings should be absorbed by containers in newer versions of Docker; however, I still pass them in out of habit. This is the replacement for the --link switch, which is going away. Once the containers are attached to the network you created, you can still refer to them from other containers using the container's name; per the example above that would be db1. You just have to make sure all containers are connected to the same network, and you are good to go.
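For instance, a hypothetical web app container on the same network could reach the database simply by the name db1 (the image name and env var below are made up for illustration):
docker run \
  --detach \
  --name web1 \
  --network test_network \
  -e DB_HOST=db1 \
  my-webapp:latest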
For a detailed example of using a network in a CI/CD pipeline, you can refer to this link:
https://git.in.moodle.com/integration/nightlyscripts/blob/master/runner/master/run.sh
This is the script that is run in Jenkins for Moodle's huge integration tests, but the idea/example can be used anywhere. I hope this helps others.
You will have to access the DB through the IP of the host machine; or, if you want to access it via localhost:1521, run the webserver like
docker run --net=host --name oracle-wls wls-image:latest
See here
Using docker-compose, services are exposed to each other by name by default. Docs.
You could also specify an alias like;
version: '2.1'
services:
  mongo:
    image: mongo:3.2.11
  redis:
    image: redis:3.2.10
  api:
    image: some-image
    depends_on:
      - mongo
      - solr
    links:
      - "mongo:mongo.openconceptlab.org"
      - "solr:solr.openconceptlab.org"
      - "some-service:some-alias"
You can then access the service using the specified alias as a host name, e.g. mongo.openconceptlab.org for mongo in this case.
Environment: Windows 10, Docker Desktop version 4.5.1.
Use hostname host.docker.internal to access services running on your host machine from inside a container.
See: https://docs.docker.com/desktop/windows/networking/#use-cases-and-workarounds
I run PostgreSQL in one container and my app in a separate container.
I configure the app database connection to use host.docker.internal as the hostname and it just works.
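As an illustration, the connection string might look something like this (user, password, and database name are placeholders):
postgres://user:secret@host.docker.internal:5432/appdb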
Consider an example:
We create two containers here, a PostgreSQL server and pgAdmin (a tool for accessing servers, like phpMyAdmin, SQL Studio, or Workbench).
Exposed ports:
PostgreSQL ---> 5436
pgAdmin ---> 5050
After adding a server in pgAdmin with hostname localhost, it shows a connection error, because inside the pgAdmin container localhost refers to that container itself; we need the PostgreSQL container's address instead.
docker network create con
docker network connect con app1
docker network connect con app2
This command shows the connected containers' IP addresses and other details:
docker network inspect con
Now you can see the IP addresses in the network inspect output. Choose the Postgres container's IP. You can access the other exposed ports through this IP; here only Postgres port 5432 is exposed. Set the hostname to that container IP and it will work.
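If you only want the name-to-IP mapping, a Go template filter on the inspect command may help; a sketch:
docker network inspect con --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{println}}{{end}}'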
You can use the default Docker network. If you don't want to set up any Docker networking, you can do this:
Copy the IP address under Docker subnet in Resources > Network in Docker Preferences on Mac:
Docker preferences screenshot
As you can see from the screenshot, the IP address is
192.168.65.0
You just need to replace "localhost" in your containers' config file with "192.168.65.1" (i.e. the IP address picked + 1).
You can start your containers and should be set for local development/testing.
For some more details, you can see my article:
Connect Docker containers the easy way
In my case, connecting the application in one container to another container via the IP provided by the bridge didn't work.
But it works with the name of the container (see my screenshot).
So you can replace the IP with the name of the container.

Link & Expose Docker Container Simultaniously

Does the following command link two containers and also expose the port on my network?
docker run -d -p 5000:5000 --link my-postgres:postgres danwahlin/aspnetcore
I'm watching Dan Wahlin's course on Docker, and this one command is blowing my mind. Does this mean that port 5000 will be accessible from my network AND linked between the two containers? If so, then the link isn't essential for communication between the containers, since they could just use the IP and port in a config file. Correct?
It looks like you're confusing "legacy linking" with "container networks". Creating a link, as your example shows, creates an entry in the container's hosts file so the containers can resolve each other by name.
In the example above, you created an alias of "postgres" for the "my-postgres" container. Think name resolution here. This does nothing to isolate the network stack.
Next you have the --publish or -p switch, which publishes a container port to the host network. Here you are exposing port 5000. Without this switch you would not expose anything and, therefore, would not receive any incoming calls.
Should you want to isolate containers you could do so using a "bridge network" like so:
docker network create --driver bridge mynetwork
Once the network is created, only containers added to the network will communicate with each other. For example:
docker run -d --net=mynetwork --name postgres postgres:latest
docker run -d --net=mynetwork --name node node:latest
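On that network the containers can then reach each other by name, e.g. the node container could connect to postgres:5432. A quick resolution check (assuming Debian-based images, which ship getent):
docker exec node getent hosts postgres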
