Run API tests in docker using http://localhost:port - docker

There is an app for booking flights, and our team is implementing integrations that interact with it. We now need to test those APIs in a CI environment.
Therefore, I have created a Dockerfile in my API testing framework:
FROM golang:1.13
ADD . /app
WORKDIR /app
RUN go mod download
CMD ["make", "test", "$URL=", "$INTEGRATION=", "$TESTTYPE=", "$TAGS="]
And also I have created a docker-compose.yml file in the integration repo which I should test:
version: '3'
services:
  tests:
    image: int-tests:latest
    environment:
      - URL=http://localhost:3000/
      - INTEGRATION=pitane
      - TESTTYPE=integration
      - TAGS=quotes
I have tried to build and run the integration tests using localhost. Locally I can do it without Docker, but inside Docker I cannot use the integration's localhost URL to call their endpoints. Is there any way to do it?
This is the error message that I'm getting:
msg="Post http://localhost:3000/v1/quote: dial tcp 127.0.0.1:3000: connect: connection refusedUnable to get response"

msg="Post http://localhost:3000/v1/quote: dial tcp 127.0.0.1:3000:
connect: connection refusedUnable to get response"
You are trying to connect to the container itself. Use the host's IP address or network_mode: "host".
Update:
You can find out your host's IP from inside the container with ip route | awk '/default/ { print $3 }' (the default gateway seen from the container is the host's address on the Docker bridge) and then use this IP in the URL environment variable.
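For example, a minimal sketch of the compose file above with a gateway IP filled in. 172.17.0.1 is only a typical default-bridge value; substitute whatever the ip route command prints inside the container, and note the booking app on the host must be listening on that interface, not only on 127.0.0.1:
version: '3'
services:
  tests:
    image: int-tests:latest
    environment:
      # 172.17.0.1 is a typical bridge-gateway value; replace it with whatever
      # `ip route | awk '/default/ { print $3 }'` prints inside the container
      - URL=http://172.17.0.1:3000/
      # INTEGRATION, TESTTYPE and TAGS stay as before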
Or you can get rid of the container's network isolation altogether by setting network_mode: "host" in docker-compose.yml (not the preferred way, though):
https://docs.docker.com/compose/compose-file/#network_mode
In this case localhost will denote the host machine's localhost.
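A minimal sketch of that variant, assuming the same service definition:
version: '3'
services:
  tests:
    image: int-tests:latest
    # shares the host's network stack; published ports are not used in this mode
    network_mode: "host"
    environment:
      - URL=http://localhost:3000/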
Also, you can use host.docker.internal if you are on Docker for Mac (not the preferred way either):
https://docs.docker.com/docker-for-mac/networking/
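In that case the only change is the URL, for example:
environment:
  # host.docker.internal resolves to the host machine from inside the container (Docker for Mac)
  - URL=http://host.docker.internal:3000/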

In the Dockerfile, expose the appropriate port (the port others will use to connect to your service):
EXPOSE 80
If you want to access the started container from the host machine, you have to map that port to the host. You do that with the docker command:
docker run -p 80:80 ....
or with docker compose:
ports:
  - '80:80'

Related

Docker container cannot be accessed from localhost despite both --network host and expose ports are set

My system is composed of two parts: a Postgres local_db and a Node.js Express server that communicates with it via the Prisma ORM. Whenever the Node.js server receives a GET request on localhost:4000/, it replies with a 200-code message, as shown in the code:
app.get("/", (_req, res) => res.send("Hello!"))
Basically, this behavior is later used as a health check.
The database is instantiated by the docker-compose.yml (I omit parts not related to networking):
services:
  timescaledb:
    image: timescale/timescaledb:2.8.1-pg14
    container_name: timescale-db
    ports:
      - "5000:5432"
And the Node.js backend runs in a container, whose Dockerfile is (omitting the parts related to building the Node.js app):
FROM node:18
# Declare and set environment variables
ENV TIMESCALE_DATABASE_URL="postgres://postgres:password@localhost:5000/postgres?connect_timeout=300"
# Build app
RUN npm run build
# Expose the listening port
EXPOSE 4000
# Run container as non-root (unprivileged) user
# The node user is provided in the Node.js base image
USER node
CMD npx prisma migrate deploy; node ./build/main.js
The container is made to run via:
docker run -it --network host --name backend my-backend-image
However, despite the container actually finding and successfully connecting to the database (thus populating it), I cannot access localhost:4000 from the host machine; it tells me connection refused. Furthermore, using curl I obtain the same reply:
$ curl -f http://localhost:4000
curl: (7) Failed to connect to localhost port 4000: Connection refused
I have even tried to connect to the actual localhost IP 127.0.0.1:4000, but the connection is still refused, and to the Docker daemon address http://172.17.0.1:4000, but that connection keeps hanging.
I do not understand why I cannot access it, even though I have set the --network host flag when running the container, which should map the container's ports one-to-one onto my host machine.

No route to host within docker container

I am running a Debian docker container on a Windows 10 machine which needs to access a particular URL on port 9000 (164.16.240.30:9000).
The host machine can access it fine via the browser; however, when I log in to the container's terminal and run wget 172.17.240.30:9000 I get failed: No route to host.
In an attempt to resolve this I added:
ports:
- 9000:9000
to the docker-compose.yml file, however that doesn't seem to have made any difference.
In case you can't guess, I'm new to this, so what would you try next?
Entire docker-compose.yml file:
version: '3.4'
services:
  tokengeneratorapi:
    network_mode: host
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    build:
      context: .
      dockerfile: TokenGeneratorApi/Dockerfile
    ports:
      - 5000:80
      - 9000
    environment:
      ASPNETCORE_ENVIRONMENT: local
      SSM_PATH: /ic/env1/tokengeneratorapi/
      AWS_ACCESS_KEY_ID:
      AWS_SECRET_ACCESS_KEY:
Command I'm running:
docker-compose build --build-arg BRANCH=featuretest --build-arg CHANGE_ID=99 --build-arg CHANGE_TARGET=develop --build-arg SONAR_SERVER=164.16.240.30
It seems it's the container that has the connectivity issue, so your proposed solution is unlikely to work, since that only maps a host port to a container port (and your target URL is not the actual host).
Check out https://docs.docker.com/compose/compose-file/#network_mode and try setting it to host.
Your browser has access to 164.16.240.30:9000 because it goes through a proxy (a typical enterprise environment), so the proxy has network connectivity to 164.16.240.30. That doesn't mean your host has the same network connectivity; actually, it looks like it doesn't. That is the reason why a direct wget from the container or from the terminal fails with No route to host.
Everything must go through the proxy. Try to configure the proxy properly - Linux apps usually honour the http_proxy and https_proxy environment variables, but apps may have their own proxy options, and eventually you may need to configure it at the source-code level. It depends on the app/code used.
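A minimal sketch of the environment-variable approach in the compose file above (the proxy address is a placeholder - use your real corporate proxy):
services:
  tokengeneratorapi:
    environment:
      # placeholder proxy address, not a real endpoint
      http_proxy: http://proxy.example.local:8080
      https_proxy: http://proxy.example.local:8080
      no_proxy: localhost,127.0.0.1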
I think the issue is that you use host mode in your docker-compose config file. Also, do you have IPTABLES firewall rules allowing those ports on the Debian machine? How about Windows?
network_mode: host
which actually bypasses the Docker bridge completely, so the ports section you specify is not applied. All the ports will be opened on the host system. You can check with
netstat -tunlp | grep 5000
and you will see that port 5000 is not open and mapped to port 80 of the container as you would expect. However, ports 80 and 9000 should be open on the Debian network, not bound to any Docker bridge, only to the Debian IP.
From here: https://docs.docker.com/network/host/
WARNING: Published ports are discarded when using host network mode
As a solution, you could remove the network_mode line and it will work as expected.
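That is, a sketch of the same service with the host-mode line dropped, so the published ports take effect again:
services:
  tokengeneratorapi:
    # network_mode: host removed - port mappings are honoured again
    image: ${DOCKER_REGISTRY}tokengeneratorapi
    ports:
      - 5000:80
      - 9000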
Your wget isn't actually pointed at 164.16.240.30:9000. You should wget 164.16.240.30:9000 from the terminal instead of 172.17.240.30:9000.

Connecting docker containers using external network

I am working on a micro-service architecture where we have many different projects, and all of them connect to the same Redis instance. I want to move this architecture to Docker for the development environment. Since all of the projects have separate repositories, I cannot simply use one docker-compose.yml file to connect them all. After doing some research I figured I could create a shared external network to connect all of the projects, so I started by creating a network:
docker network create common_network
I created a separate project for common services such as MongoDB, Redis and RabbitMQ (the services used by all projects). Here is the sample docker-compose file of this project:
version: '3'
services:
  redis:
    image: redis:latest
    container_name: test_project_redis
    ports:
      - "6379:6379"
    networks:
      - common_network
networks:
  common_network:
    external: true
Now when I run docker-compose build and docker-compose up -d it works like a charm, and I can connect to Redis from my local machine using 127.0.0.1:6379. But there is a problem when I try to connect to this Redis container from another container.
Here is another sample docker-compose.yml, for another project which runs Node.js (I am not including the Dockerfile since it is irrelevant to this issue):
version: '3'
services:
  api:
    build: .
    container_name: sample_project_api
    networks:
      - common_network
networks:
  common_network:
    external: true
There is no problem when I build and run this docker-compose as well, but the Node.js project gets a CONNREFUSED 127.0.0.1:6379 error, so obviously it cannot connect to the Redis server over 127.0.0.1.
So I opened a shell into the api container (docker exec -i -t sample_project_api /bin/bash) and installed redis-tools to run some tests.
When I run redis-cli ping it returns Could not connect to Redis at 127.0.0.1:6379: Connection refused.
I checked the external network to see if all of the containers are connected to it properly, using docker network inspect common_network. There was no problem: all of the containers were listed under Containers, and from there I noticed that the test_project_redis container had an IP address of 192.168.16.3.
As a final solution I tried to use the internal IP address of the Redis container:
From the sample_project_api container I ran redis-cli -h 192.168.16.3 ping and it returned PONG, so that worked.
So my problem is that I cannot connect to the Redis server from other containers using the IP address 127.0.0.1 or 0.0.0.0, but I can connect using 192.168.16.3, which changes every time I restart the Docker containers. What is the reason behind this?
Containers have namespaced networking. Each container has its own loopback interface and one IP per network you attach it to. Therefore, loopback or 127.0.0.1 in one container is that container itself, not the Redis container. To connect to Redis, use the service name in your connection settings, which Docker will resolve to the IP of the container running Redis:
redis:6379
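For example, a quick check from inside the api container; the name redis works because both services are attached to common_network, where Docker's embedded DNS resolves service names:
# run inside sample_project_api
redis-cli -h redis ping
# expected reply: PONG
The container name test_project_redis should also resolve on that network, so a connection string like redis://test_project_redis:6379 would work as well.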

curl localhost to connect from one docker container to another docker container

I have the following docker-compose and run it locally:
version: '3.4'
services:
  testservice.api:
    image: testservice.api
    build:
      context: .
      dockerfile: Services/.../Dockerfile
    ports:
      - "5101:80"
  sql.data:
    image: postgres.jchem
    build:
      context: ../db/postgres
      dockerfile: Dockerfile
    ports:
      - "5432:5432"
      - "9090:9090"
Now, from within the sql.data container, I try to execute curl http://localhost:5101/...
But I get exit code (7) Connect failed.
When I try to connect via curl http://testservice.api/... it works.
Why can't I connect with localhost:port? And how can I manage to connect from within one Docker container to another with curl localhost:port?
Why can't I connect with localhost:port?
That's because each container gets its own network interface, hence its own 'localhost'.
And how can I manage to connect from within one Docker container to another with curl localhost:port?
What you can do is use network_mode: "host" in each compose service so that every container uses the same host network interface. Though, I recommend you adapt your apps to be configurable so that they receive the URLs of their service dependencies as parameters (for example).
When you say localhost inside a container, it refers to the container itself. For a container to be able to communicate with other containers, you need to make sure those containers are connected to the same network; then you can access them via their DNS name (their service name) or their container IP.
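A minimal sketch with the compose file from the question: both services already share the default compose network, so from sql.data you address the API by its service name and its container port (80, not the published 5101):
# run inside the sql.data container
curl http://testservice.api:80/...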

Access container started at Docker Compose step from Hosted Linux Agent on Azure VSTS

I am using the VSTS build step Docker Compose v 0.* on a Hosted Linux Agent.
Here is my docker-compose:
version: '3.0'
services:
  storage:
    image: blobstorageemulator:1.1
    ports:
      - "10000:10000"
  server:
    build: .
    environment:
      - ENV=--tests
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8080:8080"
    depends_on:
      - storage
I use the Run services command.
So basically I am running 2 Linux containers inside another Linux container (Build Agent).
I was able to connect these containers to each other (server reaches storage through a connection string that uses storage as the host - http://storage:10000/devstoreaccount1).
Question: how do I get access to the server from the build agent container? When I run curl http://localhost:8080 in the next step, it returns Failed to connect to localhost port 8080: Connection refused.
PS: Locally I run docker-compose and can easily access my exposed port from the host OS (I have VirtualBox with Ubuntu 17.10).
UPDATE:
I tried using docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' server-container-name to get the IP address of the container running my server app and curl that IP, but now I am getting connection timed out.
There is no way to access it from the host (agent) container; you have to run the command inside the service container with exec, e.g.:
docker-compose -p project_name exec server curl http://localhost:8080
