Kong cannot find sample flask container in tutorial - docker

I started following this docker/KONG installation tutorial where they create a network called "kong-net" and fire up the KONG and postgresql containers.
Then I jumped to this docker/kong tutorial that registers a sample flask container as an API in Kong.
I did not see anything alarming while configuring the Kong container with the flask service and associated routes.
The sample flask container seems to work fine:
curl http://localhost:5055/api/v1/test1
curl http://localhost:5055/api/v1/test2
I get the expected result:
{"message":"first end point test1 is called","status_code":200}
The results of these commands look good:
curl -i -X POST --url http://localhost:8001/services/ --data 'name=testApi' --data 'url=http://localhost:5055'
curl http://localhost:8001/routes | json_pp
Everything is great until I run this command to test Kong:
curl -i -X GET --url http://localhost:8000/api/v1/test1 --header 'Host: localhost'
I think KONG is supposed to forward this to the sample flask container.
Instead I see this error:
HTTP/1.1 502 Bad Gateway
Date: Wed, 08 May 2019 18:20:00 GMT
Content-Type: text/plain; charset=UTF-8
Transfer-Encoding: chunked
Connection: keep-alive
Server: kong/1.1.2
X-Kong-Upstream-Latency: 1
X-Kong-Proxy-Latency: 35
Via: kong/1.1.2
An invalid response was received from the upstream server
In the logs for the KONG container I see this:
2019/05/08 16:56:57 [error] 38#0: *167134 connect() failed (111:
Connection refused) while connecting to upstream, client:
172.19.0.1, server: kong, request: "GET /api/v1/test1 HTTP/1.1",
upstream: "http://127.0.0.1:5055/api/v1/test1", host: "localhost"
172.19.0.1 - - [08/May/2019:16:56:57 +0000] "GET /api/v1/test1
HTTP/1.1" 502 69 "-" "curl/7.59.0"
It looks like KONG cannot see localhost:5055.
I'm worried about that network the first tutorial had me create.
I tried stopping, rebuilding and re-running the flask container with this command (so that the flask container was part of the network too):
docker run -d --name flask --network=kong-net -p 5055:5055 flask:test
Alas, this did not help. Same error!
When I type
docker network inspect kong-net
I now see that the flask container is part of kong-net. Is this necessary?
I tried this command and it worked:
docker exec -ti kong sh -c "curl http://172.19.0.4:5055/api/v1/test1 "
{"message":"first end point test1 is called","status_code":200}
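That 172.19.0.4 address came from inspecting the network; for reference, the container addresses on kong-net can be listed in one line using docker's Go-template output (the network name comes from the first tutorial):

```shell
# Print "name: address" for every container attached to kong-net:
docker network inspect kong-net \
  --format '{{range .Containers}}{{.Name}}: {{.IPv4Address}}{{"\n"}}{{end}}'
```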
I'm doing all this with Windows10/cygwin-bash/docker18.09.2 with docker/kubernetes turned on.
Questions:
Does the sample flask app need to be part of kong-net?
The tutorial seems to be saying that Kong should be able to see 127.0.0.1:5055. Is this correct, or is the tutorial wrong?
How can I make this command work?
curl -i -X GET --url http://localhost:8000/api/v1/test1 --header 'Host: localhost'

When Kong is installed as a Docker container, 'localhost' means the loopback address of the Kong container, NOT the host. Endpoints registered in Kong must be resolvable from inside the Kong container.
So change your registration to use the actual IP and port of the backend service as reachable from the Kong container. For example, if your backend is also a Docker container and its port 5055 is mapped to port 15055 on the host, then the registration in Kong should use the host IP and port 15055.
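Alternatively, since the flask container was re-run with --network=kong-net (as in the question), Kong can reach it by container name over that network. A sketch, using the service and container names from the question:

```shell
# Re-point the existing service at the flask container's name on
# kong-net instead of localhost (PATCH on the Kong admin API):
curl -i -X PATCH --url http://localhost:8001/services/testApi \
  --data 'url=http://flask:5055'

# The proxied request should then reach the flask container:
curl -i http://localhost:8000/api/v1/test1 --header 'Host: localhost'
```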

You need to register the routes:
curl -i -X POST \
--url http://localhost:8001/services/testApi/routes \
--data 'hosts[]=localhost' \
--data 'paths[]=/api/v1/test1' \
--data 'strip_path=false' \
--data 'methods[]=GET'

Related

Error with curl command to server in Docker container

Environment
I have TorchServe installed and running inside a WSL2 Docker container, exposing an API for a PyTorch model.
What I would like to achieve
I would like to receive a list of models when I run this curl command:
curl http://127.0.0.1:8081/models
Expected Results
{
"models": [
{
"modelName": "densenet161",
"modelUrl": "densenet161.mar"
}
]
}
Occurring problems
The following message is returned.
<HTML>
<HEAD><TITLE>Redirection</TITLE></HEAD>
<BODY><H1>Redirect</H1></BODY>
What I tried
The following commands were executed from inside the TorchServe container:
curl 127.0.0.1:8081/models
I tried with the IP address of the Docker container (172.17.0.5) but it returned an error.
curl 172.17.0.5:8081/models
curl: (52) Empty reply from server
When running the Docker container, I used the following port forwarding:
docker run -it --gpus all -v /home:/home -p 8080:8080 -p 8081:8081 -p 8082:8082 --shm-size 8GB ts_test
I also ran the curl command from the host against the host IP address, with the same result:
curl http://172.19.108.214:8081/models
With the -L option, the following error was returned:
root@f0b48fd29ec1:~# curl -L http://127.0.0.1:8081/models
curl: (52) Empty reply from server
With the -v option, the following results were returned.
root@f0b48fd29ec1:~# curl -v http://127.0.0.1:8081/models
* Trying 10.77.8.70...
* TCP_NODELAY set
* Connected to proxy.mei.co.jp (10.77.8.70) port 8080 (#0)
> GET http://127.0.0.1:8081/models HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.58.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 301 Moved Permanently
< Server: BlueCoat-Security-Appliance
< Location:http://10.75.28.231
< Connection: Close
<
<HTML>
<HEAD><TITLE>Redirection</TITLE></HEAD>
<BODY><H1>Redirect</H1></BODY>
* Closing connection 0
What are some possible reasons why the curl command is not returning results?
The following command produced the expected results.
It turned out the proxy was affecting the results:
root@f0b48fd29ec1:~/torchserve-sample# curl http://127.0.0.1:8081/models --noproxy "*"
{
"models": [
{
"modelName": "densenet161",
"modelUrl": "densenet161.mar"
}
]
}
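The Proxy-Connection header in the -v output above shows curl routing the request through the corporate proxy, which cannot reach the container's loopback address. Besides --noproxy per request, loopback hosts can be exempted for the whole shell session (a sketch):

```shell
# Bypass the configured proxy for a single request:
curl --noproxy '*' http://127.0.0.1:8081/models

# ...or exempt loopback addresses for the whole session, so every
# subsequent request to these hosts skips the proxy:
export no_proxy="localhost,127.0.0.1"
curl http://127.0.0.1:8081/models
```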

Why does jetty running in docker return 502 Bad Gateway when the proxy module is enabled?

Run a simple jetty docker container:
docker run --rm -it -p 8080:8080 jetty:9.4
Request the root WebApp URL:
curl -I http://localhost:8080
Response, as expected, is 404, since there is no root WebApp:
HTTP/1.1 404 Not Found
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Content-Length: 317
Server: Jetty(9.4.11.v20180605)
Now start a jetty docker container with the proxy module enabled:
docker run --rm -it -p 8080:8080 jetty:9.4 --module=proxy
Request the same Root URL:
curl -I http://localhost:8080
Response is HTTP 502 Bad Gateway:
HTTP/1.1 502 Bad Gateway
Cache-Control: must-revalidate,no-cache,no-store
Content-Type: text/html;charset=iso-8859-1
Server: Jetty(9.4.11.v20180605)
Content-Length: 321
Why? I cannot get a jetty docker container running the proxy module to serve any webapps or content.
Running: Docker Version 18.06.1-ce-mac73 (26764)
The proxy module is for Jetty itself to act as a proxy to another web server.
Just enabling it results in an empty configuration with no destination to talk to, hence the 502 Bad Gateway.

Docker Kong admin API is unreachable

I've upgraded the Docker Kong image to version 0.14.0 and it stopped responding to connections from outside the container:
$ curl 127.0.0.1:8001 --trace-ascii dump.txt
== Info: Rebuilt URL to: 127.0.0.1:8001/
== Info: Trying 127.0.0.1...
== Info: Connected to 127.0.0.1 (127.0.0.1) port 8001 (#0)
=> Send header, 78 bytes (0x4e)
0000: GET / HTTP/1.1
0010: Host: 127.0.0.1:8001
0026: User-Agent: curl/7.47.0
003f: Accept: */*
004c:
== Info: Recv failure: Connection reset by peer
== Info: Closing connection 0
The ports mapping is
0.0.0.0:8000-8001->8000-8001/tcp, 0.0.0.0:8443-8444->8443-8444/tcp
Everything is ok when trying to connect from inside the container:
/ # curl 127.0.0.1:8001
{"plugins":{"enabled_in_cluster":[], ...
Port 8000 is available from outside and inside the container. What can that be?
I have encountered the same issue. The reason is that the Kong admin listener is bound to the loopback address by default. You don't need to modify the configuration file, though, since the Kong Docker image provides an environment variable to override the admin listener:
KONG_ADMIN_LISTEN="0.0.0.0:8001, 0.0.0.0:8444 ssl"
This binds the admin ports to all interfaces, so the mapped host ports receive traffic.
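A sketch of passing that variable when starting the container (image tag and port mappings taken from the question; database settings omitted):

```shell
# Bind the admin API to all interfaces inside the container so the
# published host ports actually receive traffic:
docker run -d --name kong \
  -e KONG_ADMIN_LISTEN="0.0.0.0:8001, 0.0.0.0:8444 ssl" \
  -p 8000-8001:8000-8001 \
  -p 8443-8444:8443-8444 \
  kong:0.14.0
```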
The problem was in the binding of the admin server to localhost in /usr/local/kong/nginx-kong.conf
server {
server_name kong_admin;
listen 127.0.0.1:8001;
listen 127.0.0.1:8444 ssl;
...
I've added the following code into my custom entrypoint which removes this binding just before starting nginx:
echo "Remove the admin API localhost binding..."
sed -i "s|^\s*listen 127.0.0.1:8001;|listen 0.0.0.0:8001;|g" /usr/local/kong/nginx-kong.conf && \
sed -i "s|^\s*listen 127.0.0.1:8444 ssl;|listen 0.0.0.0:8444 ssl;|g" /usr/local/kong/nginx-kong.conf
echo "Starting nginx $PREFIX..."
exec /usr/local/openresty/nginx/sbin/nginx \
-p $PREFIX \
-c nginx.conf
Of course, in production the admin ports must be closed off some other way.

Why I get the gateway IP as source address in Docker bridge networking?

I was originally doing some composition with the Docker bridge network, and noticed that, instead of the whitelisted local IP, requests are always sent from the gateway IP.
To reproduce it with minimal effort, I used two Python containers to run a HTTP server and client:
docker run -it --rm python:alpine sh
On the server side:
python -m http.server
On the client side:
wget 172.17.0.3:8000
Expected output, is that the request comes from the container IP:
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
172.17.0.2 - - [time] "GET / HTTP/1.1" 200 -
Actual output, in which the request comes from the bridge gateway IP:
Serving HTTP on 0.0.0.0 port 8000 (http://0.0.0.0:8000/) ...
172.17.0.1 - - [time] "GET / HTTP/1.1" 200 -
However, when I ran the same test on my laptop, I get the expected behavior (container IP). The problem only seems to happen on my server.
What can result in such behavior? Is it some sort of sysctl or iptables problem?
I have found the cause: a stale iptables-save entry. It was hard to notice because iptables -nvL doesn't show NAT rules by default.
After removing them from /etc/iptables/rules.v4, everything worked as intended.
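For anyone debugging the same thing, the NAT table has to be listed explicitly (a sketch; the rules file path is the one from the answer above):

```shell
# Plain 'iptables -nvL' only shows the filter table; NAT rules need -t nat:
iptables -t nat -nvL

# Then inspect the persisted rules for stale MASQUERADE/SNAT entries:
grep -nE 'MASQUERADE|SNAT' /etc/iptables/rules.v4
```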

Docker swarm and service discovery

I'm moving away from docker-compose files to using docker swarm but I just can't figure this out.
I have two services - a nginx proxy, and a website both running just fine in docker swarm (which has three nodes)
The issue I've got is that I need to configure nginx to proxy_pass to the backend website. Currently the only way I can get this to work is by specifying the IP address of one of the nodes.
My services are created as follows:
docker service create --mount type=bind,source=/../nginx.conf,target=/etc/nginx/conf.d/default.conf \
-p 443:443 \
--name nginx \
nginx
and
docker service create --name ynab \
-p 5000:5000 \
--replicas 2 \
scottrobertson/fintech-to-ynab
I've tried using the service name but that just doesn't work.
Really, I don't think I should even have to expose the ynab service ports (at least, that worked when I used docker-compose).
In one of the nginx containers I have tried the following:
root@5fabc5611264:/# curl http://ynab:5000/ping
curl: (6) Could not resolve host: ynab
root@5fabc5611264:/# curl http://nginx:5000/ping
curl: (6) Could not resolve host: nginx
root@5fabc5611264:/# curl http://127.0.0.1:5000/ping
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
root@5fabc5611264:/# curl http://localhost:5000/ping
curl: (7) Failed to connect to localhost port 5000: Connection refused
Using the process list, I tried connecting with the running instance's id and name:
root@5fabc5611264:/# curl http://ynab.1:5000/ping
curl: (6) Could not resolve host: ynab.1
root@5fabc5611264:/# curl http://pj0ekc6i7n0v:5000/ping
curl: (6) Could not resolve host: pj0ekc6i7n0v
But I can only get it to work if I use the nodes' public IP addresses:
root@5fabc5611264:/# curl http://192.168.1.52:5000/ping
pong
root@5fabc5611264:/# curl http://192.168.1.53:5000/ping
pong
root@5fabc5611264:/# curl http://192.168.1.51:5000/ping
pong
I really don't want to use a public ip in case that node goes down. I'm sure I must just be doing something wrong!
The services need to be attached to the same overlay network for service-name resolution to work.
$ docker network create -d overlay fake-internet-points
$ docker service create --name service_one --network fake-internet-points [...]
$ docker service create --name service_two --network fake-internet-points [...]
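Applied to the services from the question, that looks roughly like this (a sketch; 'appnet' is an assumed network name):

```shell
docker network create -d overlay appnet

docker service create --name ynab \
  --network appnet \
  --replicas 2 \
  scottrobertson/fintech-to-ynab

docker service create --name nginx \
  --network appnet \
  --mount type=bind,source=/../nginx.conf,target=/etc/nginx/conf.d/default.conf \
  -p 443:443 \
  nginx

# nginx can now reach the backend by service name over the overlay
# network, so the nginx.conf can use:
#   proxy_pass http://ynab:5000;
# and the ynab ports no longer need to be published with -p.
```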
