On a public server, I have a Prometheus exporter set up. It is intentionally blocked by a firewall, as the information should not be public.
From a separate network (my home network, which has a dynamic IP), I want to scrape the Prometheus exporter. The idea is to use autossh to set up an SSH tunnel and scrape the endpoint through it. I would prefer to run autossh using Docker.
So far I have managed to set up an autossh Docker container with the following docker-compose:
remote-nodeexporter:
  image: jnovack/autossh:latest
  container_name: remote-nodeexporter
  environment:
    - SSH_HOSTNAME=PUBLIC_IP
    - SSH_TUNNEL_REMOTE=19100
    - SSH_TUNNEL_LOCAL=9100
    - SSH_MODE=-L
  restart: always
  volumes:
    - /path/to/id_rsa:/id_rsa
  ports:
    - "19100:19100"
From within the container this works fine:
/ # wget localhost:19100/metrics
Connecting to localhost:19100 (127.0.0.1:19100)
saving to 'metrics'
metrics              100% |********************************|  75595  0:00:00 ETA
'metrics' saved
But from the host (or from other containers), I get errors:
/ # wget localhost:19100/metrics
--2020-07-07 08:53:25-- http://localhost:19100/metrics
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:19100... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
How do I correctly expose this endpoint?
Related
I'm using Docker to connect my services for study purposes.
Everything works fine when I run it on localhost.
But when I run it in Docker, I get an error when I try to HTTP-redirect from product-service to my auth-service.
Instead of the HTTP redirect I also tried making a new request to auth-service; that works and connects, but the redirect still fails in Docker.
This is my redirect code (I'm using Golang and the Gin Gonic library):
url := os.Getenv("AUTH_SERVICE_URL") // http://auth-service:8081
c.Redirect(http.StatusSeeOther, url+"/refresh?token="+tokenStr)
c.Abort()
return
And this is my new-request code:
url := os.Getenv("AUTH_SERVICE_URL") // http://auth-service:8081
request, err := http.NewRequest(http.MethodGet, url+"/refresh?token="+tokenStr, nil)
if err != nil {
	panic("Internal Server Error")
}
log.Println("make request to", request.URL)
client := http.Client{
	Timeout: 30 * time.Second,
}
response, err := client.Do(request)
if err != nil {
	panic(fmt.Sprint("unable to get auth request ", err.Error()))
}
log.Println("REFRESH TOKEN", response.Header)
I'm just wondering why the HTTP redirect gets an error in Docker but works when I try it on localhost.
I'm using Postman for testing, and I get this error:
Error: getaddrinfo ENOTFOUND auth-service
But it works when I use the new-request code in Docker with the same URL.
This is my product-service docker-compose
version: '3'
services:
  product-service:
    build:
      context: .
      dockerfile: .
    ports:
      - "8082:8082"
    environment:
      AUTH_SERVICE_URL: http://auth-service:8081
      DB_DRIVER: postgres
      DB_USER: postgres
      DB_PASSWORD: password
      DB_HOST: postgresDB
      DB_PORT: 5431
      DB_NAME: v_product
      DB_SSL_MODE: disable
      PORT: 8082
    networks:
      - auth-service_default
      - product-service_default
    depends_on:
      - postgresDB
  postgresDB:
    image: 'postgres:12.12'
    ports:
      - "5431:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: v_product
    networks:
      - product-service_default
    volumes:
      - psql_user:/var/lib/postgresql/data
volumes:
  psql_user:
networks:
  auth-service_default:
    external: true
  product-service_default:
    external: true
This is my auth-service docker-compose
version: '3'
services:
  auth-service:
    build:
      context: .
      dockerfile: .
    ports:
      - "8081:8081"
    environment:
      USER_SERVICE_URL: http://user-service:8080
      PORT: 8081
    networks:
      - user-service_default
      - auth-service_default
networks:
  user-service_default:
    external: true
  auth-service_default:
    external: true
Note: the endpoint I'm redirecting to is GET :8081/refresh?token=JWTOKEN, and I use the See Other status code because the redirect follows a POST request.
An HTTP redirect works something like this (see the link for a better description with diagrams etc.):
The client (e.g. Postman) makes a request (e.g. a POST to product-service)
The server responds with a redirect status (303 "See Other" in this case) and the new URL (e.g. http://auth-service:8081/refresh?token=blah)
The client requests the new URL.
So, as per the comments, your app was redirecting to http://auth-service:8081/refresh?token=blah. The issue is that when the client, which is not a container, tries to look up the IP address for the host auth-service, it will not find it (the name is not in the local hosts file, and whatever DNS server you are using does not know it). As such, the fix is to return a URL that the client can access (e.g. http://127.0.0.1:8081).
It's worth noting that even if the client could resolve auth-service, the address would likely be something like 172.21.0.2. This is a private address that is not accessible from the host (because you are using a bridge network). This type of network allows containers to communicate amongst themselves (within limits) and to initiate outgoing connections, but it provides no inbound access for the host or devices attached to the host.
External access is (generally) via published ports; that is, the ports: - "5431:5432" lines in the docker-compose.yml. These entries allow you to expose a container's port on the host (in this case port 5432 in the container is exposed as port 5431 on the host; the two numbers can differ).
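As a rough sketch of that fix (not code from the question): build the redirect URL from a base URL the client can reach, e.g. a hypothetical PUBLIC_AUTH_URL variable, and keep AUTH_SERVICE_URL for container-to-container calls only:
// The URL in a redirect response is followed by the client (Postman, a
// browser), so it must be resolvable from the client's network, not just
// from inside the Docker network. PUBLIC_AUTH_URL is hypothetical,
// e.g. http://127.0.0.1:8081.
publicURL := os.Getenv("PUBLIC_AUTH_URL")
c.Redirect(http.StatusSeeOther, publicURL+"/refresh?token="+tokenStr)
c.Abort()
return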
How does networking in containers work?
Sorry - that's too much to explain in a stack overflow answer! The docker docs are pretty good.
Does this affect my redirect action, or did I just specify the wrong host, given that the redirect is actioned by the client?
The redirect is performed using the http protocol. HTTP is an application layer protocol; communication usually takes place over TCP/IP connections. It's worth spending a little bit of time to understand all of the different layers involved (there is always more to learn in IT!).
Within the underlying TCP stack, the host (and other external systems) can only access the container via the published port. It's worth noting that DNS (how the host name is mapped to an IP) and IP routing (whether traffic from the host can reach the container) are separate considerations.
Am I right that the client in this case is Postman?
Postman is fine. The alternative is to use a browser which will operate in much the same way.
Am I right in saying that when containers try to connect to a named service, the communication works as long as the containers are attached to the same network, much like server-to-server communication?
Containers on the same network can communicate, and Docker provides an embedded DNS server that enables them to perform lookups (so it maps auth-service to the container's IP address and host.docker.internal to the host). It's important to note that this embedded DNS server is not used by the host (and would be of limited use anyway, because the host cannot communicate directly with the containers via their IP addresses).
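A quick way to see that difference (assuming the network names from the compose files above) is to run the same lookup from a container attached to the shared network and then from the host; only the former uses Docker's embedded DNS:
# Resolved by Docker's embedded DNS, because the container is on the network
docker run --rm --network auth-service_default busybox nslookup auth-service

# Fails on the host, which does not use the embedded DNS server
nslookup auth-service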
Hopefully that helps - this can get quite confusing, particularly if you don't have much experience with networks. I would note that I often use Edge Routers/reverse proxies in this kind of situation (e.g. Traefik). With this you can just map one port (or perhaps two if you want HTTPS) and Traefik routes the requests to the appropriate container (this may just confuse you now but could be useful in the future!).
What I'm trying to do
Host a Taskwarrior Server on an AWS EC2 instance, and connect to it via a subdomain (e.g. task.mydomain.dev).
Taskwarrior server operates on port 53589.
Tech involved
AWS EC2: the server (Ubuntu)
Caddy Server: for creating a reverse proxy for each app on the EC2 instance
Docker (docker-compose): for launching apps, including the Caddy Server and the Taskwarrior server
Cloudflare: DNS hosting and SSL certificates
How I've tried to do this
I have:
allowed incoming connections for ports 22, 80, 443 and 53589 in the instance's security policy
given the EC2 instance an elastic IP
setup the DNS records (task.mydomain.dev is CNAME'd to mydomain.dev, mydomain.dev has an A record pointing to the elastic IP)
used Caddy to set up a reverse proxy on port 53589 for task.mydomain.dev
set up the Taskwarrior server as per the instructions (i.e. certificates created; user and organisation created; taskrc file updated with cert, auth and server info; etc.)
Config files
/opt/task/docker-compose.yml
version: '3.3'
services:
  taskd:
    image: connectical/taskd
    restart: always
    volumes:
      - /opt/task:/var/taskd
    ports:
      - 53589:53589
networks:
  default:
    external:
      name: caddy_net
/opt/caddy/docker-compose.yml
version: "3.4"
services:
caddy:
build:
context: .
dockerfile: Dockerfile
container_name: caddy
restart: always
ports:
- 80:80
- 443:443
volumes:
- ./config:/config
- ./data:/data
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
default:
external:
name: caddy_net
/opt/caddy/Caddyfile:
task.mydomain.dev:53589 {
    reverse_proxy taskd:53589
    tls {
        dns cloudflare myCloudflareAPIkey
    }
}
What's actually happening
I'm unable to connect to port 53589 on task.mydomain.dev
Running telnet task.mydomain.dev 53589 times out
I'm unable to connect to port 53589 on mydomain.dev
Running telnet mydomain.dev 53589 times out
I'm able to connect to port 53589 at 127.0.0.1 by ssh'ing into the EC2 instance
Running telnet 127.0.0.1 53589 from the EC2 instance successfully connects
I'm able to connect to port 80 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
I'm able to connect to port 443 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
What I've tried to fix it
Changing the Caddyfile's first line to:
task.mydomain.dev { and task.mydomain.dev:80 {, then connecting to port 80
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
task.mydomain.dev { and task.mydomain.dev:443 {, then connecting to port 443
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
Changing Caddyfile's second line to reverse_proxy 127.0.0.1:53589, reverse_proxy 0.0.0.0:53589 and reverse_proxy localhost:53589. Same errors occur.
Removing the CNAME records for the subdomain. Same errors occur
Does anyone have any idea what's happening or could point me in the right direction?
If you are attempting to proxy HTTPS traffic through Cloudflare on a port that is not on the standard list, you will need to follow one of these options:
Set it up as a Cloudflare HTTPS Spectrum app on the required port 53589
Set up the record in the Cloudflare DNS tab as grey cloud, i.e. DNS-only (in other words, Cloudflare will only perform the DNS resolution, meaning you will need to manage the certificates on your side); you can check which mode the record is currently in with the lookup shown after this list
Change your service so that it listens on one of the standard HTTPS ports listed in the documentation in point (1)
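A quick way to tell which mode the record is currently in (my own check, not part of the original answer): a proxied (orange-cloud) record resolves to Cloudflare edge IPs, while a DNS-only (grey-cloud) record returns the EC2 elastic IP directly:
# Cloudflare IPs => the record is proxied; the elastic IP => DNS-only
dig +short task.mydomain.dev
dig +short mydomain.dev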
I am trying to run a Go app binary in a Docker container. The app has some gRPC requests being listened for and served on:
http.ListenAndServe("localhost:8081", nil)
In my docker-compose.yaml, I have a service for the app with port 8081 mapped:
golangAPP:
  build:
    context: .
    dockerfile: ./docker/golangAPP/Dockerfile
  depends_on:
    - setup
  ports:
    - 8081:8081
After docker-compose up I can see from the verbose output that the app is being served.
But I still cannot reach it. curl -X OPTIONS http://localhost:8081 returns:
curl: (56) Recv failure: Connection reset by peer
If I run the binary locally without Docker, I can send requests to the app.
Any suggestions? I did some googling and some results point to a firewall issue, but I am not sure how to proceed.
If you do:
http.ListenAndServe("localhost:8081", nil)
this will listen for connections on the loopback interface only. When running within a container, it will only accept connections coming from within that container (or, if you're running this in a k8s pod, from within the same pod). So:
http.ListenAndServe(":8081", nil)
This will accept both loopback and external connections (external to the container).
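A minimal sketch of the same idea with the bind address made configurable; HTTP_ADDR is a hypothetical environment variable, not something from the question:
package main

import (
	"log"
	"net/http"
	"os"
)

func main() {
	// Default to all interfaces so the published container port is reachable;
	// set HTTP_ADDR=localhost:8081 to restrict to loopback for local testing.
	addr := os.Getenv("HTTP_ADDR")
	if addr == "" {
		addr = ":8081"
	}
	log.Fatal(http.ListenAndServe(addr, nil))
}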
I'm currently running a Spring Boot application in a Docker container on an EC2 instance. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
networks:
  default:
The docker container deploys and comes up as healthy with the healthcheck command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
If I do a docker ps -a, I can see the following for the ports:
0.0.0.0:8893->8893
My ALB health check however is returning a 502, so I've temporarily allowed connections from my IP directly to the EC2 instance in the security group. The rules are:
Allow Ingress on 8893 from my ALB security group
Allow Ingress on 8893 from my IP
Allow Egress to anywhere (0.0.0.0)
When I try and hit the healthcheck endpoint of my app using the public DNS of the ec2 on port 8893 using Postman I get Error: connect ECONNREFUSED
If I take my docker container down and then simulate a webserver using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body which indicates it's not a problem with the security groups.
The actuator endpoint for Spring Boot is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 with status UP.
Any suggestions for what I might be missing here or how I could debug this further? It seems like docker isn't picking up connections to the port for some reason.
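As a generic way to narrow this down (a suggestion of mine, not from the thread; the container name is a placeholder), check reachability from inside the container and from the EC2 host before involving the ALB:
# Inside the container: confirms the app itself is listening on 8893
docker exec -it <my-app-container> wget -qO- http://localhost:8893/my-app/v1/actuator/health

# From the EC2 host: confirms the published port mapping works
curl -v http://localhost:8893/my-app/v1/actuator/health

# Show the actual container-to-host port mapping
docker port <my-app-container>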
I have an HTTP health check in my service, exposed on localhost:35000/health. At the moment it always returns 200 OK. The configuration for the health check is done programmatically via the HTTP API rather than with a service config file, but in essence it is:
set id: service-id
set name: health check
set notes: consul does a GET to '/health' every 30 seconds
set http: http://127.0.0.1:35000/health
set interval: 30s
When I run consul in dev mode (consul agent -dev -ui) on my host machine directly the health check passes without any problem. However, when I run consul in a docker container, the health check fails with:
2017/07/08 09:33:28 [WARN] agent: http request failed 'http://127.0.0.1:35000/health': Get http://127.0.0.1:35000/health: dial tcp 127.0.0.1:35000: getsockopt: connection refused
The Docker container launches Consul in, as far as I am aware, exactly the same state as the host version:
version: '2'
services:
  consul-dev:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul-dev"
    hostname: "consul-dev"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -dev -ui -client=0.0.0.0 -config-dir=/etc/consul.d"
I'm guessing the problem is that Consul is trying to do the GET request against the container's loopback interface rather than what I intend, which is the loopback interface of the host. Is that a correct assumption? More importantly, what do I need to do to correct the problem?
So it transpires that there was a bug in some previous versions of macOS that prevented use of the docker0 network. Whilst the bug is fixed in newer versions, Docker support extends to older versions and so Docker for Mac doesn't currently support docker0. See this discussion for details.
The workaround is to create an alias to the loopback interface on the host machine, set the service to listen on either that alias or 0.0.0.0, and configure Consul to send the health check GET request to the alias.
To set the alias (choose a private IP address that's not being used for anything else; I chose a class A address but that's irrelevant):
sudo ifconfig lo0 alias 10.200.10.1/24
To remove the alias:
sudo ifconfig lo0 -alias 10.200.10.1
From the service definition above, the HTTP line should now read:
set http: http://10.200.10.1:35000/health
And the HTTP server listening for the health check requests also needs to be listening on either 10.200.10.1 or 0.0.0.0. This latter option is suggested in the discussion, but I've only tried it with the alias.
I've updated the title of the question to more accurately reflect the problem, now I know the solution. Hope it helps somebody else too.
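For reference, if the check is registered over the agent HTTP API as described in the question, the updated registration might look roughly like this (field names follow Consul's /v1/agent/check/register endpoint; the ID, name and notes are taken from the description above):
# Sketch: register the health check against the loopback alias via the agent
# HTTP API (values illustrative, matching the check described earlier).
curl --request PUT http://127.0.0.1:8500/v1/agent/check/register \
  --data '{
    "ID": "service-id",
    "Name": "health check",
    "Notes": "consul does a GET to /health every 30 seconds",
    "HTTP": "http://10.200.10.1:35000/health",
    "Interval": "30s"
  }'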