Unable to connect to port 53589 on EC2 instance using Docker and Caddy server

What I'm trying to do
Host a Taskwarrior Server on an AWS EC2 instance, and connect to it via a subdomain (e.g. task.mydomain.dev).
Taskwarrior server operates on port 53589.
Tech involved
AWS EC2: the server (Ubuntu)
Caddy Server: for creating a reverse proxy for each app on the EC2 instance
Docker (docker-compose): for launching apps, including the Caddy Server and the Taskwarrior server
Cloudflare: DNS hosting and SSL certificates
How I've tried to do this
I have:
allowed incoming connections for ports 22, 80, 443 and 53589 in the instance's security policy
given the EC2 instance an elastic IP
set up the DNS records (task.mydomain.dev is CNAME'd to mydomain.dev, and mydomain.dev has an A record pointing to the elastic IP)
used Caddy to set up a reverse proxy on port 53589 for task.mydomain.dev
set up the Taskwarrior server per its instructions (certificates created; user and organisation created; taskrc file updated with cert, auth and server info; etc.)
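A couple of checks from a machine outside AWS can confirm these pieces independently of Taskwarrior itself. This is only a sketch; the hostname and elastic IP are placeholders for your own values:
# Does the port answer when you hit the elastic IP directly, bypassing DNS?
nc -vz -w 5 <elastic IP> 53589
# Does anything complete a TLS handshake on that port via the hostname?
openssl s_client -connect task.mydomain.dev:53589 </dev/null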
Config files
/opt/task/docker-compose.yml
version: '3.3'
services:
  taskd:
    image: connectical/taskd
    restart: always
    volumes:
      - /opt/task:/var/taskd
    ports:
      - 53589:53589
networks:
  default:
    external:
      name: caddy_net
/opt/caddy/docker-compose.yml
version: "3.4"
services:
caddy:
build:
context: .
dockerfile: Dockerfile
container_name: caddy
restart: always
ports:
- 80:80
- 443:443
volumes:
- ./config:/config
- ./data:/data
- ./Caddyfile:/etc/caddy/Caddyfile
networks:
default:
external:
name: caddy_net
/opt/caddy/Caddyfile:
task.mydomain.dev:53589 {
    reverse_proxy taskd:53589
    tls {
        dns cloudflare myCloudflareAPIkey
    }
}
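One thing worth noting from the two compose files: the caddy container only publishes 80 and 443, so a Caddy site block on :53589 is reachable only inside caddy_net, while the published 53589 on the host belongs to the taskd container. A quick way to see which process or container actually owns the host port (a sketch; run on the instance):
# Which process is listening on 53589 on the host?
sudo ss -tlnp | grep 53589
# Which container publishes which host ports?
docker ps --format '{{.Names}}\t{{.Ports}}'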
What's actually happening
I'm unable to connect to port 53589 on task.mydomain.dev
Running telnet task.mydomain.dev 53589 times out
I'm unable to connect to port 53589 on mydomain.dev
Running telnet mydomain.dev 53589 times out
I'm able to connect to port 53589 at 127.0.0.1 by ssh'ing into the EC2 instance
Running telnet 127.0.0.1 53589 from the EC2 instance successfully connects
I'm able to connect to port 80 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
I'm able to connect to port 443 on task.mydomain.dev, but unable to sync with the Taskwarrior server
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
What I've tried to fix it
Changing the Caddyfile's first line to:
task.mydomain.dev { and task.mydomain.dev:80 {, then connecting to port 80
Running task sync init returns:
c: 1 Received record packet of unknown type 72
Syncing with task.mydomain.dev:80
Cannot perform this action while handshake is in progress.
Sync failed. Could not connect to the Taskserver.
task.mydomain.dev { and task.mydomain.dev:443 {, then connecting to port 443
Running task sync init returns:
Syncing with task.mydomain.dev:443
Malformed message
Sync failed. Could not connect to the Taskserver.
Changing the Caddyfile's second line to reverse_proxy 127.0.0.1:53589, reverse_proxy 0.0.0.0:53589 and reverse_proxy localhost:53589. The same errors occur.
Removing the CNAME records for the subdomain. The same errors occur.
Does anyone have any idea what's happening or could point me in the right direction?

If you are attempting to proxy HTTPS traffic through Cloudflare on a port that is not on its standard list, you will need to follow one of these options:
Set it up as a Cloudflare HTTPS Spectrum app on the required port 53589
Set up the record in the Cloudflare DNS tab as Grey cloud (in other words, it will only perform the DNS resolution - meaning you will need to manage the certificates on your side)
Change your service so that it listens on one of the standard HTTPS ports listed in the documentation in point (1)
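For option 2, a quick way to check whether the record is currently proxied, plus a rough sketch of flipping it to DNS-only via Cloudflare's API (ZONE_ID, RECORD_ID and the token are placeholders you would look up in the dashboard):
# If this returns Cloudflare edge IPs rather than your elastic IP, the record is proxied
# and non-standard ports such as 53589 will be dropped at the edge.
dig +short task.mydomain.dev

# Switch the record to "DNS only" (grey cloud).
curl -X PATCH "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"proxied": false}'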

Related

Understanding Docker overlay network

I am using an overlay network to deploy an application on multiple VMs on the same LAN. I am using nginx as the front end for this application and this is running on host_1. All the containers that are part of the application are communicating with each other without any issues. But HTTP requests to the published port 80 of the nginx container (mapped to port 8080 on host_1) from a different VM on the same LAN, say host_2, time out[1]. But HTTP requests to localhost:8080 on host_1 succeed[2]. If I start the nginx container by removing the overlay network, I am able to send HTTP requests[3].
Output of curl -vvv <host_1 IP>:8080 on host_2.
ubuntu@host_2:~$ curl -vvv <host_1 IP>:8080
Rebuilt URL to: <host_1 IP>:8080/
Trying <host_1 IP>...
TCP_NODELAY set
connect to <host_1 IP> port 8080 failed: Connection timed out
Failed to connect to <host_1 IP> port 8080: Connection timed out
Closing connection 0
curl: (7) Failed to connect to <host_1 IP> port 8080: Connection timed out
Output of curl localhost:8080 on host_1.
nginx welcome page
Output of curl -vvv <host_1 IP>:8080 on host_2 when I recreate the container without the overlay network
nginx welcome page
The docker-compose file for the front end is as below:
version: '3'
services:
  nginx-frontend:
    hostname: nginx-frontend
    image: nginx
    ports: ['8080:80']
    restart: always
networks:
  default:
    external: {name: overlay-network}
I checked that nginx (in the container) and the host are listening on 0.0.0.0:80 and 0.0.0.0:8080 respectively.
Since the port 80 of the nginx is published by mapping it to port 8080 of the host, I should be able to send HTTP requests from any VM that is on the same LAN as the host of this container. Can someone please explain what I am doing wrong or where my assumptions are wrong?
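A few host-side checks could narrow down where the request is being dropped; this is a sketch assuming a typical Ubuntu host with iptables, not a definitive diagnosis:
# On host_1: is anything bound to 8080 on all interfaces?
sudo ss -tlnp | grep 8080
# Did Docker install a NAT rule for the published port?
sudo iptables -t nat -L DOCKER -n | grep 8080
# Does the request succeed from host_1 itself when using its LAN IP instead of localhost?
curl -v http://<host_1 IP>:8080/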

Connection refused when attempting to connect to a docker container on an EC2

I'm currently running a Spring Boot application in a Docker container on an EC2 instance. My docker-compose file looks like this (with some values replaced):
version: '3.8'
services:
  my-app:
    image: ${ecr-repo}/my-app:0.0.1-SNAPSHOT
    ports:
      - "8893:8839/tcp"
networks:
  default:
The docker container deploys and comes up as healthy with the healthcheck command being:
wget --spider -q -Y off http://localhost:8893/my-app/v1/actuator/health
If I do a docker ps -a I can see for the ports:
0.0.0.0:8893->8893
My ALB health check, however, is returning a 502, so I've temporarily allowed connections from my IP directly to the EC2 instance in the security group. The rules are:
Allow Ingress on 8893 from my ALB security group
Allow Ingress on 8893 from my IP
Allow Egress to anywhere (0.0.0.0)
When I try to hit the healthcheck endpoint of my app using the public DNS of the EC2 instance on port 8893 via Postman, I get Error: connect ECONNREFUSED
If I take my docker container down and then simulate a webserver using the command from https://fabianlee.org/2016/09/26/ubuntu-simulating-a-web-server-using-netcat/ which is:
while true; do { echo -e "HTTP/1.1 200 OK\r\n$(date)\r\n\r\n<h1>hello world from $(hostname) on $(date)</h1>" | nc -vl 8080; } done
I get a 200 response with the expected body which indicates it's not a problem with the security groups.
The actuator endpoint for Spring Boot is definitely enabled: if I run the app through IntelliJ and hit the endpoint, it returns a 200 and status up.
Any suggestions for what I might be missing here or how I could debug this further? It seems like docker isn't picking up connections to the port for some reason.
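One detail worth double-checking: the compose file maps host 8893 to container port 8839, while docker ps reports 8893->8893, so the container may not be listening on the port Docker forwards to. A rough way to narrow it down (the container name is a placeholder):
# From the EC2 host: does the published port answer at all?
curl -v http://localhost:8893/my-app/v1/actuator/health
# Inside the container: which port is the app actually listening on?
docker exec <container> sh -c 'wget -qO- http://localhost:8893/my-app/v1/actuator/health || wget -qO- http://localhost:8839/my-app/v1/actuator/health'
# What does Docker think it published?
docker port <container>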

Autossh docker expose endpoint to host & containers

On a public server, I have a Prometheus exporter set up. This is intentionally blocked by a firewall, as the information should not be public.
From a separate network (my home network with a dynamic IP), I wish to scrape the Prometheus exporter. The idea is to use autossh to set up an SSH tunnel and scrape the endpoint that way. I prefer to set up autossh using Docker.
So far I have managed to set up an autossh Docker container with the following docker-compose:
remote-nodeexporter:
  image: jnovack/autossh:latest
  container_name: remote-nodeexporter
  environment:
    - SSH_HOSTNAME=PUBLIC_IP
    - SSH_TUNNEL_REMOTE=19100
    - SSH_TUNNEL_LOCAL=9100
    - SSH_MODE=-L
  restart: always
  volumes:
    - /path/to/id_rsa:/id_rsa
  ports:
    - "19100:19100"
From within the container this works fine:
/ # wget localhost:19100/metrics
Connecting to localhost:19100 (127.0.0.1:19100)
saving to 'metrics'
metrics 100% |**********************************************************************************************************************************************************************************************| 75595 0:00:00 ETA
'metrics' saved
But from the host (or from other containers), I get errors:
/ # wget localhost:19100/metrics
--2020-07-07 08:53:25-- http://localhost:19100/metrics
Resolving localhost (localhost)... 127.0.0.1
Connecting to localhost (localhost)|127.0.0.1|:19100... connected.
HTTP request sent, awaiting response... Read error (Connection reset by peer) in headers.
Retrying.
How do I correctly expose this endpoint?
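A "connection reset by peer" from the host, while the same request works inside the container, often means the SSH forward is bound only to the container's loopback interface (ssh -L binds to localhost by default). A quick check, assuming netstat or ss is available in the image:
# Inside the container: which address is the forwarded port bound to?
docker exec remote-nodeexporter netstat -tln | grep 19100
# 127.0.0.1:19100 -> only reachable from inside the container
# 0.0.0.0:19100   -> reachable through the published 19100 on the host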

Deploying rails docker on ec2 and redirecting traffic from 80 to 4000

I have a dockerized Rails app that I am trying to deploy to AWS EC2. I managed to make it run in Docker on EC2 and map port 4000. docker_compose.yml:
app:
  image: davidgeismar/artifacts-app:latest
  command: 'rails server -p 4000'
  ports:
    - "4000:4000"
  volumes:
    - ./log/production.log:/artifacts_data_api/log/production.log
On the AWS dashboard, in security groups, I allowed HTTP traffic from any source:
HTTP TCP 80 0.0.0.0/0
I wanted to open port 3000 but it is not possible on the AWS dashboard. From what I understand, I am now supposed to redirect traffic from port 80 to port 3000. I followed these instructions to do that: https://serverfault.com/questions/320614/how-to-forward-port-80-to-another-port-on-the-samemachine
Now when I try to access my application server through my browser using ipv4:public_instance_ip/80 or ipv4:public_instance_ip:80, I always get:
This site can’t be reached
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Could you provide guidance on how to achieve this?
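For reference, the approach in the linked answer boils down to a NAT redirect on the instance itself; a minimal sketch, assuming the container really publishes port 4000 as in the compose file above:
# Redirect incoming TCP traffic on port 80 to port 4000 on the EC2 host
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 4000
Alternatively, mapping the container port straight to 80 in the compose file (ports: "80:4000") avoids the redirect entirely, provided nothing else is bound to port 80 on the host.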

Docker for Mac Container to Host Networking - Consul Health Checks Connection Refused

I have an HTTP health check in my service, exposed on localhost:35000/health. At the moment it always returns 200 OK. The configuration for the health check is done programmatically via the HTTP API rather than with a service config, but in essence, it is:
set id: service-id
set name: health check
set notes: consul does a GET to '/health' every 30 seconds
set http: http://127.0.0.1:35000/health
set interval: 30s
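Since the check is registered programmatically via the HTTP API, the registration corresponding to the settings above might look roughly like this against Consul's /v1/agent/check/register endpoint (a sketch; the field values are taken from the list, not from the actual code):
cat > health-check.json <<'EOF'
{
  "ID": "service-id",
  "Name": "health check",
  "Notes": "consul does a GET to '/health' every 30 seconds",
  "HTTP": "http://127.0.0.1:35000/health",
  "Interval": "30s"
}
EOF
curl --request PUT --data @health-check.json http://127.0.0.1:8500/v1/agent/check/register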
When I run consul in dev mode (consul agent -dev -ui) on my host machine directly the health check passes without any problem. However, when I run consul in a docker container, the health check fails with:
2017/07/08 09:33:28 [WARN] agent: http request failed 'http://127.0.0.1:35000/health': Get http://127.0.0.1:35000/health: dial tcp 127.0.0.1:35000: getsockopt: connection refused
The Docker container launches Consul in, as far as I am aware, exactly the same state as the host version:
version: '2'
services:
  consul-dev:
    image: "consul:latest"
    container_name: "net-sci_discovery-service_consul-dev"
    hostname: "consul-dev"
    ports:
      - "8400:8400"
      - "8500:8500"
      - "8600:8600"
    volumes:
      - ./etc/consul.d:/etc/consul.d
    command: "agent -dev -ui -client=0.0.0.0 -config-dir=/etc/consul.d"
I'm guessing the problem is that Consul is trying to do the GET request to the container's loopback interface rather than what I am intending, which is the loopback interface of the host. Is that a correct assumption? More importantly, what do I need to do to correct the problem?
So it transpires that there was a bug in some previous versions of macOS that prevented use of the docker0 network. Whilst the bug is fixed in newer versions, Docker support extends to older versions and so Docker for Mac doesn't currently support docker0. See this discussion for details.
The workaround is to create an alias to the loopback interface on the host machine, set the service to listen on either that alias or 0.0.0.0, and configure Consul to send the health check GET request to the alias.
To set the alias (choose a private IP address that's not being used for anything else; I chose a class A address but that's irrelevant):
sudo ifconfig lo0 alias 10.200.10.1/24
To remove the alias:
sudo ifconfig lo0 -alias 10.200.10.1
From the service definition above, the HTTP line should now read:
set http: http://10.200.10.1:35000/health
And the HTTP server listening for the health check requests also needs to be listening on either 10.200.10.1 (the alias) or 0.0.0.0. This latter option is suggested in the discussion but I've only tried it with the alias.
I've updated the title of the question to more accurately reflect the problem, now I know the solution. Hope it helps somebody else too.
