SSL and port forwarding in Jelastic: Deploying Strapi - docker

I'm just discovering Jelastic, and I'm having difficulty running Strapi.
So far I have just one node, a Docker Strapi image, behind the SLB (no dedicated load balancer).
This node is accessed via the SLB, and both public IPv4 and IPv6 addresses are available.
I point a subdomain at these public IPs.
I can launch Strapi in the container. However, it does not work well because of two issues:
SSL is not available. I can't install the Let's Encrypt Free SSL add-on: "the add-on cannot be installed on this node"...
The port is not redirected, so I have to explicitly specify the port in the browser URL to reach the app homepage.
With these two issues, Strapi cannot work properly.
DOCKER_EXPOSED_PORT 1337 and MASTER_IP are set up for the Docker container.
How can I solve these two issues?

SSL is not available. I can't install Let's Encrypt Free SSL: "the
add-on cannot be installed on this node"...
The Jelastic Let's Encrypt add-on can be installed on top of any container with Custom SSL support enabled, namely the following servers (the list is constantly being extended):
Load Balancers - NGINX, Apache LB, HAProxy, Varnish
Java application servers - Tomcat, TomEE, GlassFish, Payara, Jetty
PHP application servers - Apache PHP, NGINX PHP
Ruby application servers - Apache Ruby, NGINX Ruby
If you require Let's Encrypt SSL for any other stack, just add a load balancer in front of your application servers and install the add-on. SSL termination at load balancing level is used by default in clustered topologies. Docker containers are not on the list of supported nodes.
Port is not redirected, and I have to explicitly indicate the port in
the browser url to access the app homepage.
When using an external IP address, for correct forwarding you can add two redirection rules to iptables and redirect all requests from port 80 (or 443) to 1337, for example:
*nat
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
*filter
-A INPUT -p tcp -m tcp --dport 1337 -m state --state NEW -j ACCEPT
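Conceptually, the REDIRECT rule above makes the kernel hand traffic arriving on one port to the application listening on another. The same idea can be sketched in user space with a tiny Python TCP relay; the ports here are ephemeral stand-ins for 80 and 1337, and the "backend" is a placeholder for Strapi (this is an illustration of the mechanism, not how Jelastic wires it up):

```python
import socket
import threading

# A user-space sketch of the iptables REDIRECT above: accept connections on a
# "front" port and pipe the bytes to the application's port. With iptables the
# kernel does this rewriting itself, with no helper process.

def relay(front_sock, backend_addr):
    conn, _ = front_sock.accept()
    back = socket.create_connection(backend_addr)

    def pipe(src, dst):
        while True:
            chunk = src.recv(4096)
            if not chunk:
                break
            dst.sendall(chunk)

    threading.Thread(target=pipe, args=(conn, back), daemon=True).start()
    pipe(back, conn)  # relay backend -> client in this thread

# Demo backend standing in for the app on port 1337 (ephemeral port here)
backend = socket.socket()
backend.bind(("127.0.0.1", 0))
backend.listen(1)

def serve():
    c, _ = backend.accept()
    c.sendall(b"hello from the app")
    c.close()

threading.Thread(target=serve, daemon=True).start()

# "Front" socket standing in for port 80
front = socket.socket()
front.bind(("127.0.0.1", 0))
front.listen(1)
threading.Thread(target=relay, args=(front, backend.getsockname()),
                 daemon=True).start()

# A client connecting to the front port transparently reaches the backend
client = socket.create_connection(front.getsockname())
data = client.recv(1024)
print(data)
```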
If you will not be using an external IP address, you can apply the solution indicated here or find additional information at this link:

Related

How to grant internet access to application servers through load balancer

I have set up an environment in Jelastic including a load balancer (tested both Apache and Nginx with the same results) with a public IP, and an application server running the Univention UCS DC Master docker image (I have also tried a simple Ubuntu 20.04 install).
The application server has a private IP address and is correctly reachable from the internet, and I can SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network in the app server and tried a few Nginx load-balancing configurations but to be honest I've never used a load balancer before and I feel that configuring load balancing will not resolve my issue (might be wrong).
Of course my intention is to learn load balancing but if someone could just point me in the right direction I would be so grateful.
Question: what needs to be configured in Jelastic or in the servers to have the machines behind the load balancer access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue I have to detach the public IP address from the univention/ucs docker image, attach it to another node in the environment (e.g. an Ubuntu server I have), then attach the public IP back to the univention docker image. Can't really figure out why, but it works for me.
To have the machines access the internet you should add a default route in them using your load balancer as the gateway, like this:
Destination   Gateway   Genmask
0.0.0.0       <LB IP>   0.0.0.0
Your VMs' firewalls should not block ports 80 and 443 for inbound/outbound traffic; using iptables:
sudo iptables -A INPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
In your load balancer you should masquerade outgoing traffic (change the source IP) and forward incoming traffic to your VMs' subnet using the LB interface connected to that subnet:
sudo iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward

Make Rails Server port 3000 accessible via domain name (AWS EC2, Cloudflare)

I have a development environment that I want accessible via a specific domain name, i.e. "example.com". It's a legitimate development environment (the whole Puma + SQLite3 RoR starter kit) that I want to be able to access using the domain name.
Currently the Elastic IP from AWS EC2 is accessible by appending :3000. IPAddress:3000 works (ec2-NUMBERSHERE.ap-southeast-1.compute.amazonaws.com:3000 via web browser). I start this by running screen bundle exec rails server -b 0.0.0.0. Yes, I am running my development environment on an AWS EC2 instance. This is being done on purpose.
In my Cloudflare account I have mapped a Type A record with the name "example.com" and content set to the Elastic public IP. I have mapped a Type CNAME record with the name "www" and content set to the public DNS name.
How do I achieve the same as IPAddress:3000 with example.com?
**I am aware of the best practices that this is ignoring but the question is really just this.
I think you can't do anything at a DNS level. What you can do is change the rails server default port using the -p option:
$ screen bundle exec rails server -b 0.0.0.0 -p 80
But then you'll need to run the rails server with root permissions in order to use port 80 (or any other port below 1024).
A better option is to bind the rails server at localhost (which is the default behaviour) and add Nginx as a reverse proxy listening in port 80 using the proxy_pass feature, something like this:
server {
    listen 80;
    listen [::]:80;
    server_name example.com;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
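What the proxy_pass block does can be illustrated in a few lines of Python: a front server receives the request and relays it to the app on another port, returning the app's response. The ports below are ephemeral stand-ins for 80 and 3000, and the backend handler is a placeholder for the Rails app (a sketch of the mechanism, not a production proxy):

```python
import http.server
import threading
import urllib.request

BACKEND = b"hello from rails"  # stands in for the Rails app's response

# Backend: plays the role of the Rails server on port 3000
class App(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", str(len(BACKEND)))
        self.end_headers()
        self.wfile.write(BACKEND)
    def log_message(self, *args):
        pass  # silence request logging

app = http.server.HTTPServer(("127.0.0.1", 0), App)
threading.Thread(target=app.serve_forever, daemon=True).start()
app_port = app.server_address[1]

# Front: plays the role of Nginx on port 80, relaying to the backend
class Proxy(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # Forward the request path to the backend and relay its body back
        with urllib.request.urlopen(
                f"http://127.0.0.1:{app_port}{self.path}") as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):
        pass

proxy = http.server.HTTPServer(("127.0.0.1", 0), Proxy)
threading.Thread(target=proxy.serve_forever, daemon=True).start()
proxy_port = proxy.server_address[1]

# A client hitting the front port gets the backend's response
body = urllib.request.urlopen(f"http://127.0.0.1:{proxy_port}/").read()
print(body)
```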
I was able to do it by forwarding port 80 traffic to 3000 using iptables.
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000
After that running screen bundle exec rails server -b 0.0.0.0 worked!

restrict SSH connection to specific URL/domain name

I have a server with 2 domain names (let's say domain1.com and domain2.com).
I can SSH into the server with ssh user@domain1.com and ssh user@domain2.com. I would like to only allow ssh user@domain1.com and disable SSH access via domain2.com.
Is that possible?
It does not seem possible to allow SSH connections only via a specific domain name. The domain name is resolved by DNS on the client side, so there is no way for the SSH server to know which domain you used. See also this answer to the same question.
One thing you might try is to configure a firewall (for example iptables) to drop connections to domain2.com on port 22.
A similar problem was discussed here, where they were trying to block a domain in iptables so that visitors could not access the HTTP server using it.
Adjusting the iptables rule to your case (and assuming that your SSH server is running on port 22), I would try this:
iptables -I INPUT -p tcp --dport 22 -m string --string "Host: domain2.com" --algo bm -j DROP
UPDATE:
As Dusan Bajic commented, the rule above would only work for HTTP traffic, because it takes advantage of the HTTP Host header field. This would not work for SSH traffic, which carries no such header.
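The underlying point, that the server only ever sees the resolved IP, never the name the client typed, can be demonstrated with a short Python sketch. Here "localhost" plays the role of domain1.com / domain2.com; both names would resolve to the same address before the TCP connection is even made:

```python
import socket
import threading

# Whatever hostname the client uses, name resolution happens on the client
# side; the server's accept() only ever reports the peer's IP address.

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port on loopback
srv.listen(1)
port = srv.getsockname()[1]

def connect():
    # The client connects "by name"; the name is resolved before connecting.
    c = socket.create_connection(("localhost", port))
    c.close()

t = threading.Thread(target=connect)
t.start()
conn, peer = srv.accept()
t.join()
conn.close()
srv.close()

print(peer[0])  # only an IP address; the name "localhost" is long gone
```

This is why a Host-header match (or any server-side rule keyed on the domain) cannot distinguish the two SSH endpoints.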

Create captive portal on Raspbian

I've got some issues with my captive portal.
I want a pop-up to open when anyone tries to connect to my Raspberry Pi Wi-Fi access point. To do this, I have turned my RPi into a Wi-Fi access point and put a LAMP server on it.
I currently use dnsmasq, and I changed the conf file to:
address=/#/10.0.0.1
listen-address=10.0.0.1
dhcp-range=10.0.0.10,10.0.0.50,12h
And I changed iptables in order to capture all connections:
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
So when I connect with my phone and open the browser, I'm redirected to the home page of the server => this is what I want, so that part works :)
But my problem is that I want a trigger to open the home page automatically when I connect to the network.
Does anyone know how to do this?
Another question: when I visit "google.fr" in my browser, I'm redirected to my Apache home page, but when I launch a search request in the browser, I get an error. Does anyone know why?
The reason you get an error is either:
your server is not set up for HTTPS requests
if you request google.com/search?=whatever, /search doesn't exist on your server.
You need to:
configure your server for HTTPS (but it will show a security alert because of the bad certificate)
tell your server to rewrite any "unknown" URL to a specific virtual host showing your home page
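The "rewrite any unknown URL" idea can be sketched without Apache, using only Python's standard library: a catch-all handler that answers every request path with the same portal page. The page content and port are arbitrary here; in the real setup Apache's rewrite rules play this role:

```python
import http.server
import threading
import urllib.request

# Catch-all captive-portal sketch: every GET, whatever the path
# ("/", "/search", "/generate_204", ...), returns the portal page.
PORTAL = b"<html><body>Welcome to the portal</body></html>"

class Portal(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(PORTAL)))
        self.end_headers()
        self.wfile.write(PORTAL)
    def log_message(self, *args):
        pass  # silence request logging

srv = http.server.HTTPServer(("127.0.0.1", 0), Portal)
threading.Thread(target=srv.serve_forever, daemon=True).start()
port = srv.server_address[1]

# A "search" URL that does not exist on the server still gets the portal page
body = urllib.request.urlopen(
    f"http://127.0.0.1:{port}/search?q=whatever").read()
print(body)
srv.shutdown()
```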
This tutorial for Ubuntu is a good follow-along for the Raspberry Pi if you are using Apache and PHP in your captive portal setup.
http://aryo.info/labs/captive-portal-using-php-and-iptables.html (from archive)

UDP auto-discovery for peers on the same machine

I'm looking at ZeroMQ Realtime Exchange Protocol (ZRE) as inspiration for building an auto-discovery of peers in a distributed application.
I've built a simple prototype application using UDP in Python following this model. It seems it has the (obvious, in retrospect) limitation that it only detects peers if all peers are on other machines, due to the socket bind operation on the discovery port.
Reading up on SO_REUSEADDR and SO_REUSEPORT tells me that I can't exactly do this with the UDP broadcast scheme as described in ZRE.
If you needed to build an auto-discovery mechanism for distributed applications such that multiple application instances (possibly with different versions) can run on the same machine, how would you build it?
You should be able to bind each instance to a different address. The entire 127.0.0.0/8 subnet resolves to your localhost, so you can set up, for example, one service listening on 127.0.0.1, another on 127.0.0.2, and so on: anything from 127.0.0.1 to 127.255.255.254.
# works as expected
nc -l 127.0.0.100 3000 &
nc -l 127.0.0.101 3000 &
# shows error "nc: Address already in use"
nc -l 127.0.0.1 3000 &
nc -l 127.0.0.1 3000 &
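The same demonstration in the question's own language, Python: two UDP sockets can share a port number as long as each binds a distinct loopback address. A minimal sketch, assuming a Linux-style loopback where all of 127.0.0.0/8 is reachable:

```python
import socket

# Two peers on the same machine, same UDP port, different loopback addresses.
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))       # let the OS pick a free port
port = a.getsockname()[1]

b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b.bind(("127.0.0.2", port))    # same port is fine: different address
# b.bind(("127.0.0.1", port)) here would raise "Address already in use"

# Peer a sends a discovery beacon directly to peer b
a.sendto(b"beacon", ("127.0.0.2", port))
data, addr = b.recvfrom(1024)
print(data, addr)
```

The trade-off versus a broadcast-based ZRE scheme is that each peer must know (or enumerate) the loopback addresses in use, since a datagram to 127.0.0.1 no longer reaches a peer bound to 127.0.0.2.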
