I've got some issues with my captive portal.
I want to open a pop-up when anyone tries to connect to my Raspberry Pi WiFi access point. To do this, I have turned my RPi into a WiFi access point and put a LAMP server on it.
Actually I use dnsmasq, and I changed the conf file to:
address=/#/10.0.0.1
listen-address=10.0.0.1
dhcp-range=10.0.0.10,10.0.0.50,12h
And I changed the iptables rules in order to capture all the connections:
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.1:443
iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80 -j DNAT --to-destination 10.0.0.1:80
So when I connect and open the browser on my phone, I'm redirected to the home page of the server => this is what I want, so it's good :)
But my problem is that I want a trigger to open the home page automatically when I connect to the network.
Does anyone know how to do this?
Another question: when I call "google.fr" in my browser, I'm redirected to my Apache home page, but when I launch a search request in the browser, I get an error. Does anyone know why?
The reason why you get an error is either because:
your server is not set up for HTTPS requests, or
if you request google.com/search?q=whatever, /search doesn't exist on your server.
You need to:
configure your server for HTTPS (but it will show a security alert because of the invalid certificate), and
tell your server to rewrite any "unknown" URL to a specific virtual host showing your home page (a minimal sketch follows).
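For the rewrite part, a minimal sketch of an Apache snippet, assuming mod_rewrite is enabled and the portal page lives at /index.php (both are assumptions; adjust to your setup):
# Send any request for a path that doesn't exist on the portal server
# back to the portal home page (index.php is an assumed location)
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^ /index.php [L,R=302]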
This tutorial for Ubuntu is a good follow-along for the Raspberry Pi if you are using Apache and PHP in your captive portal setup.
http://aryo.info/labs/captive-portal-using-php-and-iptables.html (from archive)
I'm just discovering Jelastic, and I'm having difficulties running Strapi.
So far I just have one node, a Docker Strapi image, behind the SLB (the shared load balancer; no dedicated load balancer).
This node is accessed through the SLB, and both public IPv4 and IPv6 addresses are available.
I redirect a subdomain to these public IPs.
I can launch Strapi in the container. However, it does not work well because of two issues:
SSL is not available. I can't install Let's Encrypt Free SSL: "the add-on cannot be installed on this node"...
Port is not redirected, and I have to explicitly indicate the port in the browser URL to access the app homepage.
With these two issues, Strapi cannot work properly.
DOCKER_EXPOSED_PORT 1337 and MASTER_IP are set up for the Docker container.
How can I solve these two issues?
SSL is not available. I can't install Let's Encrypt Free SSL: "the add-on cannot be installed on this node"...
The Jelastic Let's Encrypt add-on can be easily installed on top of any container with Custom SSL support enabled, namely the following servers (the list is constantly being extended):
Load Balancers - NGINX, Apache LB, HAProxy, Varnish
Java application servers - Tomcat, TomEE, GlassFish, Payara, Jetty
PHP application servers - Apache PHP, NGINX PHP
Ruby application servers - Apache Ruby, NGINX Ruby
If you require Let's Encrypt SSL for any other stack, just add a load balancer in front of your application servers and install the add-on. SSL termination at load balancing level is used by default in clustered topologies. Docker containers are not on the list of supported nodes.
Port is not redirected, and I have to explicitly indicate the port in the browser URL to access the app homepage.
When using an external IP address, for correct forwarding you can add two redirection rules to iptables and redirect all requests from ports 80 and 443 to 1337, for example:
*nat
-A PREROUTING -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 1337
-A PREROUTING -p tcp -m tcp --dport 443 -j REDIRECT --to-ports 1337
COMMIT
*filter
-A INPUT -p tcp -m tcp --dport 1337 -m state --state NEW -j ACCEPT
COMMIT
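If you keep these rules in a file in iptables-save format, they can be applied in one step; the path below is only an example, not a Jelastic-specific location:
# Load the saved rules (adjust the path to wherever you store them)
sudo iptables-restore < /etc/iptables/rules.v4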
If you are not going to use an external IP address, you can apply the solution indicated here, or find additional information at this link.
I have set up an environment in Jelastic including a load balancer (tested both Apache and NGINX with the same results) with a public IP, and an application server running the Univention UCS DC Master Docker image (I have also tried a simple Ubuntu 20.04 install).
The application server has a private IP address and is correctly reachable from the internet; I can also SSH into both the load balancer and the app server.
The one thing I can't seem to achieve is to have the app server access the internet (outbound traffic).
I have tried setting up the network in the app server and tried a few NGINX load-balancing configurations, but to be honest I've never used a load balancer before, and I feel that configuring load balancing will not resolve my issue (I might be wrong).
Of course my intention is to learn load balancing, but if someone could just point me in the right direction I would be so grateful.
Question: what needs to be configured in Jelastic or in the servers to have the machines behind the load balancer access the internet?
Thank you for your time.
Cristiano
I was able to resolve the issue by simply detaching and re-attaching the public IP address to the server, so it was not a setup problem; something in Jelastic just got stuck.
Thanks all!
Edit: Actually, to effectively resolve the issue, I have to detach the public IP address from the univention/ucs Docker image, attach it to another node in the environment (i.e. an Ubuntu server I have), then attach the public IP back to the univention Docker image. I can't really figure out why, but it works for me.
To have the machines access the internet, you should add a route in them using your load balancer as a gateway, like this:
Destination     Gateway     Genmask
0.0.0.0         <LB IP>     0.0.0.0
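On a Linux VM this could be done as follows; the load balancer address 192.168.0.1 and the interface eth0 are placeholders for your actual values:
# Add a default route via the load balancer (replace address and interface)
sudo ip route add default via 192.168.0.1 dev eth0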
Your VMs' firewalls should not block ports 80 and 443 for inbound/outbound traffic; using iptables:
sudo iptables -A OUTPUT -p tcp -m multiport --dports 80,443 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
sudo iptables -A INPUT -p tcp -m multiport --sports 80,443 -m conntrack --ctstate ESTABLISHED -j ACCEPT
On your load balancer you should masquerade outgoing traffic (change the source IP) and forward incoming traffic to your VMs' subnet using the LB interface connected to that subnet:
sudo iptables -t nat -A POSTROUTING --out-interface eth0 -j MASQUERADE
sudo iptables -A FORWARD -p tcp --dport 80 -i eth0 -o eth1 -j ACCEPT
sudo iptables -A FORWARD -p tcp --dport 443 -i eth0 -o eth1 -j ACCEPT
You should also enable IP forwarding on your load balancer:
echo 1 > /proc/sys/net/ipv4/ip_forward
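To keep forwarding enabled across reboots (assuming a standard Debian-style sysctl setup):
# Persist the setting and apply it immediately
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p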
I have an issue with my iptables setup. My goal is to reach the HTTPS-based web server inside a Docker container from the server machine itself.
The setup is the following:
The server is connected to the internet via eth0 and serves HTTPS on port 443.
Any users from the outside (internet) reach the server via the IP address 1.2.3.4.
It is connected to the internal network via eth1 and serves DHCP, DNS and some more services.
Any users from the inside (intranet) reach the server via the IP address 10.0.0.1.
The Docker container is connected via docker1 on the server. The latter has the IP address 10.8.0.2 inside the Docker network.
The Docker container serves the web server on port 1443, but iptables forwards (NAT) requests on port 443 to its address 10.8.0.1 and the destination port 1443.
What is working:
The webserver is perfectly reachable from the internet and the intranet.
The webserver can be reached from the server itself using the address 10.8.0.1:1443.
What is not working:
Any client working directly on the server cannot reach the Docker web server using https://example.com:443. Using https://10.8.0.1:1443 would work, but fails due to a certificate error. It is not a goal to skip the certificate check as a workaround.
Excerpt of the iptables configuration:
iptables -P INPUT DROP
iptables -P OUTPUT ACCEPT
iptables -P FORWARD DROP
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A OUTPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -i docker1 -o docker1 -j ACCEPT
iptables -A PREROUTING -t nat -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443
iptables -A FORWARD -o docker1 -p tcp --dport 1443 -j ACCEPT
iptables -A INPUT -i docker1 -j DROP
iptables -A FORWARD -i docker1 -j DROP
Due to this "complicated" setup I am no longer able to understand which of the iptables rules and chains need to be applied to make this work, so I am seeking your help to solve the issue.
Brainstorming about the issue using a simplified model and my understanding of the iptables chains, the path of the packets might/should look like this:
Origin is a local application (wget).
The packets go through the OUTPUT chain.
The packets go through the POSTROUTING chain.
Magic happens...
The packets arrive again in the PREROUTING chain.
The packets might go through INPUT again.
The packets might arrive at the target application (webserver).
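From what I have read so far, packets generated on the server itself never traverse PREROUTING; after the routing decision they only pass through the nat OUTPUT chain (and then POSTROUTING), so the DNAT rule above cannot catch local clients. I suspect an additional rule along these lines is needed (a guess, reusing the addresses from the excerpt above):
# Locally generated packets skip PREROUTING, so the DNAT has to be
# repeated in the nat OUTPUT chain for traffic to the public address
iptables -t nat -A OUTPUT -p tcp -d 1.2.3.4 --dport 443 -j DNAT --to-destination 10.8.0.1:1443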
I have a problem with a WireGuard VPN connection to my office network. I am just testing WireGuard to see if it can replace OpenVPN (which is working fine).
Both sides are Debian 9.7.
The connection is established between client and server successfully, I can ping and ssh in both directions.
Attached to the server side is the local network 10.5.5.0/24; the address of the server is 10.5.5.5, and there are two other computers, 10.5.5.100 and 10.5.5.200.
Server WireGuard Address = 10.0.1.1/24, Client = 10.0.1.3/24
AllowedIPs on Server: 10.0.1.3/32
AllowedIPs on Client: 10.0.1.1/32, 10.5.5.0/24
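For reference, here is roughly how these values map into the two wg0.conf files; the keys, endpoint, and listen port are placeholders, not my real values:
# Server /etc/wireguard/wg0.conf (sketch; keys and port are placeholders)
[Interface]
Address = 10.0.1.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
AllowedIPs = 10.0.1.3/32

# Client /etc/wireguard/wg0.conf (sketch; keys and endpoint are placeholders)
[Interface]
Address = 10.0.1.3/24
PrivateKey = <client-private-key>

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 10.0.1.1/32, 10.5.5.0/24
Endpoint = <server-public-ip>:51820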
Routes on the client are set; I can ping the server from the client at both 10.0.1.1 and 10.5.5.5.
I can't ping/access any other computer on 10.5.5.0/24 (10.5.5.100, 10.5.5.200).
I need to know if there is a problem with WireGuard, Debian, or somewhere between chair and keyboard.
Finally... I figured it out: a missing iptables rule:
iptables -t nat -A POSTROUTING -o ens224 -j MASQUERADE
where ens224 is the network interface for the subnet 10.5.5.0/24.
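Note that relaying between wg0 and ens224 also requires IPv4 forwarding to be enabled on the server, if it is not already:
# Enable IPv4 forwarding so the server routes between wg0 and ens224
sysctl -w net.ipv4.ip_forward=1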
I have two Rails apps on separate virtual servers, but in the same facility. Both apps can communicate via local IP addresses.
This is a two part question:
1) How do I check where the request is originating and limit requests only to those from that location?
2) Do you think this would be secure enough?
My gut is telling me this isn't secure enough because of IP spoofing, but I'm thinking OAuth or similar is a little too hardcore for my needs. Though, maybe not.
This is the first time I've approached something like this and I'm looking for anyone that can push me in the right direction here.
Thanks.
Depending on who's hosting you, the local network (to which your local addresses belong) could be a private network accessible only to your instances or, more likely, it could be shared with other virtual machines that do not belong to you. You would not be open to direct external attacks, but any compromised virtual machine sharing the same local network as you can be a springboard for an attack, so your concerns are absolutely valid.
Answering, in order, your two concerns:
Configure iptables on the local interfaces to accept requests only on specific ports from specific local IPs (read a tutorial for a better understanding of iptables configuration). All other virtual machines on the local network should then not be able to probe you, although they might be able to intercept your traffic (addressed below).
No; you should use SSL over all intra-node connections. This will protect you in two ways: firstly, it will protect you from spoofing (an attacker will be rejected if he does not have your certificate, even if he bypasses iptables by spoofing his address, or because your iptables config gets overwritten by an admin), and secondly, it will protect your data from prying eyes (e.g. an attacker will not be able to snoop your traffic for passwords). Some applications (e.g. most database engines, net-snmpd set up in v3 mode, etc.) support SSL natively. Alternatively, establish and use SSH tunnels, or use stunnel (a sketch follows).
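As an illustration of the stunnel option, a minimal client-side sketch; the service name, ports, and hostname are placeholders, and the database node would run a matching stunnel in server mode with your certificate:
; /etc/stunnel/stunnel.conf on the web node (client end of the tunnel)
client = yes

[mysql-tunnel]
; the app connects to MySQL through this local port
accept = 127.0.0.1:3307
; stunnel carries the traffic, encrypted, to the database node
connect = db-host:3306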
Sample base iptables configuration allowing basic services (HTTP, HTTPS, SSH, ping) on the public (internet) interface, as well as allowing www1 and www2 to connect to this node's MySQL on port 3306 on the eth0 interface (www1 and www2 are defined in /etc/hosts so they resolve to the appropriate IP addresses):
# * raw
#
# Allows internal traffic without loading conntrack
# -A PREROUTING -i lo -d 127.0.0.0/8 -j NOTRACK
*filter
# Allows all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
-A INPUT -i lo -j ACCEPT
-A INPUT ! -i lo -d 127.0.0.0/8 -j DROP
# Accepts all established inbound connections (TCP, UDP, ICMP incl. "network unreachable" etc.)
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allows all outbound traffic
# You can modify this to only allow certain traffic
-A OUTPUT -j ACCEPT
# Allows HTTP and HTTPS connections from anywhere (the normal ports for websites)
-A INPUT -p tcp --dport 80 -j ACCEPT
-A INPUT -p tcp --dport 443 -j ACCEPT
# Allows SSH connections
-A INPUT -p tcp -m state --state NEW --dport 22 -j ACCEPT
# Allow ping
-A INPUT -p icmp -m icmp --icmp-type echo-request -j ACCEPT
# Allows MySQL within our cluster ONLY (these must come before the final
# REJECT rules, since iptables evaluates rules in order)
-A INPUT -p tcp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www1 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p tcp -s www2 -i eth0 --dport 3306 -j ACCEPT
-A INPUT -p udp -s www2 -i eth0 --dport 3306 -j ACCEPT
# log iptables denied calls
-A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level debug
# Reject all other inbound - default deny unless explicitly allowed policy
-A INPUT -j REJECT
-A FORWARD -j REJECT
COMMIT
This doesn't really sound like a Rails question; it's more a question about web architecture. I'm assuming that both machines are accessible to the outside world via HTTP. If that's the case, you may want to consider putting a firewall in front of both machines to create a local network that the two machines are on.
Once you've done that, you should be able to configure the firewall to disallow requests based on any criteria you specify. Given that this is a Rails application I'm going to assume that the API is a set of resources. If that's the case you could configure your firewall to filter requests to the private API.
This way, the machines on the local network can communicate freely as their requests to one another aren't going through the firewall.