I have a Ruby on Rails app here that I'm running with the thin app server.
Its configuration lives in a .yml file:
---
pid: /srv/cica/tmp/pids/thin.pid
group: cica
wait: 30
timeout: 30
log: /srv/cica/log/thin.log
max_conns: 1024
require: []
environment: production
max_persistent_conns: 512
servers: 4
daemonize: true
user: cica
socket: /srv/cica/tmp/thin.sock
chdir: /srv/cica
How could I make thin listen on a TCP socket instead of a unix socket?
The documentation I've found never even mentions the possibility, although indirect references suggest it is possible.
The root of the problem is that the frontend web server (apache2) is not very good at proxying HTTP requests to a unix path. It wouldn't be a problem with nginx.
In theory, you can simply use an IP:PORT pair instead of the socket path:
socket: 127.0.0.1:3000
will work. But if you run multiple thin processes, you will hit a problem.
(Which is very likely, because Ruby is essentially single-threaded. Given the time spent waiting on I/O, running even significantly more processes than you have CPU cores can make sense.)
The socket address parser in thin's configuration interpreter is smart enough to accept a plain IP address, but for the additional sockets it increments the IP address, not the port. Thus you will have multiple thin instances listening on
# thin will listen on these addresses
127.0.0.1:3000
127.0.0.2:3000
127.0.0.3:3000
127.0.0.4:3000
rather than listening on
# this is what you would want, but it is not what happens
127.0.0.1:3000
127.0.0.1:3001
127.0.0.1:3002
127.0.0.1:3003
This surreal behavior is likely not what you want. (Although if you have active interfaces on all of those IPs, it could work.)
However, thin has the nice property that there is a direct mapping between its command-line options and its configuration-file options, and a thin --help command will show them to you. You can force TCP listening using the address and port options:
#socket: /srv/cica/tmp/thin.sock
address: 127.0.0.1
port: 3000
So you already get the correct result: with multiple servers, thin now increments the port rather than the address.
The default values are 0.0.0.0 and 3000.
Since Apache's most common setup (the ProxyPass and ProxyPassReverse directives) proxies only to a single TCP port, you need a little trickery there as well: a load-balancing proxy cluster. The relevant config snippet:
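That per-instance port numbering can be sketched in Ruby (an illustration of the behavior described above, not thin's actual code):

```ruby
# With `servers: 4` and an explicit address/port, each thin instance
# gets base_port + its index:
base_port = 3000
servers   = 4
ports = (0...servers).map { |i| base_port + i }
puts ports.inspect  # => [3000, 3001, 3002, 3003]
```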
<Proxy balancer://cicas>
  BalancerMember http://localhost:3000 disablereuse=On route=cica1
  BalancerMember http://localhost:3001 disablereuse=On route=cica2
  BalancerMember http://localhost:3002 disablereuse=On route=cica3
  BalancerMember http://localhost:3003 disablereuse=On route=cica4
  ProxySet lbmethod=byrequests
</Proxy>
ProxyPass / balancer://cicas/
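One caveat: the balancer works only if the relevant proxy modules are loaded. On Debian-style layouts that means a2enmod proxy proxy_http proxy_balancer lbmethod_byrequests; in a monolithic httpd.conf it would look roughly like this (module paths assumed):

```apache
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule lbmethod_byrequests_module modules/mod_lbmethod_byrequests.so
```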
Related
I'm trying to connect to my redis container from my container running a Go server, but the connection keeps getting refused despite what appears to be a correct setup in my docker-compose.yml:
Go
redisClient = redis.NewClient(&redis.Options{
    Network:  "tcp",
    Addr:     "redis_server:6379",
    Password: "", // no password set
    DB:       0,  // use default DB
})
docker-compose
version: "3"
services:
  redis_server:
    image: "redis"
    ports:
      - "6379:6379"
  lambda_server:
    build: .
    ports:
      - "8080:50051"
    links:
      - redis_server
By default, Redis doesn’t allow remote connections. You can connect to the Redis server only from 127.0.0.1 (localhost) - the machine where Redis is running.
Replace bind 127.0.0.1 with bind 0.0.0.0 in the /etc/redis/redis.conf file.
Then run sudo service redis-server restart to restart the server.
Use the following command to verify that redis is listening on all interfaces on port 6379:
ss -an | grep 6379
You should see something like below. 0.0.0.0 means all IPv4 addresses on the machine.
tcp LISTEN 0 128 0.0.0.0:6379 0.0.0.0:*
tcp LISTEN 0 128 [::]:6379 [::]:*
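If you'd rather check that output programmatically, here is a small Ruby sketch (the helper name is mine; the column position matches the ss output above):

```ruby
# Hypothetical helper: does an `ss -an` line show an all-interfaces
# IPv4 listener (0.0.0.0) rather than a loopback-only one?
def listens_on_all_interfaces?(ss_line)
  local_addr = ss_line.split[4]   # the Local Address:Port column
  local_addr.start_with?("0.0.0.0:")
end

puts listens_on_all_interfaces?("tcp LISTEN 0 128 0.0.0.0:6379 0.0.0.0:*")    # true
puts listens_on_all_interfaces?("tcp LISTEN 0 128 127.0.0.1:6379 0.0.0.0:*")  # false
```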
If that doesn't solve the problem, you might need to check any firewalls that might block the access.
I faced a similar problem, and it was related to address binding. In the Redis configuration file, /etc/redis/redis.conf, find the line starting with bind. Usually this line contains bind 127.0.0.1, which means client connections are accepted only from the same host as the Redis server (the Redis server container, in your case).
You need to add the host name or host IP of your client container to this bind line if you want that client's connections to be accepted:
bind 127.0.0.1 <client-ip or client-hostname>
Another way to achieve this is to bind all addresses:
bind 0.0.0.0
In either case, you need to restart the Redis server with the changed redis.conf.
Update
From redis.conf file, we can see the followings:
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
bind 127.0.0.1
You can see that the bind address is 127.0.0.1 by default. So, for your case, you can either specify the address or comment out the line.
I'm struggling with exposing Mosquitto, which I set up on my CentOS 7 home server, to the outside internet through my router.
Mosquitto runs fine on localhost and port 1883 on the home server. I am able to pub/sub, and it is listening on the port as 127.0.0.1:1883 (tcp).
My home router has a dynamic IP (for now), say 76.43.150.206. On the router I port-forwarded 1883, as both internal and external port, to my home server, say 192.168.1.100.
In the mosquitto.conf file, I have one simple line: "listener 1883 76.43.150.206".
When I then attempt to pub/sub using a Python client on an external computer with mqttc.connect("76.43.150.206", 1883), it says connection refused.
Any hints on what I'm doing wrong or how to get it working? BTW, my understanding of this setup is very basic and I've pretty much been going off blogs.
Here's how it will work:
1.) Set up mosquitto.conf as
listener 1883 0.0.0.0
#cafile <path to ca file>
#certfile <path to server cert>
#keyfile <path to server key>
#require_certificate false
0.0.0.0 binds the server to all interfaces present.
You can uncomment those lines to enable TLS for better security, but you'll have to configure the client to use it as well.
2.) Port forward router's 1883 port number to port 1883 of IP of machine running the broker.
3.) Start the broker and test your client!
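Before blaming the broker config, it can help to verify raw TCP reachability, which is the first thing mqttc.connect needs. A small Ruby sketch (the helper name is mine):

```ruby
require "socket"

# Hypothetical helper: can we open a plain TCP connection to host:port
# within a timeout? This is what any MQTT client must do before the
# protocol handshake even starts.
def reachable?(host, port, timeout: 2)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue StandardError
  false
end

# Demonstrate against a throwaway local listener:
server = TCPServer.new("127.0.0.1", 0)  # port 0 lets the OS pick a free port
port   = server.addr[1]
puts reachable?("127.0.0.1", port)      # true
server.close
```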
You should not put the external address into the mosquitto config file.
You should probably not have a listener line at all, as mosquitto will then bind to all available IP addresses on the machine it's running on, with the default port (1883).
If you really must use the listener directive (e.g. in order to set up SSL), then it should be configured with the internal IP address of the machine running the broker, in this case 192.168.1.100, and with a different port number so it does not clash with the default:
listener 1884 192.168.1.100
Most of the tutorials out there show how to configure the nginx web server as a proxy to a unicorn Ruby application server when they are on the same machine, with the result that the two communicate via unix sockets. How can I configure them when they are on different servers?
Unicorn is designed to serve fast clients only:
unicorn is an HTTP server for Rack applications designed to only serve
fast clients on low-latency, high-bandwidth connections and take
advantage of features in Unix/Unix-like kernels. Slow clients should
only be served by placing a reverse proxy capable of fully buffering
both the request and response in between unicorn and slow clients.
How does this work in a load-balanced, multi-node environment? The answer is to run Nginx+Unicorn (connected via a unix domain socket) on each application node, with a top-level Nginx acting as load balancer on a separate node.
The basic setup is as follows:
In your unicorn config you'll want to listen on a TCP port rather than a unix socket:
listen 80, :tcp_nopush => true
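A minimal config/unicorn.rb along those lines might look like this (worker count, port, and paths are assumptions for illustration):

```ruby
# Sketch only — adjust paths, port, and worker count to your deployment.
worker_processes 4
working_directory "/srv/app"

# TCP instead of a unix socket, so a remote nginx can reach it:
listen 8080, :tcp_nopush => true

pid "/srv/app/tmp/pids/unicorn.pid"
stderr_path "/srv/app/log/unicorn.stderr.log"
stdout_path "/srv/app/log/unicorn.stdout.log"
```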
Likewise, in your Nginx configuration, simply proxy requests to the remote servers:
upstream backend {
    ip_hash;
    server backend1.example.com;
    server backend2.example.com;
    server backend3.example.com down;
    server backend4.example.com;
}
You should also check out http://unicorn.bogomips.org/examples/nginx.conf for unicorn-tailored nginx configuration.
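For completeness, the upstream block needs a server block that actually proxies to it; a minimal sketch (server name assumed):

```nginx
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend;   # the upstream defined above
    }
}
```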
I have nginx serving up my rails app, but I also have a separate 'thin' server running on another port to use with Faye (publish / subscribe gem).
So I believe that, since all requests go through nginx (right?), I can't just call myapp.com:9292 if the thin server is set up on that port, even if I use the myapp.com host rather than localhost for the thin server, because it's not routed through nginx.
If I have the thin server running at 0.0.0.0:9292, what would I need to add to my nginx conf to route pings to myapp.com:9292 to 0.0.0.0:9292?
Actually you can, just call example.com:9292, because Nginx is listening only on port 80 (and sometimes 443).
Unless you add another server block that explicitly listens on 9292, example.com:9292 will go straight to your 'thin' server.
I have a Rails app that is running on port 8080 that I need to trick to think it's running on port 80.
I am running Varnish on port 80 and forwarding requests to nginx on port 8080, but when the user tries to log in with OmniAuth and the Devise gem generates a URL to redirect back to the server, it thinks it's on port 8080, which the user then sees.
Is there any way to trick the Rails app into hard-coding the port as 80 (which I'd think is bad practice), or to have nginx forward the request as if it were running on port 80?
Since I am not running a nginx proxy to the Rails app I can't think of a way to trick the port.
Has anyone run into this issue before? If so, what sort of configuration is needed to fix it?
Thanks in advance!
EDIT:
Both nginx and Varnish are running on the same server.
I have the same setup with Varnish on port 80 and nginx on port 8080, and OmniAuth (no Devise) was doing exactly the same thing. I tried setting X-Forwarded-Port etc. in Varnish and fastcgi_param SERVER_PORT 80; in nginx, both without success. The other piece in my setup is Passenger (which you didn't mention), but if you are indeed using Passenger then you can use:
passenger_set_cgi_param SERVER_PORT 80;
(The docs say you can set this in an http block but that didn't work for me and I had to add it to the server block.)
http://modrails.com/documentation/Users%20guide%20Nginx.html#passenger_set_cgi_param
Set up X-Forwarded-Port in Varnish. See this example and the other results from a Google search for "varnish x-forwarded-port".
You must also, of course, set up X-Forwarded-For and X-Forwarded-Proto.
The headers X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port are a way for HTTP reverse proxies such as Nginx, Squid, or Varnish to communicate to the "back-end" HTTP application server, your Rails application running in Thin or Unicorn, who the user actually is and how the user actually connected.
For example, suppose you have Nginx in front of your Rails application. Your Rails application was booted with Thin and is listening on 127.0.0.1:8080, while Nginx is listening on 0.0.0.0:80 for HTTP and 0.0.0.0:443 for HTTPS. Nginx is configured to proxy all connections to the Rails app. Then your Rails app will think that any user's IP address is 127.0.0.1, the port is 8080, and the scheme is http, even if the actual user connected from 1.2.3.4 and requested the page via https on port 443. The solution is to configure Nginx to set the headers:
X-Forwarded-For: 1.2.3.4
X-Forwarded-Proto: https
X-Forwarded-Port: 443
and the Rails app should use these parameters instead of the default ones.
The same applies for whatever reverse proxy you use, such as Varnish in your case.
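As an illustration of how an app behind the proxy would use these headers, here is a generic Rack-style sketch (the helper is mine, not Rails' actual implementation):

```ruby
# Hypothetical: derive the externally visible origin from forwarded
# headers, falling back to the direct connection values.
def external_origin(env)
  scheme = env["HTTP_X_FORWARDED_PROTO"] || env["rack.url_scheme"]
  port   = (env["HTTP_X_FORWARDED_PORT"] || env["SERVER_PORT"]).to_i
  host   = env["HTTP_X_FORWARDED_HOST"] || env["SERVER_NAME"]
  default = (scheme == "https") ? 443 : 80
  port == default ? "#{scheme}://#{host}" : "#{scheme}://#{host}:#{port}"
end

# Backend sees 8080/http directly, but the proxy forwarded 443/https:
env = {
  "rack.url_scheme" => "http", "SERVER_NAME" => "myapp.com", "SERVER_PORT" => "8080",
  "HTTP_X_FORWARDED_PROTO" => "https", "HTTP_X_FORWARDED_PORT" => "443",
}
puts external_origin(env)  # => "https://myapp.com"
```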
You can set up a proxy and serve the app on whatever port you want.
For example, with Apache on top and Passenger standalone:
<VirtualHost *:80>
  ServerName <name>
  DocumentRoot /home/deploy/<name>
  PassengerEnabled off
  ProxyPass / http://127.0.0.1:<port>/
  ProxyPassReverse / http://127.0.0.1:<port>/
</VirtualHost>
In shell:
passenger start -e staging -p 3003 -d
Your problem seems to be that you're getting redirects to port 8080. The best solution would be to configure Rails (or the OmniAuth/Devise gem) to treat the requests as if they were made on port 80 (but I have no idea how, or whether it is possible).
As ablemike said, Apache has a great module for this (mod_proxy): with ProxyPassReverse it rewrites the redirects back to port-80 redirects. Better still, with mod_proxy_html it will replace port-8080 links in HTML pages with port-80 links.
If you only need to rewrite redirects, you can rewrite redirects in Varnish VCL with something like:
sub vcl_fetch {
  ...
  # Rewrite redirects from port 8080 to port 80
  if ( obj.http.Location ~ "^http://[^:]+:8080/.*" ) {
    set obj.http.Location = regsub(obj.http.Location, "^(http://[^:]+):8080(/.*)", "\1\2");
  }
}
(I think you have to replace obj with beresp if you use varnish >= 2.1)
If you have to rewrite HTML pages, this will be a lot harder to do completely correct with varnish.
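The regsub in the VCL above performs the same substitution as this Ruby sketch (the Location value is hypothetical):

```ruby
# Strip the :8080 suffix from the host part of a redirect Location,
# keeping scheme, host, and path intact:
location  = "http://myapp.com:8080/users/sign_in"
rewritten = location.sub(%r{\A(http://[^:]+):8080(/.*)}, '\1\2')
puts rewritten  # => "http://myapp.com/users/sign_in"
```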