Why does lighttpd mod_fastcgi start a listening socket? - fastcgi

I have been reading the lighttpd 1.4.19 source code and got stuck at the function fcgi_spawn_connection:
if (-1 == connect(fcgi_fd, fcgi_addr, servlen)) {
    /* nothing is accepting connections on the backend address yet,
       so create the listening socket ourselves */
    ...
    bind(fcgi_fd, fcgi_addr, servlen);
    ...
    listen(fcgi_fd, 1024);
}
The question is: why does mod_fastcgi create a listening socket, and what is it used for? Isn't mod_fastcgi supposed to act as a client that connects to the FastCGI processes (e.g. php-cgi)? The php-cgi processes should be the ones listening.
Thanks.

OK, I think I've got it.
php-cgi does not create a listening socket itself; it inherits the one created in fcgi_spawn_connection, which has been dup2()'d onto FCGI_LISTENSOCK_FILENO (usually 0).
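In other words, the web server does the socket setup before spawning the backend. A rough Python sketch of that spawn pattern, just to illustrate the idea (this is not lighttpd's actual code, and the socket path and php-cgi path are placeholders):

import os
import socket

# What fcgi_spawn_connection effectively does when its initial connect() fails,
# i.e. no backend is listening yet: create, bind and listen on the socket itself.
sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.bind("/tmp/php.sock")
sock.listen(1024)

if os.fork() == 0:
    # Child: move the listening socket onto fd 0 (FCGI_LISTENSOCK_FILENO),
    # then exec the FastCGI backend, which inherits it and accept()s on it.
    os.dup2(sock.fileno(), 0)
    os.execv("/usr/bin/php-cgi", ["php-cgi"])

# The parent (the web server) later connect()s to /tmp/php.sock as an ordinary client.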

Related

Unable to connect to remote mqtt broker over ssl web-socket using Paho Javascript library

I am getting the error:
WebSocket connection to 'wss://iot.XXXX.GG:8883/mqtt' failed: Connection closed before receiving a handshake response
when trying to connect to a remote Mosquitto broker over SSL using the Javascript Paho library on Windows 10.
What I have already tried is shown in the following listing:
<script type="text/javascript" language="javascript">
    var mqtt;
    var reconnectTimeout = 2000;
    var host = "iot.XXXX.GG";
    var port = 8883;

    function onConnect() {
        // Once a connection has been made, make a subscription and send a message.
        console.log("Connected");
        message = new Paho.MQTT.Message("Hello World");
        message.destinationName = "sensor1";
        mqtt.send(message);
    }

    function MQTTconnect() {
        console.log("connecting to " + host + " " + port);
        mqtt = new Paho.MQTT.Client(host, port, "clientjs");
        var options = {
            useSSL: true,
            timeout: 3,
            userName: "abc",
            password: "qweqwe",
            onSuccess: onConnect
        };
        mqtt.connect(options);
    }
</script>
The expected result is a console message saying 'Connected'. The actual result is the error shown at the beginning of this post.
By the way, my mosquitto.conf file is:
allow_anonymous false
password_file /etc/mosquitto/passwd
listener 1883 localhost
protocol mqtt
listener 8883
certfile /etc/letsencrypt/live/iot.XXXX.GG/cert.pem
cafile /etc/letsencrypt/live/iot.XXXX.GG/chain.pem
keyfile /etc/letsencrypt/live/iot.XXXX.GG/privkey.pem
# WebSockets - insecure
listener 8083
protocol websockets
#http_dir /home/XXXX/domains/iot.XXXX.GG/public_html
#certfile /etc/letsencrypt/live/iot.XXXX.GG/cert.pem
#cafile /etc/letsencrypt/live/iot.XXXX.GG/chain.pem
#keyfile /etc/letsencrypt/live/iot.XXXX.GG/privkey.pem
The Paho MQTT client can only connect to a broker configured to run MQTT over WebSockets.
The mosquitto.conf file you have provided has 3 listeners defined.
The default native MQTT listener on port 1883 bound only to localhost
A native MQTT over SSL listener on port 8883 using the letsencrypt certificate
An MQTT over WebSockets listener on port 8083 with the certificates commented out.
If you want to connect from the web page using MQTT over WebSockets and SSL, you need to uncomment the certificate lines on the 3rd listener and change the port you are connecting to in the page to 8083 (not 8883).
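Concretely, a sketch of what the third listener block would become, reusing the certificate paths already present in the config above:

# WebSockets - over TLS
listener 8083
protocol websockets
certfile /etc/letsencrypt/live/iot.XXXX.GG/cert.pem
cafile /etc/letsencrypt/live/iot.XXXX.GG/chain.pem
keyfile /etc/letsencrypt/live/iot.XXXX.GG/privkey.pem

and in the page, var port=8083; (keeping useSSL:true).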

ZMQ event publisher in Jenkins doesn't send a notification

I have been trying to figure out what is wrong with my Jenkins ZMQ-event-publisher configuration for more than 23 hours and have given up. Hopefully, you may have an idea what I am doing wrong.
I've installed Jenkins with the ZMQ-event-publisher plugin and, under Manage Jenkins -> Configure System, checked Enable on all Jobs (note: TCP port to publish on is set to 8888).
I then created a new job, checked Check if ZMQ events should be published for this project, and clicked Save.
I have written a Python script using pyZMQ:
#!/usr/bin/env python
import zmq
port = "8888"
context = zmq.Context()
socket = context.socket(zmq.SUB)
socket.connect("tcp://localhost:%s" % port)
socket.setsockopt(zmq.SUBSCRIBE, '')
print "Jenkins... waiting..."
string = socket.recv()
print "recv =>", string
I executed the above script on the Jenkins machine and ran the Jenkins job.
Unfortunately, the script doesn't receive any ZMQ message from Jenkins.
Trying to capture the ZMQ traffic with either tcpdump -i eth0 'port 8888' or tcpdump -i lo 'port 8888' didn't help either.
In addition to that, looking at the /var/log/jenkins/jenkins.log, I get:
Sep 25, 2014 8:54:47 PM org.jenkinsci.plugins.ZMQEventPublisher.ZMQRunnable bindSocket
INFO: Binding ZMQ PUB to port 8888
Sep 25, 2014 8:54:48 PM hudson.model.Run execute
INFO: MyJob #18 main build action completed: SUCCESS
Moreover, netstat -ntlp verifies that the port is listening:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8888 :::* LISTEN 31/java
tcp 0 0 :::57467 :::* LISTEN 31/java
tcp 0 0 :::8009 :::* LISTEN 31/java
tcp 0 0 :::59373 :::* LISTEN 31/java
tcp 0 0 :::8080 :::* LISTEN 31/java
So, what am I doing wrong?
There is no good explanation of how to correctly configure the Jenkins ZMQ plugin, and looking at the plugin code doesn't reveal much.
Your help will be more than appreciated.
Thanks.
EDIT: Dave's suggestion was great, but it hasn't fixed the problem yet.
EDIT 2: It turns out it was failing because Jenkins has been running from a Docker container and I had forgotten to expose all of its ports. With the ports exposed, Dave's suggestion does fix the problem. Yay!!!
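(For reference, publishing the extra port when starting the Jenkins container looks roughly like the line below; the image name and any other options are placeholders for however the container is normally started.)

docker run -p 8080:8080 -p 8888:8888 my-jenkins-image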
From the ZMQ Guide:
Note that when you use a SUB socket you must set a subscription using zmq_setsockopt() and SUBSCRIBE
I suspect that your subscriber script is not seeing events because you need to set the subscribe filter. As described on the zmq_setsockopt page, setting the filter to the empty string subscribes to all messages.
Try adding:
socket.setsockopt(zmq.SUBSCRIBE, '')

uWSGI as a standalone http server with lua

I'm trying to set up uWSGI to run as a standalone server running a simple Lua script (right now, as a POC, using the hello world from http://uwsgi-docs.readthedocs.org/en/latest/Lua.html).
Here's my uwsgi.ini file:
[uwsgi]
master = true
workers = 1
threads = 8
listen = 4096
max-request = 512
pidfile = /uwsgi/logs/uwsgi.pid
procname-master = uWSGI master
auto-procname = true
lua = /uwsgi/hello.lua
socket-timeout = 30
socket = /uwsgi/uwsgi_1.sock
http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
When sending a web request, an empty response is received, and the uWSGI process outputs:
-- unavailable modifier requested: 0 --
I've read that this usually means a plugin is missing; however, the Lua plugin is installed, and doing the same thing through NGINX works fine, which means there's no problem loading Lua.
Any help please?
Thanks.
Somebody told me I had to add http-modifier1 = 6 and now it works.
I still don't understand what '6' means, but whatever.
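For anyone else hitting this: modifier1 tells uWSGI which request plugin should handle a request. 6 is the Lua/WSAPI modifier, while the default 0 means Python/WSGI; the built-in http router was tagging requests with modifier 0, and since only the Lua plugin was loaded, uWSGI reported that modifier as unavailable. With the config above, the fix is just one extra line in the [uwsgi] section:

http = 127.0.0.1:80
http-to = /uwsgi/uwsgi_1.sock
http-modifier1 = 6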

running errbit with nginx and passenger with ssl

I've got things configured so that I can log in to & access errbit running on nginx with SSL.
My problem is that I cannot work out how to set my Rails app's errbit.rb so I can run the test.
The nginx.conf looks a bit like:
server {
    listen 443;
    ssl on;
    ssl_certificate stuff.crt;
    ssl_certificate_key stuff.key;
    server_name www.whatever.org;

    location / {
        root /web/stuff;
    }

    location /errbit {
        root /webapps2;
        passenger_enabled on;
        rails_env development;
        passenger_base_uri /errbit;
    }
}
So www.whatever.org/errbit shows errbit
The initializers/errbit.rb looks like:
Airbrake.configure do |config|
  config.api_key = 'code'
  config.host = 'www.whatever.org/errbit'
  config.port = 443
  config.secure = config.port == 443
end
And running bundle exec rake airbrake:test gives:
...
Started GET "/verify" for at 2012-09-25 20:37:22 +0100
Raising 'AirbrakeTestingException' to simulate application failure.
** [Airbrake] Failure: Net::HTTPNotFound
** [Airbrake] Environment Info: [Ruby: 1.9.2] [Rails: 3.1.1] [Env: staging]
** [Airbrake] Response from Airbrake:
<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.3.5</center>
</body>
</html>
and no message reaches errbit.
Is this just a non-starter, or is the /errbit prefix the problem? SSL? Using the wrong port?
Any suggestions gratefully received.
Thanks in advance.
I had a similar problem; it turned out I had a proxy blocking the server's https requests to the errbit API.
If you do have a proxy blocking connections, you will need to add the following to errbit.rb:
config.proxy_host="1.2.3.4"
config.proxy_port=1111
Essentially, when there is a server error the connection is pushed out from the server to the internet, which most standard proxy configurations will block. Adding this proxy configuration does not impact client-side errors (javascript errors), as the proxy config is not used (and rightly so) by the airbrake notifier javascript for errbit (which sits on the client machine).
Another option, if your errbit application is internal (which it should be unless you know what you're doing), is to add the internal address of the errbit API to the /etc/hosts file on the server your app is running on. This stops the application from trying to connect out of the internal network, bypassing the proxy. Having said that, you would not be able to catch client-side errors.
To do that, add a line like this to /etc/hosts:
1.1.1.1 www.whatever.org
Hope this helps
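For completeness, folding those two settings into the initializer from the question would look roughly like this (the proxy address and port are placeholders for your own proxy):

Airbrake.configure do |config|
  config.api_key = 'code'
  config.host = 'www.whatever.org/errbit'
  config.port = 443
  config.secure = config.port == 443
  # only needed if an outbound proxy sits between this server and errbit
  config.proxy_host = "1.2.3.4"
  config.proxy_port = 1111
end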

Proxy Apache Load Balancers to a Unix Socket instead of a port

How can we do the Nginx configuration below in Apache?
Basically, proxying to a Unix socket instead of a load-balanced port.
I want Unicorn to handle the load balancing instead of Apache.
upstream unicorn_server {
    server unix:/home/prats/public_html/myapp/current/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
    ...
    location / {
        ...
        # If you don't find the filename in the static files
        # Then request it from the unicorn server
        if (!-f $request_filename) {
            proxy_pass http://unicorn_server;
            break;
        }
        ...
    }
}
After having searched for quite a while, I came to the conclusion that using Apache2 + Unicorn via sockets is not possible.
The farthest I got was using mod_fastcgi on the socket file that unicorn provides, but I got 403 Forbidden when trying to access the page.
It seems that FastCGI requires a different protocol than the one Unicorn uses.
Stick with the solution from Mark Kolesar if you have to use Unicorn with Apache. Be aware that you might run into problems (taken from http://rubyforge.org/pipermail/mongrel-unicorn/2011-July/001057.html):
Apache + Unicorn is still unsupported since (as far as anybody knows),
it doesn't fully buffer responses and requests to completely isolate
Unicorn from the harmful effects of slow clients.
ProxyRequests Off
ProxyPass /stylesheets/ !
ProxyPass /javascripts/ !
ProxyPass /images/ !
ProxyPass / http://example.com:8080/
ProxyPassReverse / http://example.com:8080/
Can't you do it by putting unixcat in between?
Have the ProxyPass point at localhost:someport (the Apache side is sketched after the xinetd config below),
with xinetd + unixcat installed,
and /etc/xinetd.d/unicorn holding:
service livestatus
{
    type        = UNLISTED
    port        = someport
    socket_type = stream
    protocol    = tcp
    wait        = no
    cps         = 100 3
    instances   = 500
    per_source  = 250
    flags       = NODELAY
    user        = someone
    server      = /usr/bin/unixcat
    server_args = /var/run/unicorn/mysocket
    only_from   = 127.0.0.1
    disable     = no
}
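On the Apache side the vhost would then just proxy to that local port, along the lines of the earlier answer (someport being whatever was chosen in the xinetd file above):

ProxyRequests Off
ProxyPass / http://127.0.0.1:someport/
ProxyPassReverse / http://127.0.0.1:someport/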
