Is there a way to cancel the start depending on a condition with a FastAPI startup event?

What I want to do is check a condition in the startup event and, if an exception occurs, not start the server (or stop it).
@app.on_event("startup")
def startup_event():
    public_key = None
    try:
        with open(PUBLIC_KEY_FILE) as public_key_file:
            public_key = public_key_file.read()
    except Exception as f_error:
        logger.exception(f_error)
        # cancel the startup
Is there a way to do it?

Raise an exception in the startup function:
@app.on_event("startup")
def startup_event():
    public_key = None
    try:
        with open(PUBLIC_KEY_FILE) as public_key_file:
            public_key = public_key_file.read()
    except Exception as f_error:
        logger.exception(f_error)
        raise SomeException()

Related

UMQTT.simple function to check the client.ping() callback

I am trying to get my head around UMQTT.simple. I am looking to handle instances in which my server might disconnect for a reboot. I want to check whether the client is connected, and if not, wait some period and try to reconnect. The guidance seems to be to use client.ping() for this (How to check Micropython umqtt client is connected?).
For the MQTT.paho client I see there is a way to access ping responses in the logs function (see here: http://www.steves-internet-guide.com/mqtt-keep-alive-by-example/). For UMQTT, the docs indicate that the ping response is handled automatically by wait_msg(): "Ping server (response is automatically handled by wait_msg())" (https://mpython.readthedocs.io/en/master/library/mPython/umqtt.simple.html). There does not appear to be any analogous logs function mentioned in the UMQTT.simple docs.
This is confounding for a couple of reasons:
If I use client.wait_msg(), how do I call client.ping()? client.wait_msg() is a blocking function, so I can't make the ping; the system just disconnects when the keepalive time is reached.
If I call client.check_msg() and client.ping() intermittently, I can't access the callback. My callback function doesn't have parameters to access the ping response (params are f(topic, msg) in the docs).
The way I am solving this for now is to set a bunch of try-except calls on my client.connect and then connect-and-subscribe functions, but it's quite verbose. Is this the way to handle it, or can I take advantage of the ping response in UMQTT.simple?
Below is a sample of the code I am running:
import time
from umqtt.simple import MQTTClient

# Set broker variables and login credentials
# Connect to the network

# Write the subscribe callback
def sub_cb(topic, msg):
    print((topic, msg))

# Write a function that handles connecting and subscribing
def connect_and_subscribe():
    global CLIENT_NAME, BROKER_IP, USER, PASSWORD, TOPIC
    client = MQTTClient(client_id=CLIENT_NAME,
                        server=BROKER_IP,
                        user=USER,
                        password=PASSWORD,
                        keepalive=60)
    client.set_callback(sub_cb)
    client.connect()
    client.subscribe(TOPIC)
    print('Connected to MQTT broker at: %s, subscribed to %s topic' % (BROKER_IP, TOPIC))
    return client  # return the client so that I can do stuff with it

client = connect_and_subscribe()

# Check messages
now = time.time()
while True:
    try:
        client.check_msg()
    except OSError as message_error:  # except if disconnected and check_msg() fails
        if message_error == -1:
            time.sleep(30)  # wait for reboot
            try:
                client = connect_and_subscribe()  # try to connect to the server again
            except OSError as connect_error:  # if the server is still down
                time.sleep(30)  # wait and try again
                try:
                    client = connect_and_subscribe()
                except:
                    quit()  # quit so that I don't get stuck in a loop
    time.sleep(0.1)
    if time.time() - now > 80:  # ping to keep alive (60 * 1.5)
        client.ping()
        now = time.time()  # reset the timer

Thread running in Middleware is using old version of parent's instance variable

I've used the Heroku tutorial to implement websockets.
It works properly with Thin, but does not work with Unicorn and Puma.
There's also an echo message implemented, which responds to the client's message. It works properly on each server, so there are no problems with the websockets implementation.
Redis setup is also correct (it catches all messages, and executes the code inside subscribe block).
How does it work now:
On server start, an empty @clients array is initialized. Then a new Thread is started, which listens to Redis and is intended to send each message to the corresponding user from the @clients array.
On page load, a new websocket connection is created and stored in the @clients array.
If we receive a message from the browser, we send it back to all clients connected as the same user (that part works properly on both Thin and Puma).
If we receive a message from Redis, we also look up all of the user's connections stored in the @clients array.
This is where the weird thing happens:
If running with Thin, it finds the connections in the @clients array and sends the message to them.
If running with Puma/Unicorn, the @clients array is always empty, even if we try it in this order (without page reload or anything):
Send message from browser -> @clients.length is 1, message is delivered
Send message via Redis -> @clients.length is 0, message is lost
Send message from browser -> @clients.length is still 1, message is delivered
Could someone please clarify what I am missing?
Related config of Puma server:
workers 1
threads_count = 1
threads threads_count, threads_count
Related middleware code:
require 'faye/websocket'
class NotificationsBackend
def initialize(app)
#app = app
#clients = []
Thread.new do
redis_sub = Redis.new
redis_sub.subscribe(CHANNEL) do |on|
on.message do |channel, msg|
# logging #clients.length from here will always return 0
# [..] retrieve user
send_message(user.id, { message: "ECHO: #{event.data}"} )
end
end
end
end
def call(env)
if Faye::WebSocket.websocket?(env)
ws = Faye::WebSocket.new(env, nil, {ping: KEEPALIVE_TIME })
ws.on :open do |event|
# [..] retrieve current user
if user
# add ws connection to #clients array
else
# close ws
end
end
ws.on :message do |event|
# [..] retrieve current user
Redis.current.publish({user_id: user.id, { message: "ECHO: #{event.data}"}} )
end
ws.rack_response
else
#app.call(env)
end
end
def send_message user_id, message
# logging #clients.length here will always return correct result
# cs = all connections which belong to that client
cs.each { |c| c.send(message.to_json) }
end
end
Unicorn (and apparently Puma) both start a master process and then fork one or more workers. fork copies (or at least presents the illusion of copying; an actual copy usually only happens as you write to pages) your entire process, but only the thread that called fork exists in the new process.
Clearly your app is being initialised before being forked; this is normally done so that workers can start quickly and benefit from copy-on-write memory savings. As a consequence, your Redis-checking thread is only running in the master process, whereas @clients is being modified in the child process.
You can probably work around this by either deferring the creation of your Redis thread (for example to a per-worker boot hook, as sketched below) or disabling app preloading; however, you should be aware that your setup will prevent you from scaling beyond a single worker process (which, with Puma and a thread-friendly Ruby implementation like JRuby, would be less of a constraint).
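For illustration, a minimal puma.rb sketch of the "defer the thread" approach, assuming Puma's on_worker_boot hook and the CHANNEL constant from the middleware above; how the handler reaches the worker's @clients array is left app-specific:

# puma.rb -- sketch only: start the Redis-listening thread inside each
# forked worker rather than in the preloaded master process.
preload_app!

on_worker_boot do
  Thread.new do
    redis_sub = Redis.new
    redis_sub.subscribe(CHANNEL) do |on|
      on.message do |_channel, msg|
        # [..] parse msg, look up the user, then deliver through the
        # worker-local connection registry (e.g. the middleware's @clients)
      end
    end
  end
end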
Just in case somebody faces the same problem, here are two solutions I came up with:
1. Disable app preloading (this was the first solution I came up with)
Simply remove preload_app! from the puma.rb file. Each worker will then have its own @clients variable, and it will be accessible from the other middleware methods (like call, etc.).
Drawback: you will lose all the benefits of app preloading. That is OK if you have only 1 or 2 workers with a couple of threads, but if you need a lot of them, it's better to keep app preloading. So I continued my research, and here is another solution:
2. Move thread initialization out of the initialize method (this is what I use now)
For example, I moved it to the call method, so this is what the middleware class code looks like now:
attr_accessor :subscriber

def call(env)
  @subscriber ||= Thread.new do  # if no subscriber is present, init a new one
    redis_sub = Redis.new(url: ENV['REDISCLOUD_URL'])
    redis_sub.subscribe(CHANNEL) do |on|
      on.message do |_, msg|
        # parsing message code here, retrieve user
        send_message(user.id, { message: "ECHO: #{event.data}" })
      end
    end
  end
  # other code from the method
end
Both solutions solve the same problem: the Redis-listening thread will be initialized for each Puma worker/thread, not for the main process (which does not actually serve requests).

Rails, ActionController::Live, Puma: ThreadError

I want to stream notifications to the client. For this, I use Redis pub/sub and ActionController::Live. Here is what my StreamingController looks like:
class StreamingController < ActionController::Base
  include ActionController::Live

  def stream
    response.headers['Content-Type'] = 'text/event-stream'
    $redis.psubscribe("user-#{params[:user_id]}:*") do |on|
      on.pmessage do |subscription, event, data|
        response.stream.write "data: #{data}\n\n"
      end
    end
  rescue IOError
    logger.info "Stream closed"
  ensure
    response.stream.close
  end
end
Here is the JS part that listens to the stream:
var source = new EventSource("/stream?user_id=" + user_id);
source.addEventListener("message", function(e) {
  data = jQuery.parseJSON(e.data);
  switch(data.type) {
    case "unread_receipts":
      updateUnreadReceipts(data);
      break;
  }
}, false);
Now if I push something to Redis, the client gets the push notification, so this works fine. But when I click on a link, nothing happens. After stopping the Rails server (I use Puma) with Ctrl+C, I get the following error:
ThreadError: Attempt to unlock a mutex which is locked by another thread
The problem can be solved by adding config.middleware.delete Rack::Lock to development.rb, but then I don't see any console output after pushing to the client. config.cache_classes = true and config.eager_load = true are not options because I don't want to restart my server after every change in development.
Is there any other solution?
If you want to avoid restarting the server to pick up changes then I think you'd need to be running multiple processes.
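As a rough illustration of what "multiple processes" could mean here (my assumption, not part of the answer), a puma.rb along these lines runs several worker processes so that a held-open stream in one does not block requests handled by the others; the values are illustrative:

# puma.rb -- illustrative values only
workers 2        # separate processes, so a streaming request in one worker
                 # does not stall requests handled by the other
threads 1, 5     # a few threads per worker for concurrent connections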

Mechanize - Receiving Errno::EMFILE: Too many open files - socket(2) after a day

I'm running an application that uses mechanize to fetch some data every so often from an RSS feed.
It runs as a heroku worker and after a day or so I'm receiving the following error:
Errno::EMFILE: Too many open files - socket(2)
I wasn't able to find a "close" method within Mechanize; is there anything special I need to do in order to close out my browser sessions?
Here is how I create the browser + read information:
def mechanize_browser
  @mechanize_browser ||= begin
    agent = Mechanize.new
    agent.redirect_ok = true
    agent.request_headers = {
      'Accept-Encoding' => "gzip,deflate,sdch",
      'Accept-Language' => "en-US,en;q=0.8",
    }
    agent
  end
end
And actually fetching information:
response = mechanize_browser.get(url)
And then closing after the response:
def close_mechanize_browser
  @mechanize_browser = nil
end
Thanks in advance!
Since you can't manually close each instance of Mechanize, you can try invoking Mechanize with a block. According to the docs:
After the block executes, the instance is cleaned up. This includes closing all open connections.
So, rather than abstracting Mechanize.new into a custom function, try running Mechanize via the start class method, which should automatically close all your connections upon completion of the request:
Mechanize.start do |m|
  m.get("http://example.com")
end
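Applied to the helper from the question, that might look roughly like the sketch below; the fetch_page name is mine, and the headers are copied from the question:

require 'mechanize'

def fetch_page(url)
  page = nil
  Mechanize.start do |agent|
    agent.redirect_ok = true
    agent.request_headers = {
      'Accept-Encoding' => "gzip,deflate,sdch",
      'Accept-Language' => "en-US,en;q=0.8",
    }
    page = agent.get(url)   # the page body is downloaded here
  end
  page   # connections are closed when the block exits
end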
I ran into this same issue. The Mechanize start example by @zeantsoi is the answer that I ended up following, but there is also a Mechanize.shutdown method if you want to do this manually without the block.
There is also the option of adding a lambda to post_connect_hooks:
Mechanize.new.post_connect_hooks << lambda { |agent, url, response, response_body| agent.shutdown }

What can cause a connection to APNS to intermittently disconnect?

I've got a Ruby script that opens a connection to Apple's push server and sends all the pending notifications. I can't see any reason why, but I get broken pipe errors when Apple disconnects my script. I've written my script to accommodate this happening, but I would rather just find out why it's happening so I can avoid it in the first place.
It doesn't consistently disconnect on a specific notification. It doesn't disconnect at a certain byte transfer size. Everything appears to be sporadic. Are there certain limitations to the data transfer or payload count you can send on a single connection? Seeing people's solutions that hold one connection open all the time, I would assume that isn't the issue. I've seen the connection drop after 3 notifications, and I've seen it drop after 14 notifications. I've never seen it make it past 14.
Has anyone else experienced this type of problem? How can this be handled?
The problem was caused by sending an invalid device token to the APNS server. In this specific case it was a development token. When an invalid device token is sent to APNS, it disconnects the socket. This can cause some headaches, and has been acknowledged by Apple as something they plan to address in future updates.
I had the same issue for a bit and did two things to tackle it:
Put some auto-reconnect logic in place: I try to keep my connection for as long as possible but Apple will disconnect you every now and then. Be prepared to handle this.
Move to the enhanced interface: with the simple interface (which is what the APNS gem and many others use), errors trigger a disconnection without any feedback. If you switch to the enhanced format, you will receive an integer back every time something goes wrong. Bad tokens result in an 8 being returned, and I use this to remove the device from my database.
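For context, here is a minimal sketch of reading that status byte, assuming the legacy APNS binary protocol this answer describes; the ssl_socket variable and the surrounding handling are illustrative, not part of the answer:

# Apple's enhanced-interface error response is 6 bytes:
# command (always 8), a status code, and the identifier of the failed notification.
if (packet = ssl_socket.read(6))
  command, status, identifier = packet.unpack('CCN')
  if command == 8 && status == 8
    # status 8 == invalid token: remove the device that was sent with this
    # notification identifier, then reconnect and resume sending
  end
end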
Here's my current connection code, using EventMachine:
module Apns
  module SocketHandler
    def initialize(wrapper)
      @wrapper = wrapper
    end

    def post_init
      start_tls(:cert_chain_file => @wrapper.pem_path,
                :private_key_file => @wrapper.rsa_path,
                :verify_peer => false)
    end

    def receive_data(data)
      @wrapper.read_data!(data)
    end

    def unbind
      @wrapper.connection_closed!
    end

    def write(data)
      begin
        send_data(data)
      rescue => exc
        @wrapper.connection_error!(exc)
      end
    end

    def close!
      close_connection
    end
  end

  class Connection
    attr_reader :pem_path, :rsa_path

    def initialize(host, port, credentials_path, monitoring, read_data_handler)
      setup_credentials(credentials_path)
      @monitoring = monitoring
      @host = host
      @port = port
      @read_data_handler = read_data_handler
      open_connection!
    end

    def write(data)
      @connection.write(data)
    end

    def open?
      @status == :open
    end

    def connection_closed!
      @status = :closed
    end

    def connection_error!(exception)
      @monitoring.inform_exception!(exception, self)
      @status = :error
    end

    def close!
      @connection.close!
    end

    def read_data!(data)
      @read_data_handler.call(data)
    end

    private

    def setup_credentials(credentials_path)
      @pem_path = "#{credentials_path}.pem"
      @rsa_path = "#{credentials_path}.rsa"
      raise ArgumentError.new("#{credentials_path}.pem and #{credentials_path}.rsa must exist!") unless (File.exists?(@pem_path) and File.exists?(@rsa_path))
    end

    def open_connection!
      @connection = EventMachine.connect(@host, @port, SocketHandler, self)
      @status = :open
    end
  end
end
It separates writes and reads on the connection, using the ID field in the notification to correlate the notifications I send with the feedback I receive.
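For illustration, a sketch of how such an identifier can be packed into an enhanced-format frame; the pack string follows the legacy binary protocol, and the variable names are mine rather than the answer's:

# Enhanced notification frame: command 1, a 4-byte identifier of your choosing,
# an expiry timestamp, then the token and the JSON payload with their lengths.
identifier = 42                        # echoed back in any error response
expiry     = Time.now.to_i + 86_400    # keep the notification around for a day
frame = [1, identifier, expiry,
         32, device_token_hex,         # token length, then token as 64 hex chars
         payload.bytesize, payload].pack('CNNnH64na*')
connection.write(frame)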
