FastAPI and OpenCV background task

I'd like to know if there is a way to run the code inside the endless loop (reading a frame using OpenCV and processing it) asynchronously. I'm considering using multithreading for this purpose (video capture running in the background alongside the API), but before that I gave the following a try:
class BackgroundRunner:
    def __init__(self):
        # OpenCV capture initialization
        self.cap = cv2.VideoCapture(0)

    async def run_main(self):
        while True:
            ret, image_np = self.cap.read()
            if ret:
                ...  # Process image
            else:
                ...  # break

recognition_runner = BackgroundRunner()

# Start the recognition runner on startup
@app.on_event("startup")
async def app_startup():
    asyncio.create_task(recognition_runner.run_main())

Have you had a look at the library called fastapi-utils?
https://fastapi-utils.davidmontague.xyz/user-guide/repeated-tasks/
They have a repeated-task decorator which you could potentially adapt and use with your code in the following way:
class BackgroundRunner:
    def __init__(self):
        # OpenCV capture initialization
        self.cap = cv2.VideoCapture(0)
        self.processing = False  # tracks whether a loop iteration is in progress

    async def process(self):
        # remove the while True, as this action will be repeated
        if self.processing:
            return
        self.processing = True
        ret, image_np = self.cap.read()
        if ret:
            ...  # Process image
        else:
            ...  # break
        self.processing = False

recognition_runner = BackgroundRunner()

# Start the recognition runner on startup
@app.on_event("startup")
@repeat_every(seconds=0.1)
async def app_startup():
    await recognition_runner.process()
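If you'd rather keep the endless loop from the question, the multithreading idea the question mentions can also be combined with asyncio by pushing the blocking cap.read() call onto a worker thread with run_in_executor, so the event loop stays free to serve requests. A minimal sketch, with the processing body left as a placeholder:

import asyncio
import cv2
from fastapi import FastAPI

app = FastAPI()

class BackgroundRunner:
    def __init__(self):
        self.cap = cv2.VideoCapture(0)

    async def run_main(self):
        loop = asyncio.get_running_loop()
        while True:
            # Run the blocking read() in the default thread pool so the
            # event loop stays responsive while a frame is captured.
            ret, image_np = await loop.run_in_executor(None, self.cap.read)
            if not ret:
                break
            # Process image here

recognition_runner = BackgroundRunner()

@app.on_event("startup")
async def app_startup():
    asyncio.create_task(recognition_runner.run_main())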

Related

Use PySide6 in thread

Qt has a promising SCXML module. Since PySCXML is obsolete, there is no other native Python SCXML library that lets me run an SCXML state machine. That's why I'm trying PySide6.
Since I don't need any other Qt functionality besides the SCXML library, I thought about running the QCoreApplication in a separate thread, in order to have the event loop there.
According to the documentation, QScxmlStateMachine needs one.
Unfortunately my start_statemachine() method doesn't return, but the state machine starts working.
Any advice on how to start a QScxmlStateMachine in a thread is welcome.
from PySide6.QtCore import QCoreApplication, QObject, QTimer
from PySide6.QtScxml import QScxmlStateMachine
import threading

def start_statemachine(filename):
    app = QCoreApplication()
    mysm = MyStateMachine(filename)
    mysm.start_sm()
    app.exec()

class MyStateMachine(QObject):
    def __init__(self, filename):
        super(MyStateMachine, self).__init__()
        self.sm = QScxmlStateMachine.fromFile(filename)
        self.counter = 0
        self.timer = QTimer()
        self.timer.setInterval(2000)
        self.timer.timeout.connect(self.recurring_timer)
        self.timer.start()

    def start_sm(self):
        print('starting statemachine')
        self.sm.setRunning(True)

    def recurring_timer(self):
        print(self.sm.activeStateNames())
        self.counter += 1
        print("Counter: %d" % self.counter)
        print('statemachine running status: ' + str(self.sm.isRunning()))

if __name__ == '__main__':
    x = threading.Thread(target=start_statemachine('statemachine.scxml'))
    x.start()  # won't be reached
    while True:
        pass  # do something else
    x.join()
The thread target needs to be a reference to a function that will be called in the external thread, but you're not running start_statemachine() in another thread: you're actually executing it in place:
x = threading.Thread(target=start_statemachine('statemachine.scxml'))
                            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Your program is stuck there, no thread is even created because the constructor is still "waiting" for start_statemachine() to return, and since exec() is blocking, nothing else happens.
A basic solution could be to use a lambda:
x = threading.Thread(target=lambda: start_statemachine('statemachine.scxml'))
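Alternatively, threading.Thread can receive the function and its arguments separately, which avoids the lambda:

x = threading.Thread(target=start_statemachine, args=('statemachine.scxml',))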
But you'll need access to the application in order to be able to quit it: x.join() won't do anything, because the QCoreApplication event loop will keep going. A possibility is to create a basic class that provides a reference to the application:
class StateMachineWrapper:
    app = None

    def __init__(self, filename):
        self.filename = filename

    def start(self):
        self.app = QCoreApplication([])
        mysm = MyStateMachine(self.filename)
        mysm.start_sm()
        self.app.exec()

# ...

if __name__ == '__main__':
    statemachine = StateMachineWrapper('statemachine.scxml')
    x = threading.Thread(target=statemachine.start)
    x.start()

    while True:
        pass  # do something else

    if statemachine.app:
        statemachine.app.quit()
    x.join()
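As a side note, the while True: pass busy-wait keeps one CPU core at 100%. If the main thread only needs to idle, a sketch along these lines is lighter (the 0.1-second interval is an arbitrary choice):

import time

try:
    while True:
        time.sleep(0.1)  # idle without burning CPU; do other work here
except KeyboardInterrupt:
    pass
if statemachine.app:
    statemachine.app.quit()
x.join()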

Can I await the same Task multiple times in Python?

I need to do a lot of work, but luckily it's easy to decouple into different tasks for asynchronous execution. Some of those depend on each other, and it's perfectly clear to me how one task can await multiple others to get their results. However, I don't know how I can have multiple different tasks await the same coroutine and both get the result. The documentation also doesn't mention this case, as far as I can find.
Consider the following minimal example:
from asyncio import create_task, gather

async def TaskA():
    ...  # This is clear
    return result

async def TaskB(task_a):
    task_a_result = await task_a
    ...  # So is this
    return result

async def TaskC(task_a):
    task_a_result = await task_a
    ...  # But can I even do this?
    return result

async def main():
    task_a = create_task(TaskA())
    task_b = create_task(TaskB(task_a))
    task_c = create_task(TaskC(task_a))
    gather(task_b, task_c)  # Can I include task_a here to signal the intent of "wait for all tasks"?
For the actual script, all tasks do some database operations, some of which involve foreign keys and therefore depend on other tables already being filled. Some depend on the same table. I definitely need:

- All tasks run once, and only once.
- Some tasks depend on others being done before they start.

In brief, the question is: does this work? Can I await the same instantiated coroutine multiple times, and get the result every time? Or do I need to put the awaits in main() and pass the results along? (That is the current setup, and I don't like it.)
You can await the same task multiple times:
from asyncio import create_task, gather, run

async def coro_a():
    print("executing coro a")
    return 'a'

async def coro_b(task_a):
    task_a_result = await task_a
    print("from coro_b:", task_a_result)
    return 'b'

async def coro_c(task_a):
    task_a_result = await task_a
    print("from coro_c:", task_a_result)
    return 'c'

async def main():
    task_a = create_task(coro_a())
    print(await gather(coro_b(task_a), coro_c(task_a)))

if __name__ == "__main__":
    run(main())
Will output:
executing coro a
from coro_b: a
from coro_c: a
['b', 'c']
What you cannot do is await the same coroutine multiple times:

...
async def main():
    task_a = coro_a()
    print(await gather(coro_b(task_a), coro_c(task_a)))
...
Will raise RuntimeError: cannot reuse already awaited coroutine.
As long as you schedule your coroutine coro_a with create_task, your code will work.
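To answer the aside in the question's code comment: yes, task_a can be passed to gather as well, because a finished task can be awaited again and simply returns its stored result. A minimal sketch:

async def main():
    task_a = create_task(coro_a())
    # task_a is awaited inside coro_b/coro_c *and* by gather
    results = await gather(coro_b(task_a), coro_c(task_a), task_a)
    print(results)  # ['b', 'c', 'a']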

How do you terminate a ROS spin when receiving a message

We know the classic form of a subscriber node in ROS
def callback(msg):
    ...  # do something with the msg

rospy.init_node('the_node', anonymous=True)
sub = rospy.Subscriber('message', Image, callback)  # for example Images, but can be anything
rospy.spin()
Here the node will be receiving messages and processing them with callback, while ROS "spins".
My question is: is there a simple way to get out of this spin based on, for example, a message we receive?
def callback(msg):
    ...  # if we receive a msg that says "FINISH", break the main spin

rospy.init_node('the_node', anonymous=True)
sub = rospy.Subscriber('message', Image, callback)  # for example Images, but can be anything
rospy.spin()
print("spin was broken")
The purpose of rospy.spin() is to go into an infinite loop, processing callbacks until a shutdown signal is received. The way to get out of the spin, and the only reason you ever should, is when the process is shutting down. This can be done via sys.exit() in Python or rospy.signal_shutdown().
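For example, a minimal sketch of shutting down from inside the callback; std_msgs/String is assumed here for the command topic instead of Image:

import rospy
from std_msgs.msg import String

def callback(msg):
    if msg.data == "FINISH":
        # Request shutdown; this makes rospy.spin() return.
        rospy.signal_shutdown("FINISH received")

rospy.init_node('the_node', anonymous=True)
sub = rospy.Subscriber('message', String, callback)
rospy.spin()
print("spin was broken")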
Based on your example, it seems like you want to break out of the spin but keep the node alive to do more work. If that's the case, this is not the correct use of rospy.spin(), and you should reconsider what you're trying to accomplish and by what method. Consider using a run loop with rospy.Rate.sleep() instead:
cb_signal = False

def callback(msg):
    global cb_signal
    cb_signal = msg.data

def run():
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        # Do some other work
        if cb_signal:
            some_other_method()
        rate.sleep()

if __name__ == '__main__':
    rospy.init_node('my_node')
    rospy.Subscriber('message', Bool, callback)
    run()

Multiple outputs from one input based on features

I would like to build many outputs based on the same input, e.g. a hex and a binary from an elf.
I will do this multiple times, in different places in the wscript, so I'd like to wrap it in a feature.
Ideally something like:
bld(features="hex", source="output.elf")
bld(features="bin", source="output.elf")
How would I go about implementing this?
If your elf files always have the same extension, you can simply use that:
# untested, naive code
from waflib import TaskGen

@TaskGen.extension('.elf')
def process_elf(self, node):  # <- self = task gen, node is the current input node
    if "bin" in self.features:
        bin_node = node.change_ext('.bin')
        self.create_task('make_bin_task', node, bin_node)
    if "hex" in self.features:
        hex_node = node.change_ext('.hex')
        self.create_task('make_hex_task', node, hex_node)
If not, you have to define the features you want like that:
from waflib import TaskGen

@TaskGen.feature("hex", "bin")  # <- attach method to features hex AND bin
@TaskGen.before('process_source')
def transform_source(self):  # <- here self = task generator
    self.inputs = self.to_nodes(getattr(self, 'source', []))
    self.meths.remove('process_source')  # <- to disable the standard process_source

@TaskGen.feature("hex")  # <- attach method to feature hex
@TaskGen.after('transform_source')
def process_hex(self):
    for i in self.inputs:
        self.create_task("make_hex_task", i, i.change_ext(".hex"))

@TaskGen.feature("bin")  # <- attach method to feature bin
@TaskGen.after('transform_source')
def process_bin(self):
    for i in self.inputs:
        self.create_task("make_bin_task", i, i.change_ext(".bin"))
You have to write the two tasks make_hex_task and make_bin_task. You should put all this in a separate Python file and make a "plugin".
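For instance, a minimal sketch of those two task classes, assuming the outputs are produced with objcopy and that an OBJCOPY variable was set during configuration (the tool and flags are assumptions, not waf built-ins):

from waflib import Task

class make_hex_task(Task.Task):
    # waf expands run_str into a run() method; SRC/TGT are the task's nodes
    run_str = '${OBJCOPY} -O ihex ${SRC} ${TGT}'
    color = 'CYAN'

class make_bin_task(Task.Task):
    run_str = '${OBJCOPY} -O binary ${SRC} ${TGT}'
    color = 'CYAN'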
You can also define a "shortcut" to call:
def build(bld):
    bld.make_bin(source="output.elf")
    bld.make_hex(source="output.elf")
    bld(features="hex bin", source="output.elf")  # when both are needed in the same place
Like that:
from waflib.Configure import conf

@conf
def make_bin(self, *k, **kw):  # <- here self = build context
    kw["features"] = "bin"  # <- you can add bin to existing features kw
    return self(*k, **kw)

@conf
def make_hex(self, *k, **kw):
    kw["features"] = "hex"
    return self(*k, **kw)

Temporarily modify the current process's environment

I use the following code to temporarily modify environment variables.
import os
from contextlib import contextmanager

@contextmanager
def _setenv(**mapping):
    """``with`` context to temporarily modify the environment variables"""
    backup_values = {}
    backup_remove = set()
    for key, value in mapping.items():
        if key in os.environ:
            backup_values[key] = os.environ[key]
        else:
            backup_remove.add(key)
        os.environ[key] = value
    try:
        yield
    finally:
        # restore the old environment
        for k, v in backup_values.items():
            os.environ[k] = v
        for k in backup_remove:
            del os.environ[k]
This with context is mainly used in test cases. For example,
def test_myapp_respects_this_envvar():
    with _setenv(MYAPP_PLUGINS_DIR='testsandbox/plugins'):
        myapp.plugins.register()
        [...]
My question: is there a simpler/more elegant way to write _setenv? I thought about doing backup = os.environ.copy() and then os.environ = backup, but I am not sure whether that would affect program behavior (e.g. if os.environ is referenced elsewhere in the Python interpreter).
I suggest the following implementation:
import contextlib
import os

@contextlib.contextmanager
def set_env(**environ):
    """
    Temporarily set the process environment variables.

    >>> with set_env(PLUGINS_DIR=u'test/plugins'):
    ...     "PLUGINS_DIR" in os.environ
    True

    >>> "PLUGINS_DIR" in os.environ
    False

    :type environ: dict[str, unicode]
    :param environ: Environment variables to set
    """
    old_environ = dict(os.environ)
    os.environ.update(environ)
    try:
        yield
    finally:
        os.environ.clear()
        os.environ.update(old_environ)
EDIT: more advanced implementation
The context manager below can be used to add/remove/update your environment variables:
import contextlib
import os

@contextlib.contextmanager
def modified_environ(*remove, **update):
    """
    Temporarily updates the ``os.environ`` dictionary in-place.

    The ``os.environ`` dictionary is updated in-place so that the modification
    is sure to work in all situations.

    :param remove: Environment variables to remove.
    :param update: Dictionary of environment variables and values to add/update.
    """
    env = os.environ
    update = update or {}
    remove = remove or []

    # List of environment variables being updated or removed.
    stomped = (set(update.keys()) | set(remove)) & set(env.keys())
    # Environment variables and values to restore on exit.
    update_after = {k: env[k] for k in stomped}
    # Environment variables and values to remove on exit.
    remove_after = frozenset(k for k in update if k not in env)

    try:
        env.update(update)
        [env.pop(k, None) for k in remove]
        yield
    finally:
        env.update(update_after)
        [env.pop(k) for k in remove_after]
Usage examples:
>>> with modified_environ('HOME', LD_LIBRARY_PATH='/my/path/to/lib'):
...     home = os.environ.get('HOME')
...     path = os.environ.get("LD_LIBRARY_PATH")
>>> home is None
True
>>> path
'/my/path/to/lib'
>>> home = os.environ.get('HOME')
>>> path = os.environ.get("LD_LIBRARY_PATH")
>>> home is None
False
>>> path is None
True
EDIT2
A demonstration of this context manager is available on GitHub.
If you don't need a reusable context manager, the same snapshot-and-restore pattern can be applied inline:

_environ = dict(os.environ)  # or os.environ.copy()
try:
    ...
finally:
    os.environ.clear()
    os.environ.update(_environ)
I was looking to do the same thing but for unit testing; here is how I did it using the unittest.mock.patch function:
class EnvVarTestCase(unittest.TestCase):
    def test_function_with_different_env_variable(self):
        with mock.patch.dict('os.environ', {'hello': 'world'}, clear=True):
            self.assertEqual(os.environ.get('hello'), 'world')
            self.assertEqual(len(os.environ), 1)
Basically, by using unittest.mock.patch.dict with clear=True, we make os.environ a dictionary containing solely {'hello': 'world'}.
Removing clear=True keeps the original os.environ and adds/replaces the specified key/value pairs within it.
Removing {'hello': 'world'} (while keeping clear=True) just creates an empty dictionary, so os.environ will be empty within the with block.
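For instance, dropping clear=True merges the patch into the existing environment instead of replacing it (a minimal sketch; the test name is illustrative):

def test_merges_with_existing_environ():
    with mock.patch.dict('os.environ', {'hello': 'world'}):
        assert os.environ['hello'] == 'world'
        # all pre-existing variables are still present here
    # restored on exit (assuming 'hello' was not set before)
    assert 'hello' not in os.environ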
In pytest you can temporarily set an environment variable using the monkeypatch fixture. See the docs for details. I've copied a snippet here for your convenience.
import os

import pytest
from typing import Any, NewType

# Alias for the ``type`` of monkeypatch fixture.
MonkeyPatchFixture = NewType("MonkeyPatchFixture", Any)

# This is the function we will test below to demonstrate the ``monkeypatch`` fixture.
def get_lowercase_env_var(env_var_name: str) -> str:
    """
    Return the value of an environment variable. Variable value is made all lowercase.

    :param env_var_name:
        The name of the environment variable to return.
    :return:
        The value of the environment variable, with all letters in lowercase.
    """
    env_variable_value = os.environ[env_var_name]
    lowercase_env_variable = env_variable_value.lower()
    return lowercase_env_variable

def test_get_lowercase_env_var(monkeypatch: MonkeyPatchFixture) -> None:
    """
    Test that the function under test indeed returns the lowercase-ified
    form of ENV_VAR_UNDER_TEST.
    """
    name_of_env_var_under_test = "ENV_VAR_UNDER_TEST"
    env_var_value_under_test = "EnvVarValue"
    expected_result = "envvarvalue"

    # KeyError because ``ENV_VAR_UNDER_TEST`` is looked up in os.environ before its value is set by ``monkeypatch``.
    with pytest.raises(KeyError):
        assert get_lowercase_env_var(name_of_env_var_under_test) == expected_result

    # Temporarily set the environment variable's value.
    monkeypatch.setenv(name_of_env_var_under_test, env_var_value_under_test)
    assert get_lowercase_env_var(name_of_env_var_under_test) == expected_result

def test_get_lowercase_env_var_fails(monkeypatch: MonkeyPatchFixture) -> None:
    """
    This demonstrates that ENV_VAR_UNDER_TEST is reset in every test function.
    """
    env_var_name_under_test = "ENV_VAR_UNDER_TEST"
    expected_result = "envvarvalue"
    with pytest.raises(KeyError):
        assert get_lowercase_env_var(env_var_name_under_test) == expected_result
For unit testing I prefer using a decorator function with optional parameters. This way I can use the modified environment values for a whole test function. The decorator below also restores the original environment values in case the function raises an Exception:
import os

def patch_environ(new_environ=None, clear_orig=False):
    if not new_environ:
        new_environ = dict()

    def actual_decorator(func):
        from functools import wraps

        @wraps(func)
        def wrapper(*args, **kwargs):
            original_env = dict(os.environ)
            if clear_orig:
                os.environ.clear()
            os.environ.update(new_environ)
            try:
                result = func(*args, **kwargs)
            finally:
                # restore in place even if an Exception was raised
                # (rebinding os.environ would not affect the real environment)
                os.environ.clear()
                os.environ.update(original_env)
            return result

        return wrapper

    return actual_decorator
Usage in unit tests:
class Something:
    @staticmethod
    def print_home():
        home = os.environ.get('HOME', 'unknown')
        print("HOME = {0}".format(home))

class SomethingTest(unittest.TestCase):
    @patch_environ({'HOME': '/tmp/test'})
    def test_environ_based_something(self):
        Something.print_home()  # prints: HOME = /tmp/test

unittest.main()
