Trackbar in rqt (ROS + OpenCV)

I am trying to interface ROS and OpenCV. I was able to threshold the video stream and display the output in rqt. Now I want to adjust the threshold range by creating a trackbar in rqt. How can I implement this?

The best way in terms of integration and looks would be to create your own rqt plugin (tutorial). However, you'd need to find some way to notify your node about any changes (e.g. via a service call).
Much easier and faster, and usually sufficient, is to re-use existing functionality. In this case, take a look at dynamic_reconfigure. It allows you to change parameters on the fly; you only need to define the configuration and register a callback in your code (tutorials). The GUI integrates into rqt.
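For example, a minimal dynamic_reconfigure setup for a single threshold parameter could look like the sketch below; the package name my_vision, the node name, and the value range are illustrative placeholders, not something from the question.

```python
#!/usr/bin/env python
# cfg/Threshold.cfg -- parameter definition, a separate file in your package
from dynamic_reconfigure.parameter_generator_catkin import *

gen = ParameterGenerator()
#       name         type   level  description               default  min  max
gen.add("threshold", int_t, 0,     "Binary threshold value", 128,     0,   255)
exit(gen.generate("my_vision", "threshold_node", "Threshold"))
```

In the node itself you register a callback that fires whenever the slider is moved:

```python
#!/usr/bin/env python
# threshold_node.py -- minimal node-side sketch
import rospy
from dynamic_reconfigure.server import Server
from my_vision.cfg import ThresholdConfig  # generated from Threshold.cfg

def reconfigure_cb(config, level):
    rospy.loginfo("Threshold changed to %d", config.threshold)
    # store config.threshold wherever your image callback reads it from
    return config

if __name__ == "__main__":
    rospy.init_node("threshold_node")
    Server(ThresholdConfig, reconfigure_cb)
    rospy.spin()
```

The slider then appears in rqt under Plugins > Configuration > Dynamic Reconfigure.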

Related

Anylogic: How to enable/disable path between nodes using a control

Refer to the image. The objective is to enable/disable the yellow path between experiment runs (the purpose is to measure the time saved if vehicles have the ability to use the short-cut).
What is the easiest way to do this programmatically (i.e. without adding/removing the path manually)?
If you have Material Handling Library transporters using the network, you can programmatically call one of its functions to prohibit usage of the path.
If not, you will have to create the network programmatically upfront and include or exclude the path accordingly.
One trick before going there: you can try path.setBidirectional(false); to prohibit travelling against the drawn direction. But obviously, this only helps in some cases.
PS: You can also go all out and take full control of your network by agent-ifying it, see my full workshop here: https://www.benjamin-schumann.com/blog/2022/8/6/taking-control-of-your-network-agent-based-pathfinding

Is there a programmatic way to see what graphics API a game is using?

For games like DOTA 2, which can run with different graphics APIs such as DX9, DX11, and Vulkan, I have not been able to come up with a viable way of checking which of the APIs is currently in use. I want to do this to correctly inject a DLL in order to display images over the game.
I have looked into manually checking which DLLs the game has loaded, for example with this tool: https://learn.microsoft.com/en-us/sysinternals/downloads/listdlls
However, in the case of DOTA, it loads both the d3d9.dll and d3d11.dll libraries if no API is specified in the launch options on Steam. Does anyone have other ideas as to how to determine which graphics API is being used?
In Vulkan, a clean way would be to implement a Vulkan layer that draws the overlay. It is slightly cleaner than outright injecting DLLs, and it could work on multiple platforms.
In DirectX, screen-capture software typically does this; some of it adds an FPS counter and similar overlays. There are open-source projects with similar goals, e.g. https://github.com/GPUOpen-Tools/OCAT. I believe the conventional method is to intercept (i.e. "hook", in Win32 API terminology) all the appropriate API calls.
As for simple detection: if the game calls D3D12CreateDevice, then it is likely using Direct3D 12. Then again, an app could create devices for all the APIs and proceed not to use them. But API detection is not particularly important if you only want to draw an overlay; as long as you intercept all the Present calls, you can draw your content on top.
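If you still want the loaded-module heuristic, it is easy to automate. Below is a minimal sketch, assuming the third-party psutil package (an assumption, not something from the question); as noted above, it only tells you which DLLs are mapped, not which API is actually rendering.

```python
# Minimal sketch: list well-known graphics DLLs mapped into a process.
# Assumes the third-party psutil package. Caveat from above: a game may
# map several of these DLLs while only rendering with one of them.
import psutil

GRAPHICS_DLLS = {"d3d9.dll", "d3d11.dll", "d3d12.dll", "vulkan-1.dll", "opengl32.dll"}

def loaded_graphics_dlls(pid):
    """Return the set of known graphics DLLs loaded by the process `pid`."""
    found = set()
    for m in psutil.Process(pid).memory_maps():
        name = m.path.rsplit("\\", 1)[-1].lower()  # keep only the file name
        if name in GRAPHICS_DLLS:
            found.add(name)
    return found

# e.g. print(loaded_graphics_dlls(1234))  # 1234 is a hypothetical game PID
```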

How to simulate audio and video calls in NS3?

I want to generate different types of traffic for analyzing OFDMA transmission in NS3. How can I simulate video and audio calls?
The first three options that come to mind are:
If you want to be as close to reality as possible, try out the Direct Code Execution (DCE) Module. I've never used it, so I'm not sure how well it's supported.
Use the OnOffApplication. The OnOffApplication lets you set OnTime, OffTime, and a DataRate (among other attributes). You can determine the rate at which your audio or video program sends data, and then provide that rate to the OnOffApplication. You may find the OnOffHelper convenient for setting the various parameters of an OnOffApplication (see the sketch after this list).
Create your own Application. This option may be of particular interest, since you could simulate variable-bitrate audio/video calls. If you choose this option, I highly suggest you check out the ns-3 tutorial's walkthrough of fifth.cc to learn how to create your own Application.
The second option is probably easiest to use, but may not be as accurate as the first, or as flexible as the third.
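As a starting point for the second option, here is a minimal sketch using the ns-3 Python bindings. The node container, sink address, rate, and packet size are placeholder assumptions you would adapt to your OFDMA scenario:

```python
# Sketch of option 2 with the classic ns-3 Python bindings. Assumes `nodes`
# is a NodeContainer with the Internet stack installed and `sink_address`
# is an ns.network.InetSocketAddress pointing at the receiving node.
import ns.core
import ns.network
import ns.applications

# Constant 64 kb/s with 160-byte packets roughly mimics a G.711 voice
# stream; a higher rate (e.g. "2Mbps") would approximate a video call.
onoff = ns.applications.OnOffHelper("ns3::UdpSocketFactory",
                                    ns.network.Address(sink_address))
onoff.SetAttribute("DataRate", ns.network.DataRateValue(ns.network.DataRate("64kb/s")))
onoff.SetAttribute("PacketSize", ns.core.UintegerValue(160))
onoff.SetAttribute("OnTime", ns.core.StringValue("ns3::ConstantRandomVariable[Constant=1.0]"))
onoff.SetAttribute("OffTime", ns.core.StringValue("ns3::ConstantRandomVariable[Constant=0.0]"))

apps = onoff.Install(nodes.Get(0))   # install the traffic source on node 0
apps.Start(ns.core.Seconds(1.0))
apps.Stop(ns.core.Seconds(10.0))
```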

Functions for the Following GIMP Functionality

I'm making my first foray into GIMP scripting (hopefully in Python, but I'm open to Scheme too). I know exactly which steps I want to take in the GIMP UI, and I'm trying to determine which of them, if any, can be executed from a script, since the documentation I found suggests that not all functionality can be accessed this way. The documentation helped with some, but not all, of what I'm looking for, so I'm hoping for a pointer as to which of the following operations I can access from Python, and which functions I will need, since my googling has come up short.
new layer
new layer from visible
duplicate layer
changing mode to overlay/grain extract/grain merge
gaussian blur
merge layer down
desaturate (lightness)
adjust color curves
filling a transparent layer with the paper pattern
adjust opacity
Open the Python console (Filters > Python-Fu > Console).
Hit the Browse... button.
Enter what you are looking for in the search field at the top left (for instance "desaturate").
Select a call in the list below the search field and read its documentation on the right.
This includes any callable installed script/plugin (if the authors did their homework). "Apply" copies a call template into the Python console.
You can do more in Python than in Scheme.
The doc for the Python classes is here. The more frequent API calls have corresponding methods/attributes.
If you are on Windows, there are some tricks to ease your debugging here.
There is not always a direct mapping between UI actions and the API. Some UI actions may correspond to several API calls.
In GIMP 2.10, the GEGL filters aren't callable from Python (at least via the regular GIMP API), unless they replace an existing 2.8 filter (like the Gaussian blur).
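To make this concrete, here is a minimal Python-fu sketch (GIMP 2.10 API) covering several of the steps in the question; paste it into the console with an image open. The blur radius, blend mode, and opacity are arbitrary illustration values:

```python
from gimpfu import *   # pre-loaded in the console; brings in pdb, gimp, enums

image = gimp.image_list()[0]                         # first open image

# "new layer from visible"
vis = pdb.gimp_layer_new_from_visible(image, image, "visible")
image.add_layer(vis, 0)

# "duplicate layer"
dup = pdb.gimp_layer_copy(vis, True)
image.add_layer(dup, 0)

# change blend mode (2.10 enum name; the 2.8 legacy name was GRAIN_EXTRACT_MODE)
dup.mode = LAYER_MODE_GRAIN_EXTRACT

# gaussian blur, radii in pixels (callable because it replaces the 2.8 filter)
pdb.plug_in_gauss(image, dup, 25, 25, 0)

# "merge layer down"
merged = pdb.gimp_image_merge_down(image, dup, CLIP_TO_IMAGE)

# "desaturate (lightness)"
pdb.gimp_drawable_desaturate(merged, DESATURATE_LIGHTNESS)

# "adjust opacity" (0..100)
merged.opacity = 50.0

gimp.displays_flush()
```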

How to make Augmented Reality using pose data from Tango?

Augmented reality doesn't work if Auto-Connect is disabled in the Tango Manager, and if Auto-Connect is enabled it doesn't allow any ADF to load.
So how can we use the pose data of an ADF to make AR objects that are persistent and appear relative to the ADF origin?
In order to use ADF files you'll have to connect manually, as described here: https://developers.google.com/tango/apis/unity/unity-user-permissions
Auto-connect and Area Description Files can't work together, for the obvious reason that auto-connect happens on startup, before an ADF could be chosen.
This example uses AR and ADF together; it is a good reference for you too:
https://github.com/googlesamples/tango-examples-unity/tree/master/UnityExamples/Assets/TangoSDK/Examples/AreaLearning
