Bag to Depth Matrix - ROS

I am working with the Persee 3D camera from Orbbec. I am currently having problems with processing the bag files. Someone has provided some code here: https://github.com/JaouadROS/bagtodepth. I have tried looking it over, but I can't quite make heads or tails of it. I really only have two main questions:
1: Where is the output saved? Will it be saved into one of his directories, or will it be output somewhere else?
2: Is the output a sort of stream, or does it just convert the data up to a certain point?
I have successfully downloaded the code (into my catkin_ws directory) and run the program with the Persee, but that doesn't help if I can't access the output. I am looking to access this matrix in real time and was hoping I could just adapt his code to my project. He does mention something about information being stored at depthclean. Sadly, the person who posted this has not replied to any of the messages I have sent. Thanks!
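For context, a minimal sketch of how I would expect to read the matrix in real time, assuming the node publishes the cleaned depth as a sensor_msgs/Image on a topic called depthclean (that topic name is a guess based on his mention of it; running rostopic list while the node is up would confirm it):

    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def on_depth(msg):
        # Convert the ROS image message into a NumPy matrix in whatever
        # encoding the camera publishes (e.g. 16UC1 millimetres).
        depth = bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough")
        rospy.loginfo("got %dx%d depth frame", msg.width, msg.height)

    rospy.init_node("depth_listener")
    rospy.Subscriber("depthclean", Image, on_depth)
    rospy.spin()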


AudioKit: AKSampler: Simplest way to add multiple samples

I understand that AKSampler was recently rewritten, and this GitHub project seems to be the de facto guide to the new AKSampler. From what I can gather, there is a move toward the SFZ format. I am new to the sampling world, but in my application I only need a handful of samples recorded from my piano for it to work. From looking around at existing SFZ formats and samples, I do not need all of the complexity and features that SFZ provides.
I am currently using AKSampler with a single piano sample, which works perfectly; however, it gets a bit weird once I play anything too far from the original source, so I just want to fill in the gaps with a few other samples (I only need to play around an octave and a half in my current app).
According to the docs, there are a couple of methods, buildSimpleKeyMap() and buildKeyMap(); however, there is currently no implementation.
Do I have any additional options? I know that the EXS format has been deprecated, as has SoundFont. Is using SFZ currently the only way to map multiple samples to AKSampler?
Thanks for all your help <3
Edit: This readme on the AKSampler GitHub page provides the breakdown for samples. I still only see SFZ being considered. If anyone else is lost with my question or needs a reference, this seems to be the best resource. If the current AKSampler only offers SFZ as the primary way to map multiple samples, so be it; however, it does look very challenging. I'm really hoping there is some simple middle ground between using a single sample for the AKSampler and a full-bore SFZ file.
Edit 2: Getting a solution to this, will update as soon as possible, thanks for your patience!
I have provided a simple explainer and sample file in the AudioKit docs. Hope this helps new users of AudioKit!
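For anyone else trying to avoid a full-bore SFZ file: a minimal sketch of the kind of mapping that covers a handful of piano samples across an octave and a half. The file names and MIDI key numbers below are placeholders; lokey/hikey set the key range a sample covers, and pitch_keycenter is the note it was recorded at, so the sampler pitch-shifts within each range:

    <region> sample=piano_C3.wav lokey=41 hikey=51 pitch_keycenter=48
    <region> sample=piano_G3.wav lokey=52 hikey=58 pitch_keycenter=55
    <region> sample=piano_D4.wav lokey=59 hikey=67 pitch_keycenter=62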

RapidMiner: text analysis with more than one query

I am using RapidMiner because I want to perform a sentiment analysis.
The thing is, I have 7 queries that I need to analyze together (company names that I need to analyze to obtain insights about the customers).
So my idea was to extract the data with the Twitter developer app and then put it into RapidMiner to analyze.
When I open this data in RapidMiner, it reports some problems with the dataset:
Error: file syntax error
Message: Value quotes not closed at position 346.
Last characters read: ght.
How do I fix this?
Once I import my spreadsheet data (a .csv file), it shows me the error:
Cause: Unparseable number: "FALSE"
I've searched here already for answers but none helped me to solve this error.
Is it possible to analyze this data all together, or do I have to do it separately? I'm not sure whether that is feasible; I suppose it would interfere with the overall analysis?
I'm quite new to RapidMiner, so I appreciate everyone's help.
Thanks in advance.
I decided to ignore the problem: I just selected the option to replace errors with missing values and analyzed all the data together.
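For anyone who would rather clean the file before importing it, a minimal pandas sketch of the same idea, done outside RapidMiner. The file and column names are placeholders for whatever your Twitter export actually contains:

    import pandas as pd

    df = pd.read_csv("tweets.csv", dtype=str)  # read every column as text
    # Coerce the numeric column; unparseable values such as "FALSE"
    # become NaN (missing) instead of aborting the import.
    df["retweet_count"] = pd.to_numeric(df["retweet_count"], errors="coerce")
    df.to_csv("tweets_clean.csv", index=False)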

Is the path loss formula correct?

I have been doing some tests with the path loss formula and it has given me some pretty good results so far. However, I looked at the original code and saw that the formula used is
distance = Math.pow(10.0, ((-adjustedRssi+txPower)/10*0.35))
where adjustedRssi is RSSI - adjustment. This was giving me very small values for distance, so I thought that I must have modified it at some point by accident. After doing the maths and playing around a bit, I found that using txPower - adjustment instead of txPower - adjustedRssi gives me correct distances.
I figured that the error must have been my fault, but looking back at an original copy of the library, I see that the formula was actually this way all along.
Is this a mistake, or am I missing something obvious? Using the formula as is gives me wrong results, while modifying it the way I did gives right results.
Also, why is the formula only used if ratio < 1? Shouldn't it work in either case?
Yes, you are absolutely right! Reviewing this now, I can see that this was a simple coding error I made when I originally wrote this. I paused work on the path loss formula because I was getting poor results, probably because of this error.
Since this is a development branch of an open source library hosted on Github, it is probably most appropriate to discuss this in that forum. Please feel free to comment directly on the pull request thread here: https://github.com/AltBeacon/android-beacon-library/pull/251. As the lead developer on that project, I would also welcome a pull request with the changes you are making.
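For reference, the corrected calculation matches the standard log-distance path-loss model, RSSI = txPower - 10 * n * log10(d), solved for d (with the RSSI already corrected by any device-specific adjustment). A minimal sketch; the path-loss exponent n is a placeholder you would calibrate for your environment:

    def estimate_distance(rssi, tx_power, n=2.0):
        # Log-distance model: rssi = tx_power - 10 * n * log10(d),
        # so d = 10 ** ((tx_power - rssi) / (10 * n)).
        return 10 ** ((tx_power - rssi) / (10 * n))

    # Example: tx_power = -59 dBm (RSSI at 1 m), measured rssi = -69 dBm,
    # free-space exponent n = 2 -> roughly 3.2 m.
    print(estimate_distance(-69, -59))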

Robot Framework: how to compare sound and video files

I have a source sound/video file, and I have to verify that my program, which opens and plays this file, works correctly.
I don't know how to verify a file like this!
I think I should capture the output (sound/video) and then compare it to the source file.
I've searched on the internet, but so far I haven't found a solution.
This is going to be a real challenge for you. I personally have never done this, but hopefully I can provide you with some help to set you on your way...
First you need to know that Robot Framework runs on Python, so anything you use will need to be in Python or have Python bindings; asking in Python circles may be a good start.
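To make that concrete: any plain Python class can be imported as a keyword library in Robot Framework. A minimal sketch (the file name VideoCheck.py is hypothetical; you would import it in your test suite with "Library    VideoCheck.py"):

    # VideoCheck.py - each public method becomes a Robot Framework keyword.
    class VideoCheck:
        def files_should_match(self, recorded, source):
            """Fails the test when the recording differs from the source."""
            if not self._compare(recorded, source):
                raise AssertionError("%s does not match %s" % (recorded, source))

        def _compare(self, a, b):
            # Simplest possible check: byte-for-byte equality. Real use
            # needs the frame/audio comparison described further down.
            with open(a, "rb") as fa, open(b, "rb") as fb:
                return fa.read() == fb.read()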
In terms of capturing sound, I believe it would be easier to use a program with an API you can use. I found a document of someone doing this; whether it is still correct I am not sure:
http://www.nektra.com/files/DirectSound_Capture_With_Deviare.pdf
For video capture try looking here:
https://www.youtube.com/watch?v=j344j34JBRs
Next would be stripping the video: separating the audio and video frames and comparing them separately. For this you are going to need a video editor, an audio comparison library, and a tool for comparing images.
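For the image side of that comparison, a minimal sketch using OpenCV (pip install opencv-python) that walks two videos frame by frame and fails on the first frame whose mean pixel difference exceeds a threshold. The threshold of 5.0 is a guess you would tune for your codec's compression noise:

    import cv2
    import numpy as np

    def frames_match(path_a, path_b, threshold=5.0):
        cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)
        while True:
            ok_a, frame_a = cap_a.read()
            ok_b, frame_b = cap_b.read()
            if not ok_a or not ok_b:
                return ok_a == ok_b      # same length: both ended together
            if frame_a.shape != frame_b.shape:
                return False             # different resolutions
            if np.mean(cv2.absdiff(frame_a, frame_b)) > threshold:
                return False             # frames visibly differ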
In terms of how this would work, I don't know, as I have never done this...
Why do you need to do this, though? Is there not a better way? Does your application make the video? In that case, could just doing some checks on frames, length, and file size suffice? You need to provide more information.
This is a bit long for a comment, but this answer is incomplete.
Let me know how you get on!

Programming screen recorder - output issues

I want to record the screen (by capturing 15 screenshots per second). I know how to do this part, but I don't know how to write it to some popular video format. The best option I have found is to write the frames to separate PNG files and use the command-line MEncoder, which can convert them to many output formats. But maybe someone has another idea?
Requirements:
Must be a multi-platform solution (I'm using Free Pascal / Lazarus): Windows, Linux, macOS
Do any libraries exist for that?
A complex command-line application which records the screen for me would also work, but I must have the possibility to edit frames before converting the whole raw data to a popular video format
Any materials that could give me some idea are appreciated: APIs, libraries, anything, even in languages other than FPC (I would try to rewrite it or find some equivalent)
I also considered writing frames to a raw video format and then using MEncoder (it can handle that) or another solution, but I can't find any API/documentation for raw video data
Regards
Argalatyr mentioned ffmpeg already.
There are two ways that you can get that to work:
By spawning a new process. All you have to do is prepare the right input (could be a series of PNG or JPEG images, for example) and the right command-line parameters. After that you just call ffmpeg.exe and wait for it to finish (a minimal sketch of this approach follows below).
ffmpeg makes use of some DLLs that do the actual work. You can use those DLLs directly from within your Delphi application. It's a bit more work, because it's more low-level, but in the end it gives you finer control over what happens and over what you show the user while you're processing.
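To illustrate the first option, here is a sketch in Python for brevity; the same command line applies when spawning the process from Pascal (e.g. with TProcess). It assumes numbered PNG frames named frame0001.png, frame0002.png, ... captured at 15 fps:

    import subprocess

    # Encode the numbered frames into an H.264 MP4 at 15 fps.
    subprocess.run([
        "ffmpeg",
        "-framerate", "15",          # input frame rate
        "-i", "frame%04d.png",       # numbered input frames
        "-c:v", "libx264",           # widely supported codec
        "-pix_fmt", "yuv420p",       # needed by many players
        "out.mp4",
    ], check=True)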
Here are some solutions to check out:
FFVCL (commercial). It actually looks quite good, but I was too cheap to spend money on it.
Open-source Delphi headers for FFmpeg. I've tried them, but I never managed to get them to work.
I ended up pulling the DLL wrappers from an open-source karaoke program (UltraStar Deluxe). I had to remove some dependencies, but in the end it worked like a charm. The relevant (Pascal) code can be found here:
http://ultrastardx.svn.sourceforge.net/viewvc/ultrastardx/trunk/src/lib/ffmpeg-0.10/
There was some earlier discussion about a Delphi component here. It's a very simple component that sometimes generates some weird movies. Maybe a start.
