I am trying to program a simple Babymonitor for Windows (personal use).
The baby monitor should just detect the dB level of the microphone and trigger at a certain volume.
After some research I found the Bass.dll library and came across its function BASS_ChannelGetLevel, which is great but seems to have limitations and doesn't fit my needs (the peak is returned as a DWORD value).
In the examples I found a livespec example which is "almost" what I need. The example uses BASS_ChannelGetData, but I don't quite know how to handle the returned array...
I want to keep it as simple as possible: Detect the volume from the microphone as dB or any other value (e.g. value 0-MAXINT).
How can this be done with the Bass.dll library?
BASS_ChannelGetLevel returns a value that is capped at 0 dB (a return value of 32768 in this case). If you adjust your source level (lower the microphone level in the sound card settings), it will work just fine.
Another way, if you want an uncapped value, is to use the BASS_ChannelGetLevelEx function instead: it returns floating-point levels, where 1 is the maximum (0 dB) value corresponding to BASS_ChannelGetLevel's 32768, but it can exceed 1 to detect sound levels above 0 dB, which may be what you need.
I also suggest you monitor the sound level for a while: trigger only if a certain level persists for at least 2-3 seconds (this way you will exclude false alarms).
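A minimal sketch of that idea with Bass.NET (assuming streamHandle is an already-started recording channel; the threshold, sustain time, and polling interval are arbitrary values to tune):
const double thresholdDb = -20.0;   // trigger level in dB, adjust to taste
const int sustainMs = 2000;         // level must stay loud for 2 seconds
const int pollMs = 100;             // polling interval
int loudForMs = 0;

while (true)
{
    int level = Bass.BASS_ChannelGetLevel(streamHandle);
    double peak = Math.Max(level & 0xFFFF, (level >> 16) & 0xFFFF); // left/right, each 0..32768
    double db = peak > 0 ? 20 * Math.Log10(peak / 32768.0) : -120.0;

    loudForMs = db >= thresholdDb ? loudForMs + pollMs : 0;
    if (loudForMs >= sustainMs)
    {
        Console.WriteLine("Alarm: sustained noise detected");
        loudForMs = 0;
    }
    System.Threading.Thread.Sleep(pollMs);
}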
Here is how you obtain the dB level given an input stream handle (streamHandle):
int level = Bass.BASS_ChannelGetLevel(streamHandle);                  // low word = left level, high word = right level
var peak = (double)Math.Max(level & 0xFFFF, (level >> 16) & 0xFFFF);  // each channel ranges 0..32768
var decibels = 20 * Math.Log10(peak / 32768.0);
Alternatively, you can use the following to get the RMS (average) level. To get the RMS value, you have to pass a sample length to BASS_ChannelGetLevel. I'm using 20 milliseconds here, but you can play with the value to see what works best for your needs.
double decibels = 0;
var channelCount = 2;        // assuming a stereo channel
var sampleLengthMS = 20f;    // averaging window in milliseconds
var rmsLevels = new float[channelCount];
var rmsObtained = Bass.BASS_ChannelGetLevel(streamHandle, rmsLevels, sampleLengthMS / 1000f, BASSLevel.BASS_LEVEL_RMS);
if (rmsObtained)
    decibels = 20 * Math.Log10(rmsLevels[0]); // using the first channel (index 0), but you can read both if needed
else
    Console.WriteLine(Bass.BASS_ErrorGetCode());
Hope this helps.
I have two streams, stream A and stream B. Both streams contain the same type of event, which has an ID and a timestamp. For now, all I want the Flink job to do is join the events that have the same ID inside a window of 1 minute. The watermark is assigned per event.
sourceA = initialSourceA.map(parseToEvent)
sourceB = initialSourceB.map(parseToEvent)
streamA = sourceA
.assignTimestampsAndWatermarks(CustomWatermarkStrategy())
.keyBy(Event.Key)
streamB = sourceB
.assignTimestampsAndWatermarks(CustomWatermarkStrategy())
.keyBy(Event.Key)
streamA
.join(streamB)
.where(Event.Key)
.equalTo(Event.Key)
.window(TumblingEventTimeWindows.of(Time.of(1, TimeUnit.MINUTES)))
.apply(giveMePairOfEvents)
.print()
Inside my test I try to send the following:
sourceA.send(Event(ID_1, 0 seconds))
sourceB.send(Event(ID_1, 0 seconds))
//to increase the watermark
sourceA.send(Event(ID_1, 62 seconds))
sourceB.send(Event(ID_1, 62 seconds))
For parallelism = 1, I can see the events from time 0 getting joined together.
However, for parallelism = 2 the print does not display anything getting joined. To figure out the problem, I tried to print the events after the keyBy of each stream, and I can see they all run on the same instance. Placing the print after the watermarking shows, for obvious reasons, that the events are still on different instances.
This leads me to believe that I am somehow doing something incorrectly when it comes to watermarking, since for a parallelism higher than 1 the watermark doesn't increase. So here are a couple of questions I asked myself:
Is it possible that each event has a separate watermark generator and I have to increase them specifically?
Should I run keyBy first and then watermark, so that my events from each stream use the same watermark generator?
Sending another set of events as follows:
sourceA.send(Event(ID_1, 0 seconds))
sourceB.send(Event(ID_1, 0 seconds))
//to increase the watermark
sourceA.send(Event(ID_1, 62 seconds))
sourceB.send(Event(ID_1, 62 seconds))
sourceA.send(Event(ID_1, 122 seconds))
sourceB.send(Event(ID_1, 122 seconds))
This ended up producing the joined first events. Further inspection showed that the third set of events went through the same watermark generator that the second set had not used, which I am not very clear on why it is happening. How can I assign and increase watermarks correctly when using a join function in Flink?
EDIT 1:
The custom watermark generator:
class CustomWaterMarkGenerator(
private val maxOutOfOrderness: Long,
private var currentMaxTimeStamp: Long = 0,
)
: WatermarkGenerator<Event> {
override fun onEvent(event: Event, eventTimestamp: Long, output: WatermarkOutput) {
val a = currentMaxTimeStamp.coerceAtLeast(eventTimestamp)
currentMaxTimeStamp = a
output.emitWatermark(Watermark(currentMaxTimeStamp - maxOutOfOrderness - 1));
}
override fun onPeriodicEmit(output: WatermarkOutput?) {
}
}
The watermark strategy:
class CustomWatermarkStrategy(
): WatermarkStrategy<Event> {
override fun createWatermarkGenerator(context: WatermarkGeneratorSupplier.Context?): WatermarkGenerator<Event> {
return CustomWaterMarkGenerator(0)
}
override fun createTimestampAssigner(context: TimestampAssignerSupplier.Context?): TimestampAssigner<Event> {
return TimestampAssigner{ event: Event, _: Long->
event.timestamp
}
}
}
Custom source:
The source function is currently an RSocket connection that connects to a mock stream where I can send events through mockStream.send(event). The first thing I do with the events is parse them using a map function (from string into my event type), and then I assign my watermarks etc.
Each parallel instance of the watermark generator will operate independently, based solely on the events it observes. Doing the watermarking immediately after the sources makes sense (although even better, in general, is to do watermarking directly in the sources).
An operator with multiple input channels (such as the keyed windowed join in your application) sets its current watermark to the minimum of the watermarks it has received from its active input channels. This has the effect that any idle source instances will cause the watermarks to stall in downstream tasks -- unless those sources explicitly mark themselves as idle. (And FLINK-18934 meant that prior to Flink 1.14 idleness propagation didn't work correctly with joins.) An idle source is a likely suspect in your situation.
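If an idle source is indeed the culprit, one possible fix (a sketch, assuming Flink 1.14+ so that idleness propagates correctly through the join) is to wrap your watermark strategy with withIdleness, so a silent source subtask stops holding back the downstream minimum watermark:
import java.time.Duration

streamA = sourceA
    .assignTimestampsAndWatermarks(
        CustomWatermarkStrategy().withIdleness(Duration.ofSeconds(10)) // 10 s is an arbitrary idle timeout
    )
    .keyBy(Event.Key)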
One strategy for debugging this sort of problem is to bring up the Flink WebUI and observe the behavior of the current watermark in all of the tasks.
To get more help, please share the rest of the application, or at least the custom source and watermark strategy.
I am trying to create an EA in MQL4, but in the OrderSend function, when I use some value instead of zero it shows OrderSend error 130. Please help me solve this problem.
The code line is:
int order = OrderSend("XAUUSD",OP_SELL,0.01,Bid,3,Bid+20*0.01,tp,"",0,0,Red);
Error number 130 means "invalid stops".
So that means there is a problem with the stops you set in the OrderSend function.
I suggest you set it like this:
int order = OrderSend("XAUUSD",OP_SELL,0.01,Bid,3,Bid+20*Point,tp,"",0,0,Red);
so you could use Point instead of hard-coding it.
To check what an error number means, you can refer to: https://book.mql4.com/appendix/errors
You should know that there is a minimum stop-loss size (mSLS), given in points. "mSLS" changes with the currency pair and broker, so you need to put a variable in the OnInit() procedure of your EA to get it:
int mSLS = (int)MarketInfo(Symbol(), MODE_STOPLEVEL);
The distance (in points) between your Order Open Price (OOP) and the Stop-Loss Price (SLP) cannot be smaller than the mSLS value.
I will try to explain a general algorithm I use for opening orders in my EAs, and then apply the constraint on the stop-loss level (at step 3):
Step 1. I introduce a flag (f) for the type of operation I will open, being:
f = 1 for Buy, and
f = -1 for Sell
You know that there are mql4 constants OP_SELL=1 and OP_BUY=0 (https://docs.mql4.com/constants/tradingconstants/orderproperties).
Once I have defined f, I set my operation type variable to
int OP_TYPE = (int)(0.5*((1+f)*OP_BUY+(1-f)*OP_SELL));
that takes value OP_TYPE=OP_BUY when f=1, while OP_TYPE=OP_SELL when f=-1.
NOTE: Regarding the color of the orders I put them in an array
color COL[2]= {clrBlue,clrRed};
then, having OP_TYPE, I set
color COLOR=COL[OP_TYPE];
Step 2. Similarly, I set the Order Open Price as
double OOP = 0.5*((1+f)*Ask+(1-f)*Bid);
which takes value OOP=Ask when f=1, while OOP=Bid when f=-1.
Step 3. Then, given my desired stop loss in points (an external POSITIVE parameter of my EA, which I named sl), I make sure that sl > mSLS. In other words, I check:
if (sl <= mSLS) // I set my sl as the minimum allowed
{
sl = 1 + mSLS;
}
Step 4. Then I calculate the Stop-Loss Price of the order as
double SLP = OOP - f * sl * Point;
Step 5. Given my desired take profit in points (an external POSITIVE parameter of my EA, which I named tp), I calculate the Take-Profit Price (TPP) of the order as
double TPP = OOP + f * tp * Point;
OBSERVATION: I cannot affirm it, but according to the MQL4 documentation the minimum-distance rule between the stop-loss price and the open price also applies to the take-profit price. In that case, a "tp" check needs to be done, similar to the sl check above; that is, before calculating TPP the following control lines must be executed:
if (tp <= mSLS) // I set my tp as the minimum allowed
{
tp = 1 + mSLS;
}
Step 6. I call for order opening with a given lot size (ls) and slippage (slip) on the operating currency pair (from which I get the Ask and Bid values):
double ls = 0.01;
int slip = 3; // (points)
int order = OrderSend(Symbol(),OP_TYPE,ls,OOP,slip,SLP,TPP,"",0,0,COLOR);
Note that with these few lines it is easy to build a function that opens orders of any type under your command, in any currency pair you are operating, without receiving error message 130, passing to the function only 3 parameters: f, sl and tp.
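For illustration, such a helper could look roughly like this (a sketch assembled from the steps above; the lot size and slippage are hard-coded example values):
// f: +1 = buy, -1 = sell; sl/tp: desired stop loss / take profit in points
int OpenOrder(int f, int sl, int tp)
{
   int    mSLS    = (int)MarketInfo(Symbol(), MODE_STOPLEVEL);
   int    OP_TYPE = (int)(0.5*((1+f)*OP_BUY+(1-f)*OP_SELL));
   color  COLOR   = (OP_TYPE == OP_BUY) ? clrBlue : clrRed;
   double OOP     = 0.5*((1+f)*Ask+(1-f)*Bid);

   if (sl <= mSLS) sl = 1 + mSLS;   // enforce the minimum stop distance
   if (tp <= mSLS) tp = 1 + mSLS;

   double SLP = OOP - f*sl*Point;
   double TPP = OOP + f*tp*Point;

   return OrderSend(Symbol(), OP_TYPE, 0.01, OOP, 3, SLP, TPP, "", 0, 0, COLOR);
}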
It is worth including, in the test phase of your EA, a warning for when sl is corrected for being smaller than allowed; this lets you increase its value so that it does not violate the minimum stop-loss rule, while keeping more control over the risk of your operations. Remember that the sl parameter defines how much you will lose if the order fails because the asset price ends up moving too far in the direction opposite to the one expected.
I hope I could help!
Whilst the other two answers are not necessarily wrong (and I will not go over the ground they have already covered), for completeness: they fail to mention that with some brokers (specifically ECN brokers) you must open your order first, without setting a stop loss or take profit. Once the order is opened, use OrderModify() to set your stop loss and/or take profit.
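For example, a rough sketch of that two-step approach (the 200-point distances are arbitrary placeholders):
// Open without SL/TP first (some ECN brokers reject stops passed to OrderSend),
// then attach them with OrderModify().
int ticket = OrderSend(Symbol(), OP_SELL, 0.01, Bid, 3, 0, 0, "", 0, 0, clrRed);
if (ticket >= 0 && OrderSelect(ticket, SELECT_BY_TICKET))
{
   double sl = OrderOpenPrice() + 200*Point;   // stop loss above a sell entry
   double tp = OrderOpenPrice() - 200*Point;   // take profit below a sell entry
   if (!OrderModify(ticket, OrderOpenPrice(), sl, tp, 0, clrRed))
      Print("OrderModify failed, error ", GetLastError());
}
else
   Print("OrderSend failed, error ", GetLastError());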
I'm designing an RPG game where a user may accumulate experience points (XP) and level up based on XP. Calculating the current level is trivial; if/else seems to be most efficient.
I would like to calculate percent of progression for the current level. Based on the assumption that I have 10 levels, where each level is capped at a somewhat exponential value:
typedef NS_ENUM(NSInteger, RPGLevelCap) {
RPGLevelCap1=499,
RPGLevelCap2=1249,
RPGLevelCap3=2249,
RPGLevelCap4=3499,
RPGLevelCap5=4999,
RPGLevelCap6=6999,
RPGLevelCap7=9999,
RPGLevelCap8=14999,
RPGLevelCap9=19999,
RPGLevelCap10=19999 // Anything beyond previous level is Lvl 10; display as 100%
};
What's an efficient, yet easily understandable way, to calculate a user's level progression based on their current level?
An if else statement is both hard to understand and maintain, but may be fairly efficient:
float levelProgression=0;
// Calculate level progression as a fraction of 1
if(xp <= RPGLevelCap1)
{
levelProgression = ((float)xp / RPGLevelCap1);
}
else if (xp <=RPGLevelCap2)
{
levelProgression = ((float)(xp-RPGLevelCap1) / (RPGLevelCap2-RPGLevelCap1));
}
else if (xp <=RPGLevelCap3)
{
levelProgression = ((float)(xp-RPGLevelCap2) / (RPGLevelCap3-RPGLevelCap2));
}
...
else if (xp>RPGLevelCap10)
{
levelProgression = 1;
}
Given that the level caps are inconsistent...how should I handle this problem?
Hmm. A simple way would be to store the level cap values in an array. Find the player's current level based on the largest value it's less than. (level one is 0 to 499, level two is 500 to 1249, etc.) Use a loop to find the user's level rather than a set of if/else statements.
Then calculate the range of the player's current level, (end_of_range - start_of_range)
0 - 499 = 499
500 - 1249 = 749,
etc.
If a player is at 600 points, he's a level 2 character, in the 500-1249 range.
He's at 600-500 or 100 points into the range. (600-500)/749*100 is the player's percent complete in that range. 13.35% complete, in this example.
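A minimal sketch of that approach (assuming xp holds the player's current XP and the caps from the question are stored in ascending order; the array and variable names are made up):
// Level caps in ascending order; index i corresponds to level i + 1.
static const NSInteger levelCaps[] = {499, 1249, 2249, 3499, 4999, 6999, 9999, 14999, 19999};
static const NSInteger levelCount = sizeof(levelCaps) / sizeof(levelCaps[0]);

NSInteger level = levelCount + 1;    // above every cap -> max level
float levelProgression = 1.0f;       // display as 100%

for (NSInteger i = 0; i < levelCount; i++) {
    if (xp <= levelCaps[i]) {
        NSInteger start = (i == 0) ? 0 : levelCaps[i - 1] + 1;   // first XP value of this level
        level = i + 1;
        levelProgression = (float)(xp - start) / (levelCaps[i] - start);
        break;
    }
}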
There are a few ways you can approach this. My weapon of choice here is to embody the XP values within the concept of LevelData, versus using an enum, an array of XP values, etc. The benefit of something like this is that for each level you'll typically have many configurable values (for example, level-based multipliers), and this way they are all in one place.
In your LevelData there are different ways you can encode XP, ranging from storing just the XP total for the next level to storing both the beginning and end XP totals for the level. Obviously there are other permutations of this.
I usually use the latter, mainly because it prevents me from having to "know" the LevelData for the previous level.
So I would typically have this in JSON
{
  "levelData": [
    {
      "levelStartXP": "0",
      "levelUpXP": "250",
      "levelUpBonus": 0
    },
    {
      "levelStartXP": "250",
      "levelUpXP": "1000",
      "levelUpBonus": 50
    },
    {
      "levelStartXP": "1000",
      "levelUpXP": "2500",
      "levelUpBonus": 100
    }
  ]
}
This is just a boring example. I then of course have a LevelData class which embodies each level. I also have a LevelDataManager; that manager is used to vend out information per level. Having convenience methods helps. For example, good ones to have are:
- (NSInteger)levelWithXP:(NSInteger)xp; // for a given XP, give me the appropriate level
- (LevelData *)levelDataWithXP:(NSInteger)xp; //for a given XP, give me the appropriate LevelData
- (LevelData *)levelDataWithLevel:(NSInteger)level; // for a given level, give me the appropriate LevelData
- (NSInteger)levelUpXPForLevel:(NSInteger)level; // for a given level, give me the XP value needed for the next level
I just arbitrarily used NSInteger; use the appropriate data type for your case.
Exactly what you want to support is really up to you.
The gist of the whole thing is: try not to store individual level components. Rather, aggregate them in LevelData or some other collection so you have all the info per level at your disposal, and create some form of manager/interface to get you the information you need.
So back to your question. Let's say we have a LevelData class (assume it uses the JSON above, and those fields are represented by properties) and a LevelDataManager instance called theLevelDataMgr; you could compute the % with something like:
LevelData *levelData = [theLevelDataMgr levelDataWithLevel:currLevel]; // currLevel is the current level
float xpDiffInLevel = levelData.levelUpXP - levelData.levelStartXP;
float xpInLevel = currXP - levelData.levelStartXP; // currXP is the current XP of the user
float pctOfXP = xpInLevel / xpDiffInLevel; // You should add divide by zero check
And yes, if you wanted, you could have the LevelData class contain a method to do this calculation for you to help encapsulate it even better.
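For instance, a rough sketch of such a method on LevelData (the property names follow the JSON above; the method name is made up):
// In LevelData: fraction of progress through this level for a given XP total.
- (float)progressForXP:(NSInteger)xp {
    NSInteger range = self.levelUpXP - self.levelStartXP;
    if (range <= 0) return 1.0f;                       // guard against divide by zero
    float progress = (float)(xp - self.levelStartXP) / range;
    return MAX(0.0f, MIN(1.0f, progress));             // clamp to [0, 1]
}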
Code isn't tested and is listed to just give you a better idea on how to do it. Also, how you decide to store your XP for each level dictates how this would work. This code is based on the method I usually use.
How to calculate correct PTS value for frame before encoding in FFmpeg C API?
For encoding I'm using function avcodec_encode_video2 and then writing it by av_interleaved_write_frame.
I found some formulas, but none of them work.
In the doxygen example they use:
frame->pts = 0;
for (;;) {
// encode & write frame
// ...
frame->pts += av_rescale_q(1, video_st->codec->time_base, video_st->time_base);
}
This blog says the formula must be like this:
(1 / FPS) * sample rate * frame number
Some people use only the frame number to set the PTS:
frame->pts = videoCodecCtx->frame_number;
Or an alternative way:
int64_t now = av_gettime();
frame->pts = av_rescale_q(now, (AVRational){1, 1000000}, videoCodecCtx->time_base);
And the last one:
// 40 * 90 means 40 ms and 90 because of the 90kHz by the standard for PTS-values.
frame->pts = encodedFrames * 40 * 90;
Which one is correct? I think the answer to this question will be helpful not only for me.
It's better to think about PTS more abstractly before trying code.
What you're doing is meshing 3 "time sets" together. The first is the time we're used to, based on 1000 ms per second, 60 seconds per minute, and so on. The second is the codec time for the particular codec you are using. Each codec has a certain way it wants to represent time, usually in a 1/number format, meaning that for every second there is a "number" amount of ticks. The third format works similarly to the second, except that it is the time base for the container you are using.
Some people prefer to start with actual time, others frame count, neither is "wrong".
Starting with a frame count, you need to first convert it based on your frame rate. Note that all the conversions I speak of use av_rescale_q(...). The purpose of this conversion is to turn a counter into time, so you rescale with your frame rate (usually the video stream time base). Then you have to convert that into the time_base of your video codec before encoding.
Similarly, with a real time, your first conversion needs to be from current_time - start_time scaled to your video codec time.
Anyone using only a frame counter is probably using a codec whose time_base is the inverse of the frame rate (e.g. 1/25 for 25 fps). Most codecs do not work like this, and that hack is not portable. Example:
frame->pts = videoCodecCtx->frame_number; // BAD
Additionally, anyone using hardcoded numbers in their av_rescale_q is leveraging the fact that they know what their time_base is, and this should be avoided: the code isn't portable to other video formats. Instead, use video_st->time_base, video_st->codec->time_base, and output_ctx->time_base to figure things out.
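For illustration, a rough sketch of the frame-count path using only those time bases (the variable names, the fixed 25 fps, and the use of av_packet_rescale_ts are assumptions on my part):
// frame_count is a running frame index; fps_tb expresses "one tick per frame"
// at the (assumed) 25 fps capture rate.
AVRational fps_tb = {1, 25};

// Convert the frame index into the encoder's time_base before encoding.
frame->pts = av_rescale_q(frame_count++, fps_tb, video_st->codec->time_base);

// ... avcodec_encode_video2(video_st->codec, &pkt, frame, &got_packet) ...

// The packet's timestamps are in the codec time base; rescale them to the
// stream's time base before muxing.
av_packet_rescale_ts(&pkt, video_st->codec->time_base, video_st->time_base);
av_interleaved_write_frame(output_ctx, &pkt);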
I hope understanding it from a higher level will help you see which of those are "correct" and which are "bad practice". There is no single answer, but maybe now you can decide which approach is best for you.
Time is measured not in seconds or milliseconds or any standard unit. Instead, it is measured by the avCodecContext's timebase.
So if you set codecContext->time_base to 1/1, it means using seconds for measurement.
cctx->time_base = (AVRational){1, 1};
Assume you want to encode at a steady 30 fps. Then the time at which a frame is encoded is framenumber * (1.0 / fps).
But once again, the PTS is not measured in seconds or any standard unit; it's measured by the AVStream's time_base.
In the question, the author mentioned 90k as the standard resolution for PTS, but you will see that this is not always true. The exact resolution is saved in the AVStream; you can read it back with:
if ((err = avformat_write_header(ofctx, NULL)) < 0) {
std::cout << "Failed to write header" << err << std::endl;
return -1;
}
av_dump_format(ofctx, 0, "test.webm", 1);
std::cout << stream->time_base.den << " " << stream->time_base.num << std::endl;
The value of stream->time_base is only populated after calling avformat_write_header.
Therefore, the right formula for calculating PTS is:
//The following assumes that codecContext->time_base = (AVRational){1, 1};
videoFrame->pts = frameduration * (frameCounter++) * stream->time_base.den / (stream->time_base.num * fps);
So really there are 3 components in the formula,
fps
codecContext->time_base
stream->time_base
so pts = fps*codecContext->time_base/stream->time_base
I have detailed my discovery here
There's also the option of setting it like frame->pts = av_frame_get_best_effort_timestamp(frame), but I'm not sure this is the correct approach either.
I have a long video stream, but unfortunately it's in the form of 1000 15-second, randomly named clips. I'd like to reconstruct the original video based on some measure of "similarity" between two such 15 s clips, something that answers the question "does the activity in clip 2 seem like an extension of clip 1?". There are small gaps between clips, a few hundred milliseconds or so each. I can also manually fix up the results if they're sufficiently good, so the results needn't be perfect.
A very simplistic approach can be:
(a) Create an automated process to extract the first and last frame of each video-clip in a known image format (e.g. JPG) and name them according to video-clip names, e.g. if you have the video clips:
clipA.avi, clipB.avi, clipC.avi
you may create the following frame-images:
clipA_first.jpg, clipA_last.jpg, clipB_first.jpg, clipB_last.jpg, clipC_first.jpg, clipC_last.jpg
(b) The sorting "algorithm":
1. Create a 'Clips' list of Clip-Records containing each:
(a) clip-name (string)
(b) prev-clip-name (string)
(c) prev-clip-diff (float)
(d) next-clip-name (string)
(e) next-clip-diff (float)
2. Apply the following processing:
for Each ClipX having ClipX.next-clip-name == "" do:
{
ClipX.next-clip-diff = <a big enough number>;
for Each ClipY having ClipY.prev-clip-name == "" do:
{
float ImageDif = ImageDif(ClipX.last-frame.jpg, ClipY.first_frame.jpg);
if (ImageDif < ClipX.next-clip-diff)
{
ClipX.next-clip-name = ClipY.clip-name;
ClipX.next-clip-diff = ImageDif;
}
}
Clips[ClipX.next-clip-name].prev-clip-name = ClipX.clip-name;
Clips[ClipX.next-clip-name].prev-clip-diff = ClipX.next-clip-diff;
}
3. Scan the Clips list to find the record(s) with no <prev-clip-name> or,
if all records have a <prev-clip-name>, the record with the max <prev-clip-diff>.
These are good candidates to be the first clip in a sequence.
4. Begin from the clip(s) found in step (3) and rename the clip files by adding
a 5-digit number (00001, 00002, etc.) at the beginning of each filename, walking
from aClip to aClip.next-clip-name and removing each clip from the list.
5. Repeat steps 3,4 until there are no clips in the list.
6. Voila! You have your sorted clips list in the form of sorted video filenames!
...or you may end up with more than one sorted list (if there are big enough
time gaps between your video clips).
Very simplistic... but I think it can be effective...
PS1: Regarding the ImageDif() function: You can create a new DifImage, which is the difference of the images ClipX.last-frame.jpg and ClipY.first_frame.jpg, and then sum all pixels of DifImage into a single floating-point ImageDif value. You can also optimize the process to abort the difference (or the sum) if the running sum gets bigger than some limit: you are actually interested in small differences. An ImageDif value larger than an (experimental) limit means the 2 images differ so much that the 2 clips cannot be next to each other.
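A hedged sketch of such an ImageDif() over two decoded 8-bit grayscale frames of equal size (the raw-buffer layout is an assumption), including the early-abort optimization:
#include <stdlib.h>

/* Sum of absolute pixel differences between two equally sized 8-bit grayscale
 * frames; aborts early once the running sum exceeds 'limit', since only small
 * differences matter when matching consecutive clips. */
double ImageDif(const unsigned char *a, const unsigned char *b,
                size_t pixelCount, double limit)
{
    double sum = 0.0;
    for (size_t i = 0; i < pixelCount; i++) {
        sum += abs((int)a[i] - (int)b[i]);
        if (sum > limit)
            return limit + 1.0;   /* "too different"; the caller can skip this pair */
    }
    return sum;
}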
PS2: As described, the matching in step 2 compares each clip's last frame against every still-unmatched first frame, so in the worst case it is roughly O(n^2), on the order of a million image comparisons for 1000 clips, although the early-abort optimization and the shrinking candidate list reduce the practical cost considerably (a bit more work is needed if you allow the algorithm to not find a match for some clips).