Note that Apple's documentation asserts:
https://developer.apple.com/documentation/spritekit/skphysicsbody/1519796-lineardamping
This property is used to simulate fluid or air friction forces on the body. The property must be a value between 0.0 and 1.0. The default value is 0.1. If the value is 0.0, no linear damping is applied to the object.
In fact, you can set values higher than one: 10.0 and 20.0 work perfectly, and 1, 10 and 20 all behave very clearly differently.
Has anyone probed into this and discovered whether there is actually a maximum? (If it just asymptotes away, what's the realistic range?)
The Apple documentation is simply wrong.
Just to answer my own question, in iOS:
The actual documentation:
This property is used to simulate fluid or air friction forces on the body. The property must be a value between 0.0 and 1.0. The default value is 0.1. If the value is 0.0, no linear damping is applied to the object.
is simply wrong.
A value of about 10 to 15 is typical when you want "strong damping", as in many physics scenes.
I very often use values of 2 or 3, and 10 is our value for "very thick honey".
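For reference, here is a minimal SpriteKit sketch (the node is just a placeholder) showing that values well above the documented 0.0 to 1.0 range are accepted and behave distinctly:

import SpriteKit

// Any SKPhysicsBody behaves the same way; this node is just an example.
let ball = SKSpriteNode(color: .white, size: CGSize(width: 20, height: 20))
ball.physicsBody = SKPhysicsBody(circleOfRadius: 10)

// Values above the documented 0.0...1.0 range work and are clearly different:
ball.physicsBody?.linearDamping = 2.0   // light drag
ball.physicsBody?.linearDamping = 10.0  // "very thick honey"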
How is CIDetectorMinFeatureSize supposed to be used when setting up a CIDetector for either face or rectangle recognition? The description at Apple does not tell me anything:
A key used to specify the minimum size that the detector will recognize as a feature.
The value for this key is an NSNumber object ranging from 0.0 through 1.0 that represents a fraction of the minor dimension of the image.
The documentation says it ranges from 0.0 to 1.0, yet the slides from WWDC session 514 set the value to "100"...?
It is as much a secret to me as the (undocumented?) CIDetectorAspectRatio.
Let's say I'm trying to detect an A4 paper sheet which is 30cm x 21cm and has an aspect ratio of 1.4 - what would I set for the two keys?
CIDetectorAspectRatio is used by CIRectangleDetector to constrain the search. In your example, the value of the CIDetectorAspectRatio key should be @(1.43).
CIDetectorMinFeatureSize is also used to constrain the search: only rectangles larger than the specified fraction of the input image size will be returned.
In the documentation you can find this about CIDetectorAspectRatio:
An option specifying the aspect ratio (width divided by height) of rectangles to search for.
The value of this key is an NSNumber object whose value is a positive floating-point number. Use this option with the CIDetectorTypeRectangle detector type to fine-tune the accuracy of the detector. For example, to more accurately find a business card (3.5 x 2 inches) in an image, specify an aspect ratio of 1.75 (3.5 / 2).
Available in iOS 8.0 and later.
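For what it's worth, here is a minimal Swift sketch of how the two keys might be combined for the sheet in the question. The 0.2 minimum-size fraction is just an illustrative guess, and image is a placeholder for whatever CIImage you are actually searching:

import CoreImage

let image = CIImage()   // placeholder; use your real input image here

let options: [String: Any] = [
    CIDetectorAccuracy: CIDetectorAccuracyHigh,
    CIDetectorAspectRatio: 1.43,      // width / height of the sheet (30 / 21)
    CIDetectorMinFeatureSize: 0.2     // rectangle must span at least 20% of the image's minor dimension
]
let detector = CIDetector(ofType: CIDetectorTypeRectangle, context: nil, options: options)
let rectangles = detector?.features(in: image) ?? []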
The problem is simple: I want to move (and later, be able to rotate) an image. For example, every time I press the right arrow on my keyboard, I want the image to move 0.12 pixels to the right, and every time I press the left arrow key, I want the image to move 0.12 pixels to the left.
Now, I have multiple solutions for this:
1) simply add the incremental value, i.e.:
image.x += 0.12;
this is of course assuming that we're going to the right.
2) I multiply the value of a single increment by the number of times I have already gone in this particular direction, plus 1, like this:
var result:Number = 0.12 * (numberOfTimesWentRight+1);
image.x = result;
Both of these approaches work but produce similar, yet subtly different, results. If we add some kind of button component that simply resets the x and y coordinates of the image, you will see that with the first approach the numbers don't add up correctly.
It goes .12, .24, .359999, .475, etc.
But with the second approach it works well. (It's pretty obvious why, though: it seems += operations with Numbers are not really precise.)
Why not use the second approach then? Well, I want to rotate the image as well. This will work for the first attempt, but after that the image will jump around. Why? In the second approach we never took the original position of the image into account. So if the origin point shifts a bit down or up because you rotated your image, and THEN you try to move the image again, it will move to the same position as if you hadn't rotated before.
Alright, to make this short:
How can I reliably move, scale and rotate images by 1/10 of a pixel?
Short answer: I don't know! You're fighting with floating point math!
Luckily, I have a workaround, if you don't mind.
You store the location (x and y) of the image in a separate variable, at a larger scale, such as 100x. So 123.45 becomes 12345, and you then divide by 100 to set the attribute that Flash uses to display.
Yes, there are limits to number sizes too, but if you're willing to accept some error rate, and the fact that you'll be limited to, say, a million pixels in each direction, you can fit it in a regular int. The only rounding error you will encounter is a single rounding error when you divide by 100 (or whatever factor you used). So instead of the compound rounding error you described (0.12 * 4 = 0.475), you should see things like 0.47999999, which doesn't matter because it's, well, so small.
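The thread is about Flash, but the idea is language-agnostic; here is a rough sketch of it (written in Swift purely for illustration, all names made up):

// Store the position as an integer count of hundredths of a pixel.
struct SubpixelPosition {
    private var scaledX = 0            // hundredths of a pixel
    private let scale = 100

    mutating func move(byHundredths delta: Int) {
        scaledX += delta               // exact integer math, no compound drift
    }

    // Convert to a display coordinate only at the very last moment.
    var displayX: Double { Double(scaledX) / Double(scale) }
}

var pos = SubpixelPosition()
for _ in 0..<4 { pos.move(byHundredths: 12) }   // four presses of 0.12 px
print(pos.displayX)                             // 0.48, with at most one rounding step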
To expand on @Pimgd's answer a bit, you're probably hitting a floating-point error (multiple += operations will exaggerate the error more than a single *= would) - Numbers in Flash have 53-bit precision.
There's also another thing to keep in mind, which is probably playing a bigger role with such small movement values: Flash positions all objects using twips, which is roughly 1/20th of a pixel, or 0.05, so all values are rounded to this. When you say image.x += 0.12, it's actually the equivalent of image.x += 0.10, hence why the difference becomes apparent; you're losing 0.02 of a pixel with every move.
You should be able to get around it by moving to another scale, as @Pimgd says, or just by storing your position separately - i.e., working from a property _x rather than image.x so you're not losing that precision every time:
this._x += 0.12;
image.x = this._x;
I'm trying to build a 10-band equaliser using Novocaine.
I copied the code from Equaliser.mm into viewWillAppear, added 9 more sliders in the xib file, and changed the IBAction code to this:
-(void)HPFSliderChanged:(UISlider *)sender {
PEQ[sender.tag - 1].centerFrequency = sender.value;
NSLog(#"%f",sender.value);
}
What I want to know is whether I am doing this the right way or not, and what the range of the sliders should be. In the HPF example, the slider range is 2k to 8k. I need some guidance here.
Thanks.
EDIT: after your comment, I think it is clearer what you are asking.
Take the code to instantiate an NVPeakingEQFilter:
NVPeakingEQFilter* PEQ = [[NVPeakingEQFilter alloc] initWithSamplingRate:self.samplingRate];
PEQ.Q = QFactor;
PEQ.G = gain;
PEQ.centerFrequency = centerFrequencies;
You need to define 3 parameters: Q, G, and centerFrequency. Both Q and centerFrequency are usually fixed (QFactor in my case is a constant equal to 2.0).
So, you have 10 sliders: each one corresponds to a fixed centerFrequency. I suggested iTunes values: 32Hz, 64Hz, 125Hz, 250Hz, 500Hz, 1KHz, 2KHz, 4KHz, 8KHz, 16KHz. You do not want to change those values when the slider value changes.
What you want to change when the slider value changes is the gain (G). At init time, G can be set to 0.0. This means "no amplification/attenuation".
When the slider moves, you change G, so actually you would do:
PEQ[sender.tag - 1].G = sender.value * kNominalGainRange;
where kNominalGainRange is 12.0, so if sender.value goes from -1.0 to +1.0, G goes from -12 to +12.
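Here is a small sketch of that mapping (in Swift purely for illustration; the -1.0 to +1.0 slider range and the 10 iTunes bands are the assumptions from above):

// iTunes-style band layout and +/-12 dB gain range.
let centerFrequencies: [Float] = [32, 64, 125, 250, 500, 1_000, 2_000, 4_000, 8_000, 16_000]
let kNominalGainRange: Float = 12.0

// Each slider is assumed to report a value between -1.0 and +1.0;
// its tag picks the band, and only the gain G of that band changes.
func gain(forSliderValue value: Float) -> Float {
    return value * kNominalGainRange   // -1.0 -> -12 dB, 0.0 -> flat, +1.0 -> +12 dB
}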
Hope this helps.
What I want to know is if I am doing this the right way or not?
You do not show much code, but HPFSliderChanged seems correct. If you have any specific issue, you should describe it and post more code.
And what will the range of the sliders be?
Actually, there is no rigid rule when it comes to equalisers. iTunes goes from -12dB to +12dB, but you could use different ranges (with the only caveat being distortion).
Like in the HPF example, the slider range is 2k to 8k. Need some guidance here.
Again, you can take the iTunes equaliser as an example (32Hz, 64Hz, 125Hz, 250Hz, 500Hz, 1KHz, 2KHz, 4KHz, 8KHz, 16KHz), or you can google images of real equalisers and see which bands they use.
The problem is that, if I try to change the lowest point to somewhere higher, the entire chart will be lowered severely.
How do I adjust each point with no above mentioned issue?
P.S. I use Highcharts' Highstock.
It would be helpful if you could show a JSFiddle with the actual working code. Without knowing what exactly is going on with your chart, it's difficult to figure out what the issue is and how to fix it.
Have a look at the yAxis section of the Highstock API Reference; you could potentially make use of min or max to counteract the scaling. Again though, without seeing your code it's tough to know.
If I understand your problem, you are trying to change the 7th point from its current ~150 to something really high, say 5000? I did not notice any real change in the position of the other points as long as the new value was under 1500 (roughly twice the first point).
It appears that your chart is set to comparison mode, so I believe the default max and min are 100% and -100%, with respect to the first point of the series. Highcharts will auto-adjust these extremes if any of the points lie outside this range; you can force an override using yAxis.min and yAxis.max.
For instance, you can set yAxis.min to -100 and yAxis.max to 100 like this:
yAxis: {
    min: -100,
    max: 100
}
Play around with the min and max to find the values that suit your data best; do remember these are percentage comparisons to the first point. Any points outside this range will not show in the chart.
#jsFiddle
I'm trying to use the DepthBias property on the rasterizer state in DirectX 11 (D3D11_RASTERIZER_DESC) to help with the z-fighting that occurs when I render in wireframe mode over solid polygons (wireframe overlay), and it seems setting it to any value doesn't change the result at all. But I noticed something strange... the value is defined as an INT rather than a FLOAT. That doesn't make sense to me, and it still doesn't work as expected. How do we properly set that value if it is an INT that needs to be interpreted as a UNORM in the shader pipeline?
Here's what I do:
Render all geometry
Set the rasterizer to render in wireframe
Render all geometry again
I can clearly see the wireframe overlay, but the z-fighting is horrible. I tried setting the DepthBias to a lot of different values, such as 0.000001, 0.1, 1, 10, 1000 and all their negative equivalents, still with no results... obviously, I'm aware that when casting a float to an integer all the decimals get cut... meh?
D3D11_RASTERIZER_DESC RasterizerDesc;
ZeroMemory(&RasterizerDesc, sizeof(RasterizerDesc));
RasterizerDesc.FillMode = D3D11_FILL_WIREFRAME;
RasterizerDesc.CullMode = D3D11_CULL_BACK;
RasterizerDesc.FrontCounterClockwise = FALSE;
RasterizerDesc.DepthBias = ???
RasterizerDesc.SlopeScaledDepthBias = 0.0f;
RasterizerDesc.DepthBiasClamp = 0.0f;
RasterizerDesc.DepthClipEnable = TRUE;
RasterizerDesc.ScissorEnable = FALSE;
RasterizerDesc.MultisampleEnable = FALSE;
RasterizerDesc.AntialiasedLineEnable = FALSE;
Has anyone figured out how to set the DepthBias properly? Or perhaps it is a bug in DirectX (which I doubt), or maybe there's a better way to achieve this than using DepthBias?
Thank you!
http://msdn.microsoft.com/en-us/library/windows/desktop/cc308048(v=vs.85).aspx
The meaning of the number varies depending on whether your depth buffer is UNORM or floating point. In most cases you're just looking for the smallest possible value that gets rid of your z-fighting rather than any specific value. Small values are a small bias, large values are a large bias, but how that equates to a numerical shift depends on the format of your depth buffer.
As for the values you've tried, anything less than 1 would have rounded to zero and had no effect. 1, 10, 1000 may simply not have been enough to fix the issue. In the case of a D24 UNORM depth buffer, the formula would suggest a depth bias of 1000 would offset depth by: 1000 * (1 / 2^24), which equals 0.0000596, a not very significant shift in z-buffering terms.
Does a large value of 100,000 or 1,000,000 fix the z-fighting?
If anyone cares, I made myself a macro to make it easier. Note that this macro will only work if you are using a 32-bit float depth buffer format. A different macro might be needed if you are using a different depth buffer format.
#define DEPTH_BIAS_D32_FLOAT(d) ((d) / (1.0 / pow(2, 23)))
That way you can simply set your depth bias using standard values, such as:
RasterizerDesc.DepthBias = DEPTH_BIAS_D32_FLOAT(-0.00001);