Are there any rules about what Rounding Mode and Rounding Increments should be used for different currency types?
I am aware of ISO 4217 which specifies the number of minor units (digits after decimal point) that are permitted for known currency types.
But after searching the internet for more than half a day, I can find no clear guidance on which rounding mode and rounding increment should be used for which currency.
Does that mean it is up to the application developer to choose the Rounding Mode and Rounding Increments?
I was told that there are some laws that mandate certain rounding mechanisms to be used for certain currencies, but unfortunately I couldn't find them.
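For concreteness, here is a small illustration of what the two knobs do, using Python's decimal module; it is purely illustrative and makes no claim about what any currency or jurisdiction actually mandates:

```python
# Illustration only: how rounding mode and rounding increment each change the result.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

amount = Decimal("2.345")

# Same minor-unit precision (two digits), two different rounding modes:
print(amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))    # 2.35
print(amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN))  # 2.34

# A rounding increment of 0.05, as used for cash rounding in some countries:
def round_to_increment(x, increment=Decimal("0.05")):
    return (x / increment).quantize(Decimal("1"), rounding=ROUND_HALF_UP) * increment

print(round_to_increment(Decimal("2.63")))  # 2.65
```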
I was reading an interesting interview with Chris Lattner, author of LLVM and Swift, and noticed a very curious claim:
Other things are that Apple does periodically add new instructions [11:00] to its CPUs. One example of this historically was the hilariously named “Swift” chip that it launched which was the first designed in-house 32-bit ARM chip. This was the iPhone 5, if I recall.
In this chip, they added an integer-divide instruction. All of the chips before that didn’t have the ability to integer-divide in hardware: you had to actually open-code it, and there was a library function to do that. [11:30] That, and a few other instructions they added, were a pretty big deal and used pervasively
Now that is surprising. As I understand it, integer divide is almost never needed. The cases I've seen where it could be used fall into a few categories:
Dividing by a power of two. Shift right instead.
Dividing by an integer constant. Multiply by the reciprocal instead. (It's counterintuitive but true that this works in all cases; see the sketch after this list.)
Fixed point as an approximation of real number arithmetic. Use floating point instead.
Fixed point, for multiplayer games that run peer-to-peer across computers with different CPU architectures and need each computer to agree on the results down to the last bit. Okay, but as I understand it, multiplayer games on iPhone don't use that kind of peer-to-peer design.
Rendering 3D textures. Do that on the GPU instead.
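For the divide-by-a-constant case above, here is a sketch of the fixed-point-reciprocal trick (in Python, for unsigned 32-bit division by 3); it is the kind of code a compiler emits when hardware divide is missing or slow:

```python
# Divide an unsigned 32-bit value by 3 using a precomputed fixed-point reciprocal.
def div3_u32(x):
    assert 0 <= x < 2**32
    # 0xAAAAAAAB is ceil(2**33 / 3); the (at most 64-bit) product is shifted back down.
    return (x * 0xAAAAAAAB) >> 33

# Spot-check edge cases and random values against true integer division.
import random
for x in [0, 1, 2, 3, 4, 2**32 - 1] + [random.randrange(2**32) for _ in range(100_000)]:
    assert div3_u32(x) == x // 3
```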
After floating point became available in hardware, I've never seen a workload that needed to do integer divide with significant frequency.
What am I missing? What was integer divide used for on Apple devices, so frequently that it was considered worth adding as a CPU instruction?
Placement note: I've opted to ask here on SO because none of the other candidate sites seemed more appropriate. I'll be happy to move this question elsewhere.
I'm starting a project to build a congestion detector for a fairly large real-time message processing system.
My current thinking is that I'll measure or sample individual message lifetimes (from input to output), and use a Kalman filter to estimate the current lifetime. While doing this, I accumulate the residuals (measured-expected), and if the total exceeds a certain threshold, I trigger an alarm.
There are a couple of quirks:
There are many types of message with considerable variation in their lifetimes. In general, for simplicity, I cannot classify messages into expected lifetime buckets. (The distribution is probably not Gaussian.)
Over time, the average lifetime will rise and fall fairly smoothly, with variations by day of the week and by time during the day, but, for simplicity, not in a way that can be expressed accurately as a formula.
My current thinking for the Kalman filter is to treat measurement noise as zero, and to treat the variation in message lifetime as plant or process noise. The rise and fall of the average lifetime will cause false alarms, and I intend to avoid these by periodically clearing the accumulated residual total.
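Here is a minimal sketch of what I have in mind (a scalar Kalman filter plus a running total of the residuals), in Python; all the constants are placeholders I would still have to tune:

```python
# Sketch only: scalar Kalman filter over message lifetimes, with a CUSUM-style
# residual total that raises an alarm when it drifts past a threshold.
class LifetimeMonitor:
    def __init__(self, q=1e-3, r=1e-6, threshold=50.0):
        self.x = None        # estimated mean lifetime
        self.p = 1.0         # estimate variance
        self.q = q           # process (plant) noise variance -- placeholder
        self.r = r           # measurement noise; near zero (exactly zero would
                             # make the filter just track the raw measurements)
        self.cusum = 0.0     # accumulated signed residuals
        self.threshold = threshold

    def update(self, measured_lifetime):
        if self.x is None:
            self.x = measured_lifetime
            return False
        self.p += self.q                       # predict (state assumed ~constant)
        k = self.p / (self.p + self.r)         # Kalman gain
        residual = measured_lifetime - self.x  # innovation
        self.x += k * residual                 # update estimate
        self.p *= (1.0 - k)
        self.cusum += residual                 # accumulate signed residuals
        if abs(self.cusum) > self.threshold:
            self.cusum = 0.0                   # reset after alerting
            return True                        # congestion alarm
        return False
```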
My questions are these:
Should I be using a Kalman filter at all?
If not, what other recommendations would you make?
The Kalman filters I have seen so far use a fixed value for the process noise variance; is there any way I can use a dynamic value, based on the recent variance of the actual measurements (given my assumption of zero measurement noise)? (See the sketch after this list.)
Does the idea of accumulating (signed) residuals, and resetting the total periodically (and whenever an alert is triggered) to allow for long-term variation, seem reasonable as an approach?
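On the dynamic-process-noise question, this is the kind of thing I had in mind: estimate the process noise from the variance of the recent residuals (a crude form of innovation-based adaptation); the window size and floor are guesses:

```python
# Sketch only: feed recent residuals in, get an updated process-noise estimate out.
from collections import deque
import statistics

class AdaptiveProcessNoise:
    def __init__(self, window=200, q_floor=1e-6):
        self.residuals = deque(maxlen=window)
        self.q_floor = q_floor

    def update(self, residual):
        self.residuals.append(residual)
        if len(self.residuals) < 2:
            return self.q_floor
        # With measurement noise assumed ~zero, attribute the variance of the
        # recent innovations entirely to the process.
        return max(statistics.variance(self.residuals), self.q_floor)
```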
What does Function Point Analysis mean?
Is it used for cost estimation of software, or is there a proper definition of Function Point Analysis?
Can you please give me a short description of it?
While I agree with Leo's answer, I'll try a more practical description:
What it is
Function Point Analysis (FPA) is one of currently five standards for Functional Sizing (see ISO/IEC 14143) as approved by ISO. FPA is actually the widely used short term for the ISO/IEC 20926 standard titled "IFPUG Functional Size Measurement".
FPA is a means to rate (the term 'measure' is actually misleading) the amount of functional requirements of software. To achieve this rating, a technique is used that was known as 'functional decomposition' in earlier times. This concept is in fact very close to describing requirements with 'use cases', even though the detailed rules and notations are quite different.
In short, the functional requirements are decomposed into 'elementary functions', which then are rated each with a point value. The total of points for all elementary functions is used as an indication of the 'size' or amount of requirements. This is called the 'functional size' expressed in the unit of 'function points' (fp).
The natural representation of a functional decomposition is the functional tree.
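To illustrate just the counting step with a toy example (the weights below are the commonly published IFPUG values for unadjusted function points; treat them as illustrative and check the current standard before relying on them):

```python
# Toy illustration of totaling rated elementary functions into a functional size.
WEIGHTS = {
    "EI":  {"low": 3, "avg": 4, "high": 6},    # external inputs
    "EO":  {"low": 4, "avg": 5, "high": 7},    # external outputs
    "EQ":  {"low": 3, "avg": 4, "high": 6},    # external inquiries
    "ILF": {"low": 7, "avg": 10, "high": 15},  # internal logical files
    "EIF": {"low": 5, "avg": 7, "high": 10},   # external interface files
}

# Hypothetical decomposition of a small ordering application:
elementary_functions = [
    ("EI", "avg"),   # place order
    ("EI", "low"),   # cancel order
    ("EO", "avg"),   # order confirmation
    ("EQ", "low"),   # look up order status
    ("ILF", "low"),  # orders file
]

size = sum(WEIGHTS[t][c] for t, c in elementary_functions)
print(f"unadjusted functional size: {size} fp")  # 4 + 3 + 5 + 3 + 7 = 22 fp
```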
The FPA standard also has a set of rules for rating changes to existing applications, so it can be used to rate the functional requirements for the adaptation or extension of existing systems ('enhancements' or 'releases').
What it is not
FPA is not an effort estimation technique by itself. Obviously, the relation between the size of functional requirements and the implementation effort can be and often is rather loose. Function points can be used as (one) input to more complex estimation models (such as COCOMO), which have to take into account all other effort drivers.
FPA is not a 'software metric' - functional size is always related to the user requirements fulfilled by software. While you can count and measure lines of code or code complexity, functional size is the result of an analytical process.
When to use it
FPA can be helpful for estimating the effort for a software project at an early stage, when the requirements are known but the details of implementation have not yet been specified or evaluated. The functional requirements are reflected in the functional size; the non-functional requirements need to be input into an estimation model. You need to have/use a good, proven (and trusted) model, otherwise the functional size is useless for this purpose.
FPA can also help to rate the 'value' of an application in the sense of 'recovery costs'.
Finally, in the context of IT client/vendor relationships, FPA can be used as a basis for pricing. Clients are invoiced based on an agreed 'price per fp' instead of an hourly rate.
When not to use it
By definition, FPA requires a basic understanding of the functional requirements. Thus, if you do not have or know the functional requirements, it will be difficult if not impossible to use FPA.
FPA is also not suited to rating the performance of individuals, as it is a rather holistic rating for an application and cannot be used to size only parts of it.
The authoritative answer, from IFPUG:
http://www.ifpug.org/about-ifpug/about-function-point-analysis/
Function Point Analysis (FPA) is a sizing measure of clear business significance. First made public by Allan Albrecht of IBM in 1979, the FPA technique quantifies the functions contained within software in terms that are meaningful to the software users. The measure relates directly to the business requirements that the software is intended to address. It can therefore be readily applied across a wide range of development environments and throughout the life of a development project, from early requirements definition to full operational use. Other business measures, such as the productivity of the development process and the cost per unit to support the software, can also be readily derived.

The function point measure itself is derived in a number of stages. Using a standardized set of basic criteria, each of the business functions is assigned a numeric index according to its type and complexity. These indices are totaled to give an initial measure of size which is then normalized by incorporating a number of factors relating to the software as a whole. The end result is a single number called the Function Point index which measures the size and complexity of the software product.

In summary, the function point technique provides an objective, comparative measure that assists in the evaluation, planning, management and control of software production.
PS: the IFPUG definition is what courts here in Brazil take as authoritative when there is any kind of dispute about function points (mostly because government contracts are usually defined in FPs).
I'm working on communicating longs between computers and chips. I was running into some issues and thought it might be because the definition of long differs between system architectures (we're talking 32-bit versus 64-bit machines). Does anyone know whether longs follow an IEEE standard (like floats and doubles), or whether they vary based on system architecture (like ints)?
The type long is not covered by an IEEE standard; its size may vary between architectures. In C you can use the header stdint.h, which defines fixed-size types such as uint32_t and uint16_t. If your chip has its own C compiler, that should solve your problem.
This is for a personal project of mine, and I have no idea from where to start as it falls way beyond my comfort zone.
I know that there are a few language-learning programs out there that allow the user to record his or her voice and compare the pronunciation with that of a native speaker of said language.
My question is, how to achieve this?
I mean, how does one compare the user's pronunciation with the native speaker's?
If you're looking for something relatively simple, you could simply compute the MFCC (http://en.wikipedia.org/wiki/Mel-frequency_cepstrum) of the recording, and then look at something simple like the correlation between the recording and the average coefficients of that word being pronounced by a native speaker. The MFCC will transform the audio into a space where euclidean distance corresponds more closely with perceptual difference.
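As a rough sketch of that idea, assuming the librosa and numpy Python libraries and hypothetical file names for the two recordings:

```python
# Sketch only: MFCCs of two recordings of the same word, compared by
# per-coefficient correlation.
import librosa
import numpy as np

def mfcc_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    # Rows are coefficients, columns are frames over time.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

user = mfcc_features("user.wav")      # hypothetical file names
native = mfcc_features("native.wav")

# Crude alignment: truncate to a common number of frames
# (see the cross-correlation point below for something better).
n = min(user.shape[1], native.shape[1])
user, native = user[:, :n], native[:, :n]

# Per-coefficient Pearson correlation, averaged, as a rough similarity score.
scores = [np.corrcoef(user[i], native[i])[0, 1] for i in range(user.shape[0])]
print("similarity:", np.mean(scores))
```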
Of course, there are several possible problems:
Aligning the two recordings so the coefficients match up. To fix this, you could look at the maximum cross-correlation of the coefficients, rather than the simple correlation, so you get an automatic "best alignment" for free (see the alignment sketch after this list). Also, you might have to clip off the ends of the recording, so only the actual pronunciation of the word remains.
The MFCC maps to perceptual space, but might not correspond so well to accent inaccuracies. You could perhaps try to fix this by instead of comparing it to just the "ideal" pronunciation, comparing it to the average for several different types of mispronunciation, and looking at which model it is closest to.
Even good accented words will be on average some "distance" from the ideal. You'll have to take that into account, and compare the input's distance to the "relative" good distance.
Correlation might not be the best way to compare the relative similarity of two sounds. Experiment with lots of different metrics: try different L^p norms (http://en.wikipedia.org/wiki/Lp_space), or try weighting the different MFCCs differently (if I recall correctly, even after the MFCCs have been computed, although they are all supposed to carry the same perceptual "weight", the ones in the middle are still more important for how we perceive a sound than the very high or very low ones).
There might be certain parts of the sound where the pronunciation matters much more for the quality of the accent. Perhaps transient detection to find those positions and mark them as more important would be good. If you had a whole bunch of "good pronunciation" and "bad pronunciation" examples, you could probably automatically extract those locations.
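For the alignment point above, a minimal sketch of using the cross-correlation peak of one summary signal (say, the first MFCC row or the per-frame energy) to estimate the frame offset between the two recordings:

```python
import numpy as np

def best_offset(a, b):
    # a, b: 1-D arrays (e.g. the first MFCC row of each recording), normalized
    # so the peak of the cross-correlation is meaningful.
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    xcorr = np.correlate(a, b, mode="full")
    # A positive result means a's content occurs that many frames later than b's.
    return int(np.argmax(xcorr)) - (len(b) - 1)
```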
Again, in the end the only way you're going to know which combination of these options works best is by testing.
I've read about adapting Gaussian mixture models for the phonetic space of a general speaker to an individual. This might be useful for training on a non-canonical accent for private use.
If you just compare the speaker to a general pronunciation model, then the match might not be very good. So the idea is to adjust the models to fit the speaker better during individual training.
Speaker Verification using Adapted Gaussian Mixture Models
EDIT: looking over your question again, I think I answered a different question. But the technique uses similar models:
Model the various languages (do you have lots of data for different languages? Collecting the data might be the hard part). GMMs work well for this; see the sketch after this list.
Compare the data points from the speaker to the various language models.
Choose the model that is the best predictor for the speaker data as the winner.
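A minimal sketch of that scoring step, using scikit-learn's GaussianMixture as a stand-in for whatever GMM implementation you end up with (the data here is synthetic, only to show the shape of the pipeline):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_model(frames, n_components=8):
    # frames: (n_frames, n_features) array, e.g. MFCC vectors from one language.
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(frames)

def classify(speaker_frames, models):
    # Pick the model with the highest average log-likelihood for the speaker's frames.
    return max(models, key=lambda name: models[name].score(speaker_frames))

# Synthetic stand-in data, just to exercise the pipeline:
rng = np.random.default_rng(0)
models = {
    "language_a": train_model(rng.normal(0.0, 1.0, size=(500, 13))),
    "language_b": train_model(rng.normal(2.0, 1.0, size=(500, 13))),
}
print(classify(rng.normal(2.0, 1.0, size=(100, 13)), models))  # likely "language_b"
```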