How to model decimal type in RAML - currency

I am modelling a REST API in RAML. The response body of an endpoint (JSON format) is a list of financial transactions. Each transaction contains an amount of money: a currency and a numeric value. The following is a snippet of my RAML file; please note the amount property in the Transaction type:
#%RAML 1.0
title: Trading details API
version: v1
mediaType: application/json
baseUri: http://my.host.io/api/trading/v1/
types:
  Transactions:
    type: Transaction[]
    minItems: 1
  Transaction:
    type: object
    properties:
      refNum:
        type: string
      amount:
        type: ????
      currency:
        type: string
        minLength: 2
        maxLength: 3
/trades:
  get:
    description: Get details for a given trade
    queryParameters:
      userId:
        type: integer
        required: true
    responses:
      200:
        body:
          application/json:
            type: Transactions
Unfortunately RAML has no built-in decimal type, and the other numeric types (integer, float or double) are not suitable here, mainly because I need to specify the number of digits allowed after the decimal point.
So the question is: in RAML, how do I correctly model the amount type?
I need to provide an exact definition of the type of each response body value, because this file will be the contract between the backend and the frontend (developed by two different teams).
Any help is welcome.
Please note that I did some research on SO, and the closest question to mine is How to define money amounts in an API, but it is not about RAML modelling and its answers do not help in my case.

RAML has a similar construct to the one in JSON Schema. You'll want to use type: number in combination with multipleOf to describe decimal precision.
#%RAML 1.0 DataType
type: number
multipleOf: 0.01
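Applied to the Transaction type from the question, the amount property could then look like this (a sketch; note that multipleOf: 0.01 allows at most two decimal places but does not require trailing zeros, so 25 and 25.0 are both valid):
Transaction:
  type: object
  properties:
    refNum:
      type: string
    amount:
      type: number
      multipleOf: 0.01
    currency:
      type: string
      minLength: 2
      maxLength: 3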

After some months I am coming back to share my experience.
The way I worked around it was by using the type string with a pattern. I am aware of the many concerns around changing the data type from number to string, but this approach is elegant, robust, flexible, and still simple to test and understand.
API consumers are forced to format the amount in the correct way, and the messages coming in and out of the API are consistent. Consistency cannot be guaranteed with multipleOf 0.0001 (where 25 and 25.0000 are both accepted).
I have reused this solution over and over with great results, so I am sharing it with the community.
Solution:
[...]
amount:
  type: string
  pattern: "^(([1-9][0-9]*)|[0])[.]([0-9]{4})$"
currency:
  type: string
...
The pattern accepts 4 digits in the decimal part, forces the use of a . separator, and forbids a leading 0, with the exception of the 0.xxxx family of numbers.
The following is an example list of accepted numbers:
1.0000
54.0000
23456.1234
1.9800
0.0000
0.0001
And the following is an example list of rejected numbers:
0123.3453
12.12
1.1
000
01.0000
1.0
1.00
4.000
Moreover, you can specify the maximum number of digits on the integer side (in this example, 10):
pattern: "^(([1-9][0-9]{0,9})|[0])[.]([0-9]{4})$"
Example of accepted numbers:
1234567890.1234
3.5555
0.1234
Example of rejected numbers:
12345678901.1234
123456789012.1234
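A quick way to sanity-check the pattern before publishing the contract is a few lines of Ruby (the language is incidental; any regex engine will do; note that \A and \z replace ^ and $ for whole-string matching in Ruby):
PATTERN = /\A(([1-9][0-9]{0,9})|[0])[.]([0-9]{4})\z/

# A few values from the accepted and rejected lists above
["1234567890.1234", "3.5555", "0.1234",
 "12345678901.1234", "12.12", "01.0000"].each do |amount|
  puts "#{amount}: #{amount.match?(PATTERN) ? 'accepted' : 'rejected'}"
end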


What's your approach to extracting a summarized paragraph from multiple articles using GPT-3?

In the following scenario, what's your best approach using the GPT-3 API?
You need to come up with a short paragraph about a specific subject
You must base your paragraph on a set of 3-6 articles, each written in an unknown structure
Here is what I found to work well:
The main constraint is the OpenAI token limit on the prompt
Due to that constraint, I'd ask GPT-3 to parse the unstructured data, using the specific subject in the prompt request
I'd then iterate over each article and save all the extractions into one string variable
Then, repeat the request one last time, but using the new string variable
If an article is too long, I'd cut it into smaller chunks
Of course, fine-tuning the model on the specific subject beforehand will produce much better results
The temperature should be set to 0, to make sure GPT-3 uses only facts from the data source.
Example:
Let's say I want to write a paragraph about Subject A, Subject B, and Subject C, and I have 5 articles as references.
The OpenAI Playground will look something like this (a code sketch of the same loop follows the example):
Example Article 1
----
Subject A: example A for GPT-3
Subject B: n/a
Subject C: n/a
=========
Example Article 2
----
Subject A: n/a
Subject B: example B for GPT-3
Subject C: n/a
=========
Example Article 3
----
Subject A: n/a
Subject B: n/a
Subject C: example C for GPT-3
=========
Article 1
-----
Subject A:
Subject B:
Subject C:
=========
... repeating with all articles, save to str
=========
str
-----
Subject A:
Subject B:
Subject C:
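A minimal sketch of that loop, assuming the legacy openai Python package (the pre-1.0 Completion API); the model name, prompt wording, and the articles list are assumptions for illustration:
import openai

openai.api_key = "YOUR_API_KEY"  # assumption: set via env/config in practice

SUBJECTS = ["Subject A", "Subject B", "Subject C"]

def complete(prompt):
    # temperature=0 keeps GPT-3 close to the facts in the source text
    resp = openai.Completion.create(
        model="text-davinci-003",  # assumed model; any completion model works
        prompt=prompt,
        temperature=0,
        max_tokens=256,
    )
    return resp["choices"][0]["text"].strip()

# 1) Extract the subjects from each article and accumulate into one string
extracted = ""
for article in articles:  # `articles` is an assumed list of article texts
    prompt = article + "\n----\n" + "\n".join(s + ":" for s in SUBJECTS)
    extracted += complete(prompt) + "\n=========\n"

# 2) One last pass over the accumulated string produces the final paragraph
final_prompt = extracted + "\n----\nWrite a short paragraph about " + ", ".join(SUBJECTS)
print(complete(final_prompt))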
One may use the Python library GPT Index (MIT license) to summarize a collection of documents. From the documentation:
index = GPTTreeIndex(documents)
response = index.query("<summarization_query>", mode="summarize")
The “default” mode for a tree-based query is traversing from the top of the graph down to leaf nodes. For summarization purposes we will want to use mode="summarize".
 A summarization query could look like one of the following:
“What is a summary of this collection of text?”
“Give me a summary of person X’s experience with the company.”
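Putting it together, a runnable sketch (assuming an early gpt_index release whose README documents SimpleDirectoryReader and GPTTreeIndex; the data folder path is an assumption):
from gpt_index import GPTTreeIndex, SimpleDirectoryReader

# Load every article file from a local folder
documents = SimpleDirectoryReader("data").load_data()

# Build a tree index and traverse it in summarize mode
index = GPTTreeIndex(documents)
response = index.query("What is a summary of this collection of text?", mode="summarize")
print(response)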

Decimal values are truncated with to_f

I have a model called Item, where I am updating the unit_price value (data type Decimal). Currently I am not putting any limit when storing the value; I store it as-is. But now I can see the error PG::NumericValueOutOfRange when the value exceeds the column's limit.
So I was trying to limit the value and checked something in the console. Below is the data (I am not including all the decimal digits here):
#<Item id: 167199, description: "192830139", category_id: 10327, unit_id: 5596, weight: 0.1e5, unit_price: 0.4083333333659917816764132553606237816656920077972709552126705653021442494641325536062378168e1
i = Item.find 167199
i.unit_price.to_f
=> 4.083333333659918
#<Item id: 167199, description: "192830139", category_id: 10327, unit_id: 5596, weight: 0.1e5, unit_price: 0.6511366980197836882065909262763993442019943880913510722934069011050182329156169820243980265070876781866034494363303661586489199452739290976143216266200531728395970406461889852558384421962422689303402903e-2
i.unit_price.to_f
=> 0.006511366980197837
Can I know the reason to_f automatically reduces the precision of the decimal? What would be the best way to solve this issue? I was thinking about truncating with some limit.
Can I know what will be the reason the to_f automatically reduce the limit of the decimal?
The reason is that to_f methods convert objects to Floats, which are standard 64-bit double-precision floating point numbers. The precision of these numbers is limited, so the precision of the original object must be reduced during the conversion to make it fit in a Float. All extra precision is lost.
It looks like you are using the BigDecimal class. BigDecimal#to_f converts the arbitrary-precision floating point decimal object into a Float. Naturally, information is lost during this conversion whenever the big decimal is more precise than what Floats allow. The conversion can even overflow or underflow if the Float limits are exceeded.
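A quick illustration of the precision loss, using the second value from the question (a sketch; roughly 17 significant digits survive the conversion):
require "bigdecimal"

bd = BigDecimal("0.6511366980197836882065909262763993442e-2")
bd.to_f # => 0.006511366980197837 (the remaining digits are discarded)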
I was just thinking about some truncate with some limit
There is a truncate method if you'd like explicit control over the precision of the result. No rounding of any kind will occur; there is a separate method, round, for that.
BigDecimal#truncate
Deletes the entire fractional part of the number, leaving only an integer.
BigDecimal('3.14159').truncate #=> 3
BigDecimal#truncate(n)
Keeps n digits after the decimal point, deletes the rest.
BigDecimal('3.14159').truncate(3) #=> 3.141
You can use Ruby's built-in truncate method, for example:
float_num = 1.222222222222222
truncated_num = float_num.truncate(3) # 3 is the number of decimal places to keep
puts truncated_num # => 1.222
Another way is to use the round method, which rounds to the nearest value instead of dropping digits. For example, a minimal sketch contrasting the two:
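float_num = 1.6666
puts float_num.truncate(3) # => 1.666 (extra digits are simply dropped)
puts float_num.round(3)    # => 1.667 (rounds to the nearest value)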

REST Assured: compare field to a long value

Consider this line:
response.body("path.to.some.long", is(getExpectedResult()));
What I'm getting is:
java.lang.AssertionError: 1 expectation failed.
JSON path path.to.some.long doesn't match.
Expected: is <1500L>
Actual: 1500
Does that mean REST Assured compares a String to a Long? This is very limiting.
How can I tell REST Assured to compare the value as a long?
You'd have to go the traditional way and extract the value first; the JSON number is deserialized as an Integer, which never equals a Long matcher:
long value = response.jsonPath().getLong("path.to.some.long");
assertThat(value, is(1500L));
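If you prefer to keep it in one chain, the same extraction can be written end to end (a sketch; the endpoint, status code, and expected value are placeholders):
import static io.restassured.RestAssured.given;
import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.is;

// jsonPath().getLong(...) casts whatever numeric type Groovy parsed
// (here an Integer) to a long, so the comparison types line up.
long value = given()
        .when()
        .get("/some/endpoint")  // placeholder endpoint
        .then()
        .statusCode(200)
        .extract()
        .jsonPath()
        .getLong("path.to.some.long");

assertThat(value, is(1500L));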

How to use common schema with different descriptions in OpenAPI 3.0? [duplicate]

I'm trying to build a Swagger model for a time interval, using a simple string to store the time (I know that there is also datetime):
definitions:
  Time:
    type: string
    description: Time in 24 hour format "hh:mm".
  TimeInterval:
    type: object
    properties:
      lowerBound:
        $ref: "#/definitions/Time"
        description: Lower bound on the time interval.
        default: "00:00"
      upperBound:
        $ref: "#/definitions/Time"
        description: Upper bound on the time interval.
        default: "24:00"
For some reason the generated HTML does not show the lowerBound and upperBound descriptions, but only the original Time description. This makes me think I'm not doing this correctly.
So the question is whether using a model as a type can in fact be done the way I'm trying to do it.
TL;DR: $ref siblings are supported (to an extent) in OpenAPI 3.1. In previous OpenAPI versions, any keywords alongside $ref are ignored.
OpenAPI 3.1
Your definition will work as expected when migrated to OpenAPI 3.1. This new version is fully compatible with JSON Schema 2020-12, which allows $ref siblings in schemas.
openapi: 3.1.0
...
components:
  schemas:
    Time:
      type: string
      description: Time in 24 hour format "hh:mm".
    TimeInterval:
      type: object
      properties:
        lowerBound:
          # ------- This will work in OAS 3.1 ------- #
          $ref: "#/components/schemas/Time"
          description: Lower bound on the time interval.
          default: "00:00"
        upperBound:
          # ------- This will work in OAS 3.1 ------- #
          $ref: "#/components/schemas/Time"
          description: Upper bound on the time interval.
          default: "24:00"
Outside of schemas - for example, in responses or parameters - $refs only allow sibling summary and description keywords. Any other keywords alongside these $refs will be ignored.
# openapi: 3.1.0

# This is supported
parameters:
  - $ref: '#/components/parameters/id'
    description: Entity ID

# This is NOT supported
parameters:
  - $ref: '#/components/parameters/id'
    required: true
Here are some OpenAPI feature requests about non-schema $ref siblings that you can track/upvote:
Allow sibling elements with $ref that overrides the references definition
Allow required as sibling of $ref (like summary/description)
Extend/override properties of a parameter
OpenAPI 2.0 and 3.0.x
In these versions, $ref works by replacing itself and all of its sibling elements with the definition it is pointing at. That is why
lowerBound:
  $ref: "#/definitions/Time"
  description: Lower bound on the time interval.
  default: "00:00"
becomes
lowerBound:
  type: string
  description: Time in 24 hour format "hh:mm".
A possible workaround is to wrap the $ref in allOf; this can be used to "add" attributes to a $ref, but not to override existing attributes.
lowerBound:
  allOf:
    - $ref: "#/definitions/Time"
  description: Lower bound on the time interval.
  default: "00:00"
Another way is to replace the $ref with an inline definition.
definitions:
  TimeInterval:
    type: object
    properties:
      lowerBound:
        type: string # <------
        description: Lower bound on the time interval, using 24 hour format "hh:mm".
        default: "00:00"
      upperBound:
        type: string # <------
        description: Upper bound on the time interval, using 24 hour format "hh:mm".
        default: "24:00"

Storing a time stamp as a number in Mongoid

I'm new to Mongoid. In my model file, I've created a field with the data type BigDecimal, and I want to store a time stamp in it. Below is the model that I'm using:
class Test
  include Mongoid::Document
  field :time_stamp, type: BigDecimal
end
And Below is the code that I'm using to create a document:
aTime = "Wed Apr 24 09:48:38 +0000 2013"
timest = aTime.to_time.to_i
Test.create({time_stamp: timest})
I see that time_stamp is stored as a String in the database. Can anybody direct me on how to store the timestamp as a number in the DB, so that I can perform some operations on it? Thanks in advance.
According to this answer, the numeric types supported by MongoDB are:
MongoDB stores data in a binary format called BSON which supports these numeric data types:
int32 - 4 bytes (32-bit signed integer)
int64 - 8 bytes (64-bit signed integer)
double - 8 bytes (64-bit IEEE 754 floating point)
Reinforced by this statement in the Mongoid documentation:
Types that are not supported as dynamic attributes since they cannot be cast are:
BigDecimal
Date
DateTime
Range
I don't know what you want to do with the field, but if you really want it stored as a number, you have to use a different numeric type that is supported by MongoDB (BSON), probably Float or Integer.
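For example, a minimal sketch using Integer, which Mongoid casts and MongoDB stores as a BSON int (comfortably large enough for Unix timestamps):
class Test
  include Mongoid::Document
  field :time_stamp, type: Integer # stored as a BSON int, not a String
end

a_time = "Wed Apr 24 09:48:38 +0000 2013"
# String#to_time comes from ActiveSupport, as in the question's code
Test.create(time_stamp: a_time.to_time.to_i) # time_stamp stored as 1366796918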
