I'm providing an OpenAPI 3.0 file as an API specification in JSON format. I expect the APIs to be updated quite regularly and want to keep readers informed of changes made.
I noted that there's a "version" attribute in "info" which can be used to indicate versioning, but is there a dedicated/preferred field to keep a changelog of the versions?
I'm thinking of something like:
V0.1.1 - 2022/11/25
- added "gender" attribute to response of "/getPersonalDetails"
- changed "record_dt" format of "/getPersonalDetails" from "YYYY-MM-DD" to "YYYY-MM-DD hh:mm:ss"
V0.1.2 - 2022/11/26
- other changes...
The only appropriate (or possible) location I've found so far within the document itself is the "description" fields: either dump the whole changelog in "info/description", or increment the version number and write API-specific changelogs in the individual operation descriptions. I just worry that either way could get messy as the versions stack up.
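Something like this is what I have in mind for the "info/description" approach (a rough sketch only; the changelog text is just my example from above embedded as CommonMark in the description):
{
  "openapi": "3.0.3",
  "info": {
    "title": "Example API",
    "version": "0.1.2",
    "description": "## Changelog\n\n### 0.1.1 - 2022/11/25\n- added \"gender\" attribute to response of \"/getPersonalDetails\"\n- changed \"record_dt\" format of \"/getPersonalDetails\"\n\n### 0.1.2 - 2022/11/26\n- other changes..."
  },
  "paths": {}
}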
SwaggerHub does provide a versioning tool, but it requires a subscription, and I also want to share the document as JSON that can be consumed as a standalone file.
I'm using Ruby on Rails and I'm implementing a library called stock_quote, following its GitHub documentation.
I've been able to successfully use the library with different methods in RoR, like:
StockQuote::Stock.quote
StockQuote::Stock.stats
StockQuote::Stock.chart
But I'm having issues fetching a specific date. For example, I can fetch the last six months on a daily basis using:
#stock_chart = StockQuote::Stock.chart(params[:id], '6m')
But I need to fetch a specific date with this method. The iextrading documentation says:
"Specific date: IEX-only data by minute for a specified date in the format YYYYMMDD if available. Currently supporting trailing 30 calendar days."
And the HTTP request is:
/stock/aapl/chart/date/20180620
(Screenshot: HTTP request for a specific date, highlighted)
I tried to execute this request from Ruby on Rails, but I haven't been able to translate the HTTP request into a proper RoR call that successfully fetches the trend data. The stock_quote documentation also has no reference to this specific endpoint.
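For reference, the raw call I'm trying to reproduce would look roughly like this in plain Ruby with Net::HTTP (just a sketch that bypasses the gem; the base URL is the one from the iextrading docs):
require 'net/http'
require 'uri'
require 'json'

# Sketch only: call the IEX chart-by-date endpoint directly instead of going through stock_quote.
symbol = 'aapl'
date   = '20180620'  # YYYYMMDD, trailing 30 calendar days only
uri    = URI("https://api.iextrading.com/1.0/stock/#{symbol}/chart/date/#{date}")

response = Net::HTTP.get_response(uri)
chart_data = JSON.parse(response.body) if response.is_a?(Net::HTTPSuccess)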
I appreciate any help with this issue; I've tried 20+ different syntaxes but none of them worked.
I am pretty new to Cumulocity and I am trying to get data into the platform from my own device using MQTT and the SmartREST templates. I can get data in using the static templates, but they only support certain data types. I am struggling to create the appropriate SmartREST template in the UI, and the documentation doesn't go into much detail.
I get that the template name goes in the MQTT topic (or is selected on login as part of the username) in s/ut/template_name, and the messageId of the messages in the template gets matched to the first CSV field of the MQTT publish payload. What I don't get is the template terminology. In the UI I choose API->Measurement and Method->POST, and I am presented with required values $.type and $.time. My questions:
Is $.type the "measurement fragment type" name or do I have to make it "c8y_CustomMeasurement"? Can I call it whatever I want?
$.time has a value field. Is this the default value if one is not supplied in the publish?
I assume I need to add a numerical value in the optional API values. To link it to the value of the data point should I make the key "c8y_CustomMeasurement.custom.value"?
Am I way off base here?
Every time I publish to my own SmartREST template the server drops the connection, so I assume it's an error in my template setup, but I don't see a way of accessing debug messages (also nothing is published back to me on s/e or s/dt).
For the sake of an example, let's say I wish to publish a unitless, timestamped pulse count with payload format "mId,ts,value" and example data "p01,'2017-07-17 12:34:00',1234".
What you wrote so far is mostly correct; just to be a bit more precise:
The topic is s/uc/template_id (not the template name; that is just a label).
The $.type refers to the 'type' fragment in the measurement JSON; it is a free-text field.
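To make that concrete, the measurement such a template creates would end up looking roughly like this in the platform (I am reusing the fragment/series names from your question; the source id is just a placeholder):
{
  "type": "c8y_CustomMeasurement",
  "time": "2017-07-17T12:34:00.000Z",
  "source": { "id": "12345" },
  "c8y_CustomMeasurement": {
    "custom": { "value": 1234 }
  }
}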
In 99% of cases you want to leave $.time empty. If you set something here it is not a default but is fixed to that timestamp, and you cannot change it when using the template. If you leave it empty and still don't send a time in the publish, the server time is used (see the second example below).
Example: p01,2017-07-17T12:34:00,1234 (no quotes around the timestamp, and ISO 8601 format)
Example without sending time: p01,,1234 (sending an empty string as time results in the server time being set; the template is the same)
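For illustration, publishing that example message with mosquitto_pub could look like this (host, credentials, client id and template id are placeholders):
mosquitto_pub -h <tenant>.cumulocity.com -u "<tenant>/<user>" -P "<password>" -i myDevice -t "s/uc/template_id" -m "p01,2017-07-17T12:34:00,1234"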
Hope these points help you find your issue.
I need to index data from a custom application in Solr. The custom app stores metadata in an Oracle RDBMS and documents (PDF, MS Word, etc.) in a file store. The two are linked in the sense that the metadata in the database refers to a physical document (PDF) in the file store.
I am able to index the metadata from the RDBMS without issues. Now I would like to update the indexed documents with an additional field in which I can store the parsed content from the PDFs.
I have considered and tried the following:
1. Using the Update RequestHandler to try and update the indexed document with the parsed content. This didn't work, and the original document indexed from the RDBMS was overwritten.
2. Using SolrJ to do atomic updates, but I am not sure if this is a good approach for something like this.
Has anyone come across this issue before and what would be the recommended approach?
You can update the document, but it requires that you know the id of the existing document. For example:
{
  "id": "5",
  "parsed_content": {"set": "long text field with parsed content"}
}
Instead of just saying "parsed_content":"something" you have to wrap the value in "parsed_content":{"set":"something"} to trigger adding it to the existing document.
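For example, posting that atomic update to the JSON update handler with curl would look something like this (the collection name and URL are placeholders; commit behaviour depends on your setup):
curl -X POST -H 'Content-Type: application/json' 'http://localhost:8983/solr/mycollection/update?commit=true' --data-binary '[{"id":"5","parsed_content":{"set":"long text field with parsed content"}}]'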
See https://wiki.apache.org/solr/UpdateXmlMessages#Optional_attributes_for_.22field.22 for documentation on how to work with multivalued fields etc.
I am currently working on building a CCD for my project.
I have a problem with the code. For example, let me take the payers section.
CONF-60:A covered party in a policy activity SHOULD contain exactly one participant / participantRole / code, to represent the reason for coverage (e.g. Self, Family dependent, student).
CONF-61:The value for “participant / participantRole / code” in a policy activity’s covered party MAY be selected from ValueSet 2.16.840.1.113883.1.11.19809 PolicyOrProgramCoverageRoleType DYNAMIC.
The above lines are copied from the official HL7 document.
<code code="SELF" codeSystem="2.16.840.1.113883.5.111" displayName="Self"/>
It's copied from a sample CCD document. Going to http://wiki.hl7.de/index.php/2.16.840.1.113883.5.111 we can see the available codes, but my system has values for which I can't find codes there.
So my question is: if I can't find the codes there, can I just use the following and still produce a valid CCD document?
<code displayName="Organ Donor"/>
In other words, is it necessary to set the code and codeSystem in a CCD document?
No, that particular line will not be valid, and yes, it is necessary. These codes and coding systems are how other systems or programs will recognize the component. They are based on standard terminology meant to be recognized across EHR platforms and applications, such as LOINC (2.16.840.1.113883.6.1).
The whole purpose of the C-CDA, as the name "continuity of care" would suggest, is the seamless transition of patient information in a recognizable format to other organizations who may not utilize the same EHR.
Take a look at SMART CCDA Scorecard http://ccda-scorecard.smartplatforms.org/static/ccdaScorecard/#/
Also, what system are you using? Your system, especially for those values, should have the correct coding system because the values "SELF, MTH, FTH" are very common for documenting any demographic, insurance or patient related information. Otherwise, it might not meet the requirements of a certified EHR.
When the coding system doesn't contain an appropriate value you can use a null flavor and show the text, although usage of null flavors is disallowed for certain elements. So your example should actually look something like this:
<code nullFlavor="OTH">
  <originalText>Organ Donor</originalText>
</code>
But in general you should always try to use a valid concept code where one exists. That's the only way you'll achieve meaningful interoperability with third-party systems.
I've been asked to prototype a replacement "file transformation process" (that currently is a mess of SQL) using Altova's MapForce. My input is a CSV file with headers. My problem is that I need to capture both the data AND the column name to use in downstream processing.
I need to have MapForce feed a C# method (imported into MapForce) that takes two parameters: fieldName and value. I can access the value trivially, but after hours poring over the manual (1000 pages!) I haven't found any examples of how to access the field name as an output.
The reason each output needs the field name and the value has to do with how all our mappings/transformations are currently managed: in a database. The .NET code jumps in at this point and does any necessary database lookups.
For example, if I had the following file:
"Symbol", "Account", "Price", ...
"FOO", "10101", "1.23", ...
"BAR", "10201, "13.56", ...
And a static method string TransformField( string fieldName, string value ),
I'd like to map the CSV file's Symbol data output to the method's value parameter and the Field Name "Symbol" to the method's fieldName parameter.
Some limitations:
I need to keep the "wiring" visible in the MapForce GUI. I'll have non-programmers maintaining the mappings in the future. So doing all this in code is not an option.
MapForce is the tool of choice by the company. Part of the reason our original process is such a mess is because the original programmer rolled his own mapping/transformation tool (out of TSQL no less - ouch).
We can treat all inputs/outputs to the method call as strings. Conversions will happen later.
I would like to avoid using scalar literals as inputs. I already have the column names from the file - I do not want to re-type each one and feed it to my method.
I'm not sure how many users out there have experience with this tool, but after 3 days of tinkering with it, I see much potential. If I can just get past this current sticking point, I think the company will have a solid alternative to their current mess.
Thanks for any/all suggestions.
I solved my issue and, for future reference, want to post a solution. I handled my problem by using MapForce's FlexText. This allowed me to extract the header from the CSV file and "invert" the column names as data inputs to the transformation process. Once I knew the approach to take, I was able to find more information directly from Altova.
I found a couple of helpful tutorials while digging through their website:
Altova Online Videos
Web Tutorial
Hope this can help someone else in the future!