I'm using Azure Logic Apps to integrate with a legacy SOAP API, and I would like to translate the XML (particularly the responses) into something easier to use, such as JSON.
Normally I use a Custom Connector within Logic Apps to connect to new APIs. I tried to create a Custom Connector for this SOAP API, but the WSDL contains recursive references, which apparently aren't allowed. I was able to create a managed API with our APIM instance, but still could not produce anything that would allow me to create the custom connector. So I moved on to dealing with the transactions individually. A Liquid transformation map from XML to JSON seems ideal, but so far I haven't got it to work, mainly because I can't figure out the naming convention to access certain XML elements (those that happen to have the same name as their parent). For now I am using the json(xml()) function as a workaround, but it seems less ideal than a Liquid map.
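For reference, the workaround expression looks something like this (the HTTP action name here is just an illustration):

@json(xml(body('Invoke_SOAP_Endpoint')))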
As you can see below, the AgreementId is easily accessible via the normal naming conventions, but I can't seem to access any of the child elements of the second (nested) RequestReportResponse node.
This is the XML I'm trying to transform:
<SOAP-ENV:Envelope>
  <SOAP-ENV:Header/>
  <SOAP-ENV:Body>
    <RequestReportResponse>
      <MessageHeader>
        <AgreementId>urn:agreementId:</AgreementId>
      </MessageHeader>
      <RequestReportResponse>
        <Extension>csv</Extension>
        <FileByteArray>xyzFileBytes</FileByteArray>
        <FileName>xyzFileName</FileName>
        <StatusCode>200</StatusCode>
        <StatusDescription>SUCCESS</StatusDescription>
      </RequestReportResponse>
    </RequestReportResponse>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
Here is the Liquid map I'm using:
{
  "AgreementId": "{{content.Envelope.Body.RequestReportResponse.MessageHeader.AgreementId}}",
  "FileByteArray": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.FileByteArray}}",
  "FileName": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.FileName}}",
  "StatusCode": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.StatusCode}}",
  "StatusDescription": "{{content.Envelope.Body.RequestReportResponse.RequestReportResponse.StatusDescription}}"
}
Expected result:
{
  "AgreementId": "urn:agreementId:",
  "FileByteArray": "xyzFileBytes",
  "FileName": "xyzFileName",
  "StatusCode": "200",
  "StatusDescription": "SUCCESS"
}
Actual result:
{
  "AgreementId": "urn:agreementId:",
  "FileByteArray": "",
  "FileName": "",
  "StatusCode": "",
  "StatusDescription": ""
}
It seems Liquid doesn't have good support for nested elements that share the same tag name. We can use XSLT instead to operate on the XML and then transform it to the JSON we want. But it would be better to improve the format of the XML source to avoid nesting elements with the same tag name.
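For illustration, a minimal XSLT 1.0 sketch of that approach (assuming the field values never need JSON escaping) could look like this:

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="text"/>
  <xsl:template match="/">
    <!-- The inner element shares its parent's name, so select it by nesting -->
    <xsl:variable name="inner" select="//RequestReportResponse/RequestReportResponse"/>
{
  "AgreementId": "<xsl:value-of select="//MessageHeader/AgreementId"/>",
  "FileByteArray": "<xsl:value-of select="$inner/FileByteArray"/>",
  "FileName": "<xsl:value-of select="$inner/FileName"/>",
  "StatusCode": "<xsl:value-of select="$inner/StatusCode"/>",
  "StatusDescription": "<xsl:value-of select="$inner/StatusDescription"/>"
}
  </xsl:template>
</xsl:stylesheet>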
I'd like to use a personal API for named entity recognition (NER), and use brat for visualisation. It seems brat offers an automatic annotation tool, but documentation about its configuration is sparse.
Are there any working examples of this feature?
Could someone explain what the format of the API response should be?
I finally managed to understand how it works, thanks to this topic on the brat-users Google Group mailing list:
https://groups.google.com/g/brat-users/c/shX1T2hqzgI
The text is sent to the automatic annotator API as a byte string in the body of a POST request, and the format BRAT requires in response from this API is a dictionary of dictionaries, namely:
{
    "T1": {
        "type": "WhatEverYouWantString",  # must be defined in the annotation.conf file
        "offsets": [(0, 2), (10, 12)],    # list of tuples of integers giving the start and end positions of each span in the text
        "texts": ["to", "go"]
    },
    "T2": {
        "type": "SomeString",
        "offsets": [(start1, stop1), (start2, stop2), ...],
        "texts": ["string[start1:stop1]", "string[start2:stop2]", ...]
    },
    "T3": ...
}
Then you put this dictionary in JSON format and send it back to BRAT.
Note:
- "T1", "T2", ... are mandatory keys (and correspond to the term index in the .ann file that BRAT generates during manual annotation)
- the keys "type", "offsets" and "texts" are mandatory; otherwise you get errors in the BRAT logs (you can consult these logs as explained in the Google Group thread linked above)
- the formats of the values are strict ("type" takes a string, "offsets" takes a list of tuples (or lists) of integers, "texts" takes a list of strings); otherwise you get BRAT errors
- I suppose the strings in "texts" must correspond to the "offsets"; otherwise there should be an error, or at least a problem with the display of tags (this is already the case if you generate the .ann files from an automatic detection algorithm and the start and stop positions differ from the associated text)
I hope it helps. I managed to build the API using Flask this morning, but I needed to construct a flask.Response object to get the correct output format. Also, the incoming data from BRAT to the Flask API could not be captured until I used the request object's request.get_data() method.
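For illustration, here is a minimal sketch of such a Flask service, based on the format described above (the endpoint path, port, and the toy "first word" annotation rule are my own assumptions):

from flask import Flask, Response, request
import json

app = Flask(__name__)

@app.route("/annotate", methods=["POST"])
def annotate():
    # BRAT sends the document text as a byte string in the POST body
    text = request.get_data(as_text=True)
    annotations = {}
    # Toy rule so the sketch is runnable: annotate the first word, if any
    words = text.split()
    if words:
        start = text.find(words[0])
        end = start + len(words[0])
        annotations["T1"] = {
            "type": "SomeType",         # must be defined in annotation.conf
            "offsets": [[start, end]],  # JSON has no tuples, so lists are sent
            "texts": [text[start:end]],
        }
    # BRAT expects the dictionary serialized as JSON in the response body
    return Response(json.dumps(annotations), mimetype="application/json")

if __name__ == "__main__":
    app.run(port=47111)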
Also, I have to mention that I was not able to use the examples given in the BRAT GitHub:
https://github.com/nlplab/brat/blob/master/tools/tokenservice.py
https://github.com/nlplab/brat/blob/master/tools/randomtaggerservice.py
I mean I could not get them working, but I'm not familiar at all with the API and HTTP packages in Python. At least I figured out the correct format for the API response.
Finally, I have no idea how to produce relations among entities (i.e. BRAT arrows) from the API, though
https://github.com/nlplab/brat/blob/master/tools/restoataggerservice.py
seems to work with such things.
The Google Group discussion
https://groups.google.com/g/brat-users/c/lzmd2Nyyezw/m/CMe9FenZAAAJ
seems to mention that it is not possible to send relations between entities back from the Automatic Annotation API and make them work with BRAT.
I may try it later :-)
I would like to A/B test different variations of strings in my app by fetching translations from my server, and displaying them on the screen.
Let's say I have an API that takes a locale and a list of string keys, and returns their values, for example:
// request
{
  "locale": "es",
  "keys": ["greetings"]
}
// response
{
  "greetings": "Hola!"
}
and then in the app I just use that value. It's easy and it works.
However, I'm not sure what to do with strings that require proper handling of plural nouns.
Usually I'd use .stringsdict.
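A typical entry might look something like this (the numberOfItems key and the wording are just an example):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>numberOfItems</key>
  <dict>
    <key>NSStringLocalizedFormatKey</key>
    <string>%#@items@</string>
    <key>items</key>
    <dict>
      <key>NSStringFormatSpecTypeKey</key>
      <string>NSStringPluralRuleType</string>
      <key>NSStringFormatValueTypeKey</key>
      <string>d</string>
      <key>one</key>
      <string>%d item</string>
      <key>other</key>
      <string>%d items</string>
    </dict>
  </dict>
</dict>
</plist>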
then in the code I use it as follows:
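A sketch of the usual pattern, using the example key from above:

import Foundation

let itemCount = 3
let format = NSLocalizedString("numberOfItems", comment: "")
// localizedStringWithFormat applies the plural rule from Localizable.stringsdict
let label = String.localizedStringWithFormat(format, itemCount)
// label == "3 items" for English, given the entry above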
This works just fine; everything looks correct at runtime.
This only works with Localizable.stringsdict, and it has to be a file in the bundle.
Is there a way to make it work with a dictionary fetched from a remote API?
I'm currently coding a transition from a system that used hand-crafted JSON files to one that can automatically generate the JSON files. The old system works; the new system works; what I need to do is transfer data from the old system to the new one.
The JSON files are used by an iOS app to provide functionality, and have never been read by our server software in Ruby on Rails before. To convert between the original system and the new system, I've started work on parsing the existing JSON files.
The problem is that one of my first two sample files has trailing commas in the JSON:
{ "sample data": [1, 2, 3,] }
This apparently went through just fine with the iOS app, because that file has been in use for a while. Now I need some way to parse the data provided in the file in my Ruby on Rails server, which (quite rightfully) throws an exception over the illegal trailing comma in the JSON file.
I can't just JSON.parse the code, because the parser, quite rightfully, rejects it as invalid JSON. Is there some way to parse it -- either an option I can pass to JSON.parse, or a gem that adds something, etc etc? Or do I need to report back that we're going to have to hand-fix the broken files before the automated process can process them?
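A minimal repro of the failure:

require 'json'

JSON.parse('{ "sample data": [1, 2, 3,] }')
# raises JSON::ParserError because of the trailing comma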
Edit:
Based on comments and requests, it looks like some additional data is called for. The JSON files in question are stored in .zip files on S3, stored via ActiveStorage. The process I'm writing needs to download, unpack, and parse the zip files, using the 'manifest.json' file as a key to convert the archived file into a database structure with multiple, smaller files stored on S3 instead of a single zip that contains everything. A (very) long term goal is for clients to stop downloading a unitary zip file, and instead download the files individually. The first step towards that is to break the zip files up on the server, which means the server needs to read in the zip files. A more detailed sample of the data follows. (Note that the structure contains several design decisions I later came to regret; one of the original ideas was to be able to re-use files rather than pack multiple copies of the same identical file, but YAGNI bit me in the rear there)
The following includes comments that are not legal in JSON format:
{
  "defined_key": [
    {
      "name": "Object_with_subkeys",
      "key": "filename",
      "subkeys": [
        {
          "id": "1"
        },
        {
          "id": "2"
        },
        {
          "id": "3" // references to identifier on another defined key
        }, // Note trailing comma
      ]
    }
  ],
  "another_defined_key": [
    {
      "identifier": "should have made parent a hash with id as key instead of an array",
      "data": "metadata",
      "display_name": "Names: Can be very arbitrary",
      "user text": "Wait for the right {moment}", // I actually don't expect { or } in the strings, but they're completely legal and may have been used
      "thumbnail": "filename-2.png",
      "video-1": "filename-3.mov"
    }
  ]
}
The problem is that you are trying to parse something that looks a lot like JSON but is not actually JSON as defined by the spec:
Arrays: An array structure is a pair of square bracket tokens surrounding zero or more values. The values are separated by commas.
Since you have a trailing comma, another value is expected, and most JSON parsers will raise an error due to this violation.
All that being said, json-next will parse this appropriately, so maybe give that a shot.
It can parse JSON-like representations that completely violate the JSON spec, depending on the flavor you use (HanSON, SON, or JSONX, as defined in the gem).
Example:
json = "{ \"sample data\": [1, 2, 3,] }")
require 'json/next'
HANSON.parse(json)
#=> {"sample data"=>[1, 2, 3]}
but the following is equivalent and completely violates spec
JSONX.parse("{ \"sample data\": [1 2 3] }")
#=> {"sample data"=>[1, 2, 3]}
So if you choose this route, do not expect it to validate the JSON data or structure in any fashion; you could end up with unintended results.
I have been using Karate and RestAssured for some time. There are advantages and downsides to both tools, of course. Right now I have a RestAssured project where I have Request and Response objects and POJOs. My requests wrap my endpoints and send my POJOs to those endpoints. I do all my Headers, etc. configuration in an abstract layer. In case I need to override them, I override them during the test. If not, it's two lines of code for me to trigger an endpoint.
My way of working with the happy path and negative path of an endpoint is that I initialize the POJO before every test with new values in the constructor. Then I override the value that I want in test scope. For example, if I want to test a negative case for the password field, I set this field to an empty string during the test. The other fields are already set to some random values before the test.
But I don't know how to achieve this with Karate.
Karate allows me to create a JSON representation of my request body and define my parameters, as seen in the example below:
{
  "firstName": "<name>",
  "lastName": "<lastName>",
  "email": "<email>",
  "role": <role>
}
Then in every test I have to fill all the fields with some data.
| token    | value                 |
| name     | 'canberk'             |
| lastName | ''                    |
| email    | 'canberk#blbabla.com' |
| role     | '1'                   |
and
| token    | value                 |
| name     | ''                    |
| lastName | 'akduygu'             |
| email    | 'canberk#blbabla.com' |
| role     | '1'                   |
It goes on like this.
It's OK with a 4-field JSON body, but when the body starts to have more than 20 fields, it becomes a pain to initialise every field for every test.
Does Karate have a way of solving this with predefined steps, or do I need to come up with my own solution?
There are advantages and downsides to both tools, of course.
I'm definitely biased, but IMHO the only disadvantage of Karate compared to REST-assured is that you don't get compile time safety :) I hope that you have seen this comparison.
Karate has multiple ways to do what you want. Here's what I would do.
- create a JSON file that has all your "happy path" values set
- use the read() syntax to load the file (which means it is re-usable across multiple tests)
- use the set keyword to update only the field for your scenario or negative test, as sketched below
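Here's a sketch of what that looks like in a feature file (the file name, url, path, and expected status are my assumptions):

Scenario: negative test - empty lastName
* def payload = read('user.json')
* set payload.lastName = ''
Given url baseUrl
And path 'users'
And request payload
When method post
Then status 400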
You can get even fancier if you use embedded expressions:
- create a JSON file that has all your "happy path" values set, where the values you want to vary look like foo: '##(foo)'
- before using read(), init some variables, e.g. * def foo = 'bar'; if you use null, that JSON key will even be removed from the JSON
- read() the JSON; it is ready for use (see the sketch below)
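For example, assuming user.json contains "lastName": "##(lastName)":

* def lastName = null
* def payload = read('user.json')
# payload now has no lastName key at all, because ##() drops the key when the value is null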
You can refer to this file that demonstrates some of these concepts for XML, and you may get more ideas: xml.feature
Say I have a ruby model which has a name and age attribute. A GET request for one of these objects returns something like this when using rails generate scaffold:
{
  "id": 1,
  "name": "foo",
  "age": 21,
  "parent_id": 1
}
By default a POST/PUT to this resource expects:
{
  "user": {
    "name": "foo",
    "age": 21,
    "parent_id": 1
  }
}
When using nested resources configured in routes, the default behaviour is to add the parent id outside of this nested hash too, e.g. PUT /parents/1/users:
{
  "parent_id": 1,
  "user": {
    "name": "foo",
    "age": 21
  }
}
I can go to the controller simply enough and alter what parameters are expected, but I'd like to know why that is the case and whether I risk breaking anything by changing it.
More specifically, this is a Rails API and I'd like to add swagger doc generation to it, so having this asymmetrical request body is annoying.
So in summary my questions are:
What are the advantages of this, why is it the Rails default and what do I risk breaking by changing it?
How best to add swagger support to the API in a way which doesn't have different GET responses vs PUT/POST (which seems like bad design to me, but maybe I'm wrong)?
How best/should I make the API automatically add the parent id when making a call like POST /parents/1/users? Again, the default generation doesn't support it, and I'm wondering if there's a reason.
What are the advantages of this?
This is perhaps an opinion-based answer, which is generally frowned upon by StackOverflow, but here's my 2 cents.
In the GET request, you are simply being returned a resource. So the attributes are all you need to know:
{
  "id": 1,
  "name": "foo",
  "age": 21,
  "parent_id": 1
}
On the other hand, for this PUT request:
{
  "parent_id": 1,
  "user": {
    "name": "foo",
    "age": 21
  }
}
You can think of the parameters as being split into two "sections": the parent_id (which would normally get sent as a path param, not as part of the request body!) is something to "search/filter" by, whereas the user params are the attributes of the user resource to update.
This logical separation of concerns is particularly useful in the context of web forms (which is what Rails was originally/primarily designed for), especially when dealing with complex queries or "nested" attributes.
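As a sketch of how a typical controller consumes that split (the model and attribute names mirror the question; this is illustrative, not generated code):

# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def update
    parent = Parent.find(params[:parent_id]) # the "search/filter" part, from the path
    user = parent.users.find(params[:id])
    user.update!(user_params)                # the attributes part, from the nested hash
    render json: user
  end

  private

  def user_params
    params.require(:user).permit(:name, :age) # strong parameters
  end
end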
what do I risk breaking by changing it?
Nothing really.
That format, however, was "optimised" for the context of RESTful APIs and web forms.
If you'd rather use some other format then go ahead; Rails isn't forcing you to use anything here. Just beware that a naive "better design" may come back to bite you down the line.
How best to add swagger support to the API in a way which doesn't have different GET responses vs PUT/POST (which seems like bad design to me, but maybe I'm wrong)?
You can design the API any way you like. If you want "flat parameters" everywhere, then just build the Rails application like that.
How best/should I make the API automatically add the parent id when making a call like POST /parents/1/users? Again, the default generation doesn't support it, and I'm wondering if there's a reason.
I'm not sure what you mean by "the default generation doesn't support it". The default generation of what? The swagger docs? The Rails application?
Anyway... That should be implemented as a path parameter. The swagger docs should look something like this:
/parents/{parent_id}/users:
  get:
    description: '.....'
    parameters:
      - name: parent_id
        in: path
        description: 'ID of the parent'
        required: true
        type: integer
Tom Lord's answer and note are probably better than mine.
My guess is that this mimics the behaviour of HTTP. If you GET data, you can add parameters (?name=foo). However, if you POST data, you tend to put the payload in the body of the request, and not have any parameters in the URL.
It's likely that Rails expects you to put that JSON object into the body of the request, whereas for the GET request it's going to split the key/values apart and send them as parameters.
The advantage of keeping things the way they are is that it'll avoid a gotcha later. I'd argue this is always the best thing to do in programming, especially with something like Rails. But if you're making an API, I can see why you'd want to let people send data as parameters rather than a body that needs validating.
As for Swagger, let the user know they need to send the data as a JSON string, and then use the parameters feature as expected.
The last one is a bit tricky. I guess it's up to the design of your API. You could pass it as part of the request. Maybe take a look through something like RESTful API Design to clarify your goal.