How do I create the OpenAPI section for the 404 page?

I'm using OpenAPI 3. A tool I use, OWASP ZAP, looks at the OpenAPI doc and creates fake requests. When it gets a 404, it complains that the response doesn't have the media type that the OpenAPI doc promises.
But I didn't write anything in the OpenAPI doc about how 404s are handled. Obviously I can't write an infinite number of bad endpoints and document that they each return a 404.
What is the right way to record this in the OpenAPI yaml or json?
Here is a minimal yaml file... I know for sure that this file doesn't say anything about 404, i.e. 404s aren't in the contract, so tools complain when a 404 comes back, even though a 404 is what a site should return when a resource is missing.
---
"openapi": "3.0.0"
paths:
/Foo/:
get:
responses:
"200":
content:
application/json:
schema:
$ref: "#/components/schemas/Foo"
default:
description: Errors
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
components:
schemas:
Foo:
type: object
required:
- name
properties:
name:
type: string
Error:
type: object
required:
- error
properties:
error:
type: string
message:
type: string
data:
type: object

This has been proposed already but not implemented: https://github.com/OAI/OpenAPI-Specification/issues/521
In the comments someone gave a suggestion: https://github.com/OAI/OpenAPI-Specification/issues/521#issuecomment-513055351, which reduces your code a little (define the 404 response once under components/responses and reference it with $ref from each operation), but you would still have to insert N*M entries for N paths * M methods.
Since we can't make the specification change to fit our needs, all that remains is to adapt ourselves.
From your profile, you seem to be a Windows user. You can, for example, add a new Explorer context-menu entry for your .yaml files (Add menu item to windows context menu only for specific filetype, Adding a context menu item in Windows for a specific file extension) and make it run a script that auto-fills your file.
Here is an example Python script called yamlfill404.py that the context-menu entry would invoke as path/to/pythonexecutable/python.exe path/to/python/script/yamlfill404.py %1, where %1 is the path of the file being right-clicked.
Python file:
import yaml
from sys import argv
import re

# Desired ordering of the top-level keys in the output file.
order = ['openapi', 'paths', 'components']
# Matches the top-level (non-indented) keys of the dumped YAML.
level0re = re.compile('(?<=\n)[^ ][^:]+')

def _propfill(rootnode, nodes, value):
    # Walk (and create, if necessary) the nested dictionaries, then set the final value.
    if len(nodes) == 1:
        rootnode[nodes[0]] = value
    if len(nodes) > 1:
        nextnode = rootnode.get(nodes[0])
        if rootnode.get(nodes[0]) is None:
            nextnode = {}
            rootnode[nodes[0]] = nextnode
        _propfill(nextnode, nodes[1:], value)

def propfill(rootnode, nodepath, value):
    # Split the path on '/' while honouring escaped slashes ('\/').
    _propfill(rootnode, [n.replace('__slash__', '/') for n in nodepath.replace('\\/', '__slash__').split('/')], value)

def yamlfill(filepath):
    with open(filepath, 'r') as file:
        yamltree = yaml.safe_load(file)
    #propfill(yamltree, 'components/schemas/notFoundResponse/...', '')
    propfill(yamltree, 'components/responses/notFound/description', 'Not found response')
    propfill(yamltree, 'components/responses/notFound/content/application\\/json/schema/$ref', '#/components/schemas/notFoundResponse')
    # Collect the responses object of every operation under every path.
    responses = [mv['responses'] if 'responses' in mv else [] for pk, pv in (yamltree['paths'].items() if 'paths' in yamltree else []) for mk, mv in pv.items()]
    for response in responses:
        propfill(response, '404/$ref', '#/components/responses/notFound')
    yamlstring = yaml.dump(yamltree)
    # Reorder the top-level sections according to 'order', since pyyaml loses the original ordering.
    offsets = [i[1] for i in sorted([(order.index(f.group(0)) if f.group(0) in order else len(order), f.start() - 1) for f in [f for f in level0re.finditer('\n' + yamlstring)]])]
    offsets = [(offset, (sorted([o for o in offsets if o > offset] + [len(yamlstring) - 1])[0])) for offset in offsets]
    with open(filepath[:-5] + '_404.yaml', 'w') as file:
        file.write(''.join(['\n' + yamlstring[o[0]:o[1]] for o in offsets]).strip())

yamlfill(argv[-1])
It processes %1, which would be path/to/original.yaml, and saves the result as path/to/original_404.yaml (but you can change it to overwrite the original).
This example script changes the YAML formatting (quoting style, spacing, ordering, etc.) because of the library used, pyyaml. I had to reorder the file with order = ['openapi','paths','components'] because pyyaml loses the original ordering. For less intrusion, a more manual insertion might be better suited, maybe one that uses only regex, or awk; there are plenty of ways.
Unfortunately it is just a hack, not a solution.
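If preserving the original quoting, spacing and key order matters, a round-trip YAML library is less intrusive than pyyaml. Here is a minimal sketch of the same idea, assuming ruamel.yaml is installed (the notFound/notFoundResponse names are just the ones used above):
# Sketch only: same 404-filling idea, but ruamel.yaml's round-trip mode
# keeps the original ordering, quoting and comments of the file.
from sys import argv
from ruamel.yaml import YAML

yaml = YAML()  # round-trip mode is the default

def add_404(filepath):
    with open(filepath) as f:
        doc = yaml.load(f)
    # Reusable response under components/responses, as in the script above.
    responses = doc.setdefault('components', {}).setdefault('responses', {})
    responses['notFound'] = {
        'description': 'Not found response',
        'content': {'application/json': {
            'schema': {'$ref': '#/components/schemas/notFoundResponse'}}},
    }
    # Point every operation's 404 at the shared response.
    for path_item in doc.get('paths', {}).values():
        for operation in path_item.values():
            if isinstance(operation, dict) and 'responses' in operation:
                operation['responses']['404'] = {'$ref': '#/components/responses/notFound'}
    with open(filepath[:-5] + '_404.yaml', 'w') as f:
        yaml.dump(doc, f)

add_404(argv[-1])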

Related

Jenkins writeYaml not replacing value in yaml file

I have a yaml file with the following structure:
transfers:
  - name: xyz
    cloud: aws
    subheading:
      impact: Low
      reason: ---
    artifacts:
      - name: name1
        type: type1
        source:
          hash: a1b2C3dd4 ---> VALUE TO OVERWRITE
I would like to overwrite the existing hash value with the value of the latest GIT_COMMIT.
I have tried the method from the following question: write yaml file in jenkins with groovy. However, the value of hash[0][0] remains unchanged. This is the case even when I replace env.GIT_COMMIT with a test hash string "testHash123". I'm unsure why this is the case.
def filename = ('path/to/file.yaml')
def datas = readYaml file: filename
//change hash
datas.transfers['artifacts'].source.hash[0][0] = env.GIT_COMMIT
writeYaml file: filename, data: datas, overwrite: true
Please try the following.
datas.transfers[0]['artifacts'][0]['source'].hash = env.GIT_COMMIT
The easiest way to figure this out is by printing the parsed data, so you can understand the structure:
[transfers:[[name:xyz, cloud:aws, subheading:[impact:Low, reason:xxxx], artifacts:[[name:name1, type:type1, source:[hash:a1b2C3dd4]]]]]]
As you can see above, transfers is a sequence, so you need to extract the correct element with an index.

How to specify an example of two path template parts in OpenAPI / Swagger

Given a path template with two parts such as:
paths:
  /blah/{fooPart}-stuff-{barPart}:
    parameters:
      - in: path
        name: fooPart
        description: foo part of this matrix ID
        required: true
        schema:
          type: string
      - in: path
        name: barPart
        description: bar part of this matrix ID
        required: true
        schema:
          type: string
I'd like to provide a list of examples. Since fooPart and barPart are correlated, I'd like each example to carry the correlated data elements. I'd imagine putting them in components:
examples:
  Happy:
    summary: Happy path
    value:
      fooPart: red
      barPart: up
  Sad:
    summary: Sad path
    value:
      fooPart: up
      barPart: red
When I add the refs as an examples list to each parameter, like so:
- in: path
  name: fooPart
  description: foo part of this matrix ID
  required: true
  schema:
    type: string
  examples:
    happy:
      $ref: "#/components/examples/Happy"
    sad:
      $ref: "#/components/examples/Sad"
the rendered display is ... adequate? Wrong? Not helpful? The examples aren't correlated, and the array specified as value is presented in the box for each parameter, as seen here. I recognize this is what I told it to do. Is there any way to bundle all the examples together? Or is my only option the one I will offer as an answer? Ugh.
I'm assuming the only option is:
examples:
  HappyFoo:
    summary: Happy path
    value: red
  HappyBar:
    summary: Happy path
    value: up
  SadFoo:
    summary: Sad path
    value: up
  SadBar:
    summary: Sad path
    value: red
With each parameter only including its own values like so:
parameters:
  - in: path
    name: fooPart
    description: foo part of this matrix ID
    required: true
    schema:
      type: string
    examples:
      Happy:
        $ref: "#/components/examples/HappyFoo"
      Sad:
        $ref: "#/components/examples/SadFoo"
The examples aren't correlated, but at least the value in the box is correct as seen here.

F# - How can we validate the whole schema of an API response using HttpFs.Client or Hopac?

I have a test where, after getting a response, I would like to validate the entire schema of the response (not compare individual response nodes/values).
Sample test:
[<Test>]
let howtoValidateSchema () =
    let request =
        Request.createUrl Post "https://reqres.in/api/users"
        |> Request.setHeader (Accept "application/json")
        |> Request.bodyString """{"name": "morpheus",
 "job": "leader"}"""
        |> Request.responseAsString
        |> run
Is there a way that I can save my expected schema somewhere and, once I get the response, compare them to check that the response has the same nodes (neither fewer nor more than the expected schema)?
I am OK with opting for other libs like FSharp.Data if there is no direct way in HttpFs.Client. I looked at FSharp.Data (https://fsharp.github.io/FSharp.Data/library/JsonProvider.html) but am not able to see how it meets the requirement that the schema comparison be done with savedExpectedSchemaJson = ResponseJson.
You can use Newtonsoft.Json.Schema to validate schemas:
open Newtonsoft.Json.Schema
open Newtonsoft.Json.Linq

// expectedSchema is your JSON Schema as a string,
// responseJson is the body returned by the API call
let schema = JSchema.Parse expectedSchema
let json = JObject.Parse responseJson
let valid = json.IsValid schema
However, this assumes you have a schema predefined somewhere. If you don't have such a schema, it's best to use the JsonProvider, which can infer it for you.
Run the call manually and save the result in a sample.json file and create a type using the JsonProvider:
type ResponseSchema = JsonProvider<"sample.json">
and you can use this type to parse any new content based on the sample (provided that the sample is representative):
ResponseSchema.Parse response
This won't validate the schema, but it will try to parse the input as best it can.

Wireshark: display filters vs nested dissectors

I have an application that sends JSON objects over AMQP, and I want to inspect the network traffic with Wireshark. The AMQP dissector gives the payload as a series of bytes in the field amqp.payload, but I'd like to extract and filter on specific fields in the JSON object, so I'm trying to write a plugin in Lua for that.
Wireshark already has a dissector for JSON, so I was hoping to piggy-back on that, and not have to deal with JSON parsing myself.
Here is my code:
local amqp_json_p = Proto("amqp_json", "AMQP JSON payload")
local amqp_json_result = ProtoField.string("amqp_json.result", "Result")
amqp_json_p.fields = { amqp_json_result }
register_postdissector(amqp_json_p)

local amqp_payload_f = Field.new("amqp.payload")
local json_dissector = Dissector.get("json")
local json_member_f = Field.new("json.member")
local json_string_f = Field.new("json.value.string")

function amqp_json_p.dissector(tvb, pinfo, tree)
  local amqp_payload = amqp_payload_f()
  if amqp_payload then
    local payload_tvbrange = amqp_payload.range
    if payload_tvbrange:range(0,1):string() == "{" then
      json_dissector(payload_tvbrange:tvb(), pinfo, tree)
      -- So far so good. Let's look at what the JSON dissector came up with.
      local members = { json_member_f() }
      local strings = { json_string_f() }
      local subtree = tree:add(amqp_json_p)
      for k, member in pairs(members) do
        if member.display == 'result' then
          for _, s in ipairs(strings) do
            -- Find the string value inside this member
            if not (s < member) and (s <= member) then
              subtree:add(amqp_json_result, s.range)
              break
            end
          end
        end
      end
    end
  end
end
(To start with, I'm just looking at the result field, and the payload I'm testing with is {"result":"ok"}.)
It gets me halfway there. The following shows up in the packet dissection, whereas without my plugin I only get the AMQP section:
Advanced Message Queueing Protocol
    Type: Content body (3)
    Channel: 1
    Length: 15
    Payload: 7b22726573756c74223a226f6b227d
JavaScript Object Notation
    Object
        Member Key: result
            String value: ok
            Key: result
AMQP JSON payload
    Result: "ok"
Now I want to be able to use these new fields as display filters, and also to add them as columns in Wireshark. The following work for both:
json (shows up as Yes when added as a column)
json.value.string (I can also filter with json.value.string == "ok")
amqp_json
But amqp_json.result doesn't work: if I use it as a display filter, Wireshark doesn't show any packets, and if I use it as a column, the column is empty.
Why does it behave differently for json.value.string and amqp_json.result? And how can I achieve what I want? (It seems like I do need a custom dissector, as with json.value.string I can only filter on any member having a certain value, not necessarily result.)
I found a thread on the wireshark-dev mailing list ("Lua post-dissector not getting field values", 2009-09-17, 2009-09-22, 2009-09-23) that points to the interesting_hfids hash table, but it seems like the code has changed a lot since then.
If you'd like to try this, here is my PCAP file, base64-encoded, containing a single packet:
1MOyoQIABAAAAAAAAAAAAAAABAAAAAAAjBi1WfYOCgBjAAAAYwAAAB4AAABgBMEqADcGQA
AAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAAAB/tcWKO232y46mkSqgBgxtgA/AAAB
AQgKRjDNvkYwzb4DAAEAAAAPeyJyZXN1bHQiOiJvayJ9zg==
Decode with base64 -d (on Linux) or base64 -D (on OSX).
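If you prefer not to depend on the platform-specific base64 flags, here is a tiny equivalent sketch in Python (the capture.b64 and capture.pcap file names are just placeholders):
# Decode the base64 text above into a .pcap file that Wireshark can open.
import base64

with open('capture.b64') as f:
    raw = base64.b64decode(f.read())
with open('capture.pcap', 'wb') as out:
    out.write(raw)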
It turns out I shouldn't have tried to compare the display property of the json.member field. Sometimes it gets set by the JSON dissector, and sometimes it just stays as Member.
The proper solution would involve checking the value of the json.key field, but since the key I'm looking for presumably would never get escaped, I can get away with looking for the string literal in the range property of the member field.
So instead of:
if member.display == 'result' then
I have:
if member.range:range(1, 6):string() == 'result' then
and now both filtering and columns work.

Can I write headers with the CsvProvider without providing a sample?

Why is it that if I create a new CSV type with the CsvProvider<> in F# like this:
type ThisCsv = CsvProvider<Schema = "A (decimal), B (string), C (decimal)", HasHeaders = false>
then create/fill/save the .csv, the resulting file does not contain the headers in the schema I specified? It seems like there should be a way to include headers in the final .csv file, but that's not the case.
Setting HasHeaders = true errors out, because there's no sample provided. The only way for HasHeaders = true to work is to have a sample .csv. It seems to me that there should be a way to specify the schema without a sample and also include the headers in the final file.
Am I missing something when I use [nameOfMyCSV].Save() that can include the headers from the schema or can this not be done?
I'm afraid the headers from the Schema are only used for the property names of the row. To have them in the file you save, you have to provide a Sample. The sample can contain only the headers, though. Also, HasHeaders has to be set to true:
type ThisCsv = CsvProvider<
    Sample = "A, B, C",
    Schema = "A(decimal), B, C(decimal)",
    HasHeaders = true>
If the sample contains only headers and you want to specify data types, the schema has to be provided as well.
You can see that the schema is used only for property names when you rename the Sample headers in the Schema:
type ThisCsv = CsvProvider<
    Sample = "A, B, C",
    Schema = "A->AA(decimal), B->BB, C(decimal)",
    HasHeaders = true>
Then the generated row will have properties AA, BB, and C, but the generated file will still have A, B, C. Also, the Headers property of a csv you created using this schema will be Some [|"A"; "B"; "C"|]:
// Run in F# Interactive
let myCsv = new ThisCsv([ThisCsv.Row(1.0m, "a", 2.0m)])
myCsv.Headers
// The last line returns: Some [|"A"; "B"; "C"|]
Also, to get a better understanding of what's happening inside the parser, it's worth taking a look at the source code on GitHub: the CSV folder in general and CsvRuntime.fs in particular.

Resources