I'm using DarkaOnLine's L5-Swagger to generate Swagger docs with OpenAPI annotations.
To reference a schema I can do
@OA\Property(property="certification", type="array", @OA\Items(ref="#/components/schemas/Certification"))
and it works perfectly fine, showing as
"certification": [
  {
    "certification_id": 0,
    "name": "string"
  }
],
But this creates an array block, with square brackets that can hold multiple objects.
How do I keep the same reference but lose the array? Something like
@OA\Property(property="certification", type="object", @OA\Items(ref="#/components/schemas/Certification")),
so as to remove the square brackets and show only an object, like:
"certification": {
"certification_id": 0,
"name": "string"
}
You can do:
@OA\Property(
    property="certification",
    ref="#/components/schemas/Certification"
)
The @OA\Items annotation is only used when you want to specify the properties inside an array (see Data Types: array).
In your case you just want to describe an object, so you only have to reference the object's schema in the property and remove @OA\Items.
I got the following .geojson file from Overpass API:
{
"type": "FeatureCollection",
"generator": "overpass-ide",
"copyright": "The data included in this document is from www.openstreetmap.org. The data is made available under ODbL.",
"timestamp": "2022-07-18T07:57:39Z",
"features": [
{
"type": "Feature"
...
I converted this simply with gdf = geopandas.read_file(filename) into a GeoDataFrame object. This all worked great.
The GeoDataFrame object now contains all elements that were listed inside of "features" within the .geojson.
But now I wondered if there is any way for me to access the value for "timestamp" on this GeoDataFrame object? Or does the GeoDataFrame object not even contain this key/value-pair anymore?
gdf["timestamp"] does not work.
I could use the json library as shown below, but then I would have to load the entire file again, which seems like a huge waste of resources.
import json
f = open('filename.geojson', encoding="utf8")
data = json.load(f)
timestamp = data["timestamp"]
Any help would be much appreciated.
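One way to avoid reading the file twice is to parse it once with the json module and hand the parsed features to geopandas afterwards. A minimal sketch, using an inline trimmed-down stand-in for the real Overpass file (the data here is illustrative only):

```python
import json

# Trimmed stand-in for the Overpass output (illustrative data, not real OSM data).
geojson_text = """{
  "type": "FeatureCollection",
  "generator": "overpass-ide",
  "timestamp": "2022-07-18T07:57:39Z",
  "features": [
    {"type": "Feature",
     "geometry": {"type": "Point", "coordinates": [7.1, 50.7]},
     "properties": {"name": "example"}}
  ]
}"""

# Parse the document once; every top-level key is still reachable here.
data = json.loads(geojson_text)
timestamp = data["timestamp"]
print(timestamp)  # 2022-07-18T07:57:39Z

# The same parsed dict can then feed geopandas without a second read
# (assumes geopandas is installed):
# gdf = geopandas.GeoDataFrame.from_features(data["features"])
```

The GeoDataFrame itself only keeps what is inside "features", so holding on to the parsed dict (or just the metadata keys you need) is the simplest way to preserve "timestamp".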
The log4j2 PatternLayout offers a %notEmpty conversion pattern that allows you to skip sections of the pattern that refer to empty variables.
Is there any way to do something similar for JsonTemplateLayout, specifically for thread context data (MDC)? It correctly (IMO) suppresses null fields, but it doesn't do the same with empty ones.
E.g., given the following in my JSON template:
"application": {
"name": { "key": "x-app", "$resolver": "mdc" },
"context": { "key": "x-app-context", "$resolver": "mdc" },
"instance": {
"name": { "key": "x-appinst", "$resolver": "mdc" },
"context": { "key": "x-appinst-context", "$resolver": "mdc" }
}
}
is there a way to prevent blocks like this from being logged, where the only data in the subtree is the empty string values for context?
"application":{"context":"","instance":{"context":""}}
(Yes, ideally I'd prevent those empty strings being put into the context in the first place, but this isn't my app, I'm just configuring it.)
JsonTemplateLayout author speaking here. Currently, JsonTemplateLayout doesn't support blank property exclusion for the following reasons:
The definition of empty/blank is ambiguous. One might have null, {}, "\s*", [], [[]], [{}], etc. as valid JSON values. Which of these are empty/blank? Even if we agree on a certain behavior, will it suit the rest of the users?
Checking if a value is empty/blank incurs an extra runtime cost.
Most of the time you don't care. You persist logs in a storage system, e.g., ELK stack, and there blank value elimination is provided out of the box by the storage engine in the most efficient way.
Would you mind sharing your use case, please? Why do you want to prevent the emission of "context": "" properties? If you deliver your logs to Elasticsearch, there you can easily exclude such fields via appropriate index mappings.
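For what it's worth, blank-value stripping can also be done downstream of the layout, e.g. in a log shipper or an ingestion step. A minimal Python sketch of one possible policy (the function name and the choice of what counts as blank are mine, which illustrates exactly the ambiguity mentioned above):

```python
def prune_blank(value):
    """Recursively drop "" leaves, then any dict/list left empty by the pruning."""
    if isinstance(value, dict):
        pruned = {k: prune_blank(v) for k, v in value.items()}
        return {k: v for k, v in pruned.items() if v not in ("", {}, [])}
    if isinstance(value, list):
        pruned = [prune_blank(v) for v in value]
        return [v for v in pruned if v not in ("", {}, [])]
    return value

# The problematic subtree from the question collapses away entirely:
event = {"application": {"context": "", "instance": {"context": ""}}, "message": "hi"}
print(prune_blank(event))  # {'message': 'hi'}
```

Note that this treats "" and empty containers as blank but keeps 0 and false; a different policy would need a different predicate, which is the point about ambiguity.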
Near as I can tell, no. I would suggest you create a Jira issue to get that addressed.
I am programming in Objective-C. I am using Apache Avro for data serialization.
My avro schema is this:
{
  "name": "School",
  "type": "record",
  "fields": [
    {
      "name": "Employees",
      "type": ["null", {
        "type": "array",
        "items": {
          "name": "Teacher",
          "type": "record",
          "fields": [
            {"name": "name", "type": "string"},
            {"name": "age", "type": "int"}
          ]
        }
      }],
      "default": null
    }
  ]
}
In my Objective-C code, I have an array of Teacher objects; each teacher object contains values for name and age.
I want to write the teacher array data to a file using Avro with the schema shown above. I am mainly concerned about how to write data to the Employees array defined in the schema.
Here is my code (I have to use C-style code to do it, following the Avro C documentation):
// Not shown: this function constructs an `avro_value_t` based on the schema. No problem here.
avro_value_t school = [self constructSchoolValueForSchema];
// get "Employees" field
avro_value_t employees;
avro_value_get_by_name(&school, "Employees", &employees, NULL);
int idx = 0;
for (Teacher *teacher in teacherArray) {
// get name and age
NSString *name = teacher.name;
int age = teacher.age;
// set value to avro data type.
// here 'unionField' is the field of 'Employees', it is a Avro union type which is either null or an array as defined in schema above
avro_value_t field, unionField;
avro_value_set_branch(&employees, 1, &unionField);
// based on documentation, I should use 'avro_value_append'
avro_value_append(&employees, name, idx);
// I get confused here!!!!
// in the above line of code, I append 'name' to 'employees',
// which looks incorrect,
// because the 'Employees' array is an array of 'Teacher', not an array of 'name'.
// What is the correct way to add a teacher to 'employees'?
idx++;
}
The question I want to ask is actually in the code comments above.
I am following the Avro C documentation, but I am lost as to how to add each teacher to employees. In my code above, I only added each teacher's name to the employees array.
I think there are two things wrong with your code, but I am not familiar with Avro, so I can't guarantee one of them. I just quickly peeked at the documentation you linked, and here's how I understood avro_value_append:
It creates a new element, i.e. a Teacher, and returns it via the reference in the second parameter (so it returns by reference). My guess is that you then need to use the other avro_value_* methods to fill that element (i.e. set the teacher's name and so forth). In the end, do this:
avro_value_t teacher, field;
size_t idx;
avro_value_append(&employees, &teacher, &idx); // idx is also returned by reference: the new element's index
// then fill the new element, e.g. its "name" field (sketch, untested):
avro_value_get_by_name(&teacher, "name", &field, NULL);
avro_value_set_string(&field, [name UTF8String]); // see the string caveat below
I'm not sure if you set up employees correctly, btw, I didn't have the time to look into that.
The second problem will arise with your usage of name at some point. I assume Avro expects C strings, but you're using an NSString here. You'll have to use the getCString:maxLength:encoding: method on it to fill a prepared buffer, creating a C string that you can pass around within Avro. You can probably also use UTF8String, but read up on its documentation: you'll likely have to copy the memory (memcpy shenanigans), otherwise your Avro container will get its data swiped away from under its feet.
I am trying to import an MS Excel 2007 sheet using the excel-import plugin. It was simple to integrate with my project, and I found it working as expected until I noticed that the number values in the cells are populated as real numbers with an exponent.
For example, if a cell contains the value 9062831150099, it is populated as 9.062831150099E12; i.e., a column like
Registration Number
9062831150099
is populated as: [RegNum:9.062831150099E12]
Can anyone suggest how I can change this representation back to its original format while keeping its type as a number?
Missed it at the first attempt, but I figured out how to achieve it:
When invoking the key methods (the ones that process cellMap and columnMaps), for example List bookParamsList = excelImportService.columns(workbook, CONFIG_BOOK_COLUMN_MAP) or Map bookParams = excelImportService.cells(workbook, CONFIG_BOOK_CELL_MAP), there is also the ability to pass in configuration information, i.e. to specify what to do if a cell is empty (provide a default value), to make sure a cell is of the expected type, etc.
So in my case I created a configuration parameter map of properties of the object and passed it to the above methods, i.e.:
class UploadExcelDataController {
    def excelImportService

    static Map CONFIG_BOOK_COLUMN_MAP = [
        sheet: 'Sheet1',
        startRow: 2,
        columnMap: [
            'A': 'registrationNumber',
            'B': 'title',
            'C': 'author',
            'D': 'numSold',
        ]
    ]

    static Map configBookPropertyMap = [
        registrationNumber: [expectedType: ExpectedPropertyType.IntType, defaultValue: 0]
    ]

    def uploadFile() {
        ...
        List bookParamsList = excelImportService.columns(workbook, CONFIG_BOOK_COLUMN_MAP, configBookPropertyMap)
        ...
    }
}
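As an aside, the E notation itself is harmless: spreadsheet readers typically hand numeric cells back as doubles, and the exponent form is just how a double prints. Since integers up to 2**53 are exactly representable in a double, casting recovers the original value. A quick Python sketch of the same effect (illustrative only, not the plugin's code):

```python
# A numeric cell comes back as a double; large integers print in E notation.
cell_value = float("9.062831150099E12")   # what a reader typically hands back
print(cell_value)   # 9062831150099.0

# 9062831150099 < 2**53, so the double is exact and casting restores it.
restored = int(cell_value)
print(restored)     # 9062831150099
```

This is why the expectedType configuration above fixes the display: it converts the double back to an integer before it reaches your domain object.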
I have this data structure in dust:
{ "items": [
    [ "apples", "$1.00", "Delicious" ],
    [ "oranges", "$2.00", "Juicy" ]
] }
I'm trying to access the inner items, and can't figure out how.
I can address the entire array of the current loop via {.}, but I can't seem to access the items within it. (I could do this in Mustache.)
I was expecting something like this to work...
{#items}
  <b>{.[0]}</b> <em>only {.[1]}!</em>
  <p>{.[2]}</p>
{/items}
When I ran your example in the dust playground (http://linkedin.github.io/dustjs/test/test.html), the output I got was:
<b>apples</b> <em>only $1.00!</em><p>Delicious</p><b>oranges</b> <em>only $2.00!</em><p>Juicy</p>
This looks to be what you were expecting. I believe fixes in this area after your post likely made this work; grab the latest version of dust.