There are two paths in an OAS 3.0 specification. Technically both seem to be identical; I need to confirm whether they really are. If so, why do none of the tools out there validate this kind of path duplication?
/foo/{whatever}
and
/{whatever}/foo
OpenAPI Specification only has this to say:
https://spec.openapis.org/oas/v3.0.1.html#path-templating-matching
The following may lead to ambiguous resolution:
/{entity}/me
/books/{id}
That is, these paths are not considered identical, but can result in ambiguous path matching in tooling, especially if both paths support the same HTTP methods. However, if the ambiguous paths support different HTTP methods (e.g. one is GET-only, whereas the other one is POST-only), this would eliminate the ambiguity.
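To see why tooling treats this as ambiguous, here is a minimal Python sketch of template matching. The regex translation is an illustration only, not how any particular tool implements it:

```python
import re

def to_regex(template):
    # Replace each {param} segment with a "match anything but /" pattern.
    pattern = re.sub(r"\{[^/}]+\}", r"[^/]+", template)
    return re.compile("^" + pattern + "$")

templates = ["/{entity}/me", "/books/{id}"]
request_path = "/books/me"

# Both templates match the same concrete path, so resolution is ambiguous.
matches = [t for t in templates if to_regex(t).match(request_path)]
print(matches)  # ['/{entity}/me', '/books/{id}']
```

If the two templates supported disjoint HTTP methods, a router could still dispatch unambiguously, which is the point made above.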
What is the recommended naming scheme for Avro types, so that schema evolution works with backward and forward compatibility and schema imports? How do you name your types? How many Schema.Parser instances do you use: one per schema, one global, or some other scheme?
The namespace / type names don't need a special naming scheme to address compatibility.
If you need to rename something, that's what aliases are for.
From what I've seen, using a parser more than once per schema causes issues with state maintained by the parser.
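For the rename case, a minimal sketch of a reader schema using aliases (the record names here are hypothetical): a record formerly named `User` is renamed to `UserRenamed`, and the alias lets schema resolution still match data written with the old name.

```json
{
  "type": "record",
  "name": "UserRenamed",
  "namespace": "com.example",
  "aliases": ["User"],
  "fields": [
    {"name": "id", "type": "long"}
  ]
}
```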
So technically you have two options, each with its own benefits and drawbacks:
A) include a version identifier in the namespace or type name
B) do NOT include a version identifier in the namespace or type name
Explanation: if you want to use schema evolution, you do not need to include a version number, since both the Confluent Schema Registry and simple object encoding use namespaces plus some sort of hash/modified CRC as the schema fingerprint. When deserializing bytes, you have to know the writer schema, and you can then evolve it into the reader schema. These two need not have the same name, as schema resolution does not use the namespace or type name (https://avro.apache.org/docs/current/spec.html#Schema+Resolution). On the other hand, a Schema.Parser cannot parse more than one schema with the same Name, i.e. the fully qualified type of the schema, namespace.name. So it depends on your use case; either option can work.
ad A) If you include a version identifier, you will be able to parse all versions using the same Schema.Parser, which means, for example, that these schemas can be processed together by avro-maven-plugin (sorry, I don't remember whether I tested this in a single configuration only or with multiple configurations as well; you have to check it yourself). Another benefit is that you can reference the same type in different versions if needed. The drawback is that after each version upgrade the namespace and/or type name changes, and you would have to change the imports in your project. Schema resolution between writer and reader schema should still work.
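A sketch of variant A, with the version folded into the namespace (all names hypothetical). Because `com.example.v1.User` and `com.example.v2.User` are distinct full names, one Schema.Parser can hold both at once:

```json
{
  "type": "record",
  "name": "User",
  "namespace": "com.example.v2",
  "fields": [
    {"name": "id", "type": "long"},
    {"name": "email", "type": ["null", "string"], "default": null}
  ]
}
```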
ad B) If you do not include a version identifier, only one version can be compiled by avro-maven-plugin into Java files, and you cannot have one global Schema.Parser instance in the project. Why would you want just one global instance? It would be helpful if you don't follow the bad (and frequent) advice to use a top-level union to define multiple types in one avsc file. Maybe that's needed for the Confluent registry, but if you don't use it, you definitely don't need a top-level union. You can use schema imports instead, where a Schema.Parser has to process all imports first and then finally the actual type. If you use these imports, you need one Schema.Parser instance for each group of a type plus its imports. It's a bit of declaration hassle, but it relieves you from having a top-level union, which has issues of its own and is incorrect in principle. If your project doesn't need multiple versions of the same schema accessible at the same time, this is probably better than variant A, as you don't have to change imports. Imports also open the possibility of schema composition: since all versions have the same namespace, you can pass an arbitrary version to Schema.Parser. So if there is some a --> b association between types, you can take a v2 b and use it with a v3 a. Not sure whether that is a typical use case, but it's possible.
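As a sketch of the import style in variant B (file and type names hypothetical): suppose address.avsc defines `com.example.Address`, and person.avsc references it by fully qualified name. A single Schema.Parser instance must then parse address.avsc before person.avsc, one parser per such group:

```json
{
  "type": "record",
  "name": "Person",
  "namespace": "com.example",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "address", "type": "com.example.Address"}
  ]
}
```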
A few days ago I started a REST API in Java EE 7. I implemented a single class with three methods, and successfully integrated Swagger and Swagger-UI into the project, which showed the three endpoints I implemented in the generated JSON.
However, I migrated to JavaEE 8, and after this change Swagger detects several unknown endpoints, like the "default" ones (this capture shows only part of all of them):
Investigating a bit, I discovered that these endpoints may belong to a JPA REST API in the EclipseLink implementation, as described here https://oracle-base.com/articles/misc/oracle-rest-data-services-ords-open-api-swagger-support and here https://www.eclipse.org/eclipselink/documentation/2.4/solutions/restful_jpa004.htm#CHDFCFFA
Although they appear in the generated JSON, all of them contain variable paths, so I can't access them by following the path given by Swagger, even when inventing some parameters like "version" based on the examples from the pages above.
The Swagger version I use is v3, aka OpenAPI. I specify OpenAPI properties with @OpenAPIDefinition in the endpoint class, which also contains a @Tag annotation to group them, and the three methods carry @Operation annotations with their own @ApiResponse. There are no other Swagger/OpenAPI annotations/files/classes written by me.
The question is: how can I make Swagger ignore these endpoints?
Finally I found a solution. The Swagger scanner engine scans the whole project, regardless of whether a class and its methods have @Operation or not. If my hypothesis is true, some EclipseLink classes could have Swagger annotations (I'm not sure), so when Swagger scans, it finds them and adds them to the JSON/YAML.
The solution is creating (or adding to) the existing openapi.yaml (it can have several names and can live in several locations, as enumerated here: https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Integration-and-configuration#known-locations) the following:
```yaml
resourceClasses:
- com.path.to.your.package.Resource
prettyPrint: true
cacheTTL: 0
scannerClass: io.swagger.v3.jaxrs2.integration.JaxrsAnnotationScanner
readAllResources: false
```
Instead of resourceClasses you can write resourcePackages, in which case you specify the whole package (and the class) in the same dotted style used for packages. To be honest, this property does not affect my problem.
The solution comes from setting readAllResources to false. The reason is here, in a note: https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Annotations#operation
> Note: the swagger-jaxrs2 reader engine by default also includes methods of scanned resources which are not annotated with @Operation, as long as a JAX-RS @Path is defined at class and/or method level, together with the HTTP method annotation (@GET, @POST, etc.).
I hope this solution helps anyone who has to face the same problem.
I am sure this is documented somewhere, but I've been unable to find the answer anywhere.
If I have:
```
bazel_rule(
    name = "foo",
    srcs = ["foo.cpp"],
    attr_bar = "bar",
)
```
If I have a reference to this rule (//src:foo) in a Starlark (.bzl) file, how can I query the target to get the value of a specific attribute? E.g. get_attribute("//src:foo", "attr_bar") should return "bar" in this example.
It depends on whether you're trying to read the attribute from a macro, a rule, or an aspect.
Short answers:
A macro can't read attributes of a target (roughly, macros are evaluated at build file loading time, and attributes are evaluated later at analysis time). You can do things like taking in the attributes you care about and creating the rule (bazel_rule in your example) within the macro, so that the macro has the attribute value, but this usually quickly becomes messy and hard to follow.
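A sketch of that macro workaround, assuming `bazel_rule` is loaded in the same .bzl file (the wrapper and the extra genrule are illustrative):

```
# my_macros.bzl -- hypothetical macro wrapping bazel_rule.
# The macro can't read attr_bar off an existing target, but it does
# know the value because it is the one passing it down.
def bazel_rule_with_bar(name, srcs, attr_bar):
    bazel_rule(
        name = name,
        srcs = srcs,
        attr_bar = attr_bar,
    )

    # The macro is free to reuse attr_bar, e.g. to stamp it into
    # another target it creates alongside.
    native.genrule(
        name = name + "_bar",
        outs = [name + "_bar.txt"],
        cmd = "echo '%s' > $@" % attr_bar,
    )
```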
A Starlark rule also can't directly read attribute values from dependencies (it can read its own attributes though, of course). The rule you're interested in (bazel_rule here) has to put the information in a provider and the Starlark rule reads the provider from its dependencies.
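A sketch of the provider route, assuming you control (or can reimplement) the rule; all names here are illustrative:

```
# providers.bzl -- illustrative only.
BarInfo = provider(fields = ["attr_bar"])

def _bazel_rule_impl(ctx):
    # Export the attribute so dependants can read it.
    return [BarInfo(attr_bar = ctx.attr.attr_bar)]

bazel_rule = rule(
    implementation = _bazel_rule_impl,
    attrs = {
        "srcs": attr.label_list(allow_files = True),
        "attr_bar": attr.string(),
    },
)

def _consumer_impl(ctx):
    # ctx.attr.dep is a Target; index it with the provider to read the value.
    print("attr_bar =", ctx.attr.dep[BarInfo].attr_bar)
    return []

consumer = rule(
    implementation = _consumer_impl,
    attrs = {"dep": attr.label(providers = [BarInfo])},
)
```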
An aspect can read the attributes of the rule it's being evaluated on directly through ctx.rule.attr.<attr_name> (the example here does this).
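A minimal aspect sketch along those lines (the file and aspect names are hypothetical):

```
# aspects.bzl -- read attr_bar off whatever target the aspect visits.
def _attr_reader_impl(target, ctx):
    # ctx.rule.attr mirrors the attributes of the rule being visited.
    if hasattr(ctx.rule.attr, "attr_bar"):
        print("%s: attr_bar = %s" % (target.label, ctx.rule.attr.attr_bar))
    return []

attr_reader = aspect(implementation = _attr_reader_impl)
```

It can then be applied from the command line, e.g. `bazel build //src:foo --aspects=//:aspects.bzl%attr_reader`.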
It only recognizes top-level non-function value declarations (e.g. let a = 2) and doesn't produce documentation for functions or type definitions.
I've checked the xml documentation file and it has all the /// comments I put in the source, but none of them (except for the top level values) show up in the resulting html.
Yes, using C#-centric tools for F# documentation generation is usually pretty horrible. We started an alternative project a while ago, but it is not yet as mature as SandCastle: http://bitbucket.org/IntelliFactory/if-doc
How can I add filters to skip some of the classes in a namespace/assembly? For example, SYM.UI is the base assembly and I want to skip SYM.UI.ViewModels. I am using the filter below, but it includes all of them and doesn't do what I want:
+[SYM.UI*]* -[SYM.UI.ViewModels*]*
Kindly help me correct this.
The opencover wiki is a good place to start.
The usage is described as +/-[modulefilter]typefilter (this is based on how you would see the types in IL), where the type filter also includes the namespace, and the module filter is usually the name of the assembly (without the file extension).
Thus to exclude your types you could use
+[SYM.UI]* -[SYM.UI]SYM.UI.ViewModels.*
NOTE: Exclusion filters take preference over inclusion filters.
You can use the following:
"-filter:+[*]* -[SYM.UI]SYM.UI.ViewModels.*"
Note that the quotes must surround the -filter: part, too.
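Putting it together on the command line (the test runner and test assembly names are hypothetical; substitute your own target):

```
OpenCover.Console.exe -register:user -target:"nunit3-console.exe" -targetargs:"SYM.UI.Tests.dll" "-filter:+[SYM.UI]* -[SYM.UI]SYM.UI.ViewModels.*" -output:coverage.xml
```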