How to enable Swagger UI in Quarkus with Reactive Routes?

I'm using Quarkus to build a project and I've decided to use Reactive Routes.
I'd like to add OpenAPI information and a Swagger UI to my project. It seems like that is possible using RESTEasy, but I couldn't find information about doing the same with Reactive Routes.
Is that possible? I tried to enable it but I couldn't.

It isn't really possible.
The RESTeasy approach relies on a combination of reflection and annotations to determine what the REST interface is.
When using Vert.x Web routes you have more flexibility, which means that that level of information isn't available in a standard way.
There are a couple of alternatives:
Write the OpenAPI definition up front and use that to generate the routes (https://how-to.vertx.io/web-and-openapi-howto/)
Find (or write) a generator that constrains your Vert.x code so that it can determine the contract dynamically (https://jitpack.io/p/ckaratzas/vertx-openapi-spec-generator is an example, it may not be the only one).
Personally I use RESTeasy if I need to generate OpenAPI docs.
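For illustration, here is a minimal sketch of the first alternative (contract-first with Vert.x Web's OpenAPI router builder, following the plain Vert.x how-to linked above rather than a Quarkus-specific setup). The contract file name petstore.yaml and the operationId listPets are assumptions, not anything from the question:

```java
import io.vertx.core.Vertx;
import io.vertx.core.json.JsonArray;
import io.vertx.ext.web.Router;
import io.vertx.ext.web.openapi.RouterBuilder;

public class ContractFirstServer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        // Build the router from the OpenAPI contract instead of declaring routes by hand,
        // so the document you expose to Swagger UI is the single source of truth.
        RouterBuilder.create(vertx, "petstore.yaml")
            .onSuccess(builder -> {
                // Attach a handler to an operation declared in the contract (operationId: listPets).
                builder.operation("listPets")
                       .handler(ctx -> ctx.json(new JsonArray()));
                Router router = builder.createRouter();
                vertx.createHttpServer().requestHandler(router).listen(8080);
            })
            .onFailure(Throwable::printStackTrace);
    }
}
```

Because the contract already exists as a file in this approach, serving it (and pointing a Swagger UI at it) becomes a matter of exposing a static document rather than deriving one from the routes.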

Related

Generated nodejs server stub is missing routing

I wanted to try out the code-first approach with an OpenAPI spec. For testing purposes I tried the Pet Store example from SwaggerHub.
In the generated code I noticed that there is no logic involving routing.
I also noticed that the code in the service folder is not even used when I run the Node.js server (changing values of the example data changes nothing in the output). The API seems to be served by a SwaggerHub server.
Am I misunderstanding what Swagger Codegen does?
On the other hand, the generated client code looks the way I would expect it to: instead of creating REST requests in my client, I only have to call methods on the services.
Based on your question, I would argue you're actually doing a "design-first" approach.
Code-first is when you have an existing codebase/service and you then create your documentation afterwards (whether it is generated or hand-written).
A design-first approach is when you create your documentation first, and THEN build out your code. If you have an OpenAPI document, and you're using Swagger Codegen to create some code, then you're doing design-first.
As for your question involving routing, all Swagger Codegen will do for you is generate some boilerplate code based on your OpenAPI document. It will not add any business logic, or even route the API calls for you. It is then on you to implement all this logic after the fact.

.NET Swagger (Swashbuckle): How to handle requests with a very large number of parameters

We are using OpenAPI to document our APIs. A few of our calls have a very complex structure with nested classes of nested classes with possibly circular references. This is acceptable and required for the actual API. However, the documentation for these endpoints is almost unusable.
We are using Swashbuckle's .NET integration to dynamically generate the documentation at startup, and we scrape it afterwards if there's a need for static documentation.
I have read about using $ref as described in the Swagger/OpenAPI spec, but I'm not sure this is the use case for it.

Swagger best practices

I am currently defining a REST API and I intend to use Swagger to document it.
I've already started to define my API specification with the OpenAPI Specification in YAML in the Swagger Editor.
Then, I understand that I will provide that file to Swagger Codegen to generate a server implementation, and also to Swagger UI (whose static files will previously have been copied to my server) to expose the interactive documentation.
According to Swagger, this is the top-down approach.
But later, I will probably need to modify this API, and I want to do it via the YAML file I previously defined, to keep the API easily modifiable by anyone (and language-agnostic).
Is the way to do this to modify the definition file and then re-run Swagger Codegen? With this approach, I guess I can't even lightly modify the API directly in the server implementation code without risking out-of-date documentation.
And if I choose the bottom-up approach (via Swagger Core annotations), I will restrict all my further modifications to the server implementation code, and my initial definition file will never be usable again.
So another question would be: is there a common way to work with Swagger when we want to modify the API both via the specification file and via the server implementation code? (I suppose the file that Swagger Core can generate from my code will never look like the one I initially defined by hand.)
To maintain the API documentation, the best course of action that I can suggest is to follow a hybrid approach.
Initially, when you have to do bulk development, go for the top-down approach. This will reduce the initial set-up and coding effort. That's the basic idea behind any codegen.
Then, when it comes to maintaining the APIs, or adding a few new ones every day (or week), follow the bottom-up approach. You will already have the previous code; the only thing you'll need to do is add some more annotations or API definitions.
Going for top-down iteratively defeats the purpose of code maintenance. Boilerplate and self-generated code are there to give you a quick start, not for sustenance.
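As a concrete illustration of the bottom-up step ("add some more annotations"), a minimal sketch using Swagger Core (OpenAPI v3) annotations on a JAX-RS resource might look like this; the resource, path and payload type are invented for the example:

```java
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/items")
public class ItemResource {

    // Simple payload type used only by this sketch.
    public static class Item {
        public long id;
        public String name;
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    @Operation(summary = "Fetch a single item by id")
    @ApiResponse(responseCode = "200", description = "The item was found")
    @ApiResponse(responseCode = "404", description = "No item with that id")
    public Item getItem(@PathParam("id") long id) {
        Item item = new Item();
        item.id = id;
        item.name = "example"; // real lookup logic would go here
        return item;
    }
}
```

The generated spec will then be produced from these annotations at build or run time, which is exactly why further edits have to happen in the code rather than in the original hand-written YAML.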
My opinion may be biased.
For the API client, there should not be a need to customize it in most cases. If you find that you need to customize it to meet your requirements, it may be worth starting a discussion via https://github.com/swagger-api/swagger-codegen/issues/new (and also please check what options are available to customize the output, e.g. for PHP, run java -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar config-help -l php)
For the server stub, ideally the developers only need to focus on the business/application logic and regenerate the server stub when adding/deleting/updating endpoints (but I don't think all the server stubs can achieve that yet)
Disclaimer: I'm the top contributor to Swagger Codegen
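On the regeneration point above: one common way to make regenerating the server stub safe is to keep the generated contract separate from the hand-written implementation, so that only the generated part is overwritten. A hypothetical sketch (not the output of any particular generator):

```java
// PetsApi.java -- generated from the spec and overwritten on every regeneration; do not edit by hand.
// (Hypothetical example of the kind of interface a server-stub generator emits.)
public interface PetsApi {
    java.util.List<String> listPets(int limit);
}

// PetsApiImpl.java -- hand-written; lives outside the generated source tree, so it survives regeneration.
class PetsApiImpl implements PetsApi {
    @Override
    public java.util.List<String> listPets(int limit) {
        // Business/application logic goes here; the generated layer only defines the contract.
        java.util.List<String> all = java.util.Arrays.asList("dog", "cat", "hamster");
        return all.subList(0, Math.max(0, Math.min(all.size(), limit)));
    }
}
```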

How to generate URLs to link back to objects?

I'm trying to build some RESTful services using Spray. I've figured out how to build the directives I need. But the issue I'm having is how to reliably generate URLs back to the "resources" I'm working with. Note I use the term "resources" here as it is used for RESTful APIs (i.e. the server-side objects one refers to through the API).
I've looked through the documentation and I haven't found any reference for this except mention of "Resources" in the Java sense (i.e. data files in the classpath).
For sure I can build a directive that maps "/items/127" to a resource on the server side. But what I don't see how to do (at least in a safe and automatic way) in Spray is how to generate such a URL given the server-side resource. I'm looking for something similar to url_for from the Flask framework.
For now, I'm writing functions to do this. But, of course, they are fragile because they aren't DRY (i.e. they don't use any knowledge of Spray routing in generating the URLs).
Am I missing something?
What you're asking for is known as reverse routing. As @iwein said, there's no direct support for reverse routing in Spray. You can confirm this from Matthias in this thread. There is an open ticket for this issue.
However, there is an approach based on the PathMatcher that Marcel Mojzis open-sourced, which you can find here.
I have a need for this as well, but I'm going to get by with a "known pattern" approach until Spray (or akka-http) comes up with its own solution to this issue. Essentially, I have an object that knows how to generate the URL for certain patterns of things. Each pattern is a function and clients of the object have to ask for the url by one of the function names. Not ideal, but very simple and effective until akka-http provides a more generic solution.
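The "known pattern" object described above is essentially a small hand-rolled reverse router: one function per URL shape, so the patterns at least live in one place. A language-agnostic sketch (shown here in plain Java; the names are made up and this is not a Spray or akka-http API):

```java
// ReverseRoutes.java -- hypothetical helper; each method encodes one URL pattern.
public final class ReverseRoutes {
    private final String baseUri;

    public ReverseRoutes(String baseUri) {
        this.baseUri = baseUri;
    }

    // Mirrors a server-side route like path("items" / LongNumber).
    public String itemUrl(long itemId) {
        return baseUri + "/items/" + itemId;
    }

    public String itemCommentUrl(long itemId, long commentId) {
        return baseUri + "/items/" + itemId + "/comments/" + commentId;
    }
}
```

The drawback already noted applies: nothing ties these methods to the actual route definitions, so if a route changes the corresponding method has to be updated by hand.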
I don't think that Spray has an equivalent of url_for. I don't think it would make sense in the context of Spray, because in Spray you're not annotating functions with URLs that map to them; you're creating routes that deserialize requests and eventually map them to functions.
As such there is no easy way to generate an example url from the name of a function.

How does plugin-to-plugin dependency work?

I have a very basic question about the practical operation of software plugin systems. I understand how a simple plugin design works, i.e. one where a plugin adds to a hosting application. E.g. a plugin adds a new filter to a paint program. The host knows it has to call a method called filter, which the plugin provides. In this case all plugins are independent.
My question relates to the case where one plugin can use the facilities of another plugin. For example, there may be a plugin that provides the ability to plot data while another plugin generates data. If the data-generator plugin has never seen the graphing plugin before, I assume there is no way for it to know what methods to call in the graphing plugin. I presume that in these cases, the developer of the data-generator plugin must have access to a description of the graphing plugin's API, either in the form of an abstract class or an interface. Is this how plugin dependency operates, i.e. plugins know explicitly about the APIs that other plugins might have?
I've just built such a plugin system, and for plugins to be able to use other plugins I am including in the source code copies of the plugin interfaces each plugin needs to know about. The problem with this approach is that if a new plotting plugin comes along with a different API, there is no way for the data-generator plugin to use it without first being recompiled so that it is aware of the new API. This doesn't seem right to me.
I know this may seem to be a very simple question with an obvious answer, but I've spent hours searching the internet and I've not come across an explicit statement concerning this question.
If your "new plotting plugin" has a different API from the one your code knows about, there is no alternative but to make your code aware of this API.
If you are in control of all this, including the various plotting plugins, then you could (and should) specify a standard Plotting API that all plotting plugins need to implement/support. That is about the only way that you can have different providers (plugins) for some task.
A standard "language" is the way to ensure that you can use multiple implementors of an interface (providers of a service). It is also the way that you can have multiple users of the same interface (consumers of a service).
The need/wish for multiple providers of a task and for multiple consumers of a provider is probably what led to the creation of standards such as OAuth, and of protocols such as HTTP, SMTP and the like.
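A minimal sketch of such a standard Plotting API, with hypothetical names: the shared interface is the only thing both sides compile against, so any plugin that implements it can be used by any data generator without recompiling the generator.

```java
// PlottingPlugin.java -- the shared, versioned contract every plotting plugin implements
// and every data-generator plugin compiles against (hypothetical names).
public interface PlottingPlugin {
    String name();
    void plot(double[] xs, double[] ys, String title);
}

// RandomDataGenerator.java -- a data-generator plugin; it depends only on the interface,
// never on a concrete plotting plugin.
public final class RandomDataGenerator {
    public void render(PlottingPlugin plotter) {
        double[] xs = {0, 1, 2, 3};
        double[] ys = {0.0, 0.8, 0.9, 0.1};
        plotter.plot(xs, ys, "Random sample");
    }
}
```

A new plotting plugin with a genuinely different API would amount to a new version of this contract, which is why the recompilation you describe is unavoidable without such a shared standard.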
