A few days ago I started a REST API in Java EE 7. I implemented a single class with three methods and successfully integrated Swagger and Swagger-UI into the project, which showed the three endpoints I implemented in the generated JSON.
However, after migrating to Java EE 8, Swagger detects several unknown endpoints, like the "default" ones (this screenshot shows only some of them):
Investigating a bit, I discovered that these endpoints may belong to the JPA REST API in the EclipseLink implementation, as described here https://oracle-base.com/articles/misc/oracle-rest-data-services-ords-open-api-swagger-support and here https://www.eclipse.org/eclipselink/documentation/2.4/solutions/restful_jpa004.htm#CHDFCFFA
Although they appear in the generated JSON, all of them contain variable paths, so I can't access them by following the path given by Swagger, even when inventing some parameters like "version" based on the examples from the sites above.
The Swagger version I use is v3, a.k.a. the OpenAPI version. I specify the OpenAPI properties with @OpenAPIDefinition on the endpoint class, which also carries a @Tag annotation to group the endpoints, and the three methods are annotated with @Operation and their own @ApiResponse. There are no other Swagger/OpenAPI annotations, files, or classes written by me.
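For reference, the resource looks roughly like this (class, path and operation names are made up for illustration; only one of the three methods is shown):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

import io.swagger.v3.oas.annotations.OpenAPIDefinition;
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.info.Info;
import io.swagger.v3.oas.annotations.responses.ApiResponse;
import io.swagger.v3.oas.annotations.tags.Tag;

@OpenAPIDefinition(info = @Info(title = "My API", version = "1.0"))
@Tag(name = "items", description = "Operations on items")
@Path("/items")
public class ItemResource {

    // One of the three documented endpoints.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    @Operation(summary = "List all items")
    @ApiResponse(responseCode = "200", description = "Items returned")
    public Response listItems() {
        return Response.ok().build();
    }

    // ...two more @Operation-annotated methods...
}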
The question is: how can I make Swagger ignore these endpoints?
Thanks
I have finally found a solution. The thing is that the Swagger scanner engine scans the whole project, regardless of whether a class and its methods carry @Operation or not. If my hypothesis is true, some EclipseLink classes could have Swagger annotations (I'm not sure), so when Swagger scans, it finds them and adds them to the JSON/YAML.
The solution is to create (or add to the existing) openapi.yaml (it can have several names and live in several locations, as enumerated here: https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Integration-and-configuration#known-locations) the following:
resourceClasses:
- com.path.to.your.package.Resource
prettyPrint: true
cacheTTL: 0
scannerClass: io.swagger.v3.jaxrs2.integration.JaxrsAnnotationScanner
readAllResources: false
Instead of resourceClasses you can use resourcePackages, in which case you specify just the package (in the same dotted style) rather than the fully qualified class. To be honest, this property did not affect my problem.
The solution comes from setting readAllResources to false. The reason is explained in a note here: https://github.com/swagger-api/swagger-core/wiki/Swagger-2.X---Annotations#operation
Note: swagger-jaxrs2 reader engine includes by default also methods of scanned resources which are not annotated with @Operation, as long as a JAX-RS @Path is defined at class and/or method level, together with the HTTP method annotation (@GET, @POST, etc).
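In other words, with readAllResources set to false only the methods you explicitly annotate with @Operation end up in the generated document. A hypothetical resource illustrating the difference (names are made up):

import javax.ws.rs.GET;
import javax.ws.rs.Path;

import io.swagger.v3.oas.annotations.Operation;

@Path("/example")
public class ExampleResource {

    // Always documented: it carries @Operation.
    @GET
    @Path("/documented")
    @Operation(summary = "Included in the generated JSON/YAML")
    public String documented() {
        return "ok";
    }

    // No @Operation: included by default, skipped once readAllResources is false.
    @GET
    @Path("/undocumented")
    public String undocumented() {
        return "hidden";
    }
}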
I hope this solution helps anyone who has to face the same problem.
I'm using a Spring Boot 2.2.6 web application with Maven 3. I'm also using spring-integration-http for my endpoints, which means my endpoints look similar to the following:
@Bean
public IntegrationFlow test(CommonTransformer<TestDTO, String, TestMapper> testTransformer, Jackson2JsonObjectMapper obj) {
    return IntegrationFlows.from(Http.inboundGateway("/foo/{name}")
                .requestMapping(m -> m.methods(HttpMethod.GET))
                .payloadExpression("#pathVariables.name")
                .replyChannel(Constants.REPLY)
                .requestPayloadType(String.class))
            .transform(testTransformer)
            .transform(new ObjectToJsonTransformer(obj))
            .channel(Constants.HTTP_REQUEST)
            .get();
}
Now I would like to create OpenAPI docs for my endpoint and, if possible, a Swagger GUI to test it.
I have read several official and unofficial docs; I found interesting documentation here and another, much more interesting example here.
My concern is that many of these articles date from before 2020 (for example, one of them uses deprecated annotations like @EnableSwagger2Mvc), but I haven't managed to find anything more up to date.
Is anyone aware of a more up-to-date procedure?
-------------------------- UPDATE --------------------------
First of all, thanks @ArtemBilan for your response.
Yes, I read that article, and I'm not new to documenting my REST APIs. With springdoc-openapi-ui I'm able to create a .json file that, if pasted into an editor like http://swagger.io or fed to a specific Maven plugin, can generate a client (in both Spring Java and Angular) ready to use.
I have tried the Springfox way (above) to document my spring-integration-http endpoints, but it sucks! It generates some useless files that just reproduce the call via cURL.
It's not what I'm looking for. I must (the STO asks for it) document my endpoints like the .yaml you can find for the Swagger Pet Store example.
And it seems there's no way to do so with spring-integration-http...
Any help is appreciated.
I have developed a REST API client (in Java) customised to the needs of my product. I want to generate tests that use my REST API client, with the swagger-codegen modules, based on a YAML file.
I have already extended DefaultCodegenConfig and even tried implementing the CodegenConfig interface to build my custom JAR. I have customised the api.mustache and api_test.mustache files and pass them in the constructor and the processOpts() method of my CustomCodeGen, which extends DefaultCodegenConfig.
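To make the setup concrete, here is a rough sketch of that wiring. This is only an illustration, not my actual class: for simplicity it extends the stock swagger-codegen 2.x JavaClientCodegen (so no abstract methods need to be implemented), and the generator name, template file names and property key are invented; the exact packages and fields differ slightly in the 3.x DefaultCodegenConfig-based generators.

import io.swagger.codegen.languages.JavaClientCodegen;

public class CustomCodeGen extends JavaClientCodegen {

    public CustomCodeGen() {
        super();
        // Register the customised templates and the suffix of the files they produce.
        // The .mustache files themselves are supplied at generation time with -t <template-dir>.
        apiTemplateFiles.put("api.mustache", ".java");
        apiTestTemplateFiles.put("api_test.mustache", ".java");
    }

    @Override
    public String getName() {
        // Value passed to `swagger-codegen generate -l custom-java ...`
        // (the fully qualified class name of this class should also work as the -l value).
        return "custom-java";
    }

    @Override
    public void processOpts() {
        super.processOpts();
        // Values added here are exposed to the templates as *global* variables,
        // not as the per-operation variables asked about below.
        additionalProperties.put("customGlobalValue", "example");
    }
}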
However, I want to use the custom/new mustache template variables that I have added in my customised api.mustache.
For example, if you refer to the standard api.mustache, the template variables it typically uses are
- {{classname}}
- {{#operation}}
- {{#contents}}
- {{#parameters}}
etc.
Now I want to introduce a new template variable, let's say {{custom_param}}, but I am not clear how to integrate this new template variable with the implementation.
From the Mustache-Template-Variables page published here, it looks like swagger-codegen does not allow adding new template variables, and perhaps we are restricted to only the variables mentioned on that page.
So, is there some way to make new template variables work?
Some time ago I added the uniqueItems parameter for bean validation as it was not getting processed by the engine even though it was a part of the implemented JSR.
So I believe the codebase needs to be updated to use your own variable, which is only possible if you fork the code.
In case it helps, these two were the PRs:
For query parameters: https://github.com/swagger-api/swagger-codegen/pull/10154.
For body parameters: https://github.com/swagger-api/swagger-codegen/pull/10490.
I am trying to generate a Python client from a Swagger YAML file. It works fine, except that the response models are all snake cased (words separated by underscores) instead of camel cased. I provided the camel-cased versions like this:
definitions:
  serviceResponse:
    type: object
    properties:
      serviceResponseInternal:
        type: object
The generated code has a ServiceResponse object which has an internal field service_response_internal. I would like it to respect the convention and just use serviceResponseInternal instead of underscore-separated names. How do I achieve this?
Assuming you're using Swagger Codegen, you can customize the toVarName in the Python code generator:
https://github.com/swagger-api/swagger-codegen/blob/master/modules/swagger-codegen/src/main/java/io/swagger/codegen/languages/PythonClientCodegen.java#L180
Can you elaborate on why you don't want to go with snake case for model properties (which should conform to Python style guide)?
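If you do want camelCase anyway, a minimal sketch of such a customization might look like this (the subclass and generator names are invented; the override simply skips the snake_case conversion performed by the stock Python generator):

import io.swagger.codegen.languages.PythonClientCodegen;

public class CamelCasePythonClientCodegen extends PythonClientCodegen {

    @Override
    public String getName() {
        // Name to pass to `swagger-codegen generate -l ...`
        return "camel-python";
    }

    @Override
    public String toVarName(String name) {
        // Keep the property name as written in the spec (e.g. serviceResponseInternal),
        // only removing characters that are not valid in a Python identifier.
        String sanitized = name.replaceAll("[^a-zA-Z0-9_]", "");
        // Escape names that collide with Python keywords or start with a digit.
        if (isReservedWord(sanitized) || sanitized.matches("^\\d.*")) {
            sanitized = escapeReservedWord(sanitized);
        }
        return sanitized;
    }
}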
UPDATE: In May 2018, about 50 top contributors and template creators of Swagger Codegen decided to fork Swagger Codegen and maintain a community-driven version called OpenAPI Generator. Please refer to the Q&A for more information.
I have a simple question regarding the architecture of my Amazon Simple Workflow / AWS Flow for Ruby app. For background, I have a simple workflow with one activity running in an AWS Flow for Ruby layer on OpsWorks. I have a separate REST API, running in a Rails App Server layer on OpsWorks, from which I would like to kick off the workflow.
The code in the REST API that kicks off the workflow:
1: domain = AWS::SimpleWorkflow.new.domains['my_domain']
2: workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{from_class: MyWorkflowClass}}
3: workflow_client.start_execution(input_1: @input1, input_2: @input2)
My assumption is that my workflow and REST API code bases could be separate and that the only common component would be the aws-flow Ruby gem and require 'aws/decider'. However, I'm finding that my REST API also needs to have require 'PATH_TO_MY_WORKFLOW_CLASS'. When I remove that line of code from the code file in my REST API that kicks off the workflow, I get the following error:
undefined method `_options' for nil:NilClass; ["/Users/MyName/.rvm/gems/ruby-2.0.0-p247/gems/aws-flow-2.2.1/lib/aws/decider/utilities.rb:183:in `interpret_block_for_options'", "/Users/MyName/.rvm/gems/ruby-2.0.0-p247/gems/aws-flow-2.2.1/lib/aws/decider/implementation.rb:73:in `workflow_client'"
(error at line 2 above)
Am I mistaken? Do I really need to require MyWorkflowClass in my workflow starter app (i.e. my REST API) or am I doing something wrong? I've scoured the documentation and could not find a clear answer to this. All the samples that I can find do indeed have the workflow class included in the workflow starter code, but I'm not sure if it's because they are bundled as a simple sample or if it's because it's the way it's supposed to be. The reason why I am not taking the samples at face value is because requiring the workflow class in the workflow starter code does not make any sense to me. It binds the two apps way too tightly for my taste.
I posted an issue on the aws-flow-ruby SDK and got an answer from an Amazon engineer. In short, you can either use the :from_class option, or use the :prefix_name and :execution_method options together.
There are two ways of starting the workflow in code:
1) Using the AWS SDK directly.
In this case, your code doesn't need to know anything about the workflow class. You just need the domain, workflow type (name and version) and the workflow id.
It will look something like -
require 'aws-sdk-v1'
swf = AWS::SimpleWorkflow.new.client
swf.start_workflow_execution(
  domain: "HelloWorld",
  workflow_type: {
    name: "HelloWorldWorkflow",
    version: "1.0"
  },
  workflow_id: "foo",
  input: ....,
  ....other options (optional)...
)
As you can see above, this doesn't require the workflow class at all.
2) Using the aws-flow gem (which is what you are doing above).
There are two ways of using the workflow client provided by the aws-flow gem to start an execution. You can either use the client as a generic client and not tie it to any workflow class or you can use the :from_class option to fetch options from a particular workflow class. To use the from_class option, you need to have the class in the ObjectSpace (hence you need to require the workflow file).
With from_class -
require 'aws/decider'
domain = AWS::SimpleWorkflow.new.domains['my_domain']
workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{from_class: "MyWorkflowClass"}}
workflow_client.start_execution(input_1: @input1, input_2: @input2)
Without from_class -
require 'aws/decider'
domain = AWS::SimpleWorkflow.new.domains['my_domain']
workflow_client = AWS::Flow::workflow_client(domain.client, domain) {{
  prefix_name: "YourClassName",
  execution_method: "workflow_method_name",
  version: "1.0",
  ...other options...
}}
workflow_client.start_execution(input_1: @input1, input_2: @input2)
The recommended way to start a workflow execution is to use aws-flow WorkflowClient instead of using the SDK directly.
Additional notes with respect to the input accepted by a workflow:
The SDK and the console will only take strings as input. This can be a free-form string, but if your workflow is written using Ruby Flow, this string should be a serialized form of your input so that the WorkflowWorker can deserialize it when it picks up the task and convert it into Ruby objects (in this case a hash).
When you use the Ruby Flow WorkflowClient, the client will automatically serialize your input hash (or any other input) into a string before sending it to SWF. By default, aws-flow uses a YAML-based data converter to do this (it can be overridden).
If you just want to see what your input hash will look like as a string, you can do the following -
AWS::Flow::FlowConstants.default_data_converter.dump(input_hash)
You can then use this serialized input to start a workflow using the SDK or the console.
I need to implement a web app, but instead of using a relational database I need to use different SOAP web services as the back end. An important part of the application only calls web services and displays the results. Since web services are clearly defined in the form of operations with input parameters and a return type, it seems to me that a basic GUI could easily be constructed, just like scaffolding based on domain entities.
For example, in the case of a SearchProducts web service operation I need to enter search parameters as input, so the search page can be constructed. The operation will return a list of products, so I need a page that will display this list in some kind of table.
Is there already some library in Grails that lets you achieve this? If not, how would you go about creating one?
Probably the easiest approach is to use wsimport on the WSDL files to generate the client-side stubs. Then you can call methods in the stubs from Groovy just as you would have called them from Java.
For example, consider the WSDL file for Microsoft's TerraServer, located at http://terraservice.net/TerraService.asmx?wsdl . Then you run something like
wsimport -d src -keep http://terraservice.net/TerraService.asmx?WSDL
which puts all the compiled stubs in the src directory. Then you can write Groovy code like
import com.terraserver_usa.terraserver.*;
TerraServiceSoap sei = new TerraService().getTerraServiceSoap()
Place home = new Place(city:'Boston',state:'MA',country:'US')
def pt = sei.convertPlaceToLonLatPt(home)
println "$pt.lat, $pt.lon"
assert Math.abs(pt.lat - 42.360000) < 0.001
assert Math.abs(pt.lon - -71.05000) < 0.001
If you want to access a lot of web services, generate the stubs for all of them. Or you can use dynamic proxies instead.
The bottom line, though, is to let Java do what it already does well, and use Groovy where it makes your life easier.
You should be able to use the XFire or CXF plugins. For automatic scaffolding, modify your Controller.groovy template in the scaffolding templates so it auto-generates the methods you need.