How does swagger-codegen automatically generate applications and dockerfiles?

I want to clarify the generation process of applications and dockerfiles. To understand it, I have drawn a graph representing the flow based on my understanding of the documentation and source code. I would be glad if someone could take a look and correct or confirm the schema. The flow tries to describe the generation of a NodeJS application from a Java application.

swagger-codegen doesn't generate the template files; it uses template files written by the codegen developer(s) for each target language.
I assume the schema you've drawn is the flow of the NodeJS codegen, because not every supported language generates a Dockerfile.
Personally, I would replace "Java microservice -> generates Model" with "OpenAPI specification -> Code Generator", as the generator works from the specification. It doesn't matter whether you generated the specification from a microservice or went with the API-first approach.
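For context, a minimal sketch of how the generator is typically invoked from a specification (the file names, output path and the nodejs-server language option are illustrative, not taken from the question):
java -jar swagger-codegen-cli.jar generate \
  -i openapi.yaml \
  -l nodejs-server \
  -o ./generated-nodejs-server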

Related

How do you reuse the same openapi.yaml file for production and development

We are using a GitOps model for deploying our software. Everything in the dev branch goes to the dev environment and everything in main gets deployed to production. All good and fine, except that we use Google Cloud Endpoints, which rely on the host parameter of the openapi.yaml. There is only room for a single value, so we have to remember to change it for each deployment, which keeps us from doing a fully automated deploy.
How do you manage the same openapi.yaml definition when using Google Cloud Endpoints?
There is one example in the official documentation; see if it helps your use case.
Basic structure of an OpenAPI document: notice how the "host" is parameterized as "YOUR-PROJECT-ID.appspot.com".
Deploying the Endpoints configuration, using the provided script "./deploy_api.sh"
Source code for deploy_api.sh
One common solution for managing per-environment properties is to create different build profiles and environment-specific specification files such as openapi_dev.yaml, openapi_qa.yaml and openapi_prod.yaml, then supply the one matching the profile (dev/qa/prod) being used. Refer here for more details.
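As a minimal sketch of that idea (the file naming and the ENVIRONMENT variable are assumptions), the deploy step would then pick the file that matches the active profile:
# pick the spec for the current environment (dev/qa/prod) and deploy it
ENV="${ENVIRONMENT:-dev}"
gcloud endpoints services deploy "openapi_${ENV}.yaml"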
Another way documented at GitOps-style continuous delivery with Cloud Build, where a multi branch, multi-repository approach is suggested.
Under the FAQ section of the Swagger OpenAPI guide it is clearly mentioned that you can specify multiple hosts, e.g. development, test and production, but only in OpenAPI 3.0. OpenAPI 2.0 supports only one host per API specification (or two, if you count HTTP and HTTPS as different hosts). A possible way to target multiple hosts is to omit host and schemes from your specification and serve a copy of it from each host; each copy of the specification then targets the corresponding host.
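For illustration only (the URLs are placeholders), if and when you can move to OpenAPI 3.0, multiple hosts are declared with a servers list:
servers:
  - url: https://dev-api.example.com/v1
    description: Development server
  - url: https://api.example.com/v1
    description: Production server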
As per the Google documentation, Cloud Endpoints currently supports OpenAPI version 2.0. A feature request has been filed for version 3.0 support, but there have been no releases yet. You can follow the updates here.

Fitnesse wiki file persistence options

What are the persistence options for FitNesse files? So far it seems like the file system is the only thing supported. There does appear to be an out-of-date database plugin. Is there anything else that is supported (S3, database, etc.)? Is there a way to control where files are persisted when using the file system?
I believe there is very little in that area. The location of the files can be controlled using a command-line option; see http://fitnesse.org/FitNesse.FullReferenceGuide.UserGuide.QuickReferenceGuide#FitNesseCommandLINE
-d /path/to/fitnesse/root
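For example, a typical standalone start that keeps the wiki pages under a chosen directory might look like this (the port and path are placeholders, not from the original answer):
java -jar fitnesse-standalone.jar -p 8080 -d /path/to/fitnesse/root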
How I've used the FitNesse wiki is as a local development tool, with the pages on the file system. Once I'm satisfied with the tests I commit them to version control (e.g. git) so that they become part of the (integration) test pipeline setup (e.g. they are run as part of the CI/CD pipeline of the project).
I believe there is a plugin that will automatically commit any save action to Git, but I've never used it. Saving every edit just pollutes version control, in my opinion. I only want to see tests after they have been checked/completed, and that tends not to be every save.
When working in a shared wiki environment (where I would expect a non-file-system approach to fit in), you run into the same problem, I expect. Developing automated tests is a development task that requires some iterations before it is 'done', and not all attempts reach that 'done' state. So using shared storage for wiki persistence creates 'noise' in the test set: which tests form the current reference set that should pass, and which are work in progress?
If you are working on a larger project where new features are developed together with their automated tests, it becomes even more important to know which test changes belong to which features/changes. Having the tests on the file system, in version control, allows you to develop tests in sync with code changes in the same branch. This is what I would recommend.

Nswag Generate OpenApi Spec

I have a number of controllers marked with different versions and with corresponding documentNames. This works fine; I can then generate my OAS spec using the following, specifying the appropriate documentName:
<Target Name="NSwag" AfterTargets="Build">
  <Exec Command="$(NSwagExe_Core31) aspnetcore2openapi /project:$(AssemblyName).csproj /nobuild:true /documentName:v1 /output:OAS_v1.json" />
</Target>
However, what would I do in a CI/CD system where I do not know which versions exist for a given assembly? Think of a feed of assemblies that I go through to generate OAS files and distribute them to an API management system.
So, is there a way to generate only the version marked as default? Or is there a way to find out which documentNames exist, so I can generate all of them?
Proposed Solution
I have not come up with an elegant idea, other than having an endpoint on the service that exposes the existing versions, and then using that to query the service or assembly and generate the OAS files. Does somebody have a better idea?
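If you do manage to enumerate the documentNames (for instance via such an endpoint), one possible sketch is to feed them into an MSBuild item and let task batching run the generator once per document. The ApiDoc item and the hard-coded version list below are assumptions for illustration:
<Target Name="NSwagAll" AfterTargets="Build">
  <ItemGroup>
    <!-- hypothetical: replace with the documentNames discovered for this assembly -->
    <ApiDoc Include="v1;v2" />
  </ItemGroup>
  <!-- %(ApiDoc.Identity) makes MSBuild run the Exec task once per ApiDoc item -->
  <Exec Command="$(NSwagExe_Core31) aspnetcore2openapi /project:$(AssemblyName).csproj /nobuild:true /documentName:%(ApiDoc.Identity) /output:OAS_%(ApiDoc.Identity).json" />
</Target>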

Jenkins pipeline from YAML file

The Jenkins declarative pipeline is too powerful for us; users can easily abuse it. We are thinking of using an opinionated YAML format to describe the CI/CD pipeline, and it seems there are two choices:
Write a plugin and consume YAML and dynamically create stage / steps.
Write a plugin to convert a YAML to Jenkins pipeline.
I am not an expert on Jenkins, so I hope someone with more experience can give some guidance and maybe an example.
Use the official pipeline-as-yaml plugin, but it has a fixed grammar.
Use, or customize, wolox-ci.
Create your own shared library. This is easy at the beginning, but a full grammar design is required once it is used widely. Here is pseudo code based on currying:
// create a file named yamlCompiler.groovy in the shared library's vars/ directory
def call(String str) {
    // parse the YAML text into a map (readYaml comes from the Pipeline Utility Steps plugin)
    def rawMap = readYaml(text: str)
    // consume the YAML and return a closure that runs the declared steps
    return {
        stage('generated') {
            rawMap.steps.each { step ->
                // invoke the pipeline step named by 'type', passing the remaining keys as arguments
                "${step.type}"(step.findAll { k, v -> k != 'type' })
            }
        }
    }
}
Use yamlCompiler in your Jenkinsfile:
@Library('your-libs-name') _
def str = '''
steps:
  - type: sh
    script: ls -la
  - type: echo
    message: xxx
'''
Closure closure = yamlCompiler(str)
closure.call()
I'm looking for a similar solution. We run hardened, predefined pipelines for every project, but still want to allow dev teams to customise certain steps within the process, without allowing them the full power of a Jenkinsfile.
I'm also exploring the possibility of an, in your words, "opinionated YAML".
I've so far only found one example of such an implementation: Wolox-CI supports their own pre-defined build steps via YAML. You'll be able to see the steps they support here.
I'm thinking of parsing the YAML using SnakeYAML. Here's an SO answer with an example of how to do it.
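A minimal sketch of that idea in plain Groovy (the library version, file name and YAML shape are assumptions; inside a pipeline the built-in readYaml step, as in the yamlCompiler example above, avoids the extra dependency):
@Grab('org.yaml:snakeyaml:1.33')
import org.yaml.snakeyaml.Yaml

// load the YAML pipeline definition and inspect the declared steps
def pipelineDef = new Yaml().load(new File('pipeline.yml').text)
pipelineDef.steps.each { step ->
    println "step type: ${step.type}, arguments: ${step}"
}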
Two solutions:
Create a shared library that abstracts the actual pipeline, and give your users some guidance on how to set up the shared library together with a sample Jenkinsfile (a sketch follows at the end of this answer). Here is an example of an embedded pipeline: https://github.com/SAP/jenkins-library/blob/master/vars/piperPipeline.groovy
Use another tool such as https://drone.io/
If you're not an expert and don't want/have the time to become one, the second solution might be the best one.
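For the first option, the Jenkinsfile handed to the dev teams can stay very small; here is a sketch, assuming a shared library named my-shared-library that exposes a hypothetical standardPipeline step:
@Library('my-shared-library') _

// the whole pipeline lives in the shared library;
// teams only pass the parameters they are allowed to customise
standardPipeline(
    projectName: 'demo-service',
    deploy: true
)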
Really? Is the only difference here when the plugin is executed?:
Write a plugin and consume YAML and dynamically create stage / steps.
Write a plugin to convert a YAML to Jenkins pipeline.
Forgive me, I may be a little hardened, but how exactly does abstracting away the dynamic creation of a declarative or scripted Jenkinsfile, written in simple Groovy syntax, so that it can be pretty-printed in YAML, prevent users from updating your YAML? It seems to me your abstraction only adds complexity to the usability you wish to implement.
One, all the current YAML plugins for Jenkins do exactly that. Two, they don't actually have the full breadth of "features" (yes, I'm using that term loosely) accessible by implementing the Groovy/(Java) classes already available in the Jenkins domain (i.e. the DSL). Two solutions exist right now for this, and I've investigated and implemented both extensively. One is wolox-ci, which is the better of the two, and the other is Pipeline-as-YAML. In my opinion it's easy to use, but both lack the full breadth of implementation features that simply using Groovy provides. So why force it? Simply so your users can have a pretty-printed YAML file and not have to be concerned with simple syntax, which you claim hardens your infrastructure-as-code backend so that the same users can't screw it up? Sorry, I'm calling bull pucky on that assertion. What's to stop anyone from totally screwing up your builds by pushing a change to the YAML file that breaks the integration with Groovy, or worse, completely changes an algorithm you worked hard to customize?
Sorry, I just don't get it. Sure, making something more human-readable is always a good thing, but doing it for the reasons you've stipulated makes no sense. Also, unless you have a super simple, well-defined algorithm in your CI/CD process, without any non-continuation-passing-style transform methods, the current iterations of the yml-as-Jenkinsfile-template plugins are probably not the way you want to go.
Now, you could write your own plugin to do this, but what's the technical debt on that, versus just learning the Groovy syntax? Also, it still doesn't prevent users from making code changes to your build infrastructure and then integrating those changes via a simple YAML file.

Combine swagger Definition files

I am generating a swagger definition for all my APIs by annotating the source code.
I was wondering whether there is any way to merge all the APIs into one single JSON file?
Note: I am using Swagger 2.0 definitions.
If you deploy those apps in a WebSphere Liberty server with the apiDiscovery-1.0 feature defined in your server.xml, then you can simply do a GET on /ibm/api/docs and retrieve your aggregated JSON file. You can also retrieve it as YAML by adding the Accept header "application/yaml".
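A minimal sketch of the relevant server.xml fragment (the rest of the server configuration is omitted):
<featureManager>
    <feature>apiDiscovery-1.0</feature>
</featureManager>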
You can download Liberty for free at wasdev.net, then just run the installUtility command to grab the feature (wlp/bin/installUtility install apiDiscovery-1.0).
More information in this blog: https://developer.ibm.com/wasdev/blog/2016/04/13/deploying-swagger-enabled-endpoints-websphere-liberty-bluemix-api-connect/

Resources