I tried running java -jar openapi-generator-cli.jar generate -i 'swagger.json' -g scala-akka -o scala_client, but it didn't produce any models (it doesn't work with the Java or Clojure generators either).
Running java -jar openapi-generator-cli.jar generate -c config.yaml -i 'swagger.json' -g scala-akka -o scala_client --generate-alias-as-model (also without the config and/or without --generate-alias-as-model) doesn't produce any model classes either.
Removing "additionalProperties": false, from the inline schema def makes it generate model classes. What also works is to define the schema using definitions + "$ref" : "..." - but neither of those are feasible with the library I'm using for my server.
I don't quite understand why it won't generate the models. I can't seem to find an answer anywhere; am I missing something simple?
My swagger.json
Any help would be much appreciated.
I think the only issue is that you are using a 2.0 version of the OpenAPI Specification. You can easily migrate to version 3.* (try THIS for example); then the models will be generated (for me it worked with scala-akka and java-spring).
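If you want a command-line route instead, one converter that should do the job (assuming you have Node.js available) is the open-source swagger2openapi tool:
npx swagger2openapi swagger.json -o transformed_to_3.0.0.json
This reads the 2.0 definition and writes the converted 3.0 document to the given output file.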
transformed_to_3.0.0.json
I want to combine an API specification, written using the OpenAPI 3 spec, that is currently split across multiple files that reference each other using $ref. How can I do that?
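For context, a minimal sketch of the kind of layout I mean (file names are made up):
# main.yaml
openapi: 3.0.0
info:
  title: Example API
  version: 1.0.0
paths:
  /pets:
    $ref: './paths/pets.yaml'
# paths/pets.yaml
get:
  responses:
    '200':
      description: OK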
One way to do this is to use the open-source project speccy.
Open the terminal and install speccy by running (requires Node.js):
npm install speccy -g
Then run:
speccy resolve path/to/spec.yaml -o spec-output.yaml
I wrote a quick tool to do this recently. I call it openapi-merge. There is a library and an associated CLI tool:
https://www.npmjs.com/package/openapi-merge
https://www.npmjs.com/package/openapi-merge-cli
In order to use the CLI tool you just write a configuration file and then run npx openapi-merge-cli. The configuration file is fairly simple and would look something like this:
{
"inputs": [
{
"inputFile": "./gateway.swagger.json"
},
{
"inputFile": "./jira.swagger.json",
"pathModification": {
"stripStart": "/rest",
"prepend": "/jira"
}
},
{
"inputFile": "./confluence.swagger.json",
"disputePrefix": "Confluence",
"pathModification": {
"prepend": "/confluence"
}
}
],
"output": "./output.swagger.json"
}
For more details, see the README on the NPM package.
Most OpenAPI tools can work with multi-file OpenAPI definitions and resolve $refs dynamically.
If you specifically need to get a single resolved file, Swagger Codegen can do this. Codegen has a CLI version (used in the examples below), a Maven plugin (usage example) and a Docker image.
The input file (-i argument of the CLI) can be a local file or a URL.
Note: Line breaks are added for readability.
OpenAPI 3.0 example
Use Codegen 3.x to resolve OpenAPI 3.0 files:
java -jar swagger-codegen-cli-3.0.35.jar generate
-l openapi-yaml
-i ./path/to/openapi.yaml
-o ./OUT_DIR
-DoutputFile=output.yaml
-l openapi-yaml outputs YAML, -l openapi outputs JSON.
-DoutputFile is optional, the default file name is openapi.yaml / openapi.json.
OpenAPI 2.0 example
Use Codegen 2.x to resolve OpenAPI 2.0 files (swagger: '2.0'):
java -jar swagger-codegen-cli-2.4.28.jar generate
-l swagger-yaml
-i ./path/to/openapi.yaml
-o ./OUT_DIR
-DoutputFile=output.yaml
-l swagger-yaml outputs YAML, -l swagger outputs JSON.
-DoutputFile is optional, the default file name is swagger.yaml / swagger.json.
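If you'd rather not manage the JAR yourself, the Docker image should work the same way; a sketch, assuming the swaggerapi/swagger-codegen-cli image and a spec in the current directory:
docker run --rm -v "$PWD:/local" swaggerapi/swagger-codegen-cli generate
-l swagger-yaml
-i /local/openapi.yaml
-o /local/OUT_DIR
(For OpenAPI 3.0 files, the Codegen 3.x image is published as swaggerapi/swagger-codegen-cli-v3.)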
I found that the Redocly CLI was a viable option for doing exactly what you need (resolving the $refs used within specifications).
You can configure the CLI using a configuration file (which must be located in the root of your project directory and written in YAML), where you can specify linting rules among other things. To bundle your specifications, simply run a bundle from the directory where the configuration file lives, like so:
redocly bundle -o <NameOfFileOrDir>
After running this command, each $ref is replaced by the content it points to, combining everything into one definition.
If your configuration file defines multiple entries in its apis object (i.e. multiple API definitions), the CLI instead creates a directory named after whatever you specified for the -o option, and that folder will contain all of your bundled definitions.
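For reference, a minimal redocly.yaml could look something like this (the API name and path are just placeholders):
apis:
  main:
    root: ./openapi/openapi.yaml
With that in place, redocly bundle resolves every $ref starting from each root file listed under apis.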
I have an AngularJS project that was scaffolded using yeoman. I want to use ctags to generate tags for the whole project so that I can navigate the code in vim. But when I use the command
ctags -R .
in the root folder, it only generates tags for folders one or two levels below the root; the folders 5-6 levels deeper are not tagged by ctags. How can I get it to work for the whole project?
I am using exuberant-ctags for generating tags.
OS : Ubuntu 15.04
Do you get the same results with the following?
ctags -R *
What OS? Have you verified it's NOT a permissions issue?
ctags uses the file extension to correctly choose the parser that it will use to generate the tags file. It's possible that the files you are not finding have an unsupported extension.
Another possibility is that those files use language extensions not supported by Exuberant Ctags, in which case you might want to try Universal Ctags, which was forked from Exuberant Ctags and is in active development. It is possible that the JavaScript parser has been improved in that fork.
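To see what your ctags build actually supports, you can list the known languages and the extension-to-language mappings (both options exist in Exuberant and Universal Ctags):
ctags --list-languages
ctags --list-maps
If your JavaScript files' extensions don't appear in the mappings, that would explain why they aren't being tagged.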
When I run Sphinx using 'latexpdf' I get an error, even though I have a complete working TeX installation on my machine:
Sphinx error: Builder name latexpdf not registered
What do I need to do to "register" latexpdf?
latexpdf is not a Sphinx builder; it is the name of a target in the Makefile created by sphinx-quickstart. This target uses the latex builder.
Executing sphinx-build -b latexpdf . _build produces the error in the question (as expected).
If you run make latexpdf, it works.
PyCharm was mentioned in a comment and the problem seems to stem from that program. The following is run when latexpdf is configured as a "Command" (Sphinx task):
sphinx_runner.py -b latexpdf <indir> <outdir>
The sphinx_runner.py script is very similar to sphinx_build (a wrapper for sphinx.cmdline.main()). Since the -b option is supposed to provide the name of an actual builder, there is an error.
Use -M instead of -b. This invokes sphinx-build similarly to make latexpdf, e.g.:
sphinx-build -M latexpdf . _build
See mzjn's answer for details.
I now have PyCharm 2016.3 generating a PDF for me, based on information here: https://www.quora.com/How-to-create-a-PDF-out-of-Sphinx-documentation-tool
Install rst2pdf:
pip install rst2pdf
Create a new Python Docs Sphinx run configuration and choose pdf as the command. Set the input directory, and the directory that will hold the .pdf as the output.
Edit the conf.py file and add the two lines that mention pdf:
extensions = [
'sphinx.ext.autodoc',
'sphinx.ext.todo',
'sphinx.ext.coverage',
'sphinx.ext.viewcode',
'rst2pdf.pdfbuilder'
]
pdf_documents = [('index', u'documentation', u'My Docs', u'Me'), ]
Now run the configuration and you should get a file called documentation.pdf in the output directory.
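Outside PyCharm, the equivalent command line should be roughly this (assuming conf.py is in the current directory), since rst2pdf.pdfbuilder registers a builder named pdf:
sphinx-build -b pdf . _build/pdf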
If you are interested in a pure Python solution, the following works for me:
import os
import sphinx.cmd.make_mode as sphinx_build
OUT_DIR = "docs" # here you have your conf.py etc
build_output = os.path.join(OUT_DIR, "_build")
# build HTML (same as `make html`)
build_html_args = ["html", OUT_DIR, build_output]
sphinx_build.run_make_mode(args=build_html_args)
# build PDF latex (same as `make latexpdf`)
build_pdf_args = ["latexpdf", OUT_DIR, build_output]
sphinx_build.run_make_mode(args=build_pdf_args)
In fact, I've made a complete Python 3 script that, given a few convenient arguments, generates the whole package documentation as HTML and PDF from scratch, with the RTD theme. It can be pretty handy if you want to run it on different OSes or Python interpreters (in my case I wanted to run it within Blender), or adapt it to your needs. It still has some dirty spots, due to some variables being hardcoded into conf.py. Let me know if you see any issues with it!
This is what it looks like:
HTML
PDF
Cheers,
Andres
As the title says, how do I use luadoc in Ubuntu/Linux? I generated documentation in Windows using a batch file, but had no success in Ubuntu. Any ideas?
luadoc
Usage: /usr/bin/luadoc [options|files]
Generate documentation from files. Available options are:
-d path output directory path
-t path template directory path
-h, --help print this help and exit
--noindexpage do not generate global index page
--nofiles do not generate documentation for files
--nomodules do not generate documentation for modules
--doclet doclet_module doclet module to generate output
--taglet taglet_module taglet module to parse input code
-q, --quiet suppress all normal output
-v, --version print version information
First off, I have little experience with LuaDoc, but a lot of experience with Ubuntu and Lua, so I'm basing all my points on that knowledge and on a quick install of luadoc that I've just done. LuaDoc, as far as I can see, is a Lua library (so it can be used from Lua scripts as well as from bash). To generate documentation (in bash), you just run
luadoc file.lua
(where file is the name of your file that you want to create documentation for)
The options -d and -t are there to choose where you want to put the file and what template you want to use (which I have no clue about, I'm afraid :P). For example (for -d):
luadoc file.lua -d ~/Docs
As far as I can see, there is little else to explain about the actual options (as your code snippet explains what they do well enough).
Now, looking at the errors you obtained when running (lua5.1: ... could not open "index.html" for writing), I'd suggest a few things. One, if you compiled the source code, you may have made a mistake somewhere, such as not installing dependencies (which would surprise me, because otherwise you wouldn't have been able to build it at all). If you did compile it yourself, you could try getting it from the repos instead with
sudo apt-get install luadoc
which will install the dependencies too. This is probably the problem, as my working copy of luadoc runs fine from /usr/bin with the command
./luadoc
which means that your luadoc is odd, or you're doing something funny (which I cannot work out from what you've said). I presume that you have lua5.1 installed (considering the errors), so it's not to do with that.
My advice to you is to try running
luadoc file.lua
in the directory of file.lua, with any old Lua file (although preferably one with at least a little data in it), and see if it generates an index.html in the same folder (don't change the output directory with -d, for testing purposes). If that DOESN'T work, reinstall it from the repos with apt-get. If doing that and trying luadoc file.lua still doesn't work, then reply with the errors, as something bigger is (probably) going wrong.
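If you don't have a Lua file handy, something minimal like this (a made-up example using LuaDoc comment syntax) should be enough to produce an index.html:
--- Adds two numbers.
-- @param a the first number
-- @param b the second number
-- @return the sum of a and b
function add(a, b)
    return a + b
end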
If I want to post all of the XML files in a folder, I use post.jar:
java -jar post.jar *.xml
But if I want to post the files recursively (i.e. also post the XML files under subfolders), is there any way to achieve this?
If you're on a Unix-like system (OS X or Linux), you can do something like this:
find . -name \*xml | xargs java -jar post.jar
That'll find all .xml files in or under the current directory ('.') and pass them as parameters to the java -jar post.jar command.
find is incredibly opaque, but very useful for stuff like this.
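If you prefer to avoid xargs, find's own -exec does the same job; the {} + form passes the files to the command in batches, much like xargs:
find . -name '*.xml' -exec java -jar post.jar {} +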
Please think twice before using stuff under the /example directory of Solr. I use Solr from Tomcat (instead of the Jetty embedded in the start.jar) and I use urllib2 in Python for POSTing data to Solr. (Jetty is production-level software, so don't worry too much about that.)
So, for uploading files, consider writing it in your favorite programming language. You can implement folder recursion yourself. For POSTing files, you need libcurl, which can send HTTP GETs, form POSTs, multipart POSTs, etc. A C program using libcurl needs no more than 8 lines to POST a file. curl bindings exist for all major languages, so you can reuse libcurl code written in C in PHP, for example.
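For example, the same thing from the command line with plain curl (assuming a default Solr install listening on localhost:8983) would look something like this:
curl 'http://localhost:8983/solr/update?commit=true' -H 'Content-Type: text/xml' --data-binary @file.xml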
Yes, you can!
You just need to add the -Drecursive flag.
java -Drecursive -jar post.jar *
For further options and usage information, just use:
java -jar post.jar -h