How can I convert existing JSFPortlet to JSON API - jsf-2

We have a portal with a huge code base consisting of hundreds of JSF portlets. The requirement now is to expose the existing portal as JSON API services to external parties.
One way is to build another presentation layer against each JSF portlet to reuse the same business and data layers. That would require a lot of effort and time.
Another way is to play with the portlet and JSF lifecycles and expose the same portlet by overriding serveResource, converting the resourceRequest into an actionRequest and calling processAction internally. Similarly, on return we could call doView and return the response parameters as JSON.
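Roughly, the idea would look something like this (just a sketch; the portlet and helper names are illustrative, not our real code):

import java.io.IOException;
import java.util.Map;
import javax.portlet.GenericPortlet;
import javax.portlet.PortletException;
import javax.portlet.ResourceRequest;
import javax.portlet.ResourceResponse;

// serveResource re-uses the same logic that processAction/doView call today
// and writes the result out as JSON instead of rendering a view.
public class JsonBridgePortlet extends GenericPortlet {
    @Override
    public void serveResource(ResourceRequest request, ResourceResponse response)
            throws PortletException, IOException {
        Map<String, String[]> params = request.getParameterMap();
        Object result = executeBusinessAction(params);   // same code path processAction uses
        response.setContentType("application/json");
        response.getWriter().write(toJson(result));      // serialize with Jackson/Gson/etc.
    }

    private Object executeBusinessAction(Map<String, String[]> params) {
        // placeholder for the call into the shared business/data layer
        return params;
    }

    private String toJson(Object value) {
        // placeholder for whatever JSON library is available
        return "{}";
    }
}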
Is this workable?

You would be better off pushing this down to microservices and consuming them wherever necessary.
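As a minimal sketch of that direction, assuming a JAX-RS stack is available, a thin resource class could re-use the portlets' existing business layer and expose it as plain JSON (OrderService/Order below are illustrative stand-ins, not your classes):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/orders")
public class OrderResource {

    public static class Order {            // stand-in for an existing domain object
        public String id;
        public String status;
    }

    public static class OrderService {     // stand-in for the existing business layer
        Order findOrder(String id) {
            Order order = new Order();
            order.id = id;
            order.status = "OPEN";
            return order;
        }
    }

    private final OrderService orderService = new OrderService(); // normally injected

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Order getOrder(@PathParam("id") String id) {
        return orderService.findOrder(id);  // serialized to JSON by the JAX-RS provider
    }
}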

Related

Using Parameters of One Request to Dynamically Change the Response of Another

I have been using response templating to give dynamic responses, given that all the request and query parameters are associated with that request itself. However, I wanted to make a POST request with several parameters, and later use those parameters in a stubbed GET method's body response via response templating. Is this possible to do in WireMock? Any input is greatly appreciated, thank you!
Storing state between requests is not a default feature of WireMock outside of mocking the behavior through Stateful Behaviour, which is different from being actually stateful.
Without a custom plugin, sharing information between several requests is therefore not possible. The WireMock documentation has a section on how to create such a plugin yourself; with a little development experience this is certainly doable.
On GitHub there are several plugins that provide a storage mechanism to store information:
WireMockCsv: store and retrieve information using HSQL Database.
wiremock-redis-extension does something similar using Redis.
An alternative to these approaches is to create the mappings/data just before the test starts, for example by generating all the responses beforehand and then using a templated bodyFileName to retrieve the just-in-time created file. Another way of achieving this result is to use the Admin API to create the mappings directly.
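A minimal sketch of that alternative, assuming the Java DSL and the default __files directory (file name and URL are illustrative):

import static com.github.tomakehurst.wiremock.client.WireMock.*;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class PreGeneratedStubExample {
    public static void prepareStub() throws Exception {
        // Generate the response body just before the test starts...
        Files.write(Paths.get("src/test/resources/__files/order-123.json"),
                "{\"orderId\":\"123\",\"status\":\"CREATED\"}".getBytes(StandardCharsets.UTF_8));

        // ...then register a stub that serves it (stubFor() talks to the Admin API under the hood).
        stubFor(get(urlEqualTo("/orders/123"))
                .willReturn(aResponse()
                        .withHeader("Content-Type", "application/json")
                        .withBodyFile("order-123.json")));   // resolved relative to __files
    }
}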

Is there a way to use Swagger just for validation without using the whole framework?

Suppose I have an existing Java service implementing a JSON HTTP API, and I want to add a Swagger schema and automatically validate requests and responses against it without retooling the service to use the Swagger framework / code generation. Is there anything providing a Java API that I can tie into and pass info about the requests / responses to validate?
(Just using a JSON schema validator would mean manually implementing a lot of the additional features in Swagger.)
I don't think there's anything ready-made that does this on its own, but you can do it fairly easily with the following:
Grab the SchemaValidator from the Swagger Inflector project. You can use this to validate inbound and outbound payloads.
Assign a schema portion to your request/response definitions. That means you'll need to assign a specific section of the JSON schema to your operations.
Create a filter for your API to grab the payloads and validate them against the schema.
That will let you easily see whether the payloads match the expected structure.
Of course, this is all done for you automatically with Inflector, but there should be enough of the raw components to help you do this inside your own implementation.
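As a rough sketch of the filter step, assuming a servlet container (the validateAgainstSchema and schemaFor methods below are placeholders for Inflector's SchemaValidator and your own operation-to-schema mapping, not real APIs):

import java.io.IOException;
import java.util.stream.Collectors;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SchemaValidationFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        // Read the inbound payload (a real filter would wrap the request so the
        // body can still be read downstream).
        String body = request.getReader().lines().collect(Collectors.joining("\n"));
        String schema = schemaFor(request.getMethod(), request.getRequestURI());
        if (!validateAgainstSchema(body, schema)) {
            ((HttpServletResponse) res).sendError(400, "Payload does not match schema");
            return;
        }
        chain.doFilter(req, res);
    }

    // Placeholder: delegate to Swagger Inflector's SchemaValidator here.
    private boolean validateAgainstSchema(String json, String schema) { return true; }

    // Placeholder: look up the schema fragment assigned to this operation.
    private String schemaFor(String method, String uri) { return "{}"; }

    @Override public void init(FilterConfig config) { }
    @Override public void destroy() { }
}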

Use of REST with scaffolding actions

I see that Grails 2.3 is using REST for the CRUD actions in scaffolding. While it's a great way to learn how REST works, I am wondering if using REST to communicate inside of a single application stack is very efficient. Doesn't it send the request all the way up to the network layer and back down again instead of going directly from the app server to the database? I am visualizing a "pop fly" as opposed to a "line drive." Am I misunderstanding how this works?
I assume that when you say "using REST for the CRUD actions in scaffolding" you are referring to the basic scaffolding (i.e. generate-all example.Book). The scaffolded controllers are not calling the REST API (https://yourapp/book.json) to retrieve the data; they are still using GORM to access the database and then using the respond method to render the data in the appropriate format based on the request's content type (XML, JSON, HTML). So:
Typical request-response cycle
Client, typically an HTML page, makes request (GET http://yourapp/books/1)
Grails 'parses' request params (id: 1)
Grails' GORM retrieves data from database and creates object instance
Grails resolves response content type to HTML
Grails "responds" to the request with an HTML response using a view from the views directory
SPA/API call
Client, typically javascript, makes request (GET http://yourapp/books/1.json **)
Grails 'parses' request params (id: 1)
Grails' GORM retrieves data from database and creates object instance
Grails resolves response content type to JSON
Grails "responds" to the request with an JSON response
Client consumes the JSON response and acts accordingly
** content-type can be specified in a number of ways, just used the .json suffix since it is the most transparent. See http://grails.org/doc/2.3.x/guide/single.html#contentNegotiation.
And to answer your question regarding whether it is "very efficient": I would argue that it almost always is, because your payload tends to be much smaller, since you are only transferring data, not data + formatting (HTML, JavaScript, CSS, etc.). It also provides a way of separating concerns, allowing the client to focus on state and presentation and the backend to focus on data. Furthermore, it means that you can create multiple clients (mobile, desktop-based, web-based) using the same backend API.

CXF client loads wsdl for both service and port?

In a java web app, I need to call a remote soap service, and I'm trying to use a CXF 2.5.0-generated client. The soap service is provided by a particular ERP vendor, and its wsdl is monstrous, thousands of types, dozens of xsd imports, etc. wsdl2java generates the client ok, thanks to the -autoNameResolution flag. But at runtime it retrieves the remote wsdl twice, once when I create the service object, and again when I create a port object.
MyService_Service myService = new MyService_Service(giantWsdlUrl); // fetches giantWsdl
MyService myPort = myService.getMyServicePort(); // fetches giantWsdl again
Why is that? I can understand retrieving it when creating myService, you want to see that it matches the client I'm currently using, or let a runtime wsdl location dictate the endpoint address, etc. But I don't understand why asking for the port would reload everything it just went out on the wire for. Am I missing something?
Since this is in a web application, and I can't be sure that myPort is threadsafe, then I'd have to create a port for each thread, except that's way too slow, 6 to 8 seconds thanks to the monstrous wsdl. Or add my own pooling, create a bunch in advance, and do check-outs and check-ins. Yuck.
For the record, the JaxWsProxyFactoryBean creation route does not ever fetch the wsdl, and that's good for my situation. It still takes a long time on the first create(), then about a quarter second on subsequent create()s, and even that's less than desirable. And I dunno... it sorta feels like I'm under the hood hotwiring the thing rather than turning the key. :)
Well, you have actually answered the question yourself. Each time you invoke service.getPort() the WSDL is loaded from the remote site and parsed. JaxWsProxyFactoryBean works exactly the same way, but once the proxy is obtained it is re-used for further invocations. That is why the first run is slow (because of "warming up"), but subsequent ones are fast.
And yes, JaxWsProxyFactoryBean is not thread-safe. Pooling client proxies is an option, but unfortunately it will eat a lot of memory, as the JAX-WS runtime model is not shared among client proxies; synchronization is perhaps the better way to go.
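A minimal sketch of the synchronization approach, assuming the JaxWsProxyFactoryBean route (the endpoint address, request/response types and operation name are illustrative):

import org.apache.cxf.jaxws.JaxWsProxyFactoryBean;

public class MyServiceClientHolder {
    // Build the proxy once (the slow part), then guard invocations with a lock
    // instead of creating a new port per thread.
    private static final MyService CLIENT = createClient();
    private static final Object LOCK = new Object();

    private static MyService createClient() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setServiceClass(MyService.class);
        factory.setAddress("https://erp.example.com/soap/MyService"); // illustrative endpoint
        return (MyService) factory.create();   // no WSDL fetch, as noted above
    }

    public static SomeResponse call(SomeRequest request) {
        synchronized (LOCK) {                   // the proxy itself is not thread-safe
            return CLIENT.someOperation(request);
        }
    }
}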

How to pass data from a web page to an application?

Trying to figure out a way where I can pass some data/fields from a web page back into my application. This needs to work on Windows/Linux/Mac, so I can't use a DLL or ActiveX. Any ideas?
Here's the flow:
1. Application gathers some data and then sends it to a web page using POST that is either embedded in the app or pops up a new IE window.
2. The web page does some services and then needs to relay the results back to the application.
The only way to do this that I can think of is writing the results locally from the page in a cookie or something like that and having the application monitor for a specific file in that folder.
Alternatively, make a web service that the application hits after passing control to the page and when the page is done the web service will return the data. This sounds like it might have some performance drawbacks.
Can anyone suggest any better solutions for this?
Thanks
My suggestion:
Break the processing logic out of the web page into a separate assembly. You can then create a Web Service that handles all of the processing without needing to pass control over to a page.
Your application can then call the Web Service directly and then serialize the results and work with the data quite easily.
Update
Since the page is supplied by a third party, you obviously can't break anything out. The next best thing would be to handle the entire web request internally within your application (rather than popping up a new window).
With this method, you can get the raw HTTP response (and page markup) and work with it directly. You can then parse the Response stream and gather the required data from it.
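A minimal sketch of that approach (the URL and form fields are illustrative; in practice you would add error handling and parse the markup for the fields you need):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class PageClient {
    // POST the gathered data to the third-party page and return the raw markup.
    public static String post(String pageUrl, String formBody) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(pageUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(formBody.getBytes(StandardCharsets.UTF_8));
        }
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}

// Usage (illustrative):
// String html = PageClient.post("https://thirdparty.example.com/service", "field1=value1&field2=value2");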
When performing an HTTP request you should be able to retrieve the text returned by the page. For instance, if your HTTP POST were to hit a Java servlet, the doPost() method would be fired; you would then perform your actions, obtain the PrintWriter from the response object (PrintWriter out = response.getWriter();) and write text back to the calling application. I'm not sure if this helps?
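A minimal sketch of that servlet side (parameter names and the processing step are illustrative):

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ResultServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String input = request.getParameter("data");   // field sent by the application
        String result = process(input);                 // whatever processing the page does
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        out.write(result);                              // text read back by the calling application
    }

    private String process(String input) {
        // placeholder for the real processing logic
        return "processed:" + input;
    }
}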
The fact that the "web page is hosted by a third party and they need to be doing the processing on their servers" is important to this question.
I like your idea of having the app call a webservice after it passes the data to the third-paty web page. You can always call the webservice asynchronously if you're worried about blocking your application while waiting for results from this webservice.
Another option is that your application implements an XML-RPC server that can be called from the web page using PHP, Python or whatever you use to build the website.
A REST server will do the job also...
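As a rough sketch of that idea in Java, using the JDK's built-in com.sun.net.httpserver (port and path are illustrative; an XML-RPC or REST library could sit behind the same endpoint):

import java.io.InputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpServer;

public class CallbackServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/result", exchange -> {
            try (InputStream in = exchange.getRequestBody()) {
                String payload = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                System.out.println("Received result: " + payload);   // hand the data to the application
            }
            exchange.sendResponseHeaders(204, -1);   // acknowledge with no body
            exchange.close();
        });
        server.start();   // the web page (or its backend) can now POST results to /result
    }
}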
