I have a very basic question about the practical operation of software plugin systems. I understand how a simple plugin design works, i.e. one where a plugin adds functionality to a hosting application. For example, a plugin adds a new filter to a paint program: the host knows it has to call a method called filter, which the plugin provides. In this case all plugins are independent.
My question relates to the case when one plugin can use the facilities of another plugin. For example, one plugin may provide the ability to plot data while another plugin generates data. If the data generator plugin has never seen the graphing plugin before, I assume there is no way for it to know what methods to call in the graphing plugin. I presume that in these cases the developer of the data generator plugin must have access to a description of the graphing plugin's API, either in the form of an abstract class or an interface. Is this how plugin dependency operates, i.e. do plugins know explicitly about the APIs that other plugins might have?
I've just built such a plugin system, and for plugins to be able to use other plugins I include in each plugin's source code copies of the plugin interfaces it needs to know about. The problem with this approach is that if a new plotting plugin comes along with a different API, there is no way for the data generator plugin to use it without first being recompiled so that it is aware of the new API. This doesn't seem right to me.
I know this may seem to be a very simple question with an obvious answer, but I've spent hours searching the internet and I have not come across an explicit statement concerning this question.
If your "new plotting plugin" has a different API from the one your code knows about, there is no alternative but to make your code aware of this API.
If you are in control of all this, including the various plotting plugins, then you can (and should) specify a standard Plotting API that all plotting plugins need to implement/support. That is about the only way that you can have different providers (plugins) for some task.
A standard "language" is the way to ensure that you can use multiple implementors of an interface (providers of a service). It is also the way that you can have multiple users of the same interface (consumers of a service).
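To make that concrete, here is a minimal sketch (all names invented) of what such a shared contract could look like: the interface lives in its own jar that both the host and every plugin compile against, so a data-generator plugin depends only on the interface, never on any particular plotting plugin.

```java
// Plotter.java - hypothetical shared contract, shipped separately from any concrete plugin.
// Plotting plugins implement it; data-generator plugins only call it.
public interface Plotter {
    /** Plot a series of (x, y) points under the given title. */
    void plot(String title, double[] x, double[] y);
}

// RandomDataGenerator.java - a data-generator plugin that asks the host for *any*
// Plotter implementation; it never needs to know which plotting plugin is installed.
class RandomDataGenerator {
    private final Plotter plotter;

    RandomDataGenerator(Plotter plotter) {
        this.plotter = plotter;   // handed in by the host's plugin registry
    }

    void run() {
        double[] x = new double[100];
        double[] y = new double[100];
        for (int i = 0; i < x.length; i++) {
            x[i] = i;
            y[i] = Math.random();
        }
        plotter.plot("Random data", x, y);
    }
}
```

A new plotting plugin with a completely different internal design can then be used without recompiling the generator, as long as it implements (or is adapted to) the agreed Plotter interface.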
The need/wish for multiple providers of a task and for multiple consumers of a provider is probably what led to the creation of standards such as OAuth, and of protocols such as HTTP, SMTP and the like.
I am currently defining a REST API and I intend to use Swagger to document it.
I've already started to define my API specification with the OpenAPI Specification in YAML in the Swagger Editor.
Then, as I understand it, I will provide that file to Swagger Codegen to generate a server implementation, and also to Swagger UI (whose static files will previously have been copied to my server) to expose the interactive documentation.
According to Swagger, this is the top-down approach.
But later I will probably need to modify this API, and I want to do it via the YAML file I previously (and tediously) defined, to keep the API easily modifiable by anyone (and language-agnostic).
Is the way to do this to modify the definition file and then re-run Swagger Codegen? With this approach, I guess I can't even lightly modify the API directly in the server implementation code without risking out-of-date documentation.
And if I choose the bottom-up approach (via Swagger-core annotations), I will have to confine all my further modifications to the server implementation code, and my initial definition file will never be usable again.
So another question would be: is there a common way to deal with Swagger when we want to modify the API both via the specification file and via the server implementation code? (I suppose that the file Swagger-core can generate from my code will never look like the one I initially defined by hand.)
To maintain the API documentation, the best course of action that I can suggest is to follow a hybrid approach.
Initially, when you have to do bulk development, go for the top-down approach. This will reduce the initial setup and coding effort. That's the basic idea behind any codegen.
Then, when it comes to maintaining the APIs, or adding a few new ones every day (or week), follow the bottom-up approach. You will already have the previous code; the only thing you'll need to do is add some more annotations or API definitions.
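For illustration, this is roughly what the bottom-up (annotation) side can look like with Swagger-core on a JAX-RS resource; the resource and endpoint here are invented, not part of any real project:

```java
import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiResponse;
import io.swagger.annotations.ApiResponses;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Api(value = "users")                        // groups these endpoints in the generated spec
@Path("/users")
@Produces(MediaType.APPLICATION_JSON)
public class UserResource {

    @GET
    @Path("/{id}")
    @ApiOperation(value = "Fetch a user by id")   // picked up by Swagger-core at runtime
    @ApiResponses({
        @ApiResponse(code = 200, message = "User found"),
        @ApiResponse(code = 404, message = "No such user")
    })
    public Response getUser(@PathParam("id") long id) {
        // ... look the user up and return it ...
        return Response.ok().build();
    }
}
```

The spec that Swagger-core derives from annotations like these will not be laid out like the hand-written YAML, which is exactly why mixing both sources tends to drift; picking one source of truth per phase, as described above, avoids that.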
Going for top-down iteratively defeats the purpose of code maintenance. Boilerplate and generated code are there to give you a quick start, not for sustenance.
My opinion may be biased.
For the API client, there should not be a need to customize it in most cases. If you find that you need to customize it to meet your requirements, it may be worth starting a discussion via https://github.com/swagger-api/swagger-codegen/issues/new (and please also check what options are available to customize the output, e.g. for PHP, run java -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar config-help -l php).
For the server stub, ideally developers only need to focus on the business/application logic and regenerate the server stub when adding/deleting/updating endpoints (but I don't think all the server stubs can achieve that yet).
Disclaimer: I'm the top contributor to Swagger Codegen
I'm a programmer on a moderately sized ASP.NET C# project. We've been given a requirement to integrate our application with a DOORS installation. Specifically, from our app the user wants to be able to search DOORS for relevant objects and provide links to them. I'm not a DOORS expert by any means, and I've been having a "glorious" time trying to figure out how to do this. From what I can tell, there are three different ways to access the DOORS data outside of the DOORS client:
DXL
DOORS Web Access
OSLC
The impressions I've gotten from my search are these:
DXL might be the best solution--seems to be a moderately powerful scripting solution
Web Access doesn't seem to be very well-documented. Maybe it's just a fancy term for a web-based access system that is wholly dedicated to realizing the normal client operations inside a browser. Perhaps I could hack my application to replace the browser and use that type of access to search and show results?
OSLC seems to be just a way of linking DOORS artifacts to outside systems. This might suffice if it includes an interface to the search capabilities.
So, what might be the best approach?
Thanks
Option 1:
DXL can do what you need; you would, however, be running a DOORS client in batch mode. So wherever you run this integration from must have a DOORS client installed, and the integration script you write must have login information (username and password). This can be encrypted in a separate file using a DXL encryption utility that should still be available on IBM developerWorks or via Google. This is definitely your most flexible option, as DXL is very powerful. Search for "batch" in the DOORS DXL help inside the tool and you should find all the information you need about running a DXL script in batch mode.
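As a very rough sketch of driving that from your own code, here is the general shape of launching the client in batch mode; the executable path, command-line switches and credentials below are illustrative assumptions only, so check the "batch" topic in the DXL help for the exact options your DOORS version supports:

```java
import java.io.File;
import java.io.IOException;
import java.util.List;

public class DoorsBatchRunner {
    public static void main(String[] args) throws IOException, InterruptedException {
        // All paths and switches here are assumptions for illustration; consult the
        // DOORS DXL help ("batch") for the real command-line options of your client.
        List<String> command = List.of(
                "C:\\Program Files\\IBM\\Rational\\DOORS\\9.6\\bin\\doors.exe",
                "-batch", "C:\\scripts\\export_links.dxl",   // DXL script to run
                "-user", "integration_account",
                "-password", "********",                     // better: the encrypted password file
                "-data", "36677@doors-db-server");           // DOORS database as port@host
        Process doors = new ProcessBuilder(command)
                .redirectErrorStream(true)
                .directory(new File("C:\\scripts"))
                .start();
        int exitCode = doors.waitFor();
        System.out.println("DOORS batch run finished with exit code " + exitCode);
    }
}
```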
Option 2 and 3:
These options are actually related. You would not integrate with DOORS using DWA through a web browser; rather, in order to use OSLC you must have a DWA web server installed, because OSLC communicates with DOORS through the DWA server. OSLC would be able to get you what you need, but this route is probably more difficult and less flexible.
Hope this helps.
Correct, but:
DXL scripts are IBM's means of extending the OSLC framework.
You can execute DXL scripts without having to install/run the batch client from within your integration: use the dwa/oslc/dxl/yourdxlscriptname URN to have the DWA server execute the script for you.
The only 'issue' with this is that the DOORS admin MUST register your DXL script in the list of available executable scripts.
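To make that concrete, here is a hedged sketch of calling such a registered script over HTTP; the host name and the plain basic-auth are assumptions (real DWA deployments usually need an OSLC/OAuth handshake, and the exact URL layout depends on your configuration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class DwaDxlCall {
    public static void main(String[] args) throws Exception {
        // Hypothetical DWA host and script name; the path mirrors the
        // dwa/oslc/dxl/<scriptname> URN mentioned above, and the admin must
        // have registered the script on the server beforehand.
        String url = "https://dwa.example.com:8443/dwa/oslc/dxl/yourdxlscriptname";
        String auth = Base64.getEncoder()
                .encodeToString("integration_account:secret".getBytes());

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/rdf+xml")   // OSLC resources are typically RDF
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```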
For my organization I am evaluating rich-client (RIA) technologies for our next projects.
We are currently using Grails 2.1.0 and are very happy with it, especially with Groovy and GORM, and we would like to stay with that. The idea is to extend Grails with some rich framework/library. Currently I am evaluating: the Grails plugin for ZK, the Grails plugin for Vaadin, knockoutjs, angular.js and ember.js.
I have already received feedback from colleagues who worked with ZK (without Grails), and their conclusion was: cool, but forget about performance, because ZK goes to the server every time you do something on the client side.
My question is: is this also true of Vaadin (the plugin for Grails)? How does it behave with heavy single-page applications? And what about Bambi, can that be an option?
On paper Grails + Vaadin is what we need: we want to write Groovy/Java, not XML and certainly not JavaScript. Is this the right choice?
I know my question is very generic, but I am just at the beginning of the evaluation...
Thank you for your attention!
Vaadin works perfectly with Groovy and Grails. You can get services (actually Spring beans) by using the Grails.get() method and do localization via the Grails.i18n() method. Because all the code is going to be written in Groovy, not Java, it will be less heavy (fewer lines of code and so on...).
Vaadin doesn't go to the server on every user action. You can influence that by calling setImmediate(false) on whatever component.
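For illustration, a tiny Vaadin 7 style sketch (class and field names invented) where a field is made non-immediate, so its value only travels to the server with the next real round trip, e.g. the button click:

```java
import com.vaadin.server.VaadinRequest;
import com.vaadin.ui.Button;
import com.vaadin.ui.Notification;
import com.vaadin.ui.TextField;
import com.vaadin.ui.UI;
import com.vaadin.ui.VerticalLayout;

public class SearchUI extends UI {
    @Override
    protected void init(VaadinRequest request) {
        TextField query = new TextField("Query");
        query.setImmediate(false);   // don't hit the server on every value change

        Button search = new Button("Search", click ->
                Notification.show("Searching for: " + query.getValue()));

        setContent(new VerticalLayout(query, search));
    }
}
```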
When you build a complex application in Vaadin, you need to be careful about how many components you put on a page. If you expect thousands of components on a single page, the browser renderer will have performance issues handling them (the rendering speed of course depends on the client hardware). More hints are here.
I recommend building the UI in Vaadin against a fake database first. Then measure the performance, and only then switch to the real database. People usually blame Vaadin when the problem is elsewhere, e.g. in the database, indexing, or loading too many items at once...
If you don't want to play with JavaScript, then I suppose knockoutjs, angular.js, ember.js are out of the game.
You need to find out whether the Vaadin components are what you need. I really suggest trying it out and building a proof of concept in Vaadin. If they are not, Vaadin 7 simplifies integration with JavaScript, so you can easily integrate Vaadin server code with whatever JavaScript library you like (e.g. highcharts and so on...).
You will need to get your containers lazily loaded (check this)
I think you should start with Vaadin 7 (here is a tutorial)
There will be more performance optimisations in Vaadin 7 (in versions 7.0.1 or 7.0.2).
I am fairly new to Grails and I have a requirement that I don't know how to implement.
I need to build a process that runs alongside the Grails application, making remote calls, processing the received data and writing it to the DB so that the Grails application can make use of it.
So far I figured that I need to leverage domain controllers, but I am not sure how to make a separate process that is constantly running in the background and updating DB.
Is this possible? Can I get references or code examples?
Thank you.
Your best bet is Quartz via the http://grails.org/plugin/quartz or http://grails.org/plugin/quartz2 plugins. I've used the quartz plugin and the Job classes you create are artifacts (like controllers, services, etc.) so they support dependency injection. Services are the best place to do transactional database work, so inject one or more services into your Job classes to handle database work.
The quartz2 plugin is newer so you may have more luck using it in current versions of Grails, but it might not have all of the features of the older plugin.
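To give an idea of the shape, here is a hedged sketch against the plain Quartz API that both plugins wrap; the class name and schedule are invented, and with the Grails plugin you would instead write a Groovy Job artifact with a triggers block and let Grails inject your services:

```java
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

// Runs in the background of the application, fetching remote data and handing it
// to a (hypothetical) service that performs the transactional database writes.
public class RemoteSyncJob implements Job {
    @Override
    public void execute(JobExecutionContext context) {
        // In Grails, inject a service here and let it save domain objects.
        System.out.println("Fetching remote data and writing it to the DB...");
    }

    public static void main(String[] args) throws SchedulerException {
        JobDetail job = JobBuilder.newJob(RemoteSyncJob.class)
                .withIdentity("remoteSync").build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(60).repeatForever())
                .build();
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        scheduler.scheduleJob(job, trigger);
    }
}
```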
I am at a new company and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other cloud ECM, there will be a lot of integration work in the future. From the client's perspective, these documents will also live in the same bucket as our structured content (stored in the DB, not as MS Word docs).
If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.
Depending on your licensing needs, Lucene (Apache License) and Xapian (GPL) are both great, mature, fast search-engine APIs with bindings for a lot of languages. I've used both of them with great success.
Lucene is probably the safest choice because it is widely used and quite good.
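If you want a feel for how little code a basic Lucene index-and-search round trip takes, here is a minimal sketch (Lucene 5-8 era API; the field names and sample text are made up, and constructors shift slightly between versions):

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;

public class DocumentSearchDemo {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory index = new RAMDirectory();   // in-memory; use FSDirectory for real data

        // Index one document with a stored title and a searchable body.
        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new TextField("title", "Quarterly report", Field.Store.YES));
            doc.add(new TextField("body", "Revenue grew strongly in the third quarter.", Field.Store.YES));
            writer.addDocument(doc);
        }

        // Search the body field and print the titles of the hits.
        try (DirectoryReader reader = DirectoryReader.open(index)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("body", analyzer).parse("revenue");
            for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                System.out.println(searcher.doc(hit.doc).get("title"));
            }
        }
    }
}
```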
The easiest way to benefit from Lucene is probably Alfresco, which is a breeze to install and uses Lucene by default. That means you just need to install Alfresco and put your documents in the repository, and you can then search for your documents using the powerful web search interface.
If you need to search programmatically, my recommendation is to use Alfresco's CMIS interface, which allows you to search in a RESTful way. The JCR API is also available.
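If you go the CMIS route, here is a hedged sketch using Apache Chemistry OpenCMIS, assuming Alfresco's AtomPub binding is reachable at a hypothetical URL; the endpoint and credentials depend entirely on your Alfresco version and setup:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.ItemIterable;
import org.apache.chemistry.opencmis.client.api.QueryResult;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CmisSearchDemo {
    public static void main(String[] args) {
        Map<String, String> params = new HashMap<>();
        params.put(SessionParameter.USER, "admin");                        // assumption
        params.put(SessionParameter.PASSWORD, "admin");                    // assumption
        params.put(SessionParameter.ATOMPUB_URL,
                "http://alfresco.example.com:8080/alfresco/cmisatom");     // hypothetical endpoint
        params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

        SessionFactory factory = SessionFactoryImpl.newInstance();
        Session session = factory.getRepositories(params).get(0).createSession();

        // CMIS Query Language: full-text search over documents in the repository.
        ItemIterable<QueryResult> results =
                session.query("SELECT cmis:name FROM cmis:document WHERE CONTAINS('revenue')", false);
        for (QueryResult hit : results) {
            System.out.println(hit.getPropertyValueByQueryName("cmis:name"));
        }
    }
}
```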