External program searching DOORS data

I'm a programmer on a moderate ASP.NET C# project. We've been given a requirement to integrate our application with a DOORS installation. Specifically, from our app the user wants to be able to search in DOORS for relevant objects and provide links to them. I'm not a DOORS expert by any means, and I've been having a "glorious" time trying to figure out how to do this. From what I can tell, there are three different ways to access the DOORS data outside of the DOORS client:
DXL
DOORS Web Access
OSLC
The impressions I've gotten from my search are these:
DXL might be the best option; it seems to be a moderately powerful scripting language.
Web Access doesn't seem to be very well documented. Maybe it's just a fancy name for a web front end that reproduces the normal client operations inside a browser. Perhaps I could have my application stand in for the browser and use that type of access to search and show results?
OSLC seems to be just a way of linking DOORS artifacts to outside systems. This might suffice if it includes an interface to the search capabilities.
So, what might be the best approach?
Thanks

Option 1:
DXL can do what you need; you would, however, be running a DOORS client in batch mode. So wherever you run this integration from must have a DOORS client installed, and the integration script you write must have login information (username and password). These credentials can be encrypted in a separate file using a DXL encryption utility that should still be available on IBM developerWorks or via a web search. This is definitely your most flexible option, as DXL is very powerful. Search for "batch" in the DOORS DXL Help inside the tool and you should find all the information you need about running a DXL script in batch mode.
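For illustration, here is a rough sketch of launching the client in batch mode from Java. The install path, database address, and even the exact switch names (-data, -user, -password, -batch) are assumptions you should verify against the DOORS DXL Help for your version:

    import java.io.IOException;

    public class DoorsBatchRunner {
        public static void main(String[] args) throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                "C:\\Program Files\\IBM\\Rational\\DOORS\\9.6\\bin\\doors.exe", // assumed install path
                "-data", "36677@doors-db-server",    // assumed database port@host
                "-user", "integration_user",
                "-password", "secret",               // better: reference an encrypted DXL file instead
                "-batch", "C:\\scripts\\search.dxl"  // the DXL script that performs the search
            );
            pb.inheritIO(); // forward the client's console output to ours
            int exit = pb.start().waitFor();
            System.out.println("DOORS batch exited with code " + exit);
        }
    }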
Option 2 and 3:
These options are actually related. You would not integrate with DOORS using DWA through a web browser; rather, in order to use OSLC you must have a DWA web server installed, since OSLC communicates with DOORS through the DWA server. OSLC would be able to get you what you need, but this route is probably more difficult and less flexible.
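If you do go the OSLC route, the interaction is plain HTTP against the DWA server. A minimal sketch, assuming a placeholder query-capability URL (the real one comes from the DWA/OSLC service provider catalog) and omitting authentication:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.net.URLEncoder;

    public class OslcQuerySketch {
        public static void main(String[] args) throws Exception {
            // Placeholder URL: the real query capability URI comes from the DWA/OSLC
            // service provider catalog for the module you want to search.
            String base = "https://dwa-server:8443/dwa/oslc/...your-query-capability...";
            String where = URLEncoder.encode("dcterms:title=\"brake\"", "UTF-8"); // OSLC query syntax
            URL url = new URL(base + "?oslc.where=" + where);

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/rdf+xml"); // OSLC resources are RDF
            conn.setRequestProperty("OSLC-Core-Version", "2.0");
            // Authentication (basic or form-based, depending on the DWA setup) omitted for brevity.
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println); // raw RDF/XML; parse with an RDF library in real code
            }
        }
    }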
Hope this helps.

Correct, but:
DXL scripts are IBM's means of extending the OSLC framework.
You can execute DXL scripts without having to install or run the batch client from within your integration: use the dwa/oslc/dxl/yourdxlscriptname URN to execute the script for you.
The only 'issue' with this is that the DOORS admin MUST register your DXL script in the list of available executable scripts.
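A minimal sketch of invoking such a registered script over HTTP; the server address and the exact URL shape are assumptions to check against your DWA version:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DxlViaOslc {
        public static void main(String[] args) throws Exception {
            // Placeholder host and script name; the script must already be registered by the admin.
            URL url = new URL("https://dwa-server:8443/dwa/oslc/dxl/yourdxlscriptname");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/xml");
            System.out.println("HTTP status: " + conn.getResponseCode());
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println); // whatever the script returns
            }
        }
    }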

Related

Swagger best practices

I am currently defining a REST API, and I intend to use Swagger to document it.
I've already started to define my API specification with the OpenAPI Specification in YAML in the Swagger Editor.
Then, as I understand it, I will feed that file to Swagger-codegen to generate a server implementation, and also to Swagger-UI (whose static files will previously have been copied to my server) to expose the interactive documentation.
According to Swagger, this is the top-down approach.
But later, I will probably need to modify this API, and I want to do it via the YAML file previously defined, to keep the API easily modifiable by anyone (and language-agnostic).
Is the way to do this to modify the definition file and then re-run Swagger-codegen? With that approach, I guess I can't make even a light modification to the API directly in the server implementation code without risking out-of-date documentation.
And if I choose the bottom-up approach (via Swagger-core annotations), I will be restricting all my further modifications to the server implementation code, and my initial definition file will never be usable again.
So another question would be: is there a common way to deal with Swagger when we want to modify the API both via the specification file and via the server implementation code? (I suppose that the file Swagger-core can generate from my code will never look like the one I initially defined by hand.)
To maintain the API documentation, the best course of action I can suggest is to follow a hybrid approach.
Initially, when you have to do bulk development, go for the top-down approach. This will reduce the initial setup and coding effort; that's the basic idea behind any codegen.
Then, when it comes to maintaining the APIs, or adding a few new ones every day (or week), follow the bottom-up approach. You will already have the previous code; the only thing you'll need to do is add some more annotations or API definitions.
Going top-down iteratively defeats the purpose of code maintenance. Boilerplate and generated code are there to give you a quick start, not for sustenance.
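For reference, the bottom-up approach looks roughly like this with Swagger-core annotations on a JAX-RS resource (the resource and its paths here are made up for illustration):

    import io.swagger.annotations.Api;
    import io.swagger.annotations.ApiOperation;
    import io.swagger.annotations.ApiParam;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Api(value = "users")   // groups this resource in the generated documentation
    @Path("/users")
    public class UserResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        @ApiOperation(value = "Fetch a single user by id")
        public Response getUser(
                @ApiParam(value = "the user's id", required = true) @PathParam("id") String id) {
            // Business logic lives here; the annotations above feed the generated spec.
            return Response.ok("{\"id\":\"" + id + "\"}").build();
        }
    }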
My opinion may be biased.
For the API client, there should not be a need to customize it in most cases. If you find that you need to customize it to meet your requirements, it may be worth starting a discussion via https://github.com/swagger-api/swagger-codegen/issues/new (and also please check what options are available to customize the output, e.g. for PHP, run java -jar modules/swagger-codegen-cli/target/swagger-codegen-cli.jar config-help -l php).
For the server stub, ideally the developers only need to focus on the business/application logic and regenerate the server stub when adding/deleting/updating endpoints (but I don't think all the server stubs can achieve that yet).
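To illustrate that separation, many of the server-stub generators (e.g. the jaxrs one) split a generated resource class from a service class that you implement by hand. A hedged sketch, with made-up names, of where your logic would live so that regeneration doesn't clobber it:

    import javax.ws.rs.core.Response;

    // Generated by swagger-codegen (illustrative name): overwritten on every run.
    public abstract class UsersApiService {
        public abstract Response usersIdGet(String id);
    }

    // Hand-written implementation: the generator leaves this file alone, so the
    // business logic survives when you regenerate after changing the spec.
    class UsersApiServiceImpl extends UsersApiService {
        @Override
        public Response usersIdGet(String id) {
            return Response.ok("{\"id\":\"" + id + "\"}").build();
        }
    }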
Disclaimer: I'm the top contributor to Swagger Codegen

How can I import/export form definitions between servers?

I wish to develop some forms on a dev server, then export the form definitions and put them on a test server involving other people. Is there an import/export function for form definitions, or do I need to manually move the content of the database?
You can:
Since 4.4, use the home page's remote operations to publish forms to another server; for more on this, see the documentation on the Form Runner home page, in the section about remote server operations.
Move the form definition at the database level.
Develop your own tools that do this by calling the persistence API (see the sketch below).
If you're on 4.4, the first option is most likely your best choice. If you want the maximum level of programmability and automation, the second or third option is the better choice.
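As a hedged sketch of the third option: the persistence API is plain HTTP, so migrating a form definition amounts to a GET from one server and a PUT to the other. The CRUD path below is an assumption based on Orbeon's documented layout; verify it against the persistence API documentation for your version:

    import java.io.InputStream;
    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class FormMigrationSketch {
        public static void main(String[] args) throws Exception {
            // Assumed CRUD path for a form definition named myapp/myform.
            String path = "/fr/service/persistence/crud/myapp/myform/form/form.xhtml";

            // Pull the form definition from the dev server.
            HttpURLConnection get = (HttpURLConnection)
                    new URL("http://dev-server:8080/orbeon" + path).openConnection();
            byte[] form;
            try (InputStream in = get.getInputStream()) {
                form = in.readAllBytes();
            }

            // Push it to the test server.
            HttpURLConnection put = (HttpURLConnection)
                    new URL("http://test-server:8080/orbeon" + path).openConnection();
            put.setDoOutput(true);
            put.setRequestMethod("PUT");
            put.setRequestProperty("Content-Type", "application/xhtml+xml");
            try (OutputStream out = put.getOutputStream()) {
                out.write(form);
            }
            System.out.println("PUT status: " + put.getResponseCode());
        }
    }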

Building the back-end server for an iPhone App

I'm looking at building a simple login-based iOS application that needs secure access to create, read, update, and delete data in a MySQL database, with certain actions available to specific users based on roles.
I've done some research, and it looks like I need to build a RESTful web service that the iPhone app calls to access the data.
I have very little experience with web services development; are there any books/tutorials worth checking out? Is it worth looking at a web framework rather than starting from scratch?
I've done some basic web development in PHP/Python, so I'd prefer to build in one of those, I think, given that hosting would be relatively cheap.
I've also done some basic C#/Java; would it be worth looking at these instead? I tried creating a simple ASMX web service, but most of the examples cite using an MSSQL server, and I'm not sure that is the way to go.
Use a framework. There's no point reinventing the wheel and giving yourself a headache. A good PHP-based solution would be to use Drupal to build the backend, with the Services module providing data via web services. Drupal is so flexible and so popular now that you can get a lot of what you want done without any code at all.
Roughly:
Install Drupal 7 on a webserver according to the instructions
Install the Services module
Design the entities that will make up your MySQL database
Tell the services module how you want to expose things
Some examples of API calls are here.
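To give a feel for the client side, a Services-exposed resource is consumed as a plain HTTP call. In this sketch the /api endpoint path and the node resource are assumptions that depend entirely on how you configure the Services module:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class DrupalServicesClient {
        public static void main(String[] args) throws Exception {
            // "/api" is whatever endpoint path you configured; "node/1.json" assumes
            // the node resource is enabled with JSON output.
            URL url = new URL("http://example.com/api/node/1.json");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept", "application/json");
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                in.lines().forEach(System.out::println); // prints the node as JSON
            }
        }
    }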
A case study of someone else who has used Drupal as the backend for iPhone/Android is here.
You will have a learning curve to get your head around Drupal, but you'd have one anyway to get your head around web services, and the benefits you gain from having everything else Drupal offers are enormous, e.g.
The difficult bits are already done for you, so the amount of code will be massively reduced, if you even need any at all
Using Drupal's hugely flexible entity system, you can design a flexible and extensible MySQL database schema using the web-based UI, which will be ready to work with any of Drupal's other modules, so you can add features with minimal effort in the future
There's an enormous community of people who can help you and the forums on drupal.org are very active
You would have a great UI for users, in case you ever need to give them access to their data through a normal website interface. Drupal has loads of pre-built themes (I recommend Omega) which look awesome and again, little to no code is needed to get a whole site ready made along with HTML5, standards compliance etc.
Drupal provides you with ready-made modules to provide access control via roles, as well as everything else you can imagine e.g. managing a mailing list for your users, providing you with usage statistics, admin interface for user and role management etc.
Drupal use is exploding globally and there's a serious skills shortage, so you'd be even more employable :)
First, it's not compulsory to use a REST web service; it's just that web services are more or less the standard for web-based applications.
I'm not really familiar with PHP, but in Python you have django-piston. On the iOS side, you have RestKit to pair the server with.
What I can say from my experience is that writing a prototype in Django is quite easy, and you can definitely use this to develop your app.

How does plugin/plugin dependency work?

I have a very basic question about the practical operation of software plugin systems. I understand how a simple plugin design works, i.e., one where a plugin adds to a hosting application. E.g., a plugin adds a new filter to a paint program; the host knows it has to call a method called filter, which the plugin provides. In this case, all plugins are independent.
My question relates to the case where one plugin can use the facilities of another plugin. For example, one plugin may provide the ability to plot data while another plugin generates data. If the data-generator plugin has never seen the graphing plugin before, I assume there is no way for it to know what methods to call in the graphing plugin. I presume that in these cases, the developer of the data-generator plugin must have access to a description of the graphing plugin's API, either in the form of an abstract class or an interface. Is this how plugin dependency operates, i.e., do plugins know explicitly about the APIs that other plugins might have?
I've just built such a plugin system, and for plugins to be able to use other plugins I am including in the source code copies of the plugin interfaces each plugin needs to know about. The problem with this approach is that if a new plotting plugin comes along with a different API, there is no way for the data-generator plugin to use it without first being recompiled so that it is aware of the new API. This doesn't seem right to me.
I know this may seem a very simple question with an obvious answer, but I've spent hours searching the internet and have not come across an explicit statement concerning this question.
If your "new plotting plugin" has a different API from the one your code knows about, there is no alternative but to make your code aware of this API.
If you are in control of all this, including the various plotting plugins, then you could (and should) specify a standard Plotting API that all plotting plugins need to implement/support. That is about the only way you can have different providers (plugins) for some task.
A standard "language" is the way to ensure that you can use multiple implementors of an interface (providers of a service). It is also the way that you can have multiple users of the same interface (consumers of a service).
The need/wish for multiple providers of a task and for multiple consumers of a provider is probably what led to the creation of standards such as OAuth, and of protocols such as HTTP, SMTP and the like.
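A minimal sketch of that idea (names are illustrative): the host defines one standard plotting interface, every plotting plugin implements it, and the data-generator plugin is written against the interface alone:

    // The standard plotting API that every plotting plugin must implement.
    public interface PlottingPlugin {
        String name();
        void plot(double[] x, double[] y);
    }

    // A data-generator plugin: it depends only on the interface, never on a
    // concrete plotter, so any conforming plotting plugin can be swapped in.
    class SineGeneratorPlugin {
        private final PlottingPlugin plotter; // injected by the host at load time

        SineGeneratorPlugin(PlottingPlugin plotter) {
            this.plotter = plotter;
        }

        void run() {
            double[] x = new double[100], y = new double[100];
            for (int i = 0; i < x.length; i++) {
                x[i] = i * 0.1;
                y[i] = Math.sin(x[i]);
            }
            plotter.plot(x, y); // works with ANY plugin implementing the standard API
        }
    }

In Java, the host could then discover all implementations of the shared interface at runtime with java.util.ServiceLoader, without the consumer ever naming a concrete plotter.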

What languages, frameworks, and technologies have you used to implement document searching?

I am at a new company, and one of our goals is to implement a document search portal for our team and our clients. I am a bit worried that if we use an external service provider like Salesforce or some other ECM in the cloud, there will be a lot of integration work in the future. From a client perspective, these documents will also exist in the same bucket as our structured content (stored in the DB, not as MS Word docs).
If you have implemented document searching, what languages, frameworks, and technologies have you used? Do you have any failure stories? I don't have a problem using something out of the box, but I think it is important that we have control over the documents and the API to access them. I would like to use Rails if we go fully custom.
Depending on your licensing needs, Lucene (Apache License 2.0) and Xapian (GPL) are both great, mature, fast search-engine APIs with bindings for a lot of languages. I've used both of them with great success.
Lucene is probably the safest choice because it is widely used and quite good.
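To give a feel for the API, here is a minimal self-contained sketch that indexes two documents in memory and runs a full-text query (field names and sample text are made up):

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.Query;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.store.ByteBuffersDirectory;
    import org.apache.lucene.store.Directory;

    public class DocumentSearchSketch {
        public static void main(String[] args) throws Exception {
            Directory index = new ByteBuffersDirectory(); // in-memory; use FSDirectory for real data

            // Index a couple of documents.
            try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(new StandardAnalyzer()))) {
                for (String body : new String[] {
                        "Quarterly report for client Acme",
                        "Meeting notes about the search portal"}) {
                    Document doc = new Document();
                    doc.add(new TextField("body", body, Field.Store.YES));
                    writer.addDocument(doc);
                }
            }

            // Search the index and print matching documents.
            try (DirectoryReader reader = DirectoryReader.open(index)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                Query query = new QueryParser("body", new StandardAnalyzer()).parse("search portal");
                for (ScoreDoc hit : searcher.search(query, 10).scoreDocs) {
                    System.out.println(searcher.doc(hit.doc).get("body"));
                }
            }
        }
    }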
The easiest way to benefit from Lucene is probably with Alfresco, which is a breeze to install and has Lucene by default. That means you just need to install Alfresco, put your documents in the repository, and you can search for your documents using the powerful web search interface.
If you need to search programmatically, my recommendation is to use Alfresco's CMIS interface, which allows you to search in a RESTful way. The JCR API is also available.
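A hedged sketch of a programmatic CMIS search using Apache Chemistry OpenCMIS; the endpoint URL and credentials below are assumptions that depend on your Alfresco version and setup:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.chemistry.opencmis.client.api.ItemIterable;
    import org.apache.chemistry.opencmis.client.api.QueryResult;
    import org.apache.chemistry.opencmis.client.api.Session;
    import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
    import org.apache.chemistry.opencmis.commons.SessionParameter;
    import org.apache.chemistry.opencmis.commons.enums.BindingType;

    public class CmisSearchSketch {
        public static void main(String[] args) {
            // Assumed AtomPub endpoint; check your Alfresco version's docs for the real one.
            Map<String, String> params = new HashMap<>();
            params.put(SessionParameter.USER, "admin");
            params.put(SessionParameter.PASSWORD, "admin");
            params.put(SessionParameter.ATOMPUB_URL,
                    "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom");
            params.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

            Session session = SessionFactoryImpl.newInstance()
                    .getRepositories(params).get(0).createSession();

            // CMIS QL full-text search; CONTAINS() delegates to the repository's search engine.
            ItemIterable<QueryResult> results =
                    session.query("SELECT cmis:name FROM cmis:document WHERE CONTAINS('invoice')", false);
            for (QueryResult r : results) {
                System.out.println(r.getPropertyValueByQueryName("cmis:name"));
            }
        }
    }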
