Java program for IBM DOORS

I recently started using the IBM DOORS program, and I have also started writing scripts for it in DXL. However, when I checked the Eclipse main page, I realized that a tool called MDAccess for DOORS exists. My question is: is it possible to write code in Java for DOORS, and if so, what are the disadvantages compared to DXL?

Yes, it is possible to write Java code for DOORS. You already found the solution: MDAccess is a commercial product provided by Sodius. According to the product specs and some marketing presentations, it provides access to a DOORS server from the Java programming language.
Sodius sent me the following information in response to a personal request, indicating a disadvantage which might concern you:
Our Java layer is designed to manipulate DOORS data, meaning read/write DOORS data. You will not find Java wrappers for DXL functions that interact with the DOORS UI, for example.
Note that we are able to execute DXL code through the Java layer, so you can always use this means to achieve DXL-based operations.
However, it is not exactly cheap.

This might also interest you, as it uses APIs that are already available and does not rely on additional commercial products:
https://www.ibm.com/developerworks/rational/library/oslc-services-rational-doors/index.html
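For a flavor of the OSLC route, here is a minimal, hedged Java sketch that fetches the OSLC root services document from a DOORS Web Access server over HTTP(S). The server URL, path and credentials are placeholder assumptions (real deployments differ and may require form- or OAuth-based authentication); only standard JDK classes are used.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class DoorsOslcProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical DOORS Web Access root services URL - adjust for your deployment.
        URL url = new URL("https://doors.example.com:8443/dwa/public/rootservices");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // OSLC resources are typically served as RDF/XML.
        conn.setRequestProperty("Accept", "application/rdf+xml");
        // Basic auth shown only as a placeholder; real servers may need form/OAuth login.
        String credentials = Base64.getEncoder()
                .encodeToString("user:password".getBytes(StandardCharsets.UTF_8));
        conn.setRequestProperty("Authorization", "Basic " + credentials);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);   // dump the RDF/XML catalog of OSLC services
            }
        }
    }
}
```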
Disclaimer:
I work for IBM DOORS product support, however all information posted here is done so in a personal, private capacity and does not necessarily reflect the position of my employer. Therefore what I post does not constitute an official statement by IBM nor is any endorsement of the information by IBM to be assumed, implied or otherwise.
For your own benefit, please do not contact me in a business capacity on this platform, but use the official IBM Support community web portal. Thank you! :-)

Related

Electron with C++ backend - secure?

I have written a UI in Electron and I would like to connect it to my C++ code. However, I will be selling this product, so I would like to know whether this makes it easier for people to crack my C++ code. Obviously I know compiled C++ can be cracked anyway, but does this affect it in any way?
Additionally, what is the best way to go about this while preserving maximum possible security?
Thanks.
EDIT: How about this: Is it possible to use C++ as a back-end for Electron.js?
EDIT2: To clarify, my Electron app will be showing the status of operations being performed in the C++ program. As such, I will need to send lists, dictionaries, strings etc. from C++ to JS which will then render it. Additionally, buttons on my Electron app need to trigger actions in the C++ code, such as stopping or starting certain parts of the program.
I have written a UI in Electron and I would like to connect it with my C++ code ...
I would like to know if this makes it easier for people to crack my C++ code?
Using Electron does not make any meaningful difference to protecting the C++ code (your intellectual property).
The JavaScript code running in Electron will be very easy to reverse engineer, though, which gives users a head start on experimenting with your C++ binary. Minification and obfuscation tools can at least make that harder.
For the C++ side, connecting C++ to Electron can be done in at least these two ways:
By dynamically linking to a shared library (Node.js C++ Addons)
In this case your C++ API would be the functions exported by the shared library. There are many tools that can inspect shared libraries (DLLs) and list these functions.
By communicating with another process using some form of inter-process communication (IPC).
In this case your API would depend on the IPC method used. If it were TCP/UDP messages, you could use Wireshark to inspect the packets exchanged between the processes; there are ways to inspect messages going over any type of IPC.
Either way, your application must be delivered to the end-user with a compiled binary. Preventing reverse engineering of the binary itself is impossible if you actually give the binary to your users.
You should also expect that a savvy end-user will have access to other tools that can inspect the API and implement third-party code that talks to that API.
Additionally, what is the best way to go about this while preserving maximum possible security?
By "maximum possible security", I will assume you are referring to preventing unauthorized use of the C++ code with other applications.
You would need a licensing system that can authenticate the application that is using your C++ binary's API. Explaining exactly what that would look like is probably too large an answer for Stack Overflow, and you will have to do some research on how licensing systems are implemented.
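As a rough, hedged illustration of one common building block, not a complete licensing system: the shipped binary can verify a vendor-signed license token before enabling its API. The sketch below uses standard Java security APIs purely to show the shape of the check; in this scenario the equivalent verification would live inside the compiled C++ binary, and the file names and token format here are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Base64;

public class LicenseCheck {
    /**
     * Returns true if licenseText was signed with the vendor's private RSA key.
     * The public key ships with the application; only the vendor can produce
     * signatures that verify against it.
     */
    static boolean isLicenseValid(String licenseText,
                                  String base64Signature,
                                  byte[] publicKeyDer) throws Exception {
        PublicKey publicKey = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(publicKeyDer));
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(publicKey);
        verifier.update(licenseText.getBytes(StandardCharsets.UTF_8));
        return verifier.verify(Base64.getDecoder().decode(base64Signature));
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical file locations - a real product would embed/obfuscate these.
        String license = new String(Files.readAllBytes(Paths.get("license.txt")), StandardCharsets.UTF_8);
        String signature = new String(Files.readAllBytes(Paths.get("license.sig")), StandardCharsets.UTF_8).trim();
        byte[] publicKey = Files.readAllBytes(Paths.get("vendor-public.der"));
        System.out.println("License valid: " + isLicenseValid(license, signature, publicKey));
    }
}
```

Keep in mind that this only raises the bar: anyone who can patch the binary can also remove the check.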
It may be theoretically impossible to develop a perfect licensing system, though. Look at the gaming industry: for every new game that is released, it takes just a matter of days for the licensing software to be circumvented. The only software architecture that crackers haven't completely conquered is cloud-based applications, which never deliver the compiled code containing their business logic to the end-user's computer.

Processes implemented inside the DMS system?

The traditional categorization of processes distinguishes integration, human-centric and document-centric processes, with the last one being a good candidate for implementation inside the DMS (of course, the prerequisite is built-in support for BPM).
But I was unable to find a concrete, more detailed explanation of the distinction between those options.
Imagine a company that has an enterprise BPM solution and also a DMS with quite good support for BPM (e.g. FileNet).
In both systems you can create user screens and workflows (process logic).
Also, most processes that work with documents are quite "human-centric".
I am perfectly aware that choosing the target platform always depends on the requirements and specific circumstances, but I wonder whether there are some general rules or principles that would help me decide where to put the process layer of the whole solution.
Additional clarification:
I don't want to implement any new platform. As I indicated in the previous post, we already have a BPM platform (Oracle) and a DMS as well (FileNet with BPM support - Case Foundation). So the question is not about choosing a new platform, but more about setting the rules for using the existing products/platforms. There are a lot of new projects in the queue, and for some of them (those touching the area of working with documents) we need to decide the target platform(s). For example, when you have a simple process with a few steps, every step involves work with an existing document (the document, or at least its original version, is also an input to the process), and the requirements on the front end are not very complicated, it would be simpler to build the whole solution on the FileNet platform (mostly because of the cost). But I am wondering if there are some similar rules, like "consider this or that when you want to use only the DMS platform, or both platforms", and so on. You can call these rules principles for development, reference architectures or something like that - something that guides you when designing the target architecture(s).
Thank you
I'm reposting the answer because I don't see a reason for the deletion (by @Bohemian).
I think it adds value to anyone asking the same question. @Bohemian could have at least specified why he deleted the post.
Here it goes:
You gave us a rather small amount of information. And what exactly is the question? What do you mean by "where to put the process layer"?
You shouldn't constrain yourself to only those DM systems that claim to have BPM built in. That's marketing speak behind which often lie two half-baked products. You should instead ask which standards-based integration points the system has, so you can integrate effortlessly, and then invest in a best-of-breed DM and a best-of-breed BPM separately. All-in-one solutions are often too closed, difficult to extend and, above all, they bring vendor lock-in with them.
What are your business requirements, i.e. what do you have to do? Implement BPM inside an organization that already has DM, or not? Do you already have a BPM platform? Do you have any constraints/requirements when choosing either of those (vendor, technology foundation, Gartner quadrant...)?
What options are you considering for DM, and which options are you evaluating (if any) as a BPM platform? Have you already settled on IBM, or can you go elsewhere? Is open source an option?
What is your role/responsibility in this project?
EDIT - after the author's clarifications:
I have not worked with Oracle's BPM, but I can tell you that, although Case Foundation is more suited to Case Management, you can develop a complete Process Management solution with it (workflows, tasks, roles, deadlines, in-baskets, etc.).
If you go down that path and later come across the business need to allow business users to define their own case templates, take a look at IBM Case Manager: it builds on top of Case Foundation but also brings additional web UI features (built on IBM Content Navigator) suitable for business users (although, more often than not, it turns out that IT does that job).
A few IBM redbooks about Case & Content management that might help you make an informed decision:
Introducing IBM FileNet Business Process Manager - this is the former name for Case Foundation - the same product, new version.
Advanced Case Management with IBM Case Manager
Customizing and Extending IBM Content Navigator - you'll need this one for customizations, if you decide to go with CF (instead of Oracle).
Building IBM Enterprise Content Management Solutions From End to End - from ingestion to case/process management (contains Case Manager).
I agree with @Robert regarding integration; after all, before version 5.2, the FileNet Content Platform Engine was the FN Content Engine + the FN Process Engine.
The word of advice I can give you is to first document all the features the business requires from BPM. Then do due diligence on both products, noting down which of those features each product supports. Then the answer, if not laid out in front of you, will at least be much easier.
You also have to take into account that IBM is oriented towards IBM BPM (formerly Lombardi) where process management is concerned. The former FN BPM is now pushed more towards Case Management (but the two are very similar paradigms).
You should definitely post back about your experience, whichever option you choose.
Good "luck" :)

remote data & query to OpenVMS RMS files

What options exist to query RMS files in OpenVMS? The context for the query/access would be BI and reporting. Currently, a very old FOCUS (Information Builders, v. 6.9.8) is in use, and that only from within the native OpenVMS command-line shell.
My challenge working within the VMS environment is that the output is intended for off-platform consumption and analysis in Excel, R, Business Objects/Crystal Reports, and Splunk/Hunk. On-platform, I'm limited to whatever I can compile and/or run from within my own user space, and CONNX and similar tools all appear to require a server process in the VMS environment.
Edit: I have accepted a comprehensive answer which, given organizational constraints, may not be feasible. My likely path will be to write additional data extraction jobs in FOCUS, and incur the latency and maintenance overhead that goes along with that.
Do you want the reporting to be on-platform or off-platform (for example, with Excel)?
On-platform, after 30+ years, I still really, really like Datatrieve, as mentioned in a comment.
This tool was created before SQL became all the rage, so its query language takes a little getting used to. It knows how to use just about every RMS option (keys, RFAs for collections, joins, locks and sharing, ...).
I'm sure there are multiple commercial tools like the FOCUS you mention, and perhaps the IGH tool Vselect for data extraction, column shuffling and sorting. Some would even recommend the native OpenVMS SORT, but then you are still in command-line space.
For a (green-screen) windowed approach, as well as the command line, perhaps check out the freeware tool DIX: http://www.oooovms.dyndns.org/dix/
Off-platform, google for "OpenVMS ODBC" (or JDBC). You'll find tools from CONNX, Easysoft and "Connect" from the company I work for: Attunity.
Those will allow you to use (Windows, Linux) tools like DBVisualizer or Excel to get at the OpenVMS-sourced data.
Perhaps an interesting hybrid could be Attunity's Connect ("AIS") solution, which allows SQL-language access to RMS files both on-platform (NAV_UTIL) and off-platform (Studio, NAV_UTIL, Oracle DB-link, ODBC, JDBC, XML, ...).
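To illustrate the off-platform ODBC/JDBC route in code, here is a hedged Java sketch using only the standard java.sql API. The driver, connection URL and table name are hypothetical placeholders; the real values would come from the CONNX, Easysoft or Attunity documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RmsQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection URL - the real format depends on the vendor's JDBC driver.
        String url = "jdbc:vendor://vmshost.example.com:1234/ordersdb";

        try (Connection con = DriverManager.getConnection(url, "username", "password");
             Statement stmt = con.createStatement();
             // Hypothetical table mapped onto an RMS file by the middleware.
             ResultSet rs = stmt.executeQuery("SELECT order_id, order_date, total FROM orders")) {
            while (rs.next()) {
                System.out.printf("%s %s %s%n",
                        rs.getString("order_id"),
                        rs.getDate("order_date"),
                        rs.getBigDecimal("total"));
            }
        }
    }
}
```

From there the result set can be written out as CSV for consumption in Excel, R or Splunk.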
For better help, please clarify the question further, notably the remark "only from within the native OpenVMS command line shell". What's wrong with that? :-) What alternative access did you envision? A fake GUI, DECwindows, a native API, a remote API, ...?
Hope this helps some already,
Hein
You could consider writing code in a native language such as C or Java. The company I work for uses Apache, DCL scripts in cgi-bin, and the Userbase 4GL to put an intranet reporting front end over an OpenVMS legacy system. As long as you wrap the output in HTML etc., Apache will stream it back to a browser, which will interpret it accordingly. However, with the impending move to Itanium we're faced with no support for porting Userbase. If anyone knows who holds the source code, could they tag a reply onto the end of this? We're looking for a terminal (character-mode) reporting solution for Itanium, as not all users have PCs. If it weren't for this, we'd just slap Crystal over CONNX and call it a day. Many thanks.
Further to my previous answer, I'm now evaluating R as a reporting solution, using R's RODBC library to interrogate the RMS data via CONNX.

Which programming language should I use to retrieve info such as OS details, memory, processes/threads, program version, DLL version, etc.?

I want to develop an application that can retrieve information such as DLL version, DLL build mode (debug or release), information about the OS, memory, processor, processes/threads, program version, etc. I am developing this mainly for Windows, but it would be good if the application supported Linux too (wherever applicable).
I am basically a java programmer, and I know C, C++ to some extent.
Which programming language should I go for that would make my job easy? That is, which language has APIs to fetch this kind of information?
Well... APIs are available regardless of the language... but the easiest way to do what you are trying to do is going to be a C or C++ app. That doesn't mean it'll be easy: getting a DLL version is easy, and getting memory and processor type is easy; the other stuff is certainly possible, but you may have to roll up your sleeves and learn the Win32 API.
You might want to take a look at an application that already does exactly what you are asking about (Process Explorer) before you try to develop this yourself... It's going to be a big undertaking, and the folks at Sysinternals are really, really good at this stuff and have already done it.
You commented on Kevin Day's answer that you would prefer to use Java for this.
Java is not very well suited for this, because the information you want to get is very platform-specific, and since Java is designed to be platform-independent, there are not a lot of ways to get at this kind of information from Java.
There are some methods in classes java.lang.System and java.lang.Runtime to get information about the platform that your Java program is running on. For example, class Runtime has a method availableProcessors() that tells you how many processors are available to the Java virtual machine. Note that this is not the same as the number of processors (or cores) that exist in the computer; the documentation even says that the number may change while the program is running.
Look up the documentation for java.lang.System and java.lang.Runtime for more information.
Most likely you're not going to get exactly the information that you need by using pure Java - C or C++ will be better suited to get this kind of platform-specific information. If you would need this information from a Java program, you could write a small DLL or shared library and use JNI to call into it from your Java program.
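To make the "pure Java" part concrete, here is a small sketch using only standard JDK APIs (java.lang.System, java.lang.Runtime and java.lang.management); it shows roughly how much you can get without JNI, and where the limits are.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class PlatformInfo {
    public static void main(String[] args) {
        // Basic OS and JVM details exposed as system properties.
        System.out.println("OS name:    " + System.getProperty("os.name"));
        System.out.println("OS version: " + System.getProperty("os.version"));
        System.out.println("OS arch:    " + System.getProperty("os.arch"));
        System.out.println("Java:       " + System.getProperty("java.version"));

        // Processors and memory as seen by the JVM (not necessarily the whole machine).
        Runtime rt = Runtime.getRuntime();
        System.out.println("Available processors: " + rt.availableProcessors());
        System.out.println("JVM max memory (MB):  " + rt.maxMemory() / (1024 * 1024));
        System.out.println("JVM free memory (MB): " + rt.freeMemory() / (1024 * 1024));

        // Coarse OS-level view; per-process lists, DLL versions etc. still need native code.
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        System.out.println("System load average:  " + os.getSystemLoadAverage());
    }
}
```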
Since DLLs are mentioned I presume we are talking about Windows.
I would recommend using WMI queries. They look very much like SQL and give you access to many very useful classes.
e.g. all info about the OS can be found here, in Win32_OperatingSystem:
http://msdn.microsoft.com/en-us/library/aa394239(VS.85).aspx
You can use WMI classes from any language including C++.
As a side note: if you are starting a new application from scratch, consider using PowerShell, a newer scripting language from Microsoft.
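If you do end up in Java rather than C++ or PowerShell, one pragmatic option is to run a WMI/CIM query through PowerShell and read its output. This is a hedged sketch that assumes a Windows host with PowerShell 3.0+ on the PATH; the selected properties are just examples.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class WmiViaPowerShell {
    public static void main(String[] args) throws Exception {
        // Query the Win32_OperatingSystem WMI class through PowerShell's CIM cmdlets.
        ProcessBuilder pb = new ProcessBuilder(
                "powershell.exe", "-NoProfile", "-Command",
                "Get-CimInstance Win32_OperatingSystem | "
                        + "Select-Object Caption, Version, OSArchitecture, TotalVisibleMemorySize | "
                        + "Format-List");
        pb.redirectErrorStream(true);
        Process process = pb.start();

        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(process.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                System.out.println(line);   // e.g. "Caption : Microsoft Windows 10 Pro"
            }
        }
        process.waitFor();
    }
}
```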

Tools to manage semantic webs

I've seen a lot of frameworks for creating a semantic web (or rather the model beneath it). What tools are there to create a small semantic web or repository on the desktop, for example for personal information management?
Please include information on how easy these are to use for a casual user (in contrast to someone who has worked in this area for years). So I'd like to hear which tools can create a repository without a lot of types up front, and where you can type the nodes later, as you learn about your problem domain.
For personal semantic information management on the desktop there is NEPOMUK. There are two versions: one is embedded in KDE 4 and lets you tag, rate and comment on things such as files, folders, pictures, MP3s, etc. across all desktop applications.
The other version is written in Java and is OS-independent; it is more of a research prototype. It has more features, but is overall less stable.
For KDE-Nepomuk see http://nepomuk.kde.org/
For Java-Nepomuk see http://dev.nepomuk.semanticdesktop.org/ and http://dev.nepomuk.semanticdesktop.org/download/ for downloads (the DFKI version is better)
Extensive list of semantic web tools
Also check out Protege
If you need to create a small model, then I suggest you use TopBraid. I have used it for creating much larger models, and I know people who have used it to create humongous models. It comes packaged with a set of reasoners, provides the ability to plug in a custom reasoner, and, if you decide to make your model larger, you can even integrate TopBraid with a triple store like AllegroGraph.
And since it is based on Eclipse, getting started with it is relatively easy.
For developers who are spoiled by working in more mature programming languages like Java (IDEA, anyone?), TopBraid is the closest thing to an actual IDE.
Chandler is "a notebook you can organize, back up and share!" It seems to be pretty simple to use.
OS: Windows, Mac, Linux
