So, Modules are filled with Objects, only some of which are requirements. Links to non-requirements are meaningless... but naturally users insist on creating them regardless. Whether lazy, careless, or simply undertrained, our users are human and will often enter highly imperfect data.
What's the best way to forbid such mislinks? Is there a native way? I fear not, although it seems rather basic. Pop up an annoying warning whenever they try, via DXL?
We have an enumerated Attribute that will tell us if the target is a requirement or not.
You can use triggers to accomplish this!
For a very brief idea, see this relatively recent conversation
You will want a pre-create trigger on the link confirming whether its target object has the correct enumerated type. This could be a database-level trigger, or project level, or even module level if there is only one particularly bothersome section.
Keep in mind this is only for more up-to-date versions of DOORS. Older versions will require something a bit more complex; for example, a trigger that checks a module pre-close to see if any links have been created, and if so, whether they have 'valid' targets according to your criteria. You could have a post-open DXL script that creates an array of links and stores it in the DXL top context... but that might be a bit advanced.
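To make that concrete, here is a rough DXL sketch of the kind of check such a trigger (or a pre-close scan) would run against an object's outgoing links. The attribute name "Object Type" and the value "Requirement" are placeholders for your own enumerated attribute, and the trigger registration call itself varies between DOORS versions, so check the DXL reference manual for the exact trigger(...) signature:

    // Warn about outgoing links whose target is not a requirement.
    // "Object Type" and "Requirement" are placeholders for your enumerated attribute.
    Object src = current
    Object tgt
    Link lnk

    for lnk in src -> "*" do {
        tgt = target lnk
        if (null tgt) continue    // target module not open; load it first if needed
        if ((tgt."Object Type" "") != "Requirement") {
            warningBox("This object has a link to a non-requirement target.")
        }
    }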
I am currently developing a Ruby application that has a large number of different objects. As part of this application, I would like to add a reporting engine that allows a user to create custom reports on virtually any variable within the application - for example, they could create a report that shows what percentage of customers have a telephone number, or the absolute number of suppliers whose street name starts with an E. The point is, they should be able to create any report on the data in the app, regardless of how obscure, without needing to rely on it having been created in the application already.
My question is: how do I start creating a structure that allows this to happen? Will it be necessary to specify all possible variables that could be used as part of a report (e.g. I would need to specify that customers.count, customers.email_address and suppliers.addresses.street_name are all variables available to the reporting engine for the example above), or could these somehow be made available automatically?
If it is necessary to specify the variables, what would be the best way to do this?
I have searched for some resources on this, but have not yet found any - if anyone can recommend a source, it would also be appreciated.
Thanks!
Consider yourself warned that this likely violates YAGNI. I would highly recommend building reports first for the most common types of reports your users will want, so that you can make them usable and pretty. Doing this at the abstract level is an order of magnitude more complex, is error prone, may lead to some security issues if you're not careful, and will make it difficult to produce polished reports rather than generic-looking ones.
That said, take a look at something like Active Admin, which provides custom filters and data exports. You should be able to add custom scopes to have it do what you want, but if it still doesn't, then looking at the implementation should give you a good idea of what's involved.
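If you do go the route of explicitly declaring report variables (as in the customers.count / suppliers.addresses.street_name example in the question), one simple starting structure is a whitelist registry that maps each variable name to a query. This is only a minimal sketch, assuming ActiveRecord models named Customer and Supplier with the columns and associations shown:

    # Hypothetical whitelist of report variables; each name maps to a query.
    REPORT_VARIABLES = {
      "customers.count"            => -> { Customer.count },
      "customers.phone_percentage" => -> { Customer.where.not(phone: nil).count * 100.0 / Customer.count },
      "suppliers.street_starts_e"  => -> { Supplier.joins(:addresses).where("addresses.street_name LIKE ?", "E%").distinct.count }
    }

    # Runs a single report variable, refusing anything that was not registered.
    def run_report(name)
      handler = REPORT_VARIABLES.fetch(name) { raise ArgumentError, "Unknown report variable: #{name}" }
      handler.call
    end

Because every variable is registered explicitly, the engine can only run queries you have vetted, which also sidesteps the security concerns mentioned above; the downside is that each new variable has to be added by hand.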
I wonder what sort of things you look for when you start working on an existing, but new to you, system? Let's say that the system is quite big (whatever that means to you).
Some of the things that were identified are:
Where is a particular subroutine or procedure invoked?
What are the arguments, results and predicates of a particular function?
How does the flow of control reach a particular location?
Where is a particular variable set, used or queried?
Where is a particular variable declared?
Where is a particular data object accessed, i.e. created, read, updated or deleted?
What are the inputs and outputs of a particular module?
But if you look for something more specific, or if any of the above questions is particularly important to you, please share it with us :)
I'm particularly interested in something that could be extracted in dynamic analysis/execution.
I like to use a "use case" approach:
First, I ask myself "what's this software's purpose?": I try to identify how users are going to interact with the application;
Once I have some "use cases", I try to understand which objects are most involved and how they interact with other objects.
Once I've done this, I draw a UML-type diagram that describes what I've just learned for further reference. What happens after that depends on the task I've been assigned, i.e. modify the code, document the code, etc.
There is also the question of what motivation I have for learning the new system:
Bug fix/minor enhancement - In this case, I may focus solely on the portion of the system that performs the specific function that needs to be altered. This is a way to break down a huge system, but it is also a way to identify whether the issue is something I can fix or something I have to hand to the off-the-shelf company whose software we are using, e.g. a CRM, CMS, or ERP system can be a customized off-the-shelf system, so there are many pieces to it.
Project work - This would be the other case, and is where I'd probably try to build myself a view from 30,000 feet or so to know what the high-level components are and which areas of the system the project impacts. An example of this is where I join a company and work off an existing code base, but I don't have the luxury of the small focus of the previous case. Part of that view is to look for any patterns in the code in terms of naming conventions, project structure, etc., as this may be useful once I start changing code in the system. I'd probably do some tracing through the system and try to see where the uglier parts of the code are. By uglier I mean those parts that are kludge-like and may have some spaghetti code, because they were rushed when first written and are now being reworked heavily.
To my mind, another way to view this is to ask whether I'm going to spend days or weeks wrapping my head around the system, as in the second case, or whether it will hopefully take only a few hours, optimistically speaking, to get my footing and make the necessary changes.
Hi, I'm in the middle of setting up a TFS server and I want to set some check-in rules.
For example, I want to be able to set rules about method length, complexity and so on. I found NDepend very convenient; can I somehow use NDepend to run some rules on the files being checked in?
I also want to be able to bypass the rules sometimes.
Are there any blogs or discussions around this? If it won't work with NDepend, are there any other tools or approaches I can use?
I would be very careful about this. I worked at a place once that had strict method length rules. If Calculate(a,b,c) ended up 1.5 times the limit length, the devs would just move the last third of the function into Calculate2() and call it from Calculate(). All the active locals would become parameters, of course - sometimes there would be a dozen of them. The resulting mess passed the automated tests for method length but was definitely not better or more maintainable than the long methods would have been.
Would it have been nice if the devs had spotted something refactorable in the middle of the method, pulled it out and given it a good name? Yes it would. But systems are all game-able, and the sorts of "dammit I just want to check in and go home" changes that are made to comply with method length rules (among others) make the code worse. A lot worse.
Also, to bypass the rules, there's an option at check-in to say you are overriding the policy and why.
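For completeness, if you do go ahead with NDepend despite the caveats above, length and complexity rules are written as CQLinq queries, which your build or check-in policy can then evaluate. A minimal sketch, with purely illustrative thresholds:

    // Flag methods that are too long or too complex; thresholds are purely illustrative.
    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 40 || m.CyclomaticComplexity > 20
    select new { m, m.NbLinesOfCode, m.CyclomaticComplexity }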
I often need to have modal dialogs for editing properties or application configuration settings, but I'm never really happy about how to validate these, and present the validation results to the user.
Choices and tools are typically:
1. Design UI so that invalid choices are simply impossible - i.e. use "mask edits", range limits on spin-edits
2. Try and trap errors as they're found - immediate dialogs or feedback when a user has an invalid value entered somewhere (although, because this may be due to an incomplete entry, this can be visually distracting)
3. Detect errors on change of control focus
4. Validate entire dialog when OK is pressed, and present message box(es) showing what's wrong.
No.4 is typically the easiest and quickest to code, but I'm never really happy with it.
What good techniques have you found to handle this?
While this question is fairly generic, an ideal answer would be easily implementable in Delphi for Win32...
As with everything, it depends. :) I try to look at some of these from the user's perspective.
Number 1: I don't like mask edits personally, but things like range limits on spin edits, pre-populated combo boxes, etc. make a lot of sense for general sanity checking, and they make the user's life easier.
I think number 2 could make using the dialog painful for the user. They might not enter the information in the order you think they will, or might leave an incomplete field and come back to it at the end.
For validation, I use a combination of 3 and 4.
Depending on the field (e.g. required value), I might validate it on each key press and disable the OK button if it's invalid. You could get fancy and change the colour of the bad field or use some other kind of visible validator control. It's obvious to the user and doesn't interrupt their "flow".
Things that aren't as easy to check on the fly (e.g. calls to the server) are done once when the user hits OK.
Just an observation, but I have watched a lot of users populate dialog boxes (especially complex ones) and they DO NOT use the TAB key. They tend to click in/on edits, combos and radio buttons as they "think through" the answers or read from disparate documentation. This order will not be the same as the one you thought it would be! We as programmers are hopefully logical (logical, Captain, as Spock would say), but users... well...
One way that is nice (but requires effort) is to have each editor validate itself, either on change or on exit, and it simply changes colour if it is invalid. Your routine in the "OK button" code is then a simple matter of iterating through the control list and setting the focus to the first one that reports itself as "invalid" until none do.
I do work for the airline industry with a focus on credit card handling, and I have TTicketNumberEdit, TCardNumberEdit, TExpiryDateEdit, TFormOfPaymentEdit, etc. This works well because in some of these the validation is not simple. As mentioned, you need to put the effort in early on, but it pays off in complex dialogs.
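To make the iterate-and-focus routine described above concrete, here is a minimal Object Pascal sketch. IValidatable, TMyDialog and the GUID are illustrative assumptions, not part of the VCL:

    type
      // Each self-validating editor implements this interface (illustrative, not standard VCL).
      IValidatable = interface
        ['{A0B1C2D3-0000-0000-0000-000000000001}']
        function IsValid: Boolean;
      end;

    procedure TMyDialog.OKButtonClick(Sender: TObject);
    var
      i: Integer;
      Validatable: IValidatable;
    begin
      for i := 0 to ControlCount - 1 do
        if Supports(Controls[i], IValidatable, Validatable) and not Validatable.IsValid then
        begin
          (Controls[i] as TWinControl).SetFocus;  // jump to the first invalid editor
          Exit;                                   // keep the dialog open
        end;
      ModalResult := mrOk;                        // everything reports valid, close normally
    end;

(Recursing into container controls such as panels is left out for brevity.)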
I think N°4 is the best way to do the validation: in addition to being the easiest and quickest to code, you have all your validation logic in the same place, so if you need to connect to a database, compare two or more inputs, etc., everything is done only once.
Whereas:
N°1: this may be a nightmare to implement in some cases, and to update later
N°2/3: you have to be aware of all the UI events related to validation (input changes, focus changes, ...), which means heavy coding and hard debugging
The JVCL offers a component set for validating input (TJvValidators etc.). It marks fields that have no valid input and shows a hint to the user when he moves the mouse over that marker. (I think I read about a similar functionality in dotNET but I have never used it.)
While I like the concept and have actually used these components in a number of dialogs, I don't like the implementation much: it is a hog on CPU usage and the pre-defined validators that come with the JVCL are not really useful. Of course, having access to the JVCL SVN repository, I could just stop complaining and start improving the components...
Don't forget to take a look at Jim's great CodeRage session: Stop Annoying Your Users!
He has a section on input validation...
IMO, option #1 should be done as a matter of course, not optional, and the interface simplified as far as you can take it while still allowing the user to input the detail needed for the application. I don't like using masked edits, though. If I want a user to enter a number, for example, I'll just use a textbox, then try to parse the number when I go to save the field value.
For direct validation, I use #4 exclusively, unless there's a special case that calls for using one of the other methods. I like to let my users modify their inputs if they change their mind, so they can make a mistake and go back and fix it on their own because they already know there's an error in their input. I do help them out if possible, though (i.e., if a form field is empty or invalid and they hit OK, I will focus/select the offending field after showing an error message).
Doing #2 in a Windows Forms application is rarely pulled off well on its own, so I would just avoid it altogether as a primary means of validation. It could, however, be combined with #4 effectively, but I think in most cases, that would be overkill.
I believe several of us have already worked on a project where not only the UI but also the data has to be supported in different languages. For instance, being able to provide and store a translation for what I'm writing here.
What's more, I also believe several of us have some time-triggered events (such as expiring membership access) where the user's location should be taken into account to calculate, say, midnight according to the right time zone.
Finally, there's also the need to support Right to Left user interfaces for certain languages, and the use of different encodings when reading submitted data files (parsing text and Excel data, for instance).
Currently I'm storing all my translations for all my entities in a single table (not so practical, as it is very hard to find your way around when writing SQL queries to look into a problem), setting UI translations mainly in satellite assemblies, and supporting neither time zones nor Right to Left design.
What are your experiences when dealing with these challenges?
[Edit]
I assume most people think that this level of multicultural requirement only comes up when building a huge project. As a matter of fact, if you think about an online survey where:
Answers will be collected only until midnight
The questionnaire definition and part of the answers come from a text file (in any language), as well as the translations
Questions and response options must be displayed in several languages, according to who is accessing it
Reports also have to be shown and generated in several different languages
As one can see, we do not have to go very far in an application to run into this kind of requirement.
[Edit2]
Just found out my question is a duplicate
i18n in your projects
The first answer (when ordering by vote) is so comprehensive that I'll have to get at least a part of it implemented someday.
Be very very cautious. From what you say about the i18n features you're trying to implement, I wonder if you're over-reaching.
Notice that the big-name web applications (e.g. eBay, amazon.com, Yahoo, the BBC) actually deliver separate apps in each language they want to support. Each of these web applications consumes a common core set of services. Don't be surprised if the business needs of two different countries that even speak the same language (e.g. UK & US) are different enough that you do need a separate app for each.
On the other hand, you might need to become like the next amazon.com. It's difficult to deliver a successful web application in one language, much less many. You should not be afraid to favor one user population (say, your Asian-language speakers) over others if this makes sense for your web app's business needs.
Go slow.
Think everything through, then really think about what you're doing again. Bear in mind that the more you add (like Right to Left) the longer your QA cycle will be.
The primary piece to your puzzle will be extensive use of interfaces on the code side, and either one data source that gets passed through a translator to whichever languages need to be supported, or separate data sources for each language.
The time issues can be handled by the interfaces, because presumably you will want things to function in the same fashion, but differ in the implementation details. To a large extent, a similar thought process can be applied to the creation of the interface when adjusting it to support differing languages. When you get down to it, skinning is exactly this, where the content being skinned is the interface, and the look/feel is the implementation.
Do what your users need. For instance, most programmers understand English, so there is no sense in translating posts on this site. If many of your users need a translation, add a new table column with the language id, and another column to link a translated row to its original. If your target audience includes users from the Middle East, implement Right to Left. If time precision down to the hour is critical, add a time zone column to the user table, and so on.
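As a rough sketch of that translated-row idea, with hypothetical table and column names (SQL Server flavoured):

    -- Each row carries its language; a translated row points back at its original.
    ALTER TABLE Post ADD LanguageId INT NULL;
    ALTER TABLE Post ADD OriginalPostId INT NULL;  -- NULL for originals, set for translations

    -- All translations of post 42:
    SELECT * FROM Post WHERE OriginalPostId = 42;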
If you're on *NIX, use gettext. Most languages I've used have some level of support; PHP's is pretty good, for instance.
I'll describe what has been done in my project (it wasn't my original architecture, but I liked it anyway).
Providing Translation Support
Text which needs to be translated have been divided into three different categories:
Error text: Like errors which happen deep in the application business layer
UI Text: Text which is shown in the User interface (labels, buttons, grid titles, menus)
User-defined Text: text which needs to be translatable according to the final user's preferences (that is, the user creates a question in a survey and can also create a translated version of that survey)
For each category, the schema used to provide the translation service is different, so we have:
Error Text: A library with static functions which access resource files
UI Text: A "Helper" class which, linked to the view engine, provides translations from remote assemblies
User-defined Text: A table in the database which provides translations (keyed by the typeID of the translated entity and the object id) and is linked to the entity via a 1-to-N relationship (sketched below)
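For the user-defined text case, the table looks roughly like this; the names are illustrative rather than the actual schema:

    -- Translations keyed by entity type, entity id and language.
    CREATE TABLE EntityTranslation (
        EntityTypeId   INT            NOT NULL,  -- which kind of entity (question, response option, ...)
        EntityId       INT            NOT NULL,  -- id of the translated entity
        LanguageCode   VARCHAR(10)    NOT NULL,  -- e.g. 'en-US', 'pt-BR'
        TranslatedText NVARCHAR(4000) NOT NULL,
        PRIMARY KEY (EntityTypeId, EntityId, LanguageCode)
    );

    -- Typical lookup: the Brazilian Portuguese text for question 42.
    SELECT TranslatedText
    FROM EntityTranslation
    WHERE EntityTypeId = 3 AND EntityId = 42 AND LanguageCode = 'pt-BR';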
I haven't, however, attacked the other obvious problems, such as dealing with time zones, different layouts, and picture translation (if this is really necessary). Has anyone tackled this problem in a different way?
Has anyone ever tackled the other i18n problems?