Semantics of OMG IDL attributes - corba

I'm working on the verification of an interface formalised in the OMG's IDL, and am having problems finding a definitive answer on the semantics of getting an attribute value. In an interface, I have an entry...
interface MyInterface {
    readonly attribute SomeType someName;
};
I need to know if it is acceptable for someObj.someName != someObj.someName to be true (where someObj is an instance of an object implementing MyInterface).
All I can find in OMG documentation in regards to attributes is...
(5.14) An attribute definition is logically equivalent to declaring a
pair of accessor functions; one to retrieve the value of the attribute
and one to set the value of the attribute.
...
The optional readonly keyword indicates that there is only a single
accessor function—the retrieve value function.
Ergo, I'm forced to conclude that IDL attributes need not be backed by a data member, and are free to return basically any value the interface deems appropriate. Can anyone with more experience in IDL confirm that this is indeed the case?
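For reference, under the standard IDL-to-Java mapping the attribute becomes nothing more than an accessor method on the generated operations interface. A simplified sketch, assuming for illustration that SomeType maps to String (the real compiler output also includes stubs, helpers and holders):

// Roughly what an IDL compiler generates for the interface above.
public interface MyInterfaceOperations {
    // readonly attribute someName: only the accessor is generated, and
    // nothing requires the servant to back it with a stored field.
    String someName();
}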

As we know, an IDL interface will always be represented by a remote object. An attribute is no more than syntactic sugar for getAttributeName() and setAttributeName(). Personally, I don't like to use attributes because they are harder to understand than plain get/set methods.
CORBA also has valuetypes, an object-by-value construct. They are very useful because, unlike structs, they allow us to inherit from other valuetypes, abstract interfaces or abstract valuetypes. Usually, when I'm modeling objects with a lot of get/set methods I prefer to use valuetypes instead of interfaces.
Going back to your question, one way to understand 'attribute' is to look at C#: IIOP.NET maps an IDL 'attribute' to a property. A property looks like a public member, but it is really a pair of get/set methods.
Answering your question: I can't know whether someObj.someName != someObj.someName will be true or false without seeing the someObj implementation. I will add two examples to give an idea of what we might see.
Example 1) With this implementation, the expression above is always false:
private static int i;

public String getSomeName() {
    return "myName" + i; // i never changes, so every call returns the same value
}
Example 2) The implementation below can make the expression true or false, depending on concurrency (a race condition) between clients:
private String someName;

public String getSomeName() {
    return this.someName;
}

public void setSomeName(String name) {
    this.someName = name;
}
The first client evaluates someObj.someName() != someObj.someName(); a second client could call setSomeName() in between the first client's two calls.

It is perfectly acceptable for someObj.someName != someObj.someName to be true, odd as it may seem.
The reason (as others alluded to) is that attributes map to real RPC functions. In the case of readonly attributes they map to just a getter; for non-readonly attributes both a getter and a setter are implicitly created for you when the IDL gets compiled. But the important thing to know is that an IDL attribute has a dynamic, server-dictated, RPC-driven value.
IDL specifies a contract for distributed interactions which can be made at runtime between independent, decoupled entities. Almost every interaction with an IDL-based type will lead to an RPC call and any return value will be dependent on what the server decides to return.
If the attribute is, say, currentTime then you'll perhaps get the server's current clock time with each retrieval of the value. In this case, someObj.currentTime != someObj.currentTime will very likely always be true (assuming the time granularity used is smaller than the combined roundtrip time for two RPC calls).
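A hypothetical servant for such an attribute might look like this (the Clock interface, its currentTime attribute, and the generated operations interface are all invented here for illustration):

// Assumed to be generated by the IDL compiler from:
//   interface Clock { readonly attribute long long currentTime; };
interface ClockOperations {
    long currentTime();
}

// Hypothetical servant: the readonly attribute is computed on every call,
// so two consecutive remote reads can easily observe different values.
class ClockServant implements ClockOperations {
    @Override
    public long currentTime() {
        return System.currentTimeMillis(); // no backing data member at all
    }
}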
If the attribute is instead currentBankBalance then you can still have someObj.currentBankBalance != someObj.currentBankBalance be true, because there may be other clients running elsewhere who are constantly modifying the attribute via the setter function, so you're dealing with a race condition too.
All that being said, if you take a very formal look at the IDL spec, it contains no language that actually requires that the setting/accessing of an attribute should result in an RPC call to the server. It could be served by the client-side ORB. In fact, that's something which some ORB vendors took advantage of back in the CORBA heyday. I used to work on the Orbix ORB, and we had a feature called Smart Proxies - something which would allow an application developer to overload the ORB-provided default client proxies (which would always forward all attribute calls to the server hosting the target object) with custom functionality (say, to cache the attribute values and return a local copy without incurring network or server overhead).
In summary, you need to be very clear and precise about what you are trying to verify formally. Given the dynamic and non-deterministic nature of the values attributes can return (and the fact that client ORBs might behave differently from each other and still remain compliant with the CORBA spec), you can only reliably expect IDL attributes to map to getters and setters that can be used to retrieve or set a value. There is simply no predictability surrounding the actual values returned.

Generally, an attribute does not need to be backed by any data member on the server, although a particular language mapping might impose such a convention.
So in the general case it can happen that someObj.someName != someObj.someName. For instance, the attribute might be the last access time.
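A hypothetical sketch of that last example (names invented; assume the IDL declares a readonly attribute of type long long holding milliseconds since the epoch):

// Assumed to be generated from:
//   interface AuditedResource { readonly attribute long long lastAccess; };
interface AuditedResourceOperations {
    long lastAccess();
}

// Hypothetical servant: reading the attribute has a side effect (it records
// the access), so two consecutive reads by the same client will differ.
class AuditedServant implements AuditedResourceOperations {
    private long lastAccessMillis = System.currentTimeMillis();

    @Override
    public synchronized long lastAccess() {
        long previous = lastAccessMillis;            // time of the previous read
        lastAccessMillis = System.currentTimeMillis();
        return previous;
    }
}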

Related

Encapsulation - Why am I using getters and setters to make my data members public if I already declared them private in the class?

I just want to ask: suppose we have one class with two private data members, say:
class Employee {
    private int empid;
    private String empname;
}
I am declaring them private, which means I can use them in the Employee class only. So what is the need to create a getter and setter for both data members and make those public?
Hope you got my problem.
This is an excellent question. Often you see code examples that make members private but expose them via a getter/setter pair, without the getter/setter doing anything other than setting the corresponding member.
In my book this is not encapsulation at all. You are no better off than just making the members public. Although a lot of people are uneasy doing that, they will happily provide accessors automatically for all their members.
One reason to provide accessors is the ability to do input validation. E.g. if your empids have a checksum, you could enforce it in the setter, something that is not possible with direct access to the member.
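A minimal sketch of that idea, with a made-up checksum rule (here: the last digit of empid must equal the sum of the other digits modulo 10):

public class Employee {
    private int empid;

    public void setEmpid(int empid) {
        if (!hasValidChecksum(empid)) {
            throw new IllegalArgumentException("Invalid empid checksum: " + empid);
        }
        this.empid = empid;
    }

    public int getEmpid() {
        return empid;
    }

    // Hypothetical checksum rule, purely for illustration.
    private static boolean hasValidChecksum(int id) {
        int checkDigit = id % 10;
        int sum = 0;
        for (int rest = id / 10; rest > 0; rest /= 10) {
            sum += rest % 10;
        }
        return sum % 10 == checkDigit;
    }
}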
In my opinion it would be better to think about the role this object will play and see how it can achieve that role with a minimum of accessors. Otherwise your code will probably violate the Law of Demeter.
You are absolutely right that creating setters/getters and making the fields public violate encapsulation in the same way; therefore, if you do want to encapsulate your private fields, presumably because you are working in an object-oriented environment, you do not want to use either of those things.
To your question of why there is a need to create setters/getters: most projects (consciously or unconsciously) do not base their designs on object orientation. There are other paradigms, where data and functions are separated, and thus encapsulation plays a minor role if any.
In the Java world it is common to have pure (or very close to pure) data structures (beans), and Services/Components/EJBs/etc. that work on these beans (and basically have access to all their fields). Often these architectures split the function part further into layers like Presentation, Business and Persistence (3-tier architectures), or create explicit control procedures that have access to all the relevant fields (like how MVC is often done).
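A schematic sketch of that split (the class and method names here are invented for illustration):

// A "bean": essentially a data structure with accessors and no behaviour.
class EmployeeBean {
    private int empid;
    private String empname;

    public int getEmpid() { return empid; }
    public void setEmpid(int empid) { this.empid = empid; }
    public String getEmpname() { return empname; }
    public void setEmpname(String empname) { this.empname = empname; }
}

// A service holding the behaviour, operating on the bean from the outside.
class PayrollService {
    String paySlipHeader(EmployeeBean employee) {
        return employee.getEmpid() + " - " + employee.getEmpname();
    }
}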
Whether one approach is better than the other is probably a subjective discussion, but the short answer is: it's usually a different paradigm (not OO), and that is why setters/getters get created.

What is the preferred way of getting value in swift, var vs. func?

What's the preferred way of getting a value in swift?
Using a read-only variable
var getString: String? {
    return "Value"
}
or using a function?
func getString() -> String? {
    return "Value"
}
Also, is there a performance difference between the two?
First, neither of these would be appropriate names. They should not begin with get. (There are historical Cocoa meanings for a get prefix that you don't mean, and so even if you mean "go out to the internet and retrieve this information" you'd want to use something like fetch, but certainly not in the case you've given.)
These issues are addressed in various sections of the Swift API Design Guidelines. First, a property is a property, whether it is stored or computed. So there is no difference in design between:
let someProperty: String?
and
var someProperty: String? { return "string" }
You should not change the naming just because it's computed. We can then see in the guidelines:
The names of other types, properties, variables, and constants should read as nouns.
Furthermore, as discussed in The Swift Programming Language:
Properties associate values with a particular class, structure, or enumeration. Stored properties store constant and variable values as part of an instance, whereas computed properties calculate (rather than store) a value.
So if this is best thought of as a value associated with the type (one of its "attributes"), then it should be a property (computed or stored). If it is something that is not really "associated" with the type (something that the caller expects this type to retrieve from elsewhere for instance), then it should be a method. Again from the Design Guidelines:
Document the complexity of any computed property that is not O(1). People often assume that property access involves no significant computation, because they have stored properties as a mental model. Be sure to alert them when that assumption may be violated.
If "stored properties as a mental model" doesn't match what you mean to express, then it probably shouldn't be a property in the first place (and you need to document the discrepancies if you make it a property anyway). So, for instance, accessing a property should generally have no visible side effects. And if you read from a property immediately after writing to it, you should get back the value you wrote (again, as a general mental model without getting into the weeds of multi-threaded programming).
If you use a method, it can often result in a different appropriate name. See the "Strive for Fluent Usage" section of the Design Guidelines for more on that. There are several rules for selecting good method names. As a good example of when to use properties vs methods, consider the x.makeIterator(), i.successor() and x.sorted() examples and think about why these are methods and why they're named as they are. This is not to say there is exactly one answer in all cases, but the Design Guidelines will give you examples of what the Swift team intends.
With no discernible difference in performance, make the choice for readability:
When an attribute behaves like a variable, use a property. Your example falls into this category.
When reading an attribute changes object state, use a function. This includes
Attributes that behave like a factory, i.e. returns new objects when you access them
Attributes that produce new values, such as random number generators
Peripheral readers
Input iterators
Of course, if the attribute is computed based on one or more arguments, you have no choice but to use a function.
Just as a note: If you want to use both getters and setters in Swift you can do as follows:
private var myPrivateString = "My string"

var myString: String {
    get {
        return myPrivateString
    }
    set {
        myPrivateString = newValue
    }
}
This way you can access your value as if it was a regular variable, but you can do some "under-the-hood magic" in your getters and setters

IList<> returned type instead of List<>

I found this method in one of the examples for dependency-injecting a controller. What is the reason for using the interface type IList<> instead of List<>?
public IList<string> GetGenreNames()
{
    var genres = from genre in storeDB.Genres
                 select genre.Name;

    return genres.ToList();
}
For the actual reason, you'd have to ask the original programmer of that method.
We can come up with a plausible reason however.
Input parameters should be as open and general as possible. Don't take an array if you can use any collection type that can be enumerated over (i.e. prefer IEnumerable<int> over List<int> if all you're going to do is a foreach).
Output parameters and return types should be as specific as possible, but try to return the most usable and flexible data type possible without sacrificing performance or security. Don't return a collection that can only be enumerated over (like IEnumerable<int>) if you can return an array or a list you created specifically for the results (like int[] or List<int>).
These guidelines are listed many places on the internet (in different words though) and are there to help people write good APIs.
The reason why IList<T> is better than List<T> is that you're returning a collection that can be:
Enumerated over (streaming access)
Accessed by index (random access)
Modified (Add, Delete, etc.)
If you were to return List<T> you wouldn't actually add any value, except to return a concrete type instead of just any type that happens to implement that interface. As such, it is better to return the interface and not the concrete type. You don't lose any useful features on the outside and you still retain the possibility of replacing the actual object returned with something different in the future.
Targeting an interface is always better than targeting a concrete type.
So if you return IList, anything implementing IList can be returned, which gives better separation.
Check this out for more info:
Why is it considered bad to expose List<T>?
It's a somewhat basic rule in OOP. It's good to have an interface so your clients (the callers of GetGenreNames) only need to know how to call it (the function signature, the only thing to remember, rather than implementation details, etc.) to get serviced.
Programming to an interface supports all the goodies of OOP: it's more generalized, maintains separation of concerns, and is more reusable.

OpenRasta: Uri seems to be irrelevant for handler selection

When registering two handlers for the same type, but with different URIs, the handler selection algorithm doesn't seem to check the URI when it determines which handler to use.
If you run the program below, you'll notice that only HandlerOne is invoked (twice). It does not matter whether I ask for "/one" or "/two", the latter of which is supposed to be handled by HandlerTwo.
Am I doing something wrong or is this something to be fixed in OpenRasta? (I'm using 2.0.3.0 btw)
class Program
{
    static void Main(string[] args)
    {
        using (InMemoryHost host = new InMemoryHost(new Configuration()))
        {
            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/one")
            });

            host.ProcessRequest(new InMemoryRequest
            {
                HttpMethod = "GET",
                Uri = new Uri("http://x/two")
            });
        }
    }
}

class Configuration : IConfigurationSource
{
    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/one").HandledBy(typeof(HandlerOne));

            ResourceSpace.Has.ResourcesOfType(typeof(object))
                .AtUri("/two").HandledBy(typeof(HandlerTwo));
        }
    }
}

class HandlerOne
{
    public object Get() { return "returned from HandlerOne.Get"; }
}

class HandlerTwo
{
    public object Get() { return "returned from HandlerTwo.Get"; }
}
Update
I have a feeling that I could accomplish something similar using UriNameHandlerMethodSelector as described on http://trac.caffeine-it.com/openrasta/wiki/Doc/Handlers/MethodSelection, but then I'd have to annotate each handler method and also do AtUri().Named(), which looks like boilerplate to me and I'd like to avoid that. Isn't AtUri(X).HandledBy(Y) already making the connection between X and Y clear?
Eugene,
You should never have multiple registrations like that on the same resource type, and you probably never need to have ResourcesOfType<object> associated with URIs at all; that will completely screw with the resolution algorithms used in OpenRasta.
If you're mapping two different things, create two resource classes. Handlers and URIs are only associated by resource class, and if you fail at designing your resources OpenRasta will not be able to match the two; this is by design.
If you want to persist down that route, and I really don't think you should, then you can register various URIs to have a name, and hint on each of your methods that the name ought to be handled using HttpOperation(ForUriName=blah). That piece of functionality is only there for those very, very rare scenarios where you do need to opt out of the automatic method resolution.
Finally, as OpenRasta is a composable framework, you shouldn't have to go and hack around existing classes; you ought to plug yourself into the framework to ensure you override the components you don't want and replace them with things you code yourself. In this case, you could simply write a contributor that replaces the handler selection with your own model if you don't like the defaults and want an MVC-style selection system. Alternatively, if you want certain methods to be selected rather than others, you can remove the existing operation selectors and replace them (or complement them) with your own. That way you will rely on published APIs to extend OpenRasta and your code won't be broken in the future. I can't give that guarantee if you fork and hack existing code.
As Seb explained, when you register multiple handlers with the same resource type, OpenRasta treats the handlers as one large concatenated class. It therefore guesses (that's the best way to describe it) which potential GET (or other HTTP verb) method to execute, whichever it thinks is most appropriate. This isn't going to be acceptable from the developer's perspective and must be resolved.
In my use of OpenRasta I have needed to register the same resource type with multiple handlers. When retrieving data from a well-normalised relational database you are bound to get the same type of response from multiple requests. This happens when creating multiple queries (in LINQ) to retrieve data from either side of a one-to-many relation, which of course is important to the whole structure of the database.
Taking advice from Seb, and hoping I've implemented his suggestion correctly, I have taken the database model class and built a derived class from it in a resources namespace for each case where a duplicate resource type would otherwise have been introduced.
ResourceSpace.Has.ResourcesOfType<IList<Client>>()
    .AtUri("/clients").And
    .AtUri("/client/{clientid}").HandledBy<ClientsHandler>().AsJsonDataContract();

ResourceSpace.Has.ResourcesOfType<IList<AgencyClient>>()
    .AtUri("/agencyclients").And
    .AtUri("/agencyclients/{agencyid}").HandledBy<AgencyClientsHandler>().AsJsonDataContract();
Client is my Model class which I have then derived AgencyClient from.
namespace ProductName.Resources
{
    public class AgencyClient : Client { }
}
You don't even need to cast the base class received from your LINQ-to-SQL data access layer into your derived class. The LINQ Cast method isn't intended for that kind of thing anyway, and although the following code will compile it is WRONG: you will receive a runtime exception, 'LINQ to Entities only supports casting Entity Data Model primitive types.'
Context.Set<Client>().Cast<AgencyClient>().ToList(); //will receive a runtime error
More conventional casts like (AgencyClient) won't work either, as converting to a subclass isn't easily possible in C# (see "Convert base class to derived class").
Using the as operator will again compile and will even run, but it will give null values in the returned list and therefore won't retrieve the intended data.
Context.Set<Client>().ToList() as IEnumerable<AgencyClient>; //will compile and run but will return null
I still don't understand how OpenRasta handles the differing return class from the handler to the ResourceType but it does, so let's take advantage of it. Perhaps Seb might be able to elaborate?
OpenRasta therefore treats these classes separately and the right handler methods are executed for the URIs.
I patched OpenRasta to make it work. These are the files I touched:
OpenRasta/Configuration/MetaModel/Handlers/HandlerMetaModelHandler.cs
OpenRasta/Handlers/HandlerRepository.cs
OpenRasta/Handlers/IHandlerRepository.cs
OpenRasta/Pipeline/Contributors/HandlerResolverContributor.cs
The main change is that now the handler repository gets the registered URIs in the initializing call to AddResourceHandler, so when GetHandlerTypesFor is called later on during handler selection, it can also check the URI. Interface-wise, I changed this:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey);
    // ...
}
to that:
public interface IHandlerRepository
{
    void AddResourceHandler(object resourceKey, IList<UriModel> resourceUris, IType handlerType);
    IEnumerable<IType> GetHandlerTypesFor(object resourceKey, UriRegistration selectedResource);
    // ...
}
I'll omit the implementation for brevity.
This change also means that OpenRasta won't waste time on further checking of handlers (their method signatures etc.) that are not relevant to the request at hand.
I'd still like to get other opinions on this issue, if possible. Maybe I just missed something.

A pragmatic view on private vs public

I've always wondered on the topic of public, protected and private properties. My memory can easily recall times when I had to hack somebody's code, and having the hacked-upon class variables declared as private was always upsetting.
Also, there were (more) times I've written a class myself, and had never recognized any potential gain of privatizing the property. I should note here that using public vars is not in my habit: I adhere to the principles of OOP by utilizing getters and setters.
So, what's the whole point in these restrictions?
The use of private and public is called Encapsulation. It is the simple insight that a software package (class or module) needs an inside and an outside.
The outside (public) is your contract with the rest of the world. You should try to keep it simple, coherent, obvious, foolproof and, very important, stable.
If you are interested in good software design the rule simply is: make all data private, and make methods only public when they need to be.
The principle behind hiding the data is that the sum of all fields in a class defines the object's state. For a well-written class, each object should be responsible for keeping a valid state. If part of the state is public, the class can never give such guarantees.
A small example, suppose we have:
class MyDate
{
    public int y, m, d;

    public void AdvanceDays(int n) { ... } // complicated month/year overflow

    // other utility methods
};
You cannot prevent a user of the class from ignoring AdvanceDays() and simply doing:
date.d = date.d + 1; // next day
But if you make y, m, d private and test all your MyDate methods, you can guarantee that there will only be valid dates in the system.
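For illustration, a sketch of what the encapsulated version might look like, written in Java here with a deliberately simplified validation and date-advancing rule (the names mirror the example above):

public class MyDate {
    private int y, m, d;

    public MyDate(int y, int m, int d) {
        set(y, m, d);
    }

    // The only way to change the state; always leaves a valid date behind.
    public void set(int y, int m, int d) {
        if (m < 1 || m > 12 || d < 1 || d > daysInMonth(y, m)) {
            throw new IllegalArgumentException("Invalid date");
        }
        this.y = y;
        this.m = m;
        this.d = d;
    }

    public void advanceDays(int n) {
        for (int i = 0; i < n; i++) {
            if (d < daysInMonth(y, m)) {
                d++;
            } else {
                d = 1;
                if (m < 12) { m++; } else { m = 1; y++; }
            }
        }
    }

    private static int daysInMonth(int y, int m) {
        switch (m) {
            case 4: case 6: case 9: case 11: return 30;
            case 2: return isLeap(y) ? 29 : 28;
            default: return 31;
        }
    }

    private static boolean isLeap(int y) {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }
}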
The whole point is to use private and protected to prevent exposing internal details of your class, so that other classes only have access to the public "interfaces" provided by your class. This can be worthwhile if done properly.
I agree that private can be a real pain, especially if you are extending classes from a library. A while back I had to extend various classes from the Piccolo.NET framework, and it was refreshing that they had declared everything I needed as protected instead of private, so I was able to extend everything I needed without having to copy their code and/or modify the library. An important take-away lesson from that is that if you are writing code for a library or other "re-usable" component, you really should think twice before declaring anything private.
The keyword private shouldn't be used to privatize a property that you want to expose, but to protect the internal code of your class. I find it very helpful because it helps you separate the portions of your code that must be hidden from those that can be accessible to everyone.
One example that comes to my mind is when you need to do some sort of adjustment or checking before setting/getting the value of a private member. In that case you'd create a public setter/getter with some logic (a null check or some other calculation) instead of accessing the private variable directly and always having to handle that logic in your code. It helps with code contracts and what is expected.
Another example is helper functions. You might break down some of your bigger logic into smaller functions, but that doesn't mean you want everyone to see and use those helper functions; you only want them to access your main API functions.
In other words, you want to hide some of the internals in your code from the interface.
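A small Java sketch of that idea (the class, thresholds and conversion are invented for illustration):

public class TemperatureSensor {
    private double lastCelsius;

    // Public API: the only entry points callers need to know about.
    public void record(double celsius) {
        if (!isPlausible(celsius)) {
            throw new IllegalArgumentException("Implausible reading: " + celsius);
        }
        this.lastCelsius = celsius;
    }

    public double getLastFahrenheit() {
        return toFahrenheit(lastCelsius);
    }

    // Private helpers: implementation details hidden from the interface.
    private static boolean isPlausible(double celsius) {
        return celsius > -90.0 && celsius < 60.0;
    }

    private static double toFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}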
See some videos on APIs, such as this Google talk.
Having recently had the extreme luxury of being able to design and implement an object system from scratch, I took the policy of forcing all variables to be (equivalent to) protected. My goal was to encourage users to always treat the variables as part of the implementation and not the specification. OTOH, I also left in hooks to allow code to break this restriction as there remain reasons to not follow it (e.g., the object serialization engine cannot follow the rules).
Note that my classes did not need to enforce security; the language had other mechanisms for that.
In my opinion the most important reason for using private members is hiding the implementation, so that it can be changed in the future without affecting descendants.
Some languages - Smalltalk, for instance - don't have visibility modifiers at all.
In Smalltalk's case, all instance variables are always private and all methods are always public. A developer indicates that a method's "private" - something that might change, or a helper method that doesn't make much sense on its own - by putting the method in the "private" protocol.
Users of a class can then see that they should think twice about sending a message marked private to that class, but still have the freedom to make use of the method.
(Note: "properties" in Smalltalk are simply getter and setter methods.)
I personally rarely make use of protected members. I usually favor composition, the decorator pattern or the strategy pattern. There are very few cases in which I trust a subclass(ing programmer) to handle protected variables correctly. Sometimes I have protected methods to explicitly offer an interface specifically for subclasses, but these cases are actually rare.
Most of the time I have an abstract base class with only public pure virtuals (talking C++ now), and implementing classes implement these. Sometimes they add some special initialization methods or other specific features, but the rest is private.
First of all, 'properties' can refer to different things in different languages. For example, in Java you would mean instance variables, whilst C# has a distinction between the two.
I'm going to assume you mean instance variables since you mention getters/setters.
The reason as others have mentioned is Encapsulation. And what does Encapsulation buy us?
Flexibility
When things have to change (and they usually do), we are much less likely to break the build by properly encapsulating properties.
For example we may decide to make a change like:
// before: simply returns the stored field
int getFoo()
{
    return foo;
}

// after: the value is now derived from other fields
int getFoo()
{
    return bar + baz;
}
If we had not encapsulated 'foo' to begin with, then we'd have much more code to change. (than this one line)
Another reason to encapsulate a property, is to provide a way of bullet-proofing our code:
void setFoo(int val)
{
    // validate before accepting the new value
    if (val < 0)
        throw MyException(); // or silently ignore
    foo = val;
}
This is also handy as we can set a breakpoint in the mutator, so that we can break whenever something tries to modify our data.
If our property was public, then we could not do any of this!
