I'm working on a profile switching addon and came across nsICategoryManager.
I was wondering what is this? What are some practical uses for it?
I read the MDN article but can't think of any uses for it.
The purpose of nsICategoryManager is to add entries (typically XPCOM components) to categories. The manager itself merely provides the registration mechanism; how the categories are used depends entirely on the code that reads out the category entries. For example, there is the profile-after-change category for components that need to be activated when Firefox starts up.
Most extensions should no longer be using nsICategoryManager explicitly; adding a category entry can be done with a line in chrome.manifest:
category profile-after-change MyComponent @foobar/mycomponent;1
This will call nsICategoryManager.addCategoryEntry() implicitly when the extension is activated.
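For reference, here is a rough sketch of the explicit equivalent, reusing the placeholder names from the manifest line above:

const catMan = Components.classes["@mozilla.org/categorymanager;1"]
                         .getService(Components.interfaces.nsICategoryManager);
catMan.addCategoryEntry("profile-after-change",  // category
                        "MyComponent",           // entry name
                        "@foobar/mycomponent;1", // value: the component's contract ID
                        false,                   // persist: don't survive restarts
                        true);                   // replace: overwrite an existing entry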
Edit: Just out of curiosity, I decided to search for nsCategoryCache in the Firefox source code to see what other categories there are. Here is the list:
"content-policy" for nsIContentPolicy instances.
"net-content-sniffers" and "content-sniffing-services" for nsIContentSniffer instances.
"vacuum-participant" for mozIStorageVacuumParticipant instances.
"bookmark-observers" for nsINavBookmarkObserver instances.
"history-observers" for nsINavHistoryObserver instances.
"idle-daily" for observers managed by nsIIdleService.
These are only the categories being cached and monitored for changes; the complete list is much longer.
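If you want to explore these yourself, a category can be enumerated at runtime. A minimal sketch, with Cc/Ci being the usual shorthands for Components.classes and Components.interfaces:

const catMan = Cc["@mozilla.org/categorymanager;1"]
                 .getService(Ci.nsICategoryManager);
const entries = catMan.enumerateCategory("profile-after-change");
while (entries.hasMoreElements()) {
  const name = entries.getNext().QueryInterface(Ci.nsISupportsCString).data;
  // getCategoryEntry() returns the entry's value, typically a contract ID
  dump(name + " -> " + catMan.getCategoryEntry("profile-after-change", name) + "\n");
}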
When I use a map constructor like:
Person p = new Person(name: "Bob")
through something that is called via a grails.gsp.PageRenderer, the field values are not populated. When I use an empty constructor and then set the fields individually like:
Person p = new Person()
p.name = "Bob"
it succeeds. When I use the map constructor via a render call, it also succeeds.
Any ideas as to why this is the case?
Sample project is here in case anyone wants to dig deeper: https://github.com/danduke/constructor-test/
Actual use case, as requested by Jeff below:
I have a computationally expensive view to render. Essentially it's a multi-thousand page (when PDF'd) view of some data.
This view can be broken into smaller pieces, and I can reliably determine when each has changed.
I render one piece at a time, submitting each piece to a fixed size thread pool to avoid overloading the system. I left this out of the example project, as it has no bearing on the results.
I cache the rendered results and evict them by key when data in that portion of the view has changed. This is why I am using a page renderer (see the sketch after this list).
Each template used may make use of various tag libraries.
Some tag libraries need to load data from other applications in order to display things properly (actual use case: loading preferences from a shared repository for whether particular items are enabled in the view)
When loaded, these items need to be turned into an object. In my case, it's a GORM object. This is why I am creating a new object at all.
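Roughly, the render-and-cache step looks like this (a sketch only; the pool size, cache, and template path are illustrative, and pageRenderer is the injected grails.gsp.PageRenderer bean):

import java.util.concurrent.*

ExecutorService pool = Executors.newFixedThreadPool(4)               // fixed-size pool
ConcurrentHashMap<String, String> cache = new ConcurrentHashMap<String, String>()

void renderPiece(grails.gsp.PageRenderer pageRenderer, String pieceKey, Map model) {
    pool.submit {
        String html = pageRenderer.render(template: '/pieces/piece', model: model)
        cache.put(pieceKey, html)   // evicted by key when that piece's data changes
    }
}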
There are quite a few opportunities for improvement in my actual use case, and I'm open to suggestions on that. However, even the simplest possible (for me) demonstration still suggests that there is a real problem. I'm curious whether it should be possible to use map constructors in something called from a PageRenderer at all. I'm surprised that it doesn't work; it feels like a bug, but obviously a very precise, edge-case one.
"Technically it is a bug" (which is the best kind of bug!), and has been reported here: https://github.com/grails/grails-core/issues/11870
I'll update this if/when additional information is available.
I've got a page where a DataTemplate is being used to bind to the model for that content, e.g.:
<DataTemplate x:DataType="models:MyDataType">
... content ...
</DataTemplate>
In that content, I need to be able to bind a Click event. I need that click event to exist in the view model that is set as the page's DataContext:
<Page.DataContext>
<vm:MyViewModel x:Name="ViewModel" />
</Page.DataContext>
but I'm really struggling with getting it to compile. Every approach I try results in the compilation error "Object reference not set to an instance of an object".
I know I can't use x:Bind because that will bind to the DataTemplate's DataContext, so I've been trying to use Binding and, based on other SO answers I've read, it seems like the answer should be:
Click="{Binding DataContext.Button_Click, ElementName=Page}"
where Page is defined as the x:Name for the Page. I've tried removing DataContext. I've tried adding ViewModel.
What am I misunderstanding? Is it not possible to do what I want to do? I've tried using code-behind instead but I'm using Template 10 and that pushes almost everything onto the view model, which makes it harder for me to access things like the navigation service from code-behind.
tl;dr: use messaging.
@justinXL is right: ElementName can work. But is it best?
The problem you are trying to solve has already been solved with messaging. Most MVVM implementations include a messaging solution. Prism uses PubSubEvents; MVVM Light has its own messenger. There are others, too.
The idea is that an outside class, typically described as a message aggregator, is responsible for statelessly receiving and multicasting messages. This means you need to have a reference to the aggregator but not a reference to the sender. It’s beautiful.
For example
A common use case might be a mail client, where the data template of a message in the list includes a trash/delete button. When you click that button, what should be called? With messaging, you handle the button press in the item's model and send/publish a message (one that passes the item).
The hosting view-model has subscribed to the aggregator and is listening for a specific message, the Delete message that we just sent. Upon receipt, it removes it from the list and begins the process to delete it from cache/database, or whatever – including prompting the user with “Are you sure?”
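With MVVM Light's messenger, for instance, it might look roughly like this (DeleteItemMessage, MailItem, and Items are illustrative names):

using GalaSoft.MvvmLight.Messaging;

// Illustrative message type carrying the item to delete
public class DeleteItemMessage
{
    public MailItem Item { get; }
    public DeleteItemMessage(MailItem item) { Item = item; }
}

// In the item's view-model behind the data template: publish, no host reference needed
Messenger.Default.Send(new DeleteItemMessage(item));

// In the hosting view-model (e.g. in its constructor): subscribe and handle
Messenger.Default.Register<DeleteItemMessage>(this, msg =>
{
    Items.Remove(msg.Item);          // update the bound list
    // ...then delete from cache/database, prompt "Are you sure?", etc.
});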
This means all your data binding in your data template is local, and does NOT extend outside its local scope. Why does this matter? Because if you use Element Binding to reach the hosting page, it means you cannot 1) move this template to a resource dictionary or 2) reuse this template.
There are two other reasons:
1) you cannot use compiled x:Bind to do this, because x:Bind deliberately limits this painful binding approach; that matters because a data template is typically in a list, where performance should always be prioritized, and
2) it adds considerable complexity.
Complexity?
I am a big fan of sophisticated solutions. I think they are rare and are the trademark of truly smart developers. I love looking at such code/solutions. Complex is not the same as sophisticated. When it comes to complexity, I am not a fan. Data binding is already difficult to wrap your head around; multi-sourcing your data binding across scope boundaries is pure complexity.
That’s what I think.
Your binding expression is correct, except it won't work with a Button_Click event handler. You will need an ICommand defined in your page's ViewModel.
Since you are using Template10, you should be able to create a DelegateCommand called ClickCommand like this
private DelegateCommand<MyDataType> _clickCommand;
public DelegateCommand<MyDataType> ClickCommand
{
    get
    {
        // lazily create the command the first time it is requested
        _clickCommand = _clickCommand ?? new DelegateCommand<MyDataType>((model) =>
        {
            // put your logic here.
        });
        return _clickCommand;
    }
}
And the binding will be updated to
<Button Command="{Binding DataContext.ClickCommand, ElementName=Page}" CommandParameter="{x:Bind}" />
Note that I have also added a CommandParameter binding to the button, as you might want to know which MyDataType instance is associated with the clicked button.
I'm using (and loving) Siesta to communicate with a REST web service in my Swift App. I have implemented a series of ResponseTransformers to map the API call responses to model classes so that the Siesta Resources are automatically parsed into object instances. This all works great.
I now want to implement a Siesta PersistentCache object to support an offline mode by having Siesta cache these objects to disk (rather than in memory) by storing them in Realm. I am not sure how to do this, because the documentation says (about the EntityCache.writeEntity function):
This method can — and should — examine the entity’s content and/or headers and ignore it if it is not encodable. While they can apply type-based rules, however, cache implementations should not apply resource-based or url-based rules; use Resource.configure(...) to select which resources are cached and by whom.
In an attempt to conform to this guideline, I have created a specific PersistentCache object for each Resource type based on URL Pattern matching during Service Configuration:
class _GFSFAPI: Service {
    private init() {
        configure("/Challenge/*") { $0.config.persistentCache = SiestaRealmChallengeCache() }
    }
}
However, since the EntityCache protocol methods only include a reference to the Entity (which exposes raw content but not the typed objects), I don't see how I can call the realm write methods during the call to EntityCache.writeEntity or how to pull the objects out of Realm during EntityCache.readEntity.
Any suggestions about how to approach this would be greatly appreciated.
Excellent question. Having a separate EntityCache implementation for each model could certainly work, though it seems like it might be burdensome to create all those little glue classes.
Models in the Cache
Your writeEntity() is called with whatever comes out at the end of all your response transformers. If your transformers are configured to spit out model classes, then writeEntity() sees models. If those models are Realm-friendly models, well, I don’t see any reason why you shouldn’t be able to just call realm.add(entity.content). (If that’s giving you problems, let me know with an update to the question.)
Conversely, when reading from the cache, what readEntity() returns does not go through the transformer pipeline again, so it should return exactly the same thing your transformers would have produced, i.e. models.
Cache Lookup Keys
The particular paragraph you quote from the docs is ill-written and perhaps a bit misleading. When it says you “should not apply resource-based or url-based rules,” it’s really just trying to dissuade you from parsing the forKey: parameter — which is secretly just a URL, but should remain opaque to cache implementations. However, any information you can gather from the given entity is fair game, including the type of entity.content.
The one wrinkle under the current API — and it is a serious wrinkle — is that you need to keep a mapping from Siesta’s key (which you should treat as opaque) to Realm objects of different types. You might do this by:
keeping a Realm model dedicated to maintaining a polymorphic mapping from Siesta cache keys to Realm objects of various types,
by adding a siestaKey attribute and doing some kind of union query across models, or
by keeping a (cache key) → (model type, model ID) mapping outside of Realm.
I’d probably pursue the options in that order, but I believe you are in relatively unexplored (though perfectly reasonable) territory here using Realm as the backing for EntityCache. Once you’ve sussed out the options, I’d encourage you to file a Github issue for any suggested API improvements.
Example: I want to use the nsILocalFile interface in JavaScript; how do I find the corresponding contract ID ("@mozilla.org/file/local;1")? Is there a map in the source code?
You don't. This isn't a one-to-one relationship between contracts and interfaces but a many-to-many one:
A single component, as accessible via a contract, can implement multiple interfaces.
A single interface can have multiple components implementing it and therefore multiple contracts.
But often it is a one-to-one relationship in practice. E.g. if I wanted to find out what components implement nsILocalFile, I'd search for it in the sources, for instance:
MXR: http://mxr.mozilla.org/mozilla-central/ident?i=nsILocalFile&tree=mozilla-central
A glance over the result list already tells me: line 1255 -- let file = Cc["@mozilla.org/file/local;1"].createInstance(Ci.nsILocalFile);
Otherwise I'd have to look at the files that the different results link to, starting with the .js ones.
Other times, the contract ids are even specified in the idl itself, e.g. in nsITimer.idl (at the bottom).
The most commonly used interfaces are usually also documented on MDN, including their contract IDs, e.g. nsILocalFile.
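Once you have the contract ID, the usual pattern is (the path argument is illustrative):

const file = Components.classes["@mozilla.org/file/local;1"]
                       .createInstance(Components.interfaces.nsILocalFile);
file.initWithPath("/tmp/example.txt");   // now use the nsILocalFile interface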
When saving a Core Data managed object context on iOS 6.0.1 to a SQLite store, I run into a strange "CoreData does not support persistent cross-store relationships" exception. It concerns a one-to-one relationship between Quotes and AbstractSources in the model; at runtime it involves a Quote and a Book (where Book inherits from AbstractSource; all works well in the model editor).
I've researched similar reports and covered the reported causes:
I am assigning both the Quote and the Book to the same persistent store using assignObject:toPersistentStore:, so neither remains unassigned.
The error description shows that all "absolute" x-coredata IDs start with the same prefix (e.g. "x-coredata://82B3BEB3-60F2-4912-AC80-11AAD29CFF99/"), so there really seems to be only one store in use.
My questions are these:
Is there anything else I have to check (perhaps something in relation to AbstractSource, which I do not touch/control in my source)? I am creating both the Quote and the Book with a call to initWithEntity:insertIntoManagedObjectContext: each.
I noticed that the error description also includes several "relative" x-coredata IDs (of the form "x-coredata:///..."). Could it be that the absolute form is always considered "cross-database", even if the "absolute" prefixes (see example above) are the same? And if so, how could I influence the choice between "absolute" and "relative" x-coredata IDs?
Thanks (much) for your attention!
So this is what had (presumably) caused the trouble:
My managed object context's coordinator has to manage two persistent stores. The one to which I assigned Quote and Book, and where I wanted them saved, is reset at start-up. There was a bug in this code which rendered this store unusable. Since a second store was available, it silently took over, in this case leading to unwanted results. Lesson: I now assert that there are/remain indeed two stores after setting up the Core Data stack.
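The check itself can be as simple as this sketch (the expected count of 2 is specific to my setup):

NSAssert(coordinator.persistentStores.count == 2,
         @"Expected both persistent stores after setting up the stack");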
During earlier development of my Core Data model, I had renamed some of its entities in the model editor. By mistake I had changed only the names, not the entities' class properties. So in effect, while everything worked well in the model editor, by-then-unexpected classes were used at runtime, and objects were therefore assigned to unexpected/wrong stores as well. Lesson: I now make sure that entity names and their class properties remain in perfect sync (other circumstances permitting).
The issue is now resolved, and I've also refactored my code/model to use (non-overlapping) configurations instead of explicit assignments, which should also help going forward.
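For the configuration-based approach, each store is bound to a named model configuration when it is added to the coordinator. A sketch (the configuration name and URL are illustrative):

NSError *error = nil;
[coordinator addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:@"Quotes"   // configuration defined in the model editor
                                    URL:quotesStoreURL
                                options:nil
                                  error:&error];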
Again, thanks for your attention.