BlackBerry ListField with clickable rows

I'm wondering if anyone has come across any sample code for creating a ListField that has clickable rows.
I'm using the BlackBerry 5.0 API and I need to create a table of clickable rows. When a row is clicked, the user will be brought to a new screen showing more content.
I have looked around but haven't found any good examples of using a ListField (or any other 5.0 API component) to achieve this. Any suggestions?
Thanks

Override protected boolean navigationClick(int, int) to something like:
protected boolean navigationClick(int status, int time) {
    super.navigationClick(status, time);
    fieldChangeNotify(1);
    return true; // report that the click was handled
}
And then attach a FieldChangeListener to it, and make sure your fieldChanged(Field field, int context) method checks for context == 1 to listen for clicks. You might want to head to the BB Java Development forums as another resource, as this question and many others have been answered there and it might be useful for you.

Related

Umbraco Intercept CMS activities

With Umbraco, is there any way to trigger code whenever a field is updated in a document?
I have an Umbraco application that uses data stored in a table structure. This data is only used for calculations and is not exposed directly on any page, but I want the back-end users to be able to modify it. I have code that will take a CSV file and upload the data to the table. I've created a data type that has only one field, an Upload field. I want to trigger the table update whenever that file is updated. The alternative is to have some sort of file watcher monitoring the media folder for this particular file; this is the way I'm leaning if Umbraco doesn't have a solution.
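For what it's worth, that file-watcher fallback can be a very small piece of code. A minimal sketch using .NET's FileSystemWatcher, where the media path, filter, and ImportCsvIntoTable method are placeholders for your own setup:
var watcher = new System.IO.FileSystemWatcher(@"C:\inetpub\wwwroot\media", "*.csv");
// re-run the table import whenever the file is replaced or re-uploaded
watcher.Changed += (sender, e) => ImportCsvIntoTable(e.FullPath);
watcher.Created += (sender, e) => ImportCsvIntoTable(e.FullPath);
watcher.EnableRaisingEvents = true;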
Yes, there is an API available which you can use.
For Umbraco v6.1+ refer to the Saved event in the ContentService, as described here.
You can register your own event handler by deriving from the ApplicationEventHandler base class:
public class RegisterEvents : ApplicationEventHandler
{
    protected override void ApplicationStarted(UmbracoApplicationBase umbracoApplication, ApplicationContext applicationContext)
    {
        ContentService.Saved += ContentServiceSaved;
    }

    private void ContentServiceSaved(IContentService sender, SaveEventArgs<IContent> e)
    {
        // check your document type and fields to see if it has changed
    }
}
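Expanding the stub above, the handler might filter on the document type and read the upload field before re-running the import. The "csvImport" and "csvFile" aliases and the import call are placeholders for your own setup:
private void ContentServiceSaved(IContentService sender, SaveEventArgs<IContent> e)
{
    foreach (var content in e.SavedEntities)
    {
        // only react to the document type that carries the upload field
        if (content.ContentType.Alias != "csvImport")
            continue;

        // the Upload data type stores the uploaded file's path as a string
        var csvPath = content.GetValue<string>("csvFile");
        if (!string.IsNullOrEmpty(csvPath))
        {
            // re-run the CSV-to-table import here
        }
    }
}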

Working with the Output Cache and other Action Filters

I have added Output Caching to a couple of actions in my app for some easy performance boosts. However, these actions also need to increment a counter after each request (it's a views counter) by hitting a Redis db.
At first, I figured I could just adjust the order in which the action filters execute to ensure the view is counted:
public class CountersAttribute : ActionFilterAttribute
{
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
//increment my counter all clever like
base.OnResultExecuted(filterContext);
}
}
But that didn't work; apparently the OutputCacheAttribute doesn't behave like a normal action filter. Then I tried implementing a custom output cache:
public class OutputCacheWithCountersAttribute : OutputCacheAttribute
{
public override void OnResultExecuted(ResultExecutedContext filterContext)
{
//straight to the source to get my headcount!
base.OnResultExecuted(filterContext);
}
}
Nope, didn't work either; action filters appear to be entirely ignored once an action is cached. Bummer.
So, uh, is there any way (without implementing a custom output caching provider) for me to ensure my views are counted properly that is clean and sensible?
The OutputCacheAttribute has its limitations, and there is a custom attribute named DonutOutputCache, developed by Paul Hiles, that helps overcome them.
One of the important features it supports is that your action filters are called every time, whether or not the action is marked with the cache attribute.
For example, if you want to cache an action for a duration of 5 seconds and at the same time log every request to the action using a LogThis filter, you can achieve that as shown below:
[LogThis]
[DonutOutputCache(Duration=5, Order=100)]
public ActionResult Index()
From Paul,
Yes, unlike the built-in OutputCacheAttribute, the action filters will
execute even when a page is retrieved from the cache. The only caveat
to add is that you do need to be careful about the filter order. If
your action filter implements OnResultExecuting or OnResultExecuted
then these methods will be executed in all cases, but for
OnActionExecuting and OnActionExecuted, they will only be executed if
the filter runs before the DonutOutputCacheAttribute. This is due to
the way that MVC prevents subsequent filters from executing when you
set the filterContext.Result property which is what we need to do for
output caching.
I do not think that you can rely on the order in which action filters
are defined on an action or controller. To ensure that one filter runs
before another, you can make use of the Order property that is present
on all ActionFilterAttribute implementations. Any actions without the
order property set default to a value of -1, meaning that they will
execute before filters which have an explicit Order value.
Therefore, in your case, you can just add Order=100 to the
DonutOutputCache attribute and all other filters will execute before
the caching filter.
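For illustration, here is a minimal sketch of what such a LogThis filter could look like (the attribute name and the logging call are placeholders). It uses OnResultExecuted so that, as described above, it also runs when the page is served from the cache:
public class LogThisAttribute : ActionFilterAttribute
{
    public override void OnResultExecuted(ResultExecutedContext filterContext)
    {
        // runs on cache hits too, as long as this filter is registered alongside DonutOutputCache
        var action = filterContext.RouteData.Values["action"];
        System.Diagnostics.Trace.WriteLine("Request for action: " + action);
        base.OnResultExecuted(filterContext);
    }
}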
You can make an AJAX call from the Layout View and track your visitors even if the page is cached. This is what Google Analytics does. I recommend doing it from the Layout View because it will be executed in every view that uses that layout.
One more comment: let's say you have two Layout Views, one for the public part of the site and one for the back end (employees only). You'll probably be interested in tracking users, not employees, so this is another benefit of tracking at the Layout View level. If in the future you want to track what the employees are doing, you can add a different tracker to the back-end Layout View.
I hope this helps.
The reason is actually in the .NET source, and has nothing to do with DonutOutputCache:
public void SetCacheability(HttpCacheability cacheability)
{
if (cacheability < HttpCacheability.NoCache || HttpCacheability.ServerAndPrivate < cacheability)
throw new ArgumentOutOfRangeException("cacheability");
if (HttpCachePolicy.s_cacheabilityValues[(int) cacheability] >= HttpCachePolicy.s_cacheabilityValues[(int) this._cacheability])
return;
this.Dirtied();
this._cacheability = cacheability;
}
In other words, if you set NoCache first (a value of 1), the method will always return early if you later try to set a higher value, such as 4 (Public).
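To make that concrete, a small illustration of the effect (using the public HttpCachePolicy API; inside an MVC filter you would reach it via filterContext.HttpContext):
var cache = HttpContext.Current.Response.Cache;
cache.SetCacheability(HttpCacheability.NoCache); // cacheability is now NoCache (1)
cache.SetCacheability(HttpCacheability.Public);  // ignored: the guard above returns early, so NoCache wins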
The only solution is to fork the project and extend it as you require, or perhaps send a pull request to mark ICacheHeadersHelper CacheHeadersHelper as protected in DonutOutputCacheAttribute.
Use a "Validation Callback" that is executed ALWAYS even if the cached page should be served
public class MyCacheAttribute : OutputCacheAttribute
{
public override void OnResultExecuting(ResultExecutingContext filterContext)
{
SaveToLog();
httpContext.Response.Cache.AddValidationCallback(MyCallback, null);
base.OnResultExecuting(filterContext);
}
// This method is called each time when cached page is going to be served
private void MyCallback(HttpContext context, object data, ref HttpValidationStatus validationStatus)
{
SaveToLog();
}
}
NOTE: the SaveToLog() is called in two places, that's by design (first call when cache is bypassed, seconds call when cached version is served)

JSF - List of objects in a Managed Bean and memory management question

I am creating my first JSF application. For this question I am going to simplify everything to try and make my question as clear as possible.
The application I am creating is a simple contact storing application that will allow you to create a new contact, store information such as addresses, phone numbers, where they work, and upload files that are associated to the contact.
When the application loads the user is displayed with a list of contacts that have already been created. They can click on the contact's image to open up the contact and view all of the information stored on them. This is where my question comes in.
All of this information is stored in the ContactManager.java managed bean. All of the data on the contact is displayed in datatables, so there is an address datatable, a phone datatable, and an uploads datatable. Each datatable view resides within an appropriate tab. I am using PrimeFaces to create the tabs. So basically, when a contact is opened, the system has to load about 10 lists of data, as well as dropdown lists used for select values, to populate the datatables in the tabs.
@ManagedBean
@ViewScoped
public class ContactManager implements Serializable {

    private Contact contactInfo;
    private List<Phone> phones;
    private List<Addresses> addresses;
    private List<Company> jobs;
    private List<Files> uploads;

    //dropdown select lists
    private List<Country> countryList;
    private List<State> stateList;

    //Getters and setters for everything
    ..... (I am not going to type it all out but it's there)

    @PostConstruct
    public void openContact() {
        try {
            this.countryList = myDao.getCountryList();
            this.stateList = myDao.getStateList();
            this.addresses = myDao.getAddresses(contactInfo.contactId);
            this.phones = myDao.getPhones(contactInfo.contactId);
            this.jobs = myDao.getJobs(contactInfo.contactId);
            this.uploads = myDao.getUploads(contactInfo.contactId);
        } catch (Exception e) {
            // handle/log DAO failures here
        }
    }
}
So basically when a user opens a contact all of that contact information is loaded into memory so it can be displayed in the view.
This example is small compared to the actual number of lists and the amount of data I am storing, but for simplicity's sake, I am wondering about memory management.
My system is going to be used by a number of users and loading all of this information worries me. When does the memory allocated for all of this get cleared? Am I responsible for clearing it?
If I have 10 users and they are all viewing contacts that have really big tables with a lot of data I fear that this is going to bring everything to a standstill.
Currently it runs fine on my system but I know that all of these tables and lists are kept in memory until either the user clicks and opens a new contact or closes the application.
Any insight on if this is the right way of doing things or how to handle large information like this would be great.
I am using JSF 2.0 / Primefaces 2.2RC2-Snapshot
If the tables are big, you need a lot of memory on the server side, with or without JSF.
Do not use RichFaces tables; use ui:repeat and write custom paging for it.
PrimeFaces tables are better than h:dataTable or rich:dataTable.
JSF 2.0 uses roughly 50% less memory per user.
Also, use @RequestScoped and @ViewScoped beans; do not use @SessionScoped beans.
You can also remove session beans from the session through the FacesContext:
FacesContext context = FacesContext.getCurrentInstance();
ExternalContext externalContext = context.getExternalContext();
externalContext.getSessionMap().remove("UserContext");
externalContext.getSessionMap().remove("SessionController");
Etc.
Also, if you have some id to store in the page and you think you cannot use @RequestScoped beans, that is a mistake:
integrate Tomahawk (MyFaces) into your application and use <t:saveState value="#{bean.categoryId}"/>.
It stores ids or objects (lists, maps) in the current page scope.

SharePoint web part development

I am only aware of two approaches for developing web parts using Visual Studio.
The First one:
Add a webpart project and write code in the appropriate methods.
protected override void OnInit(EventArgs e)
protected override void OnLoad(EventArgs e)
protected override void CreateChildControls()
protected override void LoadViewState(object savedState) //Only at Postback
protected override void OnPreRender(EventArgs e)
protected override void Render(System.Web.UI.HtmlTextWriter writer)
protected override void OnUnload(EventArgs e)
public override void Dispose()
Deploy the solution directly from VS, or take the WSP file and use STSADM.EXE to deploy it across the site/farm. This is the standard approach to follow.
Second approach:
Create a User Control and copy the Usercontrol.ascx and Usercontrol.ascx.cs to _Layouts.
Create a new web part project and load the control using:
_UserControl = this.Page.LoadControl("~/_layouts/_UserControl.ascx");
And deploy it from VS.
But this approach does not look safe, as we are manually copying files to _layouts.
The only reason we would take this approach is that we can lay out the controls the way we want without having to deal with the various events of the web part life cycle.
Could anybody let me know what approach you are taking in your company?
Thank you.
Hari Gillala
When I started developing in SharePoint 2007, we used the first method you describe. After some time, we switched to something like the second method.
However, instead of placing the ascx files into layouts, we put them in a custom directory under controltemplates. Our web part code then looked like this:
public class OurControlWebPart : WebPart
{
protected override void CreateChildControls()
{
base.CreateChildControls();
Control userControl =
Page.LoadControl("~/_controltemplates/OurProject/OurControl.ascx");
Controls.Add(userControl);
}
}
If our web part had any additional properties or toolparts, they would be handled in this class and then forwarded onto the control class. I really liked the separation of the logic of the control from the logic of the web part. Also, I liked being able to control the layout of the control in HTML or using the Visual Studio designer.
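As a rough illustration of that forwarding (the ListName property and the OurControl code-behind type here are hypothetical), the web part can expose a persisted property and push it into the loaded user control:
public class OurControlWebPart : WebPart
{
    // hypothetical web part property surfaced in the tool pane
    [WebBrowsable(true), Personalizable(PersonalizationScope.Shared)]
    public string ListName { get; set; }

    protected override void CreateChildControls()
    {
        base.CreateChildControls();
        OurControl userControl = (OurControl)
            Page.LoadControl("~/_controltemplates/OurProject/OurControl.ascx");
        userControl.ListName = ListName; // forward the web part property to the control
        Controls.Add(userControl);
    }
}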
And these files do not need to be deployed manually. They can be included in your solution package. Just as you deploy your features to the 12\TEMPLATE\FEATURES directory, you can deploy your ascx files to 12\TEMPLATE\CONTROLTEMPLATES.
Definitely the second one, just look at the visual webparts in 2010 (they are built exactly like that).
In SP2007 either way was fine; it just depends on how you like building your control tree. I prefer the first method.
In SP2010 you have a few more options.
1) Your first choice should be a sandboxed web part, which uses a code-building approach.
2) If that is too limited, you can try a visual web part, similar to the SmartPart from SP2007.
3) And then there is the standard code-based approach.

Silverlight, DataPager, RIA Services, and smart paging

I'm still trying to get my feet on the ground with Silverlight and RIA Services, and of course starting with some of the more "fun" stuff like grids and intelligent paging. I can connect to RIA Services (using a home-grown ORM, not L2S or EF), get data on the grid, and connect to a DataPager. The domain service is working well with the home-grown ORM, at least for queries. (Still working on full CRUD.) However, there are still problems:
To support the user application, I need user-controlled sorting and filtering, in addition to smart paging (only run the query for the rows needed to display) and grouping.
So far, I've seen nothing in the DataGrid or DataPager to externalize these capabilities so that filtering, sorting, and paging parameters can be passed to the server to build the appropriate query.
The datasets are potentially quite large; my table I've chosen for prototyping work can have up to 35,000 entries at some customers, and I'm sure there are other tables far larger that I will have to deal with at some point. So the "smart paging" aspect is essential.
Ideas, suggestions, guidance, and nerf bricks are all welcome.
OK, I've spent a few days in the weeds with this one, and I think I've got a handle on it.
First, an important piece of magic. For paging to work properly, the pager has to know the total item count, no matter how many items were returned by the current query. If the query returns everything, the item count is obviously the number of items returned. For smart paging, the item count is still the total of available items, although the query returns only what gets displayed. With filtering, even the total of available items changes every time the filter changes.
The Silverlight Datapager control has a property called ItemCount. It is readonly and cannot be databound in XAML, or set directly in code. However, if the user control containing the pager has a DataContext that implements IPagedCollectionView, then the data context object must implement an ItemCount property with PropertyChanged notification, and the DataPager seems to pick this up automagically.
Second, I highly recommend Brad Abrams' excellent series of blog posts on RIA Services, especially this one on ViewModel. It contains most of what you need to make paging and filtering work, although it's missing the critical piece on managing the item count. His downloadable sample also contains a very good basic framework for implementing ModelViewViewModel (MVVM). Thank you, Brad!
So here's how to make the item count work. (This code refers to a custom ORM, while Brad's code uses Entity Framework; between the two you can figure out what you need in your environment.)
First, your ORM needs to support getting record counts, with and without your filter. Here's my domain service code that makes the counts available to RIA Services:
[Invoke]
public int GetExamCount()
{
return Context.Exams.Count();
}
[Invoke]
public int GetFilteredExamCount(string descriptionFilter)
{
return Context.Exams.GetFilteredCount(descriptionFilter);
}
Note the [Invoke] attribute. You need this for any DomainService method that doesn't return an Entity or an Entity collection.
Now for the ViewModel code. You need an ItemCount, of course. (This is from Brad's example.)
int itemCount;
public int ItemCount
{
get { return itemCount; }
set
{
if (itemCount != value)
{
itemCount = value;
RaisePropertyChanged(ItemCountChangedEventArgs);
}
}
}
Your LoadData method will run the query to get the current set of rows for display in the DataGrid. (This doesn't implement custom sorting yet, but that's an easy addition.)
EntityQuery<ExamEntity> query =
DomainContext.GetPagedExamsQuery(PageSize * PageIndex, PageSize, DescriptionFilterText);
DomainContext.Load(query, OnExamsLoaded, null);
The callback method then runs the query to get the counts. If no filter is being used, we get the count for all rows; if there's a filter, then we get the count for filtered rows.
private void OnExamsLoaded(LoadOperation<ExamEntity> loadOperation)
{
if (loadOperation.Error != null)
{
//raise an event...
ErrorRaising(this, new ErrorEventArgs(loadOperation.Error));
}
else
{
Exams.MoveCurrentToFirst();
if (string.IsNullOrEmpty(DescriptionFilterText))
{
DomainContext.GetExamCount(OnCountCompleted, null);
}
else
{
DomainContext.GetFilteredExamCount(DescriptionFilterText, OnCountCompleted, null);
}
IsLoading = false;
}
}
There's also a callback method for counts:
void OnCountCompleted(InvokeOperation<int> op)
{
ItemCount = op.Value;
TotalItemCount = op.Value;
}
With the ItemCount set, the Datapager control picks it up, and we have paging with filtering and a smart query that returns only the records to be displayed!
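The domain service side of that paged query isn't shown above; here is a rough sketch of what it might look like against a custom ORM (the method and member names are assumptions; RIA Services generates the GetPagedExamsQuery wrapper on the client from a server method returning IQueryable):
public IQueryable<ExamEntity> GetPagedExams(int skip, int take, string descriptionFilter)
{
    // start from the full set exposed by the ORM (assumed to be IQueryable)
    IQueryable<ExamEntity> exams = Context.Exams;
    if (!string.IsNullOrEmpty(descriptionFilter))
    {
        exams = exams.Where(e => e.Description.Contains(descriptionFilter));
    }
    // a stable ordering is required before Skip/Take
    return exams.OrderBy(e => e.Description).Skip(skip).Take(take);
}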
LINQ makes the query easy with .Skip() and .Take(). Doing this with raw ADO.NET is harder. I learned how to do this by taking apart a LINQ-generated query.
SELECT * FROM
(select ROW_NUMBER() OVER (ORDER BY Description) as rownum, *
FROM Exams as T0 WHERE T0.Description LIKE @description ) as T1
WHERE T1.rownum between @first AND @last ORDER BY rownum
The clause "select ROW_NUMBER() OVER (ORDER BY Description) as rownum" is the interesting part, because not many people use "OVER" yet. This clause sorts the table on Description before assigning row numbers, and the filter is also applied before row numbers are assigned. This allows the outer SELECT to filter on row numbers, after sorting and filtering.
So there it is, smart paging with filtering, in RIA Services and Silverlight!
Here's the quick and dirty solution (that I went for):
Just move your DomainDataSource to your ViewModel! Done!
It may not exactly be great for testability, and it probably has some other limitations I haven't discovered yet, but personally I don't care about that until something better comes along.
Inside your ViewModel, just instantiate the data source:
// Feedback DataSource
_dsFeedback = new DomainDataSource();
_dsFeedback.DomainContext = _razorSiteDomainContext;
_dsFeedback.QueryName = "GetOrderedFeedbacks";
_dsFeedback.PageSize = 10;
_dsFeedback.Load();
and provide a bindable property:
private DomainDataSource _dsFeedback { get; set; }
public DomainDataSource Feedback
{
get
{
return _dsFeedback;
}
}
And add your DataPager to your XAML:
<data:DataPager Grid.Row="1"
HorizontalAlignment="Stretch"
Source="{Binding Feedback.Data}"
Margin="0,0,0,5" />
<data:DataGrid ItemsSource="{Binding Feedback.Data}">
PS. Thanks to 'Francois' from the above linked page. I didn't even realize I could take the DomainDataSource out of the XAML until I saw your comment!
This is an interesting article from May 2010 about the possible future support for this type of feature in the framework.
http://www.riaservicesblog.net/Blog/post/WCF-RIA-Services-Speculation-EntityCollectionView.aspx
