Saxon s9api integrated extension functions: providing a node

We are trying to pass a node into an integrated extension function. The node itself looks correct as far as we can tell, but we can't access its individual elements because an out-of-bounds exception is always thrown.
How can we access the individual elements below the root element?
public ExtensionFunction updateTempNode = new ExtensionFunction() {
    public QName getName() {
        return new QName("de.dkl.dymoServer.util.ExternalFunctions", "updateTempNode");
    }

    public SequenceType getResultType() {
        return SequenceType.makeSequenceType(ItemType.BOOLEAN, OccurrenceIndicator.ONE);
    }

    public net.sf.saxon.s9api.SequenceType[] getArgumentTypes() {
        return new SequenceType[]{
                SequenceType.makeSequenceType(ItemType.STRING, OccurrenceIndicator.ONE),
                SequenceType.makeSequenceType(ItemType.DOCUMENT_NODE, OccurrenceIndicator.ONE)};
    }

    public XdmValue call(XdmValue[] arguments) {
        String sessionId = arguments[0].itemAt(0).getStringValue();
        SaplingElement tempNode = TransformationService.tempNodes.get(sessionId);
        ItemTypeFactory itemTypeFactory = new ItemTypeFactory(((XdmNode) arguments[1]).getProcessor());
        tempNode.withChild(
                arguments[1].stream()
                        .map(xdmValue -> Saplings.elem(xdmValue.getStringValue())
                                .withText(xdmValue.itemAt(0).getStringValue()))
                        .toList()
                        .toArray(SaplingElement[]::new));
        System.out.println(tempNode);
        return new XdmAtomicValue(true);
    }
};
The ArrayIndexOutOfBoundsException occurs as soon as I try to iterate; the data is expected as a DOCUMENT_NODE.

Wild guess is that you want something like
tempNode = tempNode.withChild(
        arguments[1]
                .select(Steps.child().then(Steps.child()))
                .map(childNode -> Saplings.elem(childNode.getNodeName())
                        .withText(childNode.itemAt(0).getStringValue()))
                .collect(Collectors.toList())
                .toArray(new SaplingElement[]{}));
which would populate tempNode with copies of the child nodes of the root element of the document node that is arguments[1]. There might be better ways to do that.

Related

Accessing a Service from within an XNA Content Pipeline Extension

I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type:
public delegate T Mapper<T>(MapFactory<T> mf, XElement d);
public class MapFactory<T>
{
Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>();
public void Add(string s, Mapper<T> m)
{
map.Add(s, m);
}
public T Get(XElement xe)
{
if (xe == null) throw new ArgumentNullException(
"Invalid document");
var key = xe.Name.ToString();
if (!map.ContainsKey(key)) throw new ArgumentException(
key + " is not a valid key.");
return map[key](this, xe);
}
public IEnumerable<T> GetAll(XElement xe)
{
if (xe == null) throw new ArgumentNullException(
"Invalid document");
foreach (var e in xe.Elements())
{
var val = e.Name.ToString();
if (map.ContainsKey(val))
yield return map[val](this, e);
}
}
}
Here is one type of object I want to store:
public partial class TestContent
{
// Test type
public string title;
// Once test if true
public bool once;
// Parameters
public Dictionary<string, object> args;
public TestContent()
{
title = string.Empty;
args = new Dictionary<string, object>();
}
public TestContent(XElement xe)
{
title = xe.Name.ToString();
args = new Dictionary<string, object>();
xe.ParseAttribute("once", once);
}
}
XElement.ParseAttribute is an extension method that works as one might expect. It returns a boolean that is true if successful.
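For reference, a minimal sketch of what such an extension method might look like (the actual signature isn't shown in the question; this sketch assumes the parsed value is handed back through a ref parameter):
using System.Xml.Linq;

public static class XElementExtensions
{
    // Hypothetical sketch only. Tries to parse the named attribute as a bool,
    // writes it into 'value' on success, and returns true if parsing succeeded.
    public static bool ParseAttribute(this XElement xe, string name, ref bool value)
    {
        var attribute = xe.Attribute(name);
        if (attribute == null)
            return false;

        bool parsed;
        if (!bool.TryParse(attribute.Value, out parsed))
            return false;

        value = parsed;
        return true;
    }
}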
The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem.
public class LogicTest : TestBase
{
string opkey;
List<TestBase> items;
public override bool Test(BehaviorArgs args)
{
if (items == null) return false;
if (items.Count == 0) return false;
bool result = items[0].Test(args);
for (int i = 1; i < items.Count; i++)
{
bool other = items[i].Test(args);
switch (opkey)
{
case "And":
result &= other;
if (!result) return false;
break;
case "Or":
result |= other;
if (result) return true;
break;
case "Xor":
result ^= other;
break;
case "Nand":
result = !(result & other);
break;
case "Nor":
result = !(result | other);
break;
default:
result = false;
break;
}
}
return result;
}
public static TestContent Build(MapFactory<TestContent> mf, XElement xe)
{
var result = new TestContent(xe);
string key = "Or";
xe.GetAttribute("op", key);
result.args.Add("key", key);
var names = mf.GetAll(xe).ToList();
if (names.Count() < 2) throw new ArgumentException(
"LogicTest requires at least two entries.");
result.args.Add("items", names);
return result;
}
}
My actual code is more involved as the factory has two dictionaries, one that turns an XElement into a content type to write and another used by the reader to create the actual game objects.
I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter has any known hook for attaching an IServiceProvider or equivalent.
Any ideas?
I needed to create a data structure essentially on demand, without access to the underlying classes, as they came from a third party - in this case XNA Game Studio. There is only one way I know of to do this... statically.
public class TestMap : Dictionary<string, string>
{
private static readonly TestMap map = new TestMap();
private TestMap()
{
Add("Logic", "LogicProcessor");
Add("Sequence", "SequenceProcessor");
Add("Key", "KeyProcessor");
Add("KeyVector", "KeyVectorProcessor");
Add("Mouse", "MouseProcessor");
Add("Pad", "PadProcessor");
Add("PadVector", "PadVectorProcessor");
}
public static TestMap Map
{
get { return map; }
}
public IEnumerable<TestContent> Collect(XElement xe, ContentProcessorContext cpc)
{
foreach(var e in xe.Elements().Where(e => ContainsKey(e.Name.ToString())))
{
yield return cpc.Convert<XElement, TestContent>(
e, this[e.Name.ToString()]);
}
}
}
I took this a step further and created content processors for each type of TestBase:
/// <summary>
/// Turns an imported XElement into a TestContent used for a LogicTest
/// </summary>
[ContentProcessor(DisplayName = "LogicProcessor")]
public class LogicProcessor : ContentProcessor<XElement, TestContent>
{
public override TestContent Process(XElement input, ContentProcessorContext context)
{
var result = new TestContent(input);
string key = "Or";
input.GetAttribute("op", key);
result.args.Add("key", key);
var items = TestMap.Map.Collect(input, context);
if (items.Count() < 2) throw new ArgumentNullException(
"LogicProcessor requires at least two items.");
result.args.Add("items", items);
return result;
}
}
Any attempt to reference or access the class, such as calling TestMap.Map.Collect, will initialize the underlying static instance if needed. I basically moved the code from LogicTest.Build to the processor. I also carry out any needed validation in the processor.
When I get to reading these classes I will have the ContentService to help.

ItemDescriptionGenerator for vaadin TreeTable only returns null for column

I'm using Vaadin's TreeTable and I'm trying to add tooltips to my rows. This is how the documentation says it should be done, but the propertyId is always null, so I can't get the correct column. And yes, I've run this in the Eclipse debugger as well =)
Code related to this part:
private void init() {
setDataSource();
addGeneratedColumn("title", new TitleColumnGenerator());
addGeneratedColumn("description", new DescriptionGenerator());
setColumnExpandRatios();
setItemDescriptionGenerator(new TooltipGenerator());
}
protected class TooltipGenerator implements ItemDescriptionGenerator{
private static final long serialVersionUID = 1L;
@Override
public String generateDescription(Component source, Object itemId, Object propertyId) {
TaskRow taskRow = (TaskRow)itemId;
if("description".equals(propertyId)){
return taskRow.getDescription();
}else if("title".equals(propertyId)){
return taskRow.getTitle();
}else if("category".equals(propertyId)){
return taskRow.getCategory().toString();
}else if("operation".equals(propertyId)){
return taskRow.getOperation().toString();
}else if("resourcePointer".equals(propertyId)){
return taskRow.getResourcePointer();
}else if("taskState".equals(propertyId)){
return taskRow.getTaskState().toString();
}
return null;
}
}
I have passed the source object as the itemId when adding an item to the tree.
Node node = ...;
Item item = tree.addItem(node);
This uses the object "node" as the id, which then allows me to cast itemId to an instance of Node in the generateDescription method.
public String generateDescription(Component source, Object itemId, Object propertyId) {
if (itemId instanceof Node) {
Node node = (Node) itemId;
...
Maybe not the best solution, but it works for me. Then again, I am adding items directly to the tree rather than using a DataContainer.

db4o Transparent Persistence doesn't store later objects in my own ActivatableCollection<T>

I'm rolling my own ActivatableCollection<T> for db4o, cribbing heavily from the built-in ActivatableList<T> implementation. I'm running into a problem where transparent persistence doesn't seem to be working correctly. In the test code below:
[Fact]
void CanStoreActivatableCollection()
{
var planets = new ActivatableCollection<Planet>();
var pagingMemoryStorage = new PagingMemoryStorage();
var config = Db4oEmbedded.NewConfiguration();
config.Common.Add(new TransparentActivationSupport());
config.Common.Add(new TransparentPersistenceSupport());
config.File.Storage = pagingMemoryStorage;
var objectContainer = Db4oEmbedded.OpenFile(config, "Memory.yap");
planets.Add(new Planet("Mercury"));
objectContainer.Store(planets);
planets.Add(new Planet("Venus"));
planets.Add(new Planet("Earth"));
objectContainer.Commit();
objectContainer.Close();
config = Db4oEmbedded.NewConfiguration();
config.Common.Add(new TransparentActivationSupport());
config.Common.Add(new TransparentPersistenceSupport());
config.File.Storage = pagingMemoryStorage;
objectContainer = Db4oEmbedded.OpenFile(config, "Memory.yap");
planets = objectContainer.Query<ActivatableCollection<Planet>>().FirstOrDefault();
Assert.NotNull(planets);
Assert.Equal(3, planets.Count);
objectContainer.Close();
}
The planet "Mercury" is stored, but not "Venus" and "Earth". If I change from ActivatableCollection to ActivatableList, then all 3 planets are stored.
What am I missing? My ActivatableCollection is just a minimal reimplementation of ActivatableList, as best as I can tell.
Below is my implementation of ActivatableCollection:
public class ActivatableCollection<T>
: ICollection<T>
, IActivatable
, INotifyCollectionChanged
{
List<T> _list;
List<T> List
{
get
{
if (_list == null)
_list = new List<T>();
return _list;
}
}
public ActivatableCollection()
{
}
public int Count
{
get
{
ActivateForRead();
return List.Count;
}
}
public bool IsReadOnly
{
get
{
ActivateForRead();
return ((IList) List).IsReadOnly;
}
}
public void Add(T t)
{
ActivateForWrite();
List.Add(t);
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Add, t));
}
public void Clear()
{
ActivateForWrite();
List.Clear();
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Reset));
}
public bool Contains(T t)
{
ActivateForRead();
return List.Contains(t);
}
public void CopyTo(T[] array, int index)
{
ActivateForRead();
List.CopyTo(array, index);
}
public IEnumerator<T> GetEnumerator()
{
ActivateForRead();
return List.GetEnumerator();
}
IEnumerator IEnumerable.GetEnumerator()
{
return GetEnumerator();
}
public bool Remove(T t)
{
ActivateForWrite();
bool removed = List.Remove(t);
if (removed)
OnCollectionChanged(new NotifyCollectionChangedEventArgs(NotifyCollectionChangedAction.Remove, t));
return removed;
}
[Transient]
private IActivator _activator;
public virtual void Bind(IActivator activator)
{
if (_activator == activator)
return;
if (activator != null && _activator != null)
throw new InvalidOperationException();
_activator = activator;
}
public virtual void Activate(ActivationPurpose purpose)
{
if (_activator == null)
return;
_activator.Activate(purpose);
}
protected virtual void ActivateForRead()
{
Activate(ActivationPurpose.Read);
}
protected virtual void ActivateForWrite()
{
Activate(ActivationPurpose.Write);
}
[Transient]
public event NotifyCollectionChangedEventHandler CollectionChanged;
protected virtual void OnCollectionChanged(NotifyCollectionChangedEventArgs e)
{
if (CollectionChanged != null)
CollectionChanged(this, e);
}
}
I've also tried copying the code from GenericTypeHandlerPredicate and registering my ActivatableCollection to use the GenericCollectionTypeHandler. That results in a crash in GenericTypeFor(), which throws an InvalidOperationException when "Mercury" is being stored.
Just want to mention my answers from the db4o forums also here, for people with a similar problem:
First part of the issue:
From db4o's point of view nothing has changed in the ActivatableCollection object, and therefore no changes are stored. This is what is happening:
When you add the items, the ActivatableCollection is marked as changed.
When you commit, the changes are stored. However, the ActivatableCollection still holds a reference to the same List object. db4o only stores the changes in the ActivatableCollection object itself, which is just the reference to the List. Since that reference is unchanged, no actual change is stored.
The List inside the ActivatableCollection is never updated, because it was never marked as 'changed'.
So transparent persistence doesn't see the changes in the list. You can fix your issue simply by using an ActivatableList in your ActivatableCollection implementation: declare the field as the IList<T> interface and instantiate an ActivatableList<T> instead of a List<T>.
The second part of the issue: why doesn't it work even when registering the GenericCollectionTypeHandler for this type? Here we hit an implementation detail. The GenericCollectionTypeHandler has an internal list of supported types, which doesn't include the self-made ActivatableCollection. GenericCollectionTypeHandler is not really part of the public API and is intended for internal use only.
Workaround / Fix
Just use an ActivatableList<T> instead of a List<T>; then everything works fine.
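As a rough sketch of that change (ActivatableList<T> lives in Db4objects.Db4o.Collections in the db4o .NET client), the backing field of the ActivatableCollection<T> shown above would become:
// Sketch only: back the custom collection with db4o's ActivatableList<T>
// instead of a plain List<T>, so that changes to the list itself are seen
// by transparent persistence.
private IList<T> _list;

private IList<T> List
{
    get
    {
        if (_list == null)
            _list = new ActivatableList<T>();   // was: new List<T>()
        return _list;
    }
}
// ...the rest of the collection keeps delegating to List as before.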

SharePoint 2007 - can't find my modifications to web.config in SPWebApplication.WebConfigModifications

I can't seem to find the modifications I made to web.config in my FeatureReceiver's Activated event. I try to get the modifications from the SPWebApplication.WebConfigModifications collection in the Deactivating event, but it is always empty... And the strangest thing is that my changes are still reverted after deactivating the feature...
My question is: should I not be able to view all changes made to the web.config files when accessing the SPWebApplication.WebConfigModifications collection in the Deactivating event? How should I go about removing my changes explicitly?
public class FeatureReciever : SPFeatureReceiver
{
private const string FEATURE_NAME = "HelloWorld";
private class Modification
{
public string Name;
public string XPath;
public string Value;
public SPWebConfigModification.SPWebConfigModificationType ModificationType;
public bool createOnly;
public Modification(string name, string xPath, string value, SPWebConfigModification.SPWebConfigModificationType modificationType, bool createOnly)
{
Name = name;
XPath = xPath;
Value = value;
ModificationType = modificationType;
this.createOnly = createOnly;
}
}
private Modification[] modifications =
{
new Modification("connectionStrings", "configuration", "<connectionStrings/>", SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode, true),
new Modification("add[#name='ConnectionString'][#connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass']", "configuration/connectionStrings", "<add name='ConnectionString' connectionString='Data Source=serverName;Initial Catalog=DBName;User Id=UserId;Password=Pass'/>", SPWebConfigModification.SPWebConfigModificationType.EnsureChildNode, false)
};
public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite;
SPWebApplication webApplication = siteCollection.WebApplication;
siteCollection.RootWeb.Title = "Set from activating code at " + DateTime.Now.ToString();
foreach (Modification entry in modifications)
{
SPWebConfigModification webConfigModification = CreateModification(entry);
webApplication.WebConfigModifications.Add(webConfigModification);
}
webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
webApplication.WebService.Update();
}
public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
SPSite siteCollection = (properties.Feature.Parent as SPWeb).Site as SPSite;
SPWebApplication webApplication = siteCollection.WebApplication;
siteCollection.RootWeb.Title = "Set from deactivating code at " + DateTime.Now.ToString();
IList<SPWebConfigModification> modifications = webApplication.WebConfigModifications;
foreach (SPWebConfigModification modification in modifications)
{
if (modification.Owner == FEATURE_NAME)
{
webApplication.WebConfigModifications.Remove(modification);
}
}
webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
webApplication.WebService.Update();
}
public override void FeatureInstalled(SPFeatureReceiverProperties properties)
{
}
public override void FeatureUninstalling(SPFeatureReceiverProperties properties)
{
}
private SPWebConfigModification CreateModification(Modification entry)
{
SPWebConfigModification spWebConfigModification = new SPWebConfigModification()
{
Name = entry.Name,
Path = entry.XPath,
Owner = FEATURE_NAME,
Sequence = 0,
Type = entry.ModificationType,
Value = entry.Value
};
return spWebConfigModification;
}
}
Thanks for your time.
/Hans
Finally, today I figured out what was wrong with my code (that is, why the WebConfigModifications collection was empty when I queried it in the Deactivating event): it seems you must apply the changes in a different manner than I had done.
My original approach to applying the changes involved the following code:
Activate event
webApplication.Farm.Services.GetValue<SPWebService>().ApplyWebConfigModifications();
webApplication.WebService.Update();
The "correct" way of doing it is this:
SPWebService.ContentService.ApplyWebConfigModifications();
webApplication.Update();
Although I am still at a loss as to why my original code did not work... could someone with more knowledge of the configuration objects in SharePoint enlighten me?

How to cache data in an MVC application

I have read lots of information about page caching and partial page caching in an MVC application. However, I would like to know how you would cache data.
In my scenario I will be using LINQ to Entities (Entity Framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and, on the second call, use the cached version if it exists.
Can anyone show an example of how this would work, where it should be implemented (the model?) and whether it would work?
I have seen this done in traditional ASP.NET apps, typically for very static data.
Here's a nice and simple cache helper class/service I use:
using System;
using System.Runtime.Caching;

public class InMemoryCache : ICacheService
{
    public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
    {
        T item = MemoryCache.Default.Get(cacheKey) as T;
        if (item == null)
        {
            item = getItemCallback();
            MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(10));
        }
        return item;
    }
}

interface ICacheService
{
    T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}
Usage:
cacheProvider.GetOrSet("cache key", (delegate method if cache is empty));
The cache provider will check whether there's anything stored under "cache key" in the cache, and if not, it will call the delegate method to fetch the data and store it in the cache.
Example:
var products = cacheService.GetOrSet("catalog.products", () => productRepository.GetAll());
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
    string[] names = Cache["names"] as string[];
    if (names == null) // not in cache
    {
        names = DB.GetNames();
        Cache["names"] = names;
    }
    return names;
}
A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I'm referring to TT's post and suggest the following approach:
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
    var noms = Cache["names"];
    if (noms == null)
    {
        noms = DB.GetNames();
        Cache["names"] = noms;
    }
    return ((string[])noms);
}
You should not return a value re-read from the cache, since you'll never know whether at that specific moment it is still in the cache. Even if you inserted it in the statement before, it might already be gone or might never have been added to the cache - you just don't know.
So you add the data read from the database and return it directly, not re-reading it from the cache.
For .NET 4.5+ framework
add reference: System.Runtime.Caching
add using statement:
using System.Runtime.Caching;
public string[] GetNames()
{
    var noms = System.Runtime.Caching.MemoryCache.Default["names"];
    if (noms == null)
    {
        noms = DB.GetNames();
        System.Runtime.Caching.MemoryCache.Default["names"] = noms;
    }
    return ((string[])noms);
}
In the .NET Framework 3.5 and earlier versions, ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace. In previous versions of the .NET Framework, caching was available only in the System.Web namespace and therefore required a dependency on ASP.NET classes. In the .NET Framework 4, the System.Runtime.Caching namespace contains APIs that are designed for both Web and non-Web applications.
More info:
https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx
https://learn.microsoft.com/en-us/dotnet/framework/performance/caching-in-net-framework-applications
Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern in ASP.NET MVC. It uses the repository pattern effectively and allows you to get caching without having to change your existing code.
http://ardalis.com/Introducing-the-CachedRepository-Pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
I have used it in this way and it works for me.
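The gist of the pattern, as an illustrative sketch (IProductRepository, Product and the ten-minute duration are made-up names and values, not taken from the posts): a caching decorator implements the same repository interface, wraps the real repository, and is registered in its place.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

// Assumed entity and repository abstraction for the sake of the example.
public interface IProductRepository
{
    IEnumerable<Product> GetAll();
}

// Decorator that adds caching around any existing IProductRepository.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;

    public CachedProductRepository(IProductRepository inner)
    {
        _inner = inner;
    }

    public IEnumerable<Product> GetAll()
    {
        var cached = MemoryCache.Default.Get("products") as IEnumerable<Product>;
        if (cached == null)
        {
            cached = _inner.GetAll().ToList();
            MemoryCache.Default.Add("products", cached, DateTime.Now.AddMinutes(10));
        }
        return cached;
    }
}
Calling code depends only on IProductRepository and never sees the caching logic.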
https://msdn.microsoft.com/en-us/library/system.web.caching.cache.add(v=vs.110).aspx
Parameter info for System.Web.Caching.Cache.Add.
public string GetInfo()
{
    string name = string.Empty;
    if (System.Web.HttpContext.Current.Cache["KeyName"] == null)
    {
        name = GetNameMethod();
        System.Web.HttpContext.Current.Cache.Add("KeyName", name, null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.AboveNormal, null);
    }
    else
    {
        name = System.Web.HttpContext.Current.Cache["KeyName"] as string;
    }
    return name;
}
AppFabric Caching is a distributed, in-memory caching technology that stores data in key-value pairs using physical memory across multiple servers. AppFabric provides performance and scalability improvements for .NET Framework applications. Concepts and Architecture
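For reference, basic use of the AppFabric cache client looks roughly like this (a sketch assuming the Microsoft.ApplicationServer.Caching client assemblies are referenced and a named cache called "default" has been configured):
using Microsoft.ApplicationServer.Caching;

public class AppFabricCacheSample
{
    public void Run()
    {
        // Reads the cache client settings from the application configuration.
        var factory = new DataCacheFactory();

        // "default" is an assumed cache name; use whichever named cache exists.
        DataCache cache = factory.GetCache("default");

        cache.Put("names", new[] { "Alice", "Bob" });   // store a value
        var names = (string[])cache.Get("names");       // read it back
    }
}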
Extending Hrvoje Hudo's answer...
Code:
using System;
using System.Runtime.Caching;
public class InMemoryCache : ICacheService
{
public TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class
{
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
public TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class
{
string cacheKey = string.Format(cacheKeyFormat, id);
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback(id);
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
}
interface ICacheService
{
    TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class;
    TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class;
}
Examples
Single item caching (when each item is cached based on its ID because caching the entire catalog for the item type would be too intensive).
Product product = cache.Get("product_{0}", productId, 10, productData.getProductById);
Caching all of something
IEnumerable<Categories> categories = cache.Get("categories", 20, categoryData.getCategories);
Why TId
The second helper is especially nice because most data keys are not composite. Additional methods could be added if you use composite keys often. This way you avoid doing all sorts of string concatenation or string.Format calls to get the key to pass to the cache helper. It also makes passing the data access method easier because you don't have to pass the ID into the wrapper method... the whole thing becomes very terse and consistent for the majority of use cases.
Here's an improvement to Hrvoje Hudo's answer. This implementation has a couple of key improvements:
Cache keys are created automatically based on the function to update data and the object passed in that specifies dependencies
Pass in time span for any cache duration
Uses a lock for thread safety
Note that this has a dependency on Newtonsoft.Json to serialize the dependsOn object, but that can be easily swapped out for any other serialization method.
ICache.cs
public interface ICache
{
T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class;
}
InMemoryCache.cs
using System;
using System.Reflection;
using System.Runtime.Caching;
using Newtonsoft.Json;
public class InMemoryCache : ICache
{
private static readonly object CacheLockObject = new object();
public T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class
{
string cacheKey = GetCacheKey(getItemCallback, dependsOn);
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
lock (CacheLockObject)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.Add(duration));
}
}
return item;
}
private string GetCacheKey<T>(Func<T> itemCallback, object dependsOn) where T: class
{
var serializedDependants = JsonConvert.SerializeObject(dependsOn);
var methodType = itemCallback.GetType();
return methodType.FullName + serializedDependants;
}
}
Usage:
var order = _cache.GetOrSet(
() => _session.Set<Order>().SingleOrDefault(o => o.Id == orderId)
, new { id = orderId }
, new TimeSpan(0, 10, 0)
);
public sealed class CacheManager
{
private static volatile CacheManager instance;
private static object syncRoot = new Object();
private ObjectCache cache = null;
private CacheItemPolicy defaultCacheItemPolicy = null;
private CacheEntryRemovedCallback callback = null;
private bool allowCache = true;
private CacheManager()
{
cache = MemoryCache.Default;
callback = new CacheEntryRemovedCallback(this.CachedItemRemovedCallback);
defaultCacheItemPolicy = new CacheItemPolicy();
defaultCacheItemPolicy.AbsoluteExpiration = DateTime.Now.AddHours(1.0);
defaultCacheItemPolicy.RemovedCallback = callback;
allowCache = StringUtils.Str2Bool(ConfigurationManager.AppSettings["AllowCache"]);
}
public static CacheManager Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new CacheManager();
}
}
}
return instance;
}
}
public IEnumerable GetCache(String Key)
{
if (Key == null || !allowCache)
{
return null;
}
try
{
String Key_ = Key;
if (cache.Contains(Key_))
{
return (IEnumerable)cache.Get(Key_);
}
else
{
return null;
}
}
catch (Exception)
{
return null;
}
}
public void ClearCache(string key)
{
AddCache(key, null);
}
public bool AddCache(String Key, IEnumerable data, CacheItemPolicy cacheItemPolicy = null)
{
if (!allowCache) return true;
try
{
if (Key == null)
{
return false;
}
if (cacheItemPolicy == null)
{
cacheItemPolicy = defaultCacheItemPolicy;
}
String Key_ = Key;
lock (Key_)
{
return cache.Add(Key_, data, cacheItemPolicy);
}
}
catch (Exception)
{
return false;
}
}
private void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
String strLog = String.Concat("Reason: ", arguments.RemovedReason.ToString(), " | Key-Name: ", arguments.CacheItem.Key, " | Value-Object: ", arguments.CacheItem.Value.ToString());
LogManager.Instance.Info(strLog);
}
}
I use two classes. First one the cache core object:
public class Cacher<TValue>
where TValue : class
{
#region Properties
private Func<TValue> _init;
public string Key { get; private set; }
public TValue Value
{
get
{
var item = HttpRuntime.Cache.Get(Key) as TValue;
if (item == null)
{
item = _init();
HttpContext.Current.Cache.Insert(Key, item);
}
return item;
}
}
#endregion
#region Constructor
public Cacher(string key, Func<TValue> init)
{
Key = key;
_init = init;
}
#endregion
#region Methods
public void Refresh()
{
HttpRuntime.Cache.Remove(Key);
}
#endregion
}
Second one is list of cache objects:
public static class Caches
{
static Caches()
{
Languages = new Cacher<IEnumerable<Language>>("Languages", () =>
{
using (var context = new WordsContext())
{
return context.Languages.ToList();
}
});
}
public static Cacher<IEnumerable<Language>> Languages { get; private set; }
}
I will say that implementing a singleton for this data-persistence concern can be a solution, in case you find the previous solutions too complicated.
public class GPDataDictionary
{
private Dictionary<string, object> configDictionary = new Dictionary<string, object>();
/// <summary>
/// Configuration values dictionary
/// </summary>
public Dictionary<string, object> ConfigDictionary
{
get { return configDictionary; }
}
private static GPDataDictionary instance;
public static GPDataDictionary Instance
{
get
{
if (instance == null)
{
instance = new GPDataDictionary();
}
return instance;
}
}
// private constructor
private GPDataDictionary() { }
} // singleton
HttpContext.Current.Cache.Insert("subjectlist", subjectlist);
You can also try using the caching built into ASP.NET MVC:
Add the following attribute to the controller method you'd like to cache:
[OutputCache(Duration=10)]
In this case the ActionResult of this action will be cached for 10 seconds.
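For example, on a hypothetical controller (ProductController and IProductRepository are illustrative names only):
using System.Web.Mvc;

public class ProductController : Controller
{
    private readonly IProductRepository _repository;

    public ProductController(IProductRepository repository)
    {
        _repository = repository;
    }

    // The rendered output of this action is cached for 10 seconds;
    // requests within that window are served from the output cache.
    [OutputCache(Duration = 10, VaryByParam = "none")]
    public ActionResult Index()
    {
        return View(_repository.GetAll());
    }
}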
More on this here
