PUT/POST Key/Value collection to OpenRasta - openrasta

How should an OpenRasta handler be implemented such that it can accept a URL-template based ID together with a dictionary, Hashtable or NameValueCollection (I don't care which one)?
My URL-template is "/fielddata/{correlationId}".
My PUT message is:
PUT http://myhost/fielddata/39950 HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 14
X=123&Y=Abc
I have tried a handler like this:
public class FieldDataHandler
{
public void Put(string correlationId, NameValueCollection data)
{
}
}
But I get an exception like this:
openrasta Verbose: 0 : Incoming host request for http://myhost/fielddata/39950
openrasta Verbose: 0 : Adding communication context data
openrasta Verbose: 0 : Found 1 operation(s) with a matching name.
openrasta Verbose: 0 : Found 0 operation(s) with matching [HttpOperation] attribute.
openrasta Information: 0 : Operation FieldDataHandler::Put(String correlationId, NameValueCollection data) selected with 2 required members and 0 optional members, with codec ApplicationXWwwFormUrlencodedKeyedValuesCodec with score 1,333333.
openrasta Error: 0 : An error has occurred and the processing of the request has stopped.
Exception:
System.InvalidOperationException: The operation is not ready for invocation.
at OpenRasta.OperationModel.MethodBased.MethodBasedOperation.Invoke() in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\OperationModel\MethodBased\MethodBasedOperation.cs:line 56
at OpenRasta.OperationModel.Interceptors.OperationWithInterceptors.<Invoke>b__0() in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\OperationModel\Interceptors\OperationWithInterceptors.cs:line 47
at OpenRasta.OperationModel.Interceptors.OperationWithInterceptors.Invoke() in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\OperationModel\Interceptors\OperationWithInterceptors.cs:line 52
at OpenRasta.OperationModel.OperationExecutor.Execute(IEnumerable`1 operations) in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\OperationModel\OperationExecutor.cs:line 14
at OpenRasta.Pipeline.Contributors.OperationInvokerContributor.ExecuteOperations(ICommunicationContext context) in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\Pipeline\Contributors\OperationInvokerContributor.cs:line 29
at OpenRasta.Pipeline.PipelineRunner.ExecuteContributor(ICommunicationContext context, ContributorCall call) in c:\Projects\OpenRasta\openrasta-stable\src\core\OpenRasta\Pipeline\PipelineRunner.cs:line 187
openrasta Information: 0 : Executing OperationResult OperationResult: type=InternalServerError, statusCode=500.

I ended up fixing this by creating a new codec for a domain-specific resource type (instead of the generic NameValueCollection type) and then doing the form-urlencoded (de)serialization by other means.
The codec itself looks like this:
[MediaType("application/x-www-form-urlencoded")]
public class RestFieldDataCodec : IMediaTypeWriter, IMediaTypeReader
{
public void WriteTo(object entity, IHttpEntity response, string[] codecParameters)
{
RestFieldData data = (RestFieldData)entity;
using (TextWriter writer = new StreamWriter(response.Stream))
{
FormUrlEncodingSerializer serializer = new FormUrlEncodingSerializer(typeof(NameValueCollection));
serializer.Serialize(writer, data.Data);
}
}
public object ReadFrom(IHttpEntity request, IType destinationType, string destinationName)
{
using (TextReader reader = new StreamReader(request.Stream))
{
FormUrlEncodingSerializer serializer = new FormUrlEncodingSerializer(typeof(NameValueCollection));
NameValueCollection keyValueData = (NameValueCollection)serializer.Deserialize(reader);
return new RestFieldData
{
Data = keyValueData
};
}
}
public object Configuration { get; set; }
}
And the resource type is rather simple:
public class RestFieldData
{
public NameValueCollection Data { get; set; }
}
The FormUrlEncodingSerializer class is from https://github.com/JornWildt/Ramone
The handler method ended up like this:
public void Put(string correlationId, RestFieldData payload)
{
...
}
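For completeness, the resource type, handler and codec also have to be wired up in the OpenRasta configuration. That part isn't shown above; a minimal sketch using the standard fluent configuration API (the Configuration class name is just a placeholder) might look like this:

using OpenRasta.Configuration;

public class Configuration : IConfigurationSource
{
    public void Configure()
    {
        using (OpenRastaConfiguration.Manual)
        {
            // Map the resource type to the URI template, the handler and the custom codec.
            ResourceSpace.Has.ResourcesOfType<RestFieldData>()
                .AtUri("/fielddata/{correlationId}")
                .HandledBy<FieldDataHandler>()
                .TranscodedBy<RestFieldDataCodec>();
        }
    }
}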

Related

How to get configKey in a custom Feign decoder

I have several APIs defined in a TestFeignClient:
@FeignClient(name = "testFeign", url = "xxx")
public interface TestFeignClient {
@CustomAnnotation(value = 1)
@GetMapping("/test2")
String test1();
@CustomAnnotation(value = 2)
@GetMapping("/test2")
String test2();
}
I also define a custom decoder like this:
public class MyDecoder implements Decoder {
@Override
public Object decode(Response response, Type type) {
// how to get the @CustomAnnotation value in this method ?
return null;
}
}
My question is: how can I get the @CustomAnnotation value in the decoder method?

OAuth2Authentication object deserialization (RedisTokenStore)

I'm trying to rewrite some legacy code which used org.springframework.security.oauth2.provider.token.store.InMemoryTokenStore to store the access tokens, and I'm currently trying to use the RedisTokenStore instead. The token gets generated and stored in Redis fine (standalone Redis configuration); however, deserialization of OAuth2Authentication fails with the following error:
Could not read JSON: Cannot construct instance of `org.springframework.security.oauth2.provider.OAuth2Authentication` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator)
Since there's no default constructor for this class, the deserialization and mapping to the actual object while looking up from Redis fails.
RedisTokenStore redisTokenStore = new RedisTokenStore(jedisConnectionFactory);
redisTokenStore.setSerializationStrategy(new StandardStringSerializationStrategy() {
@Override
protected <T> T deserializeInternal(byte[] bytes, Class<T> aClass) {
return Utilities.parse(new String(bytes, StandardCharsets.UTF_8),aClass);
}
@Override
protected byte[] serializeInternal(Object o) {
return Objects.requireNonNull(Utilities.convert(o)).getBytes();
}
});
this.tokenStore = redisTokenStore;
public static <T> T parse(String json, Class<T> clazz) {
try {
return OBJECT_MAPPER.readValue(json, clazz);
} catch (IOException e) {
log.error("Jackson2Json failed: " + e.getMessage());
}
return null;
}
public static String convert(Object data) {
try {
return OBJECT_MAPPER.writeValueAsString(data);
} catch (JsonProcessingException e) {
log.error("Conversion failed: " + e.getMessage());
}
return null;
}
How is the OAuth2Authentication object reconstructed when the token is looked up from Redis? Since it does not define a default constructor, any Jackson-based serializer and object mapper won't be able to deserialize it.
Again, the serialization works fine (since OAuth2Authentication implements the Serializable interface) and the token gets stored in Redis without issues. It just fails when /oauth/check_token is called.
What am I missing, and how is this problem dealt with when storing access tokens in Redis?
I solved the issue by writing a custom deserializer. It looks like this:
import com.fasterxml.jackson.core.JacksonException;
import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.DeserializationContext;
import com.fasterxml.jackson.databind.JsonDeserializer;
import com.fasterxml.jackson.databind.module.SimpleModule;
import org.springframework.security.oauth2.core.AuthorizationGrantType;
import java.io.IOException;
public class AuthorizationGrantTypeCustomDeserializer extends JsonDeserializer<AuthorizationGrantType> {
@Override
public AuthorizationGrantType deserialize(JsonParser p, DeserializationContext ctxt) throws IOException, JacksonException {
Root root = p.readValueAs(Root.class);
return root != null ? new AuthorizationGrantType(root.value) : new AuthorizationGrantType("");
}
private static class Root {
public String value;
}
public static SimpleModule generateModule() {
SimpleModule authGrantModule = new SimpleModule();
authGrantModule.addDeserializer(AuthorizationGrantType.class, new AuthorizationGrantTypeCustomDeserializer());
return authGrantModule;
}
}
Then I registered the deserializer in the ObjectMapper, which is later used by the Jackson API:
ObjectMapper mapper = new ObjectMapper()
.registerModule(AuthorizationGrantTypeCustomDeserializer.generateModule());

How can I convert an object to JSON in a RabbitMQ reply?

I have two applications communicating with each other using RabbitMQ.
I need to send an object from app1 to a listener in app2, and after some processing on the listener it should answer with another object. Right now I am receiving this error:
ClassNotFound
I am using this configuration for RabbitMQ in both applications:
@Configuration
public class RabbitConfiguration {
public final static String EXCHANGE_NAME = "paymentExchange";
public final static String EVENT_ROUTING_KEY = "eventRoute";
public final static String PAYEMNT_ROUTING_KEY = "paymentRoute";
public final static String QUEUE_EVENT = EXCHANGE_NAME + "." + "event";
public final static String QUEUE_PAYMENT = EXCHANGE_NAME + "." + "payment";
public final static String QUEUE_CAPTURE = EXCHANGE_NAME + "." + "capture";
@Bean
public List<Declarable> ds() {
return queues(QUEUE_EVENT, QUEUE_PAYMENT);
}
@Autowired
private ConnectionFactory rabbitConnectionFactory;
@Bean
public AmqpAdmin amqpAdmin() {
return new RabbitAdmin(rabbitConnectionFactory);
}
@Bean
public DirectExchange exchange() {
return new DirectExchange(EXCHANGE_NAME);
}
@Bean
public RabbitTemplate rabbitTemplate() {
RabbitTemplate r = new RabbitTemplate(rabbitConnectionFactory);
r.setExchange(EXCHANGE_NAME);
r.setChannelTransacted(false);
r.setConnectionFactory(rabbitConnectionFactory);
r.setMessageConverter(jsonMessageConverter());
return r;
}
@Bean
public MessageConverter jsonMessageConverter() {
return new Jackson2JsonMessageConverter();
}
private List<Declarable> queues(String... nomes) {
List<Declarable> result = new ArrayList<>();
for (int i = 0; i < nomes.length; i++) {
result.add(newQueue(nomes[i]));
if (nomes[i].equals(QUEUE_EVENT))
result.add(makeBindingToQueue(nomes[i], EVENT_ROUTING_KEY));
else
result.add(makeBindingToQueue(nomes[i], PAYEMNT_ROUTING_KEY));
}
return result;
}
private static Binding makeBindingToQueue(String queueName, String route) {
return new Binding(queueName, DestinationType.QUEUE, EXCHANGE_NAME, route, null);
}
private static Queue newQueue(String nome) {
return new Queue(nome);
}
}
I send the message using this:
String response = (String) rabbitTemplate.convertSendAndReceive(RabbitConfiguration.EXCHANGE_NAME,
RabbitConfiguration.PAYEMNT_ROUTING_KEY, domainEvent);
And I wait for a response, casting it to the expected object.
This communication happens between two different applications using the same RabbitMQ server.
How can I solve this?
I expected RabbitMQ to convert the message to JSON in the send operation and do the same in the reply, so I created an object corresponding to the JSON of the reply.
Please show the configuration for the listener. You should make sure the listener container there is supplied with the Jackson2JsonMessageConverter as well, so that the __TypeId__ header is carried back with the reply.
Also see the Spring AMQP JSON sample for some help.

Getting a Java Bad File Descriptor close bug while reading a multipart/form-data HTTP body

My web service is hosted on the Play! framework. I have a few image files uploaded from a non-Play!-based client using a standard HTTP client request with a content type of multipart/form-data.
On the web service side, I tried using Play!'s ApacheMultipartParser to parse the Http.request.body, but it failed with a Java IO Bad File Descriptor exception.
The problem seems to come from Java's MultipartStream, judging by the following call stack:
at java.io.FileInputStream.readBytes(Native Method)
at java.io.FileInputStream.read(FileInputStream.java:208)
at org.apache.commons.fileupload.MultipartStream$ItemInputStream.makeAvailable(MultipartStream.java:976)
at org.apache.commons.fileupload.MultipartStream$ItemInputStream.read(MultipartStream.java:886)
at java.io.InputStream.read(InputStream.java:85)
I also tried directly reading the Http.request.body into a big buffer as an experiment and got the same exception. What could be wrong?
The HTTP data sent from the client side looks like the following. On the web service side, I could use IO.write to save it to a file without any problem.
Content-Type: multipart/form-data; boundary=--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f
--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f
Content-Disposition: form-data; name="foo1.jpg"; filename="foo1.jpg"
Content-Length: 5578
Content-Type: image/jpeg
<image data 1 omitted>
--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f
Content-Disposition: form-data; name="foo2.jpg"; filename="foo2.jpg"
Content-Length: 327
Content-Type: image/jpeg
<image data 2 omitted>
--3i2ndDfv2rTHiSisAbouNdArYfORhtTPEefj3q2f--
I had the exact same issue. The problem lies in the way that Play! handles multipart uploads. Usually you can add a FileUpload to your upload method and get your files there. This helps a lot as you can get the filenames and sizes and all this stuff directly from Play:
public static void uploadFile(File fileUpload) {
String name = fileUpload.getName(); // etc.
}
However, using this logic prevents you from using the HTTP request directly. So if you use a non-Play way of uploading files (e.g. with XMLHttpRequest), where the automatic mapping to the FileUpload won't work, the following happens:
Play tries to bind the request to your arguments
Play encounters your File argument and parses the request.
Play finds nothing of use (as it doesn't understand XMLHttpRequest) and maps your File argument to null.
Now the request input stream has already been consumed by Play and you get your "Bad File Descriptor" message.
The solution is to not use any Play! magic if you want to use the same method for uploading via a form and via XMLHttpRequest (XHR). I wanted to use Valum's file uploader script (http://github.com/valums/file-uploader) in addition to my own form-based upload method. One uses XHR, the other uses plain multipart form uploads. I created the following method in my controller, which takes the uploaded file from the "qqfile" parameter and works with both form-based and XHR uploads:
#SuppressWarnings({"UnusedDeclaration"})
public static void uploadFile() {
FileUpload qqfile = null;
DataParser parser = DataParser.parsers.get(request.contentType);
if (parser != null) {
// normal upload. I have to manually parse this because
// play kills the body input stream for XHR-requests when I put the file upload as a method
// argument to {@link #uploadFile}
parser.parse(request.body);
#SuppressWarnings({"unchecked"})
ArrayList<FileUpload> uploads = (ArrayList<FileUpload>) request.args.get("__UPLOADS");
for (FileUpload upload : uploads) {
if ("qqfile".equals(upload.getFieldName())) {
qqfile = upload;
break;
}
}
} else {
// XHR upload
qqfile = new FileUpload(new XHRFileItem("qqfile"));
}
if (qqfile == null) {
badRequest();
return;
}
// and now do something with your Fileupload object here (e.g. write it to db or something else)
}
You can probably skip the first branch of the if statement if you split this method into two: use the normal Play! magic for default uploads and a separate method for your XHR uploads.
I also had to create the XHRFileItem class which just wraps around a file item that is posted via an XMLHttpRequest. You might have to modify it a bit to work with multiple files and your particular file uploader, but nevertheless here it is:
package application.util;
import org.apache.commons.fileupload.FileItem;
import org.jetbrains.annotations.Nullable;
import java.io.*;
import static play.mvc.Http.Request.current;
/**
* An implementation of FileItem to deal with XmlHttpRequest file uploads.
*/
public class XHRFileItem implements FileItem {
private String fieldName;
public XHRFileItem(String fieldName) {
this.fieldName = fieldName;
}
public InputStream getInputStream() throws IOException {
return current().body;
}
public String getContentType() {
return current().contentType;
}
public String getName() {
String fileName = current().params.get(fieldName);
if (fileName == null) {
fileName = current().headers.get("x-file-name").value();
}
return fileName;
}
public boolean isInMemory() {
return false;
}
public long getSize() {
return 0;
}
public byte[] get() {
return new byte[0];
}
public String getString(String s) throws UnsupportedEncodingException {
return s;
}
public String getString() {
return "";
}
public void write(File file) throws Exception {
FileOutputStream fos = new FileOutputStream(file);
InputStream is = getInputStream();
byte[] buf = new byte[64000];
int read;
while ((read = is.read(buf)) != -1) {
fos.write(buf, 0, read);
}
fos.close();
}
public void delete() {
}
public String getFieldName() {
return fieldName;
}
public void setFieldName(String fieldName) {
this.fieldName = fieldName;
}
public boolean isFormField() {
return false;
}
public void setFormField(boolean b) {
}
@Nullable
public OutputStream getOutputStream() throws IOException {
return null;
}
}
Hope this helps, it took me about a day to make this work on my end.

How to cache data in an MVC application

I have read lots of information about page caching and partial page caching in an MVC application. However, I would like to know how you would cache data.
In my scenario I will be using LINQ to Entities (Entity Framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and on the second call to use the cached version if it exists.
Can anyone show an example of how this would work, where this should be implemented (model?), and whether it would work?
I have seen this done in traditional ASP.NET apps, typically for very static data.
Here's a nice and simple cache helper class/service I use:
using System.Runtime.Caching;
public class InMemoryCache: ICacheService
{
public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
{
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(10));
}
return item;
}
}
interface ICacheService
{
T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}
Usage:
cacheProvider.GetOrSet("cache key", (delegate method if cache is empty));
The cache provider will check whether there's anything under "cache key" in the cache, and if there isn't, it will call the delegate method to fetch the data and store it in the cache.
Example:
var products = cacheService.GetOrSet("catalog.products", () => productRepository.GetAll());
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
string[] names = Cache["names"] as string[];
if(names == null) //not in cache
{
names = DB.GetNames();
Cache["names"] = names;
}
return names;
}
A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I'm referring to TT's post and suggest the following approach:
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
var noms = Cache["names"];
if(noms == null)
{
noms = DB.GetNames();
Cache["names"] = noms;
}
return ((string[])noms);
}
You should not return a value re-read from the cache, since you'll never know if at that specific moment it is still in the cache. Even if you inserted it in the statement before, it might already be gone, or it might never have been added to the cache; you just don't know.
So you add the data read from the database and return it directly, not re-reading from the cache.
For .NET 4.5+ framework
add reference: System.Runtime.Caching
add using statement:
using System.Runtime.Caching;
public string[] GetNames()
{
var noms = System.Runtime.Caching.MemoryCache.Default["names"];
if(noms == null)
{
noms = DB.GetNames();
System.Runtime.Caching.MemoryCache.Default["names"] = noms;
}
return ((string[])noms);
}
In the .NET Framework 3.5 and earlier versions, ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace. In previous versions of the .NET Framework, caching was available only in the System.Web namespace and therefore required a dependency on ASP.NET classes. In the .NET Framework 4, the System.Runtime.Caching namespace contains APIs that are designed for both Web and non-Web applications.
More info:
https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx
https://learn.microsoft.com/en-us/dotnet/framework/performance/caching-in-net-framework-applications
Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern in ASP.NET MVC. It uses the repository pattern effectively and allows you to get caching without having to change your existing code.
http://ardalis.com/Introducing-the-CachedRepository-Pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
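The posts themselves aren't reproduced here, but the gist of the pattern is a caching decorator that sits behind the same repository interface as the real implementation. A minimal sketch (the IProductRepository and Product names are assumed for illustration, not taken from the articles):

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;

public class Product { public int Id { get; set; } public string Name { get; set; } }

public interface IProductRepository
{
    IEnumerable<Product> List();
}

// Decorator that adds caching on top of any existing IProductRepository.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;

    public CachedProductRepository(IProductRepository inner)
    {
        _inner = inner;
    }

    public IEnumerable<Product> List()
    {
        const string cacheKey = "products.all";
        var cached = MemoryCache.Default.Get(cacheKey) as List<Product>;
        if (cached == null)
        {
            cached = _inner.List().ToList();
            MemoryCache.Default.Add(cacheKey, cached, DateTime.Now.AddMinutes(10));
        }
        return cached;
    }
}

Consumers keep depending on IProductRepository; the cached version is simply registered in the container instead of (and wrapping) the plain one, so no calling code changes.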
I have used Cache.Add in the following way and it works for me.
https://msdn.microsoft.com/en-us/library/system.web.caching.cache.add(v=vs.110).aspx
Parameter info for System.Web.Caching.Cache.Add:
public string GetInfo()
{
string name = string.Empty;
if(System.Web.HttpContext.Current.Cache["KeyName"] == null)
{
name = GetNameMethod();
System.Web.HttpContext.Current.Cache.Add("KeyName", name, null, DateTime.Noew.AddMinutes(5), Cache.NoSlidingExpiration, CacheitemPriority.AboveNormal, null);
}
else
{
name = System.Web.HttpContext.Current.Cache["KeyName"] as string;
}
return name;
}
AppFabric Caching is a distributed, in-memory caching technology that stores data in key-value pairs using physical memory across multiple servers. AppFabric provides performance and scalability improvements for .NET Framework applications. See Concepts and Architecture.
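Consuming it from code looks roughly like this (a sketch assuming the Microsoft.ApplicationServer.Caching client assemblies are referenced and a cache named "default" has been created on the cluster):

using Microsoft.ApplicationServer.Caching;

// The factory is expensive to create; keep one long-lived instance in a real application.
DataCacheFactory factory = new DataCacheFactory();
DataCache cache = factory.GetCache("default");

// Values are stored by key on the cache cluster rather than in the local process.
cache.Put("names", new[] { "Alice", "Bob" });
string[] names = (string[])cache.Get("names");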
Extending @Hrvoje Hudo's answer...
Code:
using System;
using System.Runtime.Caching;
public class InMemoryCache : ICacheService
{
public TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class
{
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
public TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class
{
string cacheKey = string.Format(cacheKeyFormat, id);
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback(id);
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
}
interface ICacheService
{
TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class;
TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class;
}
Examples
Single item caching (when each item is cached based on its ID because caching the entire catalog for the item type would be too intensive).
Product product = cache.Get("product_{0}", productId, 10, productData.getProductById);
Caching all of something
IEnumerable<Categories> categories = cache.Get("categories", 20, categoryData.getCategories);
Why TId
The second helper is especially nice because most data keys are not composite. Additional methods could be added if you use composite keys often. In this way you avoid doing all sorts of string concatenation or string.Format calls to get the key to pass to the cache helper. It also makes passing the data access method easier because you don't have to pass the ID into the wrapper method... the whole thing becomes very terse and consistent for the majority of use cases.
Here's an improvement to Hrvoje Hudo's answer. This implementation has a couple of key improvements:
Cache keys are created automatically based on the function to update data and the object passed in that specifies dependencies
Pass in time span for any cache duration
Uses a lock for thread safety
Note that this has a dependency on Newtonsoft.Json to serialize the dependsOn object, but that can be easily swapped out for any other serialization method.
ICache.cs
public interface ICache
{
T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class;
}
InMemoryCache.cs
using System;
using System.Reflection;
using System.Runtime.Caching;
using Newtonsoft.Json;
public class InMemoryCache : ICache
{
private static readonly object CacheLockObject = new object();
public T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class
{
string cacheKey = GetCacheKey(getItemCallback, dependsOn);
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
lock (CacheLockObject)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.Add(duration));
}
}
return item;
}
private string GetCacheKey<T>(Func<T> itemCallback, object dependsOn) where T: class
{
var serializedDependants = JsonConvert.SerializeObject(dependsOn);
var methodType = itemCallback.GetType();
return methodType.FullName + serializedDependants;
}
}
Usage:
var order = _cache.GetOrSet(
() => _session.Set<Order>().SingleOrDefault(o => o.Id == orderId)
, new { id = orderId }
, new TimeSpan(0, 10, 0)
);
public sealed class CacheManager
{
private static volatile CacheManager instance;
private static object syncRoot = new Object();
private ObjectCache cache = null;
private CacheItemPolicy defaultCacheItemPolicy = null;
private CacheEntryRemovedCallback callback = null;
private bool allowCache = true;
private CacheManager()
{
cache = MemoryCache.Default;
callback = new CacheEntryRemovedCallback(this.CachedItemRemovedCallback);
defaultCacheItemPolicy = new CacheItemPolicy();
defaultCacheItemPolicy.AbsoluteExpiration = DateTime.Now.AddHours(1.0);
defaultCacheItemPolicy.RemovedCallback = callback;
allowCache = StringUtils.Str2Bool(ConfigurationManager.AppSettings["AllowCache"]);
}
public static CacheManager Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new CacheManager();
}
}
}
return instance;
}
}
public IEnumerable GetCache(String Key)
{
if (Key == null || !allowCache)
{
return null;
}
try
{
String Key_ = Key;
if (cache.Contains(Key_))
{
return (IEnumerable)cache.Get(Key_);
}
else
{
return null;
}
}
catch (Exception)
{
return null;
}
}
public void ClearCache(string key)
{
AddCache(key, null);
}
public bool AddCache(String Key, IEnumerable data, CacheItemPolicy cacheItemPolicy = null)
{
if (!allowCache) return true;
try
{
if (Key == null)
{
return false;
}
if (cacheItemPolicy == null)
{
cacheItemPolicy = defaultCacheItemPolicy;
}
String Key_ = Key;
lock (Key_)
{
return cache.Add(Key_, data, cacheItemPolicy);
}
}
catch (Exception)
{
return false;
}
}
private void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
String strLog = String.Concat("Reason: ", arguments.RemovedReason.ToString(), " | Key-Name: ", arguments.CacheItem.Key, " | Value-Object: ", arguments.CacheItem.Value.ToString());
LogManager.Instance.Info(strLog);
}
}
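No usage is shown for this class; presumably it is called along these lines (the cache key and the data-access call are hypothetical):

// Try the cache first, then fall back to the data layer and cache the result.
IEnumerable products = CacheManager.Instance.GetCache("products");
if (products == null)
{
    products = LoadProductsFromDatabase(); // hypothetical data-access call
    CacheManager.Instance.AddCache("products", products);
}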
I use two classes. The first one is the cache core object:
public class Cacher<TValue>
where TValue : class
{
#region Properties
private Func<TValue> _init;
public string Key { get; private set; }
public TValue Value
{
get
{
var item = HttpRuntime.Cache.Get(Key) as TValue;
if (item == null)
{
item = _init();
HttpContext.Current.Cache.Insert(Key, item);
}
return item;
}
}
#endregion
#region Constructor
public Cacher(string key, Func<TValue> init)
{
Key = key;
_init = init;
}
#endregion
#region Methods
public void Refresh()
{
HttpRuntime.Cache.Remove(Key);
}
#endregion
}
The second one is the list of cache objects:
public static class Caches
{
static Caches()
{
Languages = new Cacher<IEnumerable<Language>>("Languages", () =>
{
using (var context = new WordsContext())
{
return context.Languages.ToList();
}
});
}
public static Cacher<IEnumerable<Language>> Languages { get; private set; }
}
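Usage is then just a property access on the static Caches class; for example, based on the classes above:

// Returns the cached list, loading it from the database on first access.
var languages = Caches.Languages.Value;

// Drop the cached copy so the next access reloads it.
Caches.Languages.Refresh();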
I will say that implementing a singleton for this data-persistence problem can be a solution, in case you find the previous solutions too complicated:
public class GPDataDictionary
{
private Dictionary<string, object> configDictionary = new Dictionary<string, object>();
/// <summary>
/// Configuration values dictionary
/// </summary>
public Dictionary<string, object> ConfigDictionary
{
get { return configDictionary; }
}
private static GPDataDictionary instance;
public static GPDataDictionary Instance
{
get
{
if (instance == null)
{
instance = new GPDataDictionary();
}
return instance;
}
}
// private constructor
private GPDataDictionary() { }
} // singleton
HttpContext.Current.Cache.Insert("subjectlist", subjectlist);
You can also try to use the caching built into ASP.NET MVC:
Add the following attribute to the controller method you'd like to cache:
[OutputCache(Duration=10)]
In this case the ActionResult of the method will be cached for 10 seconds.
More on this here
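For example, applied to an action (the action name and the DB call are just placeholders):

[OutputCache(Duration = 10)]
public ActionResult Names()
{
    // The rendered result is served from the output cache for 10 seconds.
    var names = DB.GetNames();
    return View(names);
}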
