I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type:
public delegate T Mapper<T>(MapFactory<T> mf, XElement d);
public class MapFactory<T>
{
Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>();
public void Add(string s, Mapper<T> m)
{
map.Add(s, m);
}
public T Get(XElement xe)
{
if (xe == null) throw new ArgumentNullException("xe", "Invalid document");
var key = xe.Name.ToString();
if (!map.ContainsKey(key)) throw new ArgumentException(
key + " is not a valid key.");
return map[key](this, xe);
}
public IEnumerable<T> GetAll(XElement xe)
{
if (xe == null) throw new ArgumentNullException("xe", "Invalid document");
foreach (var e in xe.Elements())
{
var val = e.Name.ToString();
if (map.ContainsKey(val))
yield return map[val](this, e);
}
}
}
Here is one type of object I want to store:
public partial class TestContent
{
// Test type
public string title;
// Once test if true
public bool once;
// Parameters
public Dictionary<string, object> args;
public TestContent()
{
title = string.Empty;
args = new Dictionary<string, object>();
}
public TestContent(XElement xe)
{
title = xe.Name.ToString();
args = new Dictionary<string, object>();
xe.ParseAttribute("once", once);
}
}
XElement.ParseAttribute is an extension method that works as one might expect. It returns a boolean that is true if successful.
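The extension itself isn't shown above, so here is only a minimal sketch of what it might look like; the ref parameter and bool-only handling are assumptions, not the author's actual signature:
using System.Xml.Linq;
public static class XElementExtensions
{
    // Hypothetical sketch: read an attribute and try to parse it as a bool.
    // Assigns the target and returns true on success; leaves it untouched otherwise.
    public static bool ParseAttribute(this XElement xe, string name, ref bool target)
    {
        XAttribute attr = xe.Attribute(name);
        if (attr == null) return false;
        bool parsed;
        if (!bool.TryParse(attr.Value, out parsed)) return false;
        target = parsed;
        return true;
    }
}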
The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem.
public class LogicTest : TestBase
{
string opkey;
List<TestBase> items;
public override bool Test(BehaviorArgs args)
{
if (items == null) return false;
if (items.Count == 0) return false;
bool result = items[0].Test(args);
for (int i = 1; i < items.Count; i++)
{
bool other = items[i].Test(args);
switch (opkey)
{
case "And":
result &= other;
if (!result) return false;
break;
case "Or":
result |= other;
if (result) return true;
break;
case "Xor":
result ^= other;
break;
case "Nand":
result = !(result & other);
break;
case "Nor":
result = !(result | other);
break;
default:
result = false;
break;
}
}
return result;
}
public static TestContent Build(MapFactory<TestContent> mf, XElement xe)
{
var result = new TestContent(xe);
string key = "Or";
xe.GetAttribute("op", key);
result.args.Add("key", key);
var names = mf.GetAll(xe).ToList();
if (names.Count() < 2) throw new ArgumentException(
"LogicTest requires at least two entries.");
result.args.Add("items", names);
return result;
}
}
My actual code is more involved as the factory has two dictionaries, one that turns an XElement into a content type to write and another used by the reader to create the actual game objects.
I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter have any known hooks to attach an IServiceProvider or equivalent.
Any ideas?
I needed to create a data structure essentially on demand, without access to the underlying classes, as they came from a third party, in this case XNA Game Studio. There is only one way I know of to do this: statically.
public class TestMap : Dictionary<string, string>
{
private static readonly TestMap map = new TestMap();
private TestMap()
{
Add("Logic", "LogicProcessor");
Add("Sequence", "SequenceProcessor");
Add("Key", "KeyProcessor");
Add("KeyVector", "KeyVectorProcessor");
Add("Mouse", "MouseProcessor");
Add("Pad", "PadProcessor");
Add("PadVector", "PadVectorProcessor");
}
public static TestMap Map
{
get { return map; }
}
public IEnumerable<TestContent> Collect(XElement xe, ContentProcessorContext cpc)
{
foreach(var e in xe.Elements().Where(e => ContainsKey(e.Name.ToString())))
{
yield return cpc.Convert<XElement, TestContent>(
e, this[e.Name.ToString()]);
}
}
}
I took this a step further and created content processors for each type of TestBase:
/// <summary>
/// Turns an imported XElement into a TestContent used for a LogicTest
/// </summary>
[ContentProcessor(DisplayName = "LogicProcessor")]
public class LogicProcessor : ContentProcessor<XElement, TestContent>
{
public override TestContent Process(XElement input, ContentProcessorContext context)
{
var result = new TestContent(input);
string key = "Or";
input.GetAttribute("op", key);
result.args.Add("key", key);
var items = TestMap.Map.Collect(input, context);
if (items.Count() < 2) throw new ArgumentException(
"LogicProcessor requires at least two items.");
result.args.Add("items", items);
return result;
}
}
Any attempt to reference or access the class, such as calling TestMap.Map.Collect, will initialize the underlying static instance if needed. I basically moved the code from LogicTest.Build to the processor. I also carry out any needed validation in the processor.
When I get to reading these classes I will have the ContentService to help.
I am trying to develop a custom source for parallel GCS content scanning. The naive approach would be to loop through listObjects calls:
while (...) {
Objects objects = gcsUtil.listObjects(bucket, prefix, pageToken);
pageToken = objects.getNextPageToken();
...
}
The problem is performance when there are tens of millions of objects.
Instead of the single-threaded code, we can add the delimiter / and submit parallel processing for each prefix found:
...
Objects objects = gcsUtil.listObjects(bucket, prefix, pageToken, "/");
for (String subPrefix : objects.getPrefixes()) {
scanAsync(bucket, subPrefix);
}
...
The next idea was to wrap this logic in a Splittable DoFn.
Choice of RestrictionTracker: I don't see how any of the existing RestrictionTrackers could be used, so I decided to write my own. The restriction itself is basically a list of prefixes to scan. tryClaim checks whether there are prefixes left and receives newly scanned prefixes to append to the current restriction. trySplit splits the list of prefixes.
The problem I faced is that trySplit can be called before all sub-prefixes are found, so the current restriction may receive more work after it has been split. It also seems that trySplit keeps being called until it returns a non-null value for a given RestrictionTracker: after a certain number of elements go to the output, after 1 second via the scheduler, or when ProcessContinuation.resume() is returned. This doesn't work in my case, as new prefixes can be found at any time. And I can't checkpoint by returning ProcessContinuation.resume(), because if a split has already happened, any work left in the current restriction will cause the checkDone() check to fail.
Another problem: I suspect I couldn't achieve parallel execution in the DirectRunner, as trySplit was always called with fractionOfRemainder=0 and a new RestrictionTracker was started only after the current one completed its piece of work.
It would also be great to read a more detailed explanation of the Splittable DoFn components' lifecycle: how parallel execution per element is achieved, and how and when the state of a RestrictionTracker can be modified.
Update: adding simplified code that should show the intended implementation.
@DoFn.BoundedPerElement
private static class ScannerDoFn extends DoFn<String, String> {
private transient GcsUtil gcsUtil;
@GetInitialRestriction
public ScannerRestriction getInitialRestriction(@Element String bucket) {
return ScannerRestriction.init(bucket);
}
@ProcessElement
public ProcessContinuation processElement(
ProcessContext c,
@Element String bucket,
RestrictionTracker<ScannerRestriction, ScannerPosition> tracker,
OutputReceiver<String> outputReceiver) {
if (gcsUtil == null) {
gcsUtil = c.getPipelineOptions().as(GcsOptions.class).getGcsUtil();
}
ScannerRestriction currentRestriction = tracker.currentRestriction();
ScannerPosition position = new ScannerPosition();
while (true) {
if (tracker.tryClaim(position)) {
if (position.completedCurrent) {
// position.clear();
// ideally I would like to get a checkpoint here before starting new work
return ProcessContinuation.resume();
}
try {
Objects objects = gcsUtil.listObjects(
currentRestriction.bucket,
position.currentPrefix,
position.currentPageToken,
"/");
if (objects.getItems() != null) {
for (StorageObject o : objects.getItems()) {
outputReceiver.output(o.getName());
}
}
if (objects.getPrefixes() != null) {
position.newPrefixes.addAll(objects.getPrefixes());
}
position.currentPageToken = objects.getNextPageToken();
if (position.currentPageToken == null) {
position.completedCurrent = true;
}
} catch (Throwable throwable) {
logger.error("Error during scan", throwable);
}
} else {
return ProcessContinuation.stop();
}
}
}
@NewTracker
public RestrictionTracker<ScannerRestriction, ScannerPosition> restrictionTracker(@Restriction ScannerRestriction restriction) {
return new ScannerRestrictionTracker(restriction);
}
@GetRestrictionCoder
public Coder<ScannerRestriction> getRestrictionCoder() {
return ScannerRestriction.getCoder();
}
}
public static class ScannerPosition {
private String currentPrefix;
private String currentPageToken;
private final List<String> newPrefixes;
private boolean completedCurrent;
public ScannerPosition() {
this.currentPrefix = null;
this.currentPageToken = null;
this.newPrefixes = Lists.newArrayList();
this.completedCurrent = false;
}
public void clear() {
this.currentPageToken = null;
this.currentPrefix = null;
this.completedCurrent = false;
}
}
private static class ScannerRestriction {
private final String bucket;
private final LinkedList<String> prefixes;
private ScannerRestriction(String bucket) {
this.bucket = bucket;
this.prefixes = Lists.newLinkedList();
}
public static ScannerRestriction init(String bucket) {
ScannerRestriction res = new ScannerRestriction(bucket);
res.prefixes.add("");
return res;
}
public ScannerRestriction empty() {
return new ScannerRestriction(bucket);
}
public boolean isEmpty() {
return prefixes.isEmpty();
}
public static Coder<ScannerRestriction> getCoder() {
return ScannerRestrictionCoder.INSTANCE;
}
private static class ScannerRestrictionCoder extends AtomicCoder<ScannerRestriction> {
private static final ScannerRestrictionCoder INSTANCE = new ScannerRestrictionCoder();
private final static Coder<List<String>> listCoder = ListCoder.of(StringUtf8Coder.of());
@Override
public void encode(ScannerRestriction value, OutputStream outStream) throws IOException {
NullableCoder.of(StringUtf8Coder.of()).encode(value.bucket, outStream);
listCoder.encode(value.prefixes, outStream);
}
@Override
public ScannerRestriction decode(InputStream inStream) throws IOException {
String bucket = NullableCoder.of(StringUtf8Coder.of()).decode(inStream);
List<String> prefixes = listCoder.decode(inStream);
ScannerRestriction res = new ScannerRestriction(bucket);
res.prefixes.addAll(prefixes);
return res;
}
}
}
private static class ScannerRestrictionTracker extends RestrictionTracker<ScannerRestriction, ScannerPosition> {
private ScannerRestriction restriction;
private ScannerPosition lastPosition = null;
ScannerRestrictionTracker(ScannerRestriction restriction) {
this.restriction = restriction;
}
@Override
public boolean tryClaim(ScannerPosition position) {
restriction.prefixes.addAll(position.newPrefixes);
position.newPrefixes.clear();
if (position.completedCurrent) {
// completed work for current prefix
assert lastPosition != null && lastPosition.currentPrefix.equals(position.currentPrefix);
lastPosition = null;
return true; // return true but we would need to claim again if we need to get next prefix
} else if (lastPosition != null && lastPosition.currentPrefix.equals(position.currentPrefix)) {
// proceed work for current prefix
lastPosition = position;
return true;
}
// looking for next prefix
assert position.currentPrefix == null;
assert lastPosition == null;
if (restriction.isEmpty()) {
// no work to do
return false;
}
position.currentPrefix = restriction.prefixes.poll();
lastPosition = position;
return true;
}
@Override
public ScannerRestriction currentRestriction() {
return restriction;
}
@Override
public SplitResult<ScannerRestriction> trySplit(double fractionOfRemainder) {
if (lastPosition == null && restriction.isEmpty()) {
// no work at all
return null;
}
if (lastPosition != null && restriction.isEmpty()) {
// work at the moment only at currently scanned prefix
return SplitResult.of(restriction, restriction.empty());
}
int size = restriction.prefixes.size();
int newSize = (int) Math.round(fractionOfRemainder * size);
if (newSize == 0) {
ScannerRestriction residual = restriction;
restriction = restriction.empty();
return SplitResult.of(restriction, residual);
}
ScannerRestriction residual = restriction.empty();
for (int i = newSize; i < size; i++) {
residual.prefixes.add(restriction.prefixes.removeLast());
}
return SplitResult.of(restriction, residual);
}
@Override
public void checkDone() throws IllegalStateException {
if (lastPosition != null) {
throw new IllegalStateException("Called checkDone on not completed job");
}
}
@Override
public IsBounded isBounded() {
return IsBounded.UNBOUNDED;
}
}
How do you use DecorateAllWith to decorate, with a DynamicProxy, all instances that implement an interface?
For example:
public class ApplicationServiceInterceptor : IInterceptor
{
public void Intercept(IInvocation invocation)
{
// ...
invocation.Proceed();
// ...
}
}
public class ApplicationServiceConvention : IRegistrationConvention
{
public void Process(Type type, Registry registry)
{
if (type.CanBeCastTo<IApplicationService>() && type.IsInterface)
{
var proxyGenerator = new ProxyGenerator();
// ??? how to use proxyGenerator??
// ???
registry.For(type).DecorateAllWith(???); // How to use DecorateAllWith DynamicProxy ...??
}
}
}
I could decorate some interfaces to concrete types using (for example):
var proxyGenerator = new ProxyGenerator();
registry.For<IApplicationService>().Use<BaseAppService>().DecorateWith(service => proxyGenerator.CreateInterfaceProxyWithTargetInterface(....))
But I haven't been able to use DecorateAllWith to do this.
To call registry.For<>().Use<>().DecorateWith() I have to do this:
if (type.CanBeCastTo<IApplicationService>() && !type.IsAbstract)
{
var interfaceToProxy = type.GetInterface("I" + type.Name);
if (interfaceToProxy == null)
return null;
var proxyGenerator = new ProxyGenerator();
// Build expression to use registration by reflection
var expression = BuildExpressionTreeToCreateProxy(proxyGenerator, type, interfaceToProxy, new MyInterceptor());
// Register using reflection
var f = CallGenericMethod(registry, "For", interfaceToProxy);
var u = CallGenericMethod(f, "Use", type);
CallMethod(u, "DecorateWith", expression);
}
Only for crazy minds...
I'm starting to get very tired of StructureMap: many changes and no documentation. I have been reading the source code, but it's too much effort for my objective.
If someone can shed a bit of light on this I will be grateful.
Thanks in advance.
In addition, I post here the real code of my helper that generates the expression tree and registers the plugin family:
public static class RegistrationHelper
{
public static void RegisterWithInterceptors(this Registry registry, Type interfaceToProxy, Type concreteType,
IInterceptor[] interceptors, ILifecycle lifecycle = null)
{
var proxyGenerator = new ProxyGenerator();
// Generate expression tree to call DecorateWith of StructureMap SmartInstance type
// registry.For<interfaceToProxy>().Use<concreteType>()
// .DecorateWith(ex => (IApplicationService)
// proxyGenerator.CreateInterfaceProxyWithTargetInterface(interfaceToProxy, ex, interceptors)
var expressionParameter = Expression.Parameter(interfaceToProxy, "ex");
var proxyGeneratorConstant = Expression.Constant(proxyGenerator);
var interfaceConstant = Expression.Constant(interfaceToProxy);
var interceptorConstant = Expression.Constant(interceptors);
var methodCallExpression = Expression.Call(proxyGeneratorConstant,
typeof (ProxyGenerator).GetMethods().First(
met => met.Name == "CreateInterfaceProxyWithTargetInterface"
&& !met.IsGenericMethod && met.GetParameters().Count() == 3),
interfaceConstant,
expressionParameter,
interceptorConstant);
var convert = Expression.Convert(methodCallExpression, interfaceToProxy);
var func = typeof(Func<,>).MakeGenericType(interfaceToProxy, interfaceToProxy);
var expr = Expression.Lambda(func, convert, expressionParameter);
// Register using reflection
registry.CallGenericMethod("For", interfaceToProxy, new[] {(object) lifecycle /*Lifecicle*/})
.CallGenericMethod("Use", concreteType)
.CallNoGenericMethod("DecorateWith", expr);
}
}
public static class CallMethodExtensions
{
/// <summary>
/// Call a method with a generic parameter by reflection (obj.methodName[genericType](parameters))
/// </summary>
/// <returns></returns>
public static object CallGenericMethod(this object obj, string methodName, Type genericType, params object[] parameters)
{
var method = obj.GetType().GetMethods().First(m => m.Name == methodName && m.IsGenericMethod);
var genericMethod = method.MakeGenericMethod(genericType);
return genericMethod.Invoke(obj, parameters);
}
/// <summary>
/// Call a method without a generic parameter by reflection (obj.methodName(parameters))
/// </summary>
/// <returns></returns>
public static object CallNoGenericMethod(this object obj, string methodName, params object[] parameters)
{
var method = obj.GetType().GetMethods().First(m => m.Name == methodName && !m.IsGenericMethod);
return method.Invoke(obj, parameters);
}
}
Almost two years later I have needed to return to this issue for a new project. This time I solved it using StructureMap 4.
You can use a custom interceptor policy to decorate an instance based on its type. You have to implement an interceptor and an interceptor policy, and configure them on a registry.
The Interceptor
public class MyExInterceptor : Castle.DynamicProxy.IInterceptor
{
public void Intercept(Castle.DynamicProxy.IInvocation invocation)
{
Console.WriteLine("-- Call to " + invocation.Method);
invocation.Proceed();
}
}
The interceptor policy
public class CustomInterception : IInterceptorPolicy
{
public string Description
{
get { return "good interception policy"; }
}
public IEnumerable<IInterceptor> DetermineInterceptors(Type pluginType, Instance instance)
{
if (pluginType == typeof(IAppService))
{
// DecoratorInterceptor is the simple case of wrapping one type with another
// concrete type that takes the first as a dependency
yield return new FuncInterceptor<IAppService>(i =>
(IAppService)
DynamicProxyHelper.CreateInterfaceProxyWithTargetInterface(typeof(IAppService), i));
}
}
}
Configuration
var container = new Container(_ =>
{
_.Policies.Interceptors(new CustomInterception());
_.For<IAppService>().Use<AppServiceImplementation>();
});
var service = container.GetInstance<IAppService>();
service.DoWork();
You can get a working example in this gist: https://gist.github.com/tolemac/3e31b44b7fc7d0b49c6547018f332d68. The gist shows three types of decoration; the third is the one described in this answer.
Using it you can configure the decorators of your services easily.
I am new to the dependency injection pattern, and I am having issues getting a new instance of a class from container.Resolve in TinyIoC; it just keeps returning the same instance rather than a new one. Now for the code.
public abstract class HObjectBase : Object
{
private string _name = String.Empty;
public string Name
{
get
{
return this._name;
}
set
{
if (this._name == string.Empty && value.Length > 0 && value != String.Empty)
this._name = value;
else if (value.Length < 1 && value == String.Empty)
throw new FieldAccessException("Objects names cannot be blank");
else
throw new FieldAccessException("Once the internal name of an object has been set it cannot be changed");
}
}
private Guid _id = new Guid();
public Guid Id
{
get
{
return this._id;
}
set
{
if (this._id == new Guid())
this._id = value;
else
throw new FieldAccessException("Once the internal id of an object has been set it cannot be changed");
}
}
private HObjectBase _parent = null;
public HObjectBase Parent
{
get
{
return this._parent;
}
set
{
if (this._parent == null)
this._parent = value;
else
throw new FieldAccessException("Once the parent of an object has been set it cannot be changed");
}
}
}
public abstract class HZoneBase : HObjectBase
{
public new HObjectBase Parent
{
get
{
return base.Parent;
}
set
{
if (value == null || value.GetType() == typeof(HZoneBase))
{
base.Parent = value;
}
else
{
throw new FieldAccessException("Zones may only have other zones as parents");
}
}
}
private IHMetaDataStore _store;
public HZoneBase(IHMetaDataStore store)
{
this._store = store;
}
public void Save()
{
this._store.SaveZone(this);
}
}
The derived class is a dummy at the moment, but here it is:
public class HZone : HZoneBase
{
public HZone(IHMetaDataStore store)
: base(store)
{
}
}
Now, since this is meant to be an external library, I have a facade class for accessing everything:
public class Hadrian
{
private TinyIoCContainer _container;
public Hadrian(IHMetaDataStore store)
{
this._container = new TinyIoCContainer();
this._container.Register(store);
this._container.AutoRegister();
}
public HZoneBase NewZone()
{
return _container.Resolve<HZoneBase>();
}
public HZoneBase GetZone(Guid id)
{
var metadataStore = this._container.Resolve<IHMetaDataStore>();
return metadataStore.GetZone(id);
}
public List<HZoneBase> ListRootZones()
{
var metadataStore = this._container.Resolve<IHMetaDataStore>();
return metadataStore.ListRootZones();
}
}
However, the test is failing because the NewZone() method on the Hadrian class keeps returning the same instance.
Test Code
[Fact]
public void ListZones()
{
Hadrian instance = new Hadrian(new MemoryMetaDataStore());
Guid[] guids = { Guid.NewGuid(), Guid.NewGuid(), Guid.NewGuid() };
int cnt = 0;
foreach (Guid guid in guids)
{
HZone zone = (HZone)instance.NewZone();
zone.Id = guids[cnt];
zone.Name = "Testing" + cnt.ToString();
zone.Parent = null;
zone.Save();
cnt++;
}
cnt = 0;
foreach (HZone zone in instance.ListRootZones())
{
Assert.Equal(zone.Id, guids[cnt]);
Assert.Equal(zone.Name, "Testing" + cnt.ToString());
Assert.Equal(zone.Parent, null);
}
}
I know it's probably something simple I'm missing with the pattern, but I'm not sure; any help would be appreciated.
First, please always simplify the code to what is absolutely necessary to demonstrate the problem, but provide enough that it will actually run; I had to guess what MemoryMetaDataStore does and implement it myself to run the code.
Also, please say where and how stuff fails, to point others straight to the issue. I spent a few minutes figuring out that the exception I was getting was your problem and that you weren't even getting to the assertions.
That said, container.Resolve<HZoneBase>() will always return the same instance because that's how autoregistration in TinyIoC works - once an abstraction has been resolved, the same instance is always returned for subsequent calls.
To change this, add the following line to the Hadrian constructor:
this._container.Register<HZoneBase, HZone>().AsMultiInstance();
This will tell the container to create a new instance for each resolution request for HZoneBase.
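With that line in place, the Hadrian constructor might look like this (a sketch; everything else stays as you posted it):
public Hadrian(IHMetaDataStore store)
{
    this._container = new TinyIoCContainer();
    this._container.Register(store);
    this._container.AutoRegister();
    // Resolve a fresh HZone for every HZoneBase request instead of a shared instance.
    this._container.Register<HZoneBase, HZone>().AsMultiInstance();
}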
Also, Bassetassen's answer about the Assert part is correct.
In general, if you want to learn DI, you should read Mark Seemann's excellent book "Dependency Injection in .NET" - not quite an easy read as the whole topic is inherently complex, but it's more than worth it and will let you get into it a few years faster than by learning it on your own.
In your assert stage you are not incrementing cnt. You are also using the actual value as the expected one in the assert. This will be confusing, because it reports something as expected when it is actually the value that was returned.
The assert part should be:
cnt = 0;
foreach (HZone zone in instance.ListRootZones())
{
Assert.Equal(guids[cnt], zone.Id);
Assert.Equal("Testing" + cnt.ToString(), zone.Name);
Assert.Equal(null, zone.Parent);
cnt++;
}
The problem I have is that I need to do about 40+ conversions to turn loosely typed info into strongly typed info stored in a db, an xml file, etc.
I plan to tag each type with a tuple, i.e. a transformational form like this:
host.name.string:host.dotquad.string
which offers a conversion from the input to an output form. For example, for the name stored in the host field of type string, the input is converted into dotquad notation of type string and stored back into the host field. More complex conversions may need several steps, with each step being accomplished by a method call, hence the method chaining.
Examining the example above further: take the tuple 'host.name.string' with the host field holding the name www.domain.com. A DNS lookup is done to convert the domain name to an IP address. Another method is applied to change the type returned by the DNS lookup into the internal dotquad type of type string. For this transformation, there are 4 separate methods called to convert from one tuple into another. Some other conversions may require more steps.
Ideally I would like a small example of how method chains are constructed at runtime. Development-time method chaining is relatively trivial, but it would require pages and pages of code to cover all possibilities with 40+ conversions.
One way I thought of doing this is parsing the tuples at startup, writing the chains out to an assembly, compiling it, and then using reflection to load and access it. That would be really ugly and would negate the performance increases I'm hoping to gain.
I'm using Mono, so no C# 4.0
Any help would be appreciated.
Bob.
Here is a quick and dirty solution using LINQ Expressions. You have indicated that you want C# 2.0; this is 3.5, but it does run on Mono 2.6. The method chaining is a bit hacky as I didn't know exactly how your version works, so you might need to tweak the expression code to suit.
The real magic happens in the Chainer class, which takes a collection of strings that represent the MethodChain subclasses. Take a collection like this:
{
"string",
"string",
"int"
}
This will generate a chain like this:
new StringChain(new StringChain(new IntChain()));
Chainer.CreateChain will return a lambda that calls MethodChain.Execute(). Because Chainer.CreateChain uses a bit of reflection, it's slow, but it only needs to run once for each expression chain. The execution of the lambda is nearly as fast as calling actual code.
Hope you can fit this into your architecture.
public abstract class MethodChain {
private MethodChain[] m_methods;
private object m_Result;
public MethodChain(params MethodChain[] methods) {
m_methods = methods;
}
public MethodChain Execute(object expression) {
if(m_methods != null) {
foreach(var method in m_methods) {
expression = method.Execute(expression).GetResult<object>();
}
}
m_Result = ExecuteInternal(expression);
return this;
}
protected abstract object ExecuteInternal(object expression);
public T GetResult<T>() {
return (T)m_Result;
}
}
public class IntChain : MethodChain {
public IntChain(params MethodChain[] methods)
: base(methods) {
}
protected override object ExecuteInternal(object expression) {
return int.Parse(expression as string);
}
}
public class StringChain : MethodChain {
public StringChain(params MethodChain[] methods):base(methods) {
}
protected override object ExecuteInternal(object expression) {
return (expression as string).Trim();
}
}
public class Chainer {
/// <summary>
/// methods are executed from back to front, so methods[1] will call method[0].Execute before executing itself
/// </summary>
/// <param name="methods"></param>
/// <returns></returns>
public Func<object, MethodChain> CreateChain(IEnumerable<string> methods) {
Expression expr = null;
foreach(var methodName in methods.Reverse()) {
ConstructorInfo cInfo= null;
switch(methodName.ToLower()) {
case "string":
cInfo = typeof(StringChain).GetConstructor(new []{typeof(MethodChain[])});
break;
case "int":
cInfo = typeof(IntChain).GetConstructor(new[] { typeof(MethodChain[]) });
break;
}
if(cInfo == null)
continue;
if(expr != null)
expr = Expression.New(cInfo, Expression.NewArrayInit( typeof(MethodChain), Expression.Convert(expr, typeof(MethodChain))));
else
expr = Expression.New(cInfo, Expression.Constant(null, typeof(MethodChain[])));
}
var objParam = Expression.Parameter(typeof(object));
var methodExpr = Expression.Call(expr, typeof(MethodChain).GetMethod("Execute"), objParam);
Func<object, MethodChain> lambda = Expression.Lambda<Func<object, MethodChain>>(methodExpr, objParam).Compile();
return lambda;
}
[TestMethod]
public void ExprTest() {
Chainer chainer = new Chainer();
var lambda = chainer.CreateChain(new[] { "int", "string" });
var result = lambda(" 34 ").GetResult<int>();
Assert.AreEqual(34, result);
}
}
The command pattern would fit here. What you could do is queue up commands as you need different operations performed on the different data types. Those queued commands could then all be processed, calling the appropriate methods, when you're ready later on.
This pattern can be implemented in .NET 2.0.
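A rough sketch of that idea, runnable on .NET 2.0; all of the names below are hypothetical, not taken from your code:
using System.Collections.Generic;
using System.Net;
// Each conversion step is a command that transforms the running value.
public interface IConversionCommand
{
    object Execute(object input);
}
// Example step: resolve a host name and return its dotted-quad string form.
public class HostNameToDotQuadCommand : IConversionCommand
{
    public object Execute(object input)
    {
        IPAddress[] addresses = Dns.GetHostAddresses((string)input);
        return addresses[0].ToString();
    }
}
// Queues the steps for a conversion and runs them in order,
// feeding each command's output into the next one.
public class ConversionPipeline
{
    private readonly Queue<IConversionCommand> _commands = new Queue<IConversionCommand>();
    public void Enqueue(IConversionCommand command)
    {
        _commands.Enqueue(command);
    }
    public object Run(object input)
    {
        object current = input;
        while (_commands.Count > 0)
        {
            current = _commands.Dequeue().Execute(current);
        }
        return current;
    }
}
A pipeline for a tuple such as host.name.string:host.dotquad.string would then be built by enqueuing the commands looked up by tuple name and calling Run on the input value.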
Do you really need to do this at execution time? Can't you create the combination of operations using code generation?
Let me elaborate:
Assuming you have a class called Conversions which contains all the 40+ conversions you mentioned, like this:
//just pseudo code..
class conversions{
string host_name(string input){}
string host_dotquad(string input){}
int type_convert(string input){}
float type_convert(string input){}
float increment_float(float input){}
}
Write a simple console app or something similar which uses reflection to generate code for methods like this:
execute_host_name(string input, Queue<string> conversionQueue)
{
string output = conversions.host_name(input);
if(conversionQueue.Count == 0)
return output;
switch(conversionQueue.Dequeue())
{
// generate case statements only for methods that take in
// a string as parameter because the host_name method returns a string.
case "host.dotquad": return execute_host_dotquad(output,conversionQueue);
case "type.convert": return execute_type_convert(output, conversionQueue);
default: // exception...
}
}
Wrap all this in a nice little execute method like this:
object execute(string input, string[] conversions)
{
Queue<string> conversionQueue = //create the queue..
switch(conversionQueue.Dequeue())
{
case "host.name": return execute_host_name(input, conversionQueue);
case "host.dotquad": return execute_host_dotquad(input, conversionQueue);
case "type.convert": return execute_type_convert(input, conversionQueue);
default: // exception...
}
}
This code-generation application needs to be executed only when your method signatures change or when you decide to add new transformations.
Main advantages:
No runtime overhead
Easy to add/delete/change the conversions (code generator will take care of the code changes :) )
What do you think?
I apologize for the long code dump and the fact that it is in Java, rather than C#, but I found your problem quite interesting and I do not have much C# experience. Hopefully you will be able to adapt this solution without difficulty.
One approach to solving your problem is to create a cost for each conversion -- usually this is related to the accuracy of the conversion -- and then perform a search to find the best possible conversion sequence to get from one type to another.
The reason for needing a cost function is to choose among multiple conversion paths. For example, converting from an integer to a string is lossless, but there is no guarantee that every string can be represented by an integer. So, if you had two conversion chains
string -> integer -> float -> decimal
string -> float -> decimal
You would want to select the second one because it will reduce the chance of a conversion failure.
The Java code below implements such a scheme and performs a best-first search to find an optimal conversion sequence. I hope you find it useful. Running the code produces the following output:
> No conversion possible from string to integer
> The optimal conversion sequence from string to host.dotquad.string is:
> string to host.name.string, cost = -1.609438
> host.name.string to host.dns, cost = -1.609438 *PERFECT*
> host.dns to host.dotquad, cost = -1.832581
> host.dotquad to host.dotquad.string, cost = -1.832581 *PERFECT*
Here is the Java code.
/**
* Use best-first search to find an optimal sequence of operations for
* performing a type conversion with maximum fidelity.
*/
import java.util.*;
public class TypeConversion {
/**
* Define a type-conversion interface. It converts between to
* user-defined types and provides a measure of fidelity (accuracy)
* of the conversion.
*/
interface ITypeConverter<T, F> {
public T convert(F from);
public double fidelity();
// Could use reflection instead of handling this explicitly
public String getSourceType();
public String getTargetType();
}
/**
* Create a set of user-defined types.
*/
class HostName {
public String hostName;
public HostName(String hostName) {
this.hostName = hostName;
}
}
class DnsLookup {
public String ipAddress;
public DnsLookup(HostName hostName) {
this.ipAddress = doDNSLookup(hostName);
}
private String doDNSLookup(HostName hostName) {
return "127.0.0.1";
}
}
class DottedQuad {
public int[] quad = new int[4];
public DottedQuad(DnsLookup lookup) {
String[] split = lookup.ipAddress.split("\\.");
for ( int i = 0; i < 4; i++ )
quad[i] = Integer.parseInt( split[i] );
}
}
/**
* Define a set of conversion operations between the types. We only
* implement a minimal number for brevity, but this could be expanded.
*
* We start by creating some broad classes to differentiate among
* perfect, good and bad conversions.
*/
abstract class PerfectTypeConversion<T, F> implements ITypeConverter<T, F> {
public abstract T convert(F from);
public double fidelity() { return 1.0; }
}
abstract class GoodTypeConversion<T, F> implements ITypeConverter<T, F> {
public abstract T convert(F from);
public double fidelity() { return 0.8; }
}
abstract class BadTypeConversion<T, F> implements ITypeConverter<T, F> {
public abstract T convert(F from);
public double fidelity() { return 0.2; }
}
/**
* Concrete classes that do the actual conversions.
*/
class StringToHostName extends BadTypeConversion<HostName, String> {
public HostName convert(String from) { return new HostName(from); }
public String getSourceType() { return "string"; }
public String getTargetType() { return "host.name.string"; }
}
class HostNameToDnsLookup extends PerfectTypeConversion<DnsLookup, HostName> {
public DnsLookup convert(HostName from) { return new DnsLookup(from); }
public String getSourceType() { return "host.name.string"; }
public String getTargetType() { return "host.dns"; }
}
class DnsLookupToDottedQuad extends GoodTypeConversion<DottedQuad, DnsLookup> {
public DottedQuad convert(DnsLookup from) { return new DottedQuad(from); }
public String getSourceType() { return "host.dns"; }
public String getTargetType() { return "host.dotquad"; }
}
class DottedQuadToString extends PerfectTypeConversion<String, DottedQuad> {
public String convert(DottedQuad f) {
return f.quad[0] + "." + f.quad[1] + "." + f.quad[2] + "." + f.quad[3];
}
public String getSourceType() { return "host.dotquad"; }
public String getTargetType() { return "host.dotquad.string"; }
}
/**
* To find the best conversion sequence, we need to instantiate
* a list of converters.
*/
ITypeConverter<?,?> converters[] =
{
new StringToHostName(),
new HostNameToDnsLookup(),
new DnsLookupToDottedQuad(),
new DottedQuadToString()
};
Map<String, List<ITypeConverter<?,?>>> fromMap =
new HashMap<String, List<ITypeConverter<?,?>>>();
public void buildConversionMap()
{
for ( ITypeConverter<?,?> converter : converters )
{
String type = converter.getSourceType();
if ( !fromMap.containsKey( type )) {
fromMap.put( type, new ArrayList<ITypeConverter<?,?>>());
}
fromMap.get(type).add(converter);
}
}
public class Tuple implements Comparable<Tuple>
{
public String type;
public double cost;
public Tuple parent;
public Tuple(String type, double cost, Tuple parent) {
this.type = type;
this.cost = cost;
this.parent = parent;
}
public int compareTo(Tuple o) {
return Double.compare( cost, o.cost );
}
}
public Tuple findOptimalConversionSequence(String from, String target)
{
PriorityQueue<Tuple> queue = new PriorityQueue<Tuple>();
// Add a dummy start node to the queue
queue.add( new Tuple( from, 0.0, null ));
// Perform the search
while ( !queue.isEmpty() )
{
// Pop the most promising candidate from the list
Tuple tuple = queue.remove();
// If the type matches the target type, return
if ( tuple.type.equals( target ) )
return tuple;
// If we have reached a dead-end, backtrack
if ( !fromMap.containsKey( tuple.type ))
continue;
// Otherwise get all of the possible conversions to
// perform next and add their costs
for ( ITypeConverter<?,?> converter : fromMap.get( tuple.type ))
{
String type = converter.getTargetType();
double cost = tuple.cost + Math.log( converter.fidelity() );
queue.add( new Tuple( type, cost, tuple ));
}
}
// No solution
return null;
}
public static void convert(String from, String target)
{
TypeConversion tc = new TypeConversion();
// Build a conversion lookup table
tc.buildConversionMap();
// Find the tail of the optimal conversion chain.
Tuple tail = tc.findOptimalConversionSequence( from, target );
if ( tail == null ) {
System.out.println( "No conversion possible from " + from + " to " + target );
return;
}
// Reconstruct the conversion path (skip dummy node)
List<Tuple> solution = new ArrayList<Tuple>();
for ( ; tail.parent != null ; tail = tail.parent )
solution.add( tail );
Collections.reverse( solution );
StringBuilder sb = new StringBuilder();
Formatter formatter = new Formatter(sb);
sb.append( "The optimal conversion sequence from " + from + " to " + target + " is:\n" );
for ( Tuple tuple : solution ) {
formatter.format( "%20s to %20s, cost = %f", tuple.parent.type, tuple.type, tuple.cost );
if ( tuple.cost == tuple.parent.cost )
sb.append( " *PERFECT*");
sb.append( "\n" );
}
System.out.println( sb.toString() );
}
public static void main(String[] args)
{
// Run two tests
convert( "string", "integer" );
convert( "string", "host.dotquad.string" );
}
}
I have read lots of information about page caching and partial page caching in an MVC application. However, I would like to know how you would cache data.
In my scenario I will be using LINQ to Entities (entity framework). On the first call to GetNames (or whatever the method is) I want to grab the data from the database. I want to save the results in cache and on the second call to use the cached version if it exists.
Can anyone show an example of how this would work, where this should be implemented (the model?), and whether it would work?
I have seen this done in traditional ASP.NET apps, typically for very static data.
Here's a nice and simple cache helper class/service I use:
using System.Runtime.Caching;
public class InMemoryCache: ICacheService
{
public T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class
{
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(10));
}
return item;
}
}
interface ICacheService
{
T GetOrSet<T>(string cacheKey, Func<T> getItemCallback) where T : class;
}
Usage:
cacheProvider.GetOrSet("cache key", (delegate method if cache is empty));
The cache provider will check if there's anything stored under that cache key, and if there's not, it will call the delegate method to fetch the data and store it in the cache.
Example:
var products = cacheService.GetOrSet("catalog.products", () => productRepository.GetAll());
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
string[] names = Cache["names"] as string[];
if(names == null) //not in cache
{
names = DB.GetNames();
Cache["names"] = names;
}
return names;
}
A bit simplified but I guess that would work. This is not MVC specific and I have always used this method for caching data.
I'm referring to TT's post and suggest the following approach:
Reference the System.Web dll in your model and use System.Web.Caching.Cache
public string[] GetNames()
{
var noms = Cache["names"];
if(noms == null)
{
noms = DB.GetNames();
Cache["names"] = noms;
}
return ((string[])noms);
}
You should not return a value re-read from the cache, since you'll never know if at that specific moment it is still in the cache. Even if you inserted it in the statement before, it might already be gone or might never have been added to the cache - you just don't know.
So you add the data read from the database and return it directly, not re-reading from the cache.
For .NET 4.5+ framework
add reference: System.Runtime.Caching
add using statement:
using System.Runtime.Caching;
public string[] GetNames()
{
var noms = System.Runtime.Caching.MemoryCache.Default["names"];
if(noms == null)
{
noms = DB.GetNames();
System.Runtime.Caching.MemoryCache.Default["names"] = noms;
}
return ((string[])noms);
}
In the .NET Framework 3.5 and earlier versions, ASP.NET provided an in-memory cache implementation in the System.Web.Caching namespace. In previous versions of the .NET Framework, caching was available only in the System.Web namespace and therefore required a dependency on ASP.NET classes. In the .NET Framework 4, the System.Runtime.Caching namespace contains APIs that are designed for both Web and non-Web applications.
More info:
https://msdn.microsoft.com/en-us/library/dd997357(v=vs.110).aspx
https://learn.microsoft.com/en-us/dotnet/framework/performance/caching-in-net-framework-applications
Steve Smith did two great blog posts which demonstrate how to use his CachedRepository pattern in ASP.NET MVC. It uses the repository pattern effectively and allows you to get caching without having to change your existing code.
http://ardalis.com/Introducing-the-CachedRepository-Pattern
http://ardalis.com/building-a-cachedrepository-via-strategy-pattern
In these two posts he shows you how to set up this pattern and also explains why it is useful. By using this pattern you get caching without your existing code seeing any of the caching logic. Essentially you use the cached repository as if it were any other repository.
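As a rough illustration of the shape of the pattern (this is not Steve Smith's actual code; Product and IProductRepository are placeholder names):
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}
public interface IProductRepository
{
    IEnumerable<Product> List();
}
// Decorates the real repository; callers depend only on IProductRepository
// and never see any caching logic.
public class CachedProductRepository : IProductRepository
{
    private readonly IProductRepository _inner;
    public CachedProductRepository(IProductRepository inner)
    {
        _inner = inner;
    }
    public IEnumerable<Product> List()
    {
        var cached = MemoryCache.Default.Get("products") as IEnumerable<Product>;
        if (cached == null)
        {
            cached = _inner.List();
            MemoryCache.Default.Add("products", cached, DateTime.Now.AddMinutes(10));
        }
        return cached;
    }
}
The container is then configured to hand out CachedProductRepository wherever IProductRepository is requested, which is what keeps the caching invisible to the existing code.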
I have used it in this way and it works for me.
https://msdn.microsoft.com/en-us/library/system.web.caching.cache.add(v=vs.110).aspx
Parameter info for System.Web.Caching.Cache.Add.
public string GetInfo()
{
string name = string.Empty;
if(System.Web.HttpContext.Current.Cache["KeyName"] == null)
{
name = GetNameMethod();
System.Web.HttpContext.Current.Cache.Add("KeyName", name, null, DateTime.Now.AddMinutes(5), Cache.NoSlidingExpiration, CacheItemPriority.AboveNormal, null);
}
else
{
name = System.Web.HttpContext.Current.Cache["KeyName"] as string;
}
return name;
}
AppFabric Caching is a distributed, in-memory caching technique that stores data in key-value pairs using physical memory across multiple servers. AppFabric provides performance and scalability improvements for .NET Framework applications. See Concepts and Architecture.
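A rough usage sketch, assuming the AppFabric client assemblies (Microsoft.ApplicationServer.Caching) are referenced and a cache cluster is already configured; DB.GetNames() stands in for the same hypothetical data call used in the answers above:
using Microsoft.ApplicationServer.Caching;
public class NamesService
{
    // DataCacheFactory is relatively expensive to create, so keep a single instance.
    private static readonly DataCacheFactory Factory = new DataCacheFactory();
    public string[] GetNames()
    {
        DataCache cache = Factory.GetDefaultCache();
        string[] names = cache.Get("names") as string[];
        if (names == null)
        {
            names = DB.GetNames(); // hypothetical data access call
            cache.Put("names", names);
        }
        return names;
    }
}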
Extending @Hrvoje Hudo's answer...
Code:
using System;
using System.Runtime.Caching;
public class InMemoryCache : ICacheService
{
public TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class
{
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
public TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class
{
string cacheKey = string.Format(cacheKeyFormat, id);
TValue item = MemoryCache.Default.Get(cacheKey) as TValue;
if (item == null)
{
item = getItemCallback(id);
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.AddMinutes(durationInMinutes));
}
return item;
}
}
interface ICacheService
{
TValue Get<TValue>(string cacheKey, int durationInMinutes, Func<TValue> getItemCallback) where TValue : class;
TValue Get<TValue, TId>(string cacheKeyFormat, TId id, int durationInMinutes, Func<TId, TValue> getItemCallback) where TValue : class;
}
Examples
Single item caching (when each item is cached based on its ID because caching the entire catalog for the item type would be too intensive).
Product product = cache.Get("product_{0}", productId, 10, productData.getProductById);
Caching all of something
IEnumerable<Categories> categories = cache.Get("categories", 20, categoryData.getCategories);
Why TId
The second helper is especially nice because most data keys are not composite. Additional methods could be added if you use composite keys often. In this way you avoid doing all sorts of string concatenation or string.Format calls to get the key to pass to the cache helper. It also makes passing the data access method easier because you don't have to pass the ID into the wrapper method... the whole thing becomes very terse and consistent for the majority of use cases.
Here's an improvement to Hrvoje Hudo's answer. This implementation has a couple of key improvements:
Cache keys are created automatically based on the function to update data and the object passed in that specifies dependencies
Pass in time span for any cache duration
Uses a lock for thread safety
Note that this has a dependency on Newtonsoft.Json to serialize the dependsOn object, but that can be easily swapped out for any other serialization method.
ICache.cs
public interface ICache
{
T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class;
}
InMemoryCache.cs
using System;
using System.Reflection;
using System.Runtime.Caching;
using Newtonsoft.Json;
public class InMemoryCache : ICache
{
private static readonly object CacheLockObject = new object();
public T GetOrSet<T>(Func<T> getItemCallback, object dependsOn, TimeSpan duration) where T : class
{
string cacheKey = GetCacheKey(getItemCallback, dependsOn);
T item = MemoryCache.Default.Get(cacheKey) as T;
if (item == null)
{
lock (CacheLockObject)
{
item = getItemCallback();
MemoryCache.Default.Add(cacheKey, item, DateTime.Now.Add(duration));
}
}
return item;
}
private string GetCacheKey<T>(Func<T> itemCallback, object dependsOn) where T: class
{
var serializedDependants = JsonConvert.SerializeObject(dependsOn);
var methodType = itemCallback.GetType();
return methodType.FullName + serializedDependants;
}
}
Usage:
var order = _cache.GetOrSet(
() => _session.Set<Order>().SingleOrDefault(o => o.Id == orderId)
, new { id = orderId }
, new TimeSpan(0, 10, 0)
);
public sealed class CacheManager
{
private static volatile CacheManager instance;
private static object syncRoot = new Object();
private ObjectCache cache = null;
private CacheItemPolicy defaultCacheItemPolicy = null;
private CacheEntryRemovedCallback callback = null;
private bool allowCache = true;
private CacheManager()
{
cache = MemoryCache.Default;
callback = new CacheEntryRemovedCallback(this.CachedItemRemovedCallback);
defaultCacheItemPolicy = new CacheItemPolicy();
defaultCacheItemPolicy.AbsoluteExpiration = DateTime.Now.AddHours(1.0);
defaultCacheItemPolicy.RemovedCallback = callback;
allowCache = StringUtils.Str2Bool(ConfigurationManager.AppSettings["AllowCache"]);
}
public static CacheManager Instance
{
get
{
if (instance == null)
{
lock (syncRoot)
{
if (instance == null)
{
instance = new CacheManager();
}
}
}
return instance;
}
}
public IEnumerable GetCache(String Key)
{
if (Key == null || !allowCache)
{
return null;
}
try
{
String Key_ = Key;
if (cache.Contains(Key_))
{
return (IEnumerable)cache.Get(Key_);
}
else
{
return null;
}
}
catch (Exception)
{
return null;
}
}
public void ClearCache(string key)
{
AddCache(key, null);
}
public bool AddCache(String Key, IEnumerable data, CacheItemPolicy cacheItemPolicy = null)
{
if (!allowCache) return true;
try
{
if (Key == null)
{
return false;
}
if (cacheItemPolicy == null)
{
cacheItemPolicy = defaultCacheItemPolicy;
}
String Key_ = Key;
lock (Key_)
{
return cache.Add(Key_, data, cacheItemPolicy);
}
}
catch (Exception)
{
return false;
}
}
private void CachedItemRemovedCallback(CacheEntryRemovedArguments arguments)
{
String strLog = String.Concat("Reason: ", arguments.RemovedReason.ToString(), " | Key-Name: ", arguments.CacheItem.Key, " | Value-Object: ", arguments.CacheItem.Value.ToString());
LogManager.Instance.Info(strLog);
}
}
I use two classes. The first one is the cache core object:
public class Cacher<TValue>
where TValue : class
{
#region Properties
private Func<TValue> _init;
public string Key { get; private set; }
public TValue Value
{
get
{
var item = HttpRuntime.Cache.Get(Key) as TValue;
if (item == null)
{
item = _init();
HttpContext.Current.Cache.Insert(Key, item);
}
return item;
}
}
#endregion
#region Constructor
public Cacher(string key, Func<TValue> init)
{
Key = key;
_init = init;
}
#endregion
#region Methods
public void Refresh()
{
HttpRuntime.Cache.Remove(Key);
}
#endregion
}
The second one is a list of cache objects:
public static class Caches
{
static Caches()
{
Languages = new Cacher<IEnumerable<Language>>("Languages", () =>
{
using (var context = new WordsContext())
{
return context.Languages.ToList();
}
});
}
public static Cacher<IEnumerable<Language>> Languages { get; private set; }
}
I will say that implementing a singleton for this data-persistence issue can be a solution, in case you find the previous solutions too complicated.
public class GPDataDictionary
{
private Dictionary<string, object> configDictionary = new Dictionary<string, object>();
/// <summary>
/// Configuration values dictionary
/// </summary>
public Dictionary<string, object> ConfigDictionary
{
get { return configDictionary; }
}
private static GPDataDictionary instance;
public static GPDataDictionary Instance
{
get
{
if (instance == null)
{
instance = new GPDataDictionary();
}
return instance;
}
}
// private constructor
private GPDataDictionary() { }
} // singleton
HttpContext.Current.Cache.Insert("subjectlist", subjectlist);
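A possible usage sketch for the singleton above (subjectlist is just an illustrative value):
// Store once; any code in the process can read it back through the singleton.
GPDataDictionary.Instance.ConfigDictionary["subjectlist"] = subjectlist;
// Later, elsewhere:
object cached;
if (GPDataDictionary.Instance.ConfigDictionary.TryGetValue("subjectlist", out cached))
{
    // use cached here
}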
You can also try the caching built into ASP.NET MVC:
Add the following attribute to the controller method you'd like to cache:
[OutputCache(Duration=10)]
In this case the result of this action will be cached for 10 seconds.
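For example, on a hypothetical controller action:
using System;
using System.Web.Mvc;
public class TimeController : Controller
{
    // The response is cached for 10 seconds, so the timestamp only
    // changes when the cached copy expires.
    [OutputCache(Duration = 10)]
    public ActionResult CurrentTime()
    {
        return Content(DateTime.Now.ToString());
    }
}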
More on this here