This is my first time getting my feet wet with serialization. In fact, I am developing an Autodesk Revit add-in in C#.
Objective:
I need to record data to a new file on HDD, so that this file can be opened from another computer through Revit.
Procedure:
Collect all the required data in the main class.
Instantiate a serializable class and pass the data to it.
Save the data from the main class to a file.
Dispose stream and set serializable class to null.
Deserialize.
Do stuff in Revit based on the acquired data.
Problem:
- On the first run, the program works perfectly with no errors; everything is OK.
- When I press the button again to rerun the program, it fails at deserialization with this error:
[A]Cons_Coor.ThrDviewData cannot be cast to [B]Cons_Coor.ThrDviewData. Type A originates from 'Cons_Coor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\mostafa\AppData\Local\Temp\RevitAddins\Cons_Coor-Executing-20140820_224454_4113\Cons_Coor.dll'. Type B originates from 'Cons_Coor, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\mostafa\AppData\Local\Temp\RevitAddins\Cons_Coor-Executing-20140820_230011_0316\Cons_Coor.dll'.
A first chance exception of type 'System.NullReferenceException' occurred in Cons_Coor.dll
Main Class:
// main class
.....
.....
ThrDviewData v3ddata = new ThrDviewData(); // instantiate the serializable class
// collect all required data
string filename = UT_helper.conpaths(UT_constants.paths.Desktop) + "\\comment2" + DateTime.Today.ToShortTimeString().Replace(":", "") + ".a4h";
using (Stream stream = File.Open(filename, FileMode.Create))
{
BinaryFormatter bformatter = new BinaryFormatter();
Debug.WriteLine("Writting Data\r\n");
bformatter.Serialize(stream, v3ddata);
// the using block disposes the stream; an explicit Close is redundant
}
v3ddata = null;
using (Stream stream = File.Open(filename, FileMode.Open))
{
BinaryFormatter bformatter = new BinaryFormatter();
Debug.WriteLine("Reading data from file");
try
{
v3ddata = (ThrDviewData)bformatter.Deserialize(stream);
}
catch (Exception ex)
{
Debug.Write(ex.Message);
// File.Delete(filename);
}
// the using block disposes the stream; an explicit Close is redundant
}
....
....
// do some stuff with the acquired data
Serializable class:
// class declaration reconstructed from the members below: BinaryFormatter
// requires [Serializable], and the constructor/GetObjectData pair is the
// ISerializable pattern
[Serializable]
public class ThrDviewData : ISerializable, IDisposable
{
public string myvariables;
public ThrDviewData()
{
myvariables = null;
}
public ThrDviewData(SerializationInfo info, StreamingContext ctxt)
{
myvariables= (String)info.GetValue("name", typeof(string));
}
public void GetObjectData(SerializationInfo info, StreamingContext context)
{
info.AddValue("name", myvariables);
}
// Public implementation of Dispose pattern callable by consumers.
public void Dispose()
{
GC.SuppressFinalize(this);
}
}
So, any hints?
The Binary Serializer you're using is very tightly tied to the class you're exporting.
When you're using the Revit Addin manager to load your addin, it's making a dynamic copy of your assembly (so that you can come back and load it again while you're in the same session). When it does that, you wind up with duplicate types that have the same name (ThrDviewData). When you try to load a previously serialized binary that was from a different copy, it is still trying to map to the original type (not the new copy of the type).
Your choices are:
1. Don't use the Addin manager, just statically use your addin.
2. Use something other than the Binary serializer that is not so tightly coupled to the types (like an XML or JSON serializer - as you tried).
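For example, a minimal sketch of option 2 using XmlSerializer (reusing the filename and v3ddata from the question's code; XmlSerializer resolves the type against the currently loaded assembly rather than an assembly identity recorded in the file, so it survives the Addin manager's copies):

using System.IO;
using System.Xml.Serialization;

// write
using (Stream stream = File.Open(filename, FileMode.Create))
{
    var serializer = new XmlSerializer(typeof(ThrDviewData));
    serializer.Serialize(stream, v3ddata);
}
// read: the cast targets whichever copy of ThrDviewData is loaded right now
using (Stream stream = File.Open(filename, FileMode.Open))
{
    var serializer = new XmlSerializer(typeof(ThrDviewData));
    v3ddata = (ThrDviewData)serializer.Deserialize(stream);
}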
That's what happened...
Related
I need to dynamically create controllers in an ASP.NET Core 6 MVC application.
I found a way to somewhat achieve this, but not quite.
I'm able to dynamically add my controller, but somehow it is only reflected on the second request.
So here is what I do: first I initialize my console app as follows:
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Mvc.ApplicationParts;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.AspNetCore.Mvc.Infrastructure;
namespace DynamicControllerServer
{
internal class Program
{
static void Main(string[] args)
{
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddControllers();
ApplicationPartManager partManager = builder.Services.AddMvc().PartManager;
// Store the PartManager in my middleware to be able to add controllers after initialization is done
MyMiddleware._partManager = partManager;
// Register controller change event
builder.Services.AddSingleton<IActionDescriptorChangeProvider>(MyActionDescriptorChangeProvider.Instance);
builder.Services.AddSingleton(MyActionDescriptorChangeProvider.Instance);
var app = builder.Build();
app.UseAuthorization();
app.MapControllers();
// Add middleware which is responsible for catching the request and dynamically adding the missing controller
app.UseMiddleware<MyMiddleware>();
app.RunAsync();
Console.WriteLine("Server has been started successfully ...");
Console.ReadLine();
}
}
}
Then my middleware looks like this: it basically detects whether the "dynamic" keyword is in the URL; if so, it loads the assembly containing the DynamicController:
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.ApplicationParts;
using System;
using System.Reflection;
namespace DynamicControllerServer
{
public class MyMiddleware
{
public RequestDelegate _next { get; }
private string dllName = "DynamicController1.dll";
public static ApplicationPartManager _partManager = null;
public MyMiddleware(RequestDelegate next)
{
_next = next;
}
public async Task Invoke(HttpContext httpContext)
{
if (httpContext.Request.Path.HasValue)
{
var queryParams = httpContext.Request.Path.Value;
if(httpContext.Request.Path.Value.Contains("api/dynamic"))
{
// Dynamically load assembly
Assembly assembly = Assembly.LoadFrom(@"C:\Temp\" + dllName);
// Add controller to the application
AssemblyPart _part = new AssemblyPart(assembly);
_partManager.ApplicationParts.Add(_part);
// Notify change
MyActionDescriptorChangeProvider.Instance.HasChanged = true;
MyActionDescriptorChangeProvider.Instance.TokenSource.Cancel();
}
}
await _next(httpContext); // calling next middleware
}
}
}
The ActionDescriptorChange provider looks like this:
using Microsoft.AspNetCore.Mvc.Infrastructure;
using Microsoft.Extensions.Primitives;
namespace DynamicControllerServer
{
public class MyActionDescriptorChangeProvider : IActionDescriptorChangeProvider
{
public static MyActionDescriptorChangeProvider Instance { get; } = new MyActionDescriptorChangeProvider();
public CancellationTokenSource TokenSource { get; private set; }
public bool HasChanged { get; set; }
public IChangeToken GetChangeToken()
{
TokenSource = new CancellationTokenSource();
return new CancellationChangeToken(TokenSource.Token);
}
}
}
The dynamic controller is in a separate DLL and is very simple:
using Microsoft.AspNetCore.Mvc;
namespace DotNotSelfHostedOwin
{
[Route("api/[controller]")]
[ApiController]
public class DynamicController : ControllerBase
{
public string[] Get()
{
return new string[] { "dynamic1", "dynamic1", DateTime.Now.ToString() };
}
}
}
Here are the packages used in that project:
<PackageReference Include="Microsoft.AspNetCore" Version="2.2.0" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="6.2.3" />
This works "almost" fine ... when first request is made to:
https://localhost:5001/api/dynamic
then it goes in the middleware and load the assembly, but returns a 404 error.
Then second request will actually work as expected:
Second request returns the expected result:
I must doing it wrong and probably my middleware is executed too late in the flow to reflect the dynamic controller right away.
The question is: what is the proper way to achieve this?
My second question: say the external DLL holding our dynamic controller is updated. How can I reload that controller to get the new definition?
Any help would be appreciated
Thanks in advance
Nick
Here is the answer to my own question in case it can help somebody out there.
It seems building and loading the controller from the middleware will always fail on the first call.
This makes sense, since we are already in the HTTP pipeline.
I ended up doing the same thing from outside the middleware.
Basically, my application detects a change in the controller assembly, unloads the original assembly, and loads the new one.
You cannot use the default context, since it will not allow reloading a different DLL for the same assembly:
var assembly = AssemblyLoadContext.Default.LoadFromAssemblyPath(assemblyPath); // produces an exception on updates
To be able to reload a new DLL for the same assembly, I'm loading each controller in its own assembly context. To do that, you need to create your own class deriving from AssemblyLoadContext that manages assembly loading:
public class MyOwnContext: AssemblyLoadContext
{
// You can find lots of examples on the net
}
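A minimal sketch of such a context (assuming .NET Core 3.0+; AssemblyDependencyResolver resolves the plugin's own dependencies from its folder):

using System.Reflection;
using System.Runtime.Loader;

public class MyOwnContext : AssemblyLoadContext
{
    private readonly AssemblyDependencyResolver _resolver;

    // isCollectible: true is what makes Unload() possible later
    public MyOwnContext(string pluginPath) : base(isCollectible: true)
    {
        _resolver = new AssemblyDependencyResolver(pluginPath);
    }

    protected override Assembly Load(AssemblyName assemblyName)
    {
        // resolve plugin dependencies from the plugin folder;
        // returning null falls back to the default context
        string path = _resolver.ResolveAssemblyToPath(assemblyName);
        return path != null ? LoadFromAssemblyPath(path) : null;
    }
}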
When you want to unload the assembly, you just unload the context:
MyOwnContextObj.Unload();
Now, to add or remove the controller on the fly, you need to keep a reference to the PartManager and the ApplicationPart.
To add a controller:
ApplicationPart part = new AssemblyPart(assembly);
_PartManager.ApplicationParts.Add(part);
To remove:
_PartManager.ApplicationParts.Remove(part);
Of course, once done, you still need the following piece of code to acknowledge the change:
MyActionDescriptorChangeProvider.Instance.HasChanged = true;
MyActionDescriptorChangeProvider.Instance.TokenSource.Cancel();
That allows updating controllers on the fly with no interruption of service.
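Putting it together, a reload helper might look like this (a sketch; _PartManager, _currentPart, and _currentContext are assumed fields holding the references mentioned above):

public void ReloadController(string assemblyPath)
{
    // drop the old part and unload its context, if any
    if (_currentPart != null)
    {
        _PartManager.ApplicationParts.Remove(_currentPart);
        _currentContext.Unload();
    }
    // load the new DLL in a fresh collectible context
    _currentContext = new MyOwnContext(assemblyPath);
    var assembly = _currentContext.LoadFromAssemblyPath(assemblyPath);
    _currentPart = new AssemblyPart(assembly);
    _PartManager.ApplicationParts.Add(_currentPart);
    // tell MVC the action descriptors changed
    MyActionDescriptorChangeProvider.Instance.HasChanged = true;
    MyActionDescriptorChangeProvider.Instance.TokenSource.Cancel();
}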
Hope this helps people out there.
I have done a similar solution (used for managing web app plugins) with some differences that may help you:
List all the external assemblies in a config file or appsettings.json, so all the DLL names and/or addresses are known at startup (see the configuration-reading sketch after the code below).
Instead of registering controllers when they are called, register them at program.cs/startup:
// For each dllName from the settings file:
var assembly = Assembly.LoadFrom(@"Base address" + dllNameLoadedFromSettings);
var part = new AssemblyPart(assembly);
services.AddControllersWithViews()
.ConfigureApplicationPartManager(apm => apm.ApplicationParts.Add(part));
// Any other configuration based on the usage you want
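For completeness, reading the DLL names from appsettings.json might look like this (a sketch; the "PluginAssemblies" key and the configuration variable are assumptions, not part of the original answer):

// appsettings.json (assumed shape): "PluginAssemblies": [ "DynamicController1.dll" ]
var dllNames = configuration.GetSection("PluginAssemblies").Get<string[]>() ?? Array.Empty<string>();
services.AddControllersWithViews().ConfigureApplicationPartManager(apm =>
{
    foreach (var dllName in dllNames)
    {
        var assembly = Assembly.LoadFrom(Path.Combine(AppContext.BaseDirectory, dllName));
        apm.ApplicationParts.Add(new AssemblyPart(assembly));
    }
});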
Second: I usually keep plugin DLLs in the bin folder, so when using IIS, as soon as a DLL file in bin changes, the upper-level app is automatically restarted. So your second question would be solved too.
This is my scenario: we are building a routing system using Neo4j and the spatial plugin. We start from an OSM file, read it, and import its nodes and relationships into our graph (a custom graph model).
Now, if we don't use Neo4j's batch inserter, importing a compressed OSM file (compressed size around 140MB, uncompressed around 2GB) takes around 3 days on a dedicated server with the following characteristics: CentOS 6.5 64-bit, quad core, 8GB RAM. Please note that most of the time is spent on Neo4j node and relationship creation; in fact, if we read the same file without doing anything with Neo4j, it is read in around 7 minutes (I'm sure about this because in our process we first read the file to store the correct OSM node IDs and then read it again to create the Neo4j graph).
Obviously we need to improve the import process, so we are trying to use the BatchInserter. So far, so good (I still need to measure how much faster the BatchInserter is, but I expect it to be much faster); the first thing I did was try the batch inserter in a simple test case (very similar to our code, but without modifying our code directly).
I list my software versions:
Neo4j: 2.0.2
Neo4jSpatial: 0.13-neo4j-2.0.1
Neo4jGraphCollections: 0.7.1-neo4j-2.0.1
Osmosis: 0.43.1
Since I'm using osmosis in order to read the osm file, I wrote the following Sink implementation:
public class BatchInserterSinkTest implements Sink
{
public static final Map<String, String> NEO4J_CFG = new HashMap<String, String>();
private static File basePath = new File("/home/angelo/Scrivania/neo4j");
private static File dbPath = new File(basePath, "db");
private GraphDatabaseService graphDb;
private BatchInserter batchInserter;
// private BatchInserterIndexProvider batchIndexService;
private SpatialDatabaseService spatialDb;
private SimplePointLayer spl;
static
{
NEO4J_CFG.put( "neostore.nodestore.db.mapped_memory", "100M" );
NEO4J_CFG.put( "neostore.relationshipstore.db.mapped_memory", "300M" );
NEO4J_CFG.put( "neostore.propertystore.db.mapped_memory", "400M" );
NEO4J_CFG.put( "neostore.propertystore.db.strings.mapped_memory", "800M" );
NEO4J_CFG.put( "neostore.propertystore.db.arrays.mapped_memory", "10M" );
NEO4J_CFG.put( "dump_configuration", "true" );
}
@Override
public void initialize(Map<String, Object> arg0)
{
batchInserter = BatchInserters.inserter(dbPath.getAbsolutePath(), NEO4J_CFG);
graphDb = new SpatialBatchGraphDatabaseService(batchInserter);
spatialDb = new SpatialDatabaseService(graphDb);
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
//batchIndexService = new LuceneBatchInserterIndexProvider(batchInserter);
}
@Override
public void complete()
{
// TODO Auto-generated method stub
}
@Override
public void release()
{
// TODO Auto-generated method stub
}
@Override
public void process(EntityContainer ec)
{
Entity entity = ec.getEntity();
if (entity instanceof Node) {
Node osmNodo = (Node)entity;
org.neo4j.graphdb.Node graphNode = graphDb.createNode();
graphNode.setProperty("osmId", osmNodo.getId());
graphNode.setProperty("latitudine", osmNodo.getLatitude());
graphNode.setProperty("longitudine", osmNodo.getLongitude());
spl.add(graphNode);
} else if (entity instanceof Way) {
//do something with the way
} else if (entity instanceof Relation) {
//do something with the relation
}
}
}
Then I wrote the following test case:
public class BatchInserterTest
{
private static final Log logger = LogFactory.getLog(BatchInserterTest.class.getName());
@Test
public void batchInserter()
{
File file = new File("/home/angelo/Scrivania/MilanoPiccolo.osm");
try
{
boolean pbf = false;
CompressionMethod compression = CompressionMethod.None;
if (file.getName().endsWith(".pbf"))
{
pbf = true;
}
else if (file.getName().endsWith(".gz"))
{
compression = CompressionMethod.GZip;
}
else if (file.getName().endsWith(".bz2"))
{
compression = CompressionMethod.BZip2;
}
RunnableSource reader;
if (pbf)
{
reader = new crosby.binary.osmosis.OsmosisReader(new FileInputStream(file));
}
else
{
reader = new XmlReader(file, false, compression);
}
reader.setSink(new BatchInserterSinkTest());
Thread readerThread = new Thread(reader);
readerThread.start();
while (readerThread.isAlive())
{
try
{
readerThread.join();
}
catch (InterruptedException e)
{
/* do nothing */
}
}
}
catch (Exception e)
{
logger.error("Errore nella creazione di neo4j con batchInserter", e);
}
}
}
By executing this code, I get this exception:
Exception in thread "Thread-1" java.lang.ClassCastException: org.neo4j.unsafe.batchinsert.SpatialBatchGraphDatabaseService cannot be cast to org.neo4j.kernel.GraphDatabaseAPI
at org.neo4j.cypher.ExecutionEngine.<init>(ExecutionEngine.scala:113)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:53)
at org.neo4j.cypher.javacompat.ExecutionEngine.<init>(ExecutionEngine.java:43)
at org.neo4j.collections.graphdb.ReferenceNodes.getReferenceNode(ReferenceNodes.java:60)
at org.neo4j.gis.spatial.SpatialDatabaseService.getSpatialRoot(SpatialDatabaseService.java:76)
at org.neo4j.gis.spatial.SpatialDatabaseService.getLayer(SpatialDatabaseService.java:108)
at org.neo4j.gis.spatial.SpatialDatabaseService.containsLayer(SpatialDatabaseService.java:253)
at org.neo4j.gis.spatial.SpatialDatabaseService.createLayer(SpatialDatabaseService.java:282)
at org.neo4j.gis.spatial.SpatialDatabaseService.createSimplePointLayer(SpatialDatabaseService.java:266)
at it.eng.pinf.graph.batch.test.BatchInserterSinkTest.initialize(BatchInserterSinkTest.java:46)
at org.openstreetmap.osmosis.xml.v0_6.XmlReader.run(XmlReader.java:95)
at java.lang.Thread.run(Thread.java:744)
This is related to this code:
spl = spatialDb.createSimplePointLayer("testBatch", "latitudine", "longitudine");
So now I'm wondering: how can I use the BatchInserter for my case? I have to add the created nodes to the SimplePointLayer, so how can I create it using the BatchInserter graph DB service?
Is there any simple sample?
Any tip is really appreciated.
cheers
Angelo
The OSMImporter class in the code has an example of using the batch inserter to import OSM data. The main thing is that the batch inserter is not really supported by neo4j spatial, so you need to do a few things manually. If you look at the class OSMImporter.OSMBatchWriter, you will see how it does things. It is not using the SimplePointLayer at all, since that does not support the batch inserter. It is creating the graph structure it wants directly. The simple point layer is quite simple, certainly much simpler than the OSM model created by the code I'm referencing, so I think you should be able to write a batch-inserter compatible version yourself without too much trouble.
What I would recommend is that you create the layer and nodes using the batch inserter to create the correct graph structure, then switch to the normal embedded API and use that to iterate through the nodes and add them to the spatial index.
I am developing an application using Breezejs, EF 4.4, MVC4, WebAPI, and OData. When Breeze makes a call to the Metadata action method, the result is null. We use a code-first approach and therefore do not have an EDMX file, so I think the error comes about when Breeze tries to "re-create" the EDMX in some capacity and can't. See below for the source code where the try/catch produces an exception.
Example of the runtime code where execution fails:
// ~/odata/Analysis/Metadata
[HttpGet]
public string Metadata()
{
return _contextProvider.Metadata();
}
I have managed to debug my project against the Breezejs source from the repository on GitHub. The exception occurs on the line "EdmxWriter.WriteEdmx(dbContext, xwriter);". I'm not sure what the issue is; the contents of the "WriteEdmx" method are included below as well.
Does anyone have any idea what is going on? The only thing I can think of is that the context I am using inherits from a base context which in turn inherits from DbContext; other than that I am completely puzzled and at a standstill. Note: I have read that inheritance is not yet supported in Breeze, but I'm not sure if that includes context classes; in a separate test case I used a context that inherited directly from DbContext and still received the same error.
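To rule Breeze out, the same EdmxWriter call can be run standalone against the context (a sketch; MyContext stands in for the actual context class):

using System.Data.Entity.Infrastructure;
using System.IO;
using System.Xml;

// if this throws the same NotSupportedException, the problem is in
// EF's EDMX generation for this context, not in Breeze itself
using (var context = new MyContext())
using (var swriter = new StringWriter())
using (var xwriter = new XmlTextWriter(swriter))
{
    EdmxWriter.WriteEdmx(context, xwriter);
    Console.WriteLine(swriter.ToString().Length); // EDMX XML produced
}

For reference, here is the Breeze method where the exception surfaces: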
private static String GetMetadataFromDbContext(Object context) {
var dbContext = (DbContext) context;
XElement xele;
try {
using (var swriter = new StringWriter()) {
using (var xwriter = new XmlTextWriter(swriter)) {
EdmxWriter.WriteEdmx(dbContext, xwriter);
xele = XElement.Parse(swriter.ToString());
}
}
} catch (Exception e) {
if (e is NotSupportedException) {
// DbContext that fails on WriteEdmx is likely a DataBase first DbContext.
return GetMetadataFromObjectContext(dbContext);
} else {
throw;
}
}
var ns = xele.Name.Namespace;
var conceptualEle = xele.Descendants(ns + "ConceptualModels").First();
var schemaEle = conceptualEle.Elements().First(ele => ele.Name.LocalName == "Schema");
var xDoc = XDocument.Load(schemaEle.CreateReader());
var objectContext = ((IObjectContextAdapter)dbContext).ObjectContext;
// This is needed because the raw edmx has a different namespace than the CLR types that it references.
xDoc = UpdateCSpaceOSpaceMapping(xDoc, objectContext);
return XDocToJson(xDoc);
}
"WriteEdmx"
// Summary:
// Uses Code First with the given context and writes the resulting Entity Data
// Model to the given writer in EDMX form. This method can only be used with
// context instances that use Code First and create the model internally. The
// method cannot be used for contexts created using Database First or Model
// First, for contexts created using a pre-existing System.Data.Objects.ObjectContext,
// or for contexts created using a pre-existing System.Data.Entity.Infrastructure.DbCompiledModel.
//
// Parameters:
// context:
// The context.
//
// writer:
// The writer.
[SuppressMessage("Microsoft.Naming", "CA1704:IdentifiersShouldBeSpelledCorrectly", MessageId = "Edmx")]
public static void WriteEdmx(DbContext context, XmlWriter writer);
UPDATE: a downgrade from EF 4.4 to EF 4.1 seems to have solved this problem. An upgrade to EF 5.0 or the nightly build might also do the same.
That's the best I can do regarding this obscure issue. #baffled
I need to access the encoded stream in OpenRasta before it gets sent to the client. I have tried using a PipelineContributor, registering it before KnownStages.IEnd, after KnownStages.IOperationExecution, and after KnownStages.AfterResponseCoding, but in all instances the context.Response.Entity stream is null or empty.
Anyone know how I can do this?
Also, I want to find out the requested codec fairly early on, yet when I register after KnownStages.ICodecRequestSelection, it returns null. I just get the feeling I am missing something about these pipeline contributors.
Without writing your own Codec (which, by the way, is really easy), I'm unaware of a way to get the actual stream of bytes sent to the browser. The way I'm doing this is serializing the ICommunicationContext.Response.Entity before the IResponseCoding known stage. Pseudo code:
class ResponseLogger : IPipelineContributor
{
public void Initialize(IPipeline pipelineRunner)
{
pipelineRunner
.Notify(LogResponse)
.Before<KnownStages.IResponseCoding>();
}
PipelineContinuation LogResponse(ICommunicationContext context)
{
string content = Serialize(context.Response.Entity);
// log or inspect the serialized content here as needed
return PipelineContinuation.Continue;
}
string Serialize(IHttpEntity entity)
{
if ((entity == null) || (entity.Instance == null))
return String.Empty;
try
{
using (var writer = new StringWriter())
{
using (var xmlWriter = XmlWriter.Create(writer))
{
Type entityType = entity.Instance.GetType();
XmlSerializer serializer = new XmlSerializer(entityType);
serializer.Serialize(xmlWriter, entity.Instance);
}
return writer.ToString();
}
}
catch (Exception exception)
{
return exception.ToString();
}
}
}
This ResponseLogger is registered the usual way:
ResourceSpace.Uses.PipelineContributor<ResponseLogger>();
As mentioned, this doesn't necessarily give you the exact stream of bytes sent to the browser, but it is close enough for my needs, since the stream of bytes sent to the browser is basically just the same serialized entity.
By writing your own codec, you can with no more than 100 lines of code tap into the IMediaTypeWriter.WriteTo() method, which I would guess is the last line of defense before your bytes are transferred into the cloud. Within it, you basically just do something simple like this:
public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
using (var writer = XmlWriter.Create(response.Stream))
{
XmlSerializer serializer = new XmlSerializer(entity.GetType());
serializer.Serialize(writer, entity);
}
}
If, instead of writing directly to the IHttpEntity.Stream, you write to a StringWriter and call ToString() on it, you'll have the serialized entity, which you can log and do whatever you want with before writing it to the output stream.
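That variant of WriteTo might look like this (a sketch; Log is a hypothetical hook, not an OpenRasta API):

public void WriteTo(object entity, IHttpEntity response, string[] parameters)
{
    // serialize to a string first so the payload can be logged
    // before anything is written to the response stream
    string payload;
    using (var stringWriter = new StringWriter())
    {
        using (var xmlWriter = XmlWriter.Create(stringWriter))
        {
            new XmlSerializer(entity.GetType()).Serialize(xmlWriter, entity);
        }
        payload = stringWriter.ToString();
    }
    Log(payload); // hypothetical logging hook

    // write the already-serialized payload without closing the response stream
    var bytes = System.Text.Encoding.UTF8.GetBytes(payload);
    response.Stream.Write(bytes, 0, bytes.Length);
}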
While all of the above example code is based on XML serialization and deserialization, the same principle should apply no matter what format your application is using.
Because MonoTouch compiles to native code, it has some limitations; for example, dynamic invocation is not allowed.
But I have a lot of classes in .NET where I use ChannelFactory to invoke WCF services dynamically: new ChannelFactory(myBinding, myEndpoint). In MonoTouch I should use slsvcutil to generate the WCF proxy classes, but slsvcutil generates a lot of unnecessary extra code (huge) and makes consumers difficult to unit test, due to the high coupling with the WCF infrastructure through the ClientBase class.
Is there a better solution besides ChannelFactory? I would rather write the code manually and have more control over how services are invoked, as with ChannelFactory.
==========
ChannelFactory<IMyContract> factory = new ChannelFactory<IMyContract>(binding, endpointAddress);
return factory.CreateChannel();
// ==> It throws an exception: MonoTouch does not support dynamic proxy code generation. Override this method or its caller to return specific client proxy instance
ChannelFactory<T> has a virtual method CreateChannel(). If this is not overridden, it uses dynamic code generation, which fails on MonoTouch.
The solution is to override it and provide your own compile-time implementation.
Below is an old service implementation of mine that at least used to work on MonoTouch. I split it into two partial classes: the first is linked in all builds, the second only in the iOS builds (allowing the dynamic generation mechanism to still work on Windows).
I've stripped it down to only contain 1 service call.
TransactionService.cs:
public partial class TransactionService : ClientBase<IConsumerService>, IConsumerService
{
public TransactionService()
{
}
public TransactionService(string endpointConfigurationName) :
base(endpointConfigurationName)
{
}
public TransactionService(string endpointConfigurationName, string remoteAddress) :
base(endpointConfigurationName, remoteAddress)
{
}
public TransactionService(string endpointConfigurationName, EndpointAddress remoteAddress) :
base(endpointConfigurationName, remoteAddress)
{
}
public TransactionService(Binding binding, EndpointAddress remoteAddress) :
base(binding, remoteAddress)
{
}
public AccountBalanceResponse GetAccountBalance( AccountBalanceQuery query )
{
return Channel.GetAccountBalance( query );
}
}
TransactionService.iOS.cs (contains the ConsumerServiceClientChannel, which executes the calls via reflection):
public partial class TransactionService
{
protected override IConsumerService CreateChannel()
{
return new ConsumerServiceClientChannel(this);
}
private class ConsumerServiceClientChannel : ChannelBase<IConsumerService>, IConsumerService
{
public ConsumerServiceClientChannel(System.ServiceModel.ClientBase<IConsumerService> client) :
base(client)
{
}
// Sync version
public AccountBalanceResponse GetAccountBalance(AccountBalanceQuery query)
{
object[] _args = new object[1];
_args[0] = query;
return (AccountBalanceResponse)base.Invoke("GetAccountBalance", _args);
}
// Async version
public IAsyncResult BeginGetAccountBalance(AccountBalanceQuery query, AsyncCallback callback, object asyncState )
{
object[] _args = new object[1];
_args[0] = query;
return (IAsyncResult)base.BeginInvoke("GetAccountBalance", _args, callback, asyncState );
}
public AccountBalanceResponse EndGetAccountBalance(IAsyncResult asyncResult)
{
object[] _args = new object[0];
return (AccountBalanceResponse)base.EndInvoke("GetAccountBalance", _args, asyncResult);
}
}
}
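Usage is then identical on both platforms; on iOS the overridden CreateChannel() supplies the reflection-based channel. A sketch (the binding, address, and query construction are assumptions):

var binding = new BasicHttpBinding();
var address = new EndpointAddress("https://example.com/ConsumerService.svc"); // assumed endpoint
var service = new TransactionService(binding, address);
AccountBalanceResponse balance = service.GetAccountBalance(new AccountBalanceQuery());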
EDIT: I just tested this with the latest MT (5.2) - it no longer needs all the extra boilerplate I had in there before, just the CreateChannel() override. I've cleaned up the sample code to match.
EDIT2: I added an async method implementation.
I think you might be confusing terms here - ChannelFactory is a generic type, not a dynamic one.
According to the MonoTouch documentation, although there are limitations to the generics support in MonoTouch, ChannelFactory should be okay here.
Have you tried using ChannelFactory?