I am having an issue returning a large list of objects from an activity function to an orchestrator function.
I have a function that downloads a 180 MB file and parses it. Parsing produces a list of over 962K objects. Each object has about 70 properties, but only about 20% of them are populated.
When I run the function, the code successfully downloads and parses the file into the list, but when the list is returned, an exception is raised with the following information:
Exception: "Exception while executing function: #######"
- Source: "System.Private.CoreLib"
Inner exception: "Error while handling parameter $return after function returned."
- Source: "Microsoft.Azure.WebJobs.Host"
Inner / Inner exception: "Exception of type 'System.OutOfMemoryException' was thrown."
- Source: "System.Private.CoreLib"
The innermost exception shows that the call generating the out-of-memory error comes from the Newtonsoft.Json package. I am including the full stack trace for this exception at the end.
I understand that I could serialize the list of objects, store it in an Azure blob, and then pick it up again in the next function that needs to process it, but I thought the idea behind Durable Functions was to avoid all this and maintain a leaner workflow. Also, I based the design on the "Large Message Support #26" GitHub issue, which states that the Durable Functions extension will automatically store the function payload in a blob if its size exceeds the queue message limit (see: https://github.com/Azure/azure-functions-durable-extension/issues/26).
Is there anything I need to do to get this working?
The code is pretty simple:
[FunctionName("GetDataFromSource")]
public static IEnumerable<DataDetail> GetDataFromSource([ActivityTrigger]ISource source, ILogger logger)
{
try
{
string importSettings = Environment.GetEnvironmentVariable(source.SettingsKey);
if (string.IsNullOrWhiteSpace(importSettings))
{
logger.LogError($"No settings key information found for the {source.SourceId} data source"); }
else
{
List<DataDetail> _Data = source.GetVinData().Distinct().ToList();
return vinData;
}
}
catch (Exception ex)
{
logger.LogCritical($"Error processing the {source.SourceId} Vin data source. *** Exception: {ex}");
}
return new List<DataDetail>();
}
This is the stack trace for the most inner exception:
at System.Text.StringBuilder.ExpandByABlock(Int32 minBlockCharCount)
at System.Text.StringBuilder.Append(Char value, Int32 repeatCount)
at System.Text.StringBuilder.Append(Char value)
at System.IO.StringWriter.Write(Char value)
at Newtonsoft.Json.JsonTextWriter.WritePropertyName(String name, Boolean escape)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeObject(JsonWriter writer, Object value, JsonObjectContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeList(JsonWriter writer, IEnumerable values, JsonArrayContract contract, JsonProperty member, JsonContainerContract collectionContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.SerializeValue(JsonWriter writer, Object value, JsonContract valueContract, JsonProperty member, JsonContainerContract containerContract, JsonProperty containerProperty)
at Newtonsoft.Json.Serialization.JsonSerializerInternalWriter.Serialize(JsonWriter jsonWriter, Object value, Type objectType)
at Newtonsoft.Json.JsonSerializer.SerializeInternal(JsonWriter jsonWriter, Object value, Type objectType)
at DurableTask.Core.Serializing.JsonDataConverter.Serialize(Object value, Boolean formatted)
at Microsoft.Azure.WebJobs.Extensions.DurableTask.MessagePayloadDataConverter.Serialize(Object value, Int32 maxSizeInKB) in C:\projects\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\MessagePayloadDataConverter.cs:line 55
at Microsoft.Azure.WebJobs.Extensions.DurableTask.MessagePayloadDataConverter.Serialize(Object value) in C:\projects\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\MessagePayloadDataConverter.cs:line 43
at Microsoft.Azure.WebJobs.DurableActivityContext.SetOutput(Object output) in C:\projects\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\DurableActivityContext.cs:line 136
at Microsoft.Azure.WebJobs.Extensions.DurableTask.ActivityTriggerAttributeBindingProvider.ActivityTriggerBinding.ActivityTriggerReturnValueBinder.SetValueAsync(Object value, CancellationToken cancellationToken) in C:\projects\azure-functions-durable-extension\src\WebJobs.Extensions.DurableTask\Bindings\ActivityTriggerAttributeBindingProvider.cs:line 213
at Microsoft.Azure.WebJobs.Host.Executors.FunctionExecutor.ParameterHelper.ProcessOutputParameters(CancellationToken cancellationToken) in C:\projects\azure-webjobs-sdk-rqm4t\src\Microsoft.Azure.WebJobs.Host\Executors\FunctionExecutor.cs:line 972
I came across a similar issue when working with Durable Functions.
There are a couple of solutions / workarounds to this:
As you say, you could store the function payload in blob storage and retrieve it when you need it. This works, but there is a performance hit, and retrieval can take a while depending on how big your file is. A sketch of this approach is below.
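Here is a minimal sketch of that idea, assuming the classic WindowsAzure.Storage SDK, a hypothetical "vin-data" container, and the AzureWebJobsStorage connection string; the activity streams the JSON straight into the blob and hands only the blob name back to the orchestrator:
using System;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;
using Newtonsoft.Json;

public static class GetDataFromSourceToBlob
{
    [FunctionName("GetDataFromSourceToBlob")]
    public static async Task<string> Run([ActivityTrigger] ISource source)
    {
        var account = CloudStorageAccount.Parse(
            Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
        CloudBlobContainer container = account.CreateCloudBlobClient().GetContainerReference("vin-data");
        await container.CreateIfNotExistsAsync();

        CloudBlockBlob blob = container.GetBlockBlobReference($"{source.SourceId}.json");

        // Serialize directly into the blob stream so the full JSON string is never held in memory.
        using (var blobStream = await blob.OpenWriteAsync())
        using (var writer = new StreamWriter(blobStream))
        using (var json = new JsonTextWriter(writer))
        {
            JsonSerializer.CreateDefault().Serialize(json, source.GetVinData().Distinct());
        }

        // The orchestrator passes this name to the next activity, which downloads and deserializes it.
        return blob.Name;
    }
}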
The other option would be to batch your calls. I'm not entirely sure what your GetVinData() method does, but you could modify it so you only retrieve 50,000 (or x number of) items at a time. Your orchestrator could call your activity function multiple times and build up the list in the orchestrator:
[FunctionName(nameof(OrchestratorAsync))]
public async Task OrchestratorAsync([OrchestrationTrigger] IDurableOrchestrationContext context)
{
    var dataDetailList = new List<DataDetail>();
    var batches = BuildBatchesHere();

    foreach (var batch in batches)
    {
        dataDetailList.AddRange(
            await context.CallActivityAsync<List<DataDetail>>(
                nameof(GetDataFromSource), batch));
    }

    // Do whatever you need with dataDetailList
}
The Durable Functions extension will automatically take care of storing large messages in blobs when they don't fit in queues and tables. However, this support assumes that enough memory is available to serialize the payloads so that they can be uploaded to blob. Unfortunately, the design of the Durable Task Framework requires serializing the payload into a string first before uploading to blob, which means there will be a lot of memory pressure.
There are a few things you can try to mitigate this problem:
Make sure your function app is running in 64-bit mode. By default, Function apps are created in 32-bit mode, which has lower memory limits. We've seen several cases where simply switching to 64-bit resolves out-of-memory issues.
Try increasing the memory limit for your particular plan. If you're running in the Azure Functions Consumption plan, maximum memory is fixed. However, if you're running in Elastic Premium or App Service Plans, you have the option of using larger VMs with more memory.
As @stephyness suggested, consider limiting the amount of data you return from your function. This could be a subset of the full list, or the full list with smaller payloads (for example, source.GetVinData().Distinct().Select(x => x.VinNumber)); you might even get better results by simply removing .ToList(), which may be creating an unnecessary copy of your data. Essentially, return only the data that the orchestrator absolutely needs to make progress; returning data the orchestrator doesn't need is unnecessary overhead. A sketch of such a slimmed-down activity follows.
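For illustration only, a slimmed-down activity along those lines might look like this (a sketch; VinNumber is the property from the example above and is assumed to exist on DataDetail):
[FunctionName("GetVinNumbersFromSource")]
public static IEnumerable<string> GetVinNumbersFromSource([ActivityTrigger] ISource source)
{
    // Return only the field the orchestrator needs, and skip ToList() to avoid an extra in-memory copy.
    return source.GetVinData().Distinct().Select(x => x.VinNumber);
}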
Also be aware that there's a non-trivial performance impact when large message support is used. If you can avoid relying on it, your orchestrations will run much faster.
Other tips for controlling memory usage can be found in the Performance and Scale documentation.
I'm using Neo4jClient in a .NET Core web application (created from the Visual Studio 2019 template), and I have code that works as expected for my example.
I have 3 types of nodes: Category (important property: name, a string), Document (important property: CreatedBy, which holds the Username of the User who created it; other properties are used in the WHERE), and User (important property: Username, a string). A Document can have a TAG relationship with a Category, and a User can have an INTERESTED_IN relationship with a Category.
First I created a query that returns a Document if it has a TAG to a Category that the User is INTERESTED_IN (note: I don't have multiple relationships between nodes). The number of connections with the Category is also counted, so if there are too many Documents the method returns only the 10 with the most connections.
public async Task<IActionResult> GetNewsFeed(string username)
{
    string match = $"(a:User{{Username: '{username}'}})-[:INTERESTED_IN]->(res:Category)<-[:TAG]-(b:Document)";
    string where = $"NOT(b.isArchived AND c.isArchived AND b.CreatedBy =~ '{username}')";
    string with = "b.name AS name, b.CreatedBy AS creator, b.Pictures AS pictures, b.Paragraphs AS paragraphs, COUNT(res) AS interest1";

    var result = _context.Cypher.Match(match)
        //.Where(where)
        .With(with)
        .Return((name, creator, pictures, paragraphs, interest1) => new SimpleNewsFeedDTO
        {
            Name = name.As<string>(),
            Creator = creator.As<string>(),
            Pictures = pictures.As<string[]>(),
            Paragraphs = paragraphs.As<string[]>(),
            Interest = interest1.As<int>()
        })
        .OrderBy("interest1 DESC")
        .Limit(10)
        .ResultsAsync;

    return new JsonResult(await result);
}
When I provide a correct Username I can see results in the right order, but I also want to exclude Documents created by that User. However, when I uncomment the Where and send a request, I get the following error:
fail: Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware[1]
An unhandled exception has occurred while executing the request.
System.ArgumentException: Neo4j returned a valid response, however Neo4jClient was unable to deserialize into the object structure you supplied.
First, try and review the exception below to work out what broke.
If it's not obvious, you can ask for help at http://stackoverflow.com/questions/tagged/neo4jclient
Include the full text of this exception, including this message, the stack trace, and all of the inner exception details.
Include the full type definition of ExtraBlog.DTOs.SimpleNewsFeedDTO.
Include this raw JSON, with any sensitive values replaced with non-sensitive equivalents:
(Parameter 'content')
---> System.ArgumentNullException: Value cannot be null. (Parameter 'input')
at System.Text.RegularExpressions.Regex.Replace(String input, String replacement)
at Neo4jClient.Serialization.CommonDeserializerMethods.ReplaceAllDateInstancesWithNeoDates(String content)
at Neo4jClient.Serialization.CypherJsonDeserializer`1.Deserialize(String content, Boolean isHttp)
--- End of inner exception stack trace ---
at Neo4jClient.Serialization.CypherJsonDeserializer`1.Deserialize(String content, Boolean isHttp)
at Neo4jClient.GraphClient.Neo4jClient.IRawGraphClient.ExecuteGetCypherResultsAsync[TResult](CypherQuery query)
at ExtraBlog.Controllers.UserController.GetNewsFeed(String username) in E:\GithubRepo\NapredneBaze\Neo4JProject\extra-blog\ExtraBlog\Controllers\UsersController.cs:line 39
at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.TaskOfIActionResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Awaited|12_0(ControllerActionInvoker invoker, ValueTask`1 actionResultValueTask)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeInnerFilterAsync>g__Awaited|13_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|24_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeFilterPipelineAsync>g__Awaited|19_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)
at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Awaited|17_0(ResourceInvoker invoker, Task task, IDisposable scope)
at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task requestTask, ILogger logger)
at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)
Also, when I run something like this in the Neo4j browser, I get the same result with or without the WHERE:
MATCH (a:User{Username: 'NN'})-[:INTERESTED_IN]->(res:Category)<-[t:TAG]-(b:Document)
WHERE (NOT(b.isArchived AND res.isArchived AND b.CreatedBy =~ 'NN'))
WITH b, COUNT(res) AS interest1
RETURN b.name, interest1
ORDER BY interest1 DESC
LIMIT 10
So my question is: why can't I run my method from Visual Studio, and why doesn't this query return what I expect?
With these things, it's always good to check what you are actually generating from the client. To that end, you should look at the query.DebugQueryText property. If you do, you'll see the query you generate looks like this:
MATCH (a:User{Username: 'NN'})-[:INTERESTED_IN]->(res:Category)<-[:TAG]-(b:Document)
WHERE NOT(b.isArchived AND c.isArchived AND b.CreatedBy =~ 'NN')
WITH b.name AS name, b.CreatedBy AS creator, b.Pictures AS pictures, b.Paragraphs AS paragraphs, COUNT(res) AS interest1
RETURN name AS Name, creator AS Creator, pictures AS Pictures, paragraphs AS Paragraphs, interest1 AS Interest
ORDER BY interest1 DESC
LIMIT 10
If you try to execute that in your browser, it won't work. That's because you use the alias c to access the isArchived property, but there is no c in your MATCH; the Category node is aliased res.
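For example, the where string in the controller would need to reference res instead, something like this sketch (the rest of the query stays the same):
// res is the Category alias declared in the MATCH, so use it instead of the undefined c
string where = $"NOT(b.isArchived AND res.isArchived AND b.CreatedBy =~ '{username}')";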
I have a storage queue to which I post messages constructed using the CloudQueueMessage(byte[]) constructor. I then tried to process the messages in a webjob function with the following signature:
public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] CloudQueueMessage msg)
I get a consistent failure with the following exception:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: Program.ConsolidateDomainAuditItem ---> System.InvalidOperationException: Exception binding parameter 'msg' ---> System.Text.DecoderFallbackException: Unable to translate bytes [FF] at index -1 from specified code page to Unicode.
at System.Text.DecoderExceptionFallbackBuffer.Throw(Byte[] bytesUnknown, Int32 index)
at System.Text.DecoderExceptionFallbackBuffer.Fallback(Byte[] bytesUnknown, Int32 index)
at System.Text.DecoderFallbackBuffer.InternalFallback(Byte[] bytes, Byte* pBytes)
at System.Text.UTF8Encoding.GetCharCount(Byte* bytes, Int32 count, DecoderNLS baseDecoder)
at System.String.CreateStringFromEncoding(Byte* bytes, Int32 byteLength, Encoding encoding)
at System.Text.UTF8Encoding.GetString(Byte[] bytes, Int32 index, Int32 count)
at Microsoft.WindowsAzure.Storage.Queue.CloudQueueMessage.get_AsString()
at Microsoft.Azure.WebJobs.Host.Storage.Queue.StorageQueueMessage.get_AsString()
at Microsoft.Azure.WebJobs.Host.Queues.Triggers.UserTypeArgumentBindingProvider.UserTypeArgumentBinding.BindAsync(IStorageQueueMessage value, ValueBindingContext context)
at Microsoft.Azure.WebJobs.Host.Queues.Triggers.QueueTriggerBinding.<BindAsync>d__0.MoveNext()
Looking at the code of UserTypeArgumentBindingProvider.BindAsync, it clearly expects to be passed a message whose body is a JSON object. The UserType... prefix of the name also implies that it expects to bind a POCO.
Yet the MSDN article How to use Azure queue storage with the WebJobs SDK clearly states that
You can use QueueTrigger with the following types:
string
A POCO type serialized as JSON
byte[]
CloudQueueMessage
So why is it not binding to my message?
The WebJobs SDK parameter binding relies heavily on magic parameter names. Although [QueueTrigger(...)] string seems to permit any parameter name (and the MSDN article includes as examples logMessage, inputText, queueMessage, blobName), [QueueTrigger(...)] CloudQueueMessage requires that the parameter be named message. Changing the name of the parameter from msg to message fixes the binding.
Unfortunately, I'm not aware of any documentation which states this explicitly.
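For clarity, this is a sketch of the corrected signature based on the answer above (the queue name stays the same):
// Binding to CloudQueueMessage works once the parameter is named "message".
public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] CloudQueueMessage message)
{
    // message.AsBytes exposes the raw payload that was posted with the byte[] constructor.
    byte[] payload = message.AsBytes;
}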
Try this instead:
public static void ConsolidateDomainAuditItem([QueueTrigger("foo")] byte[] message)
CloudQueueMessage is a wrapper, usually the bindings get rid of the wrapper and allow you to deal with the content instead.
I am getting an OutOfMemoryException attempting to download a large file from an MVC controller. I want to reduce demand on server memory, so how can I download a file without first buffering its entire contents?
My code is:
var cd = new System.Net.Mime.ContentDisposition
{
FileName = filename,
Inline = false
};
Response.AppendHeader("Content-Disposition", cd.ToString());
var stream = new MemoryStream();
ExportService service = new ExportService(_mapper, Repository);
service.ExportVersion(view.ExportType, version, products, regions,
indicators, periods, stream);
stream.Position = 0;
return File(stream, "text/plain");
I would like the content to be streamed to the browser as I am creating it. To try to achieve that, I passed Response.OutputStream to the service and wrote to that instead of to the MemoryStream. However, that produces the same OutOfMemoryException, since by default ASP.NET buffers all content before sending it to the browser. Even if I set Response.BufferOutput = false (and return an EmptyResult), I still get an OutOfMemoryException.
Response.BufferOutput = false;
ExportService service = new ExportService(_mapper, Repository);
service.ExportVersion(view.ExportType, version, products, regions,
indicators, periods, Response.OutputStream);
return new EmptyResult();
Update
I am confident that this problem is due to hitting the limit of available memory, since the code works on one machine but not on another. Furthermore, on the machine with more restricted memory, the stack trace shows the error occurring at varying locations in the code, depending on the content being downloaded. Here is a sample stack trace showing the exception occurring while allocating memory to a List object:
[OutOfMemoryException: Exception of type 'System.OutOfMemoryException' was thrown.]
System.Collections.Generic.List`1.set_Capacity(Int32 value) +62
System.Collections.Generic.List`1.EnsureCapacity(Int32 min) +43
System.Collections.Generic.List`1.Add(T item) +51
MyApp.Mapping.DocumentVersionEntityConverter.LoadMultiVariableData(DocumentVersion version, MultiVariableQuestionView view) in c:\svn\Client Applications\MyApp\Web\Config\Mapping\DocumentVersionEntityConverter.cs:207
MyApp.Mapping.DocumentVersionEntityConverter.SetResponses(DocumentVersion version, List`1 questions, DataFilters filters) in c:\svn\Client Applications\MyApp\Web\Config\Mapping\DocumentVersionEntityConverter.cs:112
MyApp.Mapping.DocumentVersionEntityConverter.ScanSection(DocumentVersion version, SectionView section, DataFilters filters) in c:\svn\Client Applications\MyApp\Web\Config\Mapping\DocumentVersionEntityConverter.cs:79
MyApp.Mapping.DocumentVersionEntityConverter.Convert(ResolutionContext context) in c:\svn\Client Applications\MyApp\Web\Config\Mapping\DocumentVersionEntityConverter.cs:63
AutoMapper.DeferredInstantiatedConverter`2.Convert(ResolutionContext context) +57
AutoMapper.<>c__DisplayClass15.<ConvertUsing>b__14(ResolutionContext context) +10
AutoMapper.Mappers.CustomMapperStrategy.Map(ResolutionContext context, IMappingEngineRunner mapper) +13
AutoMapper.Mappers.TypeMapMapper.Map(ResolutionContext context, IMappingEngineRunner mapper) +130
AutoMapper.MappingEngine.AutoMapper.IMappingEngineRunner.Map(ResolutionContext context) +355
We are running a web site with around 15,000 real-time users (Google Analytics) and around 1,000 requests/sec (perf counters).
We have two web servers behind a load balancer.
Sometimes every day, sometimes once a week, one of our web servers stops executing requests and starts responding with errors, and every request logs the following exception:
"System.IndexOutOfRangeException - Index was outside the bounds of the array."
Our environment: IIS 8.5, .NET 4.5.0, MVC 5.1.0, Unity 3.5 (same behavior with 3.0), WebActivatorEx 2.0.
In IIS, the worker process count is 1 and other settings are at their defaults.
We could not identify any specific scenario that triggers this error. After an app pool recycle everything starts again with no problem, and before the requests begin responding with errors there are no related errors logged.
There is one related question asked in the past about an older Unity version:
https://unity.codeplex.com/discussions/328841
http://unity.codeplex.com/workitem/11791
I could not see anything I can do about it.
Here are the exception details:
System.IndexOutOfRangeException
Index was outside the bounds of the array.
System.IndexOutOfRangeException: Index was outside the bounds of the array.
at System.Collections.Generic.List`1.Enumerator.MoveNext()
at System.Linq.Enumerable.WhereListIterator`1.MoveNext()
at System.Collections.Generic.List`1..ctor(IEnumerable`1 collection)
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Microsoft.Practices.Unity.NamedTypesRegistry.RegisterType(Type t, String name)
at Microsoft.Practices.Unity.UnityDefaultBehaviorExtension.OnRegisterInstance(Object sender, RegisterInstanceEventArgs e)
at System.EventHandler`1.Invoke(Object sender, TEventArgs e)
at Microsoft.Practices.Unity.UnityContainer.RegisterInstance(Type t, String name, Object instance, LifetimeManager lifetime)
at Microsoft.Practices.Unity.UnityContainerExtensions.RegisterInstance[TInterface](IUnityContainer container, TInterface instance, LifetimeManager lifetimeManager)
at DemoSite.News.Portal.UI.App_Start.UnityConfig.<>c__DisplayClass1.<RegisterTypes>b__0()
at DemoSite.News.Portal.Core.Controller.BaseController.Initialize(RequestContext requestContext)
at System.Web.Mvc.Controller.BeginExecute(RequestContext requestContext, AsyncCallback callback, Object state)
at System.Web.Mvc.MvcHandler.<BeginProcessRequest>b__4(AsyncCallback asyncCallback, Object asyncState, ProcessRequestState innerState)
at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncVoid`1.CallBeginDelegate(AsyncCallback callback, Object callbackState)
at System.Web.Mvc.Async.AsyncResultWrapper.WrappedAsyncResultBase`1.Begin(AsyncCallback callback, Object state, Int32 timeout)
at System.Web.Mvc.Async.AsyncResultWrapper.Begin[TState](AsyncCallback callback, Object callbackState, BeginInvokeDelegate`1 beginDelegate, EndInvokeVoidDelegate`1 endDelegate, TState invokeState, Object tag, Int32 timeout, SynchronizationContext callbackSyncContext)
at System.Web.Mvc.MvcHandler.BeginProcessRequest(HttpContextBase httpContext, AsyncCallback callback, Object state)
at System.Web.HttpApplication.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
My configuration is as follows:
public static void RegisterTypes(IUnityContainer container)
{
var section = (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
container.LoadConfiguration(section);
ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(container));
}
The Initialize method is as follows:
protected override void Initialize(System.Web.Routing.RequestContext requestContext)
{
if (requestContext.RouteData.Values["ViewActionId"] != null)
{
int viewActionId;
if (!int.TryParse(requestContext.RouteData.Values["ViewActionId"].ToString(), out viewActionId))
return;
var cacheProvider = ServiceLocator.Current.GetInstance<ICacheProvider>();
List<ViewActionClass> viewActionClasses = null;
string cacheKey = CacheKeyCompute.ComputeCacheKey("ViewActionClass", CacheKeyTypes.DataCache,
new KeyValuePair<string, string>("viewActionId", viewActionId.ToString()));
_configuration = ServiceLocator.Current.GetInstance<IConfiguration>();
viewActionClasses =
cacheProvider.AddOrGetExistingWithLock<List<ViewActionClass>>(cacheKey, () =>
{
var viewActionClassBusiness =
ServiceLocator.Current.GetInstance<IViewActionClassBusiness>();
return viewActionClassBusiness.ViewActionClassGetByViewActionId(viewActionId);
});
ViewBag.ActionClass = viewActionClasses;
ViewBag.Configuration = _configuration;
}
base.Initialize(requestContext);
}
Registration XML for ICacheProvider, IConfiguration and IViewActionClassBusiness:
<type type="DemoSite.Runtime.Caching.ICacheProvider, DemoSite.Core"
mapTo="DemoSite.Runtime.Caching.ObjectCacheProvider, DemoSite.Core">
<lifetime type="containerControlledLifetimeManager" />
</type>
<type type="DemoSite.Core.Configuration.IConfiguration, DemoSite.Core"
mapTo="DemoSite.Core.Configuration.ConfigFileConfiguration, DemoSite.Core">
<lifetime type="containerControlledLifetimeManager" />
</type>
<type type="DemoSite.News.Business.IViewActionClassBusiness, DemoSite.News.Business"
mapTo="DemoSite.News.Business.Default.ViewActionClassBusiness, DemoSite.News.Business.Default">
<lifetime type="perRequestLifetimeManager" />
</type>
Maybe it is related to high traffic.
Has anyone encountered a problem like this, and is there any solution?
Thanks in advance.
As far as I can see from the stack trace, you are registering instances in the container during the web request. The RegisterType and RegisterInstance methods are not thread-safe in Unity (and this probably holds for most DI libraries in .NET). This explains why this is happening at random points and under high load.
It is good practice to register your container only at start-up and not change it later on. With the Dependency Inversion Principle, and the Dependency Injection pattern in particular, you try to centralize the knowledge of how object graphs are wired, but you decentralize it again by doing new registrations later on. And even if registration were thread-safe with Unity, it's still very likely that you would introduce race conditions by changing registrations at runtime.
UPDATE
Your code contains the following line, which causes the problems:
ServiceLocator.SetLocatorProvider(() => new UnityServiceLocator(container));
This seems very innocent, but in fact it causes both a concurrency bug and a memory leak.
Because the new statement is inside the lambda, a new UnityServiceLocator is created every time you call ServiceLocator.Current. That wouldn't be bad by itself, but the UnityServiceLocator's constructor makes a call to container.RegisterInstance to register itself in the container, and as I already said, calling RegisterInstance is not thread-safe.
But even if it were thread-safe, it would still cause a memory leak in your application, since a call to RegisterInstance does not replace an existing registration but appends to a list of registrations. This means that the list of UnityServiceLocator instances in the container will keep growing and will eventually cause the system to crash with an OutOfMemoryException. You are actually lucky that you hit the concurrency bug first, because the OOM bug would be much harder to trace back.
The fix is actually very simple: move the construction of the UnityServiceLocator out of the lambda and return that single instance every time:
var locator = new UnityServiceLocator(container);
ServiceLocator.SetLocatorProvider(() => locator);
The behavior of the UnityServiceLocator is a design flaw in my opinion: since RegisterInstance is not thread-safe and the UnityServiceLocator has no idea how many times it is created, it should never call RegisterInstance from within its constructor, or at least not without checking whether it is safe to register that instance.
The problem, however, is that removing that call to RegisterInstance is a breaking change, but it is still probably the best solution for the Unity team. Most users will probably not notice the missing IServiceLocator registration anyway, and if they do, Unity will communicate a clear exception message. Another option would be to let the UnityServiceLocator check whether any instances have already been resolved from the container, and in that case throw an InvalidOperationException from within its constructor.
Isn't that a crazy error?
I get this when trying to open a form that contains some UserControls from another assembly, using Entity Framework and SQL CE, in the Visual Studio designer.
Object of type Namespace.T[] cannot be converted to type Namespace.T[]!!!
Call Stack:
at System.RuntimeType.TryChangeType(Object value, Binder binder, CultureInfo culture, Boolean needsSpecialCast)
at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
at System.Reflection.RtFieldInfo.InternalSetValue(Object obj, Object value, BindingFlags invokeAttr, Binder binder, CultureInfo culture, Boolean doVisibilityCheck, Boolean doCheckConsistency)
at System.Runtime.Serialization.FormatterServices.SerializationSetValue(MemberInfo fi, Object target, Object value)
at System.Runtime.Serialization.ObjectManager.CompleteObject(ObjectHolder holder, Boolean bObjectFullyComplete)
at System.Runtime.Serialization.ObjectManager.DoNewlyRegisteredObjectFixups(ObjectHolder holder)
at System.Runtime.Serialization.ObjectManager.RegisterObject(Object obj, Int64 objectID, SerializationInfo info, Int64 idOfContainingObj, MemberInfo member, Int32[] arrayIndex)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.RegisterObject(Object obj, ParseRecord pr, ParseRecord objectPr, Boolean bIsString)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.ParseObjectEnd(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Parse(ParseRecord pr)
at System.Runtime.Serialization.Formatters.Binary.__BinaryParser.Run()
at System.Runtime.Serialization.Formatters.Binary.ObjectReader.Deserialize(HeaderHandler handler, __BinaryParser serParser, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream, HeaderHandler handler, Boolean fCheck, Boolean isCrossAppDomain, IMethodCallMessage methodCallMessage)
at System.Runtime.Serialization.Formatters.Binary.BinaryFormatter.Deserialize(Stream serializationStream)
at System.Resources.ResXDataNode.GenerateObjectFromDataNodeInfo(DataNodeInfo dataNodeInfo, ITypeResolutionService typeResolver)
at System.Resources.ResXDataNode.GetValue(ITypeResolutionService typeResolver)
at System.Resources.ResXResourceReader.ParseDataNode(XmlTextReader reader, Boolean isMetaData)
at System.Resources.ResXResourceReader.ParseXml(XmlTextReader reader)
But it's exactly the same type name!
The project builds successfully and runs OK!
OK, I deleted the .resx file of the form and now I get 2 other errors I thought I had already got past.
The 1st is "The specified named connection, not intended to be used with the EntityClient provider, or not valid"
Call Stack:
at System.Data.EntityClient.EntityConnection.ChangeConnectionString(String newConnectionString)
at System.Data.EntityClient.EntityConnection..ctor(String connectionString)
at System.Data.Objects.ObjectContext.CreateEntityConnection(String connectionString)
at System.Data.Objects.ObjectContext..ctor(String connectionString, String defaultContainerName)
at DJPro.Settings.Model.SettingsEntities..ctor() in D:\Visual Studio Projects\DJProAutomation\DJPro.Settings.Model\SettingsSelfTrackModel.Context.cs:line 33
at DJPro.Data.Access.SettingsDataOperations.GetConfiguration() in D:\Visual Studio Projects\DJProAutomation\DJPro.Data.Access\SettingsDataOperations.cs:line 33
at DJPro.Studio.Controls.DeckControl..ctor() in D:\Visual Studio Projects\DJProAutomation\DJPro.Deck.Controls\DeckControl.cs:line 51
The 2nd is about a control I have in a library, saying:
"The variable deckControl1 is either undeclared or was never assigned"
Call Stack:
at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.Error(IDesignerSerializationManager manager, String exceptionText, String helpLink)
at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeExpression(IDesignerSerializationManager manager, String name, CodeExpression expression)
at System.ComponentModel.Design.Serialization.CodeDomSerializerBase.DeserializeStatement(IDesignerSerializationManager manager, CodeStatement statement)
Then I restored the .resx file from a backup and I'm back to the first problem.
Such strange errors. Everything seems fine in the Entity Data Model libraries, and the app.config has all the necessary connection strings. As for the deckControl1 UserControl, it looks fine in the library where I created it and opens OK.
This is driving me crazy and has stopped development.
Any idea?
It looks like you have a version conflict between the assembly used to generate the ResX and the currently referenced assembly.
Try removing the reference, re-adding it as a project reference, and regenerating the ResX.
Found the problem: code in a UserControl constructor that initializes an Entity Framework context causes problems. The problem can even occur when initializing the context for data operations in the Load event handler.
Tricky!
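As a sketch of one possible workaround (not from the original answer, so treat it as an assumption): skip the Entity Framework initialization when the control is being instantiated by the Visual Studio designer, for example by checking LicenseManager.UsageMode in the constructor:
using System.ComponentModel;
using System.Windows.Forms;

public partial class DeckControl : UserControl
{
    public DeckControl()
    {
        InitializeComponent();

        // The Visual Studio designer instantiates the control; skip EF/context setup in that case.
        if (LicenseManager.UsageMode == LicenseUsageMode.Designtime)
            return;

        // Runtime-only initialization (EF context, configuration lookup, etc.) goes here.
    }
}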