I've been struggling with this for a while now, and I see that I'm not the only one with the problem (see this and that).
I've managed to debug for a bit, and found a solution, though I'm pretty sure that this is not the 'right' way.
The first debug session (before the dev server was enabled) showed that the ProfiledDbConnectionFactory and ProfiledDbConnection classes provide the required data. But after the connection is created, the static Instance property on ProfiledDbProviderFactory is initialized (by calling the default constructor), and CreateConnection() is apparently run on that instance, resulting in a null reference exception (tail is null).
I've managed to solve this by running
ProfiledDbProviderFactory.Instance.InitProfiledDbProviderFactory(_profiler, ripInnerProvider(_conn));
at the end of ProfiledDbConnection(DbConnection connection, IDbProfiler profiler).
This allows me to view the sql profiling, but as I wrote, I have a feeling that this isn't the correct fix.
Here's the sample code I used.
Not sure if there is something wrong with my environment, or my code, as I have a feeling that this should work out of the box. Any comments/suggestions?
Sam?
There is nothing wrong with using ProfiledDbConnectionFactory; Entity Framework is designed to support persistence ignorance. I generally just use the debugger and set a breakpoint to view my generated SQL statements, since the Code First framework exposes the SQL statement readily there.
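For example (the Blog/BlogContext model below is made up purely for illustration), with the DbContext API the SQL for a pending query is just its ToString(), so you can see it in a watch window at a breakpoint or write it to the debug output; a minimal sketch:
using System.Data.Entity;
using System.Linq;

// Hypothetical model, just for illustration.
public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Blog> Blogs { get; set; }
}

public static class SqlPeek
{
    public static void DumpGeneratedSql()
    {
        using (var db = new BlogContext())
        {
            // ToString() on an un-enumerated LINQ-to-Entities query returns the
            // SELECT statement EF will send, so it can be viewed at a breakpoint.
            var query = db.Blogs.Where(b => b.Name.StartsWith("A"));
            System.Diagnostics.Debug.WriteLine(query.ToString());
        }
    }
}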
I would stick to using profiled connections the way you described in your unit testing, as opposed to using them in production code, since the profiling may be a performance hit. You may also want to consider the SQL Profiler included with SQL Server; there is also a SQL profiler available as a Visual Studio extension, though I'm not sure whether it supports Code First yet.
This problem has been resolved in version 1.9.1 of MiniProfiler.EF
Environment:
Windows 8.1
Visual Studio 2013
Project type: ASP.NET MVC
Debugging in IIS Express
.NET: 4.5
Database: SQLExpress 2012
Entity Framework 5
When I run my solution (F5), in Debug or Release configuration, I can manipulate data through EF with no issues; data changes persist between views. If I query the database in Management Studio however, none of the updates are reflected in it. If I update a record in Management Studio, the changes aren't reflected in my running solution either.
If I then stop and restart, or even just stop and do a build (Ctrl+Shift+B) in VS, all the data in the web application reverts back to the state of the database as seen in Management Studio.
If I add a trace to the database, I can see reads, but no writes coming through to the db. Additionally, if I stop the SQLExpress service, my pages throw "SQL Server service has been paused/stopped" exceptions. So bizarrely enough, it looks like it's reading from the correct database, but may be writing to a development cache somewhere?
This leads me to think that on every build, a copy of the db is being used for that debug/run session's state.
So the question then becomes, where is this being set, and where is the temp db living? I have scoured my web.config, web.debug.config, web.release.config, but there is no reference to an alternate database.
I have looked in the /App_Data and /bin folders, but there's no extra database there either. I even resorted to watching the filesystem with Procmon for any file operations performed by VS during a build, but I couldn't find anything of note (there is a ton of data, so I may have missed something).
I have a couple of debug statements spitting out the connection string being used by EF, and can confirm that it's pointing to the correct SQLExpress instance.
System.Diagnostics.Debug.WriteLine("Conn String: " + ctx.Database.Connection.ConnectionString);
The only other possibility is that EF is suddenly holding a large cache. I doubt this though as I trace the DB frequently and updates generally happen immediately.
This behaviour is relatively new, but I don't know exactly when it started. The only significant change was the VS upgrade from 2012 to 2013, but I can't be sure it correlates with the upgrade.
Anyway, I'm now at the end of my tether, and would love any suggestions that I could follow.
OK, I figured it out. So for anyone else having similar issues, it relates to synchronising EF contexts.
I was declaring my classes with a static context reference to save having to declare it in every method thus:
public class MyClass : Controller
{
    private static MyContext db = new MyContext();
    ...
}
Being static, as you would expect, it's evaluated at start-up, and held in memory.
Add to that the fact that I was changing properties on objects retrieved from the static context but updating through a different context (helper function confusion), and you get the confused state I was seeing.
So the moral of the story:
Don't use static context references. Declare them as you need them (see the sketch below).
Double check that you're retrieving and updating to the same context.
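A minimal sketch of what that looks like, reusing the MyClass/MyContext names from above; the Items DbSet, the Name property and the Edit action are made up for illustration:
using System.Web.Mvc;

public class MyClass : Controller
{
    // One context per controller instance (i.e. per request), not static.
    private readonly MyContext db = new MyContext();

    public ActionResult Edit(int id)
    {
        var item = db.Items.Find(id);   // read from this context... (hypothetical DbSet/property)
        item.Name = "updated";
        db.SaveChanges();               // ...and save through the same context
        return View(item);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing)
        {
            db.Dispose();
        }
        base.Dispose(disposing);
    }
}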
Visual Studio installs SQL Express; normally this is what Code First uses. You can use Management Studio to connect to the Express instance. It's also possible that it's using LocalDB, which is the default in VS 2013 and still optional in 2012.
Yes, I've read the warning label, and I know that dynamically loading assemblies is somewhat discouraged. That said, I have an application that loads assemblies - that's just how it works. It works fine on Windows. Works fine on Windows CE. I need it to "work fine" on Android, even if it takes some massaging.
Basically the app is an engine that loads up plug-in DLLs (we'll call them Adapters) that meet specific interfaces at run time. Under Windows, it even detects the appearance of a DLL at any point and goes and loads it - I'm fine if that's not going to work under Android.
What I'm having trouble getting working is having the Engine load an Adapter that it knew about at design/compile time but without hard coding the name of that Adapter into the Engine code. I'm fine with adding a reference to the Adapter to get it to not get linked out, but I really, really don't want to have to add in the DLL name every time, as the DLLs change with different deployments, and that would lead to a huge headache.
So I figured that if it's referenced, it would get into the APK, and I could use reflection to load it like this:
var asm = Assembly.Load("TheAdapterName.dll");
Initial tests show that this works for the Adapter if I just hard code in the name, but again, I really, really want to avoid that.
So I thought that maybe I could reflect through the references and extract the name, but oddly, not all references actually show up when I do that. So I do this:
var refs = asm.GetReferencedAssemblies().Select(a => a.Name).ToArray();
And I get back an array of 14 assembly names. But the assembly (asm) has 16 references, one of which is the Adapter plug-in I need to load. The Adapter is definitely there - heck I used Assembly.Load with the full name two lines above and it resolved.
I thought, ok, maybe I can figure out the "path" to the folder from which I'm running, and then look for DLLs there and load that way. Ha. After several hours of trying to figure out a way to get the path that would work under Debug and Release, I came up with nothing but more grey hair.
Sooooo...... any thoughts on how I might get the name of a DLL that I know is in my APK, but that I don't "know" the name of at build time (I'm loading them and looking for interfaces via reflection to detect their "Adapterness").
If those methods aren't working for you, then the only suggestion I can think of is to add a prebuild step which updates either a C# or an Assets file in order to provide the list you need.
Obviously this is extra work, but should be fully automated and is guaranteed to work no matter what platform changes get thrown at you.
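For what it's worth, here is a rough sketch of the shape such a generated C# file could take; the KnownAdapters class, the IAdapter interface and the loading loop are all made up for this example, and the prebuild script would rewrite the array on every build:
using System;
using System.Linq;
using System.Reflection;

// Stand-in for whatever interface the engine actually probes for.
public interface IAdapter { }

// <auto-generated by the prebuild step - do not edit by hand>
public static class KnownAdapters
{
    public static readonly string[] AssemblyNames =
    {
        "TheAdapterName.dll"   // same form the question reports working with Assembly.Load
    };

    public static void LoadAll()
    {
        foreach (var name in AssemblyNames)
        {
            var asm = Assembly.Load(name);
            var adapters = asm.GetTypes()
                              .Where(t => typeof(IAdapter).IsAssignableFrom(t) && !t.IsAbstract);
            foreach (var t in adapters)
            {
                Console.WriteLine("Found adapter: " + t.FullName);
            }
        }
    }
}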
As an aside, I also just looked at one of my mvx projects using reflector - it shows the same asm.GetReferencedAssemblies() list as your investigations report - runtime-loaded plugins are not listed. I guess that the GetReferencedAssemblies method is reporting only on assemblies actually used to import Type references at the IL level - so if you reference an assembly in the csproj but don't import any types then it doesn't list them as references in the compiled code.
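A tiny illustration of that last point (a made-up console program): an assembly referenced in the csproj but never used for a type, e.g. a plugin you only ever load via Assembly.Load, will be missing from the list printed here.
using System;
using System.Linq;
using System.Reflection;

class ReferenceListDemo
{
    static void Main()
    {
        // Only assemblies whose types actually appear in the compiled IL
        // end up in GetReferencedAssemblies().
        var names = Assembly.GetExecutingAssembly()
                            .GetReferencedAssemblies()
                            .Select(a => a.Name);
        Console.WriteLine(string.Join(Environment.NewLine, names));
    }
}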
I have a very simple Delphi 2010 DLL that I load from a Visual Studio 2008 C++ ATL console application (an MFC GUI app does not work either). When I debug the console app from the IDE directly - no breakpoints - the output from the application is not correct, but when I run the app directly, or if I attach to the process with the debugger, then it works 100%. Debugging the same DLL from a Delphi console app (i.e. running it from the IDE) also works.
The VS debugger seems to break the app depending on how you run it. VS2010 does the same!
I have made 100% sure - several times! - that the data types and calling convention of the dll exports and those in the console app match. I can go into more detail but I don't want to confuse the matter with what may be irrelevant information. Please tell me if I have to go into the specifics of the code if what I have offered is not enough.
Has anyone experienced this sort of thing and know how to fix it?
I had a similar problem once (different behavior in the IDE/debugger and in the standalone application). It turned out that I was checking the value of an uninitialized variable. Under the debugger it was (always!) allocated in a previously used block of memory, so the value was not empty and the application worked correctly (because only the conditional block was erroneous). However, the OS sometimes put the application in an empty memory block (filled with 0s), the condition failed, and the application crashed.
Maybe this is the issue? Try using OutputDebugString() calls to track down the values of variables during the library's runtime. Without some source code, I believe it's not easy to say what the reason might be.
Best regards -- Spook.
I'm using TJvPluginManager in the JVCL to create and load BPL-based plugins for my program. Problem is, one of the plugins isn't loading properly, and I can't debug it. Every time I try to trace into the loading sequence, it gets as far as the LoadLibrary API call, and then the debugger seems to forget what it's there for. It completely loses the ability to associate program code with source lines, give meaningful data in a call stack, or display local variables. It will still stop at breakpoints, but it breaks to the CPU window, with all the inline source code stripped out.
This happens on Delphi 2007 and 2009, and it's driving me nuts. Does anyone know how to load a plugin without it breaking the debugger? Does anyone even know why it's breaking it in the first place?
NOTE: I'm not looking for alternative methods of debugging. I know all about tracing and logging and all the rest. What I want is to understand what's going wrong and how to fix it. Surely I'm not the only person who's ever used TJvPluginManager?
Not quite the answer to your question: Have you tried to debug the package project, by setting the host application and putting a breakpoint into the package's startup code?
I've found Ray Konopka's (Raize) CodeSite to be invaluable for debugging in situations where the integrated debugger is acting up. Thinking about the things I want to monitor using CodeSite actually helps me focus on what's important - it enforces good habits.
Another alternative to CodeSite is Overseer, which is part of the Nexus project but stands alone, so it does not require you to use their framework. CodeSite is by far the better option, but in a pinch Overseer would work just as well.
I found that using packages for plugins can be problematic, so many years ago I switched to a completely COM-based implementation for plugins and have never had any problems. COM-based plugins have other advantages: they don't require Delphi to write, they do not need to be recompiled when the main app switches to a new version of the compiler (my plugins compiled with Delphi 5 still run fine against the main application compiled in Delphi 2009!), and it is easier to write test applications to assist in debugging.
The only side effects I've noticed are that shared code ends up in both executables, and the plugins require registration in the registry.
Hmmmm... This is a stupid question, but I have to ask: does the initialization function have the EXACT same declaration syntax as the other plugins that work? (From your question, I deduced that you made some others that work.)
Check your dependencies. Make sure each unit is compiled into one package only. Whenever a package needs to reference a unit from another package, use the requires clause to do so. Watch for compiler warnings about implicitly linked units.
I'm having an interesting problem implementing a global keyboard hook.
I wrote a dll which is used to set the hook and then an application (Delphi) which loads the dll and processes the results of the hook. This was done this afternoon on my PC at work and after some testing I figured it was working 100%.
I've just tested the same app and dll here at home and I'm not getting any errors, but the application does not appear to be getting any data either.
Both machines are WinXP, although my work machine is SP2 and this one is SP3.
Has there been some change in the Win32 API which would cause this to malfunction, or could the problem be related to some A/V / Spyware / MS Update that has been released recently?
I'm hoping somebody here will know of an obvious reason that this may happen before I spend hours debugging.
Thanks!
Actually, some A/Vs don't like homemade hooks. I've had the same problem with my mouse hook on some machines, and it doesn't depend on the service pack version.
Yeah, I could. I haven't installed Delphi on this machine, but I think I might have to. I'm going for the low hanging fruit here. If there's an obvious answer, there's no need to go through all the trouble of debugging and hoping to find what might be the problem.
My first suspicion is that there's been a change in the API somewhere.
As I mentioned, this app works absolutely perfectly on my work machine.
Do you have a debugger on your home computer? Do you receive any messages via the hook at all?
Could it be that some other application is hooking and doesn't pass the message on down the hook chain?
BTW: I love virtual machines for this kind of testing. Keep a clean XP install. Install SP2 and test your application. Roll back to the clean install again and install SP3. Try your application again. This way you will know if it's SP3, since there is nothing else to mess things up. I like to keep a set of snapshots around with different configurations.
Which kind of hook are you using? I once used the WH_CBT type and encountered problems when certain other applications were running. One case I could trace back to Trillian, which seems to do some kind of hooking as well (and maybe screws things up).
Apart from that, I am currently working on an application that uses the WH_KEYBOARD hook, and it works equally well on SP2 and SP3. MSDN also doesn't mention any service-pack-related changes.
What you can do to trace the bug on your home machine:
Make sure to check the result values of all system API calls (and use GetLastError in case of error)
Provide some kind of debug output in case of error (e.g. a message box or a text file)
Optional: log some status messages so you know what's going on internally
One alternative is to use a low-level keyboard hook (just a different parameter to SetWindowsHookEx). The hook is processed in the message loop of the registering thread, and thus does not need to inject a DLL everywhere. For some odd reason, virus scanners/firewalls interfere much less with it; they often silently block DLL injection or normal keyboard hooks. It also removes the need to share the hHook across processes if you want it to work in older Windows versions.
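The question is about a Delphi host, but just to illustrate the idea, here is a rough C# sketch of a WH_KEYBOARD_LL hook (the same SetWindowsHookEx / CallNextHookEx calls apply from Delphi); the callbacks are serviced by the registering thread's message loop and no DLL is injected into other processes:
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;
using System.Windows.Forms;

static class LowLevelHookSketch
{
    const int WH_KEYBOARD_LL = 13;
    const int WM_KEYDOWN = 0x0100;

    delegate IntPtr LowLevelKeyboardProc(int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("user32.dll", SetLastError = true)]
    static extern IntPtr SetWindowsHookEx(int idHook, LowLevelKeyboardProc lpfn, IntPtr hMod, uint dwThreadId);

    [DllImport("user32.dll", SetLastError = true)]
    static extern bool UnhookWindowsHookEx(IntPtr hhk);

    [DllImport("user32.dll")]
    static extern IntPtr CallNextHookEx(IntPtr hhk, int nCode, IntPtr wParam, IntPtr lParam);

    [DllImport("kernel32.dll")]
    static extern IntPtr GetModuleHandle(string lpModuleName);

    // Keep a reference to the delegate so the GC doesn't collect it while the hook is installed.
    static readonly LowLevelKeyboardProc Proc = HookCallback;
    static IntPtr hookId = IntPtr.Zero;

    [STAThread]
    static void Main()
    {
        using (var module = Process.GetCurrentProcess().MainModule)
        {
            hookId = SetWindowsHookEx(WH_KEYBOARD_LL, Proc, GetModuleHandle(module.ModuleName), 0);
        }

        Application.Run();              // hook callbacks arrive via this thread's message loop
        UnhookWindowsHookEx(hookId);
    }

    static IntPtr HookCallback(int nCode, IntPtr wParam, IntPtr lParam)
    {
        if (nCode >= 0 && wParam == (IntPtr)WM_KEYDOWN)
        {
            int vkCode = Marshal.ReadInt32(lParam);   // first field of KBDLLHOOKSTRUCT is the virtual-key code
            Debug.WriteLine("Key down: " + vkCode);
        }
        return CallNextHookEx(hookId, nCode, wParam, lParam);
    }
}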
And if you are abusing a keyboard hook to implement global hotkeys (I've seen that a lot), use RegisterHotKey (http://msdn.microsoft.com/en-us/library/ms646309.aspx) instead.