I have set up a completely new virtual server: a Windows Server 2008 R2 Datacenter Edition with MS SQL Server 2008 R2 Standard Edition on the same machine. The server uses 2 CPUs and has 4 GB of memory, so there should be plenty of power available.
On the server I have only five Umbraco websites installed.
But page loading is very slow on my Umbraco 4.7.1.1 installation. When I run ?umbDebugShowTrace=true I get this:
Category: umbracoInit
Message: handling request
From First(s): 4.60952439486024E-05
From Last(s): 0,000046
All other Categories are very fast. Does anyone have an idea of what the problem is?
Having accessed http://st5.workcopy.net/?umbDebugShowTrace=true (which shows your website's trace), your main problem is in your NavigationSelect macro, which starts rendering 0.015-0.016 seconds into the page lifecycle and completes rendering at 8.52 seconds into the cycle. May I suggest you look into improving the method calls in that macro (are you calling library.NiceUrl() a lot?), as that seems to be the source of your slow page loads.
Your FirstPageBoxes macro also seems to be attempting an awful lot of static type conversions, which indicates many calls to a property (e.g. Model.MyProperty) that has an underlying complex type (such as an XML block - a YouTube data type, perhaps?). Maybe you should read this once into a separate, strongly-typed variable so that the cast is only performed once at runtime, improving the macro's performance further.
We are developing console software in Delphi 7.
To simplify: the software uses an embedded TCP server to answer external requests from a CGI. The responses contain generated HTML pages with TeeChart graphs and data extracted from a database via DbExpress.
On Windows 7 and Windows 2008 R2 servers, we noticed a significant increase in the run time of our software (2 or 3 times the original processing time on Windows XP or Windows Server 2003) in a standard execution context: the software launched as a service under the SYSTEM account.
But when the software is launched as a simple user, from the command prompt or directly from the IDE (debug mode), the problem simply disappears.
My first question is: has anyone else noticed this problem?
Using Process Explorer, we also noticed that when the software is launched as a service, no GDI handles are created, nor any User handles. But when the software is launched under a user account, some of these handles are created. On Windows XP and Windows Server 2003, these handles are always created, whether the software is launched as a service or under a simple user account.
Can this observation be linked to our problem?
If you have already noticed this behaviour, how did you fix the problem?
Because of the many places where we relied on the Windows API CompareString function, we could not replace it with non-Windows versions.
But we found that the API works fine when using LOCALE_INVARIANT ($7F) instead of LOCALE_USER_DEFAULT.
So we decided to override the constant as defined in Windows, everywhere it was used for comparison purposes, with a conditional compilation block like this:
{$IFDEF OVERLOAD_LUD}
const
  // LOCALE_INVARIANT as defined in the Windows headers (LCID $7F)
  LOCALE_INVARIANT = $7F;
  // Redefine LOCALE_USER_DEFAULT so existing CompareString calls use the invariant locale
  LOCALE_USER_DEFAULT = LOCALE_INVARIANT;
{$ENDIF}
That solved the problem.
I think we found the source of our problems. For those who are looking for a solution, here's what we've done:
The delays are due to the use of Win32 API functions that take locales. Functions using a Locale Identifier (LCID) are now deprecated in favor of the Locale Name functions (see http://msdn.microsoft.com/en-us/library/windows/desktop/dd319091%28v=vs.85%29.aspx).
Our code makes heavy use of CompareString (http://msdn.microsoft.com/en-us/library/windows/desktop/dd317759%28v=vs.85%29.aspx), including indirectly through the IndexOf method of TStringList. The execution of this method (CompareStringA in kernel32) is slowed down while running in the SYSTEM user context (in Session 0).
To get around this problem, we overloaded TStringList to use CompareStr instead of CompareString. This workaround suits our context, but note that CompareStr performs an ordinal (byte-by-byte) comparison and is not locale-aware, unlike CompareString. (Not to mention the fact that this method is about 10 times faster... http://www.gefvert.org/blog/archives/651)
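A minimal sketch of such an override (the unit and class names here are only illustrative): TStringList routes IndexOf, Find and Sort through its virtual CompareStrings method, which by default calls the locale-aware AnsiCompareStr/AnsiCompareText (and therefore the Win32 CompareString API), so a small descendant can switch them to the ordinal routines:

unit OrdinalStringList;

interface

uses
  Classes, SysUtils;

type
  // TStringList descendant whose comparisons never touch the Win32 locale APIs
  TOrdinalStringList = class(TStringList)
  protected
    function CompareStrings(const S1, S2: string): Integer; override;
  end;

implementation

function TOrdinalStringList.CompareStrings(const S1, S2: string): Integer;
begin
  if CaseSensitive then
    Result := CompareStr(S1, S2)   // ordinal, case-sensitive
  else
    Result := CompareText(S1, S2); // ordinal, case-insensitive
end;

end.

Using the descendant instead of TStringList keeps the calling code untouched while removing the locale lookup from the hot path.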
Another solution would be to switch to a newer version of the IDE, but we all know that this is another story...
Good evening.
I'm looking for a method to share data from my application system-wide, so that other applications could read that data and then do whatever they want with it (e.g. format it for display, use it for logging, etc). The data needs to be updated dynamically in the method itself.
WMI came to mind first, but then you've got the issue of applications pausing while reading from WMI. Additionally, I've no real idea how to set up my own namespace or classes, if that's even possible in Delphi.
Using files is another idea, but that could get disk-heavy, and it's a really awful method for real-time data.
Using a driver would probably be the best option, but that's a little too intrusive on the user's end for my liking, and I've no idea where to even start with it.
WM_COPYDATA would be great, but I'm not sure whether it's dynamic enough, or how heavy it would be on resources.
Using TCP/IP would be the best choice over a network, but it is obviously of little use when running on a single system with no networking requirement.
As you can see, I'm struggling to figure out where to go with this. I don't want to commit to one method only to find that it's not going to work out in the end. Essentially, I want something like a service or background process to record data and then allow other applications to read that data. I'm just unsure about the methods. I'd prefer NOT to need elevation/UAC to do this, but if need be, I'll settle for it.
I'm using Delphi 2010 for this exercise.
Any ideas?
You want some form of inter-process communication (IPC), typically built as a client-server architecture.
Using WM_COPYDATA is a very good idea. I have found it to be very fast, lightweight, and efficient on a local machine. It can also be broadcast across the system, to all applications at once (to be used with care, in case some application does not handle it correctly).
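As a minimal sketch of the WM_COPYDATA route (the window caption, receiver form and memo below are assumptions for illustration): the sender locates the receiver's top-level window and hands it a block of bytes, and the receiver copies the payload out before returning, because the buffer is only valid during the SendMessage call.

uses
  Windows, Messages, SysUtils, Classes, Forms, StdCtrls;

type
  // a normal VCL form in the receiving application, caption 'My IPC Receiver'
  TReceiverForm = class(TForm)
    Memo1: TMemo; // the received text is dropped here
  protected
    procedure WMCopyData(var Msg: TWMCopyData); message WM_COPYDATA;
  end;

// Sender side: find the receiver by its (hypothetical) caption and send a string
procedure SendTextToReceiver(const AText: string);
var
  Target: HWND;
  CDS: TCopyDataStruct;
begin
  Target := FindWindow(nil, 'My IPC Receiver');
  if Target = 0 then
    Exit;                                       // receiver not running
  CDS.dwData := 0;                              // user-defined tag
  CDS.cbData := Length(AText) * SizeOf(Char);   // payload size in bytes
  CDS.lpData := PChar(AText);                   // payload pointer
  // wParam is conventionally the sender's HWND; 0 here since this sender has no window
  SendMessage(Target, WM_COPYDATA, 0, LPARAM(@CDS));
end;

// Receiver side: copy the payload immediately, never keep the pointer
procedure TReceiverForm.WMCopyData(var Msg: TWMCopyData);
var
  Received: string;
begin
  SetString(Received, PChar(Msg.CopyDataStruct.lpData),
    Msg.CopyDataStruct.cbData div SizeOf(Char));
  Memo1.Lines.Add(Received);
  Msg.Result := 1; // acknowledge
end;

For structured data, serialise it (JSON, records, streams) into the byte block first; WM_COPYDATA itself only moves raw bytes between processes.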
You can also share memory using memory-mapped files. This is probably the fastest IPC option around for huge amounts of data, but synchronization is a bit more complex (if you want to share more than one buffer at once).
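A minimal named shared-memory sketch (the mapping name and record layout below are made up, and a real implementation would also serialise writers with a named mutex): two instances of this little console program share one counter.

program SharedBlockDemo;

{$APPTYPE CONSOLE}

uses
  Windows, SysUtils;

type
  PSharedData = ^TSharedData;
  TSharedData = record
    Counter: Integer;
    Text: array[0..255] of Char;
  end;

var
  MapHandle: THandle;
  Data: PSharedData;
begin
  // Page-file backed mapping: every process opening the same name sees the same block
  MapHandle := CreateFileMapping(INVALID_HANDLE_VALUE, nil, PAGE_READWRITE,
    0, SizeOf(TSharedData), 'Local\MySharedBlock'); // hypothetical mapping name
  if MapHandle = 0 then
    RaiseLastOSError;
  try
    Data := MapViewOfFile(MapHandle, FILE_MAP_ALL_ACCESS, 0, 0, SizeOf(TSharedData));
    if Data = nil then
      RaiseLastOSError;
    try
      Inc(Data^.Counter); // visible at once to every process that maps the block
      StrPLCopy(Data^.Text, Format('updated %d times', [Data^.Counter]), High(Data^.Text));
      Writeln('Counter is now ', Data^.Counter, ' - press Enter to quit');
      Readln; // keep the mapping alive so a second instance can read it
    finally
      UnmapViewOfFile(Data);
    end;
  finally
    CloseHandle(MapHandle);
  end;
end.

Run it twice and each instance sees the other's increment; the first CreateFileMapping call creates the block, later calls with the same name simply open it.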
Named pipes are good candidates for local IPC. They tend to be difficult to implement/configure over a network, due to security constraints on modern Windows versions (and they use TCP/IP for network communication anyway, so you might as well use TCP/IP directly instead).
My personal advice is that you should implement your data sharing with abstract classes, able to provide several implementations. You may use WM_COPYDATA first, then switch to named pipes, TCP/IP or HTTP in order to spread your application over a network.
For our open source client-server ORM, we implemented several protocols, including WM_COPYDATA, named pipes, HTTP, and direct in-process access. You can take a look at the source code provided for implementation patterns. Here are some benchmarks, to give you numbers from real implementations:
Client server access:
- Http client keep alive: 3001 assertions passed
first in 7.87ms, done in 153.37ms i.e. 6520/s, average 153us
- Http client multi connect: 3001 assertions passed
first in 151us, done in 305.98ms i.e. 3268/s, average 305us
- Named pipe access: 3003 assertions passed
first in 78.67ms, done in 187.15ms i.e. 5343/s, average 187us
- Local window messages: 3002 assertions passed
first in 148us, done in 112.90ms i.e. 8857/s, average 112us
- Direct in process access: 3001 assertions passed
first in 44us, done in 41.69ms i.e. 23981/s, average 41us
Total failed: 0 / 15014 - Client server access PASSED
As you can see, the fastest is direct access, then WM_COPYDATA, then named pipes, then HTTP (i.e. TCP/IP). The message was around 5 KB of JSON data containing 113 rows, retrieved from the server, then parsed on the client, 100 times (yes, our framework is fast :) ). For huge blocks of data (like 4 MB), WM_COPYDATA is slower than named pipes or HTTP over TCP/IP.
There are several IPC (inter-process communication) methods in Windows. Your question is rather general, so I can suggest memory-mapped files to store your shared data, plus message broadcasting via PostMessage to inform the other applications that the shared data has changed.
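To make the notification side concrete, here is a minimal sketch (the message name, TReaderForm and the refresh call are invented for illustration): every process registers the same message name, the writer broadcasts it after updating the shared block, and each reader reacts in a top-level window procedure.

uses
  Windows, Messages, Forms;

var
  WM_SHARED_DATA_CHANGED: UINT; // same id in every process that registers the same name

// Writer side: call this after the shared data has been updated
procedure NotifyDataChanged;
begin
  PostMessage(HWND_BROADCAST, WM_SHARED_DATA_CHANGED, 0, 0);
end;

// Reader side: override WndProc in any top-level form
// (declared as: procedure WndProc(var Message: TMessage); override;)
procedure TReaderForm.WndProc(var Message: TMessage);
begin
  if (WM_SHARED_DATA_CHANGED <> 0) and (Message.Msg = WM_SHARED_DATA_CHANGED) then
    Caption := 'shared data changed' // hypothetical: re-read the memory-mapped view here
  else
    inherited WndProc(Message);
end;

initialization
  WM_SHARED_DATA_CHANGED := RegisterWindowMessage('MyApp.SharedDataChanged');

Note that broadcast messages are only delivered to top-level windows, so the handler has to live in a form (or another top-level window) rather than a child control.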
If you don't mind running another process, you could use one of the NoSQL databases.
I'm pretty sure that a lot of them won't have Delphi drivers, but some of them have REST drivers and hence can be driven from pretty much anything.
Memcached is an easy way to share data between applications: it is an in-memory key-value store for small chunks of arbitrary data (strings, objects).
A Delphi 2010 client for Memcached can be found on google code:
http://code.google.com/p/delphimemcache/
related question:
Are there any Caching Frameworks for Delphi?
Googling for 'delphi interprocess communication' will give you lots of pointers.
I suggest you take a look at http://madshi.net/, especially MadCodeHook (http://help.madshi.net/madCodeHook.htm)
I have good experience with the product.
We have a C/S application written entirely in Delphi (client and server, or middleware if you want).
For the client part we use Indy.
For the server we use DXSock.
Since DXSock has been dead for a while, we are investigating alternatives for the server part.
I want to hear some comments about the best Server Socket alternative component for Delphi.
The current system usually has tens of permanent connections, each working in its own thread, but there could be hundreds in the future (this should be improved to a thread pool if possible).
If you want the best possible performance, you'd have to use sockets in non-blocking mode, or use I/O completion ports. IPWorks is implemented like that, as are IOCP-based libraries. As far as I can tell, Indy and Synapse don't implement them (at least officially).
We used completion ports and a thread pool in our open source SynCrtSock unit, used in our Synopse SQLite3 framework.
Here are some benchmarks of this solution, which works from Delphi 6 up to Delphi XE. I'm not claiming it is the "best component", but it is a working and speedy one (every request is about 4 KB of JSON data):
Http client keep alive (i.e. one HTTP/1.1 client connection kept alive during requests):
first in 7.87ms, done in 153.37ms i.e. 6520/s, average 153us
Http client multi connect (i.e. one new HTTP/1.0 client connection created for each request - this one uses completion ports and a thread pool):
first in 151us, done in 305.98ms i.e. 3268/s, average 305us
For speed comparison, here are other communication protocols available in our framework:
Named pipe access:
first in 78.67ms, done in 187.15ms i.e. 5343/s, average 187us
Local window messages:
first in 148us, done in 112.90ms i.e. 8857/s, average 112us
Direct in process access:
first in 44us, done in 41.69ms i.e. 23981/s, average 41us
We use the HTTP/1.1 protocol over TCP/IP because there is very little overhead compared to plain TCP/IP, it is a protocol that firewalls and proxies handle well, and it allows our framework to be used by an AJAX application, even though its main purpose is to serve Delphi clients.
IMHO there is no "best server socket alternative component for Delphi"; it depends on the purpose of your server application. The main bottleneck will be the Windows kernel itself. Perhaps direct access to the HTTP kernel-mode driver (http.sys) of Windows could help.
Consider using a dedicated, optimized server instead of a Delphi one, like lighttpd or Cherokee using FastCGI to hand the requests to a Free Pascal (or CrossKylix) application under Linux. I guess this will give the best performance possible.
I use Indy components for commercial server-side work and the component set is pretty solid (version 9 or 10). My servers handle millions of connections per day with no issues.
I used DXSock many moons ago. The author was always optimizing, but never seemed to finish it. He does seem to have another version out.
If you want commercial support, then I'd recommend IPWorks from nSoftware.
Actually DXSock is not dead; v6.1 was just released. The web hosting company we used to use in Tennessee lost the domain, so only customers who have kept their subscription renewed annually have received DXSock 5.0, 6.0 and 6.1.
Indy CANNOT support more than 2,000 concurrent connections on 32-bit Windows, as Chad and crew use TThread, which carries the default 1 MB of stack per thread/socket connection; 2,000 x 1 MB is roughly 2 GB of stack space alone, which exhausts the address space of a 32-bit process. DXSock implements a 0-bytes-per-connection model (unless you define otherwise) and can handle over 50,000 concurrent connections on Windows, Linux, Mac, Pi, etc.
Ozz Nixon - ozznixon#bpdx.com if you want more details on 6.1
Author of DXSock
Co-author of Winshoes, which became Indy.
I have a Delphi (hence 32-bit) CGI app running under IIS7 on a Windows 2008 64-bit server that has 24 GB of RAM. The web service runs fine for a few days at a time (sometimes a few weeks) and then suddenly starts reporting "Not enough storage available to process this command."
Now, I've seen this before in regular Windows apps and it normally means that the machine ran out of memory. In this instance, the server shows that only 10% of physical RAM is in use. On top of that, Task Manager shows only one instance of the CGI executable, with 14 MB allocated. And once the error starts, it keeps occurring regardless of actual server load. There is no way this thing is really running out of memory.
So I figured there is probably some maximum memory setting in IIS7 somewhere, but I couldn't find anything of the sort. Restarting the web server makes the problem go away until the next time, but that is probably not the best strategy.
Any ideas?
It might be an IRPStackSize issue as discussed here. And the particular cause mentioned in that article is not the only one, apparently.
The CGI does not seem to ever unload under IIS7, even though it seems to work under IIS6. This seems to be a problem with the CGI support on IIS7.
Has anyone successfully talked profibus from a .NET application?
If you did, what device/card did you use to accomplish this, what was the application, and did you use any kind of preexisting or available code?
We've not used Profibus, but we have used DeviceNet (another CAN-based protocol), Ethernet/IP and ControlNet, which all present similar challenges.
We've been doing this since the late 1990s and therefore rely mainly on our own code running on off-the-shelf hardware. The companies that have shown longevity during that period, as far as I remember, are:
- AnyBus (HMS, www.anybus.com): we've recently started using their gateway products, as we can place fieldbus interfaces close to the hardware and then communicate over normal Ethernet (usually using Ethernet/IP, www.odva.org). This has the advantage of separating hardware and PC using only a network cable. We wrote the Ethernet/IP .NET classes ourselves, as nothing much was on the market at the time; I'm sure a quick Google search would find suitable class libraries.
- SST (www.mysst.com): they have had fieldbus interfaces for more than a decade. The last SST card we used for DeviceNet still only had VB6 sample code. A good selection of fieldbus support and different form factors, e.g. PC/104, PCI, PCMCIA.
- Beckhoff/Wago (www.beckhoff.com, www.wago.com): we typically use Beckhoff for the I/O more than for the interface cards, but again a company that has been around a long time. They also have products that expose data via OPC (another way for you to get I/O information without directly communicating with the hardware/device drivers).
I suggest not using OPC interfaces to the hardware directly (it's OK for communication along the PC (.NET) -> PLC -> Profibus path), as you need to ensure that the control system responds to loss of control from your .NET application. I'm assuming that you need a Profibus master here (not a slave), so as long as your control system is intrinsically fail-safe, loss of communication should mean the control system enters an "Idle" state and most of the I/O returns to the fail-safe state.
We also try to ensure that we do not put safety-related code in .NET. Most of our .NET code is user interface on top of a PLC, but in some places we do control the fieldbus directly and make sure hardware interlocks prevent unsafe operation, either using safety switches/relays or a small PLC whose only task is interlocking. And above all, make the system fail-safe: loss of comms from the .NET code should shut the automation down to the fail-safe state.
We have used Steeplechase to connect our Profibus to our automated pick system.
http://www.phoenixcontact.com/automation/32131_31909.htm
Try this: http://libnodave.sourceforge.net