QTP for load test?

I have a question about QTP 11. Can QTP 11 be used for load/performance testing, similar to LoadRunner in Performance Center, or is QTP for functional testing only?

AFAIK, QTP is generally not used for load testing, though it is possible to measure the transaction time for a business scenario using Start and End Transaction. You can execute QTP scripts that in turn form part of a load test, but for serious load testing you need a dedicated tool such as HP LoadRunner. Both tools come from HP (which itself suggests that HP intends a separate tool for load testing), and the two can be used together for load testing.
Here is the link.

QTP is used as a graphical virtual user in the LoadRunner model. This requires a single OS instance per virtual user. GUI virtual users were the first virtual user type for LoadRunner in version 1, which ran multiple instances of XRunner. Up until version 4, XRunner was the GUI virtual user of choice. From versions 4 to 6, graphical virtual users were available on both UNIX and Windows systems, using XRunner on UNIX and WinRunner on Windows. By the time of version 6, API virtual users had replaced GUI virtual users for primary load generation.
From version 8 forward, QuickTest Professional was available as a graphical virtual user type. With version 11 the default GUI virtual user type is QTP; WinRunner is no longer supported.
So yes, the two can be integrated. There is a long history of using the functional automated testing tools within the Mercury/HP family with the performance testing tools.
The use of graphical performance testing tools fell out of favor for a while during the era of the thin web client. As web clients have gotten thicker, with the ability to run Javascript, C# and other technologies, the need to measure the difference between API level and GUI level has come back into vogue. In addition to the traditional GUI Virtual user, HP also offers TruClient. TruClient's main advantage is that you can run multiple TruClient virtual users per OS instance, in contrast to the GUI Virtual user where you can execute only a single virtual user per OS instance (on Microsoft Windows).
Talk to your VAR. GUI virtual users run about 1k per virtual user in bundles of five or more. It is not anticipated that you will ever run a full performance test using only graphical virtual users.

You can, of course, write a script in QTP for, say, logging into a website and run that script via LoadRunner.
But first, the timing will not be exact, as QTP adds its own execution overhead to the response time.
Second, you will be able to simulate only one user per machine, whereas LoadRunner simulates hundreds of users at a time.

No. UFT alone cannot perform a load test. All you can do is measure transaction times, that's all. The purpose of UFT is to automate GUI and API testing. For load testing, you need to use LoadRunner.

UFT is basically for functional testing; that said, you can do basic performance measurements with UFT: transaction times and page loads. You can get the time taken to complete any action, or the time taken to load pages/images.
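The transaction-time measurement described above can be sketched generically. This is plain Python for illustration only, not QTP/UFT's actual Start/End Transaction API; the names are made up:

```python
import time
from contextlib import contextmanager

@contextmanager
def transaction(name, results):
    """Measure wall-clock time of a business step, similar in spirit to
    QTP/UFT Start/End Transaction (generic Python, not the QTP API)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        results[name] = time.perf_counter() - start

# Wrap any "business step" whose duration you want to report.
timings = {}
with transaction("login", timings):
    time.sleep(0.05)  # stand-in for the real UI action
print(f"login took {timings['login']:.3f}s")
```

This only reports elapsed time for a step; as the answers note, it says nothing about behaviour under concurrent load.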


WUAPI: List updates for different Windows editions

Using the Windows Update Agent API, one can list Windows updates. E.g.:
Set UpdateSession = CreateObject("Microsoft.Update.Session")
Set UpdateSearcher = UpdateSession.CreateUpdateSearcher()
Set SearchResult = UpdateSearcher.Search("IsInstalled=0 OR IsInstalled=1")
Set Updates = SearchResult.Updates
For I = 0 To Updates.Count - 1
    Set Update = Updates.Item(I)
    WScript.Echo (I + 1) & "> " & Update.Title
Next
Since I am querying above for both installed and non-installed updates, I assume the result lists all available updates for my current Windows edition/build. Is that correct?
My point now is: can I query for a different edition too?
For example listing Windows Server 2016 updates from a Windows 10 system.
The idea is to easily provision a developer Windows virtual machine, taking the ISO and the most recent cumulative update.
To resurrect a dead question!
No, the Windows Update client is entirely predicated on the current machine. I've inspected the web traffic in a bid to reverse-engineer the update server traffic, but I got nowhere. YMMV.
The typical approach would be to have a build server create an image from the base ISO, start it up, apply updates, then shut it back down and create a master image from it. You would do this on a regular basis, so that whenever a new VM is provisioned it is no more than x days behind on updates. E.g. you could do it nightly.
Check out a tool called Packer. (This is not an endorsement, just an example.)
If you go down this road, you also open doors for yourself to do things such as run security scans against the image or install convenience packages. The more you can push into an image, the fewer tasks you have in your daily workload.
You mentioned that this is for developer machines. Have you considered dev containers in VS Code?

Do I need to open and close a Segger JLink connection when using the DLLs to access multiple cores from a single PC process with the licensed SDK?

Segger allow you to obtain a license for using their JLink SDK.
I am using it to create tooling to allow examination of the state of a new (not yet commercially available) SoC microprocessor that contains multiple cores (multiple ARM cortex M cores and DSPs) with SWD debug H/W.
Segger include a GDB server in their normal software download which can definitely access any single core from a single process.
I do not think Segger makes their SDK UM08002 documentation and code samples public but it demonstrates being able to access a single core (which works fine for me).
The SDK is really just a set of headers and documentation that let you call into the already-distributed SEGGER JLink DLLs (the DLLs in the normal software download prompt you to auto-update), so there is no magic in the SDK itself; but it is licensed, so I can't post any of it here.
What I do not understand is what dll calls must be made to access multiple cores sequentially from within a single process using the SDK.
Do I:
Disconnect and reconnect the SEGGER every time I wish to access a different core?
Can I switch between cores somehow without opening and closing the JLink connection?
Can I leave the cores halted when switching between cores, so the device doesn't 'run off' while I'm looking elsewhere?
Will the core HW & SW debug points remain set and get triggered while I'm looking at a different core, allowing me to discover this when I look back at a hopefully now-halted core? (This may be core-implementation dependent, of course.)
Short answer
No, sort of.
Long answer
The Segger JLink DLL API is a single-target-per-process API, which means you can't start talking to one core while the process-global state held by the DLL is configured to talk to another core. To select the target core, you have to inject appropriate initialisation scripts written in Segger's 'not quite simple C' scripting language. To change the scripts and ensure they have all run appropriately, you need to close down your connection to the target, set the new scripts, and then reopen a connection to the new target core.
It is possible to adjust some details on the fly by executing some commands in a 'key=value' language but you can't do everything you might need to that way.
Recommended approach
What you can do is have multiple processes sharing a JLink.
Each process is initialised to a specific target core, and the processes then share the JLink automatically on a call-by-call basis. For compound, multiple-API-call operations you need to serialise the operations using the DLL API's JLinkARM_Lock() and JLinkARM_Unlock(), or processes can potentially 'jump in' during these compound operations and make their behaviour undefined or unreliable.
Then, to communicate with multiple target cores, you do some inter-process communication from your master process to your spawned JLink operation processes.
Remember to include keep-alives in your inter-process communication, so that crashes or debugging sessions don't leave a plethora of orphaned or silent processes.
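The locking requirement above can be illustrated without the real hardware. The sketch below stands in for JLinkARM_Lock()/JLinkARM_Unlock() (the calls named above) with a Python lock, and shows why a compound read-modify-write must hold the lock for the whole operation. Everything here is a stand-in for illustration, not the Segger API:

```python
import threading
from contextlib import contextmanager

# Stand-in for the DLL's global lock; with the real SDK you would call
# JLinkARM_Lock()/JLinkARM_Unlock() around the compound operation.
_jlink_lock = threading.Lock()

@contextmanager
def jlink_locked():
    """Hold the J-Link for a compound, multi-call operation so other
    clients can't interleave their own API calls in the middle."""
    _jlink_lock.acquire()      # ~ JLinkARM_Lock()
    try:
        yield
    finally:
        _jlink_lock.release()  # ~ JLinkARM_Unlock()

# Hypothetical compound operation: read a word, modify it, write it back.
memory = {0x2000_0000: 0}

def increment_word(addr):
    with jlink_locked():
        value = memory[addr]      # ~ read via the DLL
        memory[addr] = value + 1  # ~ write via the DLL

threads = [threading.Thread(target=increment_word, args=(0x2000_0000,))
           for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(memory[0x2000_0000])  # 100: no lost updates
```

Without the lock, two clients could both read the old value between each other's read and write, losing an update; the same interleaving hazard applies to any multi-call sequence against the shared DLL.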

How to build a virtual printer?

I'm trying to build a virtual printer.
There are already some answers like this and this.
However, my requirement is more specific. I just want to create a virtual printer that can be added to the system and accessed from any application. On clicking the print command, a dialog that looks like a real printer's pops up, and printing generates a PDF. Then some further actions, like pushing the PDF to my server, are performed.
Do I need to dig into Windows Driver Kit? Or is there any free SDK for this?
Thanks.
Not sure if this question is still relevant to you, but you'd probably want to think about something like this:
Use the WDK (Windows Driver Kit) to create a Unidrv UI plugin. This will allow you to specify UI during the print (for your printer dialogue). The reason why you'd want to show UI here is because it's one of the only printer driver components that runs in the user session (the same process as the printing application). The XPS pipeline and port monitor are both session 0.
If you want to stick to MS convention, you'll do the spool file to PDF conversion in the render filter of the XPS Filter pipeline (this is if you're using an XPSDrv driver). The filter pipeline is where you have the opportunity to modify the XPS spool data coming in and in the final filter, convert it to your output document type (PDF in your case).
To do post print processing, you might want to consider creating a port monitor (again with the WDK) and kicking off a new process to do the post print processing after the port monitor writes out the print output to disk.
The only problem with this approach is that you can't use port monitors in Version 4 drivers (the new driver type in Windows 8). Version 3 drivers still work in Windows 8, but I guess they'll be phased out eventually.
Sorry it's probably not very obvious, but as I say, it's a high level overview (and unfortunately driver development is still very complex beyond a simple print to file). Version 4 printer drivers are becoming a lot easier to develop, but unfortunately with the removal of port monitor support and other improvements, it makes it a lot harder to develop anything requiring post processing.
[DISCLAIMER: I'm associated with the Mako SDK R&D team]
I know you asked for a free SDK, and unfortunately I don't know of anything suitable, but our company offers a Virtual Printer Platform (SDK) that would fit your needs (it prints to PDF and supports post-print processing). You can find more information at the Mako SDK website.
Hope this helps a bit anyway. I know printer driver development can be very confusing at times!
After reading up and doing a lot of research, with the aim of setting up something like RedMon plus a printer SDK, I completed the project using this SDK: http://www.novapdf.com/pdf-sdk.html
This solution, however, works on Windows only.
[I am not affiliated with novaPDF]
I have investigated an OS X version; however, this will be a different build. You can probably set something up using this method: http://www.jms1.net/osx-pdf-services.shtml [I have not yet tried this]

Cloud computing: Learn to scale server up/down automatically

I'm really impressed with the power of cloud computing when it comes to the possibility to scale up and down your facilities depending on your load.
How can I shift my paradigm and learn to write my applications that way? "Write once and forget" (regardless of future load) would be the ideal.
How can I practice my skills in that area?
Set up a virtualization environment where I can add more VMs to a private cloud (via the command line?) based on some smart algorithm that forecasts the load over some period of time?
Ideally I want to practice it without buying actual Cloud computing services and just on my hardware.
The only thing I want to practice here is scaling app/web roles and/or message-queue systems when the current workers have too many jobs in the queue. So let's rule database scaling out of the question's scope as too big a topic.
One option I will throw out is to use a native Cloud execution framework. You might look at CloudIQ Platform. One component is CloudIQ Engine. It allows you to develop cloud native apps in C/C++, Java and .NET. You get the capabilities of scale up by simply adding workers to your cloud. The framework automatically distributes your applications to the new machine(s), and once installed, will begin sending work to them as requests come in. So in effect the cloud handles your queueing issue for you.
Check out the Download and Community links for more information.
You should try AWS: Amazon offers a free tier that gives you storage, messaging, and micro instances (Linux only). You can start developing small try-outs without paying. Writing an application that scales isn't that hard: try to break your flow into small, concurrent tasks. Client-server applications are even easier: use a load balancer to launch/terminate servers on demand.
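The queue-driven scaling the question asks about reduces to a small policy function: look at the backlog per worker and decide whether to add or remove a worker. A minimal Python sketch follows; the thresholds and names are illustrative, not any cloud provider's API:

```python
def scaler_decision(queue_len, workers, high=10, low=2, min_w=1, max_w=8):
    """Threshold autoscaling policy: grow the pool when the backlog per
    worker exceeds `high`, shrink it when below `low`. In a real system
    this would run periodically and the return value would drive
    VM/worker provisioning (e.g. via your cloud's command-line tools)."""
    per_worker = queue_len / max(workers, 1)
    if per_worker > high and workers < max_w:
        return workers + 1
    if per_worker < low and workers > min_w:
        return workers - 1
    return workers

print(scaler_decision(50, 2))  # 25 jobs/worker -> scale up to 3
print(scaler_decision(1, 4))   # 0.25 jobs/worker -> scale down to 3
print(scaler_decision(20, 4))  # 5 jobs/worker -> hold at 4
```

Real autoscalers add cooldown periods and hysteresis so the pool doesn't oscillate, but the core decision is this simple, which makes it easy to practice on local VMs without buying cloud services.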

Does anyone know about issues between Citrix and Delphi 2007 applications? (And perhaps other development languages?)

The situation is simple. I've created a complex Delphi application which uses several different techniques. The main application is a Win32 module, but a few parts are developed as .NET assemblies. It also communicates with a web service or retrieves data from a specific website. It keeps most of its user data inside an MS Access database, with some additional settings in the Registry. In memory, all data is kept in an XML document, which is occasionally saved to disk as a backup in case the system crashes (thus allowing the user to recover his data). There is also some data in XML files for read-only purposes. The application also executes other applications and waits for them to finish. All in all, it's a pretty complex application.
We don't support Citrix with this application, although a few users do use this application on a Citrix server. (Basically, it allows those users to be more mobile.) But even though we keep telling them that we don't support Citrix, those customers are trying to push us to help them with some occasional problems that they tend to have.
The main problem seems to be an occasional random exception that pops up on Citrix systems, never at the same location, and often it looks related to memory problems. We have plenty of error reports already and there are just too many different errors, so I know solving all of them will be complex.
So I would like to go a bit more generic and just want to know about the possible issues a Delphi (2007) can have when it's run on a Citrix system. Especially when this application is not designed to be Citrix-aware in any way. We don't want to support Citrix officially but it would be nice if we can help those customers. Not that they're going to pay us more, but still...
So does anyone know some common issues a Delphi application can have on a Citrix system?
Does anyone know about common issues with Citrix in general?
Is there some Silver Bullet or Golden Hammer solution somewhere for Citrix problems?
Btw. My knowledge about Citrix is limited to this Wikipedia entry and this website... And a bit I've Googled...
There were some issues in the past with published Delphi applications on Citrix having no icon in the taskbar. I think this was resolved by MainFormOnTaskbar (available in D2007 and higher). Apart from that, there's not much difference between Terminal Server and Citrix (from the application's perspective); the most important things you need to account for are:
Users are NEVER administrators on a Terminal or Citrix server, so they have no rights to the Local Machine part of the registry, the C drive, the Program Files folder, and so on.
It must be possible for multiple users on the same system to start your application concurrently.
Certain folders, such as the Windows folder, are redirected to prevent application issues; this also means that APIs like GetWindowsDirectory do not return the real Windows folder but the redirected one. Note that this behaviour can be disabled by setting a particular flag in the PE header (see delphi-and-terminal-server-aware).
Sometimes multiple servers are used in a farm, which means your application can run on any of them; the user is redirected to the least busy server at login (load balancing). Therefore do not use any local database to store things.
If you use an external database, middleware, or an application server, note that multiple users will connect with the same computer name and IP address (certain Citrix versions can use virtual IP addresses to address this).
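The PE-header flag mentioned above is the IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE bit (0x8000) in the optional header's DllCharacteristics field. As a hedged illustration (offsets per the PE/COFF spec; the function name is my own), you can check whether an EXE has it set:

```python
import struct

TS_AWARE = 0x8000  # IMAGE_DLLCHARACTERISTICS_TERMINAL_SERVER_AWARE

def is_terminal_server_aware(pe_bytes):
    """Check the DllCharacteristics flag in a PE image's optional header.
    Offsets follow the PE/COFF spec: e_lfanew lives at 0x3C, followed by
    the 4-byte PE signature, a 20-byte COFF header, and then the optional
    header, whose DllCharacteristics field sits at offset 70 (the same
    offset for PE32 and PE32+)."""
    if pe_bytes[:2] != b"MZ":
        raise ValueError("not an MZ executable")
    e_lfanew = struct.unpack_from("<I", pe_bytes, 0x3C)[0]
    if pe_bytes[e_lfanew:e_lfanew + 4] != b"PE\x00\x00":
        raise ValueError("PE signature not found")
    opt_header = e_lfanew + 4 + 20
    dll_chars = struct.unpack_from("<H", pe_bytes, opt_header + 70)[0]
    return bool(dll_chars & TS_AWARE)
```

Usage would be something like `is_terminal_server_aware(open(path, "rb").read())` against your built EXE, to confirm whether the linker actually applied the flag.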
Many of our customers use our Delphi applications on Citrix. Generally speaking, it works fine. We had printing problems with older versions of Delphi, but this was fixed in a more recent version of Delphi (certainly more recent than Delphi 2007). However, because you are now running under terminal services, there are certain things which will not work, with or without Citrix. For example, you cannot make a local connection to older versions of InterBase, which use a named pipe without the GLOBAL modifier. Using DoubleBuffered would also be a really bad idea. And so on. My suggestion is to look for advice concerning Win32 apps and Terminal Services, rather than looking for advice on Delphi and Citrix in particular. The one issue which is particular to Citrix that I'm aware of is that you can't count on having a C drive available. Hopefully you haven't hard-coded any drive letters into your code, but if you have you can get in trouble.
Generally speaking, your application needs to be compatible with MS Terminal Services in order to work with XenApp. My understanding is that .NET applications are Terminal Services-compatible, and so by extension should also work in a Citrix environment. Obviously, as you're suffering some problems, it's not quite that simple, however.
There's a testing and verification kit available from http://community.citrix.com/citrixready that you may find helpful. I would imagine the Test Kit and Virtual Lab tools will be of most use to you. The kit is free to use, but requires sign-up.
Security can be an issue. If sensitive folders are not "sandboxed" (See Remko's discussion about redirection), the user can break out of your app and run things that they shouldn't. You should probe your app to see what happens when they "shell out" of your app. Common attack points are CHM Help, any content that uses IE to display HTML, and File Open/Save dialogs.
ex: If you show .chm help, the user can right-click within a help topic, View Source. That typically opens Notepad. From there, they can navigate the directory structure. If they are not properly contained, they may be able to do some mischief.
ex: If they normally don't have a way to run Internet Explorer, and your app has a clickable URL in the about box or a "visit our web site" in the Help menu, voila! they have access to the web browser. If unrestrained, they can open a command shell by navigating to the windows directory.
