Using the Windows Update Agent API, one can list Windows updates. E.g.:
Set UpdateSession = CreateObject("Microsoft.Update.Session")
Set UpdateSearcher = UpdateSession.CreateUpdateSearcher()
Set SearchResult = UpdateSearcher.Search("IsInstalled=0 OR IsInstalled=1")
Set Updates = SearchResult.Updates
For I = 0 To SearchResult.Updates.Count - 1
    Set Update = SearchResult.Updates.Item(I)
    WScript.Echo I + 1 & "> " & Update.Title
Next
Since I am querying above for both installed and not-yet-installed updates, I assume the result lists all available updates for my current Windows edition/build. Is that correct?
My question now is: can I query for a different edition too?
For example, listing Windows Server 2016 updates from a Windows 10 system.
The idea is to easily provision a developer Windows virtual machine, starting from the ISO and applying the most recent cumulative update.
To resurrect a dead question!
No, the Windows Update client is entirely predicated on the current machine. I've inspected the web traffic in a bid to reverse-engineer the update server protocol, but I got nowhere. YMMV.
The typical approach would be to have a build server create an image from the base ISO, start it up, apply updates, then shut it back down and create a master image from it. You would do this on a regular basis, so that whenever a new VM is provisioned it is no more than x days behind on updates. E.g. you could do it nightly.
Check out a tool called Packer. (This is not an endorsement, just an example.)
If you go down this road, you also open doors for yourself to do things such as run security scans against the image or install convenience packages. The more you can push into an image, the fewer tasks you have in your daily workload.
You mentioned that this is for developer machines. Have you considered dev containers in VS Code?
We are using Foxglove as a visualization tool for our ROS2 Foxy system on Ubuntu 20, but we are running into bandwidth issues with the rosbridge websocket. We plan to switch to the foxglove_bridge websockets, since they advertise performance improvements, but are waiting until we migrate to ROS Humble.
When a client initiates a subscription to a topic, it can also pass along options to the server to throttle the message rate for each topic.
Where do I change those options? They must be set within the client, but I couldn't find anything within the GUI to set them.
I'm running foxglove-studio from binaries installed through apt. The only source code I have for the foxglove-studio is a few custom extension panels.
My temporary fix is to filter out the topics I want to throttle and hard-code the throttle_rate option within the rosbridge server before the options are passed to the subscriber handler.
This will work for the demo we have coming up, but I'm searching for a better solution.
Foxglove Studio currently uses a hard-coded set of parameters for creating the roslib Topic object, and so does not support throttling. To achieve this you'd currently need to do one of the following:
patch the Studio source and build it yourself
patch the server as you've currently done, or
create a separate downsampled topic (e.g. using topic_tools/throttle).
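For reference, the throttling options live in the rosbridge subscribe request itself, so any client that exposes them can use them. Below is a minimal sketch using the roslibpy client rather than Foxglove Studio; the host, topic name and message type are illustrative placeholders, and the throttle_rate/queue_length keyword arguments mirror the roslibjs options.

# Minimal sketch: subscribe through rosbridge with server-side throttling.
# Assumes a rosbridge websocket server on localhost:9090 and the roslibpy package.
import time
import roslibpy

client = roslibpy.Ros(host='localhost', port=9090)
client.run()

# throttle_rate is the minimum interval between messages in milliseconds
# (enforced by the rosbridge server); queue_length bounds server-side buffering.
listener = roslibpy.Topic(client, '/camera/image_raw', 'sensor_msgs/msg/Image',
                          throttle_rate=500, queue_length=1)
listener.subscribe(lambda message: print('received a frame (at most ~2 Hz)'))

try:
    while True:
        time.sleep(1)
except KeyboardInterrupt:
    listener.unsubscribe()
    client.terminate()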
I have a data processing application which is updated on a regular basis. This application has a bunch of dependencies which are also updated every now and then. However, different versions of the software (+dependencies) might produce different results (this is expected). The application runs on a remote computer and can be accessed through a Web page. Every time a user uses the Web page to do some processing, they also choose which version of the software they want to use.
Now I am trying to decide on the best way of keeping track of different software (+dependencies) versions. The simplest way, of course, is to just compile and install each version of my software and its dependencies in a different folder, and then, based on the request the user sends, select the appropriate folder. However, this sounds very clunky to me. So I thought I could use Docker to keep track of the different software versions. Do you think that is a good idea? If yes, what is the most appropriate thing to do every time I have a new version of the software (and/or dependencies): 1) create a new container from scratch with the new version (and end up having multiple containers), or 2) update the existing container and commit the changes? (I suppose I can access the older commits of the container, right?)
PS: Keep in mind that the reason I looked into Docker, and not a simple virtual machine solution, is that the application I am running is high-performance GPU-based software.
Docker is a reasonable choice. Your repository would contain all of the app versions you wish to publish. Note that you will only realize storage savings if you organize the resulting app filesystem into layers, of which the lower layers are the least likely to change between versions. This will keep the storage requirements to a minimum.
Then you have to decide how you will process each job. A robust (but complex) solution would be to have one or more API containers which take in processing jobs from your user and "dole" them out to worker containers (one or more from each release version). This would provide the lowest response latency and be non-blocking. You can look at different service discovery models to see how your "worker" containers can register with your "manager" containers. This is probably more than you'd like to bite off, but consider using a good key-value database (another container!) like etcd or a 3rd party service discovery tool like zookeeper/eureka/consul.
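To make the service-discovery part concrete, here is a rough sketch of a worker container registering itself in etcd under a version-specific key with a TTL lease, using the python-etcd3 client; the key layout, address and version string are made-up placeholders. The manager/API container could then list the live workers for a requested release with get_prefix('/workers/1.4.2/').

# Rough sketch: a worker container announces itself in etcd, kept alive by a
# TTL lease so the entry disappears automatically if the worker dies.
# Assumes the python-etcd3 package and an etcd container reachable at etcd:2379.
import time
import etcd3

APP_VERSION = '1.4.2'                  # placeholder release identifier
WORKER_ADDR = 'worker-142-a:8000'      # placeholder address of this worker

client = etcd3.client(host='etcd', port=2379)

# Register under /workers/<version>/<addr>; the 30 s lease is our liveness TTL.
lease = client.lease(ttl=30)
client.put('/workers/%s/%s' % (APP_VERSION, WORKER_ADDR), WORKER_ADDR, lease=lease)

while True:
    lease.refresh()                    # heartbeat: keep the registration alive
    time.sleep(10)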
A much simpler model would have a single API container with one each of the release containers created, but not started. The API container would start, direct, and then stop the appropriate release container. You would incur the startup latency, but this is the least resource intensive... and easiest to manage. But this is a blocking operation.
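As an illustration of that simpler blocking model, the API process could drive Docker directly through the Docker SDK for Python. The image tag, job command and volume paths below are placeholders, and the GPU request assumes the NVIDIA container toolkit is set up on the host.

# Rough sketch of the "one container per job" flow: start the container image
# matching the requested release, wait for it to finish, and return its output.
# Assumes the 'docker' package (Docker SDK for Python) and a local Docker daemon.
import docker

def run_job(version, input_path):
    client = docker.from_env()
    return client.containers.run(
        image='myapp:' + version,                 # e.g. myapp:2.3.1 (placeholder)
        command=['process', input_path],          # placeholder job command
        remove=True,                              # clean up the container afterwards
        volumes={'/data': {'bind': '/data', 'mode': 'rw'}},
        # Expose the host GPU(s) to the container (needs nvidia-container-toolkit):
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[['gpu']])],
    )

print(run_job('2.3.1', '/data/job-42/input.dat'))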
Somewhere in the middle, but less user friendly, is to have each release container running but listening on a different host port (the app always sees the same port). The user would connect to the port which is servicing the desired release of the app. You'd have to provide some sort of index to make this useful.
One of the features of Erlang (and, by extension, Elixir) is that you can do hot code swapping. However, this seems to be at odds with Docker, where you would need to stop your instances and start new ones with new images holding the new code. This essentially seems to be what everyone does.
That being said, I also know that it is possible to use one hidden node to distribute updates to all other nodes over the network. Of course, put just like that, it sounds like asking for trouble, but...
My question would be the following: has anyone tried, and achieved with reasonable success, to set up a Docker-based infrastructure for Erlang/Elixir that allows hot code swapping? If so, what are the dos, don'ts and caveats?
The story
Imagine a system to handle mobile phone calls or mobile data access (that's what Erlang was created for). There are gateway servers that maintain the user session for the duration of the call or the data access session (I will call it the session going forward). Those servers have an in-memory representation of the session for as long as the session is active (the user is connected).
Now there is another system that calculates how much to charge the user for the call or the data transferred (call it PDF, the Policy Decision Function). Both systems are connected in such a way that the gateway server creates a handful of TCP connections to the PDF, and it drops user sessions if those TCP connections go down. The gateway can handle a few hundred thousand customers at a time. Whenever there is an event that the user needs to be charged for (the next data transfer, another minute of the call), the gateway notifies the PDF about the fact and the PDF subtracts a specific amount of money from the user account. When the user account is empty, the PDF notifies the gateway to disconnect the call (you've run out of money, you need to top up).
Your question
Finally, let's talk about your question in this context. We want to upgrade a PDF node, and the node is running on Docker. We create a new Docker instance with the new version of the software, but we can't shut down the old version (there are hundreds of thousands of customers in the middle of their calls; we can't disconnect them). But we need to move the customers somehow from the old PDF to the new version. So we tell the gateway node to create any new connections with the updated node instead of the old PDF. Customers can be chatty, and some of them may have long-running data connections (downloading a Windows 10 ISO), so the whole operation takes 2-3 days to complete. That's how long it can take to upgrade from one version of the software to another in case of a critical bug. And there may be dozens of servers like this one, each handling hundreds of thousands of customers.
But what if we used the Erlang release handler instead? We create the relup file with the new version of the software. We test it properly and deploy it to the PDF nodes. Each node is upgraded in place: the internal state of the application is converted and the node is running the new version of the software. But most importantly, the TCP connection with the gateway server has not been dropped. So customers happily continue their calls, or keep downloading the latest Windows ISO, while we are upgrading the system. All is done in 10 seconds rather than 2-3 days.
The answer
This is an example of a specific system with specific requirements. Docker and Erlang's release handling are orthogonal technologies. You can use either or both; it all boils down to the following:
Requirements
Cost
Will you have enough resources to test both approaches predictably, and enough patience to teach your Ops team so that they can deploy the system using either method? What if the testing facility costs millions of pounds (because of the required hardware) and can only exercise one of those two methods at a time (because the test cycle takes days)?
The pragmatic approach might be to deploy the nodes initially using Docker and then upgrade them with the Erlang release handler (if you need to use Docker in the first place). Or, if your system doesn't need to be available during the upgrade (unlike the example PDF system, which does), you might just opt for always deploying new versions with Docker and forget about release handling. Or you may as well stick with the release handler and forget about Docker, if you need quick and reliable on-the-fly updates and Docker would only be used for the initial deployment. I hope that helps.
I have a question about QTP 11: can QTP 11 be used for load testing / performance testing, similar to LoadRunner in Performance Center? Or is QTP for functional testing only?
AFAIK, QTP is generally not used for load testing, though it is possible to measure the transaction time for a business scenario using Start and End Transaction. You can execute QTP scripts which would in turn be a part of the load testing, but to go deep into load testing you need other load testing tools like HP LoadRunner. Both tools, QTP and LoadRunner, are from HP (which also signifies that, according to HP, you should use a different tool for load testing), and the two can be used together for load testing.
Here is the link.
QTP is used as a graphical virtual user in a LoadRunner model. This requires a single OS instance per virtual user. GUI virtual users were the first virtual user type for LoadRunner in version 1, which ran multiple versions of XRunner. Up until version 4, XRunner was the GUI virtual user of choice. From versions 4 to 6, graphical virtual users were available on both UNIX and Windows systems, using XRunner on UNIX and WinRunner on Windows. By the time of version 3, API virtual users had replaced GUI virtual users for primary load.
From version 8 forward, QuickTest Professional was available as a graphical virtual user type. With version 11 the default GUI virtual user type is QTP; WinRunner is no longer supported.
So yes, the two can be integrated. There is a long history of using the functional automated testing tools within the Mercury/HP family with the performance testing tools.
The use of graphical performance testing tools fell out of favor for a while during the era of the thin web client. As web clients have gotten thicker, with the ability to run JavaScript, C# and other technologies, the need to measure the difference between the API level and the GUI level has come back into vogue. In addition to the traditional GUI virtual user, HP also offers TruClient. TruClient's main advantage is that you can run multiple TruClient virtual users per OS instance, in contrast to the GUI virtual user, where you can execute only a single virtual user per OS instance (on Microsoft Windows).
Talk to your VAR. GUI virtual users run about 1k per virtual user in bundles of five or more. It is not anticipated that you will ever run a full performance test using all graphical virtual users.
You can, of course, write a script in QTP for, say, logging into a website, and run that script via LoadRunner.
But firstly, the timing will not be exact, as QTP will add its own execution time to the response time.
Secondly, you will be able to simulate only one user per machine, whereas LoadRunner simulates hundreds of users at a time.
No. UFT alone cannot perform a load test. All you can do is measure transaction time, that's all. The purpose of UFT is to automate GUI and API testing. For load testing, you need to use LoadRunner.
UFT is basically for functional testing; that said, you can do basic performance measurements using UFT, such as transaction times and page loads. You can get the time taken to complete any action, or the time taken to load pages/images.
Quick question:
I have two servers, and the initial idea was to use one as application-tier and data-tier, and the other one as build machine.
But it's a relatively small project, so it seems like total overkill to use one server only for the build services (I was expecting a weaker machine to arrive, then I got surprised).
If I do split app tier and data tier between the two servers, where should I put the Build Service?
On the app tier side or on the data tier side? Which one would be better?
In our environment, we have the AT and DT separated on their own machines, and host a build service on the AT. What it comes down to is where you have the most "spare" resources available for the build machine. Take a look at perfmon counters for memory and CPU over time, and see which one looks like it is the most lightly used, and put your build agent there.
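If you'd rather log those numbers over a few days than watch perfmon interactively, a tiny script does the job too; this psutil sketch is just a convenience substitute for the perfmon counters mentioned above, and the file name is a placeholder.

# Quick-and-dirty sampler: append CPU and memory utilisation to a CSV once a
# minute on each candidate server, then compare the two files after a few days.
# Requires the psutil package.
import csv
import time
import psutil

with open('utilization.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    while True:
        writer.writerow([
            time.strftime('%Y-%m-%d %H:%M:%S'),
            psutil.cpu_percent(interval=1),      # % CPU over a 1-second sample
            psutil.virtual_memory().percent,     # % of physical memory in use
        ])
        f.flush()
        time.sleep(60)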
From experience, that's likely to be your application tier, particularly if you've got the reporting services and the cube installed on the data tier along with the source code repository and work item store.