Running an automated test against a desktop WPF application works fine on my local machine and on an Azure VM running Windows Server 2012 R2 when accessed via RDP.
However, when the VM is used as a build machine controlled by a test agent on TFS or VSTS, all tests fail because the screen resolution is set to 1024x768. The application is not configured to run at this display setting. Is there any way to change the screen settings when we deploy the test agent?
Change VSTS agent session screen resolution when running protractor tests
We've encountered the same issue with our Visual Studio + Azure solution. To be able to execute the tests we need a higher resolution on the VM than 1024 x 768. But since it's Azure and you pay for machines that are turned on, we also want to turn them off after each run to keep the cost down (especially helpful when you want to scale up a bit).
Therefore it's a real pain that there is no simple option to let the VM boot in a certain (specified) resolution. If there is something simpler than what I'm going to show you, please let me know, but I could not find anything. So I up-voted the idea mentioned by Nessi. What we did as a workaround was the following.
Ideas for a possible solution
In essence we used this post as a guideline. The most important things we used from it were the Windows credentials part and the TERMSRV part.
Our Setup
Visual Studio Build server
Four Azure VMs: one machine is the selenium-grid-hub, the other three are nodes
Our Solution
First we let the Build server start all machines in the resource group (so far so good). Then we created a PowerShell script that runs on the build server and checks each node, waiting for the RDP service to become available. This was needed since it can take up to 10 minutes before we see that this service is active. And finally we trigger a PowerShell script on the selenium-grid-hub VM to make RDP connections to all nodes in a certain resolution.
In a bit more detail to make sure it all goes automatically and without any manual input needed:
Creating and exporting/importing certificates from each node into the hub
Making sure that the credentials are stored in the Credential Manager > Windows Credentials (we created one user on all machines to make life a tiny bit easier)
Creating a script for checking if the RDP service is active
We call this script C:\Scripts\RDPServiceRunCheck.ps1 (see example below) in a VS build block with the arguments $(Password) $(Chrome-node) $(Username)
Where all these arguments have been stored in variables on the build server
Here is the code for the script on github
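For reference, here is a rough sketch of what such a check can look like. It is purely illustrative (the real script is the one linked above); the parameter order matches the build-block arguments.
# Illustrative sketch only - the real script is the one linked on GitHub.
# Waits until RDP (port 3389) answers on the target node, then stores the
# credentials for TERMSRV so the later RDP connection needs no manual input.
param(
    [string]$Password,      # $(Password)
    [string]$ComputerName,  # $(Chrome-node)
    [string]$Username       # $(Username)
)
$deadline = (Get-Date).AddMinutes(15)
while ((Get-Date) -lt $deadline) {
    if ((Test-NetConnection -ComputerName $ComputerName -Port 3389).TcpTestSucceeded) {
        Write-Host "RDP is available on $ComputerName"
        cmdkey /generic:TERMSRV/$ComputerName /user:$Username /pass:$Password | Out-Null
        exit 0
    }
    Start-Sleep -Seconds 30
}
Write-Error "RDP did not become available on $ComputerName in time"
exit 1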
Creating an executable for starting an RDP session in a certain resolution
We call this executable C:\Scripts\Resolution.RDP.Remoting.exe (see example below) in a VS build block with the arguments "C:\Scripts\$(Chrome-node).rdp" 1600 1200
The *.rdp file for each machine is stored (up front) in this folder, and 1600 1200 is the resolution we want to set
Here is the code for the executable on github
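As an illustration of what the executable does, a plain PowerShell wrapper around mstsc.exe (which accepts /w and /h switches for the session size) would look roughly like this; the real code is the one linked above.
# Illustrative alternative to the executable linked above.
# mstsc.exe accepts /w (width) and /h (height) switches for the session size.
param(
    [string]$RdpFile,       # e.g. "C:\Scripts\$(Chrome-node).rdp"
    [int]$Width  = 1600,
    [int]$Height = 1200
)
# Credentials are already in the Credential Manager (TERMSRV/<node>),
# so this opens the session without prompting.
Start-Process -FilePath 'mstsc.exe' -ArgumentList "`"$RdpFile`" /w:$Width /h:$Height"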
This is an older question, so thought it best to throw this out there if it helps anyone.
There is a Screen Resolution Utility Azure DevOps Build/Release task to change the screen resolution of the agent machine. It is useful when running UI tests such as Selenium, Coded UI, etc.
https://marketplace.visualstudio.com/items?itemName=ms-autotest.screen-resolution-utility-task
Try running the tests headless. I had the same issue with TFS and this is the only way it worked for me, e.g. in the Chrome options of protractor.conf.js:
args: [
  '--headless',
  '--window-size=1920,1040',
],
Try using the PowerShell command below; it is a fully working solution:
Set-DisplayResolution -Width 1024 -Height 768 -Force
This is a limitation of Azure VMs, since the screen resolution is set via RDP. Refer to this link for details: Why is it not possible to increase or change the display resolution in an Azure VM.
The RDP session uses the RDP display driver, not the Microsoft Virtual Machine Bus Video Device.
Although the RemoteFX feature enables a broader range of graphics workloads than regular RDP, RemoteFX is not available for Azure VMs.
I am having the same problem. As there is a way to specify a resolution for an RDP session (even for an Azure VM), I created a UserVoice idea to get this desired feature (specify a resolution when running UI tests with "Run Functional Tests" task).
In the meantime, I am using a workaround. Our build VM opens an RDP session in the desired resolution (currently, to cover different browser sizes, this session runs at 4800x2700) to the test machine, using the account the UI tests are executed with. When there is an active session, the UI tests just connect to that session and use the resolution that is currently shown.
This way we have a constant RDP session from the Azure build VM to the Azure test VM, but it works :)
We create iOS and Android apps that are white-labeled. They all use a single code base (one for iOS and one for Android). Whenever we need to make changes to all of our apps (> 100 live in App Store) we rely on Fastlane. We have a "bulk" command that submits each new build to Apple, changing out config variables first and a few files so each app is unique.
This has worked well for us... but... it's getting really slow. We'd love to be able to take advantage of some of the continuous development services out there. It seems like they weren't necessarily made for this use case, but it might still work?
Ideally instead of running bulk on a local machine we could spin up 100 instances on something like CircleCI and they all run side by side, using our fastlane script to build, submit, etc.
We started by looking into CircleCI. The problem we are running into is they don't allow injection of variables into a job (https://ideas.circleci.com/ideas/CCI-I-690).
Is there a better service for this goal? Is there a tool that was built to achieve this? Struggling to find an alternative to hacking together a bunch of smaller tools.
I think you already identified your first step: You will have to split your fastlane (and other tooling) configuration, so it is possible to build each app in isolation.
Then you can trigger a job for each app on a CI service like for example Travis CI or Azure Pipelines (both have a simple API you can use to start jobs and give them some parameters that will be available to your job) that builds and releases the app.
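As a rough sketch of that approach (the organization, project, definition id, variable name, input file and PAT environment variable below are all made up for illustration), queuing one Azure Pipelines build per app could look like this:
# Hypothetical sketch: queue one pipeline run per app via the Azure DevOps REST API.
$organization = 'my-org'
$project      = 'my-apps'
$definitionId = 42                    # a build definition that builds ONE app
$pat          = $env:AZDO_PAT         # personal access token with build permissions
$headers = @{
    Authorization  = 'Basic ' + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
    'Content-Type' = 'application/json'
}
foreach ($appId in Get-Content '.\apps.txt') {
    $body = @{
        definition = @{ id = $definitionId }
        # the pipeline reads APP_ID and swaps in the per-app config before building
        parameters = (@{ APP_ID = $appId } | ConvertTo-Json -Compress)
    } | ConvertTo-Json -Depth 5
    Invoke-RestMethod -Method Post -Headers $headers -Body $body `
        -Uri "https://dev.azure.com/$organization/$project/_apis/build/builds?api-version=5.1"
}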
All the other things (e.g. one big build vs. many small build steps etc.) are just implementation details and will depend on the individual service or tools you choose.
I am in the process of testing a TFS 2013 to TFS 2018 onprem upgrade. I have installed 2018.1 on a new system (and upgraded a copy of my TFS databases). I have installed a build agent on a new host which shows up under Agent Queues (as online and enabled).
I'm now trying to create a build. I set things up as I feel they should be and it sits at this screen:
Build
Waiting for an available agent
Console
Waiting for an agent to be requested
The VSTS Agent service is running on the build agent system, so I feel that is OK. I'm somewhat at a loss. Any assistance is appreciated.
Just try the below items to narrow down the issue:
Check the build definition requirements (Demands section) against what the agent offers. Make sure the required capabilities are installed on the agent machine.
When a build is queued, the system sends the job only to agents that have the capabilities demanded by the build definition.
Check if the service "Visual Studio Team Foundation Background Job Agent" is running on the TFS application tier server (a small PowerShell sketch for this check follows this list).
If it's not started, just start the service.
If the status is Running, just try to Restart the service.
Make sure the account that the agent is run under is in the "Agent Pool Service Account" role.
Try changing to a domain account that is a member of the Build Agent Service Accounts group and belongs to the "Agent Pool Service Account" role, to see whether the agent works or not.
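For the job agent check in particular, a small PowerShell sketch (run on the TFS application tier, assuming the default service display name) could be:
# Run on the TFS application tier; starts or restarts the background job agent
# service depending on its current state.
$svc = Get-Service -DisplayName 'Visual Studio Team Foundation Background Job Agent'
if ($svc.Status -ne 'Running') {
    Start-Service -InputObject $svc
} else {
    Restart-Service -InputObject $svc
}
$svc.Refresh()
Write-Host "Job agent service is now: $($svc.Status)"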
We have just spent five days trying to diagnose this issue and believe we have finally nailed the cause (and the solution!).
TL;DR version:
We're using TFS 2017 Update 3, YMMV. We believe the problem is a result of a badly configured old version of an Elastic Search component which is used by the Code Search extension. If you do not use the Code Search feature please disable or uninstall this extension and report back - we have seen huge improvements as a result.
Detailed explanation:
So what we discovered was that MS have repurposed an Elastic Search component to provide the code search facility within TFS - the service is installed when TFS is installed if you choose to include the search feature.
For those unfamiliar with Elastic, one particularly important aspect is that it uses a multi-node architecture, shifting load between nodes and balancing the workload across the cluster and herein lies the MS Code Search problem.
The Elastic Search component installed in TFS is (badly) configured to be single node, with a variety of features intentionally suppressed or disabled. With the high water-mark setting set to 85%, as soon as the search data reaches 85% of the available disk space on the data drive, the node stops creating new indexes and will only accept data to existing indexes.
In a normal Elastic cluster, this would cause another node to create a new index to accept the new data but, since MS have lobotomised the cluster down to one node, the fall-back... is the same node - rinse and repeat.
The behaviour we saw, looking at the communications between the build agent and the build controller, suggests that the Build Controller tries to communicate with Elastic and eventually fails. Over time, Elastic becomes more unresponsive and chokes this communication which manifests as the controller taking longer and longer to respond to build requests.
It is only because we actually use Elastic Search that we were able to interpret the behaviour and logs to come to this conclusion. Without that knowledge it would be almost impossible to determine the actual cause.
How to fix this?
There are a number of ways that you can fix this:
Don't install the TFS Search feature
If you don't want to use the Code Search feature, don't install it. The problem will not occur.
Remove the TFS Search feature [what we did]
If you don't use the Code Search feature, uninstall it. The problem will go away - you can either just disable the extension in all collections or you can use the server installer to fully remove it. I have detailed instructions from MS for anyone who wants to eradicate it completely, just ask.
Point the Search feature to a properly configured, real Elastic cluster
If you use Elastic properly, rather than stuffing it in a small box on its own, the problem will not occur.
Ensure the search data disk never hits the 85% water-mark
Elastic will continue to function "properly" and should return search results as expected, within the limited parameters.
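If you go this route, a rough monitoring sketch (assuming the TFS Search Elasticsearch instance listens on its default port 9200 on the search server) is:
# Rough monitoring sketch; assumes the TFS Search Elasticsearch instance
# listens on its default port 9200 on the search server.
$es = 'http://localhost:9200'
# Disk usage as Elastic sees it per node (disk.percent should stay below 85)
Invoke-RestMethod -Uri "$es/_cat/allocation?v&h=node,disk.used,disk.avail,disk.percent"
# Cluster health - a yellow/red status is another hint that something is wrong
(Invoke-RestMethod -Uri "$es/_cluster/health").status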
Hope this helps someone out there avoid the pain we suffered.
The TF Background Job Agent wasn't running on the application tier, because that account didn't have the 'Log on as a service' right.
I was facing the same issue and in my case, it was resolved by restarting the TFS server (TFS is hosted locally in our office).
I am trying to get tests to run via remote execution, and I can't find any documentation on the following:
I understand that when the controller is registered to a team project collection and the agent runs through a lab environment, then a build needs to be attached to the process - and it then makes perfect sense that the controller pulls the dll that contains the tests from this build.
However, what does not make sense to me is in the more simplified scenario:
I have my test solution with a .testsettings file; here I define the controller under Roles. I also have one agent connected to the controller. Now, in Visual Studio, when I run the test, it runs through the controller -> which delegates to the agent. However, I have not set up any build.
I'm assuming that Visual Studio pushes the DLLs to the controller when you first run the test, and the controller then creates a cache of the DLLs? This is just a guess. Is it correct?
I need to know how the internals work because I have not yet gotten any test to run on a remote controller. So far, after many headaches, I can only get the scenario to work when the controller, agent, and local dev environment are all on the same machine.
All the MSDN documentation talks about the high level reuse, and does not go into any details of the internals.
Thanks in advance!
You likely want to run the tests automatically after a deployment. If that is the case, then you probably want the TFS-integrated experience rather than the Visual Studio client one. The client experience is primarily for load testing at small scale.
Try: http://nakedalm.com/execute-tests-release-management-visual-studio-2013/
In this configuration your app is installed and pre-configured prior to running the tests. The agent then lifts the test assemblies from the build drop.
I'm looking for a cloud-based (pub, priv or hybrid) solution that allows me to configure every detail about the platform (container, system stack, virtualized hardware, etc.) for my app, but also deploys a templated version of my app on all app server nodes as soon as I run my 1st build. Hence I configure the app/platform, click a button, and boom: I have a WAR deployed and running across a cluster of nodes. (Granted, since I have not written any code at this point, this deployed WAR would have de minimis code inside of it and would constitute the bare minimal code required to produce a WAR. In the case of Grails, it might just be the code that is generated by running grails create-app myapp.)
I'm calling this "Application-as-a-Service", because it not only is a traditional PaaS, but also goes one step further and deploys packaged WARs using some kind of templated source code repo.
My question: CloudFoundry says they support multiple frameworks (Spring, Grails, etc.); does this mean it can do what I describe above? If not, what does CloudFoundry mean when they say that they "support the Grails framework"?
Using CF you are able to configure the platform/OS. The currently used container is Warden. The virtualised hardware depends on the IaaS used under CF. Then you may 'click a button' and your app will be deployed and running across a cluster of nodes (DEA instances).
CloudFoundry says they support multiple frameworks (Spring, Grails, etc.); does this mean it can do what I describe above?
I don't fully follow what you're trying to do above, but I can tell you about the general workflows for CloudFoundry.
As an administrator, you use Bosh to deploy CloudFoundry to your IaaS. You can control anything you want at the IaaS level (assuming you're an administrator for your IaaS), so long as you meet the requirements of CF like storage, memory and CPU. In addition, you can control the CF deployment by adjusting the various configuration settings (YAML files) for CF. This allows you to tune the amount of resources (memory, CPU, disk space, etc.) that are available for application developers.
As a developer, you take your application and push it to a running CF installation. You may or may not be the administrator of that CF installation, if you are not then you'll be subject to the policies of the administrator.
The push process takes your application (ruby, python, php, go, java, grails, etc...) and uploads it to CF. From there, the application files are turned into a droplet by the build pack. This process is easier than it sounds, the build pack is just adding all the things that are necessary to run your app, like a web server or app container. With the droplet created, CF will run it or possibly even multiple instances of it if you so desire.
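For illustration, the developer-side flow is just a couple of cf CLI commands; the app name, WAR path and sizes below are made up:
# Push a Grails WAR; the Java buildpack detects it and sets up the container.
cf push my-grails-app -p target/my-grails-app.war -m 512M
# Later, scale out to 3 instances and give each one more memory.
cf scale my-grails-app -i 3 -m 1G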
A couple further notes / clarifications:
How much memory your application gets is up to the app developer and can be adjusted at the time an app is pushed, or later using cf scale.
CPUs are shared across all apps. You cannot reserve or guarantee usage per app. Application usage is limited though, so one app cannot steal all of the available CPUs.
The amount of disk storage per app is set by the administrator.
Disk storage for applications is ephemeral and is cleared when an application is restarted. Persistent storage should be provided through a service.
As you can see, CF administrators have a good degree of control over the system. Application developers not so much, but that is in fact the point of PaaS. Application developers don't want to waste time playing sys-admin. They just want to run their apps.
If not, what does CloudFoundry mean when they say that they "support the Grails framework"?
What is meant by this is that you can take a WAR file produced by Grails and run it on CloudFoundry without any additional work.
The CloudFoundry system uses build packs to handle the process of taking your application (i.e. your Grails WAR file) and preparing it to run. As I mentioned above, this usually involves installing and configuring some sort of server. In the case of Java / Grails, it involves setting up Apache Tomcat and configuring it to run your application (Note, if you don't like the way it's configured by default, you can customize or create your own build pack to configure it exactly the way you like).
CloudFoundry supports Grails and other JVM based languages because it can take those applications and automatically install, configure and run them.
I am evaluating the Hudson build system for use as a centralized, "sterile" build environment for a large company with very distributed development (from both a geographical and managerial perspective). One goal is to ensure that builds are only a function of the contents of a source control tree and a build script (also part of that tree). This way, we can be certain that the code placed into a production environment actually originated from our source control system.
Hudson seems to provide an ant script with the full set of rights assigned to the user invoking the Hudson server itself. Because we want to allow individual development groups to modify their build scripts without administrator intervention, we would like a way to sandbox the build process to (1) limit the potential harm caused by an errant build script, and (2) avoid all the games one might play to insert malicious code into a build.
Here's what I think I want (at least for Ant, we aren't using Maven/Ivy right now):
The Ant build script only has access to its workspace directory
It can only read from the source tree (so that svn updates can be trusted and no other code is inserted).
It could perhaps be allowed read access to certain directories (Ant distribution, JDK, etc.) that are required for the build classpath.
I can think of three ways to implement this:
Write an ant wrapper that uses the Java security model to constrain access
Create a user for each build and assign the rights described above. Launch builds in this user space.
(Updated) Use Linux "Jails" to avoid the burden of creating a new user account for each build process. I know little about these though, but we will be running our builds on a Linux box with a recent RedHatEL distro.
Am I thinking about this problem correctly? What have other people done?
Update: This guy considered the chroot jail idea:
https://www.thebedells.org/blog/2008/02/29/l33t-iphone-c0d1ng-ski1lz
Update 2: Trust is an interesting word. Do we think that any developers might attempt anything malicious? Nope. However, I'd bet that, with 30 projects building over the course of a year with developer-updated build scripts, there will be several instances of (1) accidental clobbering of filesystem areas outside of the project workspace, and (2) build corruptions that take a lot of time to figure out. Do we trust all our developers to not mess up? Nope. I don't trust myself to that level, that's for sure.
With respect to malicious code insertion, the real goal is to be able to eliminate the possibility from consideration if someone thinks that such a thing might have happened.
Also, with controls in place, developers can modify their own build scripts and test them without fear of catastrophe. This will lead to more build "innovation" and higher levels of quality enforced by the build process (unit test execution, etc.)
This may not be something you can change, but if you can't trust the developers then you have a larger problem than what they can or cannot do to your build machine.
You could go about this a different way: if you can't trust what is going to be run, you may need a dedicated person (or people) to act as build master, to verify not only changes to your SCM but also to execute the builds.
Then you have a clear path of responsibility: builds are not modified after the fact and only come from that build system.
Another option is to firewall off outbound requests from the build machine to only allow certain resources like your SCM server, and your other operational network resources like e-mail, os updates etc.
This would prevent people from making requests in Ant to resources off the build system that are not in source control.
When using Hudson you can setup a Master/Slave configuration and then not allow builds to be performed on the Master. If you configure the Slaves to be in a virtual machine, that can be easily snapshotted and restored, then you don't have to worry about a person messing up the build environment. If you apply a firewall to these Slaves, then it should solve your isolation needs.
I suggest you have 1 Hudson master instance, which is an entry point for everyone to see/configure/build the projects. Then you can set up multiple Hudson slaves, which might very well be virtual machines or (not 100% sure if this is possible) simply unprivileged users on the same machine.
Once you have this set up, you can tie builds to specific nodes, which are not allowed - either by virtual machine boundaries or by Linux filesystem permissions - to modify other workspaces.
How many projects will Hudson be building? Perhaps one Hudson instance would be too big, given the security concerns you are expressing. Have you considered distributing the Hudson instances out - one per team. This avoids the permission issue entirely.