MSIX packaged application always asks for a firewall rule when it updates

I have an MSIX application that is built in a pipeline. Each time the application updates and starts, it asks for a firewall rule to be added as soon as it makes a call to some HTTP API.
If I open the firewall and check all previous rules referencing this application, it becomes clear why this is happening. For instance, I now have around 20 rules just from today's testing, all tied to the program path and looking something like this:
C:\program files\windowsapps\{guid*}_{app_version**}_{another_uniqueIdentifier***}\MyApplication.exe
(*) This is drawn from manifest -> identity -> name
(**) A PowerShell script updates the Package.appxmanifest version with the build number from the pipeline
(***) I am not sure where this comes from
So obviously every time I build a new version of the application it receives a new path, and when I update and run it the firewall treats it as a new application (see the inspection sketch after the build parameters below). I doubt this is by design, as the firewall would quickly get clogged with rules wherever applications receive frequent updates.
What am I missing here?
My application (a *.wapproj targeting the actual application project) is built from an Azure DevOps pipeline, and these are the MSBuild parameters I use:
/p:ApplicationVersion=$(Build.BuildNumber)
/p:Version=$(Build.BuildNumber)
/p:AllowUnsafeBlocks=true
/p:SelfContained=false
/p:AppxPackageSigningEnabled=true
/p:PackageCertificateThumbprint="$(Thumbprint)"
/p:AppxPackageSigningTimestampServerUrl="http://timestamp.digicert.com"
/p:AppxPackageSigningTimestampDigestAlgorithm="SHA256"
/p:AppInstallerUri="https://ourlocal.server.com/Clients/MyApplication/"
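
For anyone hitting the same symptom, here is a minimal PowerShell sketch (the executable name is a placeholder) for listing every firewall rule whose program filter points at a copy of the app under WindowsApps - useful for confirming the per-version pile-up and for cleaning it out:

# List rules tied to any installed version of the app (hypothetical exe name)
Get-NetFirewallApplicationFilter -Program "*\WindowsApps\*MyApplication.exe" |
    Get-NetFirewallRule |
    Select-Object DisplayName, Enabled, Direction

# Once verified, the same pipeline can remove the stale rules:
# Get-NetFirewallApplicationFilter -Program "*\WindowsApps\*MyApplication.exe" |
#     Get-NetFirewallRule | Remove-NetFirewallRule

A longer-term direction may be to pre-create a rule scoped to the package identity rather than the per-version path (New-NetFirewallRule has a -Package parameter that takes the app container SID), though whether that fits this particular setup is an assumption.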

Release PowerApps solution to new environment with DevOps

I am interested in any information about or experiences with deploying PowerApps solutions to new environments within the same tenant.
In my solution I have a canvas app and several flows between the app and SharePoint. I have used connection references for all connections (SharePoint, mail, etc.). On the DevOps side I have a build pipeline from my development environment, very much in line with Microsoft's recommendations for ALM, plus a release pipeline to publish the solution in another environment, e.g. a test environment. I can publish the release, but when I access the solution in the new environment all flows have been turned off and all connections to SharePoint have been severed. When I inspect a flow, it throws an error saying it was unable to locate the connection Id. What strikes me as odd is that the connection references visible in the new solution cannot be selected. What I can do, however, is add a new connection (from each flow), after which I can turn the flow back on and activate each of them in the canvas app.
What I am asking for here is any documentation, guide, tutorial, or other help to make this release a little more automatic, so I won't have to re-add connections for every single action in each of my flows.
I think you are in luck 😊 - check out the latest Power Apps community call. I think the last demo is the thing you are looking for (especially from that moment, I suppose 🤔), and it is now one of the targets in Power Platform.
If you are considering introducing source control as well (like git), there is currently a cool experiment going on in the community in that direction which I think is quite promising, and you may check this article. But please treat this pack/unpack tool as an experiment and don't remove the original .msapp files just yet 😉.
I think I have finally found a working solution. I'll document my steps here for other ALM hopefuls.
When pushing to the target environment for the first time, I need to click on each of the connection references, open Solution Layers, use the breadcrumb path to go one step back, and from there I can assign the correct connection. Subsequent deployments now work without any hassle.
Also, I have learned that a first-time deployment cannot activate workflows. Future deployments, however, can activate workflows via the corresponding setting in the Import Solution build tool.
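
For the connection-reference half of this, the Import Solution build tool can also consume a deployment settings file that maps each connection reference to an existing connection in the target environment, which should remove the manual re-pointing. A minimal sketch that generates such a file in PowerShell - the logical name, connection id, and connector path below are placeholders for your own values:

# Shape of a deployment settings file for the Import Solution task;
# every value here is a placeholder
$settings = [ordered]@{
    EnvironmentVariables = @()
    ConnectionReferences = @(
        [ordered]@{
            LogicalName  = 'new_sharepointconnectionref'
            ConnectionId = '<connection-id-from-target-environment>'
            ConnectorId  = '/providers/Microsoft.PowerApps/apis/shared_sharepointonline'
        }
    )
}
$settings | ConvertTo-Json -Depth 4 | Set-Content deploymentsettings.json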

TFS Build Agent - Waiting for an agent to be requested

I am in the process of testing a TFS 2013 to TFS 2018 on-premises upgrade. I have installed 2018.1 on a new system (and upgraded a copy of my TFS databases), and installed a build agent on a new host, which shows up under Agent Queues (as online and enabled).
I'm now trying to create a build. I set things up as I feel they should be and it sits at this screen:
Build
Waiting for an available agent
Console
Waiting for an agent to be requested
The VSTS Agent service is running on the build agent system, so I feel that is OK. I'm somewhat at a loss. Any assistance is appreciated.
Try the items below to narrow down the issue:
Check the build definition requirements (Demands section) against the capabilities the agent offers. Make sure the required capabilities are installed on the agent machine.
When a build is queued, the system sends the job only to agents that have the capabilities demanded by the build definition.
Check whether the service "Visual Studio Team Foundation Background Job Agent" is running on the TFS application tier server (a scripted version of this check is sketched after this list).
If it's not started, just start the service.
If the status is Running, just try to Restart the service.
Make sure the account that the agent is run under is in the "Agent Pool Service Account" role.
Try switching to a domain account that is a member of the Build Agent Service Accounts group and belongs to the "Agent Pool Service Account" role, to see whether the agent then works.
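
A scripted version of the Background Job Agent check from the list above - a minimal sketch using the service display name quoted there; run it on the TFS application tier server:

$svc = Get-Service -DisplayName "Visual Studio Team Foundation Background Job Agent"
if ($svc.Status -ne 'Running') {
    $svc | Start-Service      # not started: start it
} else {
    $svc | Restart-Service    # already running: restart it anyway
}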
We have just spent five days trying to diagnose this issue and believe we have finally nailed the cause (and the solution!).
TL;DR version:
We're using TFS 2017 Update 3, YMMV. We believe the problem is a result of a badly configured old version of an Elastic Search component which is used by the Code Search extension. If you do not use the Code Search feature please disable or uninstall this extension and report back - we have seen huge improvements as a result.
Detailed explanation:
So what we discovered was that MS have repurposed an Elastic Search component to provide the code search facility within TFS - the service is installed when TFS is installed if you choose to include the search feature.
For those unfamiliar with Elastic, one particularly important aspect is that it uses a multi-node architecture, shifting load between nodes and balancing the workload across the cluster and herein lies the MS Code Search problem.
The Elastic Search component installed in TFS is (badly) configured to be single node, with a variety of features intentionally suppressed or disabled. With the high water-mark setting set to 85%, as soon as the search data reaches 85% of the available disk space on the data drive, the node stops creating new indexes and will only accept data to existing indexes.
In a normal Elastic cluster, this would cause another node to create a new index to accept the new data but, since MS have lobotomised the cluster down to one node, the fall-back... is the same node - rinse and repeat.
The behaviour we saw, looking at the communications between the build agent and the build controller, suggests that the Build Controller tries to communicate with Elastic and eventually fails. Over time, Elastic becomes more unresponsive and chokes this communication which manifests as the controller taking longer and longer to respond to build requests.
It is only because we actually use Elastic Search that we were able to interpret the behaviour and logs to come to this conclusion. Without that knowledge it would be almost impossible to determine the actual cause.
How to fix this?
There are a number of ways that you can fix this:
Don't install the TFS Search feature
If you don't want to use the Code Search feature, don't install it. The problem will not occur.
Remove the TFS Search feature [what we did]
If you don't use the Code Search feature, uninstall it. The problem will go away - you can either just disable the extension in all collections or you can use the server installer to fully remove it. I have detailed instructions from MS for anyone who wants to eradicate it completely, just ask.
Point the Search feature to a properly configured, real Elastic cluster
If you use Elastic properly, rather than stuffing it in a small box on its own, the problem will not occur.
Ensure the search data disk never hits the 85% water-mark
Elastic will continue to function "properly" and should return search results as expected, within the limited parameters.
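
If you want to check whether your node has hit the water-mark before picking an option, Elastic's standard REST endpoints also work against the instance TFS installs (assuming the default local port 9200; adjust host and port for your search server):

# Cluster status (green/yellow/red) and node count
Invoke-RestMethod -Uri "http://localhost:9200/_cluster/health"

# Per-node disk usage; compare disk.percent against the 85% high water-mark
Invoke-RestMethod -Uri "http://localhost:9200/_cat/allocation?v"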
Hope this helps someone out there avoid the pain we suffered.
The TF Background Job Agent wasn't running on the application tier, because that account didn't have 'log on as a service'.
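
If you suspect the same cause, a quick sketch for checking which accounts hold the "Log on as a service" right (run from an elevated prompt):

# Export the local security policy and find the SeServiceLogonRight entry;
# the matching line lists the SIDs of accounts that hold the right
secedit /export /cfg "$env:TEMP\secpol.inf" | Out-Null
Select-String -Path "$env:TEMP\secpol.inf" -Pattern 'SeServiceLogonRight'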
I was facing the same issue and in my case, it was resolved by restarting the TFS server (TFS is hosted locally in our office).

Access drop location via server in TFS releases

TFS 2015 Update 2, Release Management framework. In the server definition there's a flag indicating whether the server should access the drop location via a straight UNC path or via HTTP(S) through the Release Management Server (RMS). Question: which built-in actions/tools are aware of this setting? The "XCopy Deployer" tool, and the "Copy File or Folder" action based on it, definitely don't respect the setting - they just try to xcopy straight from the source UNC path.
The only UI that mentions drop location is the custom component creation UI.
All of the built-in actions work with this setting. So do custom deployment tools, for that matter.
The way it works is as follows:
The normal behavior (directly from UNC) has the Agent reach out to the drop location directly in order to stage the files in a temporary location on the machine on which the agent runs. This folder is usually C:\users\<service account>\AppData\Local\Temp\RM\T\RM\ if I'm not mistaken. After that, it runs the deployment activity against the staged files.
The other option (via HTTP) has the RM server reach out to the UNC path, then serialize the files over HTTP to the agent machine. After that, it runs the deployment activity against the staged files.
Basically, all that flag does is change the behavior of how the files get to the target box. It doesn't change the behavior of the commands that are invoked.
However, it's entirely possible that the UNC vs HTTP option is ignored when using a component that points directly to a UNC path; that behavior I haven't tested.
Since you're using TFS 2015.2, you should look into retiring your Release Management server and migrating to the new web-based experience. The ALM Rangers have a migration tool available.
Can you please confirm one thing:
When you use the option "Through Release Management Server over HTTP(S)", it's important that the service account under which RM is installed has Modify rights on the drop location.
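
A quick sketch for verifying those rights (the share path and account name are placeholders):

# Show the ACEs on the drop location that apply to the RM service account
(Get-Acl '\\ourserver\drops').Access |
    Where-Object { $_.IdentityReference -like '*RMServiceAccount*' } |
    Select-Object IdentityReference, FileSystemRights, AccessControlType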
If you want to use the Server option, this is by design:
With the agent-based flow, we don't support "build drop on server" and "drop via HTTP on RM" together.
If you want both, you can use the vNext workflow, which supports both options.
Ideally, it should work with xcopy.
Please drop me an email at dmittal#microsoft.com if things still do not work...

Registering a windows service as automatic (not manual) but stopped

My team has an InstallShield configuration that registers a service. Since it's supposed to be automatically started upon a reboot, we can't set it to manual. According to the person responsible for it, InstallShield only gives the option of automatic or manual service setting.
Not being an expert on InstallShield, I'm still trying to help her figure out how to make the registration of the service automatic but not started. I haven't seen any such option in the client.
Am I looking at the wrong spot in InstallShield, or is it not doable that way altogether? If so, how should one approach the problem?
Assuming a Windows Installer project type in InstallShield, there are two different tables at play: ServiceInstall and ServiceControl. It is completely possible to define the service using ServiceInstall with start type Automatic and not actually start the service during the install. It's the ServiceControl table that controls when a service is started/stopped (none, install, uninstall, or both).
For this situation I'd install the service as automatic and mark it for stop during both and start for none. You may also want to set the REBOOT property to ask for a reboot at the end of the installer.
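
For reference, the end state being described here - registered Automatic but not running - is an ordinary one for Windows services, and you can reproduce it outside an installer; a minimal sketch with a hypothetical service name and binary path:

# New-Service registers the service but does not start it
New-Service -Name 'MyService' `
    -BinaryPathName 'C:\Program Files\MyApp\MyService.exe' `
    -StartupType Automatic
Get-Service -Name 'MyService'    # Status: Stopped, StartType: Automatic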
Another option is to investigate why you can't start it right away. Is this a configuration problem or some other race condition? Often, fixing that will simplify the deployment story and allow her to start it during the install. One example: if you don't have configuration data yet, have the service spawn a background worker that keeps the service idle with a file system watcher on the config data. As soon as the data is there, the service goes from idle to active without stopping and starting.

For TFS 2008 Team Build, how do I change where $Temp points to?

We have recently built a new TFS 2008 Team Build server. I don't want users putting their builds in the default temp directory that $Temp points to for the Build Agent, but I also don't want to force developers to change the path to somewhere else - that runs the risk of developers either (a) not bothering, or (b) creating paths that are not consistent across teams.
So I would like to keep the $Temp there (which is the default for when creating a Build Agent), but change the location where that points to. How do I do that?
The $Temp value is the value of the Temp environment variable for the user the build agent process runs as. You could change the temp location for the build user - however, that might not be what you want, as it changes the build user's environment as a whole rather than something localized to the build agent process.
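
If you do go that route, a minimal sketch of changing the build user's temp location (the drive and folder are assumptions; run as, or while impersonating, the build service account):

# Point the build user's temp at a dedicated short path
[Environment]::SetEnvironmentVariable('TEMP', 'D:\BuildTemp', 'User')
[Environment]::SetEnvironmentVariable('TMP',  'D:\BuildTemp', 'User')
# Restart the build agent service so it picks up the new values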
Personally, I set my build agent working directories to something like d:\bw\$(BuildDefinitionId) where d is a fairly fast secondary drive - however c:\bw would do just as well. This means that the builds are conducted in a place with a short path so you are less likely to run into the annoying 260 character path limit imposed by .NET's IO classes.
Presumably, creating build agents isn't something your developers are doing - it's more of an administrative task. However, if you really wanted to make sure it is done the way you want, you could provision the build agent using the TFS Build API from an internal ASP.NET page or a small application. That would give you the control you need to limit where the build working directory is set.
If you want API code for creating a build agent then let me know in the comments and I will edit my answer.
