Release PowerApps solution to a new environment with DevOps

I am interested in any information about or experiences with deploying PowerApps solutions to new environments within the same tenant.
In my solution I have a canvas app and several flows between the app and SharePoint. I have used connection references for all connections (SharePoint, mail, etc.). On the DevOps side I have a build pipeline from my development environment, very much in line with Microsoft's recommendations for ALM. In addition, I have a release pipeline to publish the solution in another environment, e.g. a test environment. I can publish the release, but when I access the solution in the new environment all flows have been turned off and all connections to SharePoint have been severed. When I inspect the flows, each one throws an error saying it was unable to locate the connection Id. What strikes me as odd is that the connection references visible in the new solution cannot be selected. What I can do, however, is add a new connection (from each flow), after which I can turn the flow back on and activate each of them in the canvas app.
What I am asking for here is any documentation, guide, tutorial, help, etc. to make this release a little more automatic, so I won't have to re-add connections for every single action in each of my flows.

I think you are in luck 😊: check out the latest Power Apps community call. The last demo is probably the thing you are looking for (especially from that moment, I suppose 🤔), and this scenario is now one of the targets in Power Platform.
If you are considering introducing source control as well (like Git), there is currently a cool experiment going on in the community in that direction which I think is quite promising; you may want to check this article. But please treat this pack/unpack tool as an experiment and don't delete the original .msapp files just yet 😉.

I think I have finally found a working solution. I'll document my steps here for other ALM hopefuls.
When pushing to the target environment for the first time, I need to click on each of the connection references, click on Solution Layers, use the breadcrumb path to go one step back, and from there I can assign the correct connection. Subsequent deployments now work without any hassle.
Also, I have learned that a first-time deployment cannot activate workflows. However, future deployments can activate workflows by managing that setting in the Import Solution build tool.
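For anyone landing here later: depending on your tooling version, the first-time connection binding can often be scripted with a deployment settings file instead of clicking through each connection reference. Below is a minimal sketch using the Power Platform CLI; the solution name, file paths, connection reference logical name and connection ID are placeholders, and the exact flags (including --activate-plugins as the counterpart of the build-task setting) may differ between CLI versions, so check pac solution import --help before relying on it.

# Generate a settings template from the solution package; it lists every
# connection reference and environment variable the solution expects.
pac solution create-settings --solution-zip .\MySolution_managed.zip --settings-file .\settings.test.json

# Edit settings.test.json so each entry points at a connection that already exists
# in the target environment, e.g. (shape only, values are placeholders):
#   "ConnectionReferences": [
#     { "LogicalName": "new_sharedsharepointonline_abc12",
#       "ConnectionId": "<connection id from the target environment>",
#       "ConnectorId": "/providers/Microsoft.PowerApps/apis/shared_sharepointonline" } ]

# Import into the target environment with those bindings and activate the flows.
pac solution import --path .\MySolution_managed.zip --settings-file .\settings.test.json --activate-plugins

If your version of the Power Platform Build Tools supports a deployment settings file on the Import Solution task, the same JSON can be supplied per environment in the release pipeline, so test and production each keep their own connection IDs.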

Related

TFS Build Agent - Waiting for an agent to be requested

I am in the process of testing a TFS 2013 to TFS 2018 onprem upgrade. I have installed 2018.1 on a new system (and upgraded a copy of my TFS databases). I have installed a build agent on a new host which shows up under Agent Queues (as online and enabled).
I'm now trying to create a build. I set things up as I feel they should be and it sits at this screen:
Build
Waiting for an available agent
Console
Waiting for an agent to be requested
The VSTS Agent service is running on the build agent system, so I feel that is OK. I'm somewhat at a loss. Any assistance is appreciated.
Just try the below items to narrow down the issue:
Check the build definition requirements (Demands section) against the capabilities the agent offers. Make sure the required capabilities are installed on the agent machine.
When a build is queued, the system sends the job only to agents that have the capabilities demanded by the build definition.
Check whether the service "Visual Studio Team Foundation Background Job Agent" is running on the TFS application tier server (a quick PowerShell check is sketched below).
If it's not started, start the service.
If the status is Running, try restarting the service.
Make sure the account that the agent is run under is in the "Agent Pool Service Account" role.
Try switching to a domain account that is a member of the Build Agent Service Accounts group and holds the "Agent Pool Service Account" role, to see whether the agent works then.
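For convenience, here is a small PowerShell sketch of those service checks; the display-name and service-name patterns are assumptions based on typical TFS 2017/2018 installs, so adjust them if Get-Service finds nothing.

# On the TFS application tier: is the background job agent running?
Get-Service -DisplayName "Visual Studio Team Foundation Background Job Agent*" | Format-Table Name, Status

# Restart it if it looks stuck (run from an elevated prompt).
Get-Service -DisplayName "Visual Studio Team Foundation Background Job Agent*" | Restart-Service -Verbose

# On the build machine: is the agent service running? Its name is usually
# vstsagent.<account>.<agentname>, hence the wildcard.
Get-Service -Name "vstsagent*" | Format-Table Name, Status, DisplayName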
We have just spent five days trying to diagnose this issue and believe we have finally nailed the cause (and the solution!).
TL;DR version:
We're using TFS 2017 Update 3, YMMV. We believe the problem is a result of a badly configured old version of an Elastic Search component which is used by the Code Search extension. If you do not use the Code Search feature please disable or uninstall this extension and report back - we have seen huge improvements as a result.
Detailed explanation:
So what we discovered was that MS have repurposed an Elastic Search component to provide the code search facility within TFS - the service is installed when TFS is installed if you choose to include the search feature.
For those unfamiliar with Elastic, one particularly important aspect is that it uses a multi-node architecture, shifting load between nodes and balancing the workload across the cluster and herein lies the MS Code Search problem.
The Elastic Search component installed in TFS is (badly) configured to be single node, with a variety of features intentionally suppressed or disabled. With the high water-mark setting set to 85%, as soon as the search data reaches 85% of the available disk space on the data drive, the node stops creating new indexes and will only accept data to existing indexes.
In a normal Elastic cluster, this would cause another node to create a new index to accept the new data but, since MS have lobotomised the cluster down to one node, the fall-back... is the same node - rinse and repeat.
The behaviour we saw, looking at the communications between the build agent and the build controller, suggests that the Build Controller tries to communicate with Elastic and eventually fails. Over time, Elastic becomes more unresponsive and chokes this communication which manifests as the controller taking longer and longer to respond to build requests.
It is only because we actually use Elastic Search that we were able to interpret the behaviour and logs to come to this conclusion. Without that knowledge it would be almost impossible to determine the actual cause.
How to fix this?
There are a number of ways that you can fix this:
Don't install the TFS Search feature
If you don't want to use the Code Search feature, don't install it. The problem will not occur.
Remove the TFS Search feature [what we did]
If you don't use the Code Search feature, uninstall it. The problem will go away - you can either just disable the extension in all collections or you can use the server installer to fully remove it. I have detailed instructions from MS for anyone who wants to eradicate it completely, just ask.
Point the Search feature to a properly configured, real Elastic cluster
If you use Elastic properly, rather than stuffing it in a small box on its own, the problem will not occur.
Ensure the search data disk never hits the 85% water-mark
Elastic will continue to function "properly" and should return search results as expected, within the limited parameters.
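For what it's worth, here is a hedged PowerShell sketch for checking whether the bundled search node is anywhere near that water-mark; it assumes the TFS search Elasticsearch instance listens locally on the default port 9200, which may not match your install.

# Run on the TFS search server. Port 9200 is the Elasticsearch default and an assumption here.
$es = "http://localhost:9200"

# Overall cluster state (green/yellow/red) and node count.
Invoke-RestMethod -Uri "$es/_cluster/health"

# Disk usage per node; compare disk.percent against the 85% high water-mark.
Invoke-RestMethod -Uri "$es/_cat/allocation?v"

# Any explicit watermark overrides in the cluster settings.
Invoke-RestMethod -Uri "$es/_cluster/settings"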
Hope this helps someone out there avoid the pain we suffered.
The TFS Background Job Agent wasn't running on the application tier because that account didn't have the 'Log on as a service' right.
I was facing the same issue and in my case, it was resolved by restarting the TFS server (TFS is hosted locally in our office).

How to backup/restore a JIRA project configuration

Is there a way to back up a JIRA project configuration and then restore it?
The issue I have is that sometimes, when making workflow changes, I can break the whole configuration.
So I'm looking for a way to easily roll back to the previous working version of the project configuration.
Please note that I cannot roll back the whole JIRA server as it would affect other projects.
We are using the latest version of Jira Service Desk on premises.
Thanks,
Please see the full answer here:
You can't. JIRA does a full export of everything, and you can import the issues from one project from that. But that's it. If you need single-project backups with configuration, you'll need extra functionality. This is exactly the case where I would reach for Botron's tool - https://marketplace.atlassian.com/plugins/com.botronsoft.jira.configurationmanager
Whenever you publish a change to a workflow, JIRA asks whether it should save a copy of the original. If you do that, it should be easy to revert to a previous version. Still, it gets cumbersome to manage lots of copies of a workflow and to understand what changed when.
If you want a bit more control, you can also export your workflow to xml and keep that somewhere. If you need to rollback, you can import from that xml again. For more details see the documentation here.
If you want even more control, then add-ons like Botron's configuration manager can indeed be useful.

How to version assemblies (pre-build) based on work items

I'd like to automatically increment my assembly versions based on this ruleset:
Revision is always 0
Build is incremented when the only WIT in the release is a Bug fix
Minor is incremented when the release contains any WIT other than a Bug fix; Build is then always set to 0
Major is never automatically incremented
Naturally this will require a build step that can interact in some way with the project.
My first thought was to build a small Windows Service that utilizes the TFS SDK to construct the version number based on these rules and return it via a WCF call, etc. But I run into a problem there with a business requirement that all code and functionality must be replicated into a VSTS project as well (the customer owns the code and must be able to proceed without me). There's no installing such a service there, of course.
I then considered installing the service on his server, in turn making it available to VSTS. This would pass the Rube Goldberg test with flying colors.
Is there an easier way of accomplishing this task? One that can work in both environments?
EDIT
I found this, but it's doubtful that the TFS SDK is registered in the GAC for VSTS.
Can someone confirm? Is the TFS SDK available to build scripts running on VSTS?
Well now that didn't take long.
I found this and this for using PowerShell to query the REST API. No GAC/SDK needed.
-- EDIT -----------------
I've intentionally excluded content from the pages behind these links as the solutions provided are exceedingly complex; it's not possible to cover the concepts here in a single post. In case the pages disappear or the URLs change, here are the links at archive.org:
1. PowerShell and vNext Builds
2. VSTS/TFS REST API: The basics and working with builds and releases
In any case, the concept is popular and well-covered; in the event these two become inaccessible, there are many others available on the same subject matter. As quickly as I found these, someone could find more.
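To give a rough idea of the shape of the approach in those links, here is a hedged PowerShell sketch; the collection URL, project, API versions and the starting version number are placeholders, the versioning rule is the one from the question, and it assumes the build's OAuth token is exposed to scripts ("Allow scripts to access OAuth token"), so treat it as a starting point rather than a drop-in task.

# Placeholders throughout - adjust the collection/project/api-version for your TFS or VSTS instance.
$collection = "https://fabrikam.visualstudio.com/DefaultCollection"
$project    = "MyProject"
$buildId    = $env:BUILD_BUILDID
$headers    = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }

# 1) Work items associated with this build.
$refs = Invoke-RestMethod -Headers $headers -Uri `
    "$collection/$project/_apis/build/builds/$buildId/workitems?api-version=2.0"

# 2) Look up each work item's type.
$types = foreach ($ref in $refs.value) {
    (Invoke-RestMethod -Headers $headers -Uri $ref.url).fields.'System.WorkItemType'
}

# 3) Apply the ruleset from the question: a Bug-only release bumps Build, anything else bumps Minor.
$major = 1; $minor = 4; $build = 7          # current version, read from wherever you store it
if ($types | Where-Object { $_ -ne 'Bug' }) { $minor++; $build = 0 } else { $build++ }
$newVersion = "$major.$minor.$build.0"

# Surface the number to the rest of the build (vNext logging command); a later step can
# stamp it into AssemblyInfo or a build property.
Write-Host "##vso[build.updatebuildnumber]$newVersion"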

HP Quality Center

I am investigating a way to automate some of our build processes using Jenkins and HPQC. Currently, once a change to fix a defect has been checked in, we set its status to "Fixed" and then reassign the defect in HPQC from the individual developer to a team lead.
The team lead is tasked with manually deploying a build of the deliverable to the test environment, and when he does this he updates all of the defects assigned to him this way, reassigning them to the test lead, who can assign them to individual testers.
I would like to automate this process where I can. Does HPQC have a web API of some kind, so that a remote system (such as a Jenkins build server) could run a post-build action script to gather a bunch of defect numbers (those included in the build), find each defect in HPQC, and then update its status and owner?
There is a REST API for ALM / Quality Center; the documentation is accessible here:
http://support.openview.hp.com/selfsolve/manuals
You will have to sign up for an account with HP to access it. Ugh, troglodytes.
Search for "ALM REST API", download and read the newest guide and reference for your version of QC.
(We also use QC at my work. It's pretty damn bad. I should try and convince them to get or build something better.)
The answer above is a good one. I found the reference he mentions, but making use of it is not very intuitive, probably because I am such a newb. For my fellow unenlightened, you might want to use another reference I found that explains how to use it:
http://www.consulting-bolte.de/index.php/22-hp-alm/hp-alm-rest-api/115-connect-to-hp-alm-via-java-using-rest-api
The key piece of information for me was that inside all of the class files they give you in the "Example Applications" folder there is a reference to a package:
package org.hp.qc.web.restapi.docexamples.docexamples.infrastructure;
This is just another name for the files located in the guide's "infrastructure" subfolder. You do not need to go hunting for it on GitHub or anywhere else.
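To make the REST answer a little more concrete for the Jenkins scenario above, here is a hedged PowerShell sketch of a post-build defect update; the server URL, domain, project, defect id, field names and even the authentication endpoints are assumptions and differ between ALM/QC versions, so verify every call against the REST reference for your version.

$alm     = "https://alm.example.com/qcbin"     # placeholder ALM server URL
$domain  = "DEFAULT"                           # placeholder domain
$project = "MyProject"                         # placeholder project
$cred    = Get-Credential                      # an ALM user allowed to update defects

# 1) Authenticate; ALM tracks the session via cookies kept in $session.
Invoke-RestMethod -Uri "$alm/authentication-point/authenticate" -Credential $cred -SessionVariable session
# Newer ALM versions also require creating a site session afterwards:
# Invoke-RestMethod -Uri "$alm/rest/site-session" -Method Post -WebSession $session

# 2) Read a defect (the id would come from the commit message or build metadata).
$defectId = 42
Invoke-RestMethod -Uri "$alm/rest/domains/$domain/projects/$project/defects/$defectId" -WebSession $session

# 3) Update status and owner. "status" and "owner" are the usual system field names,
#    but field names can differ per project.
$body = @"
<Entity Type="defect">
  <Fields>
    <Field Name="status"><Value>Ready for Test</Value></Field>
    <Field Name="owner"><Value>test.lead</Value></Field>
  </Fields>
</Entity>
"@
Invoke-RestMethod -Uri "$alm/rest/domains/$domain/projects/$project/defects/$defectId" `
    -Method Put -Body $body -ContentType "application/xml" -WebSession $session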

TFS and one-up releases

I apologize for the length of this post but I needed to include a lot of information for proper answers. I hope this does not discourage responses...
Our shop historically has coded web sites using Classic ASP with some newer ASP.NET sites configured as web sites. As everyone knows, this means that the source files (*.asp, *.aspx, and *.aspx.vb or *.aspx.cs) are deployed to development and production servers as is.
The configuration management process was (and still is) entirely manual and includes the following steps (requirements):
Taking copies of the modified files and storing them in a "release" folder for archiving.
Taking copies of the production files that will be replaced and storing them in an "archive" folder for easier rollback.
Generating a diff report of before and after source files for code review or general reference when diagnosing a post-release issue.
The developer who coded the changes is not the person who performs the production release. The original developer is required to hand off the source files to another developer for some additional testing and production deployment.
To make the situation more difficult (not with the above..but with what I talk about below) we do not follow a formal release schedule. As individual bugs or enhancements are completed they are released. This means we could easily be making several releases to a site a week. It is even possible that a given site gets two different releases to individual pages on the same day!
Since I came on board I have been trying to transition the team to newer technologies like ASP.NET web applications and ASP.NET MVC. (We have also taken on responsibility for stand-alone applications and console utilities used for non-web processes...so my dilemma still applies.)
The difference between these technologies and the legacy technologies is the pre-compiling. Instead of deploying the code-behind files (*.aspx.vb or *.aspx.cs), a DLL or EXE gets deployed. This type of deployment package has raised several questions (issues?).
Generating difference reports when the source has been compiled. While the newly modified source files are sitting on the developer's system, the production copy is a compiled copy.
Making sure that changes related to other bugs or enhancements are not included in the particular release. This would apply to both the original developer and the person performing the release.
Allowing the original developer to pass along the changed files to another developer for build, testing, and deployment.
Up to now I was the only developer on the team working on these types of sites and applications, so the conflicts and issues mentioned above were non-existent. (I skip the difference report step and I do my own deployments.) However, I am trying to push the rest of the team to embrace this, plus allow for better distribution of bug and enhancement tasks.
We are currently using VSS but I am pushing for (and will most likely succeed in) getting us moved over to TFS. Some ideas I have are:
Setting up a separate build system for use by the developer doing the deployment. This will solve two problems: (1) different versions/patches of Visual Studio and other libraries between developers, and (2) instances where the person performing the release has checked out files locally for another change. (Of course this does not guarantee there are no differences between the build system and the original developer's machine, but at least it means the release is built from a consistent configuration.)
Using labels to tag just the modified files. My problem is that while I can identify (and pull down for a build) the modified files, how do I identify the files that need to be included in the build but have not changed? Again, the idea is to not include checked-in files that are related to un-released changes.
Using labels to tag all the files for the release (the modified files and the unchanged files). My problem with this is similar to the last one: how do I make sure that a file checked in by another developer (say they went on vacation) for an unrelated change is not labelled and included in this build?
Using the labels I could probably write a script to generate difference reports for the labelled version and the previously labelled version. If the process works properly, that should result in exactly what changes are included in the particular release (a rough sketch of the label/diff commands appears below).
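As a rough sketch of the label-based idea (the tf.exe syntax below is from memory and the server paths, label names and dates are placeholders, so check them against the TFVC command-line reference for your version):

# Label everything that makes up a release.
tf label "Site_Release_2018-05-01" '$/MyCollection/MySite' /recursive /comment:"Release for bug 1234"

# Produce the difference report by comparing this release's label with the previous one.
tf folderdiff '$/MyCollection/MySite;LSite_Release_2018-04-24' '$/MyCollection/MySite;LSite_Release_2018-05-01' /recursive

# On the build machine, pull down exactly the labelled versions before building.
tf get '$/MyCollection/MySite' /version:LSite_Release_2018-05-01 /recursive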
Any other ideas, concerns, or points of interest? While I do have some flexibility in the process, some of the requirements (like the difference report, or some way to easily view differences, and having a separate developer/deployer) are most likely untouchable.
Thank you so much for any help you can provide on this.
To keep track of different versions of the code, and to help you manage very fast release cycles (daily) versus long-term enhancements, you can use branches in TFS.
There is a ton of information out there on branching, but in general I like to try to keep things simple. For example, have one branch called "release" and another "development". Everybody works on the development branch but the code to be deployed to production is merged into the release branch right before release.
This blog post describes the process:
http://team-foundation-server.blogspot.com/2008/01/how-we-branch-our-code-in-tfs.html
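As a rough illustration of that flow from the TFVC command line (branch names and paths are placeholders, not a recommendation for your actual structure):

# One-time setup: create the Release branch from Development, then check the branch in.
tf branch '$/MySite/Development' '$/MySite/Release'
tf checkin /comment:"Create Release branch"

# Per release: merge what is ready from Development into Release, resolve, check in, then build from Release.
tf merge '$/MySite/Development' '$/MySite/Release' /recursive
tf resolve /auto:AutoMerge
tf checkin /comment:"Merge changes for the 2018-05-01 release"

# For the one-off "single bug" releases, individual changesets can be cherry-picked instead,
# e.g. merging only changeset 1234:
tf merge '$/MySite/Development' '$/MySite/Release' /recursive /version:C1234~C1234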
Well, based on my experience with VS2003 vs VS2010, for example, the project structures are different, and allowing VS to do a conversion often results in a solution that either requires a lot of refactoring or is unusable. Having said that, if you can transition everything over to TFS 2010, then one way to handle it is to set up different projects for each solution and use TFS's built-in version handling for the different releases. You can also set up a build server and schedule nightly builds. If the build is OK then you can push this version into testing and ultimately production. You should really read up on TFS because it's totally different from VSS and is definitely a huge upgrade in allowing you to do team-focused development.
P.S. TFS has really good SharePoint integration which will help you and your team keep track of all the bugs and tasks.

Resources