ClearCase Stream Configuration for a CR-Based Approach

I am having trouble creating a ClearCase stream structure that is best suited for a project that works on a ticket (CR) basis. For example, if I have 7 CRs that need to be developed simultaneously, what would be the best approach?
Let's assume that I have three streams: DEV, TEST, and PROD. My 7 CRs move from DEV to TEST through the deliver operation. Of those 7 CRs, only 4 are ready for PROD. How can I move only 4 out of the 7 CRs (now grouped into one deliver) into PROD? What stream structure enables this?
I have read many (sometimes contradictory) suggestions and I have still not managed to find a solid approach.
Regards,
Andrew

Delivering only some activities and not others is quite dangerous with UCM, mainly because you risk linking all the activities together.
PROD
  TEST
    DEV
That will work if you always deliver from DEV to TEST and from TEST to PROD (you can then deliver activities).
You could be blocked, however, by a legitimate activity file-based dependency: see "About activity dependencies in the deliver operation".
If you have any issues delivering activities, you can use findmerge to merge only the activities you want.
See more on the "all activities are linked" issue and on findmerge in "ClearCase: Making new baseline with old baseline activities".
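For illustration, here is a hedged sketch of both options; the stream, PVOB and activity names below are placeholders, and the exact selectors and options should be checked against cleartool man deliver and cleartool man findmerge for your ClearCase version:

cleartool deliver -stream DEV@/vobs/mypvob -target TEST@/vobs/mypvob -activities activity:CR1@/vobs/mypvob,activity:CR4@/vobs/mypvob

If such a selective deliver is blocked because the activities are linked together, the findmerge fallback is run from a view attached to the target stream and merges only the change set of the activities you want:

cleartool findmerge . -fcsets activity:CR4@/vobs/mypvob -print
cleartool findmerge . -fcsets activity:CR4@/vobs/mypvob -merge -gmerge

(The -print form previews the merges without performing them.)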

Related

Ordering messages with Pub/Sub on GCP

I am new to Dataflow and Pub/Sub in GCP.
I need to migrate our current on-prem process to GCP.
The current process is as follows:
We have two types of data feeds:
Full feed – an ad hoc job. The full XML is ~100 GB (a single, very complex XML containing the complete data set; the ETL job processes this XML and loads it into ~60 tables).
Separate ETL jobs process the full feed: they create load-ready files, and all tables are truncated and reloaded.
Delta feed – every 30 minutes we need to process delta files (XML files that contain only the changes from the last 30 minutes).
The source system pushes XML files every 30 minutes (more than one; each file has a timestamp). A scheduled ETL process picks up all the files produced by the source system, processes them, and creates three load-ready files (insert, delete and update) for each table.
Schedule – ETL jobs are scheduled to run every 5 minutes; if the current run takes longer than 5 minutes, the next run does not trigger until the current one completes.
The order of file processing is very important (the ETL job takes care of this); all files must be processed in sequence.
At the end of the ETL process, the load-ready files are loaded into the tables (on the mainframe).
I was asked to propose a design to migrate this to GCP, with two processes in GCP as well: full and delta. My proposed solution should be suitable for both feeds.
Initially I thought of the design below.
Pub/sub -> DataFlow -> mySQL/BigQuery
Then I came to know that Pub/Sub does not guarantee that files are processed in sequence/order. After doing some research I learned that Google recently introduced an ordering key concept for Pub/Sub, which ensures that messages are processed in order. The Google Cloud docs mention that this feature is in beta.
I have two questions:
Has anyone used the ordering key concept in Pub/Sub in a production environment? If yes, did you face any challenges while implementing it?
Is this design suitable for the above requirement, or is there a better solution in GCP?
Also, is there any alternative to Dataflow?
I came to know that Pub/Sub can handle messages of at most 10 MB, while each of our XML files is more than ~5 GB.
As mentioned by @guillaume blaquiere, the beta launch phase of a product brings some restrictions, but they are mostly related to product support:
At beta, products or features are ready for broader customer testing and use. Betas are often publicly announced. There are no SLAs or technical support obligations in a beta release unless otherwise specified in product terms or the terms of a particular beta program. The average beta phase lasts about six months.
Commonly, the Cloud Pub/Sub message ordering feature works as intended; if you run into something that needs the developers' attention, it is highly appreciated if you send a report via the Google Issue Tracker.
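For reference, publishing with an ordering key looks roughly like the following (a minimal sketch using the Python client google-cloud-pubsub; the project, topic, bucket and key names are placeholders). Because a Pub/Sub message is capped at 10 MB, the usual pattern for multi-GB XML files is to land them in Cloud Storage and publish only a reference to the object, which the pipeline then reads:

from google.cloud import pubsub_v1

# Enable ordering on the publisher; the subscription must also have message
# ordering enabled, and the docs recommend a regional endpoint for ordered publishing.
publisher = pubsub_v1.PublisherClient(
    publisher_options=pubsub_v1.types.PublisherOptions(enable_message_ordering=True)
)
topic_path = publisher.topic_path("my-project", "delta-feed")  # placeholder names

# Publish a pointer to the XML in Cloud Storage rather than the XML itself.
gcs_uri = "gs://my-feed-bucket/delta/20240101T0930_delta.xml"  # placeholder object
future = publisher.publish(
    topic_path,
    gcs_uri.encode("utf-8"),
    ordering_key="delta-feed",  # messages sharing a key are delivered in publish order
)
print(future.result())  # message id once the publish succeeds

Messages that share the same ordering key are delivered to an ordering-enabled subscription in the order they were published, which covers the "process delta files in sequence" requirement as long as all files belonging to one sequence use the same key.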

How to schedule an on-premise Azure DevOps build to run every 5 minutes?

Never mind the rationale: I have a case where a build needs to run every 5 minutes, and the on-premise installation does not support schedules in the YAML.
So, how do we do it? I can probably use the REST API, but that sucks, because it seems I would either create a one-off script or a script for only very simple types of schedules. Building a reusable solution that could be used in general for other builds seems involved. So, instead of concentrating on my business I need to go sideways and cover for the deficiencies of the on-premise version of Azure DevOps.
I wonder if there is a better way.
I understand your concern. However, this is not supported at present with an on-premise TFS server.
The UI for defining time-based build triggers isn't flexible enough. It can only support fixed times on days of the week.
Just as you have pointed out in the comments, a need to run a build every 5 minutes would require us to create 288 schedules, which is tedious.
Actually, there is already a user voice request for this:
Scheduled builds - More flexible timing configuration
https://developercommunity.visualstudio.com/idea/365630/scheduled-builds-more-flexible-timing-configuratio.html
Multiple people have commented and echoed it. After going through the marketplace, I haven't found a particularly appropriate workaround. Sorry for any inconvenience. You could monitor the status of the user voice above.
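In the meantime, if you do go the REST API route the asker mentions, the usual pattern is an external scheduler (cron or Windows Task Scheduler) that queues the build definition every 5 minutes. A minimal sketch, assuming Python with the requests library; the collection URL, project, definition ID and PAT below are placeholders, and the required api-version depends on your Azure DevOps Server / TFS version:

import requests

SERVER = "https://tfs.example.local/DefaultCollection"  # placeholder collection URL
PROJECT = "MyProject"                                   # placeholder project
DEFINITION_ID = 42                                      # placeholder build definition id
PAT = "personal-access-token"                           # placeholder PAT

# Queue Build endpoint: POST {collection}/{project}/_apis/build/builds
url = f"{SERVER}/{PROJECT}/_apis/build/builds?api-version=5.0"
resp = requests.post(url, json={"definition": {"id": DEFINITION_ID}}, auth=("", PAT))
resp.raise_for_status()
print("Queued build", resp.json().get("id"))

A cron entry such as */5 * * * * python queue_build.py (or an equivalent Task Scheduler job) then provides the every-5-minutes trigger that the YAML schedule cannot.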

Adobe Analytics | Merge data from multiple report suites

We are capturing information for consumer sites in multiple different report suites.
Is it possible to merge all of this data into a parent report suite without adding that parent report suite's ID to the s_account variable?
For example
Site 1 uses report-suite1
s_account = "report-suite1";
Site 2 uses report-suite2
s_account = "report-suite2"
Instead of using
s_account = "report-suite1,report-suite2"
Is it possible to merge the data into a 3rd virtual account from the Reports console itself?
The only way you can route data to a separate fully fledged report suite is either via JavaScript (e.g. setting s_account as you have shown in your post) or by asking Adobe to create a VISTA rule.
You didn't state your reasons for not wanting to throw a "global" rsid into your js code. Is it because you don't have the technical resources/ability to do it? If so, and if you want a full third rsid for all the data to go to, then you can ask Adobe to create a VISTA rule. It should be fairly easy for them to set up, but they will charge you for it. And I think they will create one for each report suite. I don't generally recommend going this route unless you really have to, though, mostly because of the cost, but also because you don't have personal visibility into it.
Alternatively, if you do have the tech resources to update the js code, but the cost of throwing another rsid into the mix is an issue (from extra server hits), then you may want to consider replacing all of your report suites with a single global report suite, e.g.
s_account='report-global';
Then, create a Virtual Report Suite for each site. You can go to Components > Virtual Report Suites to set them up. The TL;DR is you create them by pointing at your report-global rsid as the source and then creating a segment based off something unique to the site (e.g. the domain, or maybe some eVar with a site-specific value).
The major downside to going the virtual report suite route is that historical data from your previous report suites will not be available in the same place as this new global report suite and its virtual report suites. But it's a "one time migration" thing, and the historical data won't be lost; you'll just have some extra work on your end referencing it in the old rsids, especially if you want to compare historical data to current data in the new (virtual) rsids.
The second major thing to consider is unique limits. I'm not sure how much traffic or how many unique values your variables get on your sites, but there is a monthly unique value limit you may have to consider with all of the sites going to the same report suite. Beyond looking at tricks to make values less unique on a case-by-case basis (e.g. removing the query string from URLs), there isn't a good way to solve for this except to stick with separate rsids. Well, Adobe will increase the unique limit on certain vars if you ask them, but it will cost you.
Another alternative to consider is a Rollup report suite. Go to Admin > Report Suites, where your current report suites are listed; to the left you should see Rollups and an Add link next to it. This will create a Rollup report suite made up of data from one or more report suites.
Note though that a Rollup report suite is not the same as a full fledged report suite. Please refer to the link above for full details and limitations, but the main benefit is that it won't cost you anything except the couple of minutes it takes to set it up in the interface. As for its limitations, the main points of note are that you only get aggregated data, data is not deduped between the rsids, and many reports are limited or not available. In practice, I rarely ever see anybody actually go this route because it's too limited. But hey, maybe it's good enough for you.

Team Foundation Server - Manage external teams

Our company has a small development team in-house but we mostly outsource our customer projects to external consulting firms which we don't manage directly. We only interact with their project manager and maybe a team lead.
I'm implementing TFS 2010 and Scrum for our internal team for Project Management, Version Control and Sharepoint shared documents access.
My problem is how to manage the external teams.
They won't use our TFS for version control, and I can't force them to use Scrum and report as such (reporting at the task level and adding remaining hours).
The solution I came up with is this:
Use the “MSF for Agile Software Development v5.0” template in Team Foundation Server.
Break the project into user stories and then create a task for each.
The tasks have these fields:
Original Estimate
Since we’ll track percentage of completion, this will always be 100.
Remaining
This is the percentage of remaining work.
Completed
This is the percentage of completed work.
Their team lead will update the remaining work in percentage for each user story (on the task level).
If progress is reported correctly, I can print a "Stories Overview" report periodically and see the percentage complete for each user story.
I'm sure there must be a better way out there, and I'd appreciate any help in getting pointed in the right direction.
Thanks
We are basically doing the same thing ... I have 10 in-house developers and teams around the world working on their projects. Most of the work we do overlaps between internal and external. We are using TFS 2010. We break a piece of development into user stories, then into lots of tasks, and eventually bugs. We view the status of the external projects by looking at the breakdown of work on the individual work items.
Part of the development process flow is to get the code into TFS source control; control of the work logs changes as the work comes back into our system.
The external PMs then use the web interface or spreadsheet upload to update the data on these logs (including the time spent / work remaining) so we can see the state of the work. You don't need a code upload to set a work item to test / complete.
The process flow we have for the external work is shown below; on a given user story item you can then see the state of development for all of its tasks.
To Spec
Specified
Spec Agreed
Open For Work
WIP
Development Complete
External Test
Source Added to TFS
Delivered to Internal Test
Internal Test
Complete

Tools to assist managing the application promotion process in an enterprise environment

I am curious on how others manage code promotion from DEV to TEST to PROD within an enterprise.
What tools or processes do you use to manage the "red tape", entry/exit criteria side of things?
My current organisation is half stuck between some custom online-forms functionality and paper-based processes to submit documents, gather approvals and conduct reviews.
All of this is left in the project managers' hands: tracking what has been submitted, passed review or been approved, and advising management if there are any roadblocks that may need approval to be "overlooked" before an application can be promoted to the next environment.
A browser-based application would be ideal... so what's out there? Please show me that your google-fu is better than mine.
It's hard to find a good one via Google. There is a vast array of tools out there for issue management, so I'll mention what we use and what we would like to use.
We currently use Serena products. They have worked well for us in the past. TeamTrack is our issue management tool and handles the life cycle of any issue we work on. Version Manager is our source control and has the feature of implementing promotion groups like DEV, TEST and PROD. We use DEV, TSTAGE, TEST, PSTAGE and PROD to signify the movement from one to the other, but it's much the same. The two products integrate nicely so that the source associated with the issues is linked, but we have no build process set up in this environment. It's expensive, but it works well.
We are looking to move to a more common system using Jira for issue management, Subversion for source control, Fisheye to link the two together and CruiseControl for build management. This is less expensive, totaling a few thousand for an enterprise license, and provides all the same features, with the added bonus of SVN, which is a very nice code version manager.
I hope that helps.
There are a few different scenarios that I've experienced over the years:
Dev -> Test: There is usually a code-freeze date that stops work on new features; the code that has been tagged/labelled/archived gets built, and the test environment gets that build. It then gets copied onto the machines and the tests go fine. This is also usually the least detailed of any push.
Test -> Prod: This requires the minor change that production has to go down, which can mean that a "gone fishing" page goes up or IIS doesn't have any sites running while the code is copied over again. There are special cases where a load balancer can act as a switch, so the promotion happens and none of the customers experience any downtime, as the ones on the older server move over once their session ends.
To elaborate on that switch idea, the setup is to have two potentially live servers with just one taking requests; the load balancer sends all the traffic to that machine and is switched over once the other server has the updated code and is ready to go live.
There can also be a staging environment between test and production, where the process is similar in that there is a set date when the promotion happens.
Where I used to work there would be merge days where a developer spent most of a day in Perforce merging code so that it could be promoted from one environment to another.
Now there are a couple of cases where this isn't used:
"Hotfixes" or "Hot patches" would occur where I used to work and in this case the specific files were copied up into the staging and production environments on its own since the code change had to get into Production ASAP since something broke in production or some new thing that had to get done that takes 2 minutes gets done. In this case, the code change getting pushed in had to be reviewed and approved before going out.
Those are the different approaches I've seen used where generally there are schedules and timelines potentially have to be changed or additional resources brought in to make a hard date like if a conference is on a particular weekend that such and such is ready for that.
Of course in a few places there has been the, "Oh, was that broken? Let me see..." and a few minutes later, "No, see it isn't broken for me," where someone changed things without asking permission or anything where a company still has what they call "cowboy programming."
Another point is the scale of the release:
1) Tiny - This is the case where one web page goes up so that user X can do Y.
2) Small - A handful or so of files; not really complicated, but not exactly trivial either.
3) Medium - Where going from one environment to another requires changing a bunch of files and usually involves scripts to do the move.
4) Big - Where there are scheduled promotions and various developers are asked for who is taking which shifts when the live push is done. I had this in a case where there was a data migration to do in addition to a release of some new e-commerce sites.
5) Mammoth - Where everything is brand new including how this would be used. I don't think I've ever seen one of this size but I'd imagine Microsoft or Google would have releases of this size.
Most releases fall somewhere in that spectrum, so the amount of planning and preparation can vary quite a bit; and let's not forget that regulatory compliance can be its own pain in getting some things done.
