We use the Microsoft Graph reports functionality. We've found that for new tenants, reports are not available for some time. These reports are important to our workflow.
Is there any documentation that describes when reports become available?
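For context, here's a minimal sketch of the kind of call we make, with a retry loop as a stopgap while a new tenant's reports are unavailable. The `getOffice365ActiveUserDetail` report, the retry/delay values, and treating any error status as "not provisioned yet" are illustrative assumptions, not documented behavior:

```python
import time
import requests

# Illustrative report endpoint -- the other /reports functions behave similarly.
REPORT_URL = ("https://graph.microsoft.com/v1.0/reports/"
              "getOffice365ActiveUserDetail(period='D7')")

def fetch_report(access_token, retries=6, delay_s=3600):
    """Fetch a usage report, retrying while the new tenant has no data yet."""
    headers = {"Authorization": "Bearer " + access_token}
    for _ in range(retries):
        resp = requests.get(REPORT_URL, headers=headers)
        if resp.ok:
            return resp.text  # the report comes back as CSV
        # Assumption: a new tenant answers with an error until reporting
        # is provisioned, so wait and try again.
        time.sleep(delay_s)
    raise RuntimeError("report still unavailable after %d attempts" % retries)
```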
Can someone please tell me whether it is a good idea to use TFS as a ticket manager for end users and, at the same time, as a backlog for the development team?
I'm not totally sure which kind of ticket management system you are referring to. If you mean tickets raised by the end users of an organization whenever they encounter an event that interrupts their workflow, then it sounds like you are looking for a helpdesk ticketing system. Such a system acts as documentation of a particular problem, its current status, and other associated information. These tickets are routed to ticketing software where they are categorized, prioritized, and assigned to different agents according to organizational norms.
The agents then analyze the tickets, suggest appropriate fixes or workarounds, and resolve the issues. As a central repository of all these tickets, IT ticketing software provides the context of an issue's history and its resolution.
To be honest, that is not what TFS is meant to do. You may have to look at some other system to handle this.
TFS provides integrated tools to support collaborative software development, including Git repositories, continuous integration and continuous deployment (CI/CD), and interactive Kanban boards.
You could also collect bug/feature requests and related info from end users, then track them in TFS.
However, it's not advisable to let end users access your TFS system directly and file work items themselves, because you would need to assign each of them a license and the corresponding permissions.
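Instead, a small intermediary service can file the work items on the users' behalf. Here's a rough sketch, assuming TFS 2015 or later (or Azure DevOps), where the work item REST API is available; older TFS versions would need the client object model instead. The collection URL and PAT are placeholders:

```python
import requests

# Placeholder values -- substitute your own collection URL, project, and
# personal access token (PAT).
TFS_PROJECT_URL = "https://tfs.example.com/DefaultCollection/MyProject"
PAT = "personal-access-token"

def file_bug(title, description):
    """Create a Bug work item on behalf of an end user, so the user never
    needs direct TFS access or a license."""
    url = TFS_PROJECT_URL + "/_apis/wit/workitems/$Bug?api-version=1.0"
    patch = [
        {"op": "add", "path": "/fields/System.Title", "value": title},
        {"op": "add", "path": "/fields/System.Description", "value": description},
    ]
    resp = requests.post(
        url,
        json=patch,
        # Work item creation expects JSON Patch, not plain JSON.
        headers={"Content-Type": "application/json-patch+json"},
        auth=("", PAT),  # PATs are sent over basic auth with a blank username
    )
    resp.raise_for_status()
    return resp.json()["id"]
```

A helpdesk form or mailbox watcher can then call `file_bug()` for each incoming report, keeping end users out of TFS entirely.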
TFS fully supports bug tracking and traceability through to the code that was changed.
Create your product backlog
The out-of-the-box Bug work item is specifically designed to work with the test tools and the planning tools. You could also use it with a few customizations to meet your requirements.
If you want to use TFS for a ticketing system, you need to create each ticket as a Bug or Task child-linked to the relevant backlog item or feature. Each Task/Bug then has to be tracked per sprint.
I'm partway through development of a build wallboard that displays the last 20 automated builds undertaken by the build server. All looks well and good, and when a build has tests, I'm automatically recording the test results (to be used interactively later).
I love building instant messaging bots, and since my new workplace uses Lync (2010) I have started building a bot interface that works over it.
I have managed to get it to start a conversation with the people who requested the build; that's about as far as I've got so far. I'm hoping to let users ask 'why?' and be answered with the build errors or which tests failed, etc. Thanks for reading through the backstory!
The Question
Should I continue writing a bot on Lync? It seems like a pretty cool platform and protocol, but it also seems very proprietary and locked in. Are there more open platforms I should aim for that may end up being supported by Microsoft's unified messaging doodah with Lync 2013?
Thank you for your time, I hope this question is specific enough.
If your organization has already invested in Lync and its infrastructure, it's hard to imagine them giving up on it unless it has major issues.
Of course there are other options, but it's not like Lync is going away anytime soon, and Lync 2013 recently came out, so Microsoft is still investing in it. And there is a wealth of information/documentation/community (i.e. TechNet) around the Lync SDKs, so that is also a bonus.
Also, Lync is built on top of SIP and well-known media codecs, which are not proprietary. (Their SDK is proprietary, of course.)
When I see tutorials regarding Cucumber, I see feature examples like "manage users" with scenarios such as add user, delete user, etc. This is all very well when starting a project.
However, I would like to use something like Pivotal Tracker with third-party tools such as pickler, and have features as stories (the Pivotal Tracker concept) which can be derived from requests and bug reports (as they may also be referred to in other project and code management tools).
The problem I see is that the number of feature files could become quite large, because a new one could be started for each request. Also, the number of scenarios in each could be low, because they would be spread over multiple feature files created over different periods. How would you organise them?
Also, won't testing become too slow over time, and how can this be reduced?
Have a read of this: Features != Stories
Any suggestions for an accurate Web Log analysis tool to generate reports on the IIS logs? We used WebTrends, but I don't feel it was accurate.
To analyze weblogs, I don't think you can go wrong with Analog: http://www.analog.cx/
If you are analyzing your own logs, which are often huge files, you will want the fastest analyzer you can find. Analog is fast.
You'll want one that's been around a while and is still supported. Analog just celebrated its 10th birthday.
Analog claims to be the most popular logfile analyser in the world.
It supports multiple languages.
Did I say it's free and open source?
As far as accuracy goes, no tool gives perfect results. JavaScript often fails to catch hits. Trying to track individual people's paths through a website (i.e. for analytics purposes) is fraught with problems. And even trying to differentiate hits versus visits and screening out the bots is more of a black art than a science.
What is best is simply to have a tool that gives decent basic statistics that tell you what you need to know.
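To give a flavour of how crude that bot screening usually is, here's a naive sketch that drops log lines whose user agent looks bot-like. The patterns are illustrative, and the field index is an assumption based on the default IIS W3C field order; check the #Fields: header of your own logs.

```python
import re

# Anything whose user agent matches these patterns gets dropped.
# The list is illustrative -- real bot detection is far messier than this.
BOT_PATTERN = re.compile(r"bot|crawler|spider|slurp", re.IGNORECASE)

def human_hits(log_lines):
    """Yield W3C log lines whose cs(User-Agent) field doesn't look like a bot.

    Assumes the default IIS W3C field order, with the user agent in
    column 9 -- verify against the #Fields: header of your own logs.
    """
    for line in log_lines:
        if line.startswith("#"):
            continue  # skip W3C header lines
        fields = line.split(" ")
        if len(fields) > 9 and not BOT_PATTERN.search(fields[9]):
            yield line
```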
I've looked at other tools, such as Deep Log Analyzer: http://www.deep-software.com/, which attempts to do analytics from your web logs. But speed was a problem. They claim their new version 3.5 (April 2008), which I didn't try, has improved performance. The big advantage of a program like this is the advanced reporting you can do, including custom SQL queries. You have to purchase their professional version ($200) to do most of the analytics and custom queries. If Analog is too simple for you, then try the free version of Deep Log Analyzer.
And you can also try Microsoft's own Log Parser, as was the recommended answer in: https://stackoverflow.com/questions/157677/a-good-iis-log-viewer-for-large-log-files.
But you will need some extra skills to use it.
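For example, a typical first query reports the top URLs by hit count. This sketch shells out to LogParser.exe from Python; the log path is a placeholder, and it assumes IIS W3C-format logs:

```python
import subprocess

# Assumes LogParser.exe (Microsoft Log Parser 2.2) is on PATH; the log
# path below is a placeholder for your own IIS log directory.
query = (
    "SELECT TOP 10 cs-uri-stem, COUNT(*) AS Hits "
    "FROM C:\\inetpub\\logs\\LogFiles\\W3SVC1\\*.log "
    "GROUP BY cs-uri-stem ORDER BY Hits DESC"
)
subprocess.run(["LogParser.exe", query, "-i:IISW3C", "-o:CSV"], check=True)
```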
What do you want to analyze from your logs? There are a bunch of tools out there, free or paid, that will go through the logs and spit out a great variety of figures. Some have real meaning; others are best taken with a grain of salt.
What none will show you is "How many people are actually reading my wonderful web pages". Those that attempt to show "distinct site visitors" or any detailed metrics are at best a rough approximation to an indication of a vague trend...
But for what it's worth, we use Analog.
SHORT ANSWER:
You are correct to question the results; log analysis is not adequate to report actual traffic.
LONGER ANSWER:
WebTrends is a great tool for what it delivers. But as a previous administrator of a WebTrends installation, I found that web logs are notoriously bad at capturing metrics of interest.
For instance, if there is any caching in your web delivery stack (or on the consumer's side; I'm shaking my fist at YOU, AOL!), then your web logs instantly stop reflecting your site's actual activity. This is because log analysis assumes that every user consumption translates to an HTTP request back to the web server, and thus gets recorded in the IIS logs. With a cache in the way, that assumption does not hold.
In the future, if you want more reliable results, you ultimately need to ensure there is a way to bust any caching strategy. The obvious answer is dynamic content. But if you do not want to rewrite all of your content in such a fashion, just ensure your web traffic analysis uses a dynamic call.
WebTrends actually offers a solution to this problem, called the SDC server. This is exactly what Google Analytics offers as well: a JavaScript call back to the analysis server.
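The idea, in miniature: serve a tiny dynamic beacon with no-cache headers so every page render produces a server hit your logs can count. This Flask sketch only illustrates the principle; it is not how the SDC server actually works, and the endpoint name is made up:

```python
from flask import Flask, Response, request

app = Flask(__name__)

# The classic 1x1 transparent GIF tracking pixel.
PIXEL = (
    b"GIF89a\x01\x00\x01\x00\x80\x00\x00"      # header, 1x1, global palette
    b"\x00\x00\x00\xff\xff\xff"                # two palette entries
    b"!\xf9\x04\x01\x00\x00\x00\x00"           # transparency extension
    b",\x00\x00\x00\x00\x01\x00\x01\x00\x00"   # image descriptor
    b"\x02\x02D\x01\x00;"                      # minimal image data + trailer
)

@app.route("/beacon.gif")
def beacon():
    # The query string carries whatever you want to count, e.g. the page URL.
    app.logger.info("view page=%s", request.args.get("page"))
    resp = Response(PIXEL, mimetype="image/gif")
    # No-cache headers defeat browser and proxy caches, so every render
    # of the page fetches the beacon and shows up in the server logs.
    resp.headers["Cache-Control"] = "no-store, no-cache, must-revalidate"
    resp.headers["Pragma"] = "no-cache"
    return resp
```

Each page then embeds something like `<img src="/beacon.gif?page=/products">`, and the analyzer counts beacon hits instead of raw page hits.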
...I could go for days on this. If you want more specific information, comment back. ;)
EDIT: With WebTrends specifically, it is quite important to configure session tracking beyond their default IP/user-agent configuration. If your web server assigns a session cookie, you will find this increases your reliability, especially for differentiating between users who may sit behind the same NAT.
I have had really good luck with SmarterStats, from SmarterTools.
There is a free logging package from MSFT for viewing this information using SQL Reporting Services. Google it.
Doing it with the logs is only a good idea if it's internal. I'd use Google Analytics for anything on the internets.
I have been using Summary, which is paid-for software, for years, and love it. The cost of updates is getting to me, though, and paying for an update just to get user agent string updates is getting bothersome. Not that there aren't other fixes; I just tend not to need them.
Would anyone who has used both Summary and Analog care to compare them?
Look at the XpoLog log analysis platform for web application server and web server logs. It is a log management and analysis platform that integrates with web server logs, creates reports, provides search and a log viewer, and also monitors for problems. XpoLog
My company are imposing Jira and Zephyr on us for defect tracking and test management. We're quite happily using TFS 2008 for both these jobs at the moment, but management have never let the fact that something isn't broken stop them from trying to fix it.
Are there any tools/plug-ins that will allow us to synchronise between the remotely hosted repositories and our in-house TFS server?
Probably too late, but the company might want to look at the new features for bug tracking and manual testing coming in the 2010 release. Nice as Jira is, I doubt it will integrate as well with the historical debugger, the ability to include a video of the test run and information on the test environment, and have it all be part of the work item.