I want to know how many people have visited a particular issue in order to gauge its popularity (I can't rely on the number of watchers of the issue). Is there any way (the JIRA DB or anything else) to find out how many people (just the count) have visited a particular issue?
The question could also be phrased as: what are the top 10 most-visited issues in a given week?
Seb's earlier answer provides a possible solution for JIRA Cloud. I am not aware of any off-the-shelf product for behind-the-firewall installations of JIRA, and I do not believe that views are tracked anywhere in the JIRA database.
For behind-the-firewall instances, you could certainly write a script to parse the JIRA access logs (stored in $JIRA_HOME/logs/access_log*) to count issue accesses that way.
The JIRA access logs are stored in a format that is similar to the Apache access log format, so you just need to parse out accesses to individual issues by looking for URLs of the format "http://MYJIRA/browse/ABC-123".
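For example, here is a minimal Python sketch of that approach. It assumes Apache-style log lines and the standard /browse/KEY URL pattern; adjust the path and regex to your installation.

    import re
    from collections import Counter
    from glob import glob

    # Matches issue views like "GET /browse/ABC-123 HTTP/1.1" in
    # Apache-style access log lines; adjust if your context path differs.
    ISSUE_RE = re.compile(r'"GET /browse/([A-Z][A-Z0-9]*-\d+)[ ?"]')

    counts = Counter()
    for path in glob('/path/to/jira-home/logs/access_log*'):  # $JIRA_HOME/logs
        with open(path, errors='ignore') as f:
            for line in f:
                m = ISSUE_RE.search(line)
                if m:
                    counts[m.group(1)] += 1

    # Top 10 most-visited issues across the parsed logs
    for issue, n in counts.most_common(10):
        print(issue, n)

That also answers the "top 10 in a week" variant: feed it only the log files for the week you care about.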
Out of the box, this is not possible: JIRA does not log view counts for individual issues.
You could check whether there is a plugin for this at https://marketplace.atlassian.com/search?application=jira
For example, https://marketplace.atlassian.com/plugins/communardo.connect.usage.statistic.addon looks like it could fit your requirements, but I have no personal experience with it.
I would like to analyze the search terms submitted by our JIRA users so we can formalize best practices for creating subjects and descriptions.
I'd like to avoid having to pull the search terms out of log files, where I believe they live if the right log levels are set.
I am familiar with jira-python and server-side JIRA customization, but this one's stumping me.
Is there a programmatic way to generate a list of the search terms submitted to JIRA? (Client-side/API is ideal, but server-side is okay too.)
Appreciate any advice folks can share, pointers to references, and so forth!
There is no API for this, but you can get this information from the webserver logs or by looking at the saved searches directly in the database. Note that these would be only the saved filters, not all queries performed.
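If you go the database route, a sketch like the following might work. This is hedged: table and column names vary by JIRA version (in the versions I have seen, saved filters live in a table called searchrequest), so verify against your own schema first.

    import psycopg2  # assuming JIRA runs on PostgreSQL; swap in your DB driver

    conn = psycopg2.connect(dbname='jiradb', user='jira',
                            password='secret', host='localhost')
    cur = conn.cursor()

    # Saved filters: names are version-dependent -- schemas I have seen use
    # searchrequest(filtername, authorname, reqcontent), where reqcontent
    # holds the JQL (or legacy XML) of the filter.
    cur.execute('SELECT filtername, authorname, reqcontent FROM searchrequest')
    for name, author, query in cur.fetchall():
        print(name, author, query)

From there you could tokenize reqcontent to pull out the actual search terms.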
What about catching everything that is submitted to the search field (client-side) and logging it on the server side (POST to a servlet)?
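In JIRA that servlet would be Java, but the idea is small enough to sketch in Python. Everything here is hypothetical (the /log-search endpoint and the term field are made up); a client-side hook would POST each submitted search term here before the real search runs.

    import logging
    from flask import Flask, request

    app = Flask(__name__)
    logging.basicConfig(filename='search_terms.log', level=logging.INFO)

    # Hypothetical endpoint the client-side hook POSTs search terms to.
    @app.route('/log-search', methods=['POST'])
    def log_search():
        logging.info('search: %s', request.form.get('term', ''))
        return '', 204

    if __name__ == '__main__':
        app.run()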
I am working with Adobe SiteCatalyst.
I am able to download the groups with the column names Group Name, Description, Users, and Report Suites.
In the above list I am getting counts of users and report suites, but I need the exact names instead of the counts. How can this be done using Adobe SiteCatalyst?
Please help.
I don't know of a built-in report that will get you this. If you have a small number of users and you don't need to do it very often, then you can do this manually, but it would be a pain.
If you think it is worth investing some time in this, because you have a lot of users and/or you need to run this report often, then you can use the Enterprise API to automate it.
You will need to create a user with the Web Services permission. Then use that username and shared secret (be careful to use the exact format from Admin Tools -> Company Settings -> Web Services, as the username takes a "loginCompany:username" format and there is a special Shared Secret).
Then you can use the APIs, assuming you have some development experience. This is a good starting point: https://developer.omniture.com/en_US/get-started/api-explorer#Permissions.GetGroup (also look at GetGroups).
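To give a rough idea, here is a hedged Python sketch of calling the legacy Omniture REST API with its WSSE-style authentication. The endpoint version, method parameters, and response fields may differ for your company, so check them in the API explorer linked above.

    import base64, hashlib, os
    from datetime import datetime
    import requests

    USERNAME = 'loginCompany:username'  # exact format from Company Settings -> Web Services
    SECRET = 'your-shared-secret'

    def wsse_header(username, secret):
        # WSSE token as used by the legacy Omniture/SiteCatalyst REST API:
        # digest = base64(sha1(nonce + created + secret))
        nonce = os.urandom(16).hex()
        created = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
        digest = base64.b64encode(
            hashlib.sha1((nonce + created + secret).encode()).digest()).decode()
        return ('UsernameToken Username="%s", PasswordDigest="%s", '
                'Nonce="%s", Created="%s"' % (
                    username, digest,
                    base64.b64encode(nonce.encode()).decode(), created))

    resp = requests.post(
        'https://api.omniture.com/admin/1.4/rest/?method=Permissions.GetGroups',
        headers={'X-WSSE': wsse_header(USERNAME, SECRET)},
        data='{}')  # method arguments go in the JSON body; GetGroups needs none
    # Each group in the response should carry its member list,
    # which gives you the names rather than just the counts.
    print(resp.json())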
Best of luck C.
On the webmaster's Q and A site, I asked the following:
https://webmasters.stackexchange.com/questions/42730/how-does-indeed-com-make-it-to-the-top-of-every-single-search-for-every-single-c
But, I would like a little more information about this from a development perspective.
If you search Google for anything job related, for example, Gastonia Jobs (City + jobs), then, in addition to their search results dominating the first page of Google, you get a URL structure back that looks like this:
indeed.com/l-Gastonia,-NC-jobs.html
I am assuming that the "l" stands for location in the URL structure. If you do a search for an industry-related job, or a job with a specific company name, you will get back something like the following (Microsoft jobs):
indeed.com/q-Microsoft-jobs.html
With just over 40,000 cities in the USA, I thought, OK, maybe it's possible they looped through them and created a page for every single one; that would not be hard for a computer. But then the site is obviously dynamic, as each of those pages has tens of thousands of results, paginated 10 at a time. The "q" above obviously stands for query. The locations I can understand, but they cannot possibly have created a web page for every single query combination, could they?
OK, it gets a tad weirder. I wanted to see if they had a sitemap, so I typed "indeed.com sitemap.xml" into Google and got the response:
indeed.com/q-Sitemap-xml-jobs.html
Again, I searched for "indeed.com url structure" and, as I mentioned in the other post on Webmasters, I got back:
indeed.com/q-change-url-structure-l-Arkansas.html
Is indeed.com somehow using programming to create a web page on the fly based on my search input into Google? If not, how are they able to have a static page for millions upon millions of possible query combinations, have them paginate dynamically, and then have all of those dominate Google's first page of results (albeit that last question may be best for the Webmasters Q&A)?
Does the JavaScript in the page somehow interact with the URL?
It's most likely not a bunch of pages. The "actual" page might be http://indeed.com/?referrer=google&searchterm=jobs%20in%20washington. The site then cleverly produces a human-readable URL using URL rewriting, fetches jobs in the database that match the query, and voilà...
I could be dead wrong, of course. Truth be told, the technical aspect of it can probably be solved in a multitude of ways. Every time a job is added to the site, all the pages needed to match that job might be created, producing an enormous number of pages for Google to crawl.
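As an illustration of the URL-rewrite theory, here is a toy Python/Flask sketch (entirely hypothetical; I have no knowledge of Indeed's actual stack). Two dynamic routes serve every /q-QUERY-jobs.html and /l-LOCATION-jobs.html URL by parsing the slug and querying a jobs store on the fly, so no static pages ever need to exist.

    from flask import Flask, abort

    app = Flask(__name__)

    # Toy stand-in for the real jobs database / search index.
    JOBS = {
        ('q', 'microsoft'): ['Software Engineer', 'Program Manager'],
        ('l', 'gastonia,-nc'): ['Nurse', 'Teacher'],
    }

    # One route pattern handles millions of "static-looking" URLs.
    @app.route('/q-<slug>-jobs.html')
    def by_query(slug):
        jobs = JOBS.get(('q', slug.lower()))
        return '<br>'.join(jobs) if jobs else abort(404)

    @app.route('/l-<slug>-jobs.html')
    def by_location(slug):
        jobs = JOBS.get(('l', slug.lower()))
        return '<br>'.join(jobs) if jobs else abort(404)

Pagination falls out the same way: a ?start=10 query parameter (or another URL segment) just changes which slice of results the route fetches.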
This is a great question; however, it remains unanswered, on the grounds that, first, a basic Google search using
site:indeed.com
returns over 120MM results, and second, a query such as "product manager new york" ranks #1 in the results. These pages are obviously pre-generated, which is confirmed by the fact that the page cached by the search engine (sometimes several days earlier) has different results from a live query on the site.
Easy: when Google's search bot crawls the pages on Indeed, or any other job search site, those pages are dynamically created. Here is another site that works in a similar way to Indeed: http://jobuzu.co.uk, which I run.
PHP is your friend here, and Indeed don't just use standard databases: look into Sphinx and Solr, as they offer full-text search with better performance than MySQL and the like.
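For a flavor of the Solr side, here is a small hedged sketch in Python (assuming a Solr core named "jobs" with "title" and "location" fields; the core and field names are made up for illustration):

    import requests

    # Full-text query against a hypothetical "jobs" Solr core.
    params = {
        'q': 'title:"product manager" AND location:"new york"',
        'rows': 10,    # page size
        'start': 0,    # offset, i.e. which page you are on
        'wt': 'json',
    }
    resp = requests.get('http://localhost:8983/solr/jobs/select', params=params)
    for doc in resp.json()['response']['docs']:
        print(doc.get('title'), '-', doc.get('location'))

The rows/start pair is what makes cheap pagination possible on result sets with tens of thousands of hits.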
They also make clever use of rel="canonical" and thorough internal linking:
http://www.indeed.com/find-jobs.jsp
Notice that all the pages that actually rank can be found from that direct internal link structure.
I am trying to find a way to track visits and produce reports for my site (out of interest). Does anyone know of any articles/projects etc. that show how to:
Track pages / unique visitors etc.
Track (1) relative to a timestamp etc.
in ASP.NET MVC, or just plain ASP.NET?
P.S. I know Google Analytics etc. is available, but I'm looking to create some basic stats for myself, out of interest in how web analytics works.
There are a couple of good ways to try and determine unique visitors, none of them are exact (which is why different analytics will report different numbers).
The first is to use a cookie. Create a cookie for the user for each time frame that you want to track uniques, so you could create one that expires in a day and one that expires in a month. You can then use both of those to track how many unique daily/monthly visitors you have. Of course this is not perfect since people can clear or refuse cookies, but it is pretty accurate.
The other way is to track uniques using a combination of the IP address and User-Agent of the requesting user. This is probably slightly less accurate: if a company has a good IT group, lots of internal users will have the same User-Agent, and since they are all coming from the same internal network, they could have the same IP address too.
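The mechanics are framework-agnostic, so here is the idea sketched in Python/Flask rather than ASP.NET (in-memory counters for brevity; you would persist these to a database in practice):

    import hashlib
    from flask import Flask, request, make_response

    app = Flask(__name__)
    daily_cookie_uniques = 0     # method 1: cookie-based
    fingerprint_uniques = set()  # method 2: IP + User-Agent

    @app.route('/')
    def index():
        global daily_cookie_uniques
        resp = make_response('hello')

        # Method 1: a cookie that expires in a day; a request arriving
        # without it counts as a new daily unique. Add a 30-day cookie
        # the same way to track monthly uniques.
        if not request.cookies.get('uid_day'):
            daily_cookie_uniques += 1
            resp.set_cookie('uid_day', '1', max_age=60 * 60 * 24)

        # Method 2: hash of IP + User-Agent; over-merges users behind a
        # shared proxy with identical browsers, as noted above.
        fp = hashlib.sha1((request.remote_addr +
                           request.headers.get('User-Agent', '')).encode()).hexdigest()
        fingerprint_uniques.add(fp)

        return resp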
If you are interested in reading more about the different methods there is a great article about it here: http://www.google.com/support/urchin45/bin/answer.py?answer=28325
I blogged about a simple ASP.NET tracking module.
You can check it out here:
http://ilkeraksu.com/post/2009/07/14/Very-very-simple-But-very-very-efficient-Aspnet-Tracking-module.aspx
I would recommend using Google Analytics instead of reinventing the wheel. All you have to do is stick a bit of JavaScript in your master page and you're done.
You can check out Piwik. It's an open-source web analytics platform written in PHP with MySQL.
You can find a great article at http://www.codeproject.com/KB/aspnet/PageTracking.aspx,
which is an upgraded version of http://www.15seconds.com/Issue/021119.htm,
built around a SessionTracker class that runs in Application_PreRequestHandlerExecute and mails reports on session end, along with a lot of useful tips.
Thanks to Wayne Plourde for all that stuff.
I am building a web client for the CodeCentral web service on the CodeGear web site.
I need to restrict the number of items returned by the Search operation of the CodeGear web service to, say, 10 per page. That way I can minimize the load time of my web page.
I just don't know how to do it. Any ideas?
Sorry, I don't know the answer offhand.
I would suggest you contact John Kaster of CodeGear; he would know all the ins and outs of that. I haven't seen him posting here. Usually the email is [first initial][last name]@codegear.com. You might also try posting in the Developer Network / CodeCentral forum, which I imagine John and his team will be monitoring.
You're right. The search form on the CodeCentral page lists the maximum number of entries as an option (the "Show" option), but does not allow you to select the option. And trying various obvious parameter values doesn't work.
I haven't tried it, but it might help to research CodeCentral Expert, a package for searching the CodeCentral website from within Delphi and C++Builder: http://dn.codegear.com/article/23023
The package seems to be quite old, since comments on that page date from 2000, so there's no guarantee that it will work with the new site. However, checking through archive.org, it appears that the CodeCentral search page, including the missing "Show" option, has remained the same for quite a number of years.
Even if the package doesn't work, it may still reveal the correct parameters you need for the calls to their search.
Otherwise, you'll have to send an email as previously answered.
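If none of that pans out, a fallback is to page client-side: fetch the full (or capped) result set once, cache it, and slice it 10 entries per page yourself. A hedged sketch; the service call is a stub, since I don't know the actual Search operation's signature:

    def fetch_all_results(query):
        # Stub for the real CodeCentral Search call; replace this with
        # your SOAP client code once you know the operation signature.
        return ['entry %d' % i for i in range(137)]

    def page(results, page_number, page_size=10):
        start = page_number * page_size
        return results[start:start + page_size]

    results = fetch_all_results('delphi')
    print(page(results, 0))  # first page of 10
    print(page(results, 3))  # fourth page

It costs one big fetch up front, but the per-page rendering stays light, which was the original goal.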