As shown in the screenshot (JIRA Workflow Status Default Order), I would like to order my JIRA statuses so that the first option is always the next logical step.
I added the "opsbar-sequence" key property, checked for whitespace, and added values of 100, 200, 300, 400, etc. to all my steps, but the default order still appears.
I am no expert. This might be a lead towards a fix. (Might)
Jira's front end uses a custom JavaScript framework that you need to deal with.
The front end has functions that rewrite the page's content on each update. So your change might take effect, but as soon as you open a dropdown, the JavaScript framework rewrites any modification that had taken effect.
There are extendable, usable events that help trigger your reordering function at the right moment, so you can edit the page without running into any forced updates that might overwrite your changes.
Please read more here: Jira JavaScript API Events
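As a rough illustration, binding to the NEW_CONTENT_ADDED event lets you re-run your logic after the framework re-renders part of the page. This is a minimal sketch, assuming JIRA's front-end API is available on the page; reorderOpsbar() is a hypothetical function holding your reordering logic:

// re-apply the custom order whenever JIRA rewrites part of the page
JIRA.bind(JIRA.Events.NEW_CONTENT_ADDED, function (e, context, reason) {
    reorderOpsbar(context);  // reorderOpsbar() is hypothetical
});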
Given a Vaadin application where a user can add and remove elements of a list that is also rendered in the browser, I am wondering what the most efficient way of handling such manipulations would be. Currently, I am simply using the add and remove methods.
I am only experienced with Apache Wicket, where one should avoid manipulating the component tree for performance reasons. In the documentation, I only found a section on how to handle repeated elements in Polymer, but nothing on how this can be done using the "simple" API.
Am I choosing the right approach?
The Vaadin UI code runs on the server, so the add/remove operations don't affect the DOM directly. When a response is sent back to the browser, Vaadin looks at the difference between the previous UI state and the current one and sends appropriate instructions to the browser client to update the DOM. In this case, the instruction would be something like "remove the following components: ...". The actual DOM manipulation is handled by Vaadin and is not something you can affect yourself.
If you run into performance issues, help us out by filing an issue ticket on GitHub so we can take a look at it: https://github.com/vaadin/flow/issues
I am using Revulytics SDK to track feature usage and came across the below problem.
I am sending feature usage after properly setting up the SDK configuration, etc., using the EventTrack() method like this:
GenericReturn grTest = telemetryObj.EventTrack("FeatureUsage", textBoxName.Text.ToString(), null, false);
This returns OK, and usually I can see the usage data in the dashboard. However, after multiple tests, the data I am sending does not show up on the dashboard.
Can anyone give me a hint on how to debug this? Thanks for any help!
I hit a similar issue when first working with this SDK.
I was able to address this as soon as I understood the following:
There are event quotas for the incoming events;
Event names are used to make the distinction.
So when I was sending dummy test data, it made it there, but when I sent some demo data for stakeholders, it was not showing up.
I think the same happens here. You're getting the event name from textBoxName.Text... Pretty sure that varies every time you run the code.
Here are the things to keep in mind when testing your code:
the server has a mechanism to discard / consider events;
implicitly, it allows the first xx events, depending on the quota;
if you send more than xx events, the extra ones will not show up in reports.
So, you must control which events to discard and which to consider (there are a couple of levels you can configure, and based on them you can get the events into various types of reports).
Find the "Tracked Events Whitelist Management" section. You will be able to control these things from there.
This blog helped me (it is not SDK documentation): https://www.revulytics.com/blog/getting-started-with-usage-intelligence-part2-event-tracking
Good luck!
This question may seem odd, but we have a slight mix-up within our Report Suites in Omniture (SiteCatalyst). Multiple Report Suites are generating analytics, and it's hard for us to find which site URL is contributing to the results.
Hence my question: is there any way we can find which site is filling data into a certain Report Suite?
Through the following JS, I am able to find which report suite is being used by a certain site, though:
javascript:void(window.open("","dp_debugger","width=600,height=600,location=0,menubar=0,status=1,toolbar=0,resizable=1,scrollbars=1").document.write("<script language=\"JavaScript\" id=dbg src=\"https://www.adobetag.com/d1/digitalpulsedebugger/live/DPD.js\"></"+"script>"));
But I am hoping to find the other way around: which sites a Report Suite gets its data from, from within the SiteCatalyst admin.
Any assistance?
Thanks
Adobe Analytics (formerly SiteCatalyst) does not have anything native or built in to globally look at all incoming data to see which page/site is sending data to which report suite. However, you can contact Adobe ClientCare and request raw hit logs for a date range, and you can parse those logs yourself, if you really want.
Alternatively, if you have Data Warehouse access, you can export URLs and domains from there for a given date range. You can only select one report suite at a time, but that's still better than nothing if you really need the historical data now.
Another alternative: if your sites are NOT currently setting s.pageName, you may be in some measure of luck for your historical data. The Pages report is popped from the s.pageName value; if you do not set that variable, it defaults to the URL of the web page that made the request. So, at a minimum, you will be able to see your URLs in that report right now, which should help you out. And if you define "site" as equivalent to "domain" (location.hostname), you can also set up a classification level on pages for domain, and then use the Classification Rule Builder and a regular expression to pop the classification with the domain, which will give you some aggregated numbers.
Some suggestions moving forward...
A good strategy moving forward is to have all of your sites report to a global report suite. Then you can have each site also send data to a site-level report suite (warning: make sure you have enough server calls in your contract to cover this, since AA does not have unlimited server calls). Alternatively, you can stick with one global report suite and set up segments for each site. Another alternative is to create a rollup report suite that all data from your other report suites also goes to. Rollup report suites do not have as many features as standard report suites, but for basic things such as pages and page views, they work.
The overall point though is that one way or the other, you should have all of your data go into one report suite as the first step.
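For example, multi-suite tagging in the legacy s_code is just a comma-separated list of report suite IDs (the IDs below are placeholders):

// send each hit to both the global and the site-level suite
var s_account = 'myglobalsuite,mysitesuite';
var s = s_gi(s_account);  // s_gi() is provided by the s_code library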
Then, you should also assign a few custom variables to be output on the pages of all your sites. These are the 4 main things I always try to include in an implementation to make it easier to find out which sites/pages are reporting to what (a combined sketch follows the list).
A custom variable to identify the site. Some people use s.server for this. However, you may also want to pop a prop or eVar with the value as well, depending on how you'd like to be able to break data down. The big question here is: how do you define "site"? I have seen it defined many different ways.
If you do NOT define "site" as the domain (e.g. location.hostname), then I suggest you pop a prop and eVar with the domain, because AA does not have a native report for this. But if you do, then you can skip this, since it's the same thing as point #1.
A custom prop and eVar with the report suite(s). Unless you have a super old version of legacy code, just set it with s.sa(). This will ensure you get the final report suite(s), in case you happen to use a version that uses dynamic account variables (e.g. s.dynamicAccountList).
If you set s.pageName with a custom value, then I suggest you pop a prop and eVar with the URL. Tip: to save on request URL length to AA, you can use dynamic variable syntax to copy the g parameter already in a given AA request. For example (assuming you don't have code that changes the dynamic variable prefix): s.prop1='D=g'; Or, you can pop this with a processing rule if you have the access.
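Here is a minimal sketch of the four points above, assuming the s object from s_code/AppMeasurement is on the page and that "site" is defined as the domain (the slots prop1-prop3 / eVar1-eVar3 are arbitrary placeholders):

// 1) identify the site; here "site" is defined as the domain
s.server = window.location.hostname;
// 2) domain in a prop and eVar (skippable here, since it matches point #1)
s.prop1 = window.location.hostname;
s.eVar1 = window.location.hostname;
// 3) the report suite(s) the hit is actually sent to
s.prop2 = s.account;  // assumes s.account holds the final rsid(s)
s.eVar2 = s.account;
// 4) the full URL, copied from the g= parameter of the request itself
s.prop3 = 'D=g';
s.eVar3 = 'D=g';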
You can normally find this sort of information in the Site Content -> Servers report. There will be information in there that indicates which sites are sending in the hits. Your mileage may vary based on the actual tagging implementation; it is not common for anyone to explicitly set the server, so the implicit value is the domain the hit is coming in from.
I am trying to show/load a different editor on different rows of an EditorGridPanel: a textbox on one row, a combobox/superboxselect on another, and it could be any order, random.
The conditions which dictate which editor will be shown reside in the database.
Please tell me if this is possible and, if so, how I go about it. I have tried pulling the conditions asynchronously, on a click event for the respective column, but calling it async causes problems. Please advise.
Anything is possible, but what you want to do would take a bit of work. The basic idea would be to configure the needed grid editor(s) dynamically and update the columns with the new editors when needed. Now... what would be required to make that actually work I couldn't say offhand without digging into the Ext source -- it would almost definitely require overriding default behavior in the grid and/or column model.
Pulling your conditions asynchronously would (I imagine) be too slow for the interaction of clicking on a row to edit inline. If it takes a second or more from click to configured editors, that would not be acceptable performance. I would try to find a way to send your conditions down along with the other row data if at all possible (they can be in the store's data model on the client without having to be shown in the grid).
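To sketch the "configure editors dynamically" idea: here is one way it might look, assuming ExtJS 3.x, a grid variable holding the EditorGridPanel, its colModel, and a hypothetical editorType field shipped down with each row's data:

// choose the editor per row by overriding getCellEditor on the ColumnModel
var textEditor  = new Ext.grid.GridEditor(new Ext.form.TextField());
var comboEditor = new Ext.grid.GridEditor(new Ext.form.ComboBox({
    store: ['Option A', 'Option B'],  // placeholder choices
    triggerAction: 'all',
    editable: false
}));

colModel.getCellEditor = function (colIndex, rowIndex) {
    if (this.getDataIndex(colIndex) === 'value') {  // the editable column
        var record = grid.getStore().getAt(rowIndex);
        // 'editorType' is a hypothetical flag loaded with the row data
        return record.get('editorType') === 'combo' ? comboEditor : textEditor;
    }
    return Ext.grid.ColumnModel.prototype.getCellEditor.apply(this, arguments);
};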
Without knowing more about your business requirements, it might be more appropriate to ditch the editable grid and instead go with a dynamically-configured FormPanel tied to the grid. This way the interaction of clicking and then pausing slightly while the form is configured would appear to be more natural. Also, the functionality of rendering a form with a particular configuration is perfectly standard and would require nothing fancy on your end. See this example as a starting point (your form would be dynamic, but maybe the same type of interaction could work?)
Yesterday morning I noticed Google Search was using hash parameters:
http://www.google.com/#q=Client-side+URL+parameters
which seems to be the same as the more usual search (with search?q=Client-side+URL+parameters). (It seems they are no longer using it by default when doing a search using their form.)
Why would they do that?
More generally, I see hash parameters cropping up on a lot of web sites. Is it a good thing? Is it a hack? Is it a departure from REST principles? I'm wondering if I should use this technique in web applications, and when.
There's a discussion by the W3C of different use cases, but I don't see which one would apply to the example above. They also seem undecided about recommendations.
Google has many live experimental features that are turned on/off based on your preferences, location and other factors (probably random selection as well.) I'm pretty sure the one you mention is one of those as well.
What happens in the background when a hash is used instead of a query string parameter is that it queries the "real" URL (http://www.google.com/search?q=hello) using JavaScript, then modifies the existing page with the content. This appears much more responsive to the user, since the page does not have to reload entirely. The reason for the hash is that browser history and state are maintained. If you go to http://www.google.com/#q=hello you'll find that you actually get the search results for "hello" (even though your browser is really only requesting http://www.google.com/). With JavaScript turned off it wouldn't work, however, and you'd just get the Google front page.
Hashes are appearing more and more as dynamic web sites are becoming the norm. Hashes are maintained entirely on the client and therefore do not incur a server request when changed. This makes them excellent candidates for maintaining unique addresses to different states of the web application, while still being on the exact same page.
I have been using them myself more and more lately, and you can find one example here: http://blixt.org/js -- If you have a look at the "Hash" library on that page, you'll see my implementation of supporting hashes across browsers.
Here's a little guide for using hashes for storing state:
How?
Maintaining state in hashes implies that your application (I'll call it application since you generally only use hashes for state in more advanced web solutions) relies on JavaScript. Without JavaScript, the only function of hashes would be to tell the browser to find content somewhere on the page.
Once you have implemented some JavaScript to detect changes to the hash, the next step is to parse the hash into meaningful data (just as you would with query string parameters).
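A minimal sketch of both steps (modern browsers fire a hashchange event; older browsers needed the polling approach used by the Hash library mentioned above):

// parse "#?key=value&other=value" (or "#key=value") into an object
function parseHash() {
    var hash = window.location.hash.replace(/^#\??/, '');
    var params = {};
    if (!hash) return params;
    var pairs = hash.split('&');
    for (var i = 0; i < pairs.length; i++) {
        var kv = pairs[i].split('=');
        params[decodeURIComponent(kv[0])] = decodeURIComponent(kv[1] || '');
    }
    return params;
}

// react whenever the hash changes
window.onhashchange = function () {
    var state = parseHash();
    // update the page based on state, e.g. state.q
};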
Why?
Once you've got the state in the hash, it can be modified by your code (or your user) to represent the current state in your application. There are many reasons for why you would want to do this.
One common case is when only a small part of a page changes based on a variable, and it would be inefficient to reload the entire page to reflect that change (Example: You've got a box with tabs. The active tab can be identified in the hash.)
Other cases are when you load content dynamically in JavaScript and you want to tell the client what content to load (Example: http://beta.multifarce.com/#?state=7001 will take you to a specific point in the text adventure.)
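The tab case, for instance, could reuse the parseHash() sketch above (showTab is a hypothetical helper):

// restore the active tab from the hash, e.g. #tab=comments
var state = parseHash();
showTab(state.tab || 'overview');

// store the state when the user switches tabs
function onTabClick(name) {
    window.location.hash = 'tab=' + encodeURIComponent(name);
}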
When?
If you have a look at my "JavaScript realm" you'll see a borderline overkill case. I did it simply because I wanted to cram as much JavaScript dynamics into that page as possible. In a normal project I would be conservative about when to do this, and only do it when it yields positive changes in one or more of the following areas:
User interactivity
Usually the user won't see much difference, but the URLs can be confusing
Remember loading indicators! Loading content dynamically can be frustrating to the user if it takes time.
Responsiveness (time from one state to another)
Performance (bandwidth, server CPU)
No JavaScript?
Here comes a big deterrent. While you can safely rely on 99% of your users to have a browser capable of using your page with hashes for state, there are still many cases where you simply can't rely on this. Search engine crawlers, for example. While Google is constantly working to make their crawler work with the latest web technologies (did you know that they index Flash applications?), it still isn't a person and can't make sense of some things.
Basically, you're at a crossroads between compatibility and user experience.
But you can always build a road in between, which of course requires more work. In less metaphorical terms: implement both solutions, so that there is a server-side URL for every client-side URL that outputs relevant content. For compatible clients, it would redirect them to the hash URL. This way, Google can index "hard" URLs, and when users click them, they get the dynamic state stuff!
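As a hedged sketch of that middle road, assuming the server renders /search?q=hello as a full page and the client-side app reads #q=hello:

// on the server-rendered page, send JS-capable browsers to the hash URL;
// crawlers and JS-less browsers keep the server-rendered content as-is
if (window.location.search) {
    // e.g. /search?q=hello -> /#q=hello
    window.location.replace('/#' + window.location.search.substring(1));
}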
Recently, Google also stopped serving direct links in search results, offering redirects instead.
I believe both have to do with gathering usage statistics: what searches were performed by the same user, in what sequence, which of the search results the user followed, etc.
P.S. Now, that's interesting: direct links are back. I absolutely remember seeing only redirects there in the last couple of weeks. They are definitely experimenting with something.