Omniture Site Catalyst count getting changed frequently - adobe-analytics

I am consuming Site Catalyst data from different report suites for one of our applications. What I have found is that the metrics are changing on the Omniture side (even though we download the data one day after the fact) when refreshing the page or verifying the data an hour later.
For example, if we look at the Page Views, Visits, and Daily UV counts for Oct 13, those counts show different values an hour later for the same date.
Can anybody suggest why the data keeps changing? Since we store the data in our own database, this is creating serious data-mismatch issues for various dates.
Any quick suggestions would be appreciated.
Thanks

Related

How long does Google PageSpeed Insights take to update Field Data?

I'm working on boosting my website's performance and testing page speed with Google PSI. I've made every correction I can and am getting a 90+ score in PSI, and all Lab Data attributes are green, but the Field Data is still not being updated. I'm wondering how long PageSpeed Insights takes to update the Field Data. Also, once the Field Data updates, will it be the same as the Lab Data?
PageSpeed Insights screenshot
The data displayed is aggregated daily, so it should change day to day.
However, the reason you do not instantly see the results of any improvements you make is that the data is taken over a rolling 28-day period.
After about 7 days you should start to see your score improve; after 28 days the report data will fully reflect the changes you made today.
Because of this delay, I would recommend taking your own metrics with the Web Vitals library so you can see the results in real time. This also lets you verify that your changes work at all screen sizes and across browsers.
Either that, or you could query the CrUX data set yourself on shorter timescales.
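As a rough sketch of that last option, a CrUX query for an origin's field data can be built against the CrUX API's records:queryRecord endpoint. (The endpoint and field names below follow Google's published CrUX API; the origin and API_KEY are placeholders you would substitute.)

```python
import json
import urllib.request

CRUX_ENDPOINT = "https://chromeuxreport.googleapis.com/v1/records:queryRecord"

def build_crux_request(origin: str, api_key: str) -> urllib.request.Request:
    """Build (but do not send) a CrUX API request for an origin's field data."""
    body = {
        "origin": origin,
        # Ask only for the Core Web Vitals metrics that PSI's Field Data shows.
        "metrics": [
            "largest_contentful_paint",
            "interaction_to_next_paint",
            "cumulative_layout_shift",
        ],
    }
    return urllib.request.Request(
        f"{CRUX_ENDPOINT}?key={api_key}",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_crux_request("https://example.com", "API_KEY")
# urllib.request.urlopen(req) would return the aggregated field data as JSON.
```

Note that the API still returns data aggregated over the rolling 28-day window; the gain is that you can poll it yourself instead of waiting on the PSI report page.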

Parse just went AWOL

1) I woke up this morning and my currentUser().objectId was different from what it has been for the past 3 months. Now all of my personal data is lost and my profile is empty. My objectId is the same as my old one on the Parse dashboard; for some reason it's different on my device, and I've tried logging out and in repeatedly.
2) None of my queries are returning any objects. I've been using many of the same queries for months, and they have all worked great, but now they're not returning anything.
Is this Parse's fault? I have until April 28th to migrate my data, so I shouldn't be seeing these results. Why is it going crazy?

Fusion Tables 400 error

I have made an app that collects data from my robotics competition and then sends it to a Fusion Table. It has worked before, and was working the last time it was tested, one week ago. Now when I try to send the data I get this:
Transmission Error: 400 Bad Request
Required: X-Goog-Encode-Response-If-Executable
Bad transmission
This is the error, as formatted, that appears in the app. The only idea I have is that it may be a security issue, since it is a 400 error, but I cannot even begin to know where to start. The competition is tomorrow, and the copies that worked previously no longer work either, so I need help quickly. Other info: the app was made using App Inventor 2.

Anomaly with DetailedMerged FreeBusyViewType data returned from Exchange 2007

I'm using the EWS SOAP service to fetch detailed free/busy data from Exchange 2007. I'm trying to fetch data between 9am and 10:30am. The data returned is in intervals of 30 minutes, so 3 slots are returned.
The first part of the response from Exchange contains the MergedFreeBusy string, which shows 002, equating to FREE FREE BUSY. This indicates that the room is free between 9am and 10am and busy between 10am and 10:30am.
However, the detailed view returns two entries, the first with a start time of 9am and an end time of 10am, and its busy type is incorrectly showing as busy. This contradicts the MergedFreeBusy data.
When I open Outlook and check the room's actual availability, I see that the room is free between 9am and 10am and busy between 10am and 10:30am. So the MergedFreeBusy content is correct while the detailed data is not. Why would this be happening?
Finally, to set some more context: my timezone is GMT (at the moment it is GMT+1 due to daylight savings, though I'm not sure this should matter, because the response contains conflicting data either way).
One way I can work around this issue is to determine the bias from the expected timezone that the user is requesting from. Other suggestions would be really appreciated.
I discovered that this is actually covered in the MS EWS documentation. Enclosing the URL in case anyone else runs into a similar problem: http://msdn.microsoft.com/en-us/library/bb655859%28v=EXCHG.80%29.aspx
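For reference, the MergedFreeBusy digit string from the question can be expanded into per-slot statuses like this. (The digit meanings follow the EWS documentation's codes; the 30-minute slot length matches the interval in the question, and the date used is an arbitrary placeholder.)

```python
from datetime import datetime, timedelta

# Digit meanings for EWS MergedFreeBusy strings (per the EWS documentation).
MERGED_CODES = {"0": "Free", "1": "Tentative", "2": "Busy", "3": "OOF", "4": "NoData"}

def decode_merged_free_busy(merged, window_start, slot_minutes=30):
    """Expand a MergedFreeBusy digit string into (start, end, status) slots."""
    slots = []
    for i, digit in enumerate(merged):
        start = window_start + timedelta(minutes=i * slot_minutes)
        end = start + timedelta(minutes=slot_minutes)
        slots.append((start, end, MERGED_CODES[digit]))
    return slots

# The question's example: a 9:00-10:30 window with merged string "002",
# which decodes to Free 9:00-9:30, Free 9:30-10:00, Busy 10:00-10:30.
slots = decode_merged_free_busy("002", datetime(2013, 10, 13, 9, 0))
for start, end, status in slots:
    print(start.strftime("%H:%M"), end.strftime("%H:%M"), status)
```

This is the interpretation the question's Outlook check confirms; the contradiction lies in the detailed entries, not in this merged string.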

Tracking impressions/visits per web page

I have a site with several pages for each company, and I want to show how each company's page is performing in terms of the number of people visiting its profile.
We have already made sure that bots are excluded.
Currently, we record each hit in a DB with either an insert (for the first request of the day to a profile) or an update (for subsequent requests that day to the same profile). But given that requests have grown from a few thousand per day to tens of thousands per day, these inserts/updates are causing major performance issues.
Assuming no JS solution, what would be the best way to handle this?
I am using Ruby on Rails, MySQL, Memcached, Apache, and HAProxy to run the overall show.
Any help will be much appreciated.
Thanks
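As a side note on the write path described above: the insert-or-update pair can be collapsed into a single atomic upsert, halving the round trips per hit. The sketch below uses SQLite's ON CONFLICT syntax so it is runnable standalone; in the questioner's MySQL setup the equivalent would be INSERT ... ON DUPLICATE KEY UPDATE. Table and column names are illustrative, not from the question.

```python
import sqlite3

# In-memory stand-in for the hits table. The composite primary key on
# (profile_id, day) is what makes the ON CONFLICT clause fire.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hits (profile_id INTEGER, day TEXT, count INTEGER,"
    " PRIMARY KEY (profile_id, day))"
)

def record_hit(profile_id, day):
    """One statement per hit: insert the first hit of the day, else increment."""
    conn.execute(
        "INSERT INTO hits (profile_id, day, count) VALUES (?, ?, 1)"
        " ON CONFLICT(profile_id, day) DO UPDATE SET count = count + 1",
        (profile_id, day),
    )

for _ in range(3):
    record_hit(42, "2013-10-13")
count = conn.execute(
    "SELECT count FROM hits WHERE profile_id = 42 AND day = '2013-10-13'"
).fetchone()[0]
print(count)  # 3
```

This alone won't fix tens of thousands of synchronous writes per day, but it removes the read-then-write race and one query per hit before any of the batching strategies below are applied.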
http://www.scribd.com/doc/49575/Scaling-Rails-Presentation-From-Scribd-Launch
You should start reading from slide 17.
I don't think performance is a problem if it's possible to build a solution like this for a website as big as Scribd.
Here are four ways to address this, from easy estimates to complex and accurate:
Track only a percentage (10% or 1%) of users, then multiply to get an estimate of the count.
After the first 50 counts for a given page, update the count only 1/13th of the time, incrementing by 13. This helps when a few pages account for most of the counts, while keeping small counts accurate. (Use 13 because it's hard to notice that the increment isn't 1.)
Save exact counts in a cache layer like Memcached or local server memory, and write them to disk when they hit 10 counts or have been in the cache for a certain amount of time.
Build a separate counting layer that 1) always has the current count available in memory, 2) persists the count to its own tables/database, and 3) provides calls that update both places.
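The third option above, buffering exact counts in memory and flushing at a threshold, can be sketched as follows. A plain dict stands in for Memcached, a callback stands in for the DB write, and the flush threshold of 10 comes from the answer's own numbers; everything else is illustrative.

```python
class BufferedCounter:
    """Buffer per-page hit counts in memory and flush to durable storage
    once a page's pending count reaches a threshold."""

    def __init__(self, persist, flush_at=10):
        self.persist = persist      # callable(page_id, delta): writes to the DB
        self.flush_at = flush_at
        self.buffer = {}            # page_id -> pending (unpersisted) count

    def hit(self, page_id):
        self.buffer[page_id] = self.buffer.get(page_id, 0) + 1
        if self.buffer[page_id] >= self.flush_at:
            self.flush(page_id)

    def flush(self, page_id):
        pending = self.buffer.pop(page_id, 0)
        if pending:
            self.persist(page_id, pending)

    def flush_all(self):
        """Call periodically (e.g. from a cron job) to bound staleness
        for pages that never reach the threshold on their own."""
        for page_id in list(self.buffer):
            self.flush(page_id)

# Demo: 25 hits to one page produce two DB writes of 10 each, with the
# remaining 5 buffered until the periodic flush.
db = {}
counter = BufferedCounter(lambda p, d: db.__setitem__(p, db.get(p, 0) + d))
for _ in range(25):
    counter.hit("acme-profile")
counter.flush_all()
print(db["acme-profile"])  # 25
```

The trade-off is the usual one: counts in the buffer are lost if the process dies before a flush, which is acceptable for page-view stats but is exactly what the fourth option's dedicated counting layer is meant to harden.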
