The following YouTube Analytics query is suddenly failing for me (has worked for the past couple of weeks):
https://www.googleapis.com/youtube/analytics/v1/reports?ids=channel==[my channel id]&start-date=2010-10-27&end-date=2010-10-30&metrics=views&dimensions=day,insightTrafficSourceType&sort=day&prettyPrint=false
The error:
{"error":{"errors":[{"domain":"global","reason":"badRequest","message":"Invalid query. Query did not conform to the expectations."}],"code":400,"message":"Invalid query. Query did not conform to the expectations."}}
It appears to be related to the "insightTrafficSourceType" dimension, as the query succeeds if I only use the "day" dimension. The query also fails if I use the "insightPlaybackLocationType" dimension.
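To illustrate, here is a minimal sketch of issuing the same report request from code (the channel ID and OAuth access token are placeholders; only the dimensions value changes between the working and failing cases):

import Foundation

// Placeholders: a real channel ID and OAuth 2.0 access token are assumed.
let channelId = "MY_CHANNEL_ID"
let accessToken = "ACCESS_TOKEN"

// Same report request as above. With dimensions "day" it succeeds;
// adding "insightTrafficSourceType" (or "insightPlaybackLocationType")
// produces the 400 "Invalid query" response.
var components = URLComponents(string: "https://www.googleapis.com/youtube/analytics/v1/reports")!
components.queryItems = [
    URLQueryItem(name: "ids", value: "channel==\(channelId)"),
    URLQueryItem(name: "start-date", value: "2010-10-27"),
    URLQueryItem(name: "end-date", value: "2010-10-30"),
    URLQueryItem(name: "metrics", value: "views"),
    URLQueryItem(name: "dimensions", value: "day,insightTrafficSourceType"),
    URLQueryItem(name: "sort", value: "day")
]

var request = URLRequest(url: components.url!)
request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, _ in
    if let data = data {
        print(String(data: data, encoding: .utf8) ?? "")
    }
}.resume()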
Did something change with the API? I would like to use both of these dimensions in my reports.
Any help would be appreciated.
Thanks,
J
Stack Overflow is not a good place to report bugs; I've opened a public issue tracker bug report for this problem at https://code.google.com/p/gdata-issues/issues/detail?id=3963, and I'd recommend "Star"ing that to keep updated on the status of the issue.
Also see http://apiblog.youtube.com/2012/09/the-youtube-api-on-stack-overflow.html for info on creating bug reports/feature requests yourself.
So today I encountered something strange while looking at the new Realtime Database interface. While I was testing something that involved the deletion of some data, I noticed this:
Video: https://imgur.com/a/HnFiWts
Basically, the node bRqT3dAc5JhNWv0616lawe6w9Ln1 gets deleted by a cloud function with a simple remove() call on its reference. But as you can see in the video, while it is being deleted, a few duplicates appear and get deleted immediately. I tested it for a while, but the behavior seems to be random. It also sometimes happens when I delete some other node in the database from the client instead.
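For context, the delete itself is nothing exotic. A minimal sketch of the client-side variant, assuming an iOS client with the Firebase Database SDK (the path here is hypothetical):

import FirebaseDatabase

// Hypothetical path; assumes FirebaseApp.configure() has already been called.
let ref = Database.database().reference(withPath: "users/bRqT3dAc5JhNWv0616lawe6w9Ln1")

// A single removeValue() call with no retries or multi-path writes, so the
// brief "duplicates" shown in the console don't correspond to any extra writes.
ref.removeValue { error, _ in
    if let error = error {
        print("Delete failed: \(error)")
    }
}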
This is why I think this is a visual bug and not a bug in my code:
The code is more than a year and a half old, so I have tested it countless times and have never seen this happen before.
There aren't any unexpected results from the code execution: the database looks exactly as it should when the code finishes.
Since the only thing that has changed recently is the web interface for the database, I think this must be a visual bug, but I am still not 100% certain. Can someone else please confirm whether this is indeed just a visual bug?
firebaser here
That indeed looks like a visual glitch introduced in a major update to our data viewer that was released a couple of weeks ago. As you said, it doesn't impact the data that is stored or even read, but it is definitely a bug.
From what I heard, our QA team caught this one late last week as well, but just to be certain: can you file a bug report with our support team so they can track it too?
I only have 15-20 messages in my inbox for June 20, but when I run a search query for that day:
https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messages?$search=" received = 2019/06/20"&$select=from,id&$top=1000
I am seeing that the results repeat, meaning the same message id keeps circling back (I think it loops indefinitely), and the results take several seconds to return.
I am even able to reproduce this with Graph Explorer. It doesn't seem to happen for other days in my inbox. I think I have come across some bug in the system, but it's unclear what exactly it is.
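Outside Graph Explorer, a minimal sketch that reproduces it (the access token is a placeholder) and counts how many of the returned ids are actually unique:

import Foundation

let accessToken = "ACCESS_TOKEN" // placeholder: a delegated token with Mail.Read

var components = URLComponents(string: "https://graph.microsoft.com/v1.0/me/mailFolders/inbox/messages")!
components.queryItems = [
    URLQueryItem(name: "$search", value: "\"received = 2019/06/20\""),
    URLQueryItem(name: "$select", value: "from,id"),
    URLQueryItem(name: "$top", value: "1000")
]

var request = URLRequest(url: components.url!)
request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")

URLSession.shared.dataTask(with: request) { data, _, _ in
    guard let data = data,
          let object = try? JSONSerialization.jsonObject(with: data),
          let json = object as? [String: Any],
          let messages = json["value"] as? [[String: Any]] else { return }
    let ids = messages.compactMap { $0["id"] as? String }
    // If the same message keeps circling back, "unique" will be far below "returned".
    print("returned: \(ids.count), unique: \(Set(ids).count)")
}.resume()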
Anyone know what it is?
This issue is now fixed. Can you please try again?
In my iOS app, users upload files. I am logging a custom event called "upload_time" because I would like to see approximately how long uploads are taking.
FIRAnalytics.logEvent(withName: "upload_time", parameters: [
    kFIRParameterItemID: "upload_time_\(Constants.versionNumber)", // app version baked into item_id
    kFIRParameterItemName: val                                     // upload time, rounded up to the nearest 10s bracket
])
I would like to be able to filter by the version number of the app and see the percentages of upload times. I have divided the times into 10-second brackets, so "val" is just the upload time rounded up to the nearest 10.
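For reference, a minimal sketch of how a bucket value like that can be computed (the helper name is made up):

import Foundation

// Hypothetical helper: rounds an upload duration in seconds up to the nearest
// 10, so e.g. 23.4s and 27.9s both land in the "30" bracket.
func uploadTimeBucket(_ seconds: Double) -> Int {
    return Int(ceil(seconds / 10.0)) * 10
}

let val = uploadTimeBucket(23.4) // 30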
Just like how the select_content default event allows you to filter by content_type and then item_id, I would like to be able to filter by version number and see the percentages for the different brackets of times in the console. At the moment, it seems that what I have set up just adds up all the values for each day.
How I set up parameters in the console
Would greatly appreciate any help.
There's no way to configure ad hoc reports in the Firebase console.
If you want reports other than those provided in the console, then your best bet here would be to export the results to BigQuery and use a visualization tool.
Once you have these set up, the sky is the limit :)
Firebase Performance Monitoring sounds like a better fit if you're trying to measure upload times. Check out the getting started guide here. Performance Monitoring actually captures a bunch of network data automatically.
In addition, Performance Monitoring lets you filter by a number of parameters, such as device type, OS version, app version, and more. It's still in beta, so if there's some functionality that you'd like to have that isn't there yet, feel free to file a feature request.
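As a rough sketch (the trace name is made up, and uploadFile(completion:) stands in for your real upload call), a custom trace around an upload would look something like this:

import FirebasePerformance

// Start a custom trace; Performance Monitoring records the duration itself,
// so no manual bucketing of upload times is needed.
let trace = Performance.startTrace(name: "file_upload")

uploadFile { success in
    // App version, OS version, and device are captured automatically; custom
    // attributes like this one are optional extras.
    trace?.setValue(success ? "success" : "failure", forAttribute: "result")
    trace?.stop() // duration is measured between startTrace and stop
}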
Another way to make this work in Google Analytics for Firebase is to export the Firebase data to BigQuery and run a query that calculates the percentages of upload times across all your app instances, filtered by version.
Take a look at Step 6 of this doc for a sample query over the BigQuery data gathered by Firebase.
I've been working on a project involving the Watson Retrieve and Rank service, and it was working normally until now. I managed to upload a number of documents and created roughly 50 questions to start off. Previously I was able to upload the questions just fine, but now I keep getting an error saying "Questions upload Upload failed".
I have tried different browsers and incognito mode, yet nothing solves the issue. I either get the error or the upload-questions animation plays endlessly.
This is what it looks like as I try to upload the questions
If anyone could give some insight on how to approach this problem, it would be great.
Can you provide the entire error log?
Are you sure the Solr cluster and collection were created correctly? The Standard plan for this service only allows 7 rankers in the free tier.
You can try it with a new instance of the service.
Are you sure your training data meets the requirements?
Training data requirements:
https://www.ibm.com/watson/developercloud/doc/retrieve-rank/training_data.html
Retrieve and Rank wasn't working correctly on Wednesday and Thursday, but today it's up and running properly.
I'm having an issue with an Umbraco site of mine: for some reason, some of the nodes time out when I try to click on them in the back-end of the site.
The front-end works fine and there aren't any slowdown issues there; however, I'm unable to edit these same nodes in the back-end, as the system seems to just hang. This makes it incredibly difficult to debug, because I have no idea which properties are actually causing the problem. What's strange is that I can create a node of the same document type, enter some dummy values, and that works fine, yet I can't seem to edit the existing nodes.
I've tried republishing the entire site, republishing the individual nodes, and deleting the umbraco.config file, and nothing has worked up to this point.
What's also interesting is that if I close down the browser, the system seems to stop hanging and I can log in and try again.
Has anyone encountered this before or know where to begin?
Thanks
I have encountered something similar. The longer you work with Umbraco, the slower it becomes, and if you check the memory usage in Chrome's task manager, you can see that certain actions on nodes bump the memory usage up a little further. The workaround is simply to close the tab and open a new one.
I have reported this, but Umbraco cannot replicate it. However, I do think it is possibly due to a package installed into Umbraco, maybe uComponents. It's very difficult to pinpoint.
Update:
If you can access some nodes but not others, then this is actually slightly easier to debug. I would check what similarities the nodes that time out have.
Are they all of the same document type?
Do they all use the same data type?
I would guess that the nodes in question are using a data type that performs an operation when the node loads, and that operation is timing out. For example, do you have any data types that load data from the database, like enums? Do you have any data types that load data from a web service?
Do you have any user control data types wrapped in the UserControlWrapper data type? These would be a good place to check.
Finally, check:
The database's [umbracoLog] table. Any Umbraco-specific errors will be listed there.
The computer's Event Viewer. This will show any unhandled errors.
My money's on a database timeout.