I'm not sure how long this issue has been happening; it must have been a few months. At first it definitely worked for short videos (less than 5 s). Now it fails. Is there a way to solve it?
"code": 3,
"message": "Request field config.editList is invalid duration, expected edit list duration should be atleast 5s long, current duration is 2.554195s.",
"details": [
{
"#type": "type.googleapis.com/google.rpc.BadRequest",
"fieldViolations": [
{
"field": "config.editList",
"description": "InvalidInputDuration"
}
]
}
]
}
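The error message itself spells out the constraint: the edit list must span at least 5 s. One unverified workaround is to repeat the short input in config.editList until the total duration reaches the minimum; a hedged sketch, assuming the documented editAtom fields (key, inputs, startTimeOffset, endTimeOffset):
"editList": [
  { "key": "atom0", "inputs": ["input0"], "startTimeOffset": "0s", "endTimeOffset": "2.554195s" },
  { "key": "atom1", "inputs": ["input0"], "startTimeOffset": "0s", "endTimeOffset": "2.554195s" }
]
Two atoms give roughly 5.1 s of output (the clip played twice), which would clear the 5 s check.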
Since the middle of last week I have seen increasing issues with the events list query params for upper and lower time bounds. Since the weekend it has not been working at all.
Issue summary:
Positive time zone offsets are no longer accepted in the events list query params for time bounds.
API call:
GET https://www.googleapis.com/calendar/v3/calendars/<myCalendarId>/events
with query params: ?timeMin=2020-12-01T09:31:04+0100
Expected behavior (and actual behavior until around Nov 25th, on some servers until Nov 27th):
{
"kind": "calendar#events",
"etag": "\"blahblah\"",
"summary": "blahblah",
"updated": "2020-12-01T07:46:56.357Z",
"timeZone": "Europe/Berlin",
"accessRole": "owner",
"defaultReminders": [
{
"method": "popup",
"minutes": 30
}
],
"nextSyncToken": "blahblah",
"items": [
...
Actual behavior / response body:
{
"error": {
"errors": [
{
"domain": "global",
"reason": "badRequest",
"message": "Bad Request"
}
],
"code": 400,
"message": "Bad Request"
}
}
Further details:
Apparently the "bad request" response is solely induced by the plus (+) symbol before the time zone offset. Changing the "+" to a "-" in the above request will return a valid response as expected only that the offset is wrong (in this case by two hours as it should be).
Writing the offset with or without ":" (e.g. +01:00 vs +0100) does not affect results.
Most likely I have missed something like a depreciation of positive time zone offsets or I was anyway using a wrong time format since I am admittedly not an expert in the RFC3339.
The other option is that the Google calendar team has updated their parser in their calendar API and in the wake of that deployed a bug. In that case a test case should be added to the pipeline for testing several time zone offsets including the limits.
I would be happy to receive advice on how to gracefully select and time zone east of UTC.
Thanks a lot in advance!
There is no such deprecation, but when you make an HTTP request you need to URL-encode the query parameters.
Sample:
https://www.googleapis.com/calendar/v3/calendars/primary/events?timeMin=2020-12-01T09%3A31%3A04%2B0100
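In Ruby, the standard library handles this escaping; a minimal sketch (calendar ID and auth handling are omitted):
require 'uri'

# encode_www_form percent-encodes reserved characters, so the "+" in the
# offset becomes %2B instead of being decoded as a space by the server.
query = URI.encode_www_form(timeMin: '2020-12-01T09:31:04+0100')
# => "timeMin=2020-12-01T09%3A31%3A04%2B0100"
url = "https://www.googleapis.com/calendar/v3/calendars/primary/events?#{query}"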
I have an event with an attachment of about 50 MB.
The requests below work fine:
GET /users/{id}/events/{id}
GET /users/{id}/events/{id}/attachments/{id}?$select=name,size
{
"#odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('u_id')/events('e_id')/attachments(name,size)",
"value": [
{
"#odata.type": "#microsoft.graph.fileAttachment",
"id": "a_id",
"name": "xxxx",
"size": 51564506
}
]
}
But when I want to get the content of this attachment:
GET /users/{id}/events/{id}/attachments/{id}
it returns
{
"error": {
"code": "ErrorMessageSizeExceeded",
"message": "The message exceeds the maximum supported size., The message exceeds the maximum supported size.",
"innerError": {
"request-id": "426c3bf3-eda8-40c8-afe6-9b83877a328c",
"date": "2018-10-24T02:31:48"
}
}
}
How could I deal with this?
Is it possible to increase the size limit of this API?
Thank you!
Based on your description, I suggest you limit your attachment size.
Also, Microsoft Graph currently has a 4 MB limit per request. If you want to download a larger attachment, you need to write custom download logic (something like resumable, chunked downloading), and even then it cannot be more than 30 MB.
P.S. In general, the attachment size limits of mainstream mailboxes are between 20 MB and 30 MB.
reference:
4MB total size of each REST request
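A minimal Ruby sketch of checking the size before requesting the content (the IDs, token, and 4 MB threshold handling are placeholder assumptions; the 'value' unwrapping follows the sample response above):
require 'net/http'
require 'json'
require 'uri'

# Hypothetical IDs and token, for illustration only.
user_id, event_id, attachment_id = 'u_id', 'e_id', 'a_id'
token = ENV['GRAPH_TOKEN']

# Fetch only name and size first, exactly like the $select request above.
uri = URI("https://graph.microsoft.com/v1.0/users/#{user_id}/events/#{event_id}" \
          "/attachments/#{attachment_id}?$select=name,size")
req = Net::HTTP::Get.new(uri)
req['Authorization'] = "Bearer #{token}"
res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
meta = JSON.parse(res.body)
item = meta.key?('value') ? meta['value'].first : meta

if item['size'] > 4 * 1024 * 1024
  warn "#{item['name']} is #{item['size']} bytes and exceeds the 4 MB request limit"
else
  # Small enough: request the attachment without $select to get contentBytes.
end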
Is there any way to get the videos from autogenerated channels (like this one https://www.youtube.com/channel/UC-9-kyTW8ZkZNDHQJ6FgpwQ/videos) directly, without having to access all the playlists?
Using https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC-9-kyTW8ZkZNDHQJ6FgpwQ&key=... gives me 0 items.
If you mean running the following exactly:
https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC-9-kyTW8ZkZNDHQJ6FgpwQ&key=
then it returns the following error:
{
"error": {
"errors": [
{
"domain": "usageLimits",
"reason": "keyInvalid",
"message": "Bad Request"
}
],
"code": 400,
"message": "Bad Request"
}
}
That is because you have neglected to add an API key at the end.
Try testing in the query explorer; this seems to return quite a few results, though I have no idea if it's all of them.
GET https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC-9-kyTW8ZkZNDHQJ6FgpwQ&maxResults=25&key={YOUR_API_KEY}
returns
"pageInfo": {
"totalResults": 786,
"resultsPerPage": 25
},
This appears to return a list of only the playlists for this user.
Update:
If you try to request only videos for this user, you get 0 results. This is not the case for any other user I have tested.
GET https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=UC-9-kyTW8ZkZNDHQJ6FgpwQ&maxResults=50&order=date&type=video&key={YOUR_API_KEY}
This leads me to believe that it is not possible to retrieve the videos for an autogenerated channel directly. I recommend either logging this as a bug or adding it as a feature request here. Personally I think it's more of a feature request. In the meantime, a sketch of the playlist-based workaround follows below.
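A minimal Ruby sketch of going through the channel's playlists instead (API key handling and paging are assumptions; playlists.list and playlistItems.list are standard YouTube Data API v3 methods):
require 'net/http'
require 'json'
require 'uri'

API_KEY = ENV['YT_API_KEY']
CHANNEL_ID = 'UC-9-kyTW8ZkZNDHQJ6FgpwQ'

def yt_get(resource, params)
  uri = URI("https://www.googleapis.com/youtube/v3/#{resource}")
  uri.query = URI.encode_www_form(params.merge(key: API_KEY))
  JSON.parse(Net::HTTP.get(uri))
end

# 1. List the channel's playlists (paging omitted for brevity).
playlists = yt_get('playlists', part: 'snippet', channelId: CHANNEL_ID, maxResults: 50)

# 2. For each playlist, list the videos it contains.
playlists.fetch('items', []).each do |pl|
  items = yt_get('playlistItems', part: 'snippet', playlistId: pl['id'], maxResults: 50)
  items.fetch('items', []).each { |it| puts it.dig('snippet', 'title') }
end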
Google Sheets API v4
This API has been returning error 500 and error 503 for more than 24 hours.
The code was working previously for many months and nothing has changed.
I know 500 and 503 are supposed to be internal errors, but from searching it seems they are also returned in obscure, possibly under-documented cases that users can work around.
The issue is not related to rate limiting. My gut feeling was that it was related to an auth token expiring (since nothing in the code had changed), but I tried refreshing the auth token and still get the issue.
I don't see any issues on Google's status/uptime pages.
The response from the sheets.spreadsheets.values.append API is usually:
{
"code": 503,
"errors": [
{
"message": "The service is currently unavailable.",
"domain": "global",
"reason": "backendError"
}
]
}
But sometimes also:
{
"code": 500,
"errors": [
{
"message": "Internal error encountered.",
"domain": "global",
"reason": "backendError"
}
]
}
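Google's error guidance treats 500/503 backendError responses as retryable, so the usual client-side mitigation is exponential backoff; a minimal Ruby sketch (the error class and the append call are hypothetical placeholders, and retries alone would not help with the deterministic bug described further down):
# Hypothetical error class wrapping 500/503 backendError responses.
class BackendError < StandardError; end

def with_backoff(max_retries: 5)
  attempt = 0
  begin
    yield
  rescue BackendError
    attempt += 1
    raise if attempt > max_retries
    sleep((2 ** attempt) + rand)  # exponential backoff with random jitter
    retry
  end
end

# Illustrative usage with a hypothetical append call:
# with_backoff { sheets.append_values(spreadsheet_id, range, values) }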
Example request payload, which includes an actual sheet ID that reproduces this, in case a Googler is available to try it on their end:
{
"spreadsheetId": "1_P5IR4OLbYd27L9m184R37L_PP2drCk6PSJndIlEhms",
"range": "Incoming!A4",
"valueInputOption": "USER_ENTERED",
"insertDataOption": "INSERT_ROWS",
"resource": {
"values": [
[
"=HYPERLINK(\"https://url/\", \"Blah\")",
"6/13 22:18",
"=IF(AND(INDIRECT(\"R[0]C[3]\",false)<>\"\",INDIRECT(\"R[0]C[9]\",false)=\"\"),((INDIRECT(\"R[0]C[-1]\",false)+C$3/24)-NOW())*24,)",
"dv1",
"testdoc",
"170613_006_0400PM.MP3",
"00:40:00.000",
"",
"",
"",
"",
"",
"=IF(INDIRECT(\"R[0]C[-1]\",false)<>\"\",IFERROR(INDIRECT(\"R[0]C[-6]\",false)/INDIRECT(\"R[0]C[-1]\",false),\"---\"),)",
"",
"",
"",
"",
"",
"=IF(INDIRECT(\"R[0]C[1]\",false)=\"\",(INDIRECT(\"R[0]C[-17]\",false)+S$3/24-NOW())*24,)",
""
]
]
},
"auth": {
"transporter": {},
"clientId_": "anonymizied.apps.googleusercontent.com",
"clientSecret_": "anonymizied",
"redirectUri_": "urn:ietf:wg:oauth:2.0:oob",
"opts": {},
"credentials": {
"access_token": "anonymizied",
"refresh_token": "anonymizied",
"token_type": "Bearer",
"expiry_date": 1502144766732
}
}
}
I've carefully tracked down this issue and found a workaround. It's definitely a bug on Google's side, which seems to have been pushed to production around Aug 5th (+/- 1.5 days).
In my case, simply un-hiding a hidden row resolves the error. Hiding the row again reproduces the issue.
So if you hit this error, try un-hiding any hidden rows.
I have filed this issue with Google at https://issuetracker.google.com/64468867, but it seems they only triage public issues there every month or two.
This was an issue on the Google Sheets side, sorry. The fix is now rolled out so the problem shouldn't happen anymore. Please reply back here if it continues.
I have retrieved a JSON object using the Typhoeus gem.
require 'typhoeus'
require 'json'

url = 'https://www.example.com'
request = ::Typhoeus::Request.get(url, userpwd: username + ':' + pass)
content = JSON.parse(request.body)
I would like to count the occurrences of "priority":"high" (including the quotes) inside the JSON response. How do I go about doing this?
Edit:
"priority":"high" is a key value pair. It is deeply nested inside the json tree.(Don't how deeply it is nested). All I need is count of occurence of "priority":"high"
Any and all suggestion is welcome.
Sample data:
"tickets": [{
"url": "https://.zendesk.com/api/v2/tickets/xxxx.json",
"id": xxxxx,
"external_id": null,
"via": {
"channel": "email",
"source": {
"from": {
"address": "#compli.com",
"name": ""
},
"to": {
"name": "organization Global Support",
"address": "support#organization.zendesk.com"
},
"rel": null
}
},
"created_at": "2016-08-04T16:23:13Z",
"updated_at": "2016-08-08T20:26:01Z",
"type": "problem",
"subject": "Problems with abc Connect",
"raw_subject": "Problems with abc Connect",
"description": "Hi – our Tenet ID is 5675.\n\n \n\nThe abc report is not providing the full data when I run the billing preview. I am running it using Chrome. Attached are snapshots of what I’m doing plus the report generated.\n\n \n\nA perfect example of the problem is shown at the bottom of the report generated. Garber Automotive Group, account number A00000490 does not display the data for all of their products. Their data is shown on rows 5658 thru 5712 on the excel file BillingPreviewResult_201620 report run 08.04.16.\n\n \n\nHowever the EXACT same report (all the parameters are the same) run on 07/01/16 included all of Garber’s information. The excel file abc report run 07.01.16 10.13 AM has the data for Garber on rows 6099 – 6182.\n\n \n\nThe report is cutting off a lot of data for some reason. As you can see by comparing the amount of data between the two excel reports there are much fewer lines on the report run on today as opposed to the one run on 07/01, 6182 rows vs 5712 rows.\n\n \n\nThis is a business critical report for us. It is used for cash forecasting, monthly financial reporting, rolling budgeting and ad hoc reporting.\n\n \n\nWe need this problem identified and fixed immediately. It is already causing a problem with finalizing our July results.\n\n \n\nLet me know if you have any questions or need any additional data.\n\n \n\n \n\nRegards,\n\n \n\n \n\n \n\n| Controller\ndesk: 503.963-4239 | fax: 503.294.1200 | \n\nCompli - Cool, Calm and Compliant. TM\n\nVisit() to learn more.\n\n \n\nFollow us on LinkedIn () and Twitter",
"priority": "normal",
"status": "open",
"recipient": "support#organization.zendesk.com",
"requester_id": 1336424406,
"submitter_id": 1336424406,
"assignee_id": null,
"organization_id": 224504969,
"group_id": 21606503,
"collaborator_ids": [560973773, 786229209, 421597631, 539566717, 707192615, 1336424406, 31365392, 719608577, 1817633993],
"forum_topic_id": null,
"problem_id": null,
"has_incidents": false,
"due_at": null,
"tags": ["1_price", "best_practice_advise", "engage_global_services__email_", "escalate", "hard", "internal_escalation", "p0", "yes_escalated", "xxxxx", "zhub"],
"custom_fields": [{
"id": 22024091,
"value": "p0"
}, {
"id": 24212576,
"value": "best_practice_advise"
}, {
"id": 22035048,
"value": "xxx and so on.....