CloudKit Dashboard unexpected server error - iOS

When I try to use CloudKit, I find that I can't manage my data in the CloudKit Dashboard. The page shows the message "Unexpected server error". How can I solve this problem? Here is a screenshot of the error.
[Update]
Full error log:
ERROR TITLE Unexpected server error.
IS FATAL true
TYPE server
APPLICATION NAME Dashboard
BUILD NUMBER 15BDev63
TIME Tue Mar 24 2015 08:43:06 GMT+0100 (CET) (1427182986565)
HOST icloud.developer.apple.com
USER AGENT Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_2)
AppleWebKit/600.4.10 (KHTML, like Gecko) Version/8.0.4 Safari/600.4.10
ENVIRONMENT unknown
RECENT LOG MESSAGES
Tue, 24 Mar 2015 07:43:06 GMT: INFO: --> Request 1: GET to https://ckdashboardws.icloud.apple.com/bootstrap?request_uuid=e7516168-dcfe-4ed5-aa17-dc3a5a97ffb6, headers: Content-Type=text/plain, body: (empty)
<-- Response 1: 500 (294ms), headers: Cache-Control=no-cache, no-store, private, Content-Type=application/json; charset=UTF-8, X-Apple-Request-UUID=f05681b8-1f1c-4af5-a534-5f6531c5a463 body: {"errorReason":"Internal Server Error","errorCode":500,"requestUUID":"f05681b8-1f1c-4af5-a534-5f6531c5a463"}
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: CloudKit: ErrorCatcher dialog invoked.
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Attempting to load 'cloudkit/error_catcher'
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Module 'cloudkit/error_catcher' is not loaded, loading now.
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Loading CSS file in 'cloudkit/error_catcher' -> '/applications/dashboard/15BDev63/cloudkit/error_catcher/15BDev63/en-us/stylesheet.css'
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Loading JavaScript file in 'cloudkit/error_catcher' -> '/applications/dashboard/15BDev63/cloudkit/error_catcher/15BDev63/en-us/javascript.js'
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Module 'cloudkit/error_catcher' finished loading.
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Evaluating and invoking callbacks for 'cloudkit/error_catcher'.
Tue, 24 Mar 2015 07:43:06 GMT: DEBUG: SC.Module: Module 'cloudkit/error_catcher' has completed loading, invoking callbacks.

You should be able to just manage your CloudKit data. There have been incompatible changes to CloudKit containers during the betas. Was your container created during the beta phase? If you keep having this problem and don't have valuable data in your container, try resetting it: click on Deployment and then on Reset Development Environment.

In my case the Apple ID associated with my developer account was not activated for iCloud. You can check whether yours is by going to https://www.icloud.com and trying to sign in with your developer Apple ID; if you can't, you won't be able to sign in to the dashboard either.
What I ended up doing was going to My Apple ID (https://appleid.apple.com/) and re-verifying my Apple ID email address. Once this was completed I was able to log in to both www.icloud.com and the iCloud CloudKit Dashboard.
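This is about the developer Apple ID used to sign in to the dashboard; if you also want to verify from inside the app that the device's iCloud account is usable for CloudKit, a minimal sketch (assuming the default container) is:
import CloudKit

// Rough sketch: check whether the device's iCloud account is usable for
// CloudKit (assumes the default container).
CKContainer.default().accountStatus { status, error in
    switch status {
    case .available:
        print("iCloud account available")
    case .noAccount:
        print("No iCloud account is signed in on this device")
    case .restricted:
        print("iCloud access is restricted")
    case .couldNotDetermine:
        print("Could not determine status: \(error?.localizedDescription ?? "unknown error")")
    @unknown default:
        print("Unhandled account status")
    }
}
This won't fix a dashboard sign-in problem, but it helps rule out account issues on the device itself.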

I just had this problem and looked everywhere for a solution, but had no luck.
Finally, I realised that this started happening immediately after I added a record to the iCloud (CloudKit) database from one of my devices, so I thought deleting all records might work.
This wasn't too easy because I, like you, couldn't log in to the dashboard. However, I ran the following code snippet in my application for each entity in my database, and that fixed the problem.
import CloudKit
import UIKit

// In the app delegate:
func applicationDidFinishLaunching(_ application: UIApplication) {
    let publicDatabase = CKContainer.default().publicCloudDatabase

    // Fetch every record of the given type (a true predicate matches all records).
    let allRecordsQuery = CKQuery(recordType: "ENTITY", predicate: NSPredicate(value: true))
    publicDatabase.perform(allRecordsQuery, inZoneWith: nil) { records, error in
        guard let records = records, error == nil else {
            print(error?.localizedDescription ?? "No records returned")
            return
        }
        // Delete each fetched record individually.
        for record in records {
            publicDatabase.delete(withRecordID: record.recordID) { deletedRecordID, error in
                if let error = error {
                    print(error)
                } else {
                    print("Deleted \(deletedRecordID?.recordName ?? "record")")
                }
            }
        }
    }
}
Replace "ENTITY" with every type of record in your database and run again.
So if you have just two entities: Pet and Task, first set "ENTITY" to "Pet" and run the application. Then set the "ENTITY" to "Task" and run the application again.
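If you would rather batch the deletions than delete record by record, an untested variant of the same idea using CKModifyRecordsOperation (same "ENTITY" placeholder) could look like this:
import CloudKit

// Untested sketch: fetch all records of one type and delete them in a
// single batched CKModifyRecordsOperation instead of one call per record.
let publicDatabase = CKContainer.default().publicCloudDatabase
let query = CKQuery(recordType: "ENTITY", predicate: NSPredicate(value: true))

publicDatabase.perform(query, inZoneWith: nil) { records, error in
    guard let records = records, error == nil else {
        print(error?.localizedDescription ?? "No records returned")
        return
    }
    let operation = CKModifyRecordsOperation(recordsToSave: nil,
                                             recordIDsToDelete: records.map { $0.recordID })
    operation.modifyRecordsCompletionBlock = { _, deletedIDs, error in
        if let error = error {
            print(error)
        } else {
            print("Deleted \(deletedIDs?.count ?? 0) records")
        }
    }
    publicDatabase.add(operation)
}
Batching keeps the number of round trips to CloudKit down.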
I realise that your question was asked a while ago, but this bug clearly still hasn't been fixed and so I hope my solution helps anybody else who has this problem.

I was getting this same error message yesterday when trying to access the CloudKit Dashboard, which I had previously accessed with no problem. After trying several of the above suggestions with no luck, I called Apple. Apparently it was a system-wide problem. That allowed me to stop messing with my code looking for the problem and just move on to another task. So, if nothing else seems to be working, that might be worth checking.

Related

Microsoft Graph API deprecation headers - how to interpret?

I'm calling a Microsoft Graph API endpoint to change sensitivity labels (as described here and documented here). It's a beta endpoint and currently works well. Here's how to use it according to the documentation:
PATCH https://graph.microsoft.com/beta/groups/{id}
Content-type: application/json
{
"assignedLabels":
[
{
"labelId" : "45cd0c48-c540-4358-ad79-a3658cdc5b88"
}
]
}
Looking at the response headers, I noticed these three related to deprecation: Deprecation, Sunset and Link:
"Link": "<https://developer.microsoft-tst.com/en-us/graph/changes?$filterby=beta,PrivatePreview:Restricted_AU_Properties&from=2021-04-01&to=2021-05-01>;rel=\"deprecation\";type=\"text/html\",<https://developer.microsoft-tst.com/en-us/graph/changes?$filterby=beta,Device_Properties&from=2022-01-01&to=2022-02-01>;rel=\"deprecation\";type=\"text/html\"",
"Deprecation": "Mon, 05 Apr 2021 23:59:59 GMT",
"Sunset": "Sat, 19 Feb 2022 23:59:59 GMT",
I'm trying to determine whether this means that the endpoint stops working on Feb 19 with respect to sensitivity labels. The links in the Link response header unfortunately do not work and look kind of internal, e.g. https://developer.microsoft-tst.com/en-us/graph/changes?$filterby=beta,PrivatePreview:Restricted_AU_Properties&from=2021-04-01&to=2021-05-01
Looking at the query parameters of the links I see the keywords Restricted_AU_Properties and Device_Properties. The Microsoft Graph changelog does not show any upcoming change related to those or to assignedLabels.
How should I read this response? Is setting sensitivity labels using this endpoint going to stop working on Feb 19?
Talking to a colleague helped. Apparently the v1.0 Graph API endpoint also allows setting sensitivity labels, and the documentation claiming that it is read-only is wrong.
So my interpretation for now is that the deprecation headers tell me to use the v1.0 endpoint instead of beta, which would make sense.
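Following that interpretation, here is a rough, untested Swift sketch of the same call against the v1.0 endpoint that also logs any deprecation-related headers in the response (GROUP_ID and ACCESS_TOKEN are placeholders; v1.0 support for assignedLabels is worth double-checking against the current documentation):
import Foundation

// Untested sketch: send the same assignedLabels PATCH to the v1.0 endpoint
// and log any deprecation-related response headers.
let groupId = "GROUP_ID"          // placeholder
let accessToken = "ACCESS_TOKEN"  // placeholder OAuth bearer token

var request = URLRequest(url: URL(string: "https://graph.microsoft.com/v1.0/groups/\(groupId)")!)
request.httpMethod = "PATCH"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue("Bearer \(accessToken)", forHTTPHeaderField: "Authorization")
request.httpBody = """
{"assignedLabels": [{"labelId": "45cd0c48-c540-4358-ad79-a3658cdc5b88"}]}
""".data(using: .utf8)

URLSession.shared.dataTask(with: request) { _, response, error in
    guard let http = response as? HTTPURLResponse else {
        print(error?.localizedDescription ?? "No HTTP response")
        return
    }
    print("Status: \(http.statusCode)")
    // If v1.0 no longer returns these headers, that supports reading the
    // beta deprecation as "move to v1.0".
    for name in ["Deprecation", "Sunset", "Link"] {
        if let value = http.value(forHTTPHeaderField: name) {
            print("\(name): \(value)")
        }
    }
}.resume()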

IIS 7.5 max-age issue (ASP.NET MVC output cache)

We use Windows Server 2008 R2 Enterprise and IIS 7.5.7600.16385, and I deployed a simple web application (ASP.NET MVC, C#, .NET Framework 4.5.1) on the server.
The controller is shown below, and the *.cshtml view only outputs a datetime:
public class DetailController : Controller
{
    [OutputCache(Duration = 300, VaryByParam = "id")]
    public ActionResult Index(int id)
    {
        return View();
    }
}
When I first request the URL http://localhost:80/Detail/Index?id=3, the response is correct:
Cache-Control:public, max-age=300
Date:Mon, 24 Oct 2016 12:11:59 GMT
Expires:Mon, 24 Oct 2016 12:16:51 GMT
Last-Modified:Mon, 24 Oct 2016 12:11:51 GMT
But when I request the URL again (Ctrl+F5), the max-age is incorrect (the response now comes from the server cache):
Cache-Control:public, max-age=63612908450
Date:Mon, 24 Oct 2016 12:16:34 GMT
Expires:Mon, 24 Oct 2016 12:20:50 GMT
Last-Modified:Mon, 24 Oct 2016 12:15:50 GMT
I don't know why the max-age is so large or how it is generated; it resets when the output cache expires (Ctrl+F5).
In my production environment, the incorrect max-age causes a clicked URL to be served from the browser's disk cache.
Does anyone know why this happens and how to fix it?
This is a known issue and a bug is open for .NET 4.6.2, which ships as KB3151864.
Please see here for additional details: https://github.com/Microsoft/dotnet/issues/330
This is going to be fixed in .NET 4.6.3. I currently don't know if a fix will be made available earlier for 4.6.2.
The only known workaround at present is to downgrade by removing KB3151864, when possible.
The bogus value is roughly the number of seconds since 0001-01-01 (about 2016 years' worth), which suggests the header is being filled with an absolute timestamp rather than the remaining cache lifetime.
NOTE: the bug affects ONLY the computation of the "max-age" attribute in the Cache-Control header for cached responses. The actual caching mechanism and lifetime expiration still work.
I just spoke with the Microsoft Support team and this is what they told me:
The suggested workaround is to downgrade the framework from 4.6.2 to 4.6.1 by uninstalling the update KB3151864.
Go to Control Panel -> Programs -> Programs and Features -> Installed Updates and remove KB3151864; that will fix the issue.

Salesforce Bulk API error: InvalidBatch: Records not processed

I'm sending a batch of contacts through the Restforce gem via the Bulk API:
response = connection.post("/services/async/#{connection.options[:api_version]}/job/#{job_id}/batch") do |req|
  req.headers['Content-Type'] = 'text/csv; charset=UTF-8'
  req.headers['X-SFDC-Session'] = connection.options[:oauth_token]
  req.headers['Content-Length'] = payload.length.to_s
  req.body = Restforce::UploadIO.new(StringIO.new(payload), 'text/csv; charset=UTF-8')
end
Where payload is:
"AccountId,FirstName,LastName,Description,Phone,Email\n0011510001DXiOVAA1,Matt,Cali,Nice
guy,+14150000000,matt#example.com\n0011501001DXiOWAA1,Michael,Michael,very nice guy,+14150000001,michael#example.com\n"
I'm getting the error "InvalidBatch: Records not processed", and that is the only response I get.
How can I see what exactly is wrong with my batch?
It used to work before, and at some point it stopped working. I made sure I added all permissions on the trial account I created.
Request/response data:
struct Faraday::Env method=:post, body=#Restforce::Mash batchInfo=#Restforce::Mash apexProcessingTime="0"
apiActiveProcessingTime="0" createdDate="2015-12-06T23:06:28.000Z"
id="SOME_ID" jobId="SOME_ID" numberRecordsFailed="0"
numberRecordsProcessed="0" state="Queued"
systemModstamp="2015-12-06T23:06:28.000Z" totalProcessingTime="0">>,
url=#https://na22.salesforce.com/services/async/33.0/job/*SOME_ID*/batch>,
request=#, request_headers={"User-Agent"=>"Faraday
v0.9.2", "Content-Type"=>"text/csv; charset=UTF-8",
"X-SFDC-Session"=>"SOME_SESSION_ID", "Content-Length"=>"233",
"Authorization"=>"SOME_AUTH_ID"}, ssl=#, parallel_manager=nil, params=nil,
response=#> #url=#URI::HTTPS
https://na22.salesforce.com/services/async/33.0/job/SOME_ID/batch>
#request=#Faraday::RequestOptions timeout=600 seconds,
open_timeout=600 seconds> #request_headers={"User-Agent"=>"Faraday
v0.9.2", "Content-Type"=>"text/csv; charset=UTF-8",
"X-SFDC-Session"=>"SOME_ID", "Content-Length"=>"233",
"Authorization"=>"SOME_ID"} #ssl=#Faraday::SSLOptions verify=true>
#response=#Faraday::Response:0x007f22b44a78 ...>
#response_headers={"date"=>"Sun, 06 Dec 2015 23:06:28 GMT",
"set-cookie"=>"*SOME_DATA>",
"location"=>"/services/async/33.0/job/SOME_ID/batch/SOME_ID",
"content-type"=>"application/xml", "transfer-encoding"=>"chunked",
"connection"=>"close"} #status=201>>, response_headers={"date"=>"Sun,
06 Dec 2015 23:06:28 GMT", "set-cookie"=>"SOME_ID",
"expires"=>"SOME_ID",
"location"=>"/services/async/33.0/job/SOME_ID/batch/SOME_ID",
"content-type"=>"application/xml", "transfer-encoding"=>"chunked",
"connection"=>"close"}, status=201>
Check the following:
- the API is enabled for your org
- the Salesforce schema is not different from what you expect
- the bulk jobs in the Salesforce UI (Setup -> Administrative Setup -> Monitoring -> Bulk Data Load Jobs)
Since you get a batch id back, you could also fetch the batch state (https://instance_name-api.salesforce.com/services/async/APIversion/job/jobid/batch/batchId) and see if there are more details in there.
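For that last point, a rough, untested sketch of the batch-state request (shown in Swift purely for illustration; the same GET can presumably be issued through the existing Restforce connection), with the ids and session redacted as placeholders:
import Foundation

// Untested sketch: query the state of one batch. JOB_ID, BATCH_ID and
// SESSION_ID are placeholders; na22 and API version 33.0 come from the
// request dump above.
let url = URL(string: "https://na22.salesforce.com/services/async/33.0/job/JOB_ID/batch/BATCH_ID")!
var request = URLRequest(url: url)
request.setValue("SESSION_ID", forHTTPHeaderField: "X-SFDC-Session")

URLSession.shared.dataTask(with: request) { data, _, error in
    if let data = data, let xml = String(data: data, encoding: .utf8) {
        // The batchInfo XML contains the state and, for failed batches,
        // usually a stateMessage explaining why the batch was rejected.
        print(xml)
    } else {
        print(error?.localizedDescription ?? "No response")
    }
}.resume()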

How to use CATENATE command in IMAP

I've already been through RFC 4469.
I just want to know how exactly I can use the CATENATE command.
I also referred to the example given in the RFC, but couldn't really execute it against the server.
Any help will be appreciated.
I know this is an old question, but since I was looking into this myself and noticed it, I thought I'd share what I found.
So, the simple examples:
s SELECT INBOX
a APPEND INBOX (\Seen) CATENATE (TEXT {53+}
Date: Tue, 03 Jan 2017 22:39:40 +0200
Hello, world.
)
This will work with a modern server; the {53+} is a non-synchronizing literal (LITERAL+), so the client does not wait for the server's continuation response before sending the literal. You can also use a plain synchronizing literal:
s SELECT INBOX
a APPEND INBOX (\Seen) CATENATE (TEXT {53}
Date: Tue, 03 Jan 2017 22:39:40 +0200
Hello, world.
)
The thing about CATENATE is that it can also combine input from other messages already on the server. You can do this with a URL:
a APPEND INBOX CATENATE (URL "/INBOX;UIDVALIDITY=1483364905/;UID=2/;SECTION=HEADER" TEXT {8}
Hello..
)
a OK [APPENDUID 1483364905 4] Append completed.
a FETCH 4:4 (BODY[])
Date: Tue, 03 Jan 2017 22:39:40 +0200
Hello..
)
a OK Fetch completed.
And we have reused the headers from the mail with UID 2 in INBOX. The UIDVALIDITY value can be obtained with s STATUS INBOX (UIDVALIDITY).
The examples in the RFC are a bit convoluted, but they show how to use MIME multipart content as input.

Why does Chrome use the client cache differently in these two scenarios?

I'm working on a small single-page application using HTML5. One feature is to show PDF documents embedded in the page, which can be selected from a list.
Now I'm trying to make Chrome (at first, and then all the other modern browsers) use the local client cache to fulfill simple GET requests for PDF documents without going through the server (other than the first time, of course). I cause the PDF file to be requested by setting the "data" property on an <object> element in HTML.
I have found a working example for XMLHttpRequest (not <object>). If you use Chrome's developer tools (Network tab) you can see that the first request goes to the server, and results in a response with these headers:
Cache-Control:public,Public
Content-Encoding:gzip
Content-Length:130
Content-Type:text/plain; charset=utf-8
Date:Tue, 03 Jul 2012 20:34:15 GMT
Expires:Tue, 03 Jul 2012 20:35:15 GMT
Last-Modified:Tue, 03 Jul 2012 20:34:15 GMT
Server:Microsoft-IIS/7.5
Vary:Accept-Encoding
The second request is served from the local cache without any server roundtrip, which is what I want.
Back in my own application, I then used ASP.NET MVC 4 and set
[OutputCache(Duration=60)]
on my controller. The first request to this controller - with URL http://localhost:63035/?doi=10.1155/2007/98732 results in the following headers:
Cache-Control:public, max-age=60, s-maxage=0
Content-Length:238727
Content-Type:application/pdf
Date:Tue, 03 Jul 2012 20:45:08 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
The second request results in another roundtrip to the server, with a much quicker response (suggesting server-side caching?) but returns 200 OK and these headers:
Cache-Control:public, max-age=53, s-maxage=0
Content-Length:238727
Content-Type:application/pdf
Date:Tue, 03 Jul 2012 20:45:13 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
The third request for the same URL results in yet another roundtrip and a 304 response with these headers:
Cache-Control:public, max-age=33, s-maxage=0
Date:Tue, 03 Jul 2012 20:45:33 GMT
Expires:Tue, 03 Jul 2012 20:46:06 GMT
Last-Modified:Tue, 03 Jul 2012 20:45:06 GMT
Server:Microsoft-IIS/8.0
Vary:*
My question is, how should I set the OutputCache attribute in order to get the desired behaviour (i.e. PDF requests fulfilled from the client cache, within X seconds of the initial request)?
Or, am I not doing things right when I cause the PDF to display by setting the "data" property on an <object> element?
Clients are never obligated to cache. Each browser is free to use its own heuristic to decide whether it is worth caching an object. After all, any use of cache "competes" with other uses of the cache.
Caching is not designed to guarantee a quick response; it is designed to increase, on average, the likelihood that frequently used resources that are not changing will already be there. What you are trying to do is not what caches are designed to help with.
Based on the results you report, the version of Chrome you were using in 2012 decided it was pointless to cache an object that would expire in 60 seconds, and had only been asked for once. So it threw away the first copy, after using it. Then you asked a second time, and it started to give this URL a bit more priority -- it must have remembered recent URLs, and observed that this was a second request -- it kept the copy in cache, but when the third request came, asked server to verify that it was still valid (presumably because the expiration time was only a few seconds away). The server said "304 -- not modified -- use the copy you cached". It did NOT send the pdf again.
IMHO, that was reasonable cache behavior, for an object that will expire soon.
If you want to increase the chance that the PDF will stick around longer, then give a later expiration time, but say that it must check with the server to see if it is still valid.
If using the HTTP Cache-Control header, this might be: private, max-age=3600, must-revalidate. With this, you should see a round trip to the server, which will give a 304 response as long as the page is valid. This should be a quick response, since no data is sent back -- the browser's cached version is used.
private is optional -- not related to this caching behavior -- I'm assuming whatever this volatile PDF is, it only makes sense for the given user and/or shouldn't be hanging around for a long time, in some shared location.
If you really need the performance of not talking to the server at all, then consider writing JavaScript to hide/show the DOM element holding that PDF, rather than dropping it and needing to ask for it again.
Your javascript code for the page is the only place that "understands" that you really want that PDF to stick around, even if you aren't currently showing it to the user.
Have you tried setting the Location property of the OutputCache attribute to "Client"?
[OutputCache(Duration=60, Location = OutputCacheLocation.Client)]
By default the Location property is set to "Any", which means the response may be cached on the client, on a proxy, or at the server.
More at MSDN: OutputCacheLocation.
