We've started getting 500 and 503 errors in our pipelines when running them this morning. It looks like the client cannot get the job status once again.
46142 [main] WARN com.google.cloud.dataflow.sdk.runners.DataflowPipelineJob - There were problems getting current job status:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 500 Internal Server Error
{
"code" : 500,
"errors" : [ {
"domain" : "global",
"message" : "Internal error encountered.",
"reason" : "backendError"
} ],
"message" : "Internal error encountered.",
"status" : "INTERNAL"
}
1399601 [main] WARN com.google.cloud.dataflow.sdk.runners.DataflowPipelineJob - There were problems getting current job status:
com.google.api.client.googleapis.json.GoogleJsonResponseException: 503 Service Unavailable
{
"code" : 503,
"errors" : [ {
"domain" : "global",
"message" : "The service is currently unavailable.",
"reason" : "backendError"
} ],
"message" : "The service is currently unavailable.",
"status" : "UNAVAILABLE"
}
What's the problem?
Job id: 2015-05-19_17_41_46-7486669477281046678
This was a client-side-only issue caused by a transient error and did not affect job submission. It should not be occurring anymore.
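As a side note, when status polling fails like this, the job's state can be checked independently of the submitting client, for example with the Cloud SDK (command shown as in a current gcloud release; older releases may require the alpha track):

gcloud dataflow jobs describe 2015-05-19_17_41_46-7486669477281046678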
Related
I have a simple Windows Service with the following features:
.Net 6.0
RabbitMQ
Serilog
Serilog Seq, email, console, MSSql, Configuration...
Dependency Injection (Microsoft)
EntityFramework
HostBuilder
The service starts up fine: MQ queues are created and log data is coming in to both the console and Seq. A couple of seconds after startup (while the application is waiting on MQ messages), an IOException is thrown, which is posted to both the console and Seq. This is what the console output looks like:
[17:31:25.720 [Information] Myapp.Cloud.SharedObjectService.Program "Myapp.Cloud.SharedObjectService" microservice starting up.
[17:31:25.790 [Information] Myapp.Cloud.SharedObjectService.Program Database: Data Source=Uranus;Initial Catalog=MyappCloudDev;User ID=sa;Password=jaffa
[17:31:26.689 [Information] "Myapp.Cloud.SharedObjectService" all built.
[17:31:26.982 [Information] Myapp.Cloud.MQ.MQCloudProducer MQCloudProducer.Producer is connecting to MQ service "Myapp.Cloud.MQ" at "localhost"
[17:31:26.982 [Information] Myapp.Cloud.MQ.MQCloudConsumer MQCloudConsumer.Connecting to MQ service "Myapp.Cloud.MQ" at "localhost" with user """" and DestinationCode "SharedObjectService"
[17:31:27.143 [Information] Myapp.Cloud.MQ.MQCloudProducer Producer connected to MQ service "Myapp.Cloud.MQ".
[17:31:27.144 [Information] Myapp.Cloud.MQ.MQCloudConsumer MQCloudConsumer.Connecting to MQ service "Myapp.Cloud.MQ" at "localhost" with user """" and DestinationCode "SharedObjectService"
[17:31:27.144 [Information] Myapp.Cloud.SharedObjectService.BusinessLogicLayer.SharedObjectService "SharedObject" microservice started.
[17:31:36.504 [Information] Myapp.Cloud.MQ.MQCloudConsumer MQCloudConsumer.Connected to MQ service queue "Myapp.Cloud.MQ":"SendSharedObject" at "localhost".
[17:31:36.504 [Information] Myapp.Cloud.MQ.MQCloudConsumer MQCloudConsumer.Connected to MQ service queue "Myapp.Cloud.MQ":"OutputResponse" at "localhost".
FirstChanceException : System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
---> System.Net.Sockets.SocketException (10054): An existing connection was forcibly closed by the remote host.
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.CreateException(SocketError error, Boolean forAsyncThrow)
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ReceiveAsync(Socket socket, CancellationToken cancellationToken)
at System.Net.Sockets.Socket.ReceiveAsync(Memory`1 buffer, SocketFlags socketFlags, Boolean fromNetworkStream, CancellationToken cancellationToken)
at System.Net.Sockets.NetworkStream.ReadAsync(Memory`1 buffer, CancellationToken cancellationToken)
at System.Net.Http.HttpConnection.<CheckUsabilityOnScavenge>g__ReadAheadWithZeroByteReadAsync|44_0()
at System.Runtime.CompilerServices.AsyncMethodBuilderCore.Start[TStateMachine](TStateMachine& stateMachine)
at System.Net.Http.HttpConnection.<CheckUsabilityOnScavenge>g__ReadAheadWithZeroByteReadAsync|44_0()
at System.Net.Http.HttpConnection.CheckUsabilityOnScavenge()
at System.Net.Http.HttpConnectionPool.<CleanCacheAndDisposeIfUnused>g__IsUsableConnection|115_2(HttpConnectionBase connection, Int64 nowTicks, TimeSpan pooledConnectionLifetime, TimeSpan pooledConnectionIdleTimeout)
at System.Net.Http.HttpConnectionPool.<CleanCacheAndDisposeIfUnused>g__ScavengeConnectionList|115_1[T](List`1 list, List`1& toDispose, Int64 nowTicks, TimeSpan pooledConnectionLifetime, TimeSpan pooledConnectionIdleTimeout)
at System.Net.Http.HttpConnectionPool.CleanCacheAndDisposeIfUnused()
at System.Net.Http.HttpConnectionPoolManager.RemoveStalePools()
at System.Net.Http.HttpConnectionPoolManager.<>c.<.ctor>b__11_0(Object s)
at System.Threading.TimerQueueTimer.CallCallback(Boolean isThreadPool)
at System.Threading.TimerQueueTimer.Fire(Boolean isThreadPool)
at System.Threading.TimerQueue.FireNextTimers()
at System.Threading.TimerQueue.AppDomainTimerCallback(Int32 id)
at System.Threading.UnmanagedThreadPoolWorkItem.System.Threading.IThreadPoolWorkItem.Execute()
at System.Threading.ThreadPoolWorkQueue.Dispatch()
at System.Threading.PortableThreadPool.WorkerThread.WorkerThreadStart()
at System.Threading.Thread.StartCallback()
--- End of stack trace from previous location ---
--- End of inner exception stack trace ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
FirstChanceException : System.IO.IOException: Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host..
---> System.Net.Sockets.SocketException (10054): An existing connection was forcibly closed by the remote host.
--- End of inner exception stack trace ---
at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
It's only the AppDomain.CurrentDomain.FirstChanceException handler that catches this exception. Breaking on all errors or stepping does not show the exception or where it occurs. The exception only has a system stack trace, nothing related to the project code.
If I remove the Seq part of the appsettings file, the problem is gone.
The Serilog part of appsettings.json looks like this:
"serilog": {
"Using": [ "Serilog.Sinks.MSSqlServer", "Serilog.Sinks.Email" ],
"MinimumLevel": {
"Default": "Information",
"Override": {
"Microsoft": "Warning",
"System": "Warning"
}
},
"Enrich": [ "FromLogContext", "WithMachineName", "WithProcessId" ],
"WriteTo": [
{
"Name": "Console",
"Args": {
"outputTemplate": "[{Timestamp:HH:mm:ss.fff} [{Level}] {SourceContext} {Message}{NewLine}{Exception}",
"theme": "Serilog.Sinks.SystemConsole.Themes.AnsiConsoleTheme::Code, Serilog.Sinks.Console"
}
},
{
"Name": "File",
"Args": {
"path": "/Logs/log.txt",
"outputTemplate": "{Timestamp:G} {SourceContext} [{Level}] {Message}{NewLine:1}{Exception:1}",
"formatter": "Serilog.Formatting.Json.JsonFormatter, Serilog",
"fileSizeLimitBytes": 1000000,
"rollOnFileSizeLimit": "true",
"shared": "true",
"flushToDiskInterval": 3
}
},
{
"Name": "Seq",
"Args": {
"serverUrl": "http://localhost:8081/"
}
},
{
"Name": "MSSqlServer",
"Args": {
"connectionString": "Data Source=x;Initial Catalog=MyAppClouddev;User ID=x;Password=x",
"tableName": "Logs",
"restrictedToMinimumLevel": "Fatal"
}
}
]
}
Serilog is called like this (structured logging):
_logger.LogInformation("OutputResponse received {@message}, {@messageGuid}, {@messageChainGuid}", args.Data, args.Data.MessageGuid, args.Data.MessageChainGuid);
Sometimes it is not just string values that are logged, but whole objects.
How do you troubleshoot this?
EDIT: When turning off "Just My Code" I can see that the exception is thrown from within a Microsoft library, and the URL (with port) is the same as the Seq service's serverUrl, which is why I think the problem is due to this sink. And if I remove the Seq part of the configuration I do not get any exceptions. The regular logging to the console works just fine.
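The stack trace points at HttpClient's connection-pool scavenger (HttpConnectionPool.CleanCacheAndDisposeIfUnused), so this looks like a handled first-chance exception raised when an idle keep-alive connection, plausibly the Seq sink's, is closed by the remote end; it may be harmless noise rather than a fault in your code. One way to check whether the Seq sink itself is reporting problems is to enable Serilog's SelfLog before the host is built. A minimal C# sketch (the class and method names are illustrative):

using System;
using Serilog.Debugging;

public static class SerilogDiagnostics
{
    // Call once at startup, before the logger/host is built.
    public static void EnableSelfLog()
    {
        // Routes errors raised inside Serilog sinks (e.g. the Seq sink failing
        // to reach http://localhost:8081/) to stderr so they become visible.
        SelfLog.Enable(message => Console.Error.WriteLine(message));
    }
}

If SelfLog stays quiet while the first-chance exception keeps appearing, it is most likely just the pooled HTTP connection being recycled and can be ignored or filtered out of the FirstChanceException handler.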
I am currently working on an already-started project with the following setup (I'm completely new to Caddy, so sorry if I'm asking something basic):
A Docker container with PostgreSQL -- container called myappdb
A Spring Boot Docker application with some microservices -- container called backend
A Caddy Docker container that reverse proxies to the Spring Boot container -- container called caddy
The three containers are in a Docker network called project_net.
I worked on the Spring Boot backend and everything worked well. I accidentally stopped the Caddy container and restarted it, and now I cannot make REST calls to the HTTPS server anymore.
Here is the Caddyfile:
https://app.myapp.it {
tls myapp@gmail.com
reverse_proxy /* {
to backend:48795
flush_interval -1
}
}
Here is the Dockerfile for the Caddy image:
FROM caddy:2.4.5
COPY Caddyfile /etc/caddy/Caddyfile
ENV ACME_AGREE=true
EXPOSE 443
Everything is running on an Apache application server, and I think everything is set up correctly because it all worked well until yesterday!
Here is the log of the Caddy container on startup:
2022-02-24T00:49:13.077709051Z 2022/02/24 00:49:13.077 INFO using provided configuration {"config_file": "/etc/caddy/Caddyfile", "config_adapter": "caddyfile"}
2022-02-24T00:49:13.080517683Z 2022/02/24 00:49:13.080 WARN input is not formatted with 'caddy fmt' {"adapter": "caddyfile", "file": "/etc/caddy/Caddyfile", "line": 2}
2022-02-24T00:49:13.082483777Z 2022/02/24 00:49:13.082 INFO admin admin endpoint started {"address": "tcp/localhost:2019", "enforce_origin": false, "origins": ["localhost:2019", "[::1]:2019", "127.0.0.1:2019"]}
2022-02-24T00:49:13.083012379Z 2022/02/24 00:49:13.082 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443}
2022-02-24T00:49:13.083044007Z 2022/02/24 00:49:13.082 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"}
2022-02-24T00:49:13.083262915Z 2022/02/24 00:49:13.082 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc0003bdb90"}
2022-02-24T00:49:13.088176927Z 2022/02/24 00:49:13.087 INFO tls cleaning storage unit {"description": "FileStorage:/data/caddy"}
2022-02-24T00:49:13.088214299Z 2022/02/24 00:49:13.087 INFO tls finished cleaning storage units
2022-02-24T00:49:13.088566440Z 2022/02/24 00:49:13.088 INFO http enabling automatic TLS certificate management {"domains": ["app.myapp.it"]}
2022-02-24T00:49:13.089217858Z 2022/02/24 00:49:13.088 INFO autosaved config (load with --resume flag) {"file": "/config/caddy/autosave.json"}
2022-02-24T00:49:13.089255497Z 2022/02/24 00:49:13.088 INFO serving initial configuration
2022-02-24T00:49:13.090255185Z 2022/02/24 00:49:13.089 INFO tls.obtain acquiring lock {"identifier": "app.myapp.it"}
2022-02-24T00:49:13.104037308Z 2022/02/24 00:49:13.103 INFO tls.obtain lock acquired {"identifier": "app.myapp.it"}
2022-02-24T00:49:13.980759033Z 2022/02/24 00:49:13.980 INFO tls.issuance.acme waiting on internal rate limiter {"identifiers": ["app.myapp.it"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": "myapp#gmail.com"}
2022-02-24T00:49:13.980807648Z 2022/02/24 00:49:13.980 INFO tls.issuance.acme done waiting on internal rate limiter {"identifiers": ["app.myapp.it"], "ca": "https://acme-v02.api.letsencrypt.org/directory", "account": "myapp#gmail.com"}
2022-02-24T00:49:14.538528714Z 2022/02/24 00:49:14.538 INFO tls.issuance.acme.acme_client trying to solve challenge {"identifier": "app.myapp.it", "challenge_type": "tls-alpn-01", "ca": "https://acme-v02.api.letsencrypt.org/directory"}
2022-02-24T00:49:15.976582736Z 2022/02/24 00:49:15.976 ERROR tls.issuance.acme.acme_client challenge failed {"identifier": "app.myapp.it", "challenge_type": "tls-alpn-01", "status_code": 403, "problem_type": "urn:ietf:params:acme:error:unauthorized", "error": "Cannot negotiate ALPN protocol \"acme-tls/1\" for tls-alpn-01 challenge"}
2022-02-24T00:49:15.976692391Z 2022/02/24 00:49:15.976 ERROR tls.issuance.acme.acme_client validating authorization {"identifier": "app.myapp.it", "error": "authorization failed: HTTP 403 urn:ietf:params:acme:error:unauthorized - Cannot negotiate ALPN protocol \"acme-tls/1\" for tls-alpn-01 challenge", "order": "https://acme-v02.api.letsencrypt.org/acme/order/422657490/66417417610", "attempt": 1, "max_attempts": 3}
2022-02-24T00:49:17.508224302Z 2022/02/24 00:49:17.507 INFO tls.issuance.acme.acme_client trying to solve challenge {"identifier": "app.myapp.it", "challenge_type": "http-01", "ca": "https://acme-v02.api.letsencrypt.org/directory"}
2022-02-24T00:49:18.933967989Z 2022/02/24 00:49:18.933 ERROR tls.issuance.acme.acme_client challenge failed {"identifier": "app.myapp.it", "challenge_type": "http-01", "status_code": 403, "problem_type": "urn:ietf:params:acme:error:unauthorized", "error": "Invalid response from http://app.ripapp.it/.well-known/acme-challenge/QG2yr7WcBg8Wbj9evi8oyk1CzaTFM0Y9bkgkmqq5Iww [91.187.200.219]: \"<html lang=\\\"en\\\" xml:lang=\\\"en\\\" xmlns=\\\"http://www.w3.org/1999/xhtml\\\">\\n<head>\\n <title>Connection denied by Geolocation</title>\\n \""}
2022-02-24T00:49:18.934101729Z 2022/02/24 00:49:18.933 ERROR tls.issuance.acme.acme_client validating authorization {"identifier": "app.myapp.it", "error": "authorization failed: HTTP 403 urn:ietf:params:acme:error:unauthorized - Invalid response from http://app.myapp.it/.well-known/acme-challenge/QG2yr7WcBg8Wbj9evi8oyk1CzaTFM0Y9bkgkmqq5Iww [91.187.200.219]: \"<html lang=\\\"en\\\" xml:lang=\\\"en\\\" xmlns=\\\"http://www.w3.org/1999/xhtml\\\">\\n<head>\\n <title>Connection denied by Geolocation</title>\\n \"", "order": "https://acme-v02.api.letsencrypt.org/acme/order/422657490/66417426840", "attempt": 2, "max_attempts": 3}
2022-02-24T00:49:20.696387362Z 2022/02/24 00:49:20.695 ERROR tls.obtain could not get certificate from issuer {"identifier": "app.myapp.it", "issuer": "acme-v02.api.letsencrypt.org-directory", "error": "[app.myapp.it] solving challenges: app.myapp.it: no solvers available for remaining challenges (configured=[http-01 tls-alpn-01] offered=[http-01 dns-01 tls-alpn-01] remaining=[dns-01]) (order=https://acme-v02.api.letsencrypt.org/acme/order/422657490/66417435240) (ca=https://acme-v02.api.letsencrypt.org/directory)"}
2022-02-24T00:49:21.383148322Z 2022/02/24 00:49:21.382 INFO tls.issuance.zerossl generated EAB credentials {"key_id": "fiNQgkXxmfwTdX1q1gFasg"}
2022-02-24T00:49:24.460492479Z 2022/02/24 00:49:24.459 INFO tls.issuance.acme waiting on internal rate limiter {"identifiers": ["app.myapp.it"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "myapp#gmail.com"}
2022-02-24T00:49:24.460580992Z 2022/02/24 00:49:24.460 INFO tls.issuance.acme done waiting on internal rate limiter {"identifiers": ["app.myapp.it"], "ca": "https://acme.zerossl.com/v2/DV90", "account": "myapp#gmail.com"}
I cannot work without resolving this (the currently active website is listening on the HTTP port, so I cannot test anything over HTTP).
It seems the problem is that Let's Encrypt is somehow refusing the connection. What can I do?
Is there something I can do to solve this? (Let me know if you need any other files or configuration.)
I was thinking about switching to Traefik, but ideally I would solve this and keep the structure of the project as it is.
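For what it's worth, the log shows both challenges failing before they ever reach Caddy: the http-01 attempt got back a "Connection denied by Geolocation" page from 91.187.200.219, and only the dns-01 challenge (which this Caddyfile has no solver for) was left. That suggests something in front of the container, or the port mapping itself, changed when it was restarted. While investigating, a sketch like the following, which only adds Caddy's acme_ca global option pointed at the Let's Encrypt staging directory, avoids burning production rate limits on repeated failed attempts:

{
    # Staging CA for debugging only; remove once issuance succeeds again.
    acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
}

https://app.myapp.it {
    tls myapp@gmail.com
    reverse_proxy /* {
        to backend:48795
        flush_interval -1
    }
}

It is also worth confirming that the restarted container still publishes ports 80 and 443 to the host (e.g. -p 80:80 -p 443:443 or the equivalent compose entry), since both the http-01 and tls-alpn-01 challenges have to reach Caddy from the internet.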
Problem:
I am trying to upload images to OneDrive and frequently get this error:
504 Gateway Timeout (Unknown error).
API used:
PUT
https://graph.microsoft.com/v1.0/users/{userId}/drive/items/{rootFolderId}/{folderPath}/{fileName}:/content
Response:
504 Gateway Timeout
{
"error": {
"code": "UnknownError",
"message": "",
"innerError": {
"request-id": "9709847a-36d4-42f2-90dd-4c37094caead",
"date": "2018-05-16T12:18:37"
}
}
}
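Transient 5xx responses from Graph are normally handled by retrying with exponential backoff (honouring a Retry-After header when one is returned). For files larger than about 4 MB, Microsoft Graph also documents a resumable upload session as the alternative to a single PUT to /content, which can help with gateway timeouts on slow uploads. A sketch of that flow, using the same path style as above (the conflict-behaviour value and chunk sizes are illustrative):

POST https://graph.microsoft.com/v1.0/users/{userId}/drive/items/{rootFolderId}:/{folderPath}/{fileName}:/createUploadSession
Content-Type: application/json

{ "item": { "@microsoft.graph.conflictBehavior": "replace" } }

The response contains an uploadUrl; the file content is then sent to that URL in one or more PUT requests with Content-Range headers (chunk sizes in multiples of 320 KiB), e.g.:

PUT {uploadUrl}
Content-Length: 3276800
Content-Range: bytes 0-3276799/10485760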
I have a batch job that reads from and writes to a storage bucket within the same project. I'm seeing this exception when it tries to write the output. Any idea?
(c1a5d1aff2d8459b): java.lang.RuntimeException: com.google.cloud.dataflow.sdk.util.UserCodeException: java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 400 Bad Request
{
"code" : 400,
"errors" : [ {
"domain" : "global",
"message" : "No object name",
"reason" : "required"
} ],
"message" : "No object name"
}
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn$1.output(SimpleParDoFn.java:160)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:288)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnContext.outputWindowedValue(DoFnRunnerBase.java:284)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase$DoFnProcessContext$1.outputWindowedValue(DoFnRunnerBase.java:508)
at com.google.cloud.dataflow.sdk.util.GroupAlsoByWindowsViaIteratorsDoFn.processElement(GroupAlsoByWindowsViaIteratorsDoFn.java:123)
at com.google.cloud.dataflow.sdk.util.SimpleDoFnRunner.invokeProcessElement(SimpleDoFnRunner.java:49)
at com.google.cloud.dataflow.sdk.util.DoFnRunnerBase.processElement(DoFnRunnerBase.java:139)
at com.google.cloud.dataflow.sdk.runners.worker.SimpleParDoFn.processElement(SimpleParDoFn.java:188)
at com.google.cloud.dataflow.sdk.runners.worker.ForwardingParDoFn.processElement(ForwardingParDoFn.java:42)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerLoggingParDoFn.processElement(DataflowWorkerLoggingParDoFn.java:47)
at com.google.cloud.dataflow.sdk.util.common.worker.ParDoOperation.process(ParDoOperation.java:55)
at com.google.cloud.dataflow.sdk.util.common.worker.OutputReceiver.process(OutputReceiver.java:52)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.runReadLoop(ReadOperation.java:221)
at com.google.cloud.dataflow.sdk.util.common.worker.ReadOperation.start(ReadOperation.java:182)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:69)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:284)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:220)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:170)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:192)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:172)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:159)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
This is caused by specifying an incorrect path to TextIO.Write (missing the GCS bucket - an example correct path is gs://some-bucket/some-output-prefix whereas in this job it was specified as simply gs://some-output-prefix).
This should have been caught at pipeline construction time, before starting the workers; it is a bug in the Apache Beam and Dataflow SDKs' validation of GCS paths. I'm working on a fix at http://github.com/apache/beam/pull/2602; follow that PR for updates. – jkff
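For reference, a minimal Java sketch of a correctly specified output path, using the Dataflow 1.x SDK classes that appear in the stack trace (bucket and prefix names are placeholders):

import com.google.cloud.dataflow.sdk.Pipeline;
import com.google.cloud.dataflow.sdk.io.TextIO;
import com.google.cloud.dataflow.sdk.options.PipelineOptionsFactory;

public class WriteExample {
  public static void main(String[] args) {
    Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

    p.apply(TextIO.Read.from("gs://some-bucket/input/*.txt"))
     // Correct: bucket plus object prefix. Writing to "gs://some-output-prefix"
     // (no bucket) produces the "No object name" 400 error shown above.
     .apply(TextIO.Write.to("gs://some-bucket/some-output-prefix"));

    p.run();
  }
}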
A number of my Google Cloud Dataflow jobs failed yesterday, reporting internal errors that I have not seen before.
Here are two examples:
Job ID 2016-01-31_12_14_47-10166346951693629111 failed with the following error:
Jan 31, 2016, 10:15:25 PM
(bc20d8395f1f7459): Staged package jetty-servlet-9.2.10.v20150310-3EcW9gR7xsTM1TnqPH__rQ.jar at location 'gs://XXXXXXXXX/jetty-servlet-9.2.10.v20150310-3EcW9gR7xsTM1TnqPH__rQ.jar' is inaccessible.
and job ID 2016-01-31_12_22_11-15290010907236071290 failed with this error:
Jan 31, 2016, 11:13:58 PM
(56214ba1d51ca7d6): java.io.IOException: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone { "code" : 500, "errors" : [ { "domain" : "global", "message" : "Backend Error", "reason" : "backendError" } ], "message" : "Backend Error" }
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.waitForCompletionAndThrowIfUploadFailed(AbstractGoogleAsyncWriteChannel.java:431)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel.close(AbstractGoogleAsyncWriteChannel.java:289)
at com.google.cloud.dataflow.sdk.runners.worker.TextSink$TextFileWriter.close(TextSink.java:243)
at com.google.cloud.dataflow.sdk.util.common.worker.WriteOperation.finish(WriteOperation.java:100)
at com.google.cloud.dataflow.sdk.util.common.worker.MapTaskExecutor.execute(MapTaskExecutor.java:77)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.executeWork(DataflowWorker.java:254)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.doWork(DataflowWorker.java:191)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorker.getAndPerformWork(DataflowWorker.java:144)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.doWork(DataflowWorkerHarness.java:180)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:161)
at com.google.cloud.dataflow.sdk.runners.worker.DataflowWorkerHarness$WorkerThread.call(DataflowWorkerHarness.java:148)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 410 Gone { "code" : 500, "errors" : [ { "domain" : "global", "message" : "Backend Error", "reason" : "backendError" } ], "message" : "Backend Error" }
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:145)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:432)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.hadoop.util.AbstractGoogleAsyncWriteChannel$UploadOperation.call(AbstractGoogleAsyncWriteChannel.java:357) ... 4 more
Was there any maintenance or other work occurring on the Dataflow service that might have caused these errors?