I have a Postgres database on Heroku. When I deploy the data model with prisma deploy, the following error is often produced:
ERROR: Whoops. Looks like an internal server error. Search your server logs for request ID: local:cjxrmcnpx00hq0692zuwttqwv
{
"data": {
"addProject": null
},
"errors": [
{
"message": "Whoops. Looks like an internal server error. Search your server logs for request ID: local:cjxrmcnpx00hq0692zuwttqwv",
"path": [
"addProject"
],
"locations": [
{
"line": 2,
"column": 9
}
],
"requestId": "local:cjxrmcnpx00hq0692zuwttqwv"
}
],
"status": 200
}
and on checking the Docker logs I see this error:
Jul 14, 2019 12:18:34 PM org.postgresql.Driver connect
prisma_1 | SEVERE: Connection error:
prisma_1 | org.postgresql.util.PSQLException: FATAL: too many connections for role "bcueventxumaik"
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2566)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:131)
prisma_1 | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:210)
prisma_1 | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
prisma_1 | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
prisma_1 | at org.postgresql.Driver.makeConnection(Driver.java:452)
prisma_1 | at org.postgresql.Driver.connect(Driver.java:254)
prisma_1 | at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
prisma_1 | at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
prisma_1 | at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 |
prisma_1 | Exception in thread "main" org.postgresql.util.PSQLException: FATAL: too many connections
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2566)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:131)
prisma_1 | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:210)
prisma_1 | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
prisma_1 | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
prisma_1 | at org.postgresql.Driver.makeConnection(Driver.java:452)
prisma_1 | at org.postgresql.Driver.connect(Driver.java:254)
prisma_1 | at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
prisma_1 | at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
prisma_1 | at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_prisma_1 exited with code 1
The error says there are too many connections, but I am running prisma deploy from only one terminal, and at the same time I am able to connect to the database using pgAdmin 4. Moreover, the database seems perfectly reachable, as I am able to ping it from inside the container.
P.S. I have updated the Docker logs: earlier, running docker logs -f processid gave me only older logs, but after rebuilding the container with docker-compose up I got the latest logs.
As the error clearly states, there are too many connections to the database. So we need to investigate how many connections there are, who is creating them, and why they are created, in order to either limit the consumers or increase the number of available connections.
First, we can use the Heroku CLI to check the number of used and available connections:
$ heroku pg:info
=== DATABASE_URL
Plan: Private 2
Status: Available
HA Status: Available
Data Size: 2.23 GB
Tables: 83
PG Version: 10.1
Connections: 26/400
Connection Pooling: Available
For more information on how to investigate Heroku Postgres databases, see: https://devcenter.heroku.com/articles/heroku-postgresql#pg-info
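The Heroku CLI can also give a quick view of current activity on the database; the pg_stat_activity queries shown further below give the more complete picture:
$ heroku pg:ps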
To further investigate who is connected to your database, you can use either psql or pgAdmin. In pgAdmin, select the database, open the Dashboard tab, and look at the Server activity panel at the bottom of the page, which lists all connected sessions. In psql, you could run a query like this:
SELECT pid AS process_id,
       usename AS username,
       datname AS database_name,
       client_addr AS client_address,
       application_name,
       backend_start,
       state
FROM pg_stat_activity;
For a more detailed how-to, see: https://dataedo.com/kb/query/postgresql/list-database-sessions
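To see at a glance which client holds most of the connections, you can also aggregate pg_stat_activity (a minimal sketch using the same view as above):
SELECT usename AS username,
       application_name,
       state,
       count(*) AS connections
FROM pg_stat_activity
GROUP BY usename, application_name, state
ORDER BY connections DESC;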
By now you have probably identified who is creating the connections to your database and can either configure that client to use fewer connections or increase the number of available database connections.
One possible consumer of database connections is, of course, the Prisma server itself. Luckily, the Prisma config provides a setting to limit its database connections.
The connectionLimit property in PRISMA_CONFIG determines the number of
database connections a Prisma service is going to use.
You can read more about it here: https://www.prisma.io/docs/prisma-server/database-connector-POSTGRES-jgfr/#managing-database-connections
If you are using Heroku to run the Docker container with your Prisma server, a PRISMA_CONFIG could look like this:
port: $PORT
managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
databases:
  default:
    connector: postgres
    migrations: true
    connectionLimit: 2
    uri: ${DATABASE_URL}?ssl=1
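If you run the Prisma server locally with docker-compose instead (as in the question), the same connectionLimit setting goes into the PRISMA_CONFIG block of the compose file. A minimal sketch, assuming the standard prismagraphql/prisma image and that DATABASE_URL is set in your shell to the Heroku connection string:
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34   # assumed image/tag, adjust to your setup
    restart: always
    ports:
      - '4466:4466'
    environment:
      PRISMA_CONFIG: |
        port: 4466
        databases:
          default:
            connector: postgres
            migrations: true
            connectionLimit: 2           # cap the connections this server opens
            uri: ${DATABASE_URL}?ssl=1   # substituted by docker-compose from your environment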
I hope this structured approach helps. Let me know if you need more clarification; if so, please provide details about the nature of the existing database connections.
Run this command to inspect the Prisma container's logs:
docker logs <YOUR_PRISMA_CONTAINER_NAME>
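If you are not sure of the container name, docker ps lists the running containers; under docker-compose the Prisma container name usually ends in _prisma_1:
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
docker logs -f <YOUR_PRISMA_CONTAINER_NAME>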
Use pooling by sharing a single PrismaClient instance instead of creating a new one on every import:
import dotenv from 'dotenv'
dotenv.config()
import { PrismaClient } from '@prisma/client'

// add prisma to the NodeJS global type
interface CustomNodeJsGlobal extends NodeJS.Global {
  prisma: PrismaClient
}

// Prevent multiple instances of Prisma Client in development
declare const global: CustomNodeJsGlobal

const prisma = global.prisma || new PrismaClient()

if (process.env.NODE_ENV === 'development') global.prisma = prisma

export default prisma
Plus, call this when you are done with the client:
await prisma.$disconnect()
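For illustration, a minimal usage sketch of the shared client exported above (the Project model and the ./prisma import path are assumptions, not taken from the question); disconnecting in a finally block releases the pooled connections when a script exits:
import prisma from './prisma'

async function main() {
  // assumes a `Project` model exists in your Prisma schema
  const projects = await prisma.project.findMany()
  console.log(`found ${projects.length} projects`)
}

main()
  .catch((e) => console.error(e))
  .finally(async () => {
    // release the database connections held by the client
    await prisma.$disconnect()
  })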
Related
I use this command line:
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
The container is still running, but I cannot access the Rancher UI.
8e95a158842c rancher/rancher:latest "entrypoint.sh" 45 minutes ago Up 7 seconds 0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp relaxed_chandrasekhar
Then I run docker logs 8e95a158842c:
2021-11-04 22:25:56.455037 W | pkg/fileutil: check file permission: directory "management-state/etcd" exist, but the permission is "drwxr-xr-x". The recommended permission is "-rwx------" to prevent possible unprivileged access to the data.
2021-11-04 22:25:56.543162 I | etcdserver: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 1839
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d switched to configuration voters=()
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d became follower at term 17
raft2021/11/04 22:25:56 INFO: newRaft 8e9e05c52164694d [peers: [], term: 17, commit: 1839, applied: 0, lastindex: 1839, lastterm: 17]
2021-11-04 22:25:56.547839 W | auth: simple token is not cryptographically signed
2021-11-04 22:25:56.573956 I | etcdserver: starting server... [version: 3.4.15, cluster version: to_be_decided]
2021-11-04 22:25:56.580742 I | embed: listening for peers on 127.0.0.1:2380
raft2021/11/04 22:25:56 INFO: 8e9e05c52164694d switched to configuration voters=(10276657743932975437)
2021-11-04 22:25:56.582873 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
2021-11-04 22:25:56.583346 N | etcdserver/membership: set the initial cluster version to 3.4
2021-11-04 22:25:56.583568 I | etcdserver/api: enabled capabilities for version 3.4
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d is starting a new election at term 17
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d became candidate at term 18
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 18
raft2021/11/04 22:26:02 INFO: 8e9e05c52164694d became leader at term 18
raft2021/11/04 22:26:02 INFO: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 18
2021-11-04 22:26:02.051592 I | etcdserver: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
2021-11-04 22:26:02.052775 I | embed: ready to serve client requests
2021-11-04 22:26:02.059541 N | embed: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
2021/11/04 22:26:02 [INFO] Waiting for server to become available: Get "https://127.0.0.1:6443/version?timeout=15m0s": dial tcp 127.0.0.1:6443: connect: connection refused
2021/11/04 22:26:04 [INFO] Waiting for server to become available: the server is currently unable to handle the request
2021/11/04 22:26:16 [INFO] Running in single server mode, will not peer connections
2021-11-04 22:26:17.724466 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (109.807669ms) to execute
2021/11/04 22:26:18 [INFO] Applying CRD features.management.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD navlinks.ui.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD clusters.management.cattle.io
2021/11/04 22:26:22 [INFO] Applying CRD apiservices.management.cattle.io
2021/11/04 22:26:23 [INFO] Applying CRD clusterregistrationtokens.management.cattle.io
2021/11/04 22:26:23 [INFO] Applying CRD settings.management.cattle.io
2021/11/04 22:26:24 [INFO] Applying CRD preferences.management.cattle.io
2021/11/04 22:26:24 [INFO] Applying CRD features.management.cattle.io
2021/11/04 22:26:25 [INFO] Applying CRD clusterrepos.catalog.cattle.io
2021/11/04 22:26:26 [INFO] Applying CRD operations.catalog.cattle.io
2021/11/04 22:26:31 [INFO] Applying CRD apps.catalog.cattle.io
2021-11-04 22:26:33.250474 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (139.120063ms) to execute
2021/11/04 22:26:45 [INFO] Applying CRD fleetworkspaces.management.cattle.io
2021-11-04 22:26:47.449199 W | etcdserver: read-only range request "key:\"/registry/apiextensions.k8s.io/customresourcedefinitions/\" range_end:\"/registry/apiextensions.k8s.io/customresourcedefinitions0\" " with result "range_response_count:92 size:445717" took too long (321.346575ms) to execute
2021-11-04 22:26:52.656294 W | etcdserver: request "header:<ID:7587858304790119201 > txn:<compare:<target:MOD key:\"/registry/configmaps/kube-system/k3s\" mod_revision:1632 > success:<request_put:<key:\"/registry/configmaps/kube-system/k3s\" value_size:456 >> failure:<request_range:<key:\"/registry/configmaps/kube-system/k3s\" > >>" with result "size:16" took too long (107.766444ms) to execute
2021-11-04 22:27:03.165794 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/k3s\" " with result "range_response_count:1 size:515" took too long (138.87999ms) to execute
2021-11-04 22:27:03.182578 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/\" range_end:\"/registry/serviceaccounts0\" " with result "range_response_count:36 size:10156" took too long (196.135777ms) to execute
2021-11-04 22:27:21.345406 W | etcdserver: read-only range request "key:\"/registry/flowschemas/exempt\" " with result "range_response_count:1 size:879" took too long (241.774296ms) to execute
2021-11-04 22:27:21.633929 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:340" took too long (248.96888ms) to execute
2021-11-04 22:27:30.019952 W | etcdserver: read-only range request "key:\"/registry/rolebindings/\" range_end:\"/registry/rolebindings0\" count_only:true " with result "range_response_count:0 size:7" took too long (102.695372ms) to execute
When I install Rancher on my laptop everything is normal, but when I try it on my VPS this error appears.
How can I fix it?
I am facing the same issue with the rancher/rancher:latest image as well. The rancher/rancher:v2.4-head image works for me, though:
docker pull rancher/rancher:v2.4-head
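For completeness, this is the same docker run command from the question with the tag pinned to the image that worked for me:
docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:v2.4-head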
I am trying to connect to a Spark cluster running inside Docker from the host system. I tried both a Python script and spark-shell; both gave the same results.
Within Docker
spark-master_1 | 20/07/24 10:13:26 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
spark-master_1 | java.io.InvalidClassException: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = 1574364215946805297, local class serialVersionUID = 6543101073799644159
spark-master_1 | at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
spark-master_1 | at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
spark-master_1 | at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
spark-master_1 | at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
spark-master_1 | at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
spark-master_1 | at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
spark-master_1 | at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
spark-master_1 | at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
spark-master_1 | at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
spark-master_1 | at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
spark-master_1 | at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
spark-master_1 | at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
spark-master_1 | at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(Nett
Running spark-shell on the command line from the host system gives the following error:
➜ docker-spark-cluster git:(master) ✗ spark-shell --master spark://localhost:7077
20/07/24 15:13:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/07/24 15:14:25 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
20/07/24 15:14:25 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
20/07/24 15:14:25 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
20/07/24 15:14:26 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:92)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
at $line3.$read$$iw$$iw.<init>(<console>:15)
at $line3.$read$$iw.<init>(<console>:42)
at $line3.$read.<init>(<console>:44)
at $line3.$read$.<init>(<console>:48)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:745)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1021)
at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:574)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:41)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:37)
Docker Containers
git:(master) ✗ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfe3d47790ee spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32769->8081/tcp docker-spark-cluster_spark-worker_2
c5e36b94efdd spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32768->8081/tcp docker-spark-cluster_spark-worker_3
60f3d29e9059 spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32770->8081/tcp docker-spark-cluster_spark-worker_1
d11c67d462fb spydernaz/spark-master:latest "/bin/bash /start-ma…" 42 hours ago Up 23 minutes 6066/tcp, 0.0.0.0:7077->7077/tcp, 0.0.0.0:9090->8080/tcp docker-spark-cluster_spark-master_1
➜ docker-spark-cluster git:(master) ✗
Spark Shell Commands
spark-shell --master spark://localhost:7077
As @koiralo already mentioned in the comments, this is due to a version difference between the PySpark running locally and the one running on the server.
I had the same error, and it was fixed once the versions in both places matched.
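One quick way to compare the two (the container name comes from the docker ps output in the question; whether spark-submit is on the image's PATH is an assumption on my side):
# Spark version used on the host
spark-submit --version

# Spark version inside the master container
docker exec -it docker-spark-cluster_spark-master_1 spark-submit --version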
So I'm working on getting RabbitMQ up in a clustered setting, and I "think" I have the configuration for the cluster nodes to talk to each other over SSL, but when running the nodes I get this error:
rmq-node-1 | {{shutdown,
rmq-node-1 | {failed_to_start_child,ssl_dist_sup,
rmq-node-1 | {'EXIT',
rmq-node-1 | {{bad_ssl_dist_optfile,
rmq-node-1 | 16:36:58.591 [error] {failed_to_start_child,ssl_dist_sup,
rmq-node-1 | 16:36:58.592 [error] {'EXIT',
rmq-node-1 | 16:36:58.592 [error] {{bad_ssl_dist_optfile,
rmq-node-1 | ["/etc/rabbitmq/inter_node_tls.config"]},
rmq-node-1 | [{ssl_dist_sup,consult,1,
rmq-node-1 | [{file,"ssl_dist_sup.erl"},{line,105}]},
rmq-node-1 | 16:36:58.593 [error] ["/etc/rabbitmq/inter_node_tls.config"]},
rmq-node-1 | {ssl_dist_sup,start_link,0,
rmq-node-1 | [{file,"ssl_dist_sup.erl"},{line,45}]},
The gist of the error is the line that says bad_ssl_dist_optfile, so I'm thinking the configuration is part of the problem. I've been following this guide: Securing Cluster Communications - Option 2 - config files
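For reference, my understanding from that guide is that the optfile is a plain Erlang term file (every term must end with a period) and that bad_ssl_dist_optfile is raised when the file cannot be parsed. It should look roughly like this, with placeholder paths rather than my real ones:
[
  {server, [
    {cacertfile, "/etc/rabbitmq/ca_certificate.pem"},
    {certfile,   "/etc/rabbitmq/server_certificate.pem"},
    {keyfile,    "/etc/rabbitmq/server_key.pem"},
    {secure_renegotiate, true},
    {verify, verify_peer},
    {fail_if_no_peer_cert, true}
  ]},
  {client, [
    {cacertfile, "/etc/rabbitmq/ca_certificate.pem"},
    {certfile,   "/etc/rabbitmq/client_certificate.pem"},
    {keyfile,    "/etc/rabbitmq/client_key.pem"},
    {secure_renegotiate, true},
    {verify, verify_peer}
  ]}
].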
I also generated the PEM files with the recommended "easy" solution (automated certificate generation); I did the same with easy-rsa and got the same issue.
I've set up my folder structure like so:
I've created a compose file to include all the files, variables, and configuration:
I've created a rabbitmq.conf, rabbitmq-env.conf, and inter_node_tls.config to complete the setup:
RabbitMQ Config
RabbitMQ Env
Inter Node SSL
Am I missing a piece of the configuration, or is there some issue with the formatting of the inter-node configuration?
EDIT: I did catch the file name issue with the TLS config file, but got the same issue.
I'm having trouble with the Gerrit replication plugin. I'm trying to replicate a repository to GitLab over HTTPS. The most important configuration:
etc/replication.config
[gerrit]
replicateOnStartup = true
[remote "gitlab-mirror"]
url = https://<name.surname>:<password>@gitlab.domain/<Name.Surname>/${name}.git
push = +refs/heads/*:refs/heads/*
push = +refs/tags/*:refs/tags/*
mirror = true
projects = hello-world
rescheduleDelay = 15
The repository on the GitLab side does exist under: https://gitlab.domain/<Name.Surname>/hello-world
I even cloned the repository from Gerrit, added another remote pointing at GitLab called mirror, and pushed to it without hassle:
git clone ssh://admin@gerrit.domain:29418/hello-world
git remote add mirror https://<name.surname>:<password>@gitlab.domain/<Name.Surname>/hello-world.git
git push -u mirror --all
I'm scheduling replication as follows:
ssh -p 29418 gerrit.domain replication start
Which produces the following log:
gerrit | [2020-03-23 22:01:40,019 +0000] 6c533415 [sshd-SshDaemon[33060020](port=22)-nio2-thread-1] admin a/1000000 LOGIN FROM 172.64.1.1
gerrit | [2020-03-23 22:01:40,071 +0000] 6c533415 [SSH replication start (admin)] admin a/1000000 replication.start 7ms 1ms 0
gerrit | [2020-03-23 22:01:40,102 +0000] 6c533415 [sshd-SshDaemon[33060020](port=22)-nio2-thread-2] admin a/1000000 LOGOUT
But then, when the process takes place, I get the following stack trace:
gerrit | [2020-03-23 22:02:04,660] [ReplicateTo-gitlab-mirror-1] ERROR com.googlesource.gerrit.plugins.replication.ReplicationTasksStorage : Error while deleting task d44f53430eda0b204ca13da6aab17c2173531c94
gerrit | java.nio.file.NoSuchFileException: /srv/gerrit/data/replication/ref-updates/running/d44f53430eda0b204ca13da6aab17c2173531c94
gerrit | at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
gerrit | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
gerrit | at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
gerrit | at sun.nio.fs.UnixFileSystemProvider.implDelete(UnixFileSystemProvider.java:244)
gerrit | at sun.nio.fs.AbstractFileSystemProvider.delete(AbstractFileSystemProvider.java:103)
gerrit | at java.nio.file.Files.delete(Files.java:1126)
gerrit | at com.googlesource.gerrit.plugins.replication.ReplicationTasksStorage$Task.finish(ReplicationTasksStorage.java:232)
gerrit | at com.googlesource.gerrit.plugins.replication.ReplicationTasksStorage.finish(ReplicationTasksStorage.java:130)
gerrit | at com.googlesource.gerrit.plugins.replication.Destination.notifyFinished(Destination.java:574)
gerrit | at com.googlesource.gerrit.plugins.replication.PushOne.runPushOperation(PushOne.java:413)
gerrit | at com.googlesource.gerrit.plugins.replication.PushOne.lambda$run$0(PushOne.java:300)
gerrit | at com.google.gerrit.server.util.RequestScopePropagator.lambda$cleanup$1(RequestScopePropagator.java:182)
gerrit | at com.google.gerrit.server.util.RequestScopePropagator.lambda$context$0(RequestScopePropagator.java:170)
gerrit | at com.google.gerrit.server.git.PerThreadRequestScope$Propagator.lambda$scope$0(PerThreadRequestScope.java:70)
gerrit | at com.googlesource.gerrit.plugins.replication.PushOne.run(PushOne.java:303)
gerrit | at com.google.gerrit.server.logging.LoggingContextAwareRunnable.run(LoggingContextAwareRunnable.java:87)
gerrit | at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
gerrit | at java.util.concurrent.FutureTask.run(FutureTask.java:266)
gerrit | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
gerrit | at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
gerrit | at com.google.gerrit.server.git.WorkQueue$Task.run(WorkQueue.java:610)
gerrit | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
gerrit | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
gerrit | at java.lang.Thread.run(Thread.java:748)
This is what the data directory for replication looks like (the whole time, I think):
gerrit:/srv/gerrit$ find data/replication/
data/replication/
data/replication/ref-updates
data/replication/ref-updates/running
data/replication/ref-updates/building
data/replication/ref-updates/waiting
data/replication/ref-updates/waiting/50d5b9f61203cdd9223f21c21de7174f58a89bd3
data/replication/ref-updates/waiting/d44f53430eda0b204ca13da6aab17c2173531c94
Yep, Gerrit tries to delete the task from running (I have no idea why) while the task is in waiting. The GitLab repository does not receive the changes, which is the biggest problem.
I also tried to queue a replication event as follows, but that blocks indefinitely until CTRL+C:
ssh -p 29418 gerrit.domain replication start --wait
Any idea what I'm missing or what more I could look for?
I installed DataStax Enterprise (version shown below). When I run the stress test, I get an error:
cqlsh> show version
[cqlsh 5.0.1 | Cassandra 2.1.12.1046 | DSE 4.8.4 | CQL spec 3.2.1 | Native protocol v3]
[root@pg0 opt]# cassandra-stress write n=19000000 -rate threads=4
Why do I get the error below?
Exception in thread "main" java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused.
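For context, my understanding is that cassandra-stress tries to connect to a node on localhost unless told otherwise, and a specific node can be named with the -node option (the IP below is just a placeholder for one of my nodes):
cassandra-stress write n=19000000 -rate threads=4 -node 10.0.0.1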