Unable to connect to Apache Spark running in Docker

I am trying to connect to a Spark cluster running inside Docker from the host system. I tried both a Python script and spark-shell; both gave the same result:
Within Docker
spark-master_1 | 20/07/24 10:13:26 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
spark-master_1 | java.io.InvalidClassException: org.apache.spark.deploy.ApplicationDescription; local class incompatible: stream classdesc serialVersionUID = 1574364215946805297, local class serialVersionUID = 6543101073799644159
spark-master_1 | at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:699)
spark-master_1 | at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1885)
spark-master_1 | at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1751)
spark-master_1 | at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2042)
spark-master_1 | at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
spark-master_1 | at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
spark-master_1 | at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
spark-master_1 | at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
spark-master_1 | at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
spark-master_1 | at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
spark-master_1 | at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
spark-master_1 | at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:108)
spark-master_1 | at org.apache.spark.rpc.netty.NettyRpcEnv$$anonfun$deserialize$1$$anonfun$apply$1.apply(Nett
Running spark-shell on the command line from the host system gives the following error:
➜ docker-spark-cluster git:(master) ✗ spark-shell --master spark://localhost:7077
20/07/24 15:13:17 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
20/07/24 15:14:25 ERROR StandaloneSchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
20/07/24 15:14:25 WARN StandaloneSchedulerBackend: Application ID is not initialized yet.
20/07/24 15:14:25 WARN StandaloneAppClient$ClientEndpoint: Drop UnregisterApplication(null) because has not yet connected to master
20/07/24 15:14:26 ERROR SparkContext: Error initializing SparkContext.
java.lang.IllegalArgumentException: requirement failed: Can only call getServletHandlers on a running MetricsSystem
at scala.Predef$.require(Predef.scala:281)
at org.apache.spark.metrics.MetricsSystem.getServletHandlers(MetricsSystem.scala:92)
at org.apache.spark.SparkContext.<init>(SparkContext.scala:565)
at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2555)
at org.apache.spark.sql.SparkSession$Builder.$anonfun$getOrCreate$1(SparkSession.scala:930)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:921)
at org.apache.spark.repl.Main$.createSparkSession(Main.scala:106)
at $line3.$read$$iw$$iw.<init>(<console>:15)
at $line3.$read$$iw.<init>(<console>:42)
at $line3.$read.<init>(<console>:44)
at $line3.$read$.<init>(<console>:48)
at $line3.$read$.<clinit>(<console>)
at $line3.$eval$.$print$lzycompute(<console>:7)
at $line3.$eval$.$print(<console>:6)
at $line3.$eval.$print(<console>)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at scala.tools.nsc.interpreter.IMain$ReadEvalPrint.call(IMain.scala:745)
at scala.tools.nsc.interpreter.IMain$Request.loadAndRun(IMain.scala:1021)
at scala.tools.nsc.interpreter.IMain.$anonfun$interpret$1(IMain.scala:574)
at scala.reflect.internal.util.ScalaClassLoader.asContext(ScalaClassLoader.scala:41)
at scala.reflect.internal.util.ScalaClassLoader.asContext$(ScalaClassLoader.scala:37)
Docker Containers
➜ docker-spark-cluster git:(master) ✗ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dfe3d47790ee spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32769->8081/tcp docker-spark-cluster_spark-worker_2
c5e36b94efdd spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32768->8081/tcp docker-spark-cluster_spark-worker_3
60f3d29e9059 spydernaz/spark-worker:latest "/bin/bash /start-wo…" 42 hours ago Up 23 minutes 0.0.0.0:32770->8081/tcp docker-spark-cluster_spark-worker_1
d11c67d462fb spydernaz/spark-master:latest "/bin/bash /start-ma…" 42 hours ago Up 23 minutes 6066/tcp, 0.0.0.0:7077->7077/tcp, 0.0.0.0:9090->8080/tcp docker-spark-cluster_spark-master_1
➜ docker-spark-cluster git:(master) ✗
Spark Shell Commands
spark-shell --master spark://localhost:7077

As @koiralo already mentioned in the comments, this is due to the version difference between PySpark running locally and on the server.
I had the same error, and it was fixed once the versions in both places were matched.
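To confirm the mismatch before aligning versions, you can print both versions and compare (a sketch: the /spark install path inside the spydernaz images is an assumption; adjust to wherever Spark lives in your image):
$ spark-submit --version   # version on the host
$ docker exec docker-spark-cluster_spark-master_1 /spark/bin/spark-submit --version   # assumed path; version inside the master container
For PySpark, pin the local client to whatever the master reports, e.g. pip install pyspark==<master version>.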

Related

How to get steam to run on Ubuntu 20.04

Steam won't run =( Here's what I've tried:
I have a fresh install of Ubuntu 20.04 (via Ubuntu Server Live Installer + ubuntu-desktop package) with nvidia drivers:
$ nvidia-smi
Mon Jun 22 10:26:49 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.64 Driver Version: 440.64 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 2070 Off | 00000000:01:00.0 On | N/A |
| 28% 31C P8 22W / 175W | 303MiB / 7981MiB | 2% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1542 G /usr/lib/xorg/Xorg 53MiB |
| 0 7835 G /usr/lib/xorg/Xorg 124MiB |
| 0 8086 G /usr/bin/gnome-shell 111MiB |
+-----------------------------------------------------------------------------+
Attempt 1: .deb
Download deb from https://store.steampowered.com/about/
$ sudo dpkg -i steam_latest.deb
$ steam
Steam needs to install these additional packages:
libgl1-mesa-dri:i386, libgl1:i386, libc6:i386
Enter the sudo password to install them, and it installs 49 *:i386 packages.
The "Updating Steam..." window pops up, downloads and runs stuff for a bit, and then:
CRASH!
[2020-06-22 17:00:18] Installing update...
[2020-06-22 17:00:19] Cleaning up...
[2020-06-22 17:00:19] Update complete, launching...
[2020-06-22 17:00:19] Shutdown
Restarting Steam by request...
Traceback (most recent call last):
  File "/usr/bin/steamdeps", line 484, in <module>
    sys.exit(main())
  File "/usr/bin/steamdeps", line 460, in main
    if dep.is_available():
  File "/usr/bin/steamdeps", line 96, in is_available
    return is_provided(self.name)
  File "/usr/bin/steamdeps", line 68, in is_provided
    (name, version) = provider.split()
ValueError: too many values to unpack (expected 2)
Running Steam on ubuntu 20.04 64-bit
STEAM_RUNTIME has been set by the user to: /home/username/.local/share/Steam/ubuntu12_32/steam-runtime
Found newer runtime version for 64-bit libGLU.so.1. Host: 1.3.1 Runtime: 1.3.8004
Found newer runtime version for 64-bit libdbusmenu-glib.so.4. Host: 4.0.12 Runtime: 4.0.13
Found newer runtime version for 64-bit libvulkan.so.1. Host: 1.2.131 Runtime: 1.2.135
Forced use of runtime version for 64-bit libcurl.so.4. Host: 4.6.0 Runtime: 4.2.0
Found newer runtime version for 32-bit libvulkan.so.1. Host: 1.2.131 Runtime: 1.2.135
Steam client's requirements are satisfied
/home/username/.local/share/Steam/ubuntu12_32/steam
[2020-06-22 17:00:34] Startup - updater built Jun 4 2020 05:50:42
Installing breakpad exception handler for appid(steam)/version(1591251555)
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
SteamUpdateUI: An X Error occurred
X Error of failed request: GLXBadContext
SteamUpdateUI: An X Error occurred
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 3 (X_GLXCreateContext)
Value in failed request: 0x0
Serial number of failed request: 51
xerror_handler: X failed, continuing
Major opcode of failed request: 152 (GLX)
Minor opcode of failed request: 6 (X_GLXIsDirect)
Serial number of failed request: 52
xerror_handler: X failed, continuing
Installing breakpad exception handler for appid(steam)/version(1591251555)
[2020-06-22 17:00:34] Verifying installation...
[2020-06-22 17:00:35] Verification complete
Loaded SDL version 2.0.13-5893924
Gtk-Message: Failed to load module "gail"
Gtk-Message: Failed to load module "atk-bridge"
(steam:32777): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita",
/usr/share/themes/Yaru/gtk-2.0/main.rc:775: error: unexpected identifier `direction', expected character `}'
(steam:32777): Gtk-WARNING **: Unable to locate theme engine in module_path: "adwaita",
/usr/share/themes/Yaru/gtk-2.0/hacks.rc:28: error: invalid string constant "normal_entry", expected valid string constant
libGL error: No matching fbConfigs or visuals found
libGL error: failed to load driver: swrast
Steam: An X Error occurred
X Error of failed request: GLXBadContext
Major opcode of failed request: 152
Serial number of failed request: 64
xerror_handler: X failed, continuing
Steam: An X Error occurred
X Error of failed request: BadValue (integer parameter out of range for operation)
Major opcode of failed request: 152
Value in failed request: 0x0
Serial number of failed request: 63
xerror_handler: X failed, continuing
Steam: An X Error occurred
X Error of failed request: BadMatch (invalid parameter attributes)
Major opcode of failed request: 152
Serial number of failed request: 65
xerror_handler: X failed, continuing
assert_20200622170034_1.dmp[32831]: Uploading dump (out-of-process)
/tmp/dumps/assert_20200622170034_1.dmp
/home/username/.local/share/Steam/steam.sh: line 750: 32777 Segmentation fault (core dumped) $STEAM_DEBUGGER "$STEAMROOT/$STEAMEXEPATH" "$@"
Subsequent attempts to run Steam result in the update window flashing and then the same crash.
Attempt 2: via multiverse repo, per linuxconfig.org
$ sudo add-apt-repository multiverse
'multiverse' distribution component is already enabled for all sources.
$ sudo apt update
$ sudo apt install steam
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
steam-launcher
The following NEW packages will be installed:
steam:i386 steam-launcher
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 2,980 kB of archives.
After this operation, 3,163 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://repo.steampowered.com/steam precise/steam amd64 steam-launcher all 1:1.0.0.62 [2,972 kB]
Get:2 http://repo.steampowered.com/steam precise/steam i386 steam i386 1:1.0.0.62 [8,052 B]
Fetched 2,980 kB in 1s (3,294 kB/s)
Selecting previously unselected package steam-launcher.
(Reading database ... 158744 files and directories currently installed.)
Preparing to unpack .../steam-launcher_1%3a1.0.0.62_all.deb ...
Unpacking steam-launcher (1:1.0.0.62) ...
Selecting previously unselected package steam:i386.
Preparing to unpack .../steam_1%3a1.0.0.62_i386.deb ...
Unpacking steam:i386 (1:1.0.0.62) ...
Setting up steam-launcher (1:1.0.0.62) ...
Setting up steam:i386 (1:1.0.0.62) ...
Processing triggers for mime-support (3.64ubuntu1) ...
Processing triggers for hicolor-icon-theme (0.17-2) ...
Processing triggers for gnome-menus (3.36.0-1ubuntu1) ...
Processing triggers for man-db (2.9.1-1) ...
Processing triggers for desktop-file-utils (0.24-1ubuntu3) ...
$ steam
CRASH! Same errors as the first method.
I recently had the same issue, but found a fix. I hope this works for you.
Here is what I did:
Install Steam from the Steam website: https://store.steampowered.com/about/
Run this line in a terminal: STEAM_RUNTIME=0 steam
You should get output telling you the missing dependencies:
Running Steam on ubuntu 20.04 64-bit
STEAM_RUNTIME is disabled by the user
Error: You are missing the following 32-bit libraries, and Steam may not run:
libXtst.so.6
libXrandr.so.2
libXrender.so.1
libgobject-2.0.so.0
libglib-2.0.so.0
libgio-2.0.so.0
libgtk-x11-2.0.so.0
libpulse.so.0
libgdk_pixbuf-2.0.so.0
libva.so.2
libbz2.so.1.0
libvdpau.so.1
libva.so.2
libva-x11.so.2
Can't find 'steam-runtime-check-requirements', continuing anyway
/home/timothy/.local/share/Steam/ubuntu12_32/steam
Once you have the list of missing dependencies, run sudo apt install <dependency name> in the terminal for each one.
EXAMPLE: sudo apt install libXtst.so.6
("libXtst.so.6" was part of the list of dependencies given to me by the terminal.)
Once you have installed all those dependencies, Steam should open up.
Let Steam install what it needs and log in; it should work.
If you have any issues just leave a reply.
Other forums/communities where I got most of the idea from:
https://steamcommunity.com/app/221410/discussions/0/530645446314818582/
As @Helper Shoes mentioned, it is highly probable that you have missing 32-bit libraries.
Installing the following libraries made it work for me:
$ sudo dpkg --add-architecture i386
$ sudo apt update
$ sudo apt install libxtst6:i386 libxrandr2:i386 libgtk2.0-0:i386 libsm6:i386 libpulse0:i386 ffmpeg:i386
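If other sonames show up in your list, apt-file can map a library file to the apt package that ships it (a sketch; assumes apt-file is installed, and the output format may vary):
$ sudo apt install apt-file && sudo apt-file update
$ apt-file search libXtst.so.6   # prints e.g.: libxtst6: /usr/lib/x86_64-linux-gnu/libXtst.so.6
$ sudo apt install libxtst6:i386   # then install the 32-bit variant Steam needs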

"orderer" node / docker container exited few seconds after running docker-compose.cli

I am new to Hyperledger Fabric. I used the BYFN example and it worked fine, and I am now working on my own network. I created crypto-config, config.tx, and all the Docker files (including base) as in the BYFN example.
Everything works fine until I run the command "docker-compose -f docker-compose-cli.yaml up -d":
all the nodes are generated, but the orderer node fails within a few seconds.
I think the problem could be in my artifacts/genesis.block file, but I could not solve it.
orderer.expleoFabric.com | 2020-05-21 16:17:59.624 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
orderer.expleoFabric.com | 2020-05-21 16:17:59.741 UTC [orderer.common.server] Main -> PANI 004 Failed validating bootstrap block: initializing configtx manager failed: bad channel ID: 'Orderer-channel' contains illegal characters
orderer.expleoFabric.com | panic: Failed validating bootstrap block: initializing configtx manager failed: bad channel ID: 'Orderer-channel' contains illegal characters
orderer.expleoFabric.com |
This is from my logs, but I could not find 'Orderer-channel' in any of my files.
A channel ID can only contain lowercase alphanumeric characters, dots, and dashes, and must start with a letter, so 'Orderer-channel' is rejected because of the uppercase 'O'.
For more information: https://github.com/hyperledger/fabric/blob/0c3f3f78178f8a639374fba1a12344f381877459/common/configtx/validator.go#L72..L74
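As a sketch (the profile name and output path are assumptions; use the ones from your own configtx.yaml and scripts), regenerating the genesis block with a lowercase channel ID would look like:
$ configtxgen -profile TwoOrgsOrdererGenesis -channelID orderer-channel -outputBlock ./channel-artifacts/genesis.block
Then bring the network down and up again so the orderer boots from the new block.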

Getting Internal Server Error on prisma deploy

I have a Postgres database on Heroku; when deploying the data model with prisma deploy, the following error is often produced:
ERROR: Whoops. Looks like an internal server error. Search your server logs for request ID: local:cjxrmcnpx00hq0692zuwttqwv
{
  "data": {
    "addProject": null
  },
  "errors": [
    {
      "message": "Whoops. Looks like an internal server error. Search your server logs for request ID: local:cjxrmcnpx00hq0692zuwttqwv",
      "path": [
        "addProject"
      ],
      "locations": [
        {
          "line": 2,
          "column": 9
        }
      ],
      "requestId": "local:cjxrmcnpx00hq0692zuwttqwv"
    }
  ],
  "status": 200
}
and on checking the Docker logs I am seeing this error:
Jul 14, 2019 12:18:34 PM org.postgresql.Driver connect
prisma_1 | SEVERE: Connection error:
prisma_1 | org.postgresql.util.PSQLException: FATAL: too many connections for role "bcueventxumaik"
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2433)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2566)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:131)
prisma_1 | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:210)
prisma_1 | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
prisma_1 | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
prisma_1 | at org.postgresql.Driver.makeConnection(Driver.java:452)
prisma_1 | at org.postgresql.Driver.connect(Driver.java:254)
prisma_1 | at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
prisma_1 | at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
prisma_1 | at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_1 |
prisma_1 | Exception in thread "main" org.postgresql.util.PSQLException: FATAL: too many connections
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.readStartupMessages(QueryExecutorImpl.java:2566)
prisma_1 | at org.postgresql.core.v3.QueryExecutorImpl.<init>(QueryExecutorImpl.java:131)
prisma_1 | at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:210)
prisma_1 | at org.postgresql.core.ConnectionFactory.openConnection(ConnectionFactory.java:49)
prisma_1 | at org.postgresql.jdbc.PgConnection.<init>(PgConnection.java:195)
prisma_1 | at org.postgresql.Driver.makeConnection(Driver.java:452)
prisma_1 | at org.postgresql.Driver.connect(Driver.java:254)
prisma_1 | at slick.jdbc.DriverDataSource.getConnection(DriverDataSource.scala:101)
prisma_1 | at slick.jdbc.DataSourceJdbcDataSource.createConnection(JdbcDataSource.scala:68)
prisma_1 | at slick.jdbc.JdbcBackend$BaseSession.<init>(JdbcBackend.scala:453)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:46)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.createSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession(BasicBackend.scala:249)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef.acquireSession$(BasicBackend.scala:248)
prisma_1 | at slick.jdbc.JdbcBackend$DatabaseDef.acquireSession(JdbcBackend.scala:37)
prisma_1 | at slick.basic.BasicBackend$DatabaseDef$$anon$2.run(BasicBackend.scala:274)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
prisma_1 | at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
prisma_1 | at java.lang.Thread.run(Thread.java:748)
prisma_prisma_1 exited with code 1
The error says there are too many connections, but I am running prisma deploy from only one terminal, and at the same time I am able to connect to the database using pgAdmin 4. Moreover, the database seems to be perfectly reachable, as I am able to ping it from inside the container.
P.S. I updated the Docker logs: earlier, on running docker logs -f <process id>, I was getting older logs, but after building the container again with docker-compose up I got the latest logs.
As the error clearly states, there are too many connections to the database. So we need to investigate how many connections there are, who is creating them, and why they are created, in order to either limit the consumers or increase the number of available connections.
First, we can use the Heroku CLI to check the number of used and available connections:
$ heroku pg:info
=== DATABASE_URL
Plan: Private 2
Status: Available
HA Status: Available
Data Size: 2.23 GB
Tables: 83
PG Version: 10.1
Connections: 26/400
Connection Pooling: Available
For more information on how to investigate Heroku Postgres databases, see: https://devcenter.heroku.com/articles/heroku-postgresql#pg-info
To further investigate who is connected to your database, you can use either psql or pgAdmin. In pgAdmin, select the database, click on the dashboard tab, and select the server activity panel at the bottom of the page, revealing all connected sessions. In psql, you could write a select like this:
SELECT pid as process_id,
       usename as username,
       datname as database_name,
       client_addr as client_address,
       application_name,
       backend_start,
       state
FROM pg_stat_activity;
For a more detailed how-to, see: https://dataedo.com/kb/query/postgresql/list-database-sessions
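Since the first log line names a specific role (too many connections for role "bcueventxumaik"), it can also help to count sessions per user; a minimal sketch via psql, assuming DATABASE_URL points at the Heroku database:
$ psql "$DATABASE_URL" -c "SELECT usename, count(*) AS connections FROM pg_stat_activity GROUP BY usename ORDER BY connections DESC;"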
By now you have probably identified who is creating the connections to your database and can limit that client to use fewer (or increase the number of available database connections).
One possible consumer of database connections is, of course, the Prisma server itself. The Prisma config luckily provides a setting to limit database connections.
The connectionLimit property in PRISMA_CONFIG determines the number of database connections a Prisma service is going to use.
You can read more about it here: https://www.prisma.io/docs/prisma-server/database-connector-POSTGRES-jgfr/#managing-database-connections
If you are using Heroku to run the Docker container with your Prisma server, a PRISMA_CONFIG could look like this:
port: $PORT
managementApiSecret: ${PRISMA_MANAGEMENT_API_SECRET}
databases:
  default:
    connector: postgres
    migrations: true
    connectionLimit: 2
    uri: ${DATABASE_URL}?ssl=1
I hope this structured approach helped. Let me know if you need more clarification. If so please provide details regarding the nature of the existing database connections.
Run this command to inspect the Prisma server logs:
docker logs <YOUR_PRISMA_CONTAINER_NAME>
Use pooling:
import dotenv from 'dotenv'
dotenv.config()
import { PrismaClient } from '@prisma/client'
// Add prisma to the NodeJS global type
interface CustomNodeJsGlobal extends NodeJS.Global {
  prisma: PrismaClient
}
// Prevent multiple instances of Prisma Client in development
declare const global: CustomNodeJsGlobal
const prisma = global.prisma || new PrismaClient()
if (process.env.NODE_ENV === 'development') global.prisma = prisma
export default prisma
Plus, use:
await prisma.$disconnect()

Jenkins - Unexpected executor death

I see all my executors frequently changing to Dead state on one of my Jenkins slave machines (Windows 2008 R2 SP2).
Jenkins ver. 1.651.3
I have restarted the Jenkins server as well as the service.
Error logs:
Unexpected executor death
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:68)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:55)
at hudson.util.TextFile.write(TextFile.java:118)
at hudson.model.Job.saveNextBuildNumber(Job.java:293)
at hudson.model.Job.assignBuildNumber(Job.java:351)
at hudson.model.Run.<init>(Run.java:284)
at hudson.model.AbstractBuild.<init>(AbstractBuild.java:167)
at hudson.model.Build.<init>(Build.java:92)
at hudson.model.FreeStyleBuild.<init>(FreeStyleBuild.java:34)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at jenkins.model.lazy.LazyBuildMixIn.newBuild(LazyBuildMixIn.java:175)
at hudson.model.AbstractProject.newBuild(AbstractProject.java:1018)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:1209)
at hudson.model.AbstractProject.createExecutable(AbstractProject.java:144)
at hudson.model.Executor$1.call(Executor.java:364)
at hudson.model.Executor$1.call(Executor.java:346)
at hudson.model.Queue._withLock(Queue.java:1365)
at hudson.model.Queue.withLock(Queue.java:1230)
at hudson.model.Executor.run(Executor.java:346)
Caused by: java.io.IOException: Permission denied
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1006)
at java.io.File.createTempFile(File.java:1989)
at hudson.util.AtomicFileWriter.<init>(AtomicFileWriter.java:66)
... 21 more
I see this error log in my slave machine
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.ws.runtime.client.SOAPService executeSOAPRequestInternal
INFO: SOAP method='UpdateLocalVersion', status=200, content-length=367, server-wait=402 ms, parse=0 ms, total=402 ms, throughput=913 B/s, gzip
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Oct 17, 2017 10:32:00 AM com.microsoft.tfs.core.clients.versioncontrol.VersionControlClient downloadFileToStreams
INFO: File download attempt 1
Can you please check the owner of the path /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build? If by any chance it was created manually, you will get a permission-denied error when the owner is not the Jenkins user. Also check for free disk space on the server as well as the agent, and try rebooting the slave agent; it has helped at times.
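A quick way to verify those points from a shell on the master (a sketch; jenkins:jenkins assumes the default user and group of your installation):
$ ls -ld /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build   # owner should be the jenkins user
$ df -h /var/lib/jenkins   # check free disk space
$ sudo chown -R jenkins:jenkins /var/lib/jenkins/jobs   # restore ownership if it was created manually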
How long are the real job names for ABCD and EFGH?
I've run into the 260-character maximum path length with Jenkins on Windows 2008 R2 before.
The path in:
java.io.IOException: Failed to create a temporary file in /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build
with the three /jobs in it seems strange to me. In Jenkins it normally should rather be:
+- /var/lib/jenkins/jobs
   +- ABCD
   |  +- builds
   |  |  +- ...
   |  +- ...
   +- EFGH
   |  +- builds
   |  |  +- ...
   |  +- ...
   +- Build
      +- builds
      |  +- ...
      +- ...
Maybe there's some misconfiguration concerning paths and Jenkins tries a mkdir /var/lib/jenkins/jobs/ABCD/jobs/EFGH/jobs/Build and the Jenkins user or the user under which the job runs doesn't have permissions to do that.
See also File permissions and attributes:
| w | ... | The directory's contents can be modified (create new files or folders; [...]); requires the execute permission to be also set, otherwise this permission has no effect. |
In my situation, this happened because the server was very low on space. Click on "Build Executor Status" from the dashboard and see if there is low disk space or 0 swap space. Try to free up some space. Then restart the Jenkins server / service and try again.

cassandra-stress error in cqlsh 5.0.1 | Cassandra 2.1.12.1046 | DSE 4.8.4

I installed DataStax Enterprise, version as below. When I run a stress test, I get the error below:
cqlsh> show version
[cqlsh 5.0.1 | Cassandra 2.1.12.1046 | DSE 4.8.4 | CQL spec 3.2.1 | Native protocol v3]
[root@pg0 opt]# cassandra-stress write n=19000000 -rate threads=4
Why do I get the error below?
Exception in thread "main" java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused.
