Local CosmosDB connectivity with pymongo for Gramex

Not able to connect to Cosmos DB using the Gramex MongoDB adapter.
From Python, we can connect to a local CosmosDB instance using:
# --------------------------------------
import pymongo

uri = r"mongodb://localhost:Tm%2BOzpb8BrV7DrZHSrGm3GMKyx9r%2Frl5ue9letmD1XRUUiafHFUyIQNenAQDla85nqVDrb8tr%2FtB0LR4azi1FQ%3D%3D@localhost:10255/admin?ssl=true"
client = pymongo.MongoClient(uri,
                             tls=True,
                             tlsCAFile='./documentdbemulatorcert.cer')
db = client.admin
print(db.command("serverStatus"))
# --------------------------------------
Please mind the "tlsCAFile" parameter.
However, in Gramex, when I connect using:
# --------------------------------------
url:
  envvartest-app-data:
    pattern: /$YAMLURL/appdata
    handler: FormHandler
    kwargs:
      url: "mongodb://localhost:Tm%2BOzpb8BrV7DrZHSrGm3GMKyx9r%2Frl5ue9letmD1XRUUiafHFUyIQNenAQDla85nqVDrb8tr%2FtB0LR4azi1FQ%3D%3D@localhost:10255/admin?ssl=true"
      database: galaxy-dev
      collection: threats
      id: record_number
      connect_args:
        tls: True
        tlsCAFile: $YAMLPATH/documentdbemulatorcert.cer
        # ssl:
        #   ssl_ca: $YAMLPATH/documentdbemulatorcert.cer
# --------------------------------------
the connection fails with:
ERROR 17-Feb 14:05:06 formhandler 9988 envvartest-app-data: filter failed
Traceback (most recent call last):
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\handlers\formhandler.py", line 158, in get
result[key] = yield val
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\tornado\gen.py", line 1133, in run
value = future.result()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\_base.py", line 428, in result
return self.__get_result()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\data.py", line 235, in filter
data = method(url=url, controls=controls, args=args, query=query, **kwargs)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\data.py", line 1474, in _filter_mongodb
meta_cols = pd.DataFrame(list(table.find().limit(100)))
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\cursor.py", line 1238, in next
if len(self.__data) or self._refresh():
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\cursor.py", line 1130, in _refresh
self.__session = self.__collection.database.client._ensure_session()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\mongo_client.py", line 1935, in _ensure_session
return self.__start_session(True, causal_consistency=False)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\mongo_client.py", line 1883, in __start_session
server_session = self._get_server_session()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\mongo_client.py", line 1921, in _get_server_session
return self._topology.get_server_session()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\topology.py", line 520, in get_server_session
session_timeout = self._check_session_support()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\topology.py", line 502, in _check_session_support
None)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\topology.py", line 220, in _select_servers_loop
(self._error_message(selector), timeout, self.description))
pymongo.errors.ServerSelectionTimeoutError: localhost:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1091), Timeout: 30s, Topology Description: <TopologyDescription id: 620e089c395cf9273486aa57, topology_type: Single, servers: [<ServerDescription ('localhost', 10255) server_type: Unknown, rtt: None, error=AutoReconnect('localhost:10255: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate (_ssl.c:1091)')>]>
ERROR 17-Feb 14:05:06 __init__ 9988 500 GET /appdata (127.0.0.1) 30795.58ms envvartest-app-data
Steps to reproduce
Install the CosmosDB emulator
Generate an access key:
.\Microsoft.Azure.Cosmos.Emulator.exe /GenKeyFile=D:\99exps\cosmosdb\key
Start the CosmosDB emulator with MongoDB support:
.\Microsoft.Azure.Cosmos.Emulator.exe /FailOnSslCertificateNameMismatch /EnableMongoDbEndpoint=3.6 /EnableMongoDbEndpoint=3.2 /KeyFile=D:\99exps\cosmosdb\key
Create a Gramex application with the configuration above
Observe the error
Edit 1:
The error has changed now:
ERROR 19-Feb 00:10:33 formhandler 9988 galaxy-app-data: filter failed
Traceback (most recent call last):
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\handlers\formhandler.py", line 158, in get
result[key] = yield val
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\tornado\gen.py", line 1133, in run
value = future.result()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\_base.py", line 428, in result
return self.__get_result()
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\_base.py", line 384, in __get_result
raise self._exception
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\data.py", line 239, in filter
columns=columns, **kwargs)
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\gramex\data.py", line 1521, in _filter_mongodb
cols = [k for k in table.find().limit(1)[0].keys()]
File "C:\Users\shraddheya.shrivasta\Anaconda3\lib\site-packages\pymongo\cursor.py", line 694, in __getitem__
raise IndexError("no such item for Cursor instance")
IndexError: no such item for Cursor instance
WARNING 19-Feb 00:10:33 web 500 GET /appdata (::1): IndexError('no such item for Cursor instance')
ERROR 19-Feb 00:10:33 __init__ 9988 500 GET /appdata (::1) 846.11ms galaxy-app-data

As of Gramex 1.76.0 (Feb 2022), FormHandler does not connect to empty MongoDB collections, since there is no schema defined.
There is a plan to support empty MongoDB collections by specifying the schema explicitly.
But for now, you should be able to access the collection once you've added at least one row to it.
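Until that support lands, a minimal workaround sketch, assuming the same emulator URI and certificate as the working pymongo example above: seed the collection with one document so FormHandler can infer a schema. The 'threat' field is hypothetical; 'record_number' matches the id: key in the FormHandler config.

import pymongo

# Same connection details as the standalone pymongo example above.
uri = r"mongodb://localhost:Tm%2BOzpb8BrV7DrZHSrGm3GMKyx9r%2Frl5ue9letmD1XRUUiafHFUyIQNenAQDla85nqVDrb8tr%2FtB0LR4azi1FQ%3D%3D@localhost:10255/admin?ssl=true"
client = pymongo.MongoClient(uri, tls=True, tlsCAFile='./documentdbemulatorcert.cer')
collection = client['galaxy-dev']['threats']
# Insert one seed document so the collection is non-empty and FormHandler
# can infer column names from it.
if collection.count_documents({}) == 0:
    collection.insert_one({'record_number': 1, 'threat': 'seed row'})  # hypothetical fields
print(collection.find_one())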

Related

Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

My code looks OK, but it is returning an internal server error. Below is the error output:
C:\Users\Aisha\anaconda3\python.exe "C:/Program Files/JetBrains/PyCharm Community Edition 2022.3.2/plugins/python-ce/helpers/pydev/pydevd.py" --multiprocess --qt-support=auto --client 127.0.0.1 --port 52852 --file C:\Users\Aisha\Codesample\KAFDISEASEDETECTION\API\tfservingversion.py
Connected to pydev debugger (build 223.8617.48)
INFO: Started server process [15480]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:8000 (Press CTRL+C to quit)
INFO: ::1:52885 - "POST /predict HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connection.py", line 174, in _new_conn
conn = connection.create_connection(
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\util\connection.py", line 95, in create_connection
raise err
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\util\connection.py", line 85, in create_connection
sock.connect(sa)
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x0000029197A48D00>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Users\Aisha\anaconda3\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='localhost', port=8501): Max retries exceeded with url: /v1/models/MyModels:predict (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000029197A48D00>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Aisha\anaconda3\lib\site-packages\uvicorn\protocols\http\httptools_impl.py", line 419, in run_asgi
result = await app( # type: ignore[func-returns-value]
File "C:\Users\Aisha\anaconda3\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 78, in call
return await self.app(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\fastapi\applications.py", line 270, in call
await super().call(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\applications.py", line 124, in call
await self.middleware_stack(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 184, in call
raise exc
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\middleware\errors.py", line 162, in call
await self.app(scope, receive, _send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\middleware\cors.py", line 84, in call
await self.app(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\middleware\exceptions.py", line 79, in call
raise exc
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\middleware\exceptions.py", line 68, in call
await self.app(scope, receive, sender)
File "C:\Users\Aisha\anaconda3\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 21, in call
raise e
File "C:\Users\Aisha\anaconda3\lib\site-packages\fastapi\middleware\asyncexitstack.py", line 18, in call
await self.app(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\routing.py", line 706, in call
await route.handle(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\routing.py", line 276, in handle
await self.app(scope, receive, send)
File "C:\Users\Aisha\anaconda3\lib\site-packages\starlette\routing.py", line 66, in app
response = await func(request)
File "C:\Users\Aisha\anaconda3\lib\site-packages\fastapi\routing.py", line 235, in app
raw_response = await run_endpoint_function(
File "C:\Users\Aisha\anaconda3\lib\site-packages\fastapi\routing.py", line 161, in run_endpoint_function
return await dependant.call(**values)
File "C:\Users\Aisha\Codesample\KAFDISEASEDETECTION\API\tfservingversion.py", line 68, in predict
response = requests.post(endpoint, json=json_data, verify=False, timeout=5)
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Users\Aisha\anaconda3\lib\site-packages\requests\adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='localhost', port=8501): Max retries exceeded with url: /v1/models/MyModels:predict (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000029197A48D00>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))
Process finished with exit code -1
from fastapi import FastAPI, File, UploadFile
import uvicorn
import numpy as np
from io import BytesIO
from PIL import Image
import tensorflow as tf
import requests
import json
import ssl
# import urllib3
from fastapi.middleware.cors import CORSMiddleware

# Create an SSL context with the desired options
ssl_context = ssl.create_default_context()
ssl_context.options |= ssl.OP_NO_TLSv1 | ssl.OP_NO_TLSv1_1
# urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

app = FastAPI()
origins = [
    "http://localhost",
]
app.add_middleware(
    CORSMiddleware,
    allow_origins=origins,
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# beta_model = tf.keras.models.load_model("../MyModels/2")
endpoint = "https://localhost:8501/v1/models/MyModels:predict"
CLASS_NAMES = ["Banana_cordana", "Banana_healthy", "Banana_pestalotiopsis", "Banana_sigatoka",
               "Corn_Cercospora_Grayleaf_spot", "Corn_Common_rust", 'Corn_Northern_Leaf_Blight', 'Corn_healthy',
               'Potato___Early_blight', 'Potato___Late_blight', 'Potato___healthy', 'Rice_Bacterial _Leaf_blight',
               'Rice_Brown_Spot', 'Rice_Leaf_ Smut', 'Tomato_Bacterial_spot', 'Tomato_Early_blight',
               'Tomato_Late_blight', "Tomato_Leaf_Mold", "Tomato_Septoria_leaf_spot",
               "Tomato_Spider_mites_Two_spotted_spider_mite", "Tomato__Target_Spot",
               "Tomato__Tomato_YellowLeaf__Curl_Virus", "Tomato__Tomato_mosaic_virus", "Tomato_healthy"]


@app.get("/ping")
async def ping():
    return " How are you today "


def read_file_as_image(data) -> np.ndarray:
    image = np.array(Image.open(BytesIO(data)))
    return image


@app.post("/predict")
async def predict(
    file: UploadFile = File(...)
):
    image = read_file_as_image(await file.read())
    img_batch = np.expand_dims(image, 0)
    json_data = {
        "instances": img_batch.tolist()
    }
    response = requests.post(endpoint, json=json_data, verify=False, timeout=5)
    # Make a request to the TensorFlow Serving endpoint with the SSL context
    # response = requests.post('https://localhost:8501/v1/models/your-model:predict',
    #                          json=json_data, verify=False, timeout=5)
    # predictions = json.loads(response.text)['predictions'][0]
    # predicted_class_index = np.argmax(predictions)
    # predicted_class = CLASS_NAMES[predicted_class_index]
    # confidence = str(round(100 * (predictions[predicted_class_index]), 2)) + "%"
    # return {"class": predicted_class, "confidence": confidence}


if __name__ == "__main__":
    uvicorn.run(app, host='localhost', port=8000)
This is my code.
I expected a 200 OK response. I tried changing the port number, updating Python, installing OpenSSL, and updating TensorFlow and other dependencies.
Instead I got an internal server error in Postman, and the following error in PyCharm:
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='localhost', port=8501): Max retries exceeded with url: /v1/models/MyModels:predict (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x0000029197A48D00>: Failed to establish a new connection: [WinError 10061] No connection could be made because the target machine actively refused it'))

GCP Composer - ftplib timeouterror errno 110 connection timed out

I am trying to get data from a txt file on an FTP server using GCP Composer tasks, so I imported and used the ftplib package in my code, like this:
ftp = FTP()
ftp.connect(host=HOST,port=PORT, timeout=600)
ftp.login(user=USER,passwd=PSWD)
ftp.set_pasv(True)
ftp.sendcmd('TYPE A')
conn = ftp.transfercmd(F"RETR {PATH}")
fp = conn.makefile('rb')
But this line (conn = ftp.transfercmd(F"RETR {PATH}")) raised TimeoutError: [Errno 110] Connection timed out.
ERROR - Task failed with exception
Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1166, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1285, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/models/taskinstance.py", line 1315, in _execute_task
result = task_copy.execute(context=context)
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 150, in execute
return_value = self.execute_callable()
File "/opt/python3.8/lib/python3.8/site-packages/airflow/operators/python.py", line 161, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/home/airflow/gcs/dags/FTP_ZIPCODE_to_BQ_DAG.py", line 91, in replace_BQ_table
conn = ftp.transfercmd(F"RETR {PATH}")
File "/opt/python3.8/lib/python3.8/ftplib.py", line 389, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "/opt/python3.8/lib/python3.8/ftplib.py", line 350, in ntransfercmd
conn = socket.create_connection((host, port), self.timeout,
File "/opt/python3.8/lib/python3.8/socket.py", line 808, in create_connection
raise err
File "/opt/python3.8/lib/python3.8/socket.py", line 796, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Connection timed out
Does anyone know the reason?
And this is my GCP Composer environment:
Image Version: composer-1.17.7-airflow-2.1.4
Python version: 3
Network VPC-native: Enabled
One thing that may be happening is that the workers cannot reach the FTP server. To get the connection working, you need to set up Cloud NAT together with Cloud Composer to give the workers public internet access, as described in the documentation.
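For reference, here is a minimal sketch of the same download using ftplib's higher-level retrbinary (reusing the HOST, PORT, USER, PSWD, and PATH names from the question); it opens the same passive-mode data channel as transfercmd, so it will only succeed once the workers have outbound access:

from ftplib import FTP
from io import BytesIO

buf = BytesIO()
ftp = FTP()
ftp.connect(host=HOST, port=PORT, timeout=600)
ftp.login(user=USER, passwd=PSWD)
ftp.set_pasv(True)                          # passive mode: the client opens the data connection
ftp.retrbinary(f"RETR {PATH}", buf.write)   # stream the file into the buffer
ftp.quit()
text = buf.getvalue().decode("utf-8")       # the question reads a text file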

Periodic "Lost connection to MySQL server during query" after Dockerizing Flask Web App

I have a Flask web app that used to run on a standalone server using the following:
Flask/SQLAlchemy
MariaDB
uwsgi
nginx
On the standalone server this application ran fine.
I have since "dockerized" this application across two containers:
uwsgi-nginx-flask
MariaDB
Ever since dockerizing it, I occasionally get this error (entire traceback posted at the end):
Lost connection to MySQL server during query
The MariaDB log shows the following errors with verbose logging:
2020-05-10 18:35:32 130 [Warning] Aborted connection 130 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got an error reading communication packets)
2020-05-10 18:45:34 128 [Warning] Aborted connection 128 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got timeout reading communication packets)
This is experienced by the user as a 502 Bad Gateway. If the user refreshes the page, this will often solve the problem. This issue arises at random. I have not been able to reproduce it at will, but over time it will inevitably show up.
What is causing this and how can I solve it?
What I've done:
Verified that the MariaDB container has a timeout of 28800. I've seen the error occur much sooner than 28800 seconds after restarting all the containers, so I don't think it is actually a timeout issue.
Set pool_recycle option to 120
Verified that Flask-SQLAlchemy is using a scoped_session which should avoid these timing out issues.
Changed network_mode to default per a comment. This did not solve the problem.
My thoughts are that it acts like the connection between Flask and the database is unreliable, but as Docker containers running on the same host, shouldn't that be quite reliable?
Relevant code:
database.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
config.py
class Config:
    ...
    SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://username:password@127.0.0.1/database?charset=utf8mb4'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_recycle': 120
    }
    ...
...
docker-compose.yml
version: "3.7"
services:
  db:
    restart: "always"
    build: ./docker/db
    volumes:
      - "~/db:/var/lib/mysql"
    environment:
      MYSQL_ROOT_PASSWORD: "password"
      MYSQL_DATABASE: "database"
      MYSQL_USER: "user"
      MYSQL_PASSWORD: "password"
    ports:
      - '3306:3306'
  nginx-uwsgi-flask:
    restart: "always"
    depends_on:
      - "db"
    build:
      context: .
      dockerfile: ./docker/nginx-uwsgi-flask/Dockerfile
    volumes:
      - "~/data/fileshare:/fileshare"
    ports:
      - "80:80"
      - "443:443"
    network_mode: "host"
traceback
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 132, in decorator
allowed = _is_logged_in_with_confirmed_email(user_manager)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 17, in _is_logged_in_with_confirmed_email
if user_manager.call_or_get(current_user.is_authenticated):
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 306, in _get_current_object
return self.__local()
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 26, in <lambda>
current_user = LocalProxy(lambda: _get_user())
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 335, in _get_user
current_app.login_manager._load_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 359, in _load_user
return self.reload_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 321, in reload_user
user = self.user_callback(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_manager.py", line 130, in load_user_by_user_token
user = self.db_manager.UserClass.get_user_by_token(user_token)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_mixin.py", line 51, in get_user_by_token
user = user_manager.db_manager.get_user_by_id(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_manager.py", line 179, in get_user_by_id
return self.db_adapter.get_object(self.UserClass, id=id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_adapters/sql_db_adapter.py", line 48, in get_object
return ObjectClass.query.get(id)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 924, in get
ident, loading.load_on_pk_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 1007, in _get_impl
return db_load_fn(self, primary_key_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/loading.py", line 250, in load_on_pk_identity
return q.one()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2954, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2924, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2995, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3018, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT user.is_active AS user_is_active, user.id AS user_id, user.username AS user_username, user.password AS user_password, user.reset_password_token AS user_reset_password_token, user.email AS user_email, user.email_confirmed_at AS user_email_confirmed_at, user.first_name AS user_first_name, user.last_name AS user_last_name \nFROM user \nWHERE user.id = %(param_1)s'] [parameters: {'param_1': 13}] (Background on this error at: http://sqlalche.me/e/e3q8)
I solved this issue by migrating from a mariadb container to mysql. I still don't know what the root cause is.
Try the following to rule out stale connections in your pool.
From https://docs.sqlalchemy.org/en/13/core/pooling.html#pool-disconnects:
The approach adds a small bit of overhead to the connection checkout process, however is otherwise the most simple and reliable approach to completely eliminating database errors due to stale pooled connections. The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument:
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
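In this app's setup, the same flag can go through Flask-SQLAlchemy's engine options; a sketch based on the config.py above (SQLALCHEMY_ENGINE_OPTIONS is forwarded to create_engine from Flask-SQLAlchemy 2.4 on):

class Config:
    ...
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_recycle': 120,
        'pool_pre_ping': True,  # test each pooled connection on checkout; discard stale ones
    }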
I don't know which versions you are using, but did you try to set SQLALCHEMY_POOL_SIZE and SQLALCHEMY_POOL_RECYCLE?
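Those are plain config keys in older Flask-SQLAlchemy versions; a sketch with illustrative values (newer releases deprecate them in favor of SQLALCHEMY_ENGINE_OPTIONS):

class Config:
    ...
    SQLALCHEMY_POOL_SIZE = 10      # connections kept open in the pool
    SQLALCHEMY_POOL_RECYCLE = 280  # recycle connections well before the server-side timeout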
I faced a similar problem because I have a multi-threaded Django app: when a thread tried to access the database, the connection was lost.
There is a Django bug:
https://code.djangoproject.com/ticket/21597
You can solve it with this workaround:
from django.db import connection

def is_connection_usable():
    try:
        connection.connection.ping()
    except:
        return False
    else:
        return True

def do_work():
    while True:  # Endless loop that keeps the worker going (simplified)
        if not is_connection_usable():
            connection.close()
        try:
            do_a_bit_of_work()
        except:
            logger.exception("Something bad happened, trying again")
————————————————
Copyright notice: This is an original article by CSDN blogger 「orangleliu」, licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lzz957748332/java/article/details/41480417
First check your database connection; if the connection is lost, close it, and when you attempt to connect again you will reconnect.
See: https://blog.csdn.net/lzz957748332/article/details/41480417

o.n.b.v.t.BoltProtocolV1- Failed to write response to driver Cannot write to buffer when closed

I am using Neo4j version 3.1.3, where I am getting the following error:
2018-02-28 05:17:27.780+0000 ERROR [o.n.b.v.t.BoltProtocolV1] Failed to write response to driver Cannot write to buffer when closed
java.io.IOException: Cannot write to buffer when closed
at org.neo4j.bolt.v1.transport.ChunkedOutput.ensure(ChunkedOutput.java:163)
at org.neo4j.bolt.v1.transport.ChunkedOutput.writeShort(ChunkedOutput.java:94)
at org.neo4j.bolt.v1.packstream.PackStream$Packer.packStructHeader(PackStream.java:330)
at org.neo4j.bolt.v1.messaging.BoltResponseMessageWriter.onSuccess(BoltResponseMessageWriter.java:72)
at org.neo4j.bolt.v1.messaging.MessageProcessingHandler.onFinish(MessageProcessingHandler.java:111)
at org.neo4j.bolt.v1.runtime.BoltStateMachine.after(BoltStateMachine.java:105)
at org.neo4j.bolt.v1.runtime.BoltStateMachine.run(BoltStateMachine.java:201)
at org.neo4j.bolt.v1.messaging.BoltMessageRouter.lambda$onRun$3(BoltMessageRouter.java:80)
at org.neo4j.bolt.v1.runtime.concurrent.RunnableBoltWorker.execute(RunnableBoltWorker.java:135)
at org.neo4j.bolt.v1.runtime.concurrent.RunnableBoltWorker.run(RunnableBoltWorker.java:89)
at java.lang.Thread.run(Thread.java:748)
I have used the following configuration:
dbms.memory.heap.initial_size=8g
dbms.memory.heap.max_size=8g
dbms.memory.pagecache.size=4g
Machine description: 4 vCPUs, 15 GB memory
My Neo4j client is Python, using the Neo4j Python driver, version 1.5.3. I am getting the following error on the client side:
[2018-02-28 10:59:43,838: ERROR/MainProcess] Task api_keyword.tasks.job.update_application_rel[14f5baa6-09a5-44df-9bd7-982116e0b184] raised unexpected: ServiceUnavailable("Failed to write to closed connection Address(host='10.160.0.9', port=7687)",)
Traceback (most recent call last):
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/celery/app/trace.py", line 438, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ubuntu/celeryprod/api/src/api_keyword/tasks/job.py", line 21, in update_application_rel
rel_label='APPLIED'
File "/home/ubuntu/celeryprod/api/src/api_keyword/utils/neo4j/base.py", line 76, in delete_relationship
self.run_query(neo4j_query)
File "/home/ubuntu/celeryprod/api/src/api_keyword/utils/neo4j/base.py", line 32, in run_query
res = Neo4jConnector().run_query(neo4j_query)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 49, in wrapped_f
return Retrying(*dargs, **dkw).call(f, *args, **kw)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 212, in call
raise attempt.get()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 247, in get
six.reraise(self.value[0], self.value[1], self.value[2])
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/six.py", line 686, in reraise
raise value
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/retrying.py", line 200, in call
attempt = Attempt(fn(*args, **kwargs), attempt_number, False)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/timeout_decorator/timeout_decorator.py", line 81, in new_function
return function(*args, **kwargs)
File "/home/ubuntu/celeryprod/api/src/core/services/neo4j.py", line 56, in run_query
result = session.run(cypher_query)
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/v1/api.py", line 339, in run
self._connection.send()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/bolt/connection.py", line 263, in send
self._send()
File "/home/ubuntu/celeryprod/lib/python3.5/site-packages/neo4j/bolt/connection.py", line 275, in _send
raise self.Error("Failed to write to closed connection {!r}".format(self.server.address))
neo4j.exceptions.ServiceUnavailable: Failed to write to closed connection Address(host='10.160.0.9', port=7687)
I am initializing the driver like this:
self.driver = GraphDatabase.driver("bolt://{0}".format(self.neo4j_db_url),
                                   auth=basic_auth(self.neo4j_username, self.neo4j_password),
                                   connection_timeout=60)
Can anyone help me with this? Is there any configuration I need to tune, or any other configuration I need to define?

DataStax OpsCenter 6.0 won't start if HTTPS is enabled

I recently upgraded DSE to 5.x and OpsCenter to 6.0. Everything works great, except that OpsCenter fails to start if I enable HTTPS using our own wildcard certificates. I have a .pem file with the cert chain and a .key file with the password removed. The same setup works perfectly with OpsCenter 5.2.4, but with 6.0 I get the error below:
[opscenterd] ERROR: Traceback (most recent call last):
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 111, in setupWebServer
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/WebServer.py", line 108, in makeWebServer
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/SslUtils.py", line 24, in make_ssl_context_factory
File "/usr/share/opscenter/lib/py/twisted/internet/legacy_ssl.py", line 1133, in __init__ self.cacheContext()
File "/usr/share/opscenter/lib/py/twisted/internet/legacy_ssl.py", line 1142, in cacheContext ctx.load_cert_chain(self.certificateFileName, keyfile=self.privateKeyFileName) # Automatically checks against private key
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/ssl.py", line 1035, in load_cert_chain self._key_managers = _get_openssl_key_manager(certfile, keyfile, password, _key_store=self._key_store)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/ssl.py", line 1035, in load_cert_chain self._key_managers = _get_openssl_key_manager(certfile, keyfile, password, _key_store=self._key_store)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 113, in _get_openssl_key_manager _certs, _private_key = _extract_certs_for_paths([cert_file], password)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 211, in _extract_certs_for_paths _certs, _private_key = _extract_cert_from_data(f, password, key_converter, cert_converter)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 230, in _extract_cert_from_data certs, private_key = _read_pem_cert_from_data(f, password, key_converter, cert_converter)
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 246, in _read_pem_cert_from_data for br in _extract_readers(f):
File "/usr/share/opscenter/lib/jvm/jython-standalone-2.7.0.3.jar/Lib/_sslcerts.py", line 94, in _extract_readers raise SSLError(SSL_ERROR_SSL, "PEM lib (no start line or not enough data)")
SSLError: [Errno 1] PEM lib (no start line or not enough data)
(MainThread)
[opscenterd] ERROR: There was an error starting the OpsCenterd process: Traceback (most recent call last):
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 49, in startService
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/OpsCenterdService.py", line 123, in setupWebServer
NameError: global name 'System' is not defined
(MainThread)
Note that OpsCenter starts if I use the default certificate and key included in the installation.
Thanks for the help!
Michael
