I am running the Datadog container as one of the services in Docker Compose, using the Agent 7 image.
version: "3.9"
services:
app:
image: app
container_name: app
hostname: app
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
ports:
- 8080:80
volumes:
- shared_volume:/tmp/logs
datadog:
container_name: dd-agent
image: gcr.io/datadoghq/agent:7
restart: always
ports:
- 8125:8125/udp
- 8126:8126
environment:
- DD_API_KEY=${DATADOG_API_KEY}
- DD_SITE=${DD_SITE}
- DD_DOGSTATSD_NON_LOCAL_TRAFFIC=${DD_DOGSTATSD_NON_LOCAL_TRAFFIC}
- DD_LOGS_ENABLED="true"
- DD_APM_ENABLED="true"
- DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL="true"
- DD_CONTAINER_EXCLUDE_LOGS="name:dd-agent"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /proc/:/host/proc/:ro
# - /opt/dd-agent/run:/opt/dd-agent/run:rw
- /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
volumes:
shared_volume:
However, running the Datadog container runs into an error. The error log says it's trying to connect to a Redis server. I am not sure where this is coming from, as I don't recall Redis being one of the dependencies for Datadog.
I've pasted the log below for convenience:
dd-agent | 2022-10-11 10:13:53 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:php_fpm | Error running check: [{"message": "Detected 1 error while loading configuration model `InstanceConfig`:\n__root__\n Field `status_url` or `ping_url` must be set", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1091, in run\n initialization()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 492, in load_configuration_models\n instance_config = self.load_configuration_model(package_path, 'InstanceConfig', raw_instance_config)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 536, in load_configuration_model\n raise_from(ConfigurationError('\\n'.join(message_lines)), None)\n File \"<string>\", line 3, in raise_from\ndatadog_checks.base.errors.ConfigurationError: Detected 1 error while loading configuration model `InstanceConfig`:\n__root__\n Field `status_url` or `ping_url` must be set\n"}]
dd-agent | 2022-10-11 10:13:57 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:redisdb | Error running check: [{"message": "Timeout connecting to server", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 611, in connect\n sock = self.retry.call_with_retry(\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/retry.py\", line 51, in call_with_retry\n raise error\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/retry.py\", line 46, in call_with_retry\n return do()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 612, in <lambda>\n lambda: self._connect(), lambda error: self.disconnect(error)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 677, in _connect\n raise err\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 665, in _connect\n sock.connect(socket_address)\nsocket.timeout: timed out\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1116, in run\n self.check(instance)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/redisdb/redisdb.py\", line 556, in check\n self._check_db()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/redisdb/redisdb.py\", line 205, in _check_db\n info = conn.info()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/commands/core.py\", line 970, in info\n return self.execute_command(\"INFO\", **kwargs)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/client.py\", line 1235, in execute_command\n conn = self.connection or pool.get_connection(command_name, **options)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 1387, in get_connection\n connection.connect()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 615, in connect\n raise TimeoutError(\"Timeout connecting to server\")\nredis.exceptions.TimeoutError: Timeout connecting to server\n"}]
This is a known issue; however, it does not interfere with the agent collecting and sending logs and traces to Datadog.
Here is a comment from that same issue:
"Do not pay attention to this; it does not have any impact on collecting and sending data."
I ran into the same issue and can confirm that the agent still works. I wish they would fix this, or at least downgrade the error to a warning that says "You may disregard this warning".
Related
I am running Home Assistant in a Docker container on an RPi 4 with Raspbian. I am using tributs' script to remove the need to run the Docker image as root. This all works dandy. But now I am trying to add the DSMR integration and I am not succeeding. The integration needs to connect to the "Slimme meter" via USB. However, I get a permission error. My knowledge of both Docker and Linux privileges is too limited to know where to start debugging this. Does anyone have some pointers for me?
This is the error message homeassistant is throwing at me:
Logger: homeassistant.components.dsmr
Source: components/dsmr/config_flow.py:93
Integration: dsmr (documentation, issues)
First occurred: 21:18:17 (1 occurrences)
Last logged: 21:18:17
Error connecting to DSMR
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/serial/serialposix.py", line 322, in open
self.fd = os.open(self.portstr, os.O_RDWR | os.O_NOCTTY | os.O_NONBLOCK)
PermissionError: [Errno 13] Permission denied: '/dev/ttyUSB0'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/dsmr/config_flow.py", line 93, in validate_connect
transport, protocol = await asyncio.create_task(reader_factory())
File "/usr/local/lib/python3.9/site-packages/serial_asyncio/__init__.py", line 445, in create_serial_connection
serial_instance = serial.serial_for_url(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/serial/__init__.py", line 90, in serial_for_url
instance.open()
File "/usr/local/lib/python3.9/site-packages/serial/serialposix.py", line 325, in open
raise SerialException(msg.errno, "could not open port {}: {}".format(self._port, msg))
serial.serialutil.SerialException: [Errno 13] could not open port /dev/ttyUSB0: [Errno 13] Permission denied: '/dev/ttyUSB0'
After some research I figured out that I needed to add the user to an extra group named dialout, because only members of that group are allowed to access the USB ports (as well as some other devices).
First I figured out the group id of the dialout group in the host machine (the machine running the docker container) by running
cat /etc/group | grep dialout
It returned 20 in my case. Luckily, tributs' script can add the user to an extra group via the environment variable EXTRA_GID. So the relevant lines in the docker-compose file for accessing USB (when using that script) are:
devices:
  - /dev/ttyUSB0:/dev/ttyUSB0
environment:
  - PUID=1000
  - PGID=1000
  - EXTRA_GID=20 # this is the ID of the group 'dialout'
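To verify the fix from inside the running container, a quick check along these lines can help (just a sketch; the device path and the GID 20 are the ones from the error and the host lookup above):

import os

st = os.stat("/dev/ttyUSB0")
print(st.st_gid)        # should print 20, the GID of 'dialout' on the host
print(os.getgroups())   # 20 should appear here if EXTRA_GID was applied

If the GID shows up in os.getgroups(), the serial port should be readable without running the container as root.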
Rather than going through the painful process of bringing everything into Neo4j by hand, I resorted to an easier solution, as configuration is not my expertise: https://medium.com/neo4j/transform-mongodb-collections-automagically-into-graphs-9ea085d6e3ef
I followed the instructions from there to configure it (I want to import data from MongoDB into Neo4j). I am on OSX Catalina. What endpoint should I be using? Following are my settings:
I have a virtual environment created with pipenv.
mongo-connector version: 3.1.1
Python version: 3.9.1
pymongo version: 3.11.3
MongoDB version: 4.4.3
neo4j_doc_manager version: unknown (it was installed following the instructions from the link above).
Now, when I try to connect, I run the following:
mongo-connector -m localhost:27017 -t http://localhost:11005/data/db -d neo4j_doc_manager
I used that endpoint because I have the following endpoints:
bolt: https://localhost:11004
http: https://localhost:11005
https: https://localhost:7473
My Neo4j location:
/Users/Library/Application Support/com.Neo4j.Relate/Data/dbmss/dbms-bbb9318b-9083-4ebf-a934-e17ef055ae22
I have disabled authentication:
dbms.security.auth_enabled=false
I have no idea why I get HTTP 404:
2021-02-05 08:11:15,577 [ALWAYS] mongo_connector.connector:50 - Starting mongo-connector version: 3.1.1
2021-02-05 08:11:15,578 [ALWAYS] mongo_connector.connector:50 - Python version: 3.9.1 (default, Jan 8 2021, 17:17:17)
[Clang 12.0.0 (clang-1200.0.32.28)]
2021-02-05 08:11:15,612 [ALWAYS] mongo_connector.connector:50 - Platform: macOS-10.15.7-x86_64-i386-64bit
2021-02-05 08:11:15,612 [ALWAYS] mongo_connector.connector:50 - pymongo version: 3.11.3
2021-02-05 08:11:15,619 [ALWAYS] mongo_connector.connector:50 - Source MongoDB version: 4.4.3
2021-02-05 08:11:15,619 [ALWAYS] mongo_connector.connector:50 - Target DocManager: mongo_connector.doc_managers.neo4j_doc_manager version: unknown
2021-02-05 08:11:15,633 [CRITICAL] mongo_connector.oplog_manager:713 - Exception during collection dump
Traceback (most recent call last):
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 258, in get
response = self.__base.get(headers=headers, redirect_limit=redirect_limit, **kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/packages/httpstream/http.py", line 966, in get
return self.__get_or_head("GET", if_modified_since, headers, redirect_limit, **kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/packages/httpstream/http.py", line 943, in __get_or_head
return rq.submit(redirect_limit=redirect_limit, **kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/packages/httpstream/http.py", line 452, in submit
return Response.wrap(http, uri, self, rs, **response_kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/packages/httpstream/http.py", line 489, in wrap
raise inst
py2neo.packages.httpstream.http.ClientError: 404 Not Found
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/util.py", line 33, in wrapped
return f(*args, **kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/doc_managers/neo4j_doc_manager.py", line 78, in bulk_upsert
tx = self.graph.cypher.begin()
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 661, in cypher
metadata = self.resource.metadata
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 213, in metadata
self.get()
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 267, in get
raise_from(self.error_class(message, **content), error)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/util.py", line 235, in raise_from
raise exception
py2neo.error.GraphError: HTTP GET returned response 404
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/oplog_manager.py", line 668, in do_dump
upsert_all(dm)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/oplog_manager.py", line 651, in upsert_all
dm.bulk_upsert(docs_to_dump(from_coll), mapped_ns, long_ts)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/util.py", line 44, in wrapped
raise new_type(str(exc_value)).with_traceback(exc_tb)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/util.py", line 33, in wrapped
return f(*args, **kwargs)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/mongo_connector/doc_managers/neo4j_doc_manager.py", line 78, in bulk_upsert
tx = self.graph.cypher.begin()
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 661, in cypher
metadata = self.resource.metadata
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 213, in metadata
self.get()
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/core.py", line 267, in get
raise_from(self.error_class(message, **content), error)
File "/Users/.local/share/virtualenvs/insiderTrading-L0T4T-gI/lib/python3.9/site-packages/py2neo/util.py", line 235, in raise_from
raise exception
mongo_connector.doc_managers.error_handler.Neo4jOperationFailed: HTTP GET returned response 404
2021-02-05 08:11:15,634 [ERROR] mongo_connector.oplog_manager:723 - OplogThread: Failed during dump collection cannot recover! Collection(Database(MongoClient(host=['localhost:27017'], document_class=dict, tz_aware=False, connect=True, replicaset='rs0'), 'local'), 'oplog.rs')
2021-02-05 08:11:16,630 [ERROR] mongo_connector.connector:408 - MongoConnector: OplogThread <OplogThread(Thread-2, started 123145708253184)> unexpectedly stopped! Shutting down
You should check whether the version of py2neo you are using is compatible with the version of Neo4j. I suspect it isn't, since py2neo.packages.httpstream is a wildly out-of-date package name, and it probably refers to a deprecated/removed HTTP endpoint.
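As a quick sanity check (a sketch only; it assumes the doc manager is driving an old py2neo 2.x client against the legacy /db/data REST endpoint, which recent Neo4j servers no longer expose), you could verify both pieces from the same virtualenv:

import py2neo
import requests

print(py2neo.__version__)  # the old py2neo 2.x line is what this traceback points to

# The legacy REST root the old client talks to; a 404 here means the Neo4j
# server you are running has removed that endpoint, so the stack cannot work as-is.
resp = requests.get("http://localhost:11005/db/data/")
print(resp.status_code)

If that probe returns 404, no amount of mongo-connector configuration will help; you would need a doc manager (or Neo4j version) whose HTTP API actually exists on the server.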
I have a Flask web app that used to run on a standalone server using the following:
Flask/SQLAlchemy
MariaDB
uwsgi
nginx
On the standalone server this application ran fine.
I have since "dockerized" this application across two containers:
uwsgi-nginx-flask
MariaDB
Ever since dockerizing it, I occasionally get this error (the entire traceback is posted at the end):
Lost connection to MySQL server during query
The MariaDB log shows the following errors with verbose logging:
2020-05-10 18:35:32 130 [Warning] Aborted connection 130 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got an error reading communication packets)
2020-05-10 18:45:34 128 [Warning] Aborted connection 128 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got timeout reading communication packets)
This is experienced by the user as a 502 Bad Gateway. If the user refreshes the page, this will often solve the problem. This issue arises at random. I have not been able to reproduce it at will, but over time it will inevitably show up.
What is causing this and how can I solve it?
What I've done:
Verified that the MariaDB container's timeout is 28800 seconds. I've seen the error occur much sooner than 28800 seconds after restarting all the containers, so I don't think it is actually a timeout issue.
Set pool_recycle option to 120
Verified that Flask-SQLAlchemy is using a scoped_session, which should avoid these timeout issues.
Changed network_mode to default per a comment. This did not solve the problem.
It acts as if the connection between Flask and the database is unreliable, but for Docker containers running on the same host, shouldn't that connection be quite reliable?
Relevant code:
database.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
config.py
class Config:
    ...
    SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://username:password@127.0.0.1/database?charset=utf8mb4'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_recycle': 120
    }
    ...
docker-compose.yml
version: "3.7"
services:
db:
restart: "always"
build: ./docker/db
volumes:
- "~/db:/var/lib/mysql"
environment:
MYSQL_ROOT_PASSWORD: "password"
MYSQL_DATABASE: "database"
MYSQL_USER: "user"
MYSQL_PASSWORD: "password"
ports:
- '3306:3306'
nginx-uwsgi-flask:
restart: "always"
depends_on:
- "db"
build:
context: .
dockerfile: ./docker/nginx-uwsgi-flask/Dockerfile
volumes:
- "~/data/fileshare:/fileshare"
ports:
- "80:80"
- "443:443"
network_mode: "host"
traceback
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 132, in decorator
allowed = _is_logged_in_with_confirmed_email(user_manager)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 17, in _is_logged_in_with_confirmed_email
if user_manager.call_or_get(current_user.is_authenticated):
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 306, in _get_current_object
return self.__local()
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 26, in <lambda>
current_user = LocalProxy(lambda: _get_user())
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 335, in _get_user
current_app.login_manager._load_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 359, in _load_user
return self.reload_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 321, in reload_user
user = self.user_callback(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_manager.py", line 130, in load_user_by_user_token
user = self.db_manager.UserClass.get_user_by_token(user_token)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_mixin.py", line 51, in get_user_by_token
user = user_manager.db_manager.get_user_by_id(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_manager.py", line 179, in get_user_by_id
return self.db_adapter.get_object(self.UserClass, id=id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_adapters/sql_db_adapter.py", line 48, in get_object
return ObjectClass.query.get(id)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 924, in get
ident, loading.load_on_pk_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 1007, in _get_impl
return db_load_fn(self, primary_key_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/loading.py", line 250, in load_on_pk_identity
return q.one()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2954, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2924, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2995, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3018, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT user.is_active AS user_is_active, user.id AS user_id, user.username AS user_username, user.password AS user_password, user.reset_password_token AS user_reset_password_token, user.email AS user_email, user.email_confirmed_at AS user_email_confirmed_at, user.first_name AS user_first_name, user.last_name AS user_last_name \nFROM user \nWHERE user.id = %(param_1)s'] [parameters: {'param_1': 13}] (Background on this error at: http://sqlalche.me/e/e3q8)
I solved this issue by migrating from a MariaDB container to MySQL. I still don't know what the root cause is.
Try the following to rule out that you have stale connections in your pool:
From https://docs.sqlalchemy.org/en/13/core/pooling.html#pool-disconnects
The approach adds a small bit of overhead to the connection checkout process, however is otherwise the most simple and reliable approach to completely eliminating database errors due to stale pooled connections. The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument:
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
I don't know which versions you are using, but did you try to set 'SQLALCHEMY_POOL_SIZE' and 'SQLALCHEMY_POOL_RECYCLE'?
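Since the app configures the engine through Flask-SQLAlchemy rather than calling create_engine() directly, both of the suggestions above can go into the app config. This is a minimal sketch, assuming Flask-SQLAlchemy 2.4+ (which forwards SQLALCHEMY_ENGINE_OPTIONS to create_engine()); the pool numbers are placeholders, not recommendations:

# config.py - sketch combining pool_pre_ping with explicit pool settings
class Config:
    SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://username:password@127.0.0.1/database?charset=utf8mb4'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_pre_ping': True,  # ping each connection on checkout and replace stale ones
        'pool_recycle': 280,    # recycle connections before any server-side timeout can hit
        'pool_size': 10,        # placeholder value
        'max_overflow': 5,      # placeholder value
    }

With pool_pre_ping enabled, a connection the server has already dropped is detected at checkout and replaced transparently, instead of failing the first query with "Lost connection to MySQL server during query".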
I faced a similar problem because I have a multi-threaded Django app; when a thread tried to access the database, the connection had been lost.
There is a Django bug for this:
https://code.djangoproject.com/ticket/21597
You can solve it with this workaround:
from django.db import connection

def is_connection_usable():
    try:
        connection.connection.ping()
    except:
        return False
    else:
        return True

def do_work():
    while True:  # Endless loop that keeps the worker going (simplified)
        if not is_connection_usable():
            connection.close()
        try:
            do_a_bit_of_work()
        except:
            logger.exception("Something bad happened, trying again")
————————————————
Copyright notice: this is an original article by the CSDN blogger "orangleliu", licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/lzz957748332/java/article/details/41480417
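If you are on Django 1.6 or newer, a shorter variant of the same idea (just a sketch, not part of the original answer) is to let Django discard dead connections itself via close_old_connections():

from django.db import close_old_connections

def do_work():
    while True:  # same simplified worker loop as above
        close_old_connections()  # closes connections that are unusable or past their max age
        try:
            do_a_bit_of_work()   # placeholder from the snippet above
        except Exception:
            logger.exception("Something bad happened, trying again")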
First check your database connection: if the connection has been lost, close it; when you attempt the connection again, you will reconnect.
See:
https://blog.csdn.net/lzz957748332/article/details/41480417
I am using SAM CLI 0.6.0 and I am getting the error below when running sam local start-api with the app generated using sam init --runtime java
PS C:\Users\Kiran\AWS\SAM\java-sample\sam-app> sam local start-api
2018-09-03 10:49:49 Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
2018-09-03 10:49:49 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2018-09-03 10:49:49 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2018-09-03 10:49:58 Invoking helloworld.App::handleRequest (java8)
2018-09-03 10:49:58 Found credentials in shared credentials file: ~/.aws/credentials
2018-09-03 10:49:58 Decompressing C:\Users\Kiran\AWS\SAM\java-sample\sam-app\target\HelloWorld-1.0.jar
Fetching lambci/lambda:java8 Docker container image......
2018-09-03 10:49:59 Mounting C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj as /var/task:ro inside runtime container
2018-09-03 10:49:59 Exception on /hello [GET]
Traceback (most recent call last):
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 229, in _raise_for_status
response.raise_for_status()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\requests\models.py", line 939, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://192.168.1.145:2376/v1.35/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\apigw\local_apigw_service.py", line 140, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream, stderr=self.stderr)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\commands\local\lib\local_lambda.py", line 80, in invoke
self.local_runtime.invoke(config, event, debug_context=self.debug_context, stdout=stdout, stderr=stderr)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\lambdafn\runtime.py", line 79, in invoke
self._container_manager.run(container)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\manager.py", line 61, in run
container.create()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\container.py", line 120, in create
real_container = self.docker_client.containers.create(self._image, **kwargs)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\models\containers.py", line 824, in create
resp = self.client.api.create_container(**create_kwargs)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 411, in create_container
return self.create_container_from_config(config, name)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 422, in create_container_from_config
return self._result(res, True)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 235, in _result
self._raise_for_status(response)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 231, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("invalid volume specification: 'C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj:/var/task:ro'")
2018-09-03 10:49:59 127.0.0.1 - - [03/Sep/2018 10:49:59] "GET /hello HTTP/1.1" 502 -
The path mentioned in the last line of the error message (C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj) doesn't seem to exist on my machine. I do have the jar specified as the CodeUri in the template file.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 20

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: target/HelloWorld-1.0.jar
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
Appreciate any inputs to fix this error.
Thanks!
I had the same errors with Docker Toolbox.
Check out this issue.
This is a bug in SAM CLI, and it is fixed in the latest release.
Run 'sam --version' and you should see the following:
sam --version
SAM CLI, version 0.6.1.
This issue affected people running SAM on Windows 10 with Docker Toolbox.
Vyas
I have an issue with the docker_container module for Ansible (v2.3). When I try to pass the env_file property in the playbook, I get the error: no such file or directory.
---
- hosts: preprod-api
  become: yes
  gather_facts: true
  tasks:
    - name: test configuration
      docker_container:
        name: "backend"
        image: "backend"
        state: started
        exposed_ports:
          - 80
        volumes:
          - /opt/application/i99/current/logs
        user: ansible
        env_file:
          - "/opt/application/i99/current/backend/backend-PreProd-config.list"
I have tried with a file that exists on the Ansible server and with one on the target server, with the same result.
Here is the error:
`fatal: [my_hostname]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to my_hostname closed.\r\n",
"module_stdout": "Traceback (most recent call last):
File \"/tmp/ansible_rySqS2/ansible_module_docker_container.py\",
line 2036, in <module> main() File \"/tmp/ansible_rySqS2/ansible_module_docker_container.py\",
line 2029, in main\r\n cm = ContainerManager(client) File \"/tmp/ansible_rySqS2/ansible_module_docker_container.py\",
line 1668, in __init__\r\n self.parameters = TaskParameters(client)\r\n File \"/tmp/ansible_rySqS2/ansible_module_docker_container.py\",
line 784, in __init__\r\n self.env = self._get_environment()\r\n File \"/tmp/ansible_rySqS2/ansible_module_docker_container.py\",
line 1134, in _get_environment\r\n parsed_env_file = utils.parse_env_file(self.env_file)\r\n File \"/usr/lib/python2.7/site-packages/docker/utils/utils.py\",
line 961, in parse_env_file with open(env_file, 'r') as f:\r\nIOError: [Errno 2] No such file or directory: \"['/path/to/my/file/that/exist/backend-PreProd-config.env']\"\r\n", "msg": "MODULE FAILURE", "rc": 0}`
So my question is: how can I pass the env file?
So I found the problem.
First, the syntax is:
env_file: /local/dir/some/file.env
The file must be located on the target server and must contain no blank lines or leading spaces.
The env_file must be local to your host, and not a file inside the container.
env_file:
  - "/local/dir/some/file.env"
To add some useful information to the accepted answer: here is how you can write the environment variable file.
myfile.env
USER=ElonMusk
PASSWORD=EV
DATABASE=Tesla