I have an issue with the docker_container module for Ansible (v2.3). When I try to pass the env_file property in the playbook, I get the error: no such file or directory.
---
- hosts: preprod-api
  become: yes
  gather_facts: true
  tasks:
    - name: test configuration
      docker_container:
        name: "backend"
        image: "backend"
        state: started
        exposed_ports:
          - 80
        volumes:
          - /opt/application/i99/current/logs
        user: ansible
        env_file:
          - "/opt/application/i99/current/backend/backend-PreProd-config.list"
I have tried with a file that exists on the Ansible server and one on the target server, with the same result.
Here is the error:
fatal: [my_hostname]: FAILED! => {"changed": false, "failed": true, "module_stderr": "Shared connection to my_hostname closed.", "msg": "MODULE FAILURE", "rc": 0}

The module_stdout contains this traceback:

Traceback (most recent call last):
  File "/tmp/ansible_rySqS2/ansible_module_docker_container.py", line 2036, in <module>
    main()
  File "/tmp/ansible_rySqS2/ansible_module_docker_container.py", line 2029, in main
    cm = ContainerManager(client)
  File "/tmp/ansible_rySqS2/ansible_module_docker_container.py", line 1668, in __init__
    self.parameters = TaskParameters(client)
  File "/tmp/ansible_rySqS2/ansible_module_docker_container.py", line 784, in __init__
    self.env = self._get_environment()
  File "/tmp/ansible_rySqS2/ansible_module_docker_container.py", line 1134, in _get_environment
    parsed_env_file = utils.parse_env_file(self.env_file)
  File "/usr/lib/python2.7/site-packages/docker/utils/utils.py", line 961, in parse_env_file
    with open(env_file, 'r') as f:
IOError: [Errno 2] No such file or directory: "['/path/to/my/file/that/exist/backend-PreProd-config.env']"
So my question is: how can I pass the env file?
So I found the problem.
First, the syntax is:
env_file: /local/dir/some/file.env
The value must be a single path, not a list. The file must be located on the target server and must contain no blank lines and no spaces in the first character of a line.
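This also explains the traceback above: when env_file is given as a list, it apparently gets coerced to the string representation of that list, so docker-py ends up trying to open a file literally named ['/path/...']. A quick illustration in plain Python:

env_file = ["/opt/application/i99/current/backend/backend-PreProd-config.list"]
path = str(env_file)  # the repr of the list, not a real path
open(path, 'r')       # raises: No such file or directory: "['/opt/application/...']"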
The env_file must be local to your host (the machine running the Docker engine), not a file inside the container:
env_file: "/local/dir/some/file.env"
To add some useful information to the accepted answer, here is how you can write the environment variable file.
myfile.env
USER=ElonMusk
PASSWORD=EV
DATABASE=Tesla
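Since blank lines and leading spaces are what trip up the parser, here is a minimal sketch (a hypothetical helper, not part of Ansible or docker-py) that checks an env file for the pitfalls mentioned above:

import sys

def check_env_file(path):
    """Report lines that commonly break docker env-file parsing."""
    problems = []
    with open(path) as f:
        for lineno, line in enumerate(f, start=1):
            text = line.rstrip('\n')
            if not text.strip():
                problems.append((lineno, 'blank line'))
            elif text[0].isspace():
                problems.append((lineno, 'starts with whitespace'))
            elif not text.startswith('#') and '=' not in text:
                problems.append((lineno, 'not in KEY=VALUE form'))
    return problems

if __name__ == '__main__':
    for lineno, issue in check_env_file(sys.argv[1]):
        print('line %d: %s' % (lineno, issue))

Running it against myfile.env above should print nothing.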
I have a Docker container running PySpark, Hadoop, and all the required dependencies. I am using spark-submit to query MinIO, and I want to write the output dataframe to a file. Reading the file works, but writing does not. If I execute Python in that container and try to create a file at the same path, it works.
Am I missing some Spark configuration?
This is the error I get:
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 1109, in save
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/java_gateway.py", line 1304, in __call__
File "/usr/local/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 111, in deco
File "/usr/local/spark/python/lib/py4j-0.10.9-src.zip/py4j/protocol.py", line 326, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o38.save
: java.net.ConnectException: Call From 10d3463d04ce/10.0.1.132 to localhost:9000 failed on connection exception:
java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
Relevant code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark_context = spark.sparkContext
spark_context._jsc.hadoopConfiguration().set('fs.s3a.access.key', 'minio')
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.secret.key', AWS_SECRET_ACCESS_KEY
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.path.style.access', 'true')
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem'
)
spark_context._jsc.hadoopConfiguration().set('fs.s3a.endpoint', AWS_S3_ENDPOINT)
spark_context._jsc.hadoopConfiguration().set(
    'fs.s3a.connection.ssl.enabled', 'false'
)

df = spark.sql(query)
df.show()  # this works perfectly fine
df.coalesce(1).write.format('json').save(output_path)  # here I get the error
The solution was to prepend file:// to output_path.
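That fits the ConnectException above: without a scheme, a bare path is resolved against the cluster's default filesystem (here an HDFS namenode expected at localhost:9000, which is not running in the container), while the file:// scheme forces the local disk. A one-line sketch of the fix, assuming output_path is a plain local path:

df.coalesce(1).write.format('json').save('file://' + output_path)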
I am running the Datadog container as one of the services in Docker Compose, using Agent 7 for my purposes.
version: "3.9"
services:
app:
image: app
container_name: app
hostname: app
build:
context: .
dockerfile: Dockerfile
restart: unless-stopped
ports:
- 8080:80
volumes:
- shared_volume:/tmp/logs
datadog:
container_name: dd-agent
image: gcr.io/datadoghq/agent:7
restart: always
ports:
- 8125:8125/udp
- 8126:8126
environment:
- DD_API_KEY=${DATADOG_API_KEY}
- DD_SITE=${DD_SITE}
- DD_DOGSTATSD_NON_LOCAL_TRAFFIC=${DD_DOGSTATSD_NON_LOCAL_TRAFFIC}
- DD_LOGS_ENABLED="true"
- DD_APM_ENABLED="true"
- DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL="true"
- DD_CONTAINER_EXCLUDE_LOGS="name:dd-agent"
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- /proc/:/host/proc/:ro
# - /opt/dd-agent/run:/opt/dd-agent/run:rw
- /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
volumes:
shared_volume:
However, running the Datadog container runs into an error. The error log says that it is trying to connect to a Redis server. I am not sure where this is coming from, as I don't recall Redis being one of the dependencies for Datadog.
The same log is pasted below for convenience:
dd-agent | 2022-10-11 10:13:53 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:php_fpm | Error running check: [{"message": "Detected 1 error while loading configuration model `InstanceConfig`:\n__root__\n Field `status_url` or `ping_url` must be set", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1091, in run\n initialization()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 492, in load_configuration_models\n instance_config = self.load_configuration_model(package_path, 'InstanceConfig', raw_instance_config)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 536, in load_configuration_model\n raise_from(ConfigurationError('\\n'.join(message_lines)), None)\n File \"<string>\", line 3, in raise_from\ndatadog_checks.base.errors.ConfigurationError: Detected 1 error while loading configuration model `InstanceConfig`:\n__root__\n Field `status_url` or `ping_url` must be set\n"}]
dd-agent | 2022-10-11 10:13:57 UTC | CORE | ERROR | (pkg/collector/worker/check_logger.go:69 in Error) | check:redisdb | Error running check: [{"message": "Timeout connecting to server", "traceback": "Traceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 611, in connect\n sock = self.retry.call_with_retry(\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/retry.py\", line 51, in call_with_retry\n raise error\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/retry.py\", line 46, in call_with_retry\n return do()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 612, in <lambda>\n lambda: self._connect(), lambda error: self.disconnect(error)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 677, in _connect\n raise err\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 665, in _connect\n sock.connect(socket_address)\nsocket.timeout: timed out\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/base/checks/base.py\", line 1116, in run\n self.check(instance)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/redisdb/redisdb.py\", line 556, in check\n self._check_db()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/datadog_checks/redisdb/redisdb.py\", line 205, in _check_db\n info = conn.info()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/commands/core.py\", line 970, in info\n return self.execute_command(\"INFO\", **kwargs)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/client.py\", line 1235, in execute_command\n conn = self.connection or pool.get_connection(command_name, **options)\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 1387, in get_connection\n connection.connect()\n File \"/opt/datadog-agent/embedded/lib/python3.8/site-packages/redis/connection.py\", line 615, in connect\n raise TimeoutError(\"Timeout connecting to server\")\nredis.exceptions.TimeoutError: Timeout connecting to server\n"}]
This is a known issue; however, it does not interfere with the agent collecting and sending logs and traces to Datadog.
Here is a comment on that same issue:
do not pay attention to this, it does not have any impact on collecting and sending data
I ran into the same issue and can confirm that the agent still works. I wish they would fix this, or change the error to a warning with a message like "You may disregard this warning".
I am trying to run a simple Python script within a docker run command scheduled with Airflow.
I have followed the instructions here: Airflow init.
My .env file:
AIRFLOW_UID=1000
AIRFLOW_GID=0
The docker-compose.yaml is based on the default one: docker-compose.yaml. I had to add - /var/run/docker.sock:/var/run/docker.sock as an additional volume to run Docker inside of Docker.
My DAG is configured as follows:
""" this is an example dag """
from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago
from docker.types import Mount
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['info#foo.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 10,
'retry_delay': timedelta(minutes=5),
}
with DAG(
'msg_europe_etl',
default_args=default_args,
description='Process MSG_EUROPE ETL',
schedule_interval=timedelta(minutes=15),
start_date=days_ago(0),
tags=['satellite_data'],
) as dag:
download_and_store = DockerOperator(
task_id='download_and_store',
image='satellite_image:latest',
auto_remove=True,
api_version='1.41',
mounts=[Mount(source='/home/archive_1/archive/satellite_data',
target='/app/data'),
Mount(source='/home/dlassahn/projects/forecast-system/meteoIntelligence-satellite',
target='/app')],
command="python3 src/scripts.py download_satellite_images "
"{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
"'msg_europe' ",
)
download_and_store
The Airflow log:
[2021-08-03 17:23:58,691] {docker.py:231} INFO - Starting docker container from image satellite_image:latest
[2021-08-03 17:23:58,702] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 258, in _run_image
tty=self.tty,
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp037k87u6")
Trying to set mount_tmp_dir=False leads to a DAG ImportError because of the unknown keyword argument mount_tmp_dir (this might be an issue for the documentation). Nevertheless, I do not know how to configure the tmp directory correctly.
My Airflow version: 2.1.2
There was a bug in Docker Provider 2.0.0 which prevented the Docker Operator from running with a Docker-in-Docker solution. You need to upgrade to the latest Docker Provider, 2.1.0:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html#id1
You can do it by extending the image as described in https://airflow.apache.org/docs/docker-stack/build.html#extending-the-image with, for example, this Dockerfile:
FROM apache/airflow
RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
The operator will work out of the box in this case with "fallback" mode (and a warning message), but you can also disable the mount that causes the problem. More explanation from https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html:
By default, a temporary directory is created on the host and mounted into a container to allow storing files that together exceed the default disk size of 10GB in a container. In this case the path to the mounted directory can be accessed via the environment variable AIRFLOW_TMP_DIR.

If the volume cannot be mounted, a warning is printed and an attempt is made to execute the docker command without the temporary folder mounted. This is to make it work by default with a remote docker engine or when you run docker-in-docker and the temporary directory is not shared with the docker engine. A warning is printed in logs in this case.

If you know you run DockerOperator with a remote engine or via docker-in-docker you should set the mount_tmp_dir parameter to False. In this case, you can still use the mounts parameter to mount already existing named volumes in your Docker Engine to achieve a similar capability, where you can store files exceeding the default disk size of the container.
I had the same issue, and all the "recommended" ways of solving it here, including setting up the mount dir parameters as described, just led to other errors. The one solution that helped me was wrapping the code invoked by Docker with a VPN connection (this hack was actually taken from another Docker-powered DAG that used a VPN and worked well).
So the final solution looks like:
#!/bin/bash
connect_to_vpn.sh &
sleep 10
python3 my_func.py
sleep 10
stop_vpn.sh
wait -n
exit $?
To connect to the VPN I used openconnect. The tool can be installed with apt install and supports the AnyConnect protocol (which was my crucial requirement).
I have a Flask web app that used to run on a standalone server using the following:
Flask/SQLAlchemy
MariaDB
uwsgi
nginx
On the standalone server this application ran fine.
I have since "dockerized" this application across two containers:
uwsgi-nginx-flask
MariaDB
Ever since dockerizing it, I occasionally get this error (entire traceback posted at the end):
Lost connection to MySQL server during query
The MariaDB log shows the following errors with verbose logging:
2020-05-10 18:35:32 130 [Warning] Aborted connection 130 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got an error reading communication packets)
2020-05-10 18:45:34 128 [Warning] Aborted connection 128 to db: 'flspection2' user: 'fl-server' host: '172.19.0.1' (Got timeout reading communication packets)
This is experienced by the user as a 502 Bad Gateway. If the user refreshes the page, this will often solve the problem. This issue arises at random. I have not been able to reproduce it at will, but over time it will inevitably show up.
What is causing this and how can I solve it?
What I've done:
Verified that the MariaDB container has a timeout of 28800 seconds. I've seen the error occur much sooner than 28800 seconds after restarting all the containers, so I don't think it is actually a timeout issue.
Set pool_recycle option to 120
Verified that Flask-SQLAlchemy is using a scoped_session which should avoid these timing out issues.
Changed network_mode to default per a comment. This did not solve the problem.
My thoughts are that it acts like the connection between Flask and the database is unreliable, but as Docker containers running on the same host, shouldn't that be quite reliable?
Relevant code:
database.py
from flask_sqlalchemy import SQLAlchemy
db = SQLAlchemy()
config.py
class Config:
    ...
    SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://username:password@127.0.0.1/database?charset=utf8mb4'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_recycle': 120
    }
    ...
docker-compose.yml
version: "3.7"
services:
db:
restart: "always"
build: ./docker/db
volumes:
- "~/db:/var/lib/mysql"
environment:
MYSQL_ROOT_PASSWORD: "password"
MYSQL_DATABASE: "database"
MYSQL_USER: "user"
MYSQL_PASSWORD: "password"
ports:
- '3306:3306'
nginx-uwsgi-flask:
restart: "always"
depends_on:
- "db"
build:
context: .
dockerfile: ./docker/nginx-uwsgi-flask/Dockerfile
volumes:
- "~/data/fileshare:/fileshare"
ports:
- "80:80"
- "443:443"
network_mode: "host"
traceback
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/usr/local/lib/python3.6/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/usr/local/lib/python3.6/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 132, in decorator
allowed = _is_logged_in_with_confirmed_email(user_manager)
File "/usr/local/lib/python3.6/site-packages/flask_user/decorators.py", line 17, in _is_logged_in_with_confirmed_email
if user_manager.call_or_get(current_user.is_authenticated):
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 347, in __getattr__
return getattr(self._get_current_object(), name)
File "/usr/local/lib/python3.6/site-packages/werkzeug/local.py", line 306, in _get_current_object
return self.__local()
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 26, in <lambda>
current_user = LocalProxy(lambda: _get_user())
File "/usr/local/lib/python3.6/site-packages/flask_login/utils.py", line 335, in _get_user
current_app.login_manager._load_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 359, in _load_user
return self.reload_user()
File "/usr/local/lib/python3.6/site-packages/flask_login/login_manager.py", line 321, in reload_user
user = self.user_callback(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_manager.py", line 130, in load_user_by_user_token
user = self.db_manager.UserClass.get_user_by_token(user_token)
File "/usr/local/lib/python3.6/site-packages/flask_user/user_mixin.py", line 51, in get_user_by_token
user = user_manager.db_manager.get_user_by_id(user_id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_manager.py", line 179, in get_user_by_id
return self.db_adapter.get_object(self.UserClass, id=id)
File "/usr/local/lib/python3.6/site-packages/flask_user/db_adapters/sql_db_adapter.py", line 48, in get_object
return ObjectClass.query.get(id)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 924, in get
ident, loading.load_on_pk_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 1007, in _get_impl
return db_load_fn(self, primary_key_identity)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/loading.py", line 250, in load_on_pk_identity
return q.one()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2954, in one
ret = self.one_or_none()
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2924, in one_or_none
ret = list(self)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 2995, in __iter__
return self._execute_and_instances(context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/orm/query.py", line 3018, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 265, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 248, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 509, in do_execute
cursor.execute(statement, parameters)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 170, in execute
result = self._query(query)
File "/usr/local/lib/python3.6/site-packages/pymysql/cursors.py", line 328, in _query
conn.query(q)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 517, in query
self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 732, in _read_query_result
result.read()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 1075, in read
first_packet = self.connection._read_packet()
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 657, in _read_packet
packet_header = self._read_bytes(4)
File "/usr/local/lib/python3.6/site-packages/pymysql/connections.py", line 707, in _read_bytes
CR.CR_SERVER_LOST, "Lost connection to MySQL server during query")
sqlalchemy.exc.OperationalError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query') [SQL: 'SELECT user.is_active AS user_is_active, user.id AS user_id, user.username AS user_username, user.password AS user_password, user.reset_password_token AS user_reset_password_token, user.email AS user_email, user.email_confirmed_at AS user_email_confirmed_at, user.first_name AS user_first_name, user.last_name AS user_last_name \nFROM user \nWHERE user.id = %(param_1)s'] [parameters: {'param_1': 13}] (Background on this error at: http://sqlalche.me/e/e3q8)
I solved this issue by migrating from a MariaDB container to MySQL. I still don't know what the root cause is.
Try the following to rule out that you have stale connections in your pool. From https://docs.sqlalchemy.org/en/13/core/pooling.html#pool-disconnects:
Pessimistic testing of connections upon checkout is achievable by using the Pool.pre_ping argument, available from create_engine() via the create_engine.pool_pre_ping argument:
engine = create_engine("mysql+pymysql://user:pw@host/db", pool_pre_ping=True)
The approach adds a small bit of overhead to the connection checkout process; however, it is otherwise the most simple and reliable approach to completely eliminating database errors due to stale pooled connections. The calling application does not need to be concerned about organizing operations to be able to recover from stale connections checked out from the pool.
I don't know which versions you are using, but did you try to set SQLALCHEMY_POOL_SIZE and SQLALCHEMY_POOL_RECYCLE?
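For instance, building on the config.py from the question, a sketch with those knobs set via the engine options (the values are illustrative, and pool_pre_ping from the previous answer combines well with them):

class Config:
    SQLALCHEMY_DATABASE_URI = 'mysql+pymysql://username:password@127.0.0.1/database?charset=utf8mb4'
    SQLALCHEMY_TRACK_MODIFICATIONS = False
    SQLALCHEMY_ENGINE_OPTIONS = {
        'pool_size': 10,        # illustrative; size to your worker count
        'pool_recycle': 120,    # recycle connections before the server times them out
        'pool_pre_ping': True,  # test connections on checkout (see answer above)
    }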
I faced a similar problem because I have a multi-threaded Django app. When a thread tried to access the database, the connection was lost.
There is a Django bug: https://code.djangoproject.com/ticket/21597
You can solve it with this workaround:
from django.db import connection

def is_connection_usable():
    try:
        connection.connection.ping()
    except:
        return False
    else:
        return True

def do_work():
    while True:  # Endless loop that keeps the worker going (simplified)
        if not is_connection_usable():
            connection.close()
        try:
            do_a_bit_of_work()
        except:
            logger.exception("Something bad happened, trying again")
First check your database connection: if the connection has been lost, the workaround closes it, so that when you attempt the connection again you will reconnect. See:
https://blog.csdn.net/lzz957748332/article/details/41480417
I am using SAM CLI 0.6.0 and I am getting the error below when running sam local start-api with the app generated using sam init --runtime java
PS C:\Users\Kiran\AWS\SAM\java-sample\sam-app> sam local start-api
2018-09-03 10:49:49 Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
2018-09-03 10:49:49 You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2018-09-03 10:49:49 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
2018-09-03 10:49:58 Invoking helloworld.App::handleRequest (java8)
2018-09-03 10:49:58 Found credentials in shared credentials file: ~/.aws/credentials
2018-09-03 10:49:58 Decompressing C:\Users\Kiran\AWS\SAM\java-sample\sam-app\target\HelloWorld-1.0.jar
Fetching lambci/lambda:java8 Docker container image......
2018-09-03 10:49:59 Mounting C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj as /var/task:ro inside runtime container
2018-09-03 10:49:59 Exception on /hello [GET]
Traceback (most recent call last):
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 229, in _raise_for_status
response.raise_for_status()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\requests\models.py", line 939, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://192.168.1.145:2376/v1.35/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\_compat.py", line 35, in reraise
raise value
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\flask\app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\apigw\local_apigw_service.py", line 140, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream, stderr=self.stderr)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\commands\local\lib\local_lambda.py", line 80, in invoke
self.local_runtime.invoke(config, event, debug_context=self.debug_context, stdout=stdout, stderr=stderr)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\lambdafn\runtime.py", line 79, in invoke
self._container_manager.run(container)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\manager.py", line 61, in run
container.create()
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\samcli\local\docker\container.py", line 120, in create
real_container = self.docker_client.containers.create(self._image, **kwargs)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\models\containers.py", line 824, in create
resp = self.client.api.create_container(**create_kwargs)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 411, in create_container
return self.create_container_from_config(config, name)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\container.py", line 422, in create_container_from_config
return self._result(res, True)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 235, in _result
self._raise_for_status(response)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\api\client.py", line 231, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "C:\Users\Kiran\AppData\Roaming\Python\Python37\site-packages\docker\errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 500 Server Error: Internal Server Error ("invalid volume specification: 'C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj:/var/task:ro'")
2018-09-03 10:49:59 127.0.0.1 - - [03/Sep/2018 10:49:59] "GET /hello HTTP/1.1" 502 -
The path mentioned in the last line of the error message (C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj) doesn't seem to exist on my machine. I do have the jar specified in the CodeUri of the template file.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 20

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      CodeUri: target/HelloWorld-1.0.jar
      Handler: helloworld.App::handleRequest
      Runtime: java8
      Environment: # More info about Env Vars: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#environment-object
        Variables:
          PARAM1: VALUE
      Events:
        HelloWorld:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /hello
            Method: get

Outputs:
  HelloWorldApi:
    Description: "API Gateway endpoint URL for Prod stage for Hello World function"
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/hello/"
  HelloWorldFunction:
    Description: "Hello World Lambda Function ARN"
    Value: !GetAtt HelloWorldFunction.Arn
  HelloWorldFunctionIamRole:
    Description: "Implicit IAM Role created for Hello World function"
    Value: !GetAtt HelloWorldFunctionRole.Arn
Appreciate any inputs to fix this error.
Thanks!
I got the same errors with Docker Toolbox. Check out this issue.
This is a bug in SAM CLI that was fixed with the latest release. Run sam --version and you should see the following:
sam --version
SAM CLI, version 0.6.1
This issue affected people running SAM on Windows 10 with Docker Toolbox.
Vyas
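For background on why the volume spec was invalid: Docker Toolbox expects POSIX-style bind paths, and a raw Windows path carries a drive colon that breaks the host:container:mode split. A hypothetical sketch of the kind of path translation the fixed CLI performs (not the actual SAM CLI code):

import re

def to_posix_path(windows_path):
    """Convert 'C:\\Users\\foo' into '/c/Users/foo' for Docker Toolbox."""
    match = re.match(r'^([A-Za-z]):[\\/](.*)$', windows_path)
    if not match:
        return windows_path  # already POSIX-like, leave untouched
    drive, rest = match.groups()
    return '/' + drive.lower() + '/' + rest.replace('\\', '/')

print(to_posix_path(r'C:\Users\Kiran\AppData\Local\Temp\tmp7f8z0_zj'))
# -> /c/Users/Kiran/AppData/Local/Temp/tmp7f8z0_zj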