Python ImportError loading oauth2_provider on Elastic Beanstalk - oauth-2.0

I am trying to deploy oauth2 and oauth2-provider on my Python/Django web server, which is deployed on Ubuntu using Amazon Web Services (EC2 and Elastic Beanstalk).
When I run the web server locally, all works fine.
When I tar all of the files and transfer them to the EC2 instance, the application also loads successfully there.
Following are the installed apps from my settings.py file:
INSTALLED_APPS = (
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'oauth2_provider',
    'corsheaders',
    'consentrecords',
    'custom_user',
    'monitor',
)
Here are the commands I used to install the various tools needed for oauth2 (after ensuring that the virtualenv was properly set up):
sudo apt-get install python3-oauthlib
pip3 install django-cors-headers
pip3 install django-oauth-toolkit
pip3 install django-oauth2-provider
When I run python3 manage.py runserver on the EC2 instance, it launches a local web server properly, which tells me that the settings.py file is being read correctly.
However, when I deploy the instance to Elastic Beanstalk (using the command eb deploy), the web server returns an internal error. The following comes from the logs:
mod_wsgi (pid=6880): Exception occurred processing WSGI script '/opt/python/current/app/prod/consentrecords/consentrecords/wsgi.py'.
Traceback (most recent call last):
File "/opt/python/current/app/prod/consentrecords/consentrecords/wsgi.py", line 21, in <module>
application = get_wsgi_application()
File "/opt/python/run/venv/lib/python3.4/site-packages/django/core/wsgi.py", line 14, in get_wsgi_application
django.setup()
File "/opt/python/run/venv/lib/python3.4/site-packages/django/__init__.py", line 18, in setup
apps.populate(settings.INSTALLED_APPS)
File "/opt/python/run/venv/lib/python3.4/site-packages/django/apps/registry.py", line 85, in populate
app_config = AppConfig.create(entry)
File "/opt/python/run/venv/lib/python3.4/site-packages/django/apps/config.py", line 86, in create
module = import_module(entry)
File "/opt/python/run/baselinenv/lib64/python3.4/importlib/__init__.py", line 109, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 2254, in _gcd_import
File "<frozen importlib._bootstrap>", line 2237, in _find_and_load
File "<frozen importlib._bootstrap>", line 2224, in _find_and_load_unlocked
ImportError: No module named 'oauth2_provider'
I also tried running python3 on the EC2 instance and issuing the command "import oauth2_provider". The command produced no error and appeared to load the module.
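For reference, a quick way to see exactly which interpreter and which install that import resolves to (a minimal check using only the standard library, useful for comparing the EC2 shell with the Elastic Beanstalk process):
import sys
import oauth2_provider

# Show which interpreter is running and where the package was loaded from.
print(sys.executable)
print(oauth2_provider.__file__)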
Why would this work on my local system and seem to load properly on the EC2 instance, but not the Elastic Beanstalk instance?

I discovered that this was the first time I needed to incorporate site-packages.
I solved the problem by editing the PYTHONPATH in the .ebextensions/01-django_eb.config file to include the site-packages directory that contained oauth2-provider.
Following is the .config file:
option_settings:
  "aws:elasticbeanstalk:application:environment":
    DJANGO_SETTINGS_MODULE: "consentrecords.settings"
    PYTHONPATH: "/opt/python/current/app/prod/consentrecords:/opt/python/current/app/prod/consentrecords/consentrecords:/opt/python/current/app/prod/lib/python3.4/site-packages:$PYTHONPATH"
  "aws:elasticbeanstalk:container:python":
    WSGIPath: "prod/consentrecords/consentrecords/wsgi.py"
  "aws:elasticbeanstalk:container:python:staticfiles":
    "/static/": "prod/consentrecords/static/"

Related

Airflow DockerOperator unable to mount tmp directory correctly

I am trying to run a simple python script within a docker run command scheduled with Airflow.
I have followed the instructions here: Airflow init.
My .env file:
AIRFLOW_UID=1000
AIRFLOW_GID=0
And the docker-compose.yaml is based on the default one docker-compose.yaml. I had to add - /var/run/docker.sock:/var/run/docker.sock as an additional volume to run docker inside of docker.
My dag is configured as followed:
""" this is an example dag """
from datetime import timedelta
from airflow import DAG
from airflow.operators.docker_operator import DockerOperator
from airflow.utils.dates import days_ago
from docker.types import Mount
default_args = {
'owner': 'airflow',
'depends_on_past': False,
'email': ['info#foo.com'],
'email_on_failure': True,
'email_on_retry': False,
'retries': 10,
'retry_delay': timedelta(minutes=5),
}
with DAG(
'msg_europe_etl',
default_args=default_args,
description='Process MSG_EUROPE ETL',
schedule_interval=timedelta(minutes=15),
start_date=days_ago(0),
tags=['satellite_data'],
) as dag:
download_and_store = DockerOperator(
task_id='download_and_store',
image='satellite_image:latest',
auto_remove=True,
api_version='1.41',
mounts=[Mount(source='/home/archive_1/archive/satellite_data',
target='/app/data'),
Mount(source='/home/dlassahn/projects/forecast-system/meteoIntelligence-satellite',
target='/app')],
command="python3 src/scripts.py download_satellite_images "
"{{ (execution_date - macros.timedelta(hours=4)).strftime('%Y-%m-%d %H:%M') }} "
"'msg_europe' ",
)
download_and_store
The Airflow log:
[2021-08-03 17:23:58,691] {docker.py:231} INFO - Starting docker container from image satellite_image:latest
[2021-08-03 17:23:58,702] {taskinstance.py:1501} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 268, in _raise_for_status
response.raise_for_status()
File "/home/airflow/.local/lib/python3.6/site-packages/requests/models.py", line 943, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: http+docker://localhost/v1.41/containers/create
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1157, in _run_raw_task
self._prepare_and_execute_task_with_callbacks(context, task)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1331, in _prepare_and_execute_task_with_callbacks
result = self._execute_task(context, task_copy)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/models/taskinstance.py", line 1361, in _execute_task
result = task_copy.execute(context=context)
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 319, in execute
return self._run_image()
File "/home/airflow/.local/lib/python3.6/site-packages/airflow/providers/docker/operators/docker.py", line 258, in _run_image
tty=self.tty,
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 430, in create_container
return self.create_container_from_config(config, name)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/container.py", line 441, in create_container_from_config
return self._result(res, True)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 274, in _result
self._raise_for_status(response)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/api/client.py", line 270, in _raise_for_status
raise create_api_error_from_http_exception(e)
File "/home/airflow/.local/lib/python3.6/site-packages/docker/errors.py", line 31, in create_api_error_from_http_exception
raise cls(e, response=response, explanation=explanation)
docker.errors.APIError: 400 Client Error for http+docker://localhost/v1.41/containers/create: Bad Request ("invalid mount config for type "bind": bind source path does not exist: /tmp/airflowtmp037k87u6")
Trying to set mount_tmp_dir=False leads to a DAG ImportError because of the unknown keyword argument mount_tmp_dir (this might be an issue with the documentation).
Nevertheless, I do not know how to configure the tmp directory correctly.
My Airflow Version: 2.1.2
There was a bug in Docker Provider 2.0.0 which prevented the DockerOperator from running with a Docker-in-Docker solution.
You need to upgrade to the latest Docker Provider, 2.1.0:
https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/index.html#id1
You can do it by extending the image as described in https://airflow.apache.org/docs/docker-stack/build.html#extending-the-image with, for example, this Dockerfile:
FROM apache/airflow
RUN pip install --no-cache-dir apache-airflow-providers-docker==2.1.0
The operator will work out of the box in this case with "fallback" mode (and a warning message), but you can also disable the mount that causes the problem. More explanation is available at https://airflow.apache.org/docs/apache-airflow-providers-docker/stable/_api/airflow/providers/docker/operators/docker/index.html:
By default, a temporary directory is created on the host and mounted into a container to allow storing files that together exceed the default disk size of 10GB in a container. In this case the path to the mounted directory can be accessed via the environment variable AIRFLOW_TMP_DIR.
If the volume cannot be mounted, a warning is printed and an attempt is made to execute the docker command without the temporary folder mounted. This is to make it work by default with a remote docker engine or when you run a docker-in-docker solution and the temporary directory is not shared with the docker engine. The warning is printed in the logs in this case.
If you know you run DockerOperator with a remote engine or via docker-in-docker, you should set the mount_tmp_dir parameter to False. In this case, you can still use the mounts parameter to mount already existing named volumes in your Docker Engine to achieve a similar capability, where you can store files exceeding the default disk size of the container.
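For illustration, a minimal sketch of what that could look like after upgrading the provider. It assumes mount_tmp_dir is accepted (Docker Provider 2.1.0 or later), and the named volume satellite_data is a hypothetical placeholder for a volume that already exists in the Docker engine the operator talks to:
# inside the existing `with DAG(...)` block:
from docker.types import Mount
from airflow.providers.docker.operators.docker import DockerOperator

download_and_store = DockerOperator(
    task_id='download_and_store',
    image='satellite_image:latest',
    auto_remove=True,
    mount_tmp_dir=False,  # skip the host tmp bind, since the engine is docker-in-docker
    mounts=[
        # 'satellite_data' is a hypothetical pre-existing named volume.
        Mount(source='satellite_data', target='/app/data', type='volume'),
    ],
    command='python3 src/scripts.py download_satellite_images',
)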
I had the same issue, and all the "recommended" ways of solving it here, including setting up the mount_tmp_dir parameter as described, just led to other errors. The one solution that helped me was wrapping the code invoked by Docker with a VPN (this hack was actually taken from another Docker-powered DAG that used a VPN and worked well).
So the final solution looks like:
#!/bin/bash
connect_to_vpn.sh &
sleep 10
python3 my_func.py
sleep 10
stop_vpn.sh
wait -n
exit $?
To connect to the VPN I used openconnect. The tool can be installed with apt install and supports the AnyConnect protocol (which was my crucial requirement).

Cannot use sqlite with the LocalExecutor [Airflow]

I am trying to restart the Airflow scheduler using the following command:
airflow scheduler
I am using Docker. I opened a shell inside my Airflow container and ran the command there.
It throws an exception
Traceback (most recent call last):
File "/usr/local/bin/airflow", line 25, in <module>
from airflow.configuration import conf
File "/usr/local/lib/python3.6/site-packages/airflow/__init__.py", line 31, in <module>
from airflow.utils.log.logging_mixin import LoggingMixin
File "/usr/local/lib/python3.6/site-packages/airflow/utils/__init__.py", line 24, in <module>
from .decorators import apply_defaults as _apply_defaults
File "/usr/local/lib/python3.6/site-packages/airflow/utils/decorators.py", line 36, in <module>
from airflow import settings
File "/usr/local/lib/python3.6/site-packages/airflow/settings.py", line 37, in <module>
from airflow.configuration import conf, AIRFLOW_HOME, WEBSERVER_CONFIG # NOQA F401
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 731, in <module>
conf.read(AIRFLOW_CONFIG)
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 421, in read
self._validate()
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 213, in _validate
self._validate_config_dependencies()
File "/usr/local/lib/python3.6/site-packages/airflow/configuration.py", line 247, in _validate_config_dependencies
self.get('core', 'executor')))
airflow.exceptions.AirflowConfigException: error: cannot use sqlite with the LocalExecutor
I am looking for any way to restart the airflow scheduler.
This is expected.
Since SQLite doesn't support multiple connections, it can only be used with the SequentialExecutor. This is also explained in the docs.
If you want to use the LocalExecutor, please set MySQL or PostgreSQL as the backend.
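For example, a minimal sketch of the relevant airflow.cfg settings (the connection string is a placeholder; any MySQL or PostgreSQL URL works):
[core]
executor = LocalExecutor
# placeholder credentials/host
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@postgres:5432/airflow
The same values can also be supplied through the AIRFLOW__CORE__EXECUTOR and AIRFLOW__CORE__SQL_ALCHEMY_CONN environment variables, which is often more convenient inside a container.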

Connecting to external networks from inside minikube VM behind proxy in docker container

I have an active Kubernetes cluster inside a Minikube VM (using VirtualBox as the driver), so for deploying new containers I am able to download the images, since that connection is already laid out using the istio service. Now, if I ssh into my Minikube VM, first of all I am not able to wget https content, but http content works after setting proxies and no_proxy. However, if I want to access any link from inside my containers (say, a simple pod with a Python image and the urllib library) and print the contents of any link (e.g. http://python.org) from inside that pod, I am not able to do so; all I get is a "no route to host" error in the logs, which points to some problem with the connection due to the proxies.
def basic():
    import urllib.request
    print("inside basic function")
    with urllib.request.urlopen('http://python.org/') as response:
        html = response.read()
        print(html)
This is the Python code I am running from inside my container as a pipeline component.
The most recent error I got:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/urllib/request.py", line 1317, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/usr/local/lib/python3.7/http/client.py", line 1229, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1275, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1224, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.7/http/client.py", line 1016, in _send_output
self.send(msg)
File "/usr/local/lib/python3.7/http/client.py", line 956, in send
self.connect()
File "/usr/local/lib/python3.7/http/client.py", line 928, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/usr/local/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/local/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
TimeoutError: [Errno 110] Operation timed out
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<string>", line 11, in <module>
File "<string>", line 3, in basic
File "/usr/local/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/usr/local/lib/python3.7/urllib/request.py", line 525, in open
response = self._open(req, data)
File "/usr/local/lib/python3.7/urllib/request.py", line 543, in _open
'_open', req)
File "/usr/local/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/usr/local/lib/python3.7/urllib/request.py", line 1345, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/usr/local/lib/python3.7/urllib/request.py", line 1319, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 110] Operation timed out>
I have started minikube as-
minikube start --cpus 6 --memory 12288 --disk-size=80g --extra-config=apiserver.service-account-issuer=api --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/apiserver.key --extra-config=apiserver.service-account-api-audiences=api --kubernetes-version v1.14.0
after setting the env variables as well.
Update:
I created a different container just to check curl from inside the component (I am using the kfp libraries for creating containers):
def curl_op(text):
    return dsl.ContainerOp(
        name='curl',
        image='tutum/curl',
        command=['sh', '-c'],
        arguments=['curl -x http://<proxy-server>:<proxy-port> "$0"', text]
    )
Using the above argument I am able to connect to external links, which again makes it clear that I need to create the containers with the proxies set.
So, to run the Python code above as a pipeline component, I added the environment variables using the os library, and this individual piece was then able to connect to external networks.
Updated python code-
def basic():
    import urllib.request
    import os

    proxy = 'http://proxy-path:port'
    os.environ['http_proxy'] = proxy
    os.environ['HTTP_PROXY'] = proxy
    os.environ['https_proxy'] = proxy
    os.environ['HTTPS_PROXY'] = proxy

    print("inside basic function")
    with urllib.request.urlopen('http://python.org/') as response:
        html = response.read()
        print(html)
And if the Docker image is created from scratch, without the help of the pipeline library function, then we just need to add the env details to our Dockerfile in the usual way after the base image call:
ENV HTTP_PROXY http://proxy-path:port
ENV HTTPS_PROXY http://proxy-path:port
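If the component is built with the kfp pipeline library instead, the same proxy settings could also be attached to the ContainerOp itself. A rough sketch, assuming kfp's v1 dsl and the kubernetes Python client are available (the proxy address is the same placeholder as above):
from kfp import dsl
from kubernetes.client import V1EnvVar

def curl_op(text):
    op = dsl.ContainerOp(
        name='curl',
        image='tutum/curl',
        command=['sh', '-c'],
        arguments=['curl "$0"', text],
    )
    # Inject the proxy settings into the container instead of hard-coding
    # them in the command or baking them into the image.
    op.container.add_env_variable(V1EnvVar(name='HTTP_PROXY', value='http://proxy-path:port'))
    op.container.add_env_variable(V1EnvVar(name='HTTPS_PROXY', value='http://proxy-path:port'))
    return op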

Spyder permission error

I created a virtual environment named test and installed Spyder in that environment,
then activated and tested it:
source activate test
conda info -e
# conda environments:
#
bin /home/myname/anaconda3/envs/bin
conda-meta /home/myname/anaconda3/envs/conda-meta
include /home/myname/anaconda3/envs/include
lib /home/myname/anaconda3/envs/lib
share /home/myname/anaconda3/envs/share
ssl /home/myname/anaconda3/envs/ssl
test * /home/myname/anaconda3/envs/test
root /home/myname/anaconda3
When I try to run Spyder I get a Permission denied error. I cannot figure out why:
myname - mycomp - ~/anaconda3/envs/test
0 # spyder
Traceback (most recent call last):
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/app/mainwindow.py", line 2998, in main
mainwindow = run_spyder(app, options, args)
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/app/mainwindow.py", line 2902, in run_spyder
main.setup()
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/app/mainwindow.py", line 1153, in setup
self.setup_layout(default=False)
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/app/mainwindow.py", line 1414, in setup_layout
self.setup_default_layouts('default', settings)
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/app/mainwindow.py", line 1593, in setup_default_layouts
widget.toggle_view(True)
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/plugins/ipythonconsole.py", line 677, in toggle_view
self.create_new_client(give_focus=False)
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/plugins/ipythonconsole.py", line 862, in create_new_client
connection_file=self._new_connection_file(),
File "/home/myname/anaconda3/envs/test/lib/python3.5/site-packages/spyder/plugins/ipythonconsole.py", line 1340, in _new_connection_file
os.makedirs(jupyter_runtime_dir())
File "/home/myname/anaconda3/envs/test/lib/python3.5/os.py", line 231, in makedirs
makedirs(head, mode, exist_ok)
File "/home/myname/anaconda3/envs/test/lib/python3.5/os.py", line 241, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/run/user/myname'
You need to make this directory
/run/user/myname
writable. Spyder needs to write some files in there, so if that directory is not writable it will fail with the error you're seeing.
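For example, one way to do that (a sketch, assuming you have sudo rights; JUPYTER_RUNTIME_DIR is an alternative if you would rather point Jupyter, which Spyder's consoles use, at a directory you already own):
# make the runtime directory exist and be writable for your user
sudo mkdir -p /run/user/myname
sudo chown myname /run/user/myname

# or redirect Jupyter's runtime files to a writable location instead
export JUPYTER_RUNTIME_DIR=$HOME/.jupyter/runtime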

mod_wsgi apache with python-eve

I tried to integrate my Eve app into Apache.
I think I did everything correctly, as shown in the Flask documentation.
When I try to consume my Eve collection, I get an error in the Apache log:
Traceback (most recent call last):
File "/var/customers/webs/myapp/myapp.wsgi", line 7, in <module>
from run import app as application
File "/var/customers/webs/myapp/run.py", line 9, in <module>
app = Eve(__name__)
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 139, in __init__
self.validate_domain_struct()
File "/usr/local/lib/python2.7/dist-packages/eve/flaskapp.py", line 252, in validate_domain_struct
raise ConfigException('DOMAIN dictionary missing or wrong.')
ConfigException: DOMAIN dictionary missing or wrong.
It seems that the app can't find my settings.py
My apache folder looks like:
/myapp
- myapp.wsgi
- run.py
- settings.py
If I start it directly using python run.py, everything works fine.
Check this answer. You can try to add the settings.py path using the settings named parameter in the Eve app initialization.
Thanks for the hint, @gcw.
The solution is pretty easy:
just give the constructor the full path to where settings.py is located:
app = Eve(settings='/var/customers/webs/myapp/settings.py')
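A slightly more portable variant of the same idea (just a sketch) is to build that path relative to the file itself, so the app does not depend on whatever working directory mod_wsgi happens to use:
import os
from eve import Eve

# Resolve settings.py next to this file instead of relying on the
# current working directory of the Apache/mod_wsgi process.
SETTINGS_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), 'settings.py')
app = Eve(settings=SETTINGS_PATH)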
