ImportError: no module named bplanner.app - Docker

I'm trying to run my Docker containers (Flask app) but I get an error that says:
ImportError: No module named bplanner.app
[The question's docker-compose file, error output, and Dockerfile are not preserved here; only the gunicorn app spec below survives.]

bplanner.app:create_app()
You're calling a Python method from within a gunicorn command. I don't think that's possible. Instead, create a separate Python file which imports the create_app function and makes the resulting app object available to gunicorn.
You haven't posted any Python code or a directory listing, but at a guess this should work:
# wsgi.py
from app import create_app

application = create_app()
Then point the gunicorn command at:
wsgi:application
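(A side note, not from the original answer: gunicorn 20.1 and later can call an application factory directly, so gunicorn 'bplanner.app:create_app()' may also work, provided the bplanner package is importable from the container's working directory.)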

Related

How do I fix a Python error occurring when a script is run by an app? (macOS)

I'm using taskwarrior with a couple of Python hooks installed that get triggered when issuing certain taskwarrior commands. The hooks run fine when I use taskwarrior normally from the command line.
But I'm running into problems when the hooks are triggered by a hotkey app, Karabiner-Elements (I'm on a Mac). Karabiner calls a Perl script which in turn executes this bash command containing the task command:
/bin/bash -c 'TASKRC=/Users/me/.taskrc /usr/local/bin/task add \'the task\''
Unfortunately, the stack trace is cut off in the Karabiner log. This is as much as I get:
[2021-12-07 06:24:51.797] [error] [console_user_server] shell_command stderr:Traceback (most recent call last): File "/Users/me/.task_work/hooks/on-add-pirate", line 9, in <module> from tasklib import TaskWarrior, Task File "/Users/me/Library/Python/3.8/lib/python/site-packages/tasklib/__init__.py", l...
I'm guessing the Python script is choking because it can't figure out where the needed libraries are, but I don't see any shell environment variables I might be able to set. I have python3 installed with brew and the tasklib library installed with pip3 (I believe).
Here's the hook script:
#!/usr/bin/env python3
import glob
import os
from types import ModuleType
from importlib.machinery import SourceFileLoader, ModuleSpec
from importlib.util import module_from_spec

from tasklib import TaskWarrior, Task

# <-- snip -->

task = Task.from_input()
for hook in find_hooks('pirate_add'):
    hook(task)
print(task.export_data())
And here's the __init__.py script mentioned in the error:
from .backends import TaskWarrior
from .task import Task
from .serializing import local_zone
__version__ = '2.2.1'
OK, the problem was that bash was defaulting to an older version of python3.
The fix is here: https://stackoverflow.com/a/70267561/1641112
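If you hit the same thing, a quick diagnostic (my sketch, not from the original post) is to have the hook print which interpreter and search path it actually gets, running it once from the terminal and once via the hotkey app:
#!/usr/bin/env python3
# Print the interpreter this shebang resolved to, its version, and the
# directories searched for packages such as tasklib.
import sys
print(sys.executable)   # e.g. /usr/bin/python3 vs Homebrew's python3
print(sys.version)
print('\n'.join(sys.path))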

Docker issues (could not find or load main class)

I am trying to perform a docker run, but I keep getting an error in the terminal that states: Error: Could not find or load main class Main.
My Dockerfile is correctly named, the build did run, and the image is found when I do a docker run.
The Dockerfile is below:
FROM openjdk:8
COPY . /src/
WORKDIR /src/
RUN ["javac", "Main.java"]
ENTRYPOINT ["java", "Main"]
Can someone please advise what the best approach is at this point, or what I should be looking out for?
Thanks
It sounds like your main class name is not "Main".
After compiling with javac, Java creates a .class file named exactly after each class declared in the source, so the ENTRYPOINT java Main only works if the class containing the main method is actually named Main.
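For example, if Main.java contains class App { public static void main(String[] args) { ... } } (a non-public class doesn't have to match the file name), javac produces App.class, and the entrypoint would have to be java App.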

OAuth - "No module named authlib"

I'm running Superset on macOS in Docker and I'm trying to get OAuth working.
I've edited the config file /docker/pythonpath_dev/superset_config.py and added the OAuth configuration.
One of the lines I added was
AUTH_TYPE = AUTH_OAUTH
This required me to import the auth types as below:
from flask_appbuilder.security.manager import (
    AUTH_OID,
    AUTH_REMOTE_USER,
    AUTH_DB,
    AUTH_LDAP,
    AUTH_OAUTH,
)
When I try to start up superset with the following command: docker-compose -f docker-compose-non-dev.yml up
I get the following error:
File "/usr/local/lib/python3.7/site-packages/flask_appbuilder/security/manager.py", line 250, in __init__
from authlib.integrations.flask_client import OAuth
ModuleNotFoundError: No module named 'authlib'
I'm fairly new to Docker itself. How do I go about resolving this?
In case anybody else comes across this, the solution was to add the Authlib module to the Python environment in the Docker image.
The process for adding a new python module to the docker image is documented here: https://github.com/apache/superset/blob/master/docker/README.md#local-packages
Quoted below in case that file changes:
If you want to add python packages in order to test things like DBs locally, you can simply add a local requirements.txt (./docker/requirements-local.txt) and rebuild your docker stack.
Steps:
1. Create ./docker/requirements-local.txt
2. Add your new packages
3. Rebuild the docker-compose stack:
   a. docker-compose down -v
   b. docker-compose up
The important part was running docker-compose up and not docker-compose -f docker-compose-non-dev.yml up; the latter does not seem to rebuild the Docker image.
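In this specific case, the requirements-local.txt presumably needed only a single line, authlib, to pull the missing module into the rebuilt image.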

Airflow on Windows 10 - Module not found errors

I'm new to data science and wanted to do a little tutorial, which requires Airflow, among other things. I installed it on Windows using Git Bash in VS Code. I tried running it, but it failed with a module-not-found error on the sqlite3 import. I figured out that if I added the directory containing sqlite3.py to the path it would run, but now it gives me a similar error: the pwd module is not found, this time from daemon.py.
File "C:\ProgramData\Anaconda3\lib\site-packages\daemon\daemon.py", line 18, in <module>
import pwd
ModuleNotFoundError: No module named 'pwd'
Strange to me that it can't find pwd; pwd obviously works natively in both Git Bash and PowerShell, and it seems like a basic, universal command. I'd love to learn more about what's going on. I don't want to end up adding 100 things to the path just to get this program to run. I'd love any insights anyone can provide.
PS: I'm using Anaconda.
It seems to be a side effect of spawning new Python daemons.
You can likely fix this by downgrading python-daemon:
pip install python-daemon==2.1.2
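(For context: the pwd being imported here is not the shell command but a Unix-only module in the Python standard library, so any package that imports it unconditionally will fail on native Windows. This is also why Airflow on Windows is commonly run under WSL or Docker instead.)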

How to serve custom MLflow model with Docker?

We have a project following essentially this docker example, with the only difference being that we created a custom model, similar to this one, whose code lies in a directory called forecast. We succeeded in running the model with mlflow run. The problem arises when we try to serve the model. After doing
mlflow models build-docker -m "runs:/my-run-id/my-model" -n "my-image-name"
running the container with
docker run -p 5001:8080 "my-image-name"
fails with the following error:
ModuleNotFoundError: No module named 'forecast'
It seems that the Docker image is not aware of the source code defining our custom model class.
With a Conda environment the problem does not arise, thanks to the code_path argument in mlflow.pyfunc.log_model.
Our Dockerfile is very basic, with just FROM continuumio/miniconda3:4.7.12 and RUN pip install {model_dependencies}.
How can we let the Docker image know about the source code needed to deserialise the model and run it?
You can specify source code dependencies by setting the code_paths argument when logging the model. So in your case, you can do something like:
mlflow.pyfunc.log_model(..., code_paths=[<path to your forecast.py file>])
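A slightly fuller sketch of the logging call (the package layout and class name are my assumptions, not taken from the question):
import mlflow.pyfunc
# Hypothetical layout: the custom model class lives in forecast/model.py.
from forecast.model import ForecastModel

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="my-model",
        python_model=ForecastModel(),
        # Bundle the local forecast package into the model artifact so the
        # image built by mlflow models build-docker can import it at load time.
        code_paths=["forecast"],
    )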
