ModuleNotFoundError: No module named ... after publishing a package to PyPI with Poetry

I created a package with Poetry, with this folder structure:
my-app/
├─ src/
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
├─ __init__.py
├─ tests/
│ ├─ tests.py
├─ __init__.py
├─ pyproject.tmol
In file_a.py I do from file_b import FileB and then I can use it like: b = FileB(),
and in my tests.py I do from file_a import FileA. All of this works fine during development, but when I publish the package to PyPI and try to use it in another project:
from file_a import FileA
a = FileA()
I get something like:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "src/__main__.py", line 1, in <module>
from kinetic_sdk import KineticSdk
File "/Users/me/Library/Caches/pypoetry/virtualenvs/kinetic-python-example-17HFj6yJ-py3.8/lib/python3.8/site-packages/file_a.py", line 1, in <module>
from file_b import FileB
ModuleNotFoundError: No module named 'file_b'
FWIW: package name would be file-a

I just published my first package to PyPI and ran into the same issue. This is how I solved it.
You can install your package like it is indicated on the PyPI website (in my case: pip install nbdbsession).
Your import, however, will be based on the folder name.
We need to change your file structure.
This is your original structure:
my-app/
├─ src/
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
├─ __init__.py
├─ tests/
│ ├─ tests.py
├─ __init__.py
├─ pyproject.tmol
This is what you need to change:
The + indicates something you need to add
The - indicates a removal
The ~ indicates a change
my-app/
├─ src/
+ ---myapp/ # put everything into the folder "src/myapp/"
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
- ├─ __init__.py # you only need the __init__.py file in "src/myapp"
├─ tests/
│ ├─ tests.py
- ├─ __init__.py # same here: no need to store that file here
~ ├─ pyproject.toml # there was a typo here :)
Or in essence:
my-app/
├─ src/
| |-myapp/
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
Once your package is published, you can run import myapp, where myapp is the name of the folder.
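If Poetry does not pick up the new src layout on its own, you can point it at the package explicitly in pyproject.toml. A minimal sketch (the name, version and the myapp folder are placeholders for your own values):

[tool.poetry]
name = "my-app"
version = "0.1.0"
description = "My first package"
packages = [{ include = "myapp", from = "src" }]

With that in place, poetry build bundles src/myapp into the wheel and sdist, and poetry publish uploads them to PyPI.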
Another Example
Let's say you had this structure:
my-app/
├─ src/
|-zen/
├─ __init__.py
├─ ...
You would indeed import zen after pip install my-app.
A few more notes:
Why do you need the __init__.py file? This file lets Python know that the directory containing it is a package. Therefore, you only need it in your package folders, not in the root directory or in src.
When you import within a module, you usually do relative imports (python docs).
So instead of from file_b import FileB you would use from .file_b import FileB.
And if you want to import file_b after installing your package with pip install myapp, you need to write from myapp import file_b.
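For example, with the structure above, src/myapp/file_a.py could look like this (a small sketch; FileA and FileB stand in for the classes from the question):

# src/myapp/file_a.py
from .file_b import FileB  # relative import: file_b lives in the same package


class FileA:
    def __init__(self):
        # reuse FileB from the sibling module
        self.b = FileB()

and a project that installed the published package would then use:

from myapp.file_a import FileA

a = FileA()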
I hope that helps!

Related

Dockerfile Build Doesn't Work But Works with Compose

I have a Dockerfile that won't build with docker build, but works with docker-compose. I need it to work with the docker build command, though.
###############
# CACHE IMAGE #
###############
ARG GO_IMAGE=golang:1.17.3-alpine3.14
ARG BASE_IMAGE=alpine:3.14.2
FROM ${GO_IMAGE} AS cache
# Add the keys
ARG GITHUB_ID
ENV GITHUB_ID=$GITHUB_ID
ARG GITHUB_TOKEN
ENV GITHUB_TOKEN=$GITHUB_TOKEN
# Install Git
RUN apk add git
# TODO: ENCRYPT THE GITHUB_ID AND GITHUB_TOKEN
# Make Git Configuration
RUN git config \
--global \
url."https://${GITHUB_ID}:${GITHUB_TOKEN}#github.com/".insteadOf \
"https://github.com/"
WORKDIR /bin
COPY go.mod go.sum bin/
RUN go mod download
##############
# BASE IMAGE #
##############
FROM cache AS dataeng_github_metrics
COPY . /bin
WORKDIR /bin
# Setup Git Terminal Prompt & Go Build
RUN go build .
###############
# FINAL IMAGE #
###############
FROM ${BASE_IMAGE}
COPY --from=dataeng_github_metrics /bin/dataeng_github_metrics bin/
ENTRYPOINT [ "bin/dataeng_github_metrics" ]
The directory looks like this:
tree .
.
├── Dockerfile
├── Makefile
├── README.md
├── dataeng_github_metrics
├── go.mod
├── go.sum
├── infra
│ ├── README.md
│ ├── k8s
│ │ ├── README.md
│ │ ├── configmaps
│ │ │ ├── README.md
│ │ │ └── teams-payload-configmap.yaml
│ │ ├── cronworkflow
│ │ │ ├── README.md
│ │ │ └── argo_cron_workflow.yaml
│ │ ├── deployment
│ │ │ ├── README.md
│ │ │ ├── git-hub-contributions-deployment.yaml
│ │ │ └── postgres-deployment.yaml
│ │ └── volumeclaims
│ │ ├── README.md
│ │ ├── git-hub-contributions-claim0-persistentvolumeclaim.yaml
│ │ ├── git-hub-contributions-claim1-persistentvolumeclaim.yaml
│ │ └── git-hub-contributions-claim2-persistentvolumeclaim.yaml
│ ├── payloads
│ │ ├── README.md
│ │ ├── dataeng_github_metrics.csv
│ │ └── teams.yaml
│ └── terraform
│ ├── README.md
│ └── manifests
│ └── README.md
├── local
│ ├── README.md
│ ├── dependencies
│ │ └── wait-for-postgres.sh
│ ├── docker-compose.yaml
│ └── images
│ ├── ER_Diagram.png
│ ├── print_execution.png
│ └── print_query.png
└── main.go
What's weird to me is it fails at the COPY step for go.mod and go.sum, and I haven't a clue why it's not copying over the files:
Command to build Dockerfile in working directory:
docker build - < Dockerfile
[+] Building 0.7s (12/17)
=> [internal] load build definition from Dockerfile 0.0s
=> => transferring dockerfile: 928B 0.0s
=> [internal] load .dockerignore 0.0s
=> => transferring context: 2B 0.0s
=> [internal] load metadata for docker.io/library/alpine:3.14.2 0.4s
=> [internal] load metadata for docker.io/library/golang:1.17.3-alpine3.14 0.0s
=> [cache 1/7] FROM docker.io/library/golang:1.17.3-alpine3.14 0.0s
=> [internal] load build context 0.0s
=> => transferring context: 2B 0.0s
=> [stage-2 1/2] FROM docker.io/library/alpine:3.14.2@sha256:e1c082e3d3c45cccac829840a2594 0.0s
=> CACHED [cache 2/7] RUN apk add git 0.0s
=> CACHED [cache 3/7] RUN git config --global url."https://:@github.com/".insteadO 0.0s
=> CACHED [cache 4/7] WORKDIR /bin 0.0s
=> [cache 5/7] RUN pwd 0.2s
=> ERROR [cache 6/7] COPY go.mod go.sum ./
Why is it not letting me copy a file into my WORKDIR with docker build, but when I use docker-compose it works just fine?
Try running the Docker build like so:
docker build .
The build with - doesn't work as expected because no context is given. See Docker build docs:
This will read a Dockerfile from STDIN without context. Due to the lack of a context, no contents of any local directory will be sent to the Docker daemon. Since there is no context, a Dockerfile ADD only works if it refers to a remote URL.
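In other words, run the build from the project root so the whole directory is sent as the build context (the image tag here is just an example):

# "." is the build context: go.mod, go.sum and the rest of the tree are sent to the daemon
docker build -t dataeng_github_metrics .

# If the Dockerfile lived elsewhere, you could still keep "." as the context:
docker build -f path/to/Dockerfile -t dataeng_github_metrics .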

Copying files from a parent directory?

My folder is created within a pipeline and it's a transform artifact, looking like this:
TransformArtifact/
├─ deployment/
│ ├─ base_configs/
│ │ ├─ lambda/
│ │ │ ├─ appconfig.json
├─ lambda/
│ ├─ Dockerfile
Inside my Dockerfile, the following line throws an error, because the path it references (which is later used in a COPY) is wrong:
ARG APP_CONFIG_PATH=TransformArtifact/deployment/base_configs/lambda/
I've also tried ./deployment/base_configs/lambda/, but it still doesn't work. How do I reference the lambda folder inside my base_configs folder?
The full path of the Dockerfile, for example, is:
/home/vsts/work/1/TransformArtifact/lambda/Dockerfile. Yes, I've also tried putting the full path in front of it.
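For reference, COPY and ARG paths in a Dockerfile are resolved against the build context passed to docker build, not against the Dockerfile's own location. A hedged sketch of how that could look here (the build command, tag and COPY destination are assumptions; the paths come from the tree above):

# run from /home/vsts/work/1/TransformArtifact so the whole artifact is the build context
docker build -f lambda/Dockerfile -t transform-lambda .

# lambda/Dockerfile (fragment, after the FROM line):
ARG APP_CONFIG_PATH=deployment/base_configs/lambda/
COPY ${APP_CONFIG_PATH}appconfig.json ./appconfig.json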

Docker mosquitto - Error unable to load auth plugin

I really need your help!
I'm encountering a problem with loading a plugin in a Docker mosquitto container.
I tried to load it on a local version of mosquitto and it worked well.
The error returned in the Docker console is:
dev_instance_mosquitto_1 exited with code 13
The errors returned in the mosquitto log file are:
1626352342: Loading plugin: /mosquitto/config/mosquitto_message_timestamp.so
1626352342: Error: Unable to load auth plugin "/mosquitto/config/mosquitto_message_timestamp.so".
1626352342: Load error: Error relocating /mosquitto/config/mosquitto_message_timestamp.so: __sprintf_chk: symbol not found
Here is a tree output of the project:
mosquitto/
├── Dockerfile
├── config
│ ├── acl
│ ├── ca_certificates
│ │ ├── README
│ │ ├── broker_CA.crt
│ │ ├── mqtt.test.perax.com.p12
│ │ ├── private_key.key
│ │ └── server_ca.crt
│ ├── certs
│ │ ├── CA_broker_mqtt.crt
│ │ ├── README
│ │ ├── serveur_broker.crt
│ │ └── serveur_broker.key
│ ├── conf.d
│ │ └── default.conf
│ ├── mosquitto.conf
│ ├── mosquitto_message_timestamp.so
│ └── pwfile
├── data
│ └── mosquitto.db
└── log
└── mosquitto.log
Here is the Dockerfile:
FROM eclipse-mosquitto
COPY config/ /mosquitto/config
COPY config/mosquitto_message_timestamp.so /usr/lib/mosquitto_message_timestamp.so
RUN install /usr/lib/mosquitto_message_timestamp.so /mosquitto/config/
here is the docker-compose.yml:
mosquitto:
restart: always
build: ./mosquitto/
image: "eclipse-mosquitto/latests"
ports:
- "1883:1883"
- "9001:9001"
volumes:
- ./mosquitto/config/:/mosquitto/config/
- ./mosquitto/data/:/mosquitto/data/
- ./mosquitto/log/mosquitto.log:/mosquitto/log/mosquitto.log
user: 1883:1883
environment:
- PUID=1883
- PGID=1883
Here is the mosquitto.conf:
persistence true
persistence_location /mosquitto/data
log_dest file /mosquitto/log/mosquitto.log
include_dir /mosquitto/config/conf.d
plugin /mosquitto/config/mosquitto_message_timestamp.so
I'm using mosquitto 2.0.10 on an Ubuntu 18.04.5 LTS server.
Thanks in advance for your help.
Your best bet here is probably to set up a multi-stage Docker build that uses an Alpine-based image to build the plugin, then copies it into the eclipse-mosquitto image.
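A rough sketch of that idea (assuming the plugin's C source lives in plugin-src/ and compiles with a plain gcc call; the actual build command depends on your plugin):

# Stage 1: compile the plugin against Alpine/musl, so glibc-only symbols like __sprintf_chk never appear
FROM alpine:3.14 AS builder
RUN apk add --no-cache build-base mosquitto-dev
COPY plugin-src/ /build/
WORKDIR /build
RUN gcc -shared -fPIC -o mosquitto_message_timestamp.so mosquitto_message_timestamp.c

# Stage 2: the runtime image only receives the freshly built .so
FROM eclipse-mosquitto
COPY --from=builder /build/mosquitto_message_timestamp.so /mosquitto/config/mosquitto_message_timestamp.so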

Jumping out of the Dockerfile/docker-compose build context

My current project structure looks something like this:
/home/some/project
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
Compose builds four containers; two of them are the dev & prod versions of the application, each using the appropriate dev & prod files. As you can see, the project root is a little overloaded, so I'd like to move all the deployment stuff into a separate directory, like this:
/home/some/project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
The idea is to end up with the following structure on the Docker host:
host
├───dev
│ ├───src
│ └───users
├───prod
│ ├───src
│ └───users
└───project
├───deployment
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
and two containers, app_dev and app_prod, whose volumes are mounted into the folders /host/dev and /host/prod respectively.
I tried multiple solutions found here, but all of them, in different variations, returned the following errors:
ERROR: Service 'app_dev' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder264200969/ca.cer: no such file or directory
ERROR: Service 'app_dev' failed to build: COPY failed: Forbidden path outside the build context: ../ca.cer ()
The error always appears while docker-compose is trying to build the image, on this line:
COPY deployment/ca.cer /code/
Please tell me how to achieve the desired result.
The deployment folder itself is outside of the build context. Docker passes all of the files inside the deployment folder as the build context, but the folder itself is not part of it.
Change your COPY statement to this instead:
COPY ./ca.cer /code/
since the build context already starts inside that folder.
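If you would rather keep referring to deployment/ca.cer (for example because the Dockerfile also needs files from src/), you can instead widen the build context in docker-compose.yml so that it is the project root. A sketch (only one service shown; the name comes from your error messages):

services:
  app_dev:
    build:
      context: ..                        # project root becomes the build context
      dockerfile: deployment/Dockerfile  # path is relative to that context

With that, COPY deployment/ca.cer /code/ resolves again, because deployment/ is now inside the context.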

Docker isn't mounting the directory? "OCI runtime create failed: container_linux.go:346: no such file or directory: unknown"

On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is as such:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'
services:
users:
build:
context: ./services/users
dockerfile: Dockerfile
entrypoint: ['/usr/src/app/entrypoint.sh']
volumes:
- './services/users:/usr/src/app'
ports:
- 5001:5000
environment:
- FLASK_ENV=development
- APP_SETTINGS=project.config.DevelopmentConfig
- DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
- DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
depends_on:
- users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
Directory structure can be seen as such:
D:\flask-react-auth
│ .gitignore
│ .gitlab-ci.yml
│ docker-compose.yml
│ README.md
│ release.sh
│
└───services
│
└───users
│ .coveragerc
│ .dockerignore
│ Dockerfile
│ Dockerfile.prod
│ entrypoint.sh
│ manage.py
│ requirements-dev.txt
│ requirements.txt
│ setup.cfg
│ tree.txt
│
└───project
│ config.py
│ __init__.py
│
├───api
│ │ ping.py
│ │ __init__.py
│ │
│ └───users
│ admin.py
│ crud.py
│ models.py
│ views.py
│ __init__.py
│
├───db
│ create.sql
│ Dockerfile
│
└───tests
conftest.py
pytest.ini
test_admin.py
test_config.py
test_ping.py
test_users.py
test_users_unit.py
__init__.py
I have added D:\flask-react-auth\ to the 'Shared Folders' in VirtualBox as well.
The answer seems obvious to me:
When you run the code as is
* it mounts the current working directory to '/usr/src/app'.
* The current working directory does not have a file 'entrypoint.sh'.
* It tries to run '/usr/src/app/entrypoint.sh' but it is not there so it fails.
When you comment out that volume mount
* I assume the image already has '/usr/src/app/entrypoint.sh' so it just works.
I think you probably should change the mounting code from
volumes:
- '.:/usr/src/app'
to
volumes:
- './services/users:/usr/src/app'
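One quick way to check what the mount actually contains inside the container (a sketch; the service name comes from the compose file above):

# open a shell in the users service instead of running its entrypoint
docker-compose run --rm --entrypoint sh users
# then, inside the container:
ls -l /usr/src/app/entrypoint.sh

If the file is missing there, the host side of the volumes entry (or the VirtualBox shared-folder mapping used by Docker Toolbox) is what needs fixing.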
