Copying files from a parent directory? - docker

My folder is created within a pipeline and it's a transform artifact, looking like this:
TransformArtifact/
├─ deployment/
│ ├─ base_configs/
│ │ ├─ lambda/
│ │ │ ├─ appconfig.json
├─ lambda/
│ ├─ Dockerfile
Inside my Dockerfile, the following line throws an error, because it references a wrong path that is later used in a COPY:
ARG APP_CONFIG_PATH=TransformArtifact/deployment/base_configs/lambda/
I've also tried ./deployment/base_configs/lambda/, but it still doesn't work. How do I reference the lambda folder within my base_configs folder?
The full path of the Dockerfile, for example, is:
/home/vsts/work/1/TransformArtifact/lambda/Dockerfile. Yes, I've also tried putting the full path in front of it.
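For reference, <src> paths in a Dockerfile are resolved against the build context you pass to docker build, not against the Dockerfile's own location. A minimal sketch of how the paths could line up, assuming the build is run from the TransformArtifact root (the image tag and the /app/ destination are illustrative, not taken from the pipeline):

# run from /home/vsts/work/1/TransformArtifact
docker build -f lambda/Dockerfile -t my-lambda .

# inside lambda/Dockerfile, <src> paths are then relative to TransformArtifact/
ARG APP_CONFIG_PATH=deployment/base_configs/lambda/
COPY ${APP_CONFIG_PATH}appconfig.json /app/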

Related

Copy all files of sub and nested sub directories

This is my project file structure:
java-project/
├── docker.compose.yml
├── pom.xml
└── services/
    ├── a/
    │   ├── Dockerfile
    │   ├── pom.xml
    │   ├── src/
    │   │   ├── pom.xml
    │   │   ├── xxx
    │   │   └── xxx
    │   └── target/
    │       ├── pom.xml
    │       └── xxxx
    └── b/
        ├── Dockerfile
        ├── pom.xml
        ├── src/
        │   ├── pom.xml
        │   ├── xxx
        │   └── xxx
        └── target/
            ├── pom.xml
            └── xxxx
I want to copy all of the contents of the services folder of the project (including all the subfolders inside services). Basically, I want to replicate the current project structure, with every file and folder, in the Docker image as well, so that the mvn build executes successfully.
I am doing the following in the Dockerfile, but I don't see all of the contents:
COPY services/**/pom.xml ./services/
What am I doing wrong here? TIA
Let's look at your COPY instruction:
# <src> <dest>
COPY services/**/pom.xml ./services/
Under the hood, Docker reads the <src> using Go's filepath.Match method. This means that the instruction doesn't use the globstar (**) the way glob patterns do. However, your question suggests you want to copy everything inside services — not only pom.xml files.
You can copy everything inside your local services directory using:
COPY services ./services/
If you want to exclude certain subdirectories or files, you can specify this using a .dockerignore.
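For example, a minimal sketch of a .dockerignore placed at the build context root; the entries below only illustrate the kind of thing you might exclude (build output, logs, VCS metadata), not something your project necessarily needs:

# .dockerignore
**/target
**/*.log
.git

With that in place, COPY services ./services/ copies the whole tree except the ignored paths, preserving the directory structure.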

ModuleNotFoundError: No module named after publishing a package to PyPI with Poetry

I created a package with poetry with this folder structure:
my-app/
├─ src/
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
├─ __init__.py
├─ tests/
│ ├─ tests.py
├─ __init__.py
├─ pyproject.tmol
In file_a.py I do from file_b import FileB and then I can use it like b = FileB(),
and in my tests.py I do from file_a import FileA. All of this works fine during development, but when I publish the package to PyPI and try to use it in another project:
from file_a import FileA
a = FileA()
I get something like:
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "src/__main__.py", line 1, in <module>
from kinetic_sdk import KineticSdk
File "/Users/me/Library/Caches/pypoetry/virtualenvs/kinetic-python-example-17HFj6yJ-py3.8/lib/python3.8/site-packages/file_a.py", line 1, in <module>
from file_b import FileB
ModuleNotFoundError: No module named 'file_b'
FWIW: package name would be file-a
I just published my first package to PyPI and ran into the same issue. This is how I solved it.
You can install your package like it is indicated on the PyPI website (in my case: pip install nbdbsession).
Your import, however, will be based on the folder name.
We need to change your file structure.
This is your original structure:
my-app/
├─ src/
│ ├─ __init__.py
│ ├─ models/
│ ├─ helpers/
│ ├─ file_a.py
│ ├─ file_b.py
├─ __init__.py
├─ tests/
│ ├─ tests.py
├─ __init__.py
├─ pyproject.tmol
This is what you need to change:
The + indicates something you need to add
The - indicates a removal
The ~ indicates a change
  my-app/
  ├─ src/
+ │  ├─ myapp/             # put everything into the folder "src/myapp/"
  │  │  ├─ __init__.py
  │  │  ├─ models/
  │  │  ├─ helpers/
  │  │  ├─ file_a.py
  │  │  ├─ file_b.py
- ├─ __init__.py            # you only need the __init__.py file in "src/myapp"
  ├─ tests/
  │  ├─ tests.py
- ├─ __init__.py            # same here: no need to store that file here
~ ├─ pyproject.toml         # there was a typo here :)
Or in essence:
my-app/
├─ src/
│  ├─ myapp/
│  │  ├─ __init__.py
│  │  ├─ models/
│  │  ├─ helpers/
│  │  ├─ file_a.py
│  │  ├─ file_b.py
Once your package is published, you can run import myapp, where myapp is the name of the folder.
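If you use the src/ layout above, it can also help to tell Poetry explicitly where the package lives. A minimal sketch of the relevant pyproject.toml section, assuming the package folder is src/myapp (name, version and author are placeholders):

[tool.poetry]
name = "my-app"
version = "0.1.0"
description = ""
authors = ["Your Name <you@example.com>"]
# package the "myapp" folder found under "src/"
packages = [{ include = "myapp", from = "src" }]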
Another Example
Let's say you had this structure:
my-app/
├─ src/
│  ├─ zen/
│  │  ├─ __init__.py
│  │  ├─ ...
You would indeed import zen after pip install my-app.
A few more notes:
Why do you need the __init__.py file? This file will let Python know that the current directory is a module / package. Therefore, you only need it in your package folders, not in the root directory or in src.
When you import within a module, you usually do relative imports (python docs).
So instead of from file_b import FileB you would use from .file_b import FileB.
And if you want to import file_b after installing your package with pip install my-app, you need to do from myapp import file_b.
I hope that helps!
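To make the relative-import point concrete, here is a small sketch of what src/myapp/file_a.py and the consuming code could look like (the class names come from the question, everything else is illustrative):

# src/myapp/file_a.py
from .file_b import FileB   # relative import inside the package

class FileA:
    def __init__(self):
        self.b = FileB()

# in another project, after installing the published package:
#   from myapp.file_a import FileA
#   a = FileA()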

How to create multiple containers in same pods which have separate deployment.yaml files?

tl;dr: in docker-compose, inter-container communication is possible via localhost. I want to do the same in k8s; however, I have separate deployment.yaml files for each component. How do I link them?
I have a Kubernetes helm package in which there are sub helm packages. The folder structure is as follows:
A
├── Chart.yaml
├── values.yaml
├── charts
│ ├── component1
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── configmap.yaml
│ │ │ ├── deployment.yaml
│ │ │ ├── hpa.yaml
│ │ │ ├── ingress.yaml
│ │ │ ├── service.yaml
│ │ │ ├── serviceaccount.yaml
│ │ └── values.yaml
│ ├── component2
│ │ ├── Chart.yaml
│ │ ├── templates
│ │ │ ├── certs.yaml
│ │ │ ├── configmap.yaml
│ │ │ ├── pdb.yaml
│ │ │ ├── role.yaml
│ │ │ ├── statefulset.yaml
│ │ │ ├── pvc.yaml
│ │ │ └── svc.yaml
│ │ ├── values-production.yaml
│ │ └── values.yaml
In docker-compose, I was able to communicate between component1 and component2 via ports using localhost.
However, in this architecture, I have separate deployment.yaml files for those components. I know that if I keep them as containers in a single deployment.yaml file, I can communicate via localhost.
Question: How do I put these containers in the same pod, given that they are defined in separate deployment.yaml files?
That's not possible. Pods are the smallest deployable unit in Kubernetes and consist of one or more containers. All containers inside a pod share the same network namespace (among other things). From outside a pod, containers can only be reached via FQDN or IP; for a container outside the pod, "localhost" means something completely different. Similar to running docker compose on different hosts, they cannot connect using localhost.
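For comparison, the single-manifest case you mention would look roughly like this; a sketch with hypothetical image names and ports, where both containers share localhost:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: combined
spec:
  replicas: 1
  selector:
    matchLabels:
      app: combined
  template:
    metadata:
      labels:
        app: combined
    spec:
      containers:
        - name: component1
          image: component1:latest   # hypothetical image
          ports:
            - containerPort: 8080
        - name: component2
          image: component2:latest   # hypothetical image
          # component2 reaches component1 on http://localhost:8080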
You can use the service's name to get a similar behaviour. Instead of calling http://localhost:8080 you can simply use http://component1:8080 to reach component1 from component2, assuming the service in component1/templates/service.yaml is named component1 and both are in the same namespace. Generally there is a DNS record for every service with the schema <service>.<namespace>, e.g. component1.default for component1 running in the default namespace. If component2 were in a different namespace, you would use http://component1.default:8080.
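A minimal sketch of what the component1 service and the resulting URLs could look like (the port number and labels are assumptions, not taken from your charts):

# component1/templates/service.yaml (simplified)
apiVersion: v1
kind: Service
metadata:
  name: component1
spec:
  selector:
    app: component1
  ports:
    - port: 8080
      targetPort: 8080

# from a component2 pod in the same namespace:
#   curl http://component1:8080
# from a different namespace:
#   curl http://component1.<namespace>:8080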

Jumping around of the dockerfile/docker-compose context

My current project structure looks something like this:
/home/some/project
│ .credentials_dev
│ .credentials_prod
│ ca.cer
│ docker-compose.yml
│ Dockerfile
│ init.sql
│ nginx_dev.conf
│ nginx_prod.conf
│
├───src
└───users
Compose creates four containers; two of them are the dev and prod versions of the application, which use the appropriate dev and prod files. As you can see, the project root is a little overloaded, so I'd like to move all the deployment stuff into a separate directory, like this:
/home/some/project
├───deployment
│       .credentials_dev
│       .credentials_prod
│       ca.cer
│       docker-compose.yml
│       Dockerfile
│       init.sql
│       nginx_dev.conf
│       nginx_prod.conf
│
├───src
└───users
The idea is to end up with the following structure on the Docker host:
host
├───dev
│   ├───src
│   └───users
├───prod
│   ├───src
│   └───users
└───project
    ├───deployment
    │       .credentials_dev
    │       .credentials_prod
    │       ca.cer
    │       docker-compose.yml
    │       Dockerfile
    │       init.sql
    │       nginx_dev.conf
    │       nginx_prod.conf
    │
    ├───src
    └───users
and two containers, app_dev and app_prod, whose volumes are mounted into /host/dev and /host/prod respectively.
I tried multiple solutions found here, but all of them, in different variations, returned the following errors:
ERROR: Service 'app_dev' failed to build: COPY failed: stat /var/lib/docker/tmp/docker-builder264200969/ca.cer: no such file or directory
ERROR: Service 'app_dev' failed to build: COPY failed: Forbidden path outside the build context: ../ca.cer ()
The error always appears while docker-compose is trying to build the image, on this line:
COPY deployment/ca.cer /code/
Please tell me how to achieve the desired result.
The deployment/ prefix doesn't exist inside your build context. Docker passes all the files inside the deployment folder as the build context, but the deployment folder itself is not part of it.
Change your COPY statement to:
COPY ./ca.cer /code/
The deployment/ prefix isn't needed, since the build context already starts inside that folder.
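In other words, a sketch of how the compose file and Dockerfile could line up, assuming docker-compose.yml lives in the deployment folder and uses that folder as the build context (service and destination names are illustrative):

# project/deployment/docker-compose.yml
services:
  app_dev:
    build:
      context: .              # the deployment folder itself is the context
      dockerfile: Dockerfile

# project/deployment/Dockerfile
COPY ./ca.cer /code/          # no "deployment/" prefix needed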

Docker isn't mounting the directory? "OCI runtime create failed: container_linux.go:346: no such file or directory: unknown"

On my Windows 10 Home computer with Docker Toolbox, Docker is having trouble mounting the drives. I've already run dos2unix on the entrypoint.sh file.
The full error is as such:
ERROR: for users Cannot start service users: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"/usr/src/app/entrypoint.sh\": stat /usr/src/app/entrypoint.sh: no such file or directory": unknown
My docker-compose.yml:
version: '3.7'
services:
  users:
    build:
      context: ./services/users
      dockerfile: Dockerfile
    entrypoint: ['/usr/src/app/entrypoint.sh']
    volumes:
      - './services/users:/usr/src/app'
    ports:
      - 5001:5000
    environment:
      - FLASK_ENV=development
      - APP_SETTINGS=project.config.DevelopmentConfig
      - DATABASE_URL=postgresql://postgres:postgres@users-db:5432/users_dev
      - DATABASE_TEST_URL=postgresql://postgres:postgres@users-db:5432/users_test
    depends_on:
      - users-db
Curiously, when I comment out the "volumes" section, it works! But I want to be able to mount volumes in the future.
Directory structure can be seen as such:
D:\flask-react-auth
│   .gitignore
│   .gitlab-ci.yml
│   docker-compose.yml
│   README.md
│   release.sh
│
└───services
    │
    └───users
        │   .coveragerc
        │   .dockerignore
        │   Dockerfile
        │   Dockerfile.prod
        │   entrypoint.sh
        │   manage.py
        │   requirements-dev.txt
        │   requirements.txt
        │   setup.cfg
        │   tree.txt
        │
        └───project
            │   config.py
            │   __init__.py
            │
            ├───api
            │   │   ping.py
            │   │   __init__.py
            │   │
            │   └───users
            │           admin.py
            │           crud.py
            │           models.py
            │           views.py
            │           __init__.py
            │
            ├───db
            │       create.sql
            │       Dockerfile
            │
            └───tests
                    conftest.py
                    pytest.ini
                    test_admin.py
                    test_config.py
                    test_ping.py
                    test_users.py
                    test_users_unit.py
                    __init__.py
I have added D:\flask-react-auth\ to the 'Shared Folders' in VirtualBox as well.
The answer seems obvious to me:
When you run the code as is
* it mounts the current working directory to '/usr/src/app'.
* The current working directory does not have a file 'entrypoint.sh'.
* It tries to run '/usr/src/app/entrypoint.sh' but it is not there so it fails.
When you comment out that volume mount
* I assume the image already has '/usr/src/app/entrypoint.sh' so it just works.
I think you probably should change the mounting code from
volumes:
  - '.:/usr/src/app'
to
volumes:
  - './services/users:/usr/src/app'
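If you want to check what actually ends up inside the container, one option (assuming the service is still called users) is to override the entrypoint and list the mount target; if the Windows folder isn't really shared with the Docker Toolbox VM, the directory will come back empty:

docker-compose run --rm --entrypoint sh users -c "ls -la /usr/src/app"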
