$ sw_vers
ProductName: Mac OS X
ProductVersion: 10.13.6
BuildVersion: 17G65
$ docker -v
Docker version 18.06.0-ce, build 0ffa825
I have a docker-compose file that contains these lines:
volumes:
- .:/sql
And In "." directory I have plenty of directories and files:
$ ls -l
total 32
-rw-r--r-- 1 michael staff 161 Nov 9 13:35 README.ms
drwxr-xr-x 12 michael staff 384 Nov 9 13:35 backend
-rwxr-xr-x 1 michael staff 1438 Nov 9 13:35 manage.py
drwxr-xr-x 12 michael staff 384 Nov 10 13:28 ops
drwxr-xr-x 5 michael staff 160 Nov 9 13:35 requirements
-rw-r--r-- 1 michael staff 38 Nov 9 13:35 requirements.txt
But when I start the container and go inside, all I see there is:
# ls -1 /sql
docker-compose.yml
ops
Note that docker-compose.yml is not even from this directory; it comes from ops/.
Does anyone have an idea of the cause?
Figured out the reason.
No matter where you start docker-compose from, it resolves "." relative to the location of the docker-compose.yml file. Thus, everything you expect to see inside the container has to live in the same directory where docker-compose.yml resides.
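A minimal sketch of that rule (assuming the compose file sits in ops/, as in the question): "." in its volumes section resolves to ops/ no matter where you invoke the command from, so to mount the project root you would write the path relative to the compose file instead:
# in ops/docker-compose.yml: ".." is the project root, relative to this file
volumes:
- ..:/sql
$ docker-compose -f ops/docker-compose.yml up -d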
I have a docker-compose file that mounts a local directory. Below is the structure of the directory, which contains multiple subfolders.
/k6/templates/vanilla
❯ ls -ltr
total 16
drwxr-xr-x 3 zhangzhao staff 96 Nov 2 23:16 common
drwxr-xr-x 3 zhangzhao staff 96 Nov 2 23:20 config
drwxr-xr-x 3 zhangzhao staff 96 Nov 2 23:22 scenarios
drwxr-xr-x 3 zhangzhao staff 96 Nov 2 23:23 scripts
-rw-r--r-- 1 zhangzhao staff 599 Nov 2 23:34 main.js
-rw-r--r-- 1 zhangzhao staff 2511 Nov 2 23:55 README.md
In the docker-compose.yml file, I mounted the vanilla directory to /test, expecting that the sub-directories would be mounted into /test recursively. However, when I ran main.js with docker-compose run --rm k6 run /test/main.js, it threw an error that the function was not found. The function is stored in the scenarios sub-directory. So does the docker-compose mount not mount folders recursively? (A quick way to check what is mounted is shown after the error output below.)
volumes:
- ../templates/vanilla:/test
ERRO[0000] There were problems with the specified script configuration:
- executor scenarios1: function 'scenarios1' not found in exports
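A quick check of what actually landed in the container (a minimal sketch; it assumes the k6 image ships a shell, as the official Alpine-based images do):
$ docker-compose run --rm --entrypoint sh k6 -c 'ls -R /test'
Bind mounts always include sub-directories, so if scenarios shows up here, the mount is recursive and the error is more likely a path or export-name problem in the scripts.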
I have a locally built Docker image (with a python script) that I can run locally as follows:
docker run --rm -v "$PWD"/output:/app_base/output eod_price:latest
python eod_price.py ACN -b 7 -f out.csv
This app saves its output in the file /Users/me/Documents/Python/GitHub/docker-learn/docker007/output/out.csv
$ ls -l
-rw-r--r--# 1 me staff 647 23 Jul 22:47 Dockerfile
-rw-r--r-- 1 me staff 2628 24 Jul 06:37 Readme.md
-rw-r--r--# 1 me staff 10222 12 Jul 15:54 airflow-docker-compose.yaml
drwxr-xr-x 3 me staff 96 23 Jul 22:56 app
drwxr-xr-x# 6 me staff 192 23 Jul 22:26 dags
drwxr-xr-x# 10 me staff 320 23 Jul 13:39 logs
-rw-r--r-- 1 me staff 646 3 Jul 09:59 myapp-docker-compose.yaml
drwxr-xr-x 3 me staff 96 23 Jul 22:58 output
drwxr-xr-x# 2 me staff 64 11 Jul 22:41 plugins
-rw-r--r--# 1 me staff 119 23 Jul 21:26 requirements.txt
In the airflow-docker-compose file I have added additional volumes for the input and output directories:
volumes:
- ./dags:/opt/airflow/dags
- ./logs:/opt/airflow/logs
- ./plugins:/opt/airflow/plugins
- ./input:/opt/airflow/input
- ./output:/opt/airflow/output
In the dags directory, I have a DAG that runs this DockerOperator in Apache Airflow:
from airflow.providers.docker.operators.docker import DockerOperator
from docker.types import Mount

eod_price = DockerOperator(
    task_id='get_eod_price',
    image='eod_price',
    api_version='auto',
    command='python eod_price.py ACN -b 7 -f out.csv',
    container_name='eod_price-container',
    auto_remove=True,
    mounts=[
        # source is resolved by the daemon at docker_url, i.e. on the
        # Docker host, not inside the Airflow container
        Mount(source='/opt/airflow/output',
              target='/app_base/output',
              type='bind'),
    ],
    mount_tmp_dir=False,
    docker_url='unix://var/run/docker.sock',
    network_mode='bridge'
)
However, I get a FileNotFoundError: [Errno 2] No such file or directory error.
Other things I tried in DockerOperator are:
add & remove host_tmp_dir
remove mount_tmp_dir
I followed another similar issue (link), and in the airflow docker-compose file I added an additional volume under x-airflow-common:
- /var/run/docker.sock:/var/run/docker.sock
However, that changed the error to PermissionError: [Errno 13] Permission denied.
I am on macOS.
Any tips on how to resolve this?
What is the easiest way to debug Airflow problems?
What I do not get is whether this error is due to wrong mounting of the volume in the DockerOperator or to some other issue. I cannot make out whether Airflow even found the container image (eod_price) or whether the error occurred while executing the image.
Edit:
I followed yet another solution at link. That solution, chmod 777 /var/run/docker.sock, also did not work. I also tried running with the TCP wrap solution, but then I got a new error, so I reverted.
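One way to separate an image problem from a mount problem (a minimal diagnostic sketch, reusing the host path and command quoted above): run the container directly on the host, outside Airflow:
$ docker run --rm \
    -v /Users/me/Documents/Python/GitHub/docker-learn/docker007/output:/app_base/output \
    eod_price:latest python eod_price.py ACN -b 7 -f out.csv
If this succeeds, the image is found and runs; the remaining suspect is the Mount source, which the Docker daemon resolves on the host, so /opt/airflow/output has to exist on the Docker host itself, not just inside the Airflow container.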
Starting with an empty directory, I created this docker-compose.yml:
version: '3.9'
services:
  neo4j:
    image: neo4j:3.2
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./conf:/conf
      - ./data:/data
      - ./import:/import
      - ./logs:/logs
      - ./plugins:/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms_memory_heap_initial__size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
Then I add the import directory, which contains data files I intend to work with in the container.
At this point, my directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
I run docker-compose up -d --build, and the container is built. Now the local directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
0 drwxr-xr-x 2 cc staff 64 Dec 11 13:59 conf
0 drwxrwxrwx# 4 cc staff 128 Dec 11 18:08 data
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
0 drwxrwxrwx# 3 cc staff 96 Dec 11 13:59 logs
0 drwxr-xr-x 3 cc staff 96 Dec 11 15:32 plugins
The conf, data, logs, and plugins directories are created.
data and logs are populated from the build of the Neo4j image, and conf and plugins are empty, as expected.
I use docker exec to look at the directory structure in the container:
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 .
8 drwxr-xr-x 1 root root 4096 May 11 2019 ..
36 -rwxrwxrwx 1 neo4j neo4j 36005 Feb 18 2019 LICENSE.txt
128 -rwxrwxrwx 1 neo4j neo4j 130044 Feb 18 2019 LICENSES.txt
12 -rwxrwxrwx 1 neo4j neo4j 8493 Feb 18 2019 NOTICE.txt
4 -rwxrwxrwx 1 neo4j neo4j 1594 Feb 18 2019 README.txt
4 -rwxrwxrwx 1 neo4j neo4j 96 Feb 18 2019 UPGRADE.txt
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 bin
4 drwxr-xr-x 2 neo4j neo4j 4096 Dec 11 23:46 certificates
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 conf
0 lrwxrwxrwx 1 root root 5 May 11 2019 data -> /data
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 import
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 lib
0 lrwxrwxrwx 1 root root 5 May 11 2019 logs -> /logs
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 plugins
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 run
My problem is that the import directory in the container is empty. The data and logs directories are not empty though.
The data and logs directories on my local machine have extended attributes, which conf and plugins do not:
xattr -l data
com.docker.grpcfuse.ownership: {"UID":100,"GID":101}
The only difference I can identify is that those are the directories that had data created by docker-compose when it pulled the Neo4j image.
Does anyone understand what is happening here, and can you tell me how to get this to work? I'm using Mac OS X 10.15 and docker-compose version 1.27.4, build 40524192.
Thanks.
TL;DR: your current setup probably works fine.
To walk through the specific behavior you're observing:
On container startup, Docker will create empty directories on the host if they don't exist, and the corresponding mount-point directories inside the container (which is why those directories appear).
Docker never copies data from an image into a bind mount. This behavior only happens for named volumes (and only the very first time you use them, not on later runs; and only on native Docker, not on Kubernetes).
But, the standard database images generally know how to initialize an empty data directory. In the case of the neo4j image, its Dockerfile ends with an ENTRYPOINT directive that runs at container startup; that docker-entrypoint.sh knows how to do various sorts of first-time setup. That's how data gets into ./data.
The image also declares a WORKDIR /var/lib/neo4j (via an intermediate environment variable). That explains, in your ls -l listing, why there are symlinks like data -> /data. Your bind mount is to /import, but if you docker-compose exec neo4j ls import, it will look relative to that WORKDIR, which is why the directory looks empty.
But, the entrypoint script specifically looks for a /import directory inside the container, and if it exists and is readable, it sets an environment variable NEO4J_dbms_directories_import=/import.
This all suggests to me that your setup is correct, and if you try to execute an import, it will work correctly and see your host data. You are looking at a /var/lib/neo4j/import directory from the image, and it's empty, but the image startup knows to also look for /import in the container, and your mount points there.
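A quick check that follows from the explanation above (service name and paths as in the compose file):
$ docker-compose exec neo4j ls /import   # absolute path: the bind mount, listing the host files
$ docker-compose exec neo4j ls import    # relative to WORKDIR /var/lib/neo4j: the image's own empty directory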
When I add a remote interpreter from one of my docker-compose services, it doesn't seem to succeed and doesn't show any packages in the dialog. When I add the interpreter to the debugger, it says:
python packaging tools not found.
Then if I click on "install packaging tools", this error is displayed:
ERROR: for dockeryard_pycharm_helpers_1
Cannot start service pycharm_helpers: network not found
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_redis_1 ...
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_pycharm_helpers_1
Starting dockeryard_redis_1
Starting dockeryard_worker_1 ...
Starting dockeryard_worker_1
Starting dockeryard_pycharm_helpers_1
ERROR: for dockeryard_pycharm_helpers_1 Cannot start service pycharm_helpers: network not found
ERROR: for pycharm_helpers Cannot start service pycharm_helpers: network not found
ERROR:
Note, this interpreter was already in use and I was able to connect remotely with PyCharm, but I had added, and eventually removed, a custom network on the container.
As explained in Configuring Remote Python Interpreters - "When a remote Python interpreter is added, at first the PyCharm helpers are copied to the remote host". My guess is that something went wrong, since the network was updated in the docker-compose file.
From what I understand of the error message, when PyCharm starts the interpreter it tries to use/find the network c7b0cc277c94ba5f58f6e72dcbab1ba24794e72422e839a83ea6102d08c40452.
I don't see that network listed anywhere when I run:
$ docker network inspect dockeryard_default
So PyCharm stores it somewhere, and it has not been updated with the change.
I have tried to remove the interpreter (using the PyCharm dialog) and add it back - same result.
How can I get rid of this network and make PyCharm able to debug again?
Thanks.
I was having a near-identical error and was able to get past it. I did two things, though I'm uncertain which was the actual solution:
Made sure the mappings were correct under both (a) Preferences -> Project -> Project Interpreter -> Path mappings and (b) Run -> Edit Configurations -> <Your_Configuration> -> Path mappings
Removed/deleted any containers that looked to be related to PyCharm (I believe this is more than likely what solved things).
Hope this helps. PyCharm docker-compose seems to work for some and be a real PITA for others.
One other note: I downgraded from PyCharm 2018 to 2017.3, as there are known Docker bugs in 2018.
EDIT: And it would seem a docker-compose down from CLI reintroduces the error -_-
TLDR:
The {project_name}_pycharm_helpers_{pycharm_build_number} volume has been removed or is corrupted.
To repopulate it, run:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build_number}
docker run -v {project_name}_pycharm_helpers_{pycharm_build_number}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build_number}
The pycharm_build_number can be found in the About section of your PyCharm (macOS: PyCharm > About).
Long story
I struggled a lot with PyCharm suddenly not finding the helpers any more, and with related bugs, sometimes because I was clearing my containers or volumes. For instance, running
docker rm -f `docker container ps -aq`
docker volume rm $(docker volume ls -q)
will almost surely get PyCharm into trouble.
As far as I understand how PyCharm works, there is:
a PyCharm base image named pycharm_helpers, with a tag corresponding to your PyCharm build number (for example PY-202.7660.27)
the first time you create docker-related things, PyCharm creates volumes that get data from this image for later use in your containers. For instance, after a first attempt at running a remote docker-compose interpreter, I see the newly created myproject_pycharm_helpers_PY-202.7660.27 volume when doing docker volume ls.
when running the docker interpreter, PyCharm mounts this volume into the /opt/.pycharm_helpers directory by adding, at some point, -v myproject_pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers to your command. With docker-compose, you can see the addition of -f /Users/clementwalter/Library/Caches/JetBrains/PyCharm2020.2/tmp/docker-compose.override.1508.yml, and when you look into that file you see:
version: "3.8"
services:
local:
command:
- "python"
- "/opt/.pycharm_helpers/pydev/pydevconsole.py"
- "--mode=server"
- "--port=55824"
entrypoint: ""
environment:
PYCHARM_MATPLOTLIB_INTERACTIVE: "true"
PYTHONPATH: "/opt/project:/opt/.pycharm_helpers/pycharm_matplotlib_backend:/opt/.pycharm_helpers/pycharm_display:/opt/.pycharm_helpers/third_party/thriftpy:/opt/.pycharm_helpers/pydev"
PYTHONUNBUFFERED: "1"
PYTHONIOENCODING: "UTF-8"
PYCHARM_MATPLOTLIB_INDEX: "0"
PYCHARM_HOSTED: "1"
PYCHARM_DISPLAY_PORT: "63342"
IPYTHONENABLE: "True"
volumes:
- "/Users/clementwalter/Documents/myproject:/opt/project:rw"
- "pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers"
working_dir: "/opt/project"
volumes:
pycharm_helpers_PY-202.7660.27: {}
You get into trouble when this volume is no longer correctly populated.
Fortunately the docker volume documentation has a section "Populate a volume using a container" which is exactly what PyCharm does under the hood.
For the record you can check the content of the pycharm_helpers image:
$ docker run -it pycharm_helpers:PY-202.7660.27 sh
/opt/.pycharm_helpers #
You end up in the pycharm_helpers directory and find all the helpers there:
/opt/.pycharm_helpers # ls -la
total 5568
drwxr-xr-x 21 root root 4096 Dec 17 16:38 .
drwxr-xr-x 1 root root 4096 Dec 17 11:07 ..
-rw-r--r-- 1 root root 274 Dec 17 11:07 Dockerfile
drwxr-xr-x 5 root root 4096 Dec 17 16:38 MathJax
-rw-r--r-- 1 root root 2526 Sep 16 11:14 check_all_test_suite.py
-rw-r--r-- 1 root root 3194 Sep 16 11:14 conda_packaging_tool.py
drwxr-xr-x 2 root root 4096 Dec 17 16:38 coverage_runner
drwxr-xr-x 3 root root 4096 Dec 17 16:38 coveragepy
-rw-r--r-- 1 root root 11586 Sep 16 11:14 docstring_formatter.py
drwxr-xr-x 4 root root 4096 Dec 17 16:38 epydoc
-rw-r--r-- 1 root root 519 Sep 16 11:14 extra_syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 generator3
-rw-r--r-- 1 root root 8 Sep 16 11:14 icon-robots.txt
-rw-r--r-- 1 root root 3950 Sep 16 11:14 packaging_tool.py
-rw-r--r-- 1 root root 1490666 Sep 16 11:14 pip-20.1.1-py2.py3-none-any.whl
drwxr-xr-x 2 root root 4096 Dec 17 16:38 pockets
drwxr-xr-x 3 root root 4096 Dec 17 16:38 profiler
-rw-r--r-- 1 root root 863 Sep 16 11:14 py2ipnb_converter.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py2only
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py3only
drwxr-xr-x 7 root root 4096 Dec 17 16:38 pycharm
drwxr-xr-x 4 root root 4096 Dec 17 16:38 pycharm_display
drwxr-xr-x 3 root root 4096 Dec 17 16:38 pycharm_matplotlib_backend
-rw-r--r-- 1 root root 103414 Sep 16 11:14 pycodestyle.py
drwxr-xr-x 24 root root 4096 Dec 17 16:38 pydev
drwxr-xr-x 9 root root 4096 Dec 17 16:38 python-skeletons
drwxr-xr-x 2 root root 4096 Dec 17 16:38 rest_runners
-rw-r--r-- 1 root root 583493 Sep 16 11:14 setuptools-44.1.1-py2.py3-none-any.whl
-rw-r--r-- 1 root root 29664 Sep 16 11:14 six.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 sphinxcontrib
-rw-r--r-- 1 root root 128 Sep 16 11:14 syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 third_party
drwxr-xr-x 3 root root 4096 Dec 17 16:38 tools
drwxr-xr-x 5 root root 4096 Dec 17 16:38 typeshed
-rw-r--r-- 1 root root 3354133 Sep 16 11:14 virtualenv-16.7.10-py2.py3-none-any.whl
To make these helpers available again, following the Docker documentation, you have to repopulate the volume. To do so:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build}
docker run -v {project_name}_pycharm_helpers_{pycharm_build}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build}
et voilà
If you're still seeing this in PyCharm 2020.2 then do this:
close PyCharm
try #peterc's suggestion:
docker ps -a | grep -i pycharm | awk '{print $1}' | xargs docker rm
launch PyCharm again
The option Invalidate Caches -> Clear downloaded shared indexes will also repopulate the PyCharm volumes. (At least in 2021.1.)
I'm running a 3-node cluster in AWS. Yesterday, I upgraded my cluster from DSE 4.7.3 to 4.8.0.
After the upgrade, the datastax-agent service is no longer registered and the /usr/share/datastax-agent/conf folder has been removed.
PRE-UPGRADE:
$ ls -alr
total 24836
drwxrwxr-x 3 cassandra cassandra 4096 Aug 10 14:57 tmp
drwxrwxr-x 2 cassandra cassandra 4096 Aug 10 14:56 ssl
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 15:14 doc
-rw-r--r-- 1 cassandra cassandra 25402305 Jul 14 18:55 datastax-agent-5.2.0-standalone.jar
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 18:23 conf
drwxrwxr-x 3 cassandra cassandra 4096 Sep 28 18:13 bin
drwxr-xr-x 118 root root 4096 Oct 2 18:02 ..
drwxrwxr-x 7 cassandra cassandra 4096 Oct 7 19:03 .
POST-UPGRADE:
$ ls -al
total 24976
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 .
drwxr-xr-x 114 root root 4096 Oct 5 18:23 ..
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 bin
-rw-r--r-- 1 cassandra cassandra 25562841 Sep 10 20:43 datastax-agent-5.2.1-standalone.jar
Also, the /etc/init.d/datastax-agent file has been deleted. I don't know how I'm supposed to start/stop the service now.
Can I restore the files from the rollback directory? What effect will that have?
In this particular case what happened was that the dpkg install found a preexisting /etc/init.d/datastax-agent file and only put /etc/init.d/datastax-agent.fpk.bak into place. A "sudo dpkg -P datastax-agent" followed by a "sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb" fixed the issue. We had to first kill the already running agent processes and then do a service restart.
Will investigate how that could have happened... that's still a little bit of a mystery to me.
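The recovery sequence described above, consolidated in one place (the pkill pattern is an assumption; the answer only says the running agent processes were killed first):
$ sudo pkill -f datastax-agent    # assumed command for killing the already running agent processes
$ sudo dpkg -P datastax-agent
$ sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb
$ sudo service datastax-agent restart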