How to migrate data from an old Docker GitLab container to a new container - docker

I have a very old Docker GitLab instance that was running fine until I tried to upgrade it, and the rest is history. It never worked again.
So now my plan is to set up a new container on a new server and then just copy the git repo data from the old Docker host to the new Docker host running the new container.
Here are the volumes I exposed to the host for both the old and new containers:
--volume /srv/gitlab/config:/etc/gitlab
--volume /srv/gitlab/logs:/var/log/gitlab
--volume /srv/gitlab/data:/var/opt/gitlab
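For illustration, here is roughly how I start the containers with those mounts and how I plan to copy the data across (a sketch; the image tag and the new host's name are placeholders):
docker run --detach \
  --publish 443:443 --publish 80:80 --publish 22:22 \
  --name gitlab \
  --volume /srv/gitlab/config:/etc/gitlab \
  --volume /srv/gitlab/logs:/var/log/gitlab \
  --volume /srv/gitlab/data:/var/opt/gitlab \
  gitlab/gitlab-ce:latest
# on the old host, with both containers stopped:
rsync -a /srv/gitlab/ new-docker-host:/srv/gitlab/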
What I have tried: checking the GitLab data directories to see where the git repos are stored on the old Docker host, but I can't find anything, as it shows empty.
root@gitlab:~# ls -lha /srv/gitlab/data/git-data/repositories/
total 48K
drwxrws--- 9 998 998 4.0K Aug 9 2021 .
drwx------ 3 998 root 4.0K Jun 8 2016 ..
drwxrwx--- 2 998 998 12K Oct 24 2020 devops
drwxr-sr-x 3 998 998 4.0K Aug 1 2021 +gitaly
-rw------- 1 998 998 64 Jun 18 2020 .gitaly-metadata
drwxr-s--- 91 998 998 4.0K Oct 24 2020 @hashed
drwxr-s--- 3 998 998 4.0K Oct 25 2020 @snippets
root@gitlab:~# ls -lha /srv/gitlab/data/git-data/repositories/devops/
total 16K
drwxrwx--- 2 998 998 12K Oct 24 2020 .
drwxrws--- 9 998 998 4.0K Aug 9 2021 ..
So, does anyone know where the git repo data is stored on a Docker host running GitLab?

Related

Docker Container - Update entry point file

I have a Docker container on Ubuntu, initially set up by a professional developer (I'm a newbie with this docker/container thing), and I'm trying to start it.
While starting the container using docker start 16e5e9280bfe -a I get this error:
bash: startup.sh: No such file or directory
here is the list of containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16e5e9280bfe 9fc1df773d19 "bash startup.sh" 9 months ago Exited (127) 53 seconds ago mystifying_kapitsa
The command bash startup.sh has a problem: docker is not able to find the startup.sh file (probably it got deleted), and I don't know where this file should be created (I need your help in this regard).
I tried to edit the config.v2.json file (removing the CMD and ARG parameters) under /var/lib/docker/containers/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39/, but when I start the container it gets overwritten by the system and startup.sh appears again in the file.
In case you need to know, I have many overlays:
drwx------ 3 root root 4096 Mar 15 09:02 09a7c19f2a8e478b75d8362915c9f324fca4a0a02e24637a1e636017ac94306d/
drwx------ 3 root root 4096 Mar 15 09:02 1c8a6778fe9c2285a0fb9497cf033f70c04a16d66e644f93e4d332b7f68e1b5a/
drwx------ 3 root root 4096 Jul 14 2019 1d0c600d79c41bf9b5554fa928a297ac3a359fbff1c8ef903c40809b913ea640/
drwx------ 3 root root 4096 Jul 14 2019 407310e8d2aefdb76bd01fa1675eef1b7512885532703c9f658c14fb9bd16b5d/
drwx------ 3 root root 4096 Mar 15 09:02 63a54717c69963a50921d0c63931674c8547ad032a13de29f585df956e3e8fa5/
drwx------ 3 root root 4096 Feb 27 2018 65a84394f71682bcf7eb92bb77e78525dc594b8688726e2fc2e125fe7c762f4d/
drwx------ 3 root root 4096 Mar 15 09:02 68b4eb941a39414a13e1d20d623f1fa65951cf688e3ef221e3ce7ebb4a3bb5a1/
drwx------ 3 root root 4096 Oct 19 2019 758f8ce11982261aae7c2200e421952f155742cbee4421d73c2ee822b6a44d6d/
drwx------ 3 root root 4096 Mar 4 2018 7cfe53f80077d076c046ffe12bb189c0ae8397c02879e4ab82dfb2970708ff7b/
drwx------ 3 root root 4096 Feb 27 2018 7ea1bc5aab7bdbe917daa8ab70f02a1bbdd5fef3ccd7b5865d5f0b65cf188168/
drwx------ 3 root root 4096 Feb 27 2018 8726b7fc216e2caf30bc6bdfd67aac681c076fe016a3078093a327de0eb86f71/
drwx------ 3 root root 4096 Oct 19 2019 a051306523973e4bb6942c9d9bb58d39fe55e5a4d8ba69bd907285d321f8c361/
drwx------ 4 root root 4096 Mar 15 12:06 a1af7c75c5d4bdd231d5494618851ba1226adf91879e7091cf03313d8b97b89a/
drwx------ 4 root root 4096 Mar 15 07:26 a1af7c75c5d4bdd231d5494618851ba1226adf91879e7091cf03313d8b97b89a-init/
drwx------ 3 root root 4096 Mar 15 09:02 aeea6b5c888be7896a298965b7163ea14343e3bf4bb5ccb8cd2a839cba66e62d/
drwx------ 3 root root 4096 Oct 19 2019 b44ca2240ee9a220eca0598a2f747ad1dfeb439019363189cbec85fb69a74775/
drwx------ 3 root root 4096 Feb 27 2018 bb8c3313a4e30681ac71c8e0279ed72ea94d4fbcb1f6cf6144ac98a238e3df34/
drwx------ 3 root root 4096 Mar 15 09:02 cdbbda1e3e039677378745b5e0a971fabc78d7ca37c6b3c15da45a54037da57b/
drwx------ 3 root root 4096 Feb 27 2018 d1edbc1173ed75f9fc4b800893975bdf3c6f2440f8483fb9e5acb817f19a7e45/
drwx------ 3 root root 4096 Feb 27 2018 e8b6a178f59cfa58f9821b555fbe28ae25ee64a22525271c5d8507dbaa41d553/
drwx------ 3 root root 4096 Feb 27 2018 eb14f3c333daad51203a19145d00d484862c2443f02ad711a28b9bad3bbdf08e/
drwx------ 3 root root 4096 Jul 14 2019 f598570a29c2ba8a452969191bb362431d62ef33a81da9bd4ac2aabaac2027da/
drwx------ 3 root root 4096 Feb 27 2018 ffe7c7b1cc80b55698a1e7bc355fe48595dc07502a59f261821de01fcbc59f49/
This is the config.v2.json file - you may want to check it:
{"StreamConfig":{},"State":{"Running":false,"Paused":false,"Restarting":false,"OOMKilled":false,"RemovalInProgress":false,"Dead":false,"Pid":0,"ExitCode":127,"Error":"","StartedAt":"2021-03-15T10:49:57.080556832Z","FinishedAt":"2021-03-15T10:49:57.114871213Z","Health":null},"ID":"16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39","Created":"2020-05-31T17:01:37.405644454Z","Managed":false,"Path":"bash","Args":["startup.sh"],"Config":{"Hostname":"16e5e9280bfe","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"21/tcp":{},"22/tcp":{},"443/tcp":{},"80/tcp":{}},"Tty":true,"OpenStdin":true,"StdinOnce":false,"Env":["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","FFMPEG_VERSION=3.3.6","FDKAAC_VERSION=0.1.5","LAME_VERSION=3.99.5","LIBASS_VERSION=0.13.7","OGG_VERSION=1.3.2","OPENCOREAMR_VERSION=0.1.4","OPUS_VERSION=1.2","OPENJPEG_VERSION=2.1.2","THEORA_VERSION=1.1.1","VORBIS_VERSION=1.3.5","VPX_VERSION=1.7.0","X264_VERSION=20170226-2245-stable","X265_VERSION=2.3","XVID_VERSION=1.3.4","FREETYPE_VERSION=2.5.5","FRIBIDI_VERSION=0.19.7","FONTCONFIG_VERSION=2.12.4","LIBVIDSTAB_VERSION=1.1.0","KVAZAAR_VERSION=1.2.0","SRC=/usr/local","LD_LIBRARY_PATH=/usr/local/lib"],"Cmd":["startup.sh"],"Image":"9fc1df773d19","Volumes":{"/var/log/":{}},"WorkingDir":"/var/www/html","Entrypoint":["bash"],"OnBuild":null,"Labels":{}},"Image":"sha256:9fc1df773d198694f22f33c823ea8a05db78dcc7ea787ffafdc6ee95008bcbab","NetworkSettings":{"Bridge":"","SandboxID":"e32693eb6d1f685f8a77187c9f9713558d49248bc47ab6b8a97045ad37856a3e","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Networks":{"bridge":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"1dfe0ae53916827fbc1a6fe18387a7653f48cdc445b823cc3d42cce04a8ac242","EndpointID":"","Gateway":"","IPAddress":"","IPPrefixLen":0,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"","DriverOpts":null,"IPAMOperational":false}},"Service":null,"Ports":null,"SandboxKey":"/var/run/docker/netns/e32693eb6d1f","SecondaryIPAddresses":null,"SecondaryIPv6Addresses":null,"IsAnonymousEndpoint":true,"HasSwarmEndpoint":false},"LogPath":"/var/lib/docker/containers/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39-json.log","Name":"/mystifying_kapitsa","Driver":"overlay","OS":"linux","MountLabel":"","ProcessLabel":"","RestartCount":0,"HasBeenStartedBefore":true,"HasBeenManuallyStopped":false,"MountPoints":{"/var/log":{"Source":"","Destination":"/var/log","RW":true,"Name":"5154e45a0f7d2fe5ce97d406f496474f9247e9413e4fc3ce076b59a45014c60c","Driver":"local","Type":"volume","Spec":{},"SkipMountpointCreation":false},"/var/www/demo":{"Source":"/var/www/demo","Destination":"/var/www/demo","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/var/www/demo","Target":"/var/www/demo"},"SkipMountpointCreation":false},"/var/www/html":{"Source":"/var/www/html","Destination":"/var/www/html","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/var/www/html","Target":"/var/www/html/"},"SkipMountpointCreation":false}},"SecretReferences":null,"ConfigReferences":null,"AppArmorProfile":"docker-default","HostnamePath":"/var/lib/docker/containers/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39/hostname","HostsPath":"/var/lib/docker/containers/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32
817e281d282f39/hosts","ShmPath":"","ResolvConfPath":"/var/lib/docker/containers/16e5e9280bfea319c5094cddb4b2da71b6e461be824b119c32817e281d282f39/resolv.conf","SeccompProfile":"","NoNewPrivileges":false}
I'm using Ubuntu 18.04.3 LTS
Thank you!
You need to find where the startup script is in the overlays of the image, the container, and the attached volumes. You will need to find the last version (last layer) where the file was changed and modify it there.
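For completeness, a brute-force way to look for it from the host (a sketch; requires root and assumes the overlay storage driver, which your config.v2.json shows):
sudo find /var/lib/docker/overlay* -name startup.sh 2>/dev/null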
However, this is not something to be done by someone who does not have an in-depth understanding of the docker engine.
But you don't need to do all of this just to get into a stopped/failed container.
If you need access to the container, what you can do is create an image from the container:
docker commit mystifying_kapitsa my_container:latest
Once you have the image, you can use it as a base for new images, or you can create a new container from the image like this:
docker run -ti --name i_am_in my_container:latest /bin/bash
(Note that this must be docker run, not docker exec; exec only attaches to an already running container.)
This will give you a shell inside a container created from that image, and all the data that was in mystifying_kapitsa will be in it.
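As an alternative (a sketch; the image ID 9fc1df773d19 is taken from your listing), you can skip the commit and start a fresh container from the original image with the entrypoint overridden, then look for the script:
docker run -it --entrypoint bash 9fc1df773d19
# inside the container:
find / -name startup.sh 2>/dev/null
Note that your config.v2.json bind-mounts /var/www/html from the host and uses it as the working directory, so bash startup.sh resolves there; the script may simply need to exist as /var/www/html/startup.sh on the host.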

docker-compose mounted volume is empty, but other volumes created during Docker image build are populated

Starting with an empty directory, I created this docker-compose.yml:
version: '3.9'
services:
  neo4j:
    image: neo4j:3.2
    restart: unless-stopped
    ports:
      - 7474:7474
      - 7687:7687
    volumes:
      - ./conf:/conf
      - ./data:/data
      - ./import:/import
      - ./logs:/logs
      - ./plugins:/plugins
    environment:
      # Raise memory limits
      - NEO4J_dbms_memory_pagecache_size=1G
      - NEO4J_dbms_memory_heap_initial__size=1G
      - NEO4J_dbms_memory_heap_max__size=1G
Then I add the import directory, which contains data files I intend to work with in the container.
At this point, my directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
I run docker-compose up -d --build, and the container is built. Now the local directory looks like this:
0 drwxr-xr-x 9 cc staff 288 Dec 11 18:57 .
0 drwxr-xr-x 5 cc staff 160 Dec 11 18:15 ..
0 drwxr-xr-x 2 cc staff 64 Dec 11 13:59 conf
0 drwxrwxrwx@ 4 cc staff 128 Dec 11 18:08 data
8 -rw-r--r-- 1 cc staff 458 Dec 11 18:45 docker-compose.yml
0 drwxr-xr-x 20 cc staff 640 Dec 11 18:57 import
0 drwxrwxrwx@ 3 cc staff 96 Dec 11 13:59 logs
0 drwxr-xr-x 3 cc staff 96 Dec 11 15:32 plugins
The conf, data, logs, and plugins directories are created.
data and logs are populated from the build of the Neo4j image, and conf and plugins are empty, as expected.
I use docker exec to look at the directory structures on the container:
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 .
8 drwxr-xr-x 1 root root 4096 May 11 2019 ..
36 -rwxrwxrwx 1 neo4j neo4j 36005 Feb 18 2019 LICENSE.txt
128 -rwxrwxrwx 1 neo4j neo4j 130044 Feb 18 2019 LICENSES.txt
12 -rwxrwxrwx 1 neo4j neo4j 8493 Feb 18 2019 NOTICE.txt
4 -rwxrwxrwx 1 neo4j neo4j 1594 Feb 18 2019 README.txt
4 -rwxrwxrwx 1 neo4j neo4j 96 Feb 18 2019 UPGRADE.txt
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 bin
4 drwxr-xr-x 2 neo4j neo4j 4096 Dec 11 23:46 certificates
8 drwx------ 1 neo4j neo4j 4096 Dec 11 23:46 conf
0 lrwxrwxrwx 1 root root 5 May 11 2019 data -> /data
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 import
8 drwx------ 1 neo4j neo4j 4096 May 11 2019 lib
0 lrwxrwxrwx 1 root root 5 May 11 2019 logs -> /logs
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 plugins
4 drwx------ 1 neo4j neo4j 4096 Feb 18 2019 run
My problem is that the import directory in the container is empty. The data and logs directories are not empty though.
The data and logs directories on my local machine have extended attributes, which conf and plugins do not:
xattr -l data
com.docker.grpcfuse.ownership: {"UID":100,"GID":101}
The only difference I can identify is that those directories had data created by docker-compose when it pulled the Neo4j image.
Does anyone understand what is happening here, and can you tell me how to get this to work? I'm using Mac OS X 10.15 and docker-compose version 1.27.4, build 40524192.
Thanks.
TL;DR: your current setup probably works fine.
To walk through the specific behavior you're observing:
On container startup, Docker will create empty directories on the host if they don't exist, and empty mount-point directories inside the container, which is why those directories appear.
Docker never copies data from an image into a bind mount. This behavior only happens for named volumes (and only the very first time you use them, not on later runs; and only on native Docker, not on Kubernetes).
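You can see the difference in isolation (a sketch; the volume and directory names here are made up):
# named volume: populated from the image content on first use
docker run --rm --entrypoint ls -v neo4j_demo:/var/lib/neo4j neo4j:3.2 /var/lib/neo4j
# bind mount: the host directory stays empty and hides the image content
mkdir -p empty
docker run --rm --entrypoint ls -v "$PWD/empty":/var/lib/neo4j neo4j:3.2 /var/lib/neo4j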
But, the standard database images generally know how to initialize an empty data directory. In the case of the neo4j image, its Dockerfile ends with an ENTRYPOINT directive that runs at container startup; that docker-entrypoint.sh knows how to do various sorts of first-time setup. That's how data gets into ./data.
The image also declares a WORKDIR /var/lib/neo4j (via an intermediate environment variable). That explains, in your ls -l listing, why there are symlinks like data -> /data. Your bind mount is to /import, but if you docker-compose exec neo4j ls import, it will look relative to that WORKDIR, which is why the directory looks empty.
But, the entrypoint script specifically looks for a /import directory inside the container, and if it exists and is readable, it sets an environment variable NEO4J_dbms_directories_import=/import.
This all suggests to me that your setup is correct, and if you try to execute an import, it will work correctly and see your host data. You are looking at a /var/lib/neo4j/import directory from the image, and it's empty, but the image startup knows to also look for /import in the container, and your mount points there.
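One way to convince yourself (a sketch, using the service name from your compose file): list the bind mount by its absolute path rather than the WORKDIR-relative one:
docker-compose exec neo4j ls -la /import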

Failing to get PyCharm to work with remote interpreter on docker

When I add a remote interpreter from one of my docker-compose services, it doesn't seem to succeed and doesn't show any packages in the dialog. When I add the interpreter to the debugger it says:
python packaging tools not found.
Then if I click on install packaging tools, this error is displayed:
ERROR: for dockeryard_pycharm_helpers_1
Cannot start service pycharm_helpers: network not found
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_redis_1 ...
Starting dockeryard_postgres_1 ...
Starting dockeryard_nginx_1 ...
Starting dockeryard_pycharm_helpers_1
Starting dockeryard_redis_1
Starting dockeryard_worker_1 ...
Starting dockeryard_worker_1
Starting dockeryard_pycharm_helpers_1
ERROR: for dockeryard_pycharm_helpers_1 Cannot start service pycharm_helpers: network not found
ERROR: for pycharm_helpers Cannot start service pycharm_helpers: network not found
ERROR:
Note: this interpreter was already in use and I was able to connect remotely with PyCharm, but I had added, and eventually removed, a custom network on the container.
As explained in Configuring Remote Python Interpreters - "When a remote Python interpreter is added, at first the PyCharm helpers are copied to the remote host". My guess is that something went wrong there, since the network was changed in the docker-compose file.
From what I understand from the error message, when PyCharm starts interpreter it tries to use/find that network c7b0cc277c94ba5f58f6e72dcbab1ba24794e72422e839a83ea6102d08c40452.
I don't see that network listed anywhere when I run:
$ docker network inspect dockeryard_default
So PyCharm stores it somewhere, and that record has not been updated with the change.
I have tried to remove interpreter (using PyCharm dialog) and add it back - same result.
How can I get rid of this network and make PyCharm able to debug again?
Thanks.
I was having a near-identical error and was able to get past it. I did two things, though I'm uncertain which was the actual solution:
Made sure the mappings were correct under both (a) Preferences -> Project -> Project Interpreter -> Path mappings and (b) Run -> Edit Configurations -> <Your_Configuration> -> Path mappings
Removed/deleted any containers that looked to be related to PyCharm (I believe this is more than likely what solved things).
Hope this helps. PyCharm docker-compose seems to work for some and be a real PITA for others.
One other note: I downgraded from PyCharm 2018 to 2017.3, as there are known Docker bugs in 2018.
EDIT: And it would seem a docker-compose down from CLI reintroduces the error -_-
TLDR:
The {project_name}_pycharm_helpers_{pycharm_build_number} volume has been removed or is corrupted.
To repopulate it, run:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build_number}
docker run -v {project_name}_pycharm_helpers_{pycharm_build_number}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build_number}
The pycharm_build_number can be found in the About section of your PyCharm (macOS: PyCharm > About).
Long story
I struggled a lot with PyCharm suddenly not finding the helpers any more, and with related bugs, sometimes because I was clearing my containers or volumes. For instance, running
docker rm -f `docker container ps -aq`
docker volume rm $(docker volume ls -q)
will almost surely get PyCharm into trouble.
AFAIK, this is how PyCharm works:
a PyCharm base image named pycharm_helpers with tag corresponding to your pycharm build number (for example: PY-202.7660.27)
the first time you create docker-related things, PyCharm creates volumes that get data from this image for later use in your containers. For instance, after a first attempt at running a remote docker-compose interpreter, I see the newly created myproject_pycharm_helpers_PY-202.7660.27 volume when doing docker volume ls.
when running the docker interpreter, PyCharm mounts this volume into the /opt/.pycharm_helpers directory by adding, at some point, a -v myproject_pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers to your command. For instance, using docker-compose, you can see the addition of -f /Users/clementwalter/Library/Caches/JetBrains/PyCharm2020.2/tmp/docker-compose.override.1508.yml, and when you actually look into this file you see:
version: "3.8"
services:
local:
command:
- "python"
- "/opt/.pycharm_helpers/pydev/pydevconsole.py"
- "--mode=server"
- "--port=55824"
entrypoint: ""
environment:
PYCHARM_MATPLOTLIB_INTERACTIVE: "true"
PYTHONPATH: "/opt/project:/opt/.pycharm_helpers/pycharm_matplotlib_backend:/opt/.pycharm_helpers/pycharm_display:/opt/.pycharm_helpers/third_party/thriftpy:/opt/.pycharm_helpers/pydev"
PYTHONUNBUFFERED: "1"
PYTHONIOENCODING: "UTF-8"
PYCHARM_MATPLOTLIB_INDEX: "0"
PYCHARM_HOSTED: "1"
PYCHARM_DISPLAY_PORT: "63342"
IPYTHONENABLE: "True"
volumes:
- "/Users/clementwalter/Documents/myproject:/opt/project:rw"
- "pycharm_helpers_PY-202.7660.27:/opt/.pycharm_helpers"
working_dir: "/opt/project"
volumes:
pycharm_helpers_PY-202.7660.27: {}
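At this point you can check that the helpers volume exists (a sketch; the filter just matches on the volume name):
docker volume ls --filter name=pycharm_helpers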
You get into trouble when this volume is no longer correctly populated.
Fortunately, the docker volume documentation has a section "Populate a volume using a container", which is exactly what PyCharm does under the hood.
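That mechanism in isolation (a sketch; helpers_demo is a made-up volume name): when an empty named volume is mounted at a path where the image already has content, Docker copies the image's files into the volume on first use:
docker volume create helpers_demo
docker run --rm -v helpers_demo:/opt/.pycharm_helpers pycharm_helpers:PY-202.7660.27 true
# verify the volume was populated:
docker run --rm -v helpers_demo:/opt/.pycharm_helpers alpine ls /opt/.pycharm_helpers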
For the record you can check the content of the pycharm_helpers image:
$ docker run -it pycharm_helpers:PY-202.7660.27 sh
/opt/.pycharm_helpers #
you end up in the pycharm_helpers directory and find all the helpers there:
/opt/.pycharm_helpers # ls -la
total 5568
drwxr-xr-x 21 root root 4096 Dec 17 16:38 .
drwxr-xr-x 1 root root 4096 Dec 17 11:07 ..
-rw-r--r-- 1 root root 274 Dec 17 11:07 Dockerfile
drwxr-xr-x 5 root root 4096 Dec 17 16:38 MathJax
-rw-r--r-- 1 root root 2526 Sep 16 11:14 check_all_test_suite.py
-rw-r--r-- 1 root root 3194 Sep 16 11:14 conda_packaging_tool.py
drwxr-xr-x 2 root root 4096 Dec 17 16:38 coverage_runner
drwxr-xr-x 3 root root 4096 Dec 17 16:38 coveragepy
-rw-r--r-- 1 root root 11586 Sep 16 11:14 docstring_formatter.py
drwxr-xr-x 4 root root 4096 Dec 17 16:38 epydoc
-rw-r--r-- 1 root root 519 Sep 16 11:14 extra_syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 generator3
-rw-r--r-- 1 root root 8 Sep 16 11:14 icon-robots.txt
-rw-r--r-- 1 root root 3950 Sep 16 11:14 packaging_tool.py
-rw-r--r-- 1 root root 1490666 Sep 16 11:14 pip-20.1.1-py2.py3-none-any.whl
drwxr-xr-x 2 root root 4096 Dec 17 16:38 pockets
drwxr-xr-x 3 root root 4096 Dec 17 16:38 profiler
-rw-r--r-- 1 root root 863 Sep 16 11:14 py2ipnb_converter.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py2only
drwxr-xr-x 3 root root 4096 Dec 17 16:38 py3only
drwxr-xr-x 7 root root 4096 Dec 17 16:38 pycharm
drwxr-xr-x 4 root root 4096 Dec 17 16:38 pycharm_display
drwxr-xr-x 3 root root 4096 Dec 17 16:38 pycharm_matplotlib_backend
-rw-r--r-- 1 root root 103414 Sep 16 11:14 pycodestyle.py
drwxr-xr-x 24 root root 4096 Dec 17 16:38 pydev
drwxr-xr-x 9 root root 4096 Dec 17 16:38 python-skeletons
drwxr-xr-x 2 root root 4096 Dec 17 16:38 rest_runners
-rw-r--r-- 1 root root 583493 Sep 16 11:14 setuptools-44.1.1-py2.py3-none-any.whl
-rw-r--r-- 1 root root 29664 Sep 16 11:14 six.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 sphinxcontrib
-rw-r--r-- 1 root root 128 Sep 16 11:14 syspath.py
drwxr-xr-x 3 root root 4096 Dec 17 16:38 third_party
drwxr-xr-x 3 root root 4096 Dec 17 16:38 tools
drwxr-xr-x 5 root root 4096 Dec 17 16:38 typeshed
-rw-r--r-- 1 root root 3354133 Sep 16 11:14 virtualenv-16.7.10-py2.py3-none-any.whl
To make these helpers available again, following the docker documentation, you have to fix the volume. To do so:
docker volume rm {project_name}_pycharm_helpers_{pycharm_build}
docker run -v {project_name}_pycharm_helpers_{pycharm_build}:/opt/.pycharm_helpers pycharm_helpers:{pycharm_build}
et voilĂ 
If you're still seeing this in PyCharm 2020.2, then do this:
close PyCharm
try @peterc's suggestion:
docker ps -a | grep -i pycharm | awk '{print $1}' | xargs docker rm
launch PyCharm again
The option Invalidate Caches -> Clear downloaded shared indexes will also repopulate the PyCharm volumes. (At least in 2021.1.)

How can I access docker data volumes on a Windows machine?

I have docker-compose.yml like this:
version: '3'
services:
  mysql:
    image: mysql
    volumes:
      - data:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=$ROOT_PASSWORD
volumes:
  data:
And my mount point looks like /var/lib/docker/volumes/some_app/_data; I want to access the data from that mount point, and I'm not sure how to do it on a Windows machine. Maybe I can create some additional container which can pass the data from the docker virtual machine to my directory?
When I mount a folder like this:
volumes:
  - ./data:/var/lib/mysql
to use my local directory, I had no success because of a permissions issue, and I read that the "right way" is to use docker volumes.
UPD: the MySQL container is just an example. I want to use this behaviour for my codebase and use docker for local development.
For Linux containers under Windows, docker actually runs inside a Linux virtual machine, so your named volume is a mapping of a local directory in that VM to a directory in the container.
So what you got as /var/lib/docker/volumes/some_app/_data is a directory inside that VM. To inspect it, you can run:
docker run --rm -it -v /:/vm-root alpine:edge ls -l /vm-root/var/lib/docker/volumes/some_app/_data
total 188476
-rw-r----- 1 999 ping 56 Jun 4 04:49 auto.cnf
-rw------- 1 999 ping 1675 Jun 4 04:49 ca-key.pem
-rw-r--r-- 1 999 ping 1074 Jun 4 04:49 ca.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 client-cert.pem
-rw------- 1 999 ping 1679 Jun 4 04:49 client-key.pem
-rw-r----- 1 999 ping 1321 Jun 4 04:50 ib_buffer_pool
-rw-r----- 1 999 ping 50331648 Jun 4 04:50 ib_logfile0
-rw-r----- 1 999 ping 50331648 Jun 4 04:49 ib_logfile1
-rw-r----- 1 999 ping 79691776 Jun 4 04:50 ibdata1
-rw-r----- 1 999 ping 12582912 Jun 4 04:50 ibtmp1
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 mysql
drwxr-x--- 2 999 ping 4096 Jun 4 04:49 performance_schema
-rw------- 1 999 ping 1679 Jun 4 04:49 private_key.pem
-rw-r--r-- 1 999 ping 451 Jun 4 04:49 public_key.pem
-rw-r--r-- 1 999 ping 1078 Jun 4 04:49 server-cert.pem
-rw------- 1 999 ping 1675 Jun 4 04:49 server-key.pem
drwxr-x--- 2 999 ping 12288 Jun 4 04:49 sys
That runs an auxiliary container which has the whole root filesystem of that VM, /, mounted into the container directory /vm-root.
To get some file out, run the container with some command in the background (tail -f /dev/null in my case); then you can use docker cp:
docker run --name volume-holder -d -it -v /:/vm-root alpine:edge tail -f /dev/null
docker cp volume-holder:/vm-root/var/lib/docker/volumes/volumes_data/_data/public_key.pem .
If you want transparent SSH into that VM, it seems that is not supported yet, as of Jun-2017. A docker staff member said so here.
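Another option (a sketch; the volume name some_app comes from your mount path above) is to stream the whole volume out as a tar archive, without keeping a helper container around; run it from a shell that handles binary redirection cleanly (e.g. Git Bash):
docker run --rm -v some_app:/data alpine tar -C /data -cf - . > some_app.tar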

DSE upgrade deleted datastax-agent conf folder

I'm running a 3-node cluster in AWS. Yesterday, I upgraded my cluster from DSE 4.7.3 to 4.8.0.
After the upgrade, the datastax-agent service is no longer registered and the /usr/share/datastax-agent/conf folder has been removed.
PRE-UPGRADE:
$ ls -alr
total 24836
drwxrwxr-x 3 cassandra cassandra 4096 Aug 10 14:57 tmp
drwxrwxr-x 2 cassandra cassandra 4096 Aug 10 14:56 ssl
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 15:14 doc
-rw-r--r-- 1 cassandra cassandra 25402305 Jul 14 18:55 datastax-agent-5.2.0-standalone.jar
drwxrwxr-x 2 cassandra cassandra 4096 Sep 28 18:23 conf
drwxrwxr-x 3 cassandra cassandra 4096 Sep 28 18:13 bin
drwxr-xr-x 118 root root 4096 Oct 2 18:02 ..
drwxrwxr-x 7 cassandra cassandra 4096 Oct 7 19:03 .
POST-UPGRADE:
$ ls -al
total 24976
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 .
drwxr-xr-x 114 root root 4096 Oct 5 18:23 ..
drwxr-xr-x 3 cassandra cassandra 4096 Oct 5 20:45 bin
-rw-r--r-- 1 cassandra cassandra 25562841 Sep 10 20:43 datastax-agent-5.2.1-standalone.jar
Also, the /etc/init.d/datastax-agent file has been deleted. I don't know how I'm supposed to start/stop the service now.
Can I restore the files from the rollback directory? What effect will that have?
In this particular case, what happened was that the dpkg install found a preexisting /etc/init.d/datastax-agent file and only put /etc/init.d/datastax-agent.fpk.bak into place. A "sudo dpkg -P datastax-agent" followed by a "sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb" fixed the issue. We had to first kill the already-running agent processes and then do a service restart.
I will investigate how that could have happened... that's still a little bit of a mystery to me.
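For reference, the fix described above as a command sequence (a sketch; the kill step is approximate, and the package path is the one quoted above):
sudo pkill -f datastax-agent   # stop the already-running agent processes
sudo dpkg -P datastax-agent
sudo dpkg -i /usr/share/dse/datastax-agent/datastax-agent_5.2.1_all.deb
sudo service datastax-agent restart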
