Docker run: binding the container's pip site-packages for persistent pip installs

I am trying to bind my Docker container's pip site-packages directory to my host computer, but on the host the directory is empty.
docker run -d \
-p 8080:8080 \
--name "ml-workspace" \
-v "/${PWD}:/workspace" \
--mount type=bind,source=/opt/conda/lib/python3.7/site-packages,target=/workspace/cache/site-packages \
--env AUTHENTICATE_VIA_JUPYTER="password" \
--shm-size 512m \
--restart always \
dagshub/ml-workspace:latest
I am also trying to bind the VS Code extensions directory with --mount type=bind,source=/root/.vscode/extensions,target=/workspace/cache/extensions, but Docker doesn't seem to like the . in the path.
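Note that for --mount type=bind, source is always a path on the host and target is the path inside the container, so the command above mounts a host directory over /workspace/cache/site-packages rather than exposing the container's /opt/conda/... directory to the host. One alternative for persisting pip installs (a hedged sketch, where pip-cache is a hypothetical volume name) is a named volume, which Docker pre-populates with the image's existing site-packages the first time it is mounted:
docker run -d \
-p 8080:8080 \
--name "ml-workspace" \
-v "/${PWD}:/workspace" \
# pip-cache is a hypothetical named volume; Docker copies the image's existing site-packages into it on first use
-v pip-cache:/opt/conda/lib/python3.7/site-packages \
--env AUTHENTICATE_VIA_JUPYTER="password" \
--shm-size 512m \
--restart always \
dagshub/ml-workspace:latest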

Related

Docker volumes shared with the Jupyter container "lock" the folder

I'm testing Docker Desktop 4.10 on Ubuntu 22.04.
Let's say that I want to run a Jupyter Notebook:
docker run -it \
-v "${PWD}":/home/jovyan/work \
-p 8888:8888 \
jupyter/base-notebook
By doing so, I experience the "Permission denied" error while attempting to create a new notebook in the "work" directory.
Starting the notebook with chown options allows me to solve the problem:
docker run -it \
-v "${PWD}":/home/jovyan/work \
-p 8888:8888 \
--user root \
-e CHOWN_HOME="yes" \
-e CHOWN_HOME_OPTS="-R" \
jupyter/base-notebook
This solution has the drawback of changing the permissions of the folder: I see a padlock on the folder and cannot delete the contents, even after removing the container. In particular, Owner is now "user #100999" and Group "100099".
I'm looking for alternative solutions with no impact on OS permissions. Thanks.
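As an aside, if the bind-mounted folder has already been chowned by the container, ownership on the host can usually be restored with a plain chown run in the directory that was mounted (standard chown usage, not something from the original question):
# restore ownership of the current (previously mounted) directory to your host user
sudo chown -R "$(id -u):$(id -g)" .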
You can tell Docker to use the same owner and group as the directory being mounted.
docker run -it \
-v "${PWD}":/home/jovyan/work \
-p 8888:8888 \
--user "$(stat -c '%u:%g' $PWD)" \
--group-add users \
jupyter/base-notebook
The stat command is used to retrieve the user ID and group ID for the current directory's owner.
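To make that concrete, on a directory owned by UID/GID 1000 the substitution would expand roughly like this (the numbers depend on your machine, and this is GNU stat syntax; BSD/macOS stat uses different flags):
$ stat -c '%u:%g' "$PWD"
1000:1000
So the container process runs as 1000:1000, and files created under /home/jovyan/work keep the host directory's ownership.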

Could not find `Cargo.toml` when running Cargo in a Docker container

I am using this command:
docker run --rm -v "$(pwd)":/code \
--mount type=volume,source="$(basename "$(pwd)")_cache",target=/code/target \
--mount type=volume,source=registry_cache,target=/usr/local/cargo/registry \
cosmwasm/rust-optimizer:0.12.4
I keep running into this error:
error: could not find Cargo.toml in /code or any parent directory
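A quick way to check what the container actually sees under /code (a diagnostic sketch, not part of the original post) is to list the mount with a throwaway container; if Cargo.toml is missing from the output, the command was run from a directory other than the crate root, or the path is not shared with Docker:
docker run --rm -v "$(pwd)":/code alpine ls /code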

GCC builds under a TeamCity Docker agent

I'm trying out TeamCity to build GCC binaries with Docker agents on CentOS. I set up a Docker agent to connect to the builder2 TeamCity server.
$ docker pull jetbrains/teamcity-agent
$ mkdir -p /mnt/builders/teamcity/agent1/conf
$ mkdir -p /mnt/builders/teamcity/agent/work
$ mkdir -p /mnt/builders/teamcity/agent/system
docker run -it --name agent1 \
-e SERVER_URL="http://builder2:8111" \
-e AGENT_NAME="builder2_agent1" \
--hostname builder2_agent \
--dns="xx.xxx.xx.xx" \
-v /mnt/builders/teamcity/agent1/conf:/data/teamcity_agent/conf \
-v /mnt/builders/teamcity/agent/work:/opt/buildagent/work \
-v /mnt/builders/teamcity/agent/system:/opt/buildagent/system \
--network='BuilderNetwork' \
jetbrains/teamcity-agent
All of that works fine, but in order to run a build you must enable the devtoolset like this:
scl enable devtoolset-10 "/bin/bash"
$ which make
/opt/rh/devtoolset-10/root/usr/bin/make
So how is this done with a Docker agent? Should these tools be built into the image, or do you expose the /opt/rh dir to the container? And if you were to expose the volume, how do you install /usr/bin/scl (i.e. the RH package scl-utils-20130529-19.el7.x86_64) into the Docker container? Does it even make sense to run an agent in Docker for this?
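One possible direction (a rough sketch only, assuming a CentOS 7 base; the stock jetbrains/teamcity-agent image may use a different base distribution, so in practice the same packages would be layered onto whatever base the agent image actually uses) is to bake the toolset into the image rather than bind-mounting /opt/rh from the host:
FROM centos:7
# centos-release-scl enables the Software Collections repos that provide the devtoolset packages
RUN yum install -y centos-release-scl scl-utils && \
    yum install -y devtoolset-10
# build steps can then invoke the toolset non-interactively, e.g.:
#   scl enable devtoolset-10 "make"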

Jenkins location in docker, not at /var/jenkins_home

I'm installing Jenkins via docker by following this official guide.
After running command:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v jenkins-data:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
I'm expecting Jenkins to be installed at /var/jenkins_home, but instead it is being installed at /var/lib/docker/volumes/jenkins-data.
Also there is no such folder as /var/jenkins_home.
Am I missing something? Please suggest.
Thank You
/var/jenkins_home is inside the container. You are using a named volume, and that's why it's located in /var/lib/docker/volumes/jenkins-data.
Instead, you can use a host bind mount as below to ensure you get the data in /var/jenkins_home on the host machine:
docker run \
-u root \
--rm \
-d \
-p 8080:8080 \
-p 50000:50000 \
-v /var/jenkins_home:/var/jenkins_home \
-v /var/run/docker.sock:/var/run/docker.sock \
jenkinsci/blueocean
The path for a host bind mount has to be absolute; otherwise Docker treats it as a named volume.
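To see where the data for a named volume actually lives on the host (an illustrative check, not part of the original answer), docker volume inspect prints the mount point:
$ docker volume inspect jenkins-data --format '{{ .Mountpoint }}'
/var/lib/docker/volumes/jenkins-data/_data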

How do I choose a folder to store files from Docker?

I've been working on this for hours, but I've mainly found answers relating to Linux.
I'm running Docker on Windows 10, and I'm trying to install some container images from LinuxServer.
I can do a basic setup (following a guide that installs Jackett in a similar way):
docker create --name=jackett \
--restart=always \
-v /home/docker/jackett/config:/config \
-v /home/docker/jackett/downloads:/downloads \
-e PGID=1001 -e PUID=1001 \
-e TZ=Europe/London \
-p 9117:9117 \
linuxserver/jackett
But I don't understand how to select one of the shared drives I set up, and I have no idea where /home/... is on my hard drive.
How would I set this up to save config and downloads in say:
H:\Documents\Configs
Docker volume definitions come in pairs:
-v left_side:right_side
The left side is the full path of a directory local to the machine where you are executing your docker command (your laptop or server), whereas the right side is the same directory as viewed from inside the freshly launched container. That is, you are mounting a local directory into your container so it can read and write to it, persisting changes even after the container is killed off.
For example, if I want to make visible to my app this directory on my laptop:
/some/full/path/local/dir
and the app should see it as the path
/whatever/dir
then the syntax would look like this:
docker ... skip settings ... -v /some/full/path/local/dir:/whatever/dir
My guess is that if your host machine is MS Windows, you should use Windows \ separators on the host side rather than Linux / separators:
docker ... skip settings ... -v c:\some\full\path\local\dir:/whatever/dir
So this would be the Linux host syntax:
docker create --name=jackett \
--restart=always \
-v /some/config/dir:/config \
-v /some/config/dir:/downloads \
-e PGID=1001 -e PUID=1001 \
-e TZ=Europe/London \
-p 9117:9117 \
linuxserver/jackett
whereas this is the MS Windows syntax, using \ instead of / as the separator:
docker create --name=jackett \
--restart=always \
-v H:\\Documents\\Configs:/config \
-v H:\\Documents\\Configs:/downloads \
-e PGID=1001 -e PUID=1001 \
-e TZ=Europe/London \
-p 9117:9117 \
linuxserver/jackett
UPDATE - notice the double \, which might work on Windows since a single \ just means escape the following character. Also, leave the last line above (linuxserver/jackett) as is; that is not a path, it's the Docker image name.
On Ubuntu I just ran the command below just fine:
docker create --name=jackett_stens \
--restart=always \
-v /home/khufu/src/config:/config \
-v /home/khufu/src/config:/downloads \
-e PGID=1001 -e PUID=1001 \
-e TZ=Europe/London \
-p 9117:9117 \
linuxserver/jackett
The output of the above is:
khufu@jill ~ $ docker create --name=jackett_stens \
> --restart=always \
> -v /home/khufu/src/config:/config \
> -v /home/khufu/src/config:/downloads \
> -e PGID=1001 -e PUID=1001 \
> -e TZ=Europe/London \
> -p 9117:9117 \
> linuxserver/jackett
Unable to find image 'linuxserver/jackett:latest' locally
latest: Pulling from linuxserver/jackett
f2233041f557: Already exists
53bd17864f23: Pull complete
02efc09c990b: Pull complete
14b057e5c85e: Pull complete
7e03e93fc218: Pull complete
9825bf39efb1: Pull complete
0a74d4d4cac0: Pull complete
34451e5c900f: Pull complete
5453d859f994: Pull complete
d9976cfaf0ba: Pull complete
09ccdb48553d: Pull complete
Digest: sha256:b624cbc75efb40d7dab9a2095653988632a4773ad86e0f5ee2edd877e4178678
Status: Downloaded newer image for linuxserver/jackett:latest
dddfae776bfc32c3a55de1ddc08c04e2574ecb3c950ba9bb88f477e3e240121e
OK, so the above worked; I then started the container using
docker start jackett_stens
and confirmed it is running by issuing
docker ps
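If you want to double-check that the two -v mappings ended up where you expect (an optional verification step, not from the original answer), docker inspect can print the container's mounts:
docker inspect --format '{{ json .Mounts }}' jackett_stens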
