vscode containerEnv not working in mounts - dart

I'm using the vscode command Remote-Containers: Open Folder in Container...
I'm trying to bind mount a file into the docker container:
~/.config/dart/pub-tokens.json
The host file is under my HOME directory and I need it mounted in the same location within the container's HOME directory.
Here is my mount command from the vscode devcontainer.json
"mounts": [
"source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
]
Note the 'containerEnv' in the target clause.
Launching the container via the vscode Remote-Containers: Open Folder in Container... command
produces the following error (for readability I've added some newlines):
Start: Run: docker run --sig-proxy=false -a STDOUT -a STDERR
--mount type=bind,source=/home/bsutton/git/onepub/onepub,target=/workspaces/onepub
--mount source=/home/bsutton/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached
--mount source=/home/bsutton/.onepub/onepub.yaml,target=${containerEnv:HOME}/.onepub/onepub.yaml,type=bind,consistency=cached
--mount type=volume,src=vscode,dst=/vscode -l devcontainer.local_folder=/home/bsutton/git/onepub/onepub
--entrypoint /bin/sh vsc-onepub-7ff341664d5755895634c2f74983ff45-uid -c echo Container started
docker: Error response from daemon:
invalid mount config for type "bind": invalid mount path: '${containerEnv:HOME}/.config/dart/pub-tokens.json' mount path must be absolute.
It would appear that vscode isn't expanding the containerEnv variable.
If I replace containerEnv with localEnv it does get expanded (but to the wrong path).
i.e. the following works:
"mounts": [
"source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${localEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
]
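A possible workaround (just a sketch, not a confirmed fix): since the complete devcontainer.json below sets "remoteUser": "vscode", the container-side home directory should be /home/vscode, so the target can be hard-coded instead of relying on ${containerEnv:HOME}:
"mounts": [
  "source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=/home/vscode/.config/dart/pub-tokens.json,type=bind,consistency=cached"
]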
Here is the complete devcontainer.json
// For format details, see https://aka.ms/devcontainer.json. For config options, see the README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.245.0/containers/ubuntu
{
  "name": "Ubuntu",
  "build": {
    "dockerfile": "Dockerfile",
    // Update 'VARIANT' to pick an Ubuntu version: jammy / ubuntu-22.04, focal / ubuntu-20.04, bionic / ubuntu-18.04
    // Use ubuntu-22.04 or ubuntu-18.04 on local arm64/Apple Silicon.
    "args": { "VARIANT": "ubuntu-22.04" }
  },
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  // "postCreateCommand": "uname -a",
  // Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode",
  "features": {
    "git": "latest",
    "github-cli": "latest"
  },
  "mounts": [
    "source=${localEnv:HOME}/.config/dart/pub-tokens.json,target=${containerEnv:HOME}/.config/dart/pub-tokens.json,type=bind,consistency=cached",
    "source=${localEnv:HOME}/.onepub/onepub.yaml,target=${containerEnv:HOME}/.onepub/onepub.yaml,type=bind,consistency=cached"
  ]
}

Related

Identify multiple VSCode devcontainers in the same remote docker context

All my team members use the same server as docker remote context. I have set up a project using VSCode-Devcontainer with a devcontainer.json like this:
{
  "name": "MyProject - DevContainer",
  "dockerFile": "../Dockerfile",
  "context": "..",
  "workspaceMount": "source=vsc-myprojekt-${localEnv:USERNAME},target=/workspace,type=volume",
  "workspaceFolder": "/workspace",
  "extensions": [
    "ms-python.python",
    "ms-python.vscode-pylance"
  ],
  "postCreateCommand": "/opt/entrypoint.sh",
  "mounts": [
    "source=/media/Pool/,target=/Pool,type=bind",
    "source=cache,target=/cache,type=volume"
  ]
}
This worked fine for me, but now that my colleagues are starting their devcontainers, we have the problem that a newly started devcontainer kills other already running devcontainers.
We found that the local folder of the project seems to be the way already running devcontainers are identified:
[3216 ms] Start: Run: docker ps -q -a --filter label=devcontainer.local_folder=d:\develop\myproject
[3839 ms] Start: Run: docker inspect --type container 8ca7d3a44662
[4469 ms] Start: Removing Existing Container
As we all use the same path, this identification based on the local folder is problematic. Is there a way to use other labels?
It seems to be a bug; the issue I opened was accepted as a bug report.

VSCode DevContainers: How do you mount a home file on both Mac and Windows

I am using VSCode devcontainers; how do you write a mounts section in devcontainer.json that is compatible with both Windows and Mac? I have a problem with the source=... part under the "mounts" section.
// For format details, see https://aka.ms/vscode-remote/devcontainer.json or this file's README at:
// https://github.com/microsoft/vscode-dev-containers/tree/v0.106.0/containers/python-3
{
  "name": "Python 3",
  "context": "..",
  "dockerFile": "Dockerfile",
  // Set *default* container specific settings.json values on container create.
  "settings": {
    "terminal.integrated.shell.linux": "/bin/bash",
    "python.pythonPath": "/usr/local/bin/python",
    "python.linting.enabled": true,
    "python.linting.pylintEnabled": true,
    "python.linting.pylintPath": "venv/bin/pylint",
  },
  // Change <username> to user path (Ex. /Users/vfrank/ on a MAC)
  "mounts": [
    "source=<full home path>/.aws/credentials,target=/home/vscode/.aws/credentials,type=bind,consistency=cached"
  ],
  // Add the IDs of extensions you want installed when the container is created.
  "extensions": [
    "ms-python.python"
  ],
  // Use 'forwardPorts' to make a list of ports inside the container available locally.
  // "forwardPorts": [],
  // Use 'postCreateCommand' to run commands after the container is created.
  "postCreateCommand": "echo 'done'",
  // Uncomment to connect as a non-root user. See https://aka.ms/vscode-remote/containers/non-root.
  "remoteUser": "vscode"
}
This works on Mac but not on Windows:
"mounts": [
"source=${localEnv:HOME}${localEnv:USERPROFILE}/.aws/credentials,target=/home/vscode/.aws/credentials,type=bind,consistency=cached"
]
This works on Windows but not on Mac:
"mounts": [
"source=~/.aws/credentials,target=/home/vscode/.aws/credentials,type=bind,consistency=cached"
]
I have Docker file sharing set up for the C drive on Windows and /Users on Mac, but the error I get when using devcontainers is that the folder or file does not exist. I can make it work on each computer separately, so it is not related to file permissions or Docker access.
I am looking for a single source=... value that works on both Windows (10+) and Mac.
If you do this:
"source=${localEnv:HOME}${localEnv:USERPROFILE}\\.aws\\credentials,target=/root/.aws/credentials,type=bind,consistency=cached"
Then only one expansion will be non-empty on each platform and you will get the result you want.

Delete a file before starting a docker container

I'm using this Graphite Docker Image and it is using an entrypoint called "/entrypoint" and I think no additional command: https://hub.docker.com/r/graphiteapp/graphite-statsd/dockerfile
Now my goal is to delete a specific file every time the container starts, BEFORE the script /entrypoint runs.
I tried to override the default entrypoint with --entrypoint "rm -f /opt/graphite/path/to/file; /entrypoint", but then the container keeps restarting. This is the result (docker inspect):
[
  {
    "Id": "dcd0f2ba87fe3aae8089b40ea3e350c51750cb1cc2f890b066278e2cb52ce013",
    "Created": "2020-07-16T03:31:29.76953014Z",
    "Path": "rm",
    "Args": [
      "-f",
      "/opt/graphite/path/to/file;",
      "/entrypoint"
    ],
    "State": {
      "Status": "restarting",
...
...
...
...
Can you tell me the right way to delete a file before starting a container?
Thank you in advance!
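A sketch of the usual shell-wrapping approach (assuming the image is graphiteapp/graphite-statsd and its entrypoint script is /entrypoint, as described above): --entrypoint takes a single binary, so the rm has to run inside a shell that then execs the original entrypoint:
docker run --entrypoint /bin/sh graphiteapp/graphite-statsd \
  -c "rm -f /opt/graphite/path/to/file && exec /entrypoint"
The exec matters: it makes /entrypoint PID 1 so it still receives stop signals.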

Run Filebeat in docker as IoT Edge module

I would like to run Filebeat as a Docker container in Azure IoT Edge, and I would like Filebeat to collect logs from the other running containers.
I'm already able to run Filebeat as a Docker container, following the documentation (https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_volume_mounted_configuration):
docker run -d \
--name=filebeat \
--user=root \
--volume="$(pwd)/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro" \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
docker.elastic.co/beats/filebeat:6.8.3 filebeat -e -strict.perms=false
With this command and the correct filebeat.yml file I'm able to collect logs for every running container on my device.
Now I would like to deploy this configuration as an Azure IoT Edge module.
I created a Docker image with the filebeat.yml file included, using the following Dockerfile:
FROM docker.elastic.co/beats/filebeat:6.8.3
COPY filebeat.yml /usr/share/filebeat/filebeat.yml
USER root
RUN chmod go-w /usr/share/filebeat/filebeat.yml
RUN chown root:filebeat /usr/share/filebeat/filebeat.yml
USER filebeat
From documentation: https://www.elastic.co/guide/en/beats/filebeat/6.8/running-on-docker.html#_custom_image_configuration
I tested this Dockerfile by running locally
docker build -t filebeat .
and
docker run -d \
--name=filebeat \
--user=root \
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro" \
filebeat:latest filebeat -e -strict.perms=false
This works fine; logs from other containers are collected as they should be.
Now my question is:
In Azure IoT Edge, how can I mount volumes to access the other Docker containers running on the device, as is done with
--volume="/var/lib/docker/containers:/var/lib/docker/containers:ro" \
--volume="/var/run/docker.sock:/var/run/docker.sock:ro"
in order to collect logs?
Following this other SO post (Mount path to Azure IoT Edge module), in the Azure IoT Edge portal I tried the following:
"HostConfig": {
"Mounts": [
{
"Target": "/var/lib/docker/containers",
"Source": "/var/lib/docker/containers",
"Type": "volume",
"ReadOnly: true
},
{
"Target": "/var/run/docker.sock",
"Source": "/var/run/docker.sock",
"Type": "volume",
"ReadOnly: true
}
]
}
}
But when I deploy this module I get the following error:
2019-11-25T10:09:41Z [WARN] - Could not create module FilebeatAgent
2019-11-25T10:09:41Z [WARN] - caused by: create /var/lib/docker/containers: "/var/lib/docker/containers" includes invalid characters for a local volume name, only "[a-zA-Z0-9][a-zA-Z0-9_.-]" are allowed. If you intended to pass a host directory, use absolute path
I don't understand this error. How can I specify a path using only [a-zA-Z0-9][a-zA-Z0-9_.-]?
Thanks for your help.
EDIT
In the Azure IoT Edge portal, createOptions json:
{
  "HostConfig": {
    "Binds": [
      "/var/lib/docker/containers:/var/lib/docker/containers",
      "/var/run/docker.sock:/var/run/docker.sock"
    ]
  }
}
There is an article that describes how to mount storage from the host here: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-access-host-storage-from-module
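For reference, the original Mounts form should also work once the type is changed from volume to bind; the error above means Docker treated the absolute path as a named volume name. A sketch based on the Docker Engine API (not verified on IoT Edge):
{
  "HostConfig": {
    "Mounts": [
      {
        "Type": "bind",
        "Source": "/var/lib/docker/containers",
        "Target": "/var/lib/docker/containers",
        "ReadOnly": true
      },
      {
        "Type": "bind",
        "Source": "/var/run/docker.sock",
        "Target": "/var/run/docker.sock",
        "ReadOnly": true
      }
    ]
  }
}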

volumes_from instruction - docker compose

With the below docker-compose.yml file:
test:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes_from:
    - cachev
cachev:
  build: ../../
  dockerfile: docker/dev/Dockerfile
  volumes:
    - /build
  entrypoint: "true"
The cachev service in the above file launches a volume container that creates an anonymous volume under /var/lib/docker/ on the docker host and creates the mount point /cache within the volume container (xx_cachev).
Does the volumes_from instruction under the test service create a /build mount point in the xx_test container that points to the /build mount point of the xx_cachev container?
From volumes_from docs:
Mount all of the volumes from another service or container...
So the short answer is yes:
volumes_from mounts the /build volume defined by the cachev service inside the test service.
Long answer:
To answer your question let's run the test service:
docker compose up test
Before answering your question, let's make sure the description is clear:
The cachev service in the above file launches a volume container...
It's just a regular container which exits immediately because of entrypoint: "true".
docker ps -a should show:
ac68a33abe59 cache "true" 16 hours ago Exited (0) 4 minutes ago cache_1
But before it exits it creates the volumes specified in volumes:, so we can call it a volume container if its volumes are used by another service, for caching for instance.
that creates an anonymous volume under /var/lib/docker/ on the docker host
Agreed. - /build is an anonymous volume. This can be verified by viewing all of the container's mounts:
docker inspect [cachev_container_id] --format '{{json .Mounts}}' | jq
should show something like:
{
"Type": "volume",
"Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
"Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
"Destination": "/build",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
jq is a great utility for working with JSON in bash. Install it for the above command to work.
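If you'd rather not install jq, a Go template gives a similar (if less pretty) summary; a sketch:
docker inspect [cachev_container_id] --format '{{range .Mounts}}{{.Destination}} <- {{.Source}}{{println}}{{end}}'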
and creates the mount point /cache within the volume container (xx_cachev).
I don't see any evidence of such a mount in the cachev: service spec you provided.
If you add the mapping - /tmp/cache:/cache to its volumes section, run docker compose up test again, and inspect the exited container, you should see:
{
"Type": "bind",
"Source": "/tmp/cache",
"Destination": "/cache",
"Mode": "rw",
"RW": true,
"Propagation": "rprivate"
}
Please note that docker inspect [cachev_service_id] --format '{{json .Mounts}}' | jq will show all container mounts, including those specified in docker/dev/Dockerfile using the VOLUME instruction.
To answer your question we need to inspect the test service container:
docker inspect [test_container_id] --format '{{json .Mounts}}' | jq
This would show all the volumes specified in docker/dev/Dockerfile (if any) and all the volumes of cachev, thanks to the volumes_from instruction.
You can see that both the test and cachev containers have:
{
"Type": "volume",
"Name": "1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378",
"Source": "/var/lib/docker/volumes/1ec7ff7c72bfb5a3259ed54be5b156ea694be6c8d932bcb3fa6e657cbcaea378/_data",
"Destination": "/build",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
in their mounts, and this volume survives subsequent runs of docker compose up test.
Yes, you can verify this by executing a command inside both containers. If you create a file in the test container with touch /build/fromtest.txt, it will be visible in the cachev container at the same path /build/fromtest.txt.
volumes_from
Mount all of the volumes from another service or container
compose-file-volumes_from
A demo you can try
test:
  image: alpine
  command: sh -c "touch /build/fromtest.txt && echo hell from test-container && ls /build/"
  volumes_from:
    - cachev
cachev:
  image: node:alpine
  command: sh -c "touch /build/fromcache.txt && echo hello from cache-container && ls /build/"
  volumes:
    - /build
The log will be:
Recreating compose-volume_cachev_1 ... done
Recreating compose-volume_test_1 ... done
Attaching to compose-volume_cachev_1, compose-volume_test_1
cachev_1 | hello from cache-container
test_1 | hell from test-container
test_1 | fromcache.txt
test_1 | fromtest.txt
cachev_1 | fromcache.txt
cachev_1 | fromtest.txt
