VSCode - Externally open local file in attached remote container - docker

I have an existing docker container that I am opening up in VSCode through the "Dev Containers: Attach to Running Container..." command. I have a volume mounted that I can access fine through the VSCode Explorer pane (/Users/me/dev-workspace/web/ is mounted to /code/). When I try to open a file within that mount (say /Users/me/dev-workspace/web/README.md, also at /code/README.md in the container) externally, such as by double clicking in Finder or selecting "Open in Visual Studio Code" in GitHub Desktop, it opens in a separate instance of VSCode. Is there a way to have the local file open in the instance of VSCode with the attached container when opened externally?
I've tried adding mounts into devcontainer.json, but couldn't get it to apply to an attached container (an existing container).
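One possible workaround (a sketch, not verified against this setup): VS Code's attached-container windows are addressed by a vscode-remote:// URI in which the container name is hex-encoded, so the code CLI can be pointed at the container-side path directly. The container name mycontainer below is a placeholder:

```shell
# Build the attached-container URI for the container-side copy of the file.
# "mycontainer" is a placeholder; /code/README.md matches the question's mount.
CONTAINER=mycontainer
HEX=$(printf '%s' "$CONTAINER" | od -An -tx1 | tr -d ' \n')  # hex-encode the name
URI="vscode-remote://attached-container+${HEX}/code/README.md"
echo "$URI"
# then open it in the attached window:
# code --file-uri "$URI"
```

If this works, Finder or GitHub Desktop could be pointed at a small wrapper script that maps /Users/me/dev-workspace/web/ to /code/ before invoking code.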

Related

How can I modify the access rights (on host) of files created within a docker container inside the volume folder?

I am creating a docker container, from an image.
In the shell script, I create a volume folder in container and host.
However, when I create a test file, for example, on the container side, I cannot modify it on the host. The permissions show it is owned by root, so I don't have access rights.
Is there a way to fix the permission or access right in the .sh file?
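A common fix (a sketch; myimage and the /data path are placeholders, and it assumes the image tolerates running as a non-root user) is to run the container with the host user's UID/GID, so files it creates in the mounted folder are owned by you rather than root:

```shell
# Run the container as the host user instead of root, so files created
# under the bind-mounted folder inherit your UID/GID.
HOST_UID=$(id -u)
HOST_GID=$(id -g)
docker run --rm \
  --user "${HOST_UID}:${HOST_GID}" \
  -v "$PWD/shared:/data" \
  myimage touch /data/test.txt
```

Alternatively, the .sh script can chown the files back before the container exits (chown -R "$HOST_UID:$HOST_GID" /data), provided the host IDs are passed into the container, e.g. via -e.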

vscode jumps to a specific volume after reopen in container

I have a nice project with a .devcontainer config. Since the vscode update to 1.63 I have had some trouble with the docker setup. Now I'm using the newest 1.64.0.
I just want to build a new container with a clean volume and start in a fresh environment.
What happens is that a new container starts and I see some stuff from another container.
Same if I clone a git repo into a container volume.
Why are some containers connected with the same volume for the workspace?
Why do I fall back every time to the same volume?
In the devcontainer.json I set:
"workspaceFolder": "/workspace",
"workspaceMount": "source=remote-workspace,target=/workspace,type=volume",
To build a new dev container in a fresh environment you can install the devcontainer CLI and trigger the build manually.
I'm used to mounting the workspace as a bind mount (on Windows with WSL2 files) instead of a volume mount. I think the main issue is the volume name: if both projects have "source=remote-workspace", the volume will be detected as the same one.
With Node.js, where I want to keep the node_modules folder inside the container, I have done a double mount following the official vscode guide for this use case.
So I have left workspaceMount as the default bind mount, then added a volume that overrides a specific folder.
{
"mounts": [
// vvvvvvvvvvv name must be unique
"source=projectname-node_modules,target=${containerWorkspaceFolder}/node_modules,type=volume"
]
}
The result is:
everything under / is served by the container,
but everything inside ${containerWorkspaceFolder} is served by the bind mount,
and everything inside ${containerWorkspaceFolder}/node_modules is served by the named volume.
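To confirm which source actually backs each path, the container's mounts can be listed (a sketch; my-devcontainer is a placeholder for your dev container's name or ID):

```shell
# List every mount of the container: type, host-side source, target path.
CONTAINER=my-devcontainer   # placeholder: your dev container's name or ID
docker inspect \
  -f '{{range .Mounts}}{{.Type}} {{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' \
  "$CONTAINER"
```

The output should show one bind entry for the workspace and one volume entry for node_modules.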

Is it possible to download files from Docker Volume via Docker Desktop application?

I have created a docker volume as the following:
docker volume create test_volume
I run my container as the following:
docker run --rm -v test_volume:/data collect_symbols:1.0
... where (in the container) I can access the test_volume via /data.
And when my container runs, I save a symbols.csv file as the following in a python script:
df.to_csv("/data/symbols.csv", index=False)
My container is not a Web server so it just runs a task and then it ends.
Then I go to my Docker Desktop application, and go to the Volumes section. I select the test_volume Docker volume, and I see the symbols.csv file there, which is good.
On the right hand side of the screen, there are two options: "Save As" and "Delete".
When I click Save As to check the content of the file, it asks me where to save the file; I pick a location on my local computer and click Save. But the application throws the following error:
It says Cannot destructure property 'filePath' of '(intermediate value)' as it is undefined.
Am I doing something wrong here? Or is it not possible to get the file from a Docker volume in this way? If it is the case, what is the purpose of Save As button?
Note: I am running the Docker Desktop Application on Windows.
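Until the Save As bug is fixed, the file can be copied out of the named volume with a throwaway container (a sketch matching the question's names; busybox is an assumption, any image with cp works):

```shell
# Mount the named volume and the current host directory side by side,
# then copy the file from the volume onto the host.
docker run --rm \
  -v test_volume:/data \
  -v "$PWD":/backup \
  busybox cp /data/symbols.csv /backup/symbols.csv
```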

Is there a way to view WSL2 docker container's files via Windows File Explorer?

I can bash into one of the containers, but sometimes it's much easier to view it in GUI app such as Windows File Explorer or Total Commander.
Is it possible in any way?
If the only thing you want is to view a folder inside your container while in development,
you can use BIND MOUNTS: https://docs.docker.com/storage/bind-mounts/, which are a kind of volume.
This allows you to set a "full" connection between a specific folder in your container and a folder on your local machine.
You can apply it via terminal with: docker run ......... -v <path_within_your_local_machine>:<path_within_container_file_system>.

How to deploy the jar files into liferay running on docker from local machine?

I have a Liferay instance running on docker. I need to deploy some jar files; I created shared folders from VirtualBox but none of them are shown in docker.
Enabling shared folders via VirtualBox won't work in the docker container.
Open Kitematic; there you can see the container in which your Liferay is running. Open the container and look for the "Volumes" menu. Under "Volumes" you can see "/workspace"; select that folder. This will create a shared folder between the local machine and the Docker container. Select the 'auto-mount' option. The folder will usually be under "Documents/Kitematic/{name-of-container}/workspace". Copy all the jars in there. Then go inside the docker container via bash and copy all the jars from "/workspace" to "/opt/liferay-*/deploy".
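If Kitematic isn't available, docker cp can push the jars straight into the running container (a sketch; the container name liferay and the exact deploy path are assumptions, so check yours first, e.g. with docker exec liferay ls /opt):

```shell
# Copy a jar from the host into the container's Liferay deploy folder.
# "liferay" and /opt/liferay/deploy are placeholders for your setup;
# docker cp does not expand wildcards, so use the concrete path.
docker cp my-module.jar liferay:/opt/liferay/deploy/
```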
