Open VS Code from inside a docker container

Is it possible to run "code someFile.js" from inside a docker container, and have it open in VS Code?
Why do I want to do this? Because vue dev tools allows you to open a vue component from within the browser. This is especially helpful for new devs that want to quickly track down components and open them in the editor.
Unfortunately - since my dev server is running inside a docker container - this functionality doesn't work. This is because the editor is opened from within the dev server.
Might be worth noting, I'm using Visual Studio Code Remote - Containers.
So to narrow the question further:
How can I launch VS Code from a docker container, so that vue dev tools can open that file in my local editor?

Yes, if you don't mind running your vue tools inside the docker container as well. You have to set up a .devcontainer.json file specifying the Dockerfile, image, or docker-compose file to use to build the container. It will create the container for you and automatically mount your project directory by default, but there are a lot of alternative configuration options as well.
This means you'd open VS Code and basically your whole IDE would be in the docker container. You could call vue tools from the VS Code terminal, including calls to code.
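For reference, a minimal .devcontainer.json sketch of the Dockerfile-based setup described above (file name and port number are illustrative, not taken from the original setup):

```json
// .devcontainer/devcontainer.json - a minimal sketch, names illustrative
{
    "name": "vue-dev",
    "build": { "dockerfile": "Dockerfile" },
    // the project folder is bind-mounted into the container automatically;
    // forward the dev server port so a browser on the host can reach it
    "forwardPorts": [8080]
}
```

With this in place, opening the folder in VS Code offers "Reopen in Container", and the integrated terminal (including the code command) runs inside the container.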
I've been doing this with some tensorflow stuff for the last 6 weeks or so. It was a little confusing at first, but now I really like it.
One challenge I've encountered so far is that if you are deploying your image as a deliverable, using a container as a dev environment can cause some dev-tool creep into the image (like including tools in your Dockerfile that you need in development but don't want in the deployed image). There are probably good ways to deal with this, but I haven't explored them all yet.
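One way to limit that creep, sketched under the assumption that multi-stage builds fit your setup (stage and package names are illustrative): keep dev-only tools in a separate build stage, so the deliverable is built from a stage that never sees them.

```dockerfile
# Sketch only: a shared base, a dev stage for the container you code in,
# and a release stage for the image you actually ship.
FROM python:3.11-slim AS base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt

# Dev container target: extra tooling lives only here
FROM base AS dev
RUN pip install ipython pylint

# Deliverable target: no dev tools
FROM base AS release
COPY . .
CMD ["python", "main.py"]
```

The deliverable is built with docker build --target release ., while the dev container can point at the dev stage (devcontainer.json supports a build.target field).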
Another note: I can't seem to find the docs, but I think the recommended way is to use WSL2-backed Docker, and then do all your docker mounting and docker client invocations from the WSL2 filesystem instead of from Windows. I guess that because WSL2 and Docker share the same VM, mounted file systems are faster between WSL2 and Docker than between Windows and Docker. This has worked well for me so far.

I've managed to adapt this dockerized version of VS Code to our restrictive runtime environment (OpenShift), although it does assume a connection to the internet, so extensions and the IntelliSense ML model had to be preinstalled:
https://hub.docker.com/r/codercom/code-server

Related

Manage VSCode Remote Container as Docker-Compose Service

For the development of my Python project I have set up a Remote Development Container. The project uses, for example, MariaDB and RabbitMQ. Until recently I built and started the containers for those services outside of VSCode. A few days ago I reworked the project using Docker Compose so that I can manage the other containers using the remarkable Docker extension (Id: ms-azuretools.vscode-docker, Version: 1.22.0). That is working fine, apart from one issue I cannot figure out:
I can start all containers using compose up; however, the Python project's Remote Development Container does not stay up. Currently, I open the project folder in a second VSCode window and use the "Reopen in Container" command.
However, it would be nice if the Python project container stayed up and I could just use the "Attach Visual Studio Code" command from the Docker extension's Containers menu.
I am wondering if there is something I can add to the .devcontainer.json or some other configuration file to realize this scenario?
Any help is much appreciated!
If it helps I can post the docker-compose.yml, Dockerfile's or the .devcontainer.json, please let me know what is required.
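For reference, a sketch of the kind of configuration involved (service and path names are illustrative, not the actual project files):

```json
// .devcontainer/devcontainer.json - sketch; names are illustrative
{
    "name": "python-project",
    "dockerComposeFile": "../docker-compose.yml",
    "service": "app",
    "workspaceFolder": "/workspace",
    // do not stop the compose services when the VSCode window closes
    "shutdownAction": "none"
}
```

Giving the dev service a long-running command in docker-compose.yml (e.g. command: sleep infinity) is the usual way to keep such a container up so it can be attached to.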

Get information about the volume from inside the docker container

Inside a container I build a (C++) app. The source code directory is shared with --volume.
If docker runs on Linux, the shared directory runs at full speed, but if docker runs on a Mac, docker has to bridge the share, which results in a speed drop. Therefore I have to copy the whole source directory into the container before starting to compile. But this copy step is necessary on non-Linux hosts only.
How can I detect if the share is "natively" shared?
Can I get information about the host os from inside the container?
Update
The idea behind this workflow is to set up an image for a defined environment to cross-build the product for multiple platforms (win, mac, linux). Otherwise each developer has a different Linux OS, compilers, components etc. installed.
As a docker newbie I thought that this image (with all required third-party components/compilers) could be used to build the app within a container when it is launched.
One workaround I can think of is to use a special networking feature which is available on both Mac and Windows hosts, but not on Linux.
It is a special DNS entry you can use to get the IP of the host from inside the container: host.docker.internal. Read more here and here.
Now you just need a command that gives you a boolean value depending on whether it resolves or not. Since I don't know which shell you are using, I can't say for sure, but something like this should help you.
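For example, a POSIX-shell sketch along those lines (assuming getent is available in the container image):

```shell
# Decide, from inside the container, whether the host is Linux (native
# bind mounts) or mac/Windows (bridged share) by checking whether the
# Docker Desktop-only name host.docker.internal resolves.
if getent hosts host.docker.internal >/dev/null 2>&1; then
    share_is_native=0   # mac/Windows host: copy the sources before compiling
else
    share_is_native=1   # Linux host: compile straight from the mount
fi
echo "share_is_native=$share_is_native"
```

Note that a Linux host can also be configured to provide this name (via --add-host with host-gateway), in which case the check would misfire, so treat it as a heuristic.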
In my opinion you are looking at the issue from the wrong perspective.
First of all, the compilation should be done at build time, not at runtime. If you do it in the container, it means you are shipping an image with build tools, not to mention that users of the image would need the source code to run it. For this reason it is good practice to compile at build time and only ship an image with the binary to run.
Secondly, compiling at build time is fast because the source code is sent to the docker daemon and accessed directly from there; no need for volumes.
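A sketch of that build-time approach for a C++ app (compiler, file names, and base images are illustrative):

```dockerfile
# Stage 1 compiles the app; the source enters via the build context,
# so no volume is involved and the COPY is fast.
FROM gcc:13 AS build
WORKDIR /src
COPY . .
RUN g++ -O2 -o /app main.cpp

# Stage 2 ships only the resulting binary.
FROM debian:bookworm-slim
COPY --from=build /app /usr/local/bin/app
CMD ["app"]
```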
Lastly, to answer your last question: it is you who runs the container, so you can tell it everything about the host where it is running, for example by adding an environment variable. It is overcomplicated to run the container and let it guess where it is running when you already have that information at the moment you start the container.
I used the --env DO_COPY=1 when creating the container.
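A sketch of how the build script inside the container can react to that flag (paths are illustrative):

```shell
# DO_COPY is set by whoever starts the container, e.g.
#   docker run --env DO_COPY=1 ...   on a mac/Windows host.
if [ "${DO_COPY:-0}" = "1" ]; then
    build_dir=/tmp/src-copy
    # cp -a /src "$build_dir"        # real copy step; the share is slow here
else
    build_dir=/src                   # Linux host: compile from the mount
fi
echo "building in $build_dir"
```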

Why is it a lot more complex to debug local Kubernetes containers yet local Docker containers are very simple to debug?

Debugging Docker containers is very easy on my local PC. Say I have this scenario:
1) A web application project
2) A Docker-Compose project
I set the Docker-Compose project as the startup project and then debug the project. Any breakpoints I add to my web application project work i.e. the code stops.
I have now enabled Kubernetes in Docker for Desktop and I have created a very simple app and deployed it. However, it seems to be very complex to set up a debugging environment - for example as described here: https://medium.com/@pavel.agarkov/debugging-asp-net-core-app-running-in-kubernetes-minikube-from-visual-studio-2017-on-windows-6671ddc23d93, which makes me think that I am doing something wrong. Is there a simple way to debug Kubernetes when it is installed locally, like there is when debugging local Docker containers?
I was hoping that I would be able to just launch Visual Studio and it would start debugging Kubernetes containers - like with Docker. Is this possible?
Kubernetes is a tool designed to run multiple copies of a packaged application Somewhere Else. It is not designed as a live-development tool.
Imagine that you built a desktop application, packaged it up somehow, and sent it off to me. I'm running it on my desktop (Somewhere Else) and have a problem with it. I can report that problem to you, but you're not going to be able to attach your IDE to my desktop system. Instead, you need to reproduce my issue on your own development system, write a test case for it, and fix it; once you've done that you can release an update that I can run again.
Kubernetes is much more focused on this "run released software" model than a live-development environment. You can easily roll a Deployment object back to the previous version of the software that has been released, for example, assuming you have a scheme to tag distinct releases. You need to do a lot of hacking to try to get a local development tree to run inside a container, though.
The other important corollary to this is that, when you "attach your IDE to a Docker container", you are not running the code in your image! A typical setup for this starts a Docker container from an image but then overwrites all of the application code (via a bind mount) with whatever content you have on your local system. Aside from perhaps encapsulating some hard-to-install dependencies, this approach on the one hand keeps the inconveniences of using Docker at all (you must have root-equivalent permissions, can't locally run support tools, ...) and on the other hand hides the code in the image (so you'll need to repeat whatever tests you ran when you want to deploy to production).
I'd recommend using a local development environment for local development, and not trying to simulate it using other tools. Kubernetes isn't well-suited to live development at all, and I wouldn't try to incorporate it into your day-to-day workflow other than for pre-deployment testing once other tests have passed.
Telepresence is a useful tool to debug pods in Kubernetes. Telepresence works by running your code locally, as a normal local process, and then forwarding requests to/from the Kubernetes cluster. This means development is fast: you only have to change your code and restart your process. Many web frameworks also do automatic code reload, in which case you won't even need to restart.
https://www.telepresence.io/tutorials/kubernetes
You're right, it is more complicated than it needs to be. I wrote an open source framework called Robusta to solve this.
I do some tricks with code injection to inject debuggers into already-running pods. This lets you bypass the typically complex work of setting up a debug-friendly environment in advance.
You can debug any python pod in the cluster like this:
robusta playbooks trigger python_debugger name=myapp namespace=default
This will set up the debugger. All that remains is to run kubectl port-forward into the cluster and connect Visual Studio Code.
I don't know what language you're using, but if it isn't Python it should still be easy to set up. (Feel free to comment and I'll help you.)

Best practice for spinning up container-based (development) environments

OCI containers are a convenient way to package suitable toolchain for a project so that the development environments are consistent and new project members can start quickly by simply checking out the project and pulling the relevant containers.
Of course I am not talking about projects that simply need a C++ compiler or Node.js. I am talking about projects that need specific compiler packages that don't work with anything newer than Fedora 22, projects with special tools that need to be installed manually into strange places, working on multiple projects whose tools are not co-installable, and such. For this kind of thing it is easier to have a container than to follow twenty installation steps and then pray that the bits left over from the previous project don't break things for you.
However, starting a container with compiler to build a project requires quite a few options on the docker (or podman) command-line. Besides the image name, usually:
mount of the project working directory
user id (because the container should access the mounted files as the user running it)
if the tool needs access to some network resources, it might also need
some credentials, via environment or otherwise
ssh agent socket (mount and environment variable)
if the build process involves building docker containers
docker socket (mount); buildah may work without special setup though
and if is a graphic tool (e.g. IDE)
X socket mount and environment variable
--ipc host to make shared memory work
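Put together, one of the ad-hoc wrapper scripts alluded to below might look like this sketch (image name and mount points are assumptions for illustration; drop the options your tool does not need):

```shell
#!/bin/sh
# Assemble the docker command line from the options listed above.
IMAGE=registry.example.com/project-toolchain:latest

cmd="docker run --rm -it \
  --volume $PWD:/work --workdir /work \
  --user $(id -u):$(id -g) \
  --volume $SSH_AUTH_SOCK:/ssh-agent --env SSH_AUTH_SOCK=/ssh-agent \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --env DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix --ipc host \
  $IMAGE"

echo "$cmd"   # replace echo with: exec $cmd
```

Since podman accepts the same flags, the docker binary name could also be made a variable.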
And then it can get more complicated by other factors. E.g. if the developers are in different departments and don't have access to the same docker repository, their images may be called differently, because docker does not support symbolic names for repositories (podman does, though).
Is there some standard(ish) way to handle these options or is everybody just using ad-hoc wrapper scripts?
I use the Visual Studio Code Remote - Containers extension to connect the source code to a Docker container that holds all the tools needed to build the code (e.g. npm modules, Ruby gems, ESLint, Node.js, Java). The container contains all the "tools" used to develop/build/test the source code.
Additionally, you can also put the VSCode extensions into the Docker image to help keep VSCode IDE tools portable as well.
https://code.visualstudio.com/docs/remote/containers#_managing-extensions
You can provide a Dockerfile in the source code for newcomers to build the Docker image themselves or attach VSCode to an existing Docker container.
If you need to run a server inside the Docker container for testing purposes, you can expose a port on the container via VSCode, and start hitting the server inside the container with a browser or cURL from the host machine.
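The port forwarding can also be pinned in configuration rather than done by hand; a devcontainer.json fragment (the port number is illustrative):

```json
{
    // ports VSCode forwards from the container to localhost on the host
    "forwardPorts": [3000]
}
```

After that, curl http://localhost:3000 on the host reaches the server inside the container.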
Be aware of the known limitations of the Visual Studio Code Remote - Containers extension. The one that impacts me the most is the beta support for Alpine Linux. I have often noticed that some of the popular Docker Hub images are based on Alpine.

Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently have a local copy of all data inside and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it does seem to require us to have mirrored files. We also use WSL.
As @FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and docker inside a Linux VM on Windows, and have a 96% time saving in things like running up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json and being able to use github codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
see https://stackoverflow.com/a/72787362/183005 for detailed timing comparison and setup details
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleague's environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.
