Editing Docker container FS using Atom/Sublime-Text?

I'm running OSX and Docker with the help of boot2docker.
From my understanding, boot2docker is a lightweight Linux distro that runs the Docker containers. I have some Ubuntu containers that I use to run and test projects that should specifically run well on Linux.
However, every small code change made in my host text editor of choice requires me to rebuild the image, re-run the container, run the app, and confirm that the change didn't break anything.
Is there a way for me to open a Docker container FS folder in a text editor from my host machine? (a.k.a Remote edit?)
Has anyone done this? Any ideas would be awesome. I'm thinking about setting up SFTP or sshd in the Docker container, but I'd like your opinion first.

What I often do is, in development, mount the source code of the application to its usual place in a volume. Then, I set the command (or entrypoint) of the container to a script that launches it in "development mode" (for example, by using nodemon for a node.js application, setting RAILS_ENV=development in Rails, and so on).
Volumes do work on Mac OS X (and I assume Windows) under boot2docker or docker-machine, with the caveat that you need to be working somewhere beneath your home directory.
For a concrete example, here's a repository that I set this up in. The ingredients:
script/dev is my "dev-mode" entrypoint. It launches the main application under nodemon.
When I launch the container, I mount the source directory into the container as a volume and set script/dev as the command. (I'm using docker-compose here to launch and link in an upstream dependency, so I can do everything in one command.)
With those two things in place, I can run docker-compose up, make a source change in whatever editor I choose on my host, save the file, and the service within the container auto-reloads to bring my changes into effect. Presto!
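For illustration, a minimal sketch of what such a docker-compose.yml could look like; the service names, image, ports, and paths here are placeholders rather than the actual repository's contents:

version: "2"
services:
  web:
    build: .
    command: script/dev          # dev-mode entrypoint that runs the app under nodemon
    volumes:
      - .:/usr/src/app           # mount the host source over its usual place in the image
    ports:
      - "3000:3000"
    links:
      - db                       # the upstream dependency mentioned above
  db:
    image: postgres              # placeholder for the real upstream service

With that in place, docker-compose up builds the image once, and subsequent source edits reach the container through the volume without a rebuild.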

Related

docker: /opt/docker folder not created

I am trying to dockerize my project. I can test it locally in my WSL environment, and it works fine: inside Docker the /opt/docker folder is created, and I can access my application from the host machine.
But on the dev server, I observe that /opt/docker is not even created.
I am not able to diagnose the root cause. Shouldn't Docker behave similarly on all machines?
Not necessarily, no. You shouldn't care about 'docker', how it's implemented, or what directories it uses. You should only care that it works.
For example, on my WSL installation I have /opt/containerd, not /opt/docker. I think this is because I installed Docker locally in WSL (I refuse to use Docker Desktop). It's different again when I deploy to my k8s cluster, which doesn't use Docker at all.
You should care about your images and containers. As long as your container runs the same, then the rest is an implementation detail that should be transparent to you.

Ansible commands on docker containers?

Up to now I have had my ansible-playbook commands running against AWS EC2 instances.
Can I run regular Ansible modules (lineinfile, apt, pip, etc.) against a container?
Can I add my container IP to a containers group in the hosts file, and will the same code then work? That is, if I change my main.yml, which has
hosts: ec2-group
to
hosts: containers-group
will all the commands work?
I'm a bit of a beginner at this, so please advise. I'm actually thinking of writing docker-compose files from scratch and running docker-compose commands using Ansible.
You can, but it's not really how Docker is designed to be used.
A Docker container is usually a wrapper around a single process. In the standard setup you create an image that has that application built and packaged, and you can just run it without any further setup. It's not usually interesting to run a bare Linux distribution container (which won't have an application installed) or to run an interactive shell as the main container process. Tutorials like Docker's Build and run your image walk through this sequence.
A corollary to this is that containers don't usually have any local state. In the best case any state a container needs is in an external database; if you can't do that then you store local state in a volume that outlives the container.
Finally, it's extremely routine to delete and recreate containers. You need to do this to change some common options; in a cluster environment like Kubernetes this can happen outside your control. When this happens the new container will restart running its default setup, and it won't know about any manual changes the previous container might have had.
So you don't usually want to try to install software directly in a running container, since that will get lost as soon as the container exits. You can, in principle, get a shell in a container (via docker exec) but this is more of a debugging tool than an administration tool. You could make the only process a container runs be an ssh daemon, but anything you start this way will get lost as soon as the container exits (and I've never seen a recipe that correctly and securely sets up credentials to access it).
I'd recommend learning the standard Dockerfile system and running self-contained Docker images over trying to adapt Ansible to this rather different environment.
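As a point of comparison, a minimal Dockerfile for that standard approach might look like the sketch below, assuming a Python application; the file and module names are illustrative:

# Dockerfile (sketch; file names are illustrative)
FROM python:3.11
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt    # dependencies baked in at build time, not configured afterwards
COPY . .
CMD ["python", "main.py"]              # the single process this container wraps

Everything Ansible would have configured at runtime is instead baked into the image at build time, so every container started from the image is identical.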

Installing a Text Editor to Edit appsettings.json in ASP.NET Core 3.1 Docker Container

When it comes to configuration and Docker, I have found that bashing into the container to make alterations is much easier than re-building and re-deploying Docker containers.
For example, in a container built from a Debian-based image I am able to run apt install nano and alter configurations that way, then restart the container for the changes to take effect.
My question is, how do I do this with a dotnet application with appsettings.json, using the Dockerfile that Visual Studio has generated? I see that it's in the root directory when I bash into the container, but I can't use any commands like apt install.
You should never directly edit files inside containers, if it's at all avoidable. It's very routine to delete and recreate containers (to change a number of startup-time-only options; to update the image the container is running) and when you do this any changes you've made will get lost.
If you have a configuration file you'd like to appear in the container when you run it, you can use a Docker bind mount to have the copy of the file on the host replace the one in the container. Start it with something like
docker run -v $PWD/appsettings.json:/appsettings.json ...
You can edit the file on your host with your favorite editor, without installing unnecessary tools in the container. In principle this should be reflected immediately in the container, but you might need to restart your application/container to get it to notice. Just so long as you specify this option when you delete and recreate the container, your change will be "persistent".
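A slightly fuller sketch of that workflow, where myapp and myapp-image are hypothetical names standing in for your own container and image:

# start the container with the host copy mounted over the one in the image
docker run -d --name myapp -v "$PWD/appsettings.json:/appsettings.json" myapp-image
# edit appsettings.json on the host with any editor, then restart so the app re-reads it
docker restart myapp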

How to attach VSCode to a remote Docker container while setting the correct user

I start a Docker container with a special bash script that runs the container and then creates a user X with a dynamic name, UID and GID in the container. I can then bash into the container and perform actions as this user X. The script also creates an 'alias' user named vscode with the same UID as the earlier created dynamic user X.
In VSCode I can attach to this container. Two questions:
How can I set up VSCode to perform all actions as the 'vscode' user or as user X? (When using devcontainer.json to create the container this is trivial, but here I attach to an existing container and devcontainer.json is not used.)
In devcontainer.json you have the option to automatically install extensions. Which settings file do I need to create to automatically install extensions when attaching to a container?
The solution should be automated. E.g., manual intervention and committing the image, as suggested below, is possible, but it will make it much harder for users to just use my Docker image.
I updated to vscode 1.39 and tried to add:
ADD server-env-setup /root/.vscode-server/server-env-setup
But "server-env-setup" seems to be only used for WSL.
I'll answer your questions in reverse order:
VSCode installs extensions after creating the container by using the docker exec command.
And now the recipe. The easiest way is to start from a container already created by VSCode:
Run "Open folder on container" for creating dev container.
After container has done and you can work with VSCode. Stop your environment by clicking "Close remote connection".
Run docker ps -a. Among the recently exited containers you should see something like a7aa5af7ec08 vsc-typescript-2ea9f347739c5397afc431028000c02b.
That is your container, with all extensions installed. It doesn't matter whether you installed the extensions manually or via devcontainer.json.
Run docker commit a7aa5af7ec08 all-installed-vscode-image:latest. Now you have a Docker image with all your beloved software installed. You can upload this image to your favorite Docker registry and use it on other machines as well.
Now you can run docker run -i -u vscode all-installed-vscode-image:latest and attach VSCode to this container. This answers your first question.
Also, you can review vscode documentation and use devcontainer.json configurations when you attach to already running containers and even containers running on remote machines.
VSCode now implements a "remoteUser" property which you can set in the image configuration. This ensures that VSCode logs into the container as the correct user.
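For attached containers, this lives in a per-image configuration file that VSCode opens via the "Open Attached Container Configuration File" command. A minimal sketch, assuming that config format; the workspace path and extension ID are placeholders:

{
  // attached-container config (sketch; path and extension ID are illustrative)
  "remoteUser": "vscode",
  "workspaceFolder": "/home/vscode/project",
  "extensions": ["dbaeumer.vscode-eslint"]
}

This covers both questions at once: "remoteUser" sets the login user, and "extensions" lists extensions to install automatically when attaching.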

Docker Storage - Getting a Layman's answer

I am just discovering Docker - I am finding so much information, but I can't seem to get a straight answer on this. If someone could give me a clear explanation based on the understanding I have so far, it would be appreciated.
I am downloading a Docker image locally - say the sample one from Microsoft, microsoft/dotnet-samples:dotnetapp-nanoserver. I am lost as to where this is downloaded to. Is it downloaded and installed as a program on the host machine, with an isolated script that controls the container? The download is about 1.3 GB because it includes .NET Core.
In another example, if I download apache2 to run as a web server, does it install in the default paths on the host system, with every container I want to use tapping into that - or does every container contain its own isolated copy of apache2?
I ask this because I can't find files that mimic the file size of these programs.
I know they are not complete VMs, but where can I find the files associated with a container?
I am using Windows Server 2016 and a Mac since I want to do some trials with containers.
An image is a filesystem
Docker images are encapsulated filesystems. The software and files inside are not being directly installed onto your system.
You can think of a Docker image sort of the way you think of a .zip file. You can download a .zip file from somewhere, and it is a single file. Contained inside it might be one file, or dozens of files, or a nested tree of directories and files. But on your disk, it exists as one file.
A Docker image is similar (conceptually, at least... the details are more complicated).
Image storage
Where images are stored varies by platform. On a Linux system, they are usually under /var/lib/docker. I don't know where they are stored on Windows, but this is a more or less opaque store. Poking around inside will not reveal very much to you anyway.
To see what you have, you should use the docker images command. It will show you the images you have stored locally.
Each image may consist of multiple layers (part of the complexity mentioned above). By default, that command will only show you the top-level images, which are the ones you'll care about for running containers. Technically there are other layers, and you can see all of them using docker images -a.
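If you just want to confirm where the daemon keeps its data, docker info reports the storage root; a quick sketch (output varies by platform):

docker info --format '{{.DockerRootDir}}'
# on a typical Linux install this prints /var/lib/docker

Even then, treat that directory as opaque; docker images and docker ps are the supported views into it.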
Where is the software installed?
When you download an Apache image, nothing is installed on your system at all. The image file(s) are downloaded and stored. Hiding inside is Apache and everything Apache needs in order to run, but Apache is not installed onto your Windows OS anywhere.
When you want to use Apache, you would run a container. Docker takes the Apache image and, using it as a starting template, creates a running process container, inside of which Apache is running. This is isolated from your operating system. Apache is only running inside of the container.
If you run a second container from the Apache image, you now have two completely separate Apache instances running, each in their own isolated filesystem environment.
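A quick sketch of that isolation, reusing the hypothetical apache:latest image from the next section and mapping each instance to a different host port:

docker run -d --name web1 -p 8080:80 apache:latest
docker run -d --name web2 -p 8081:80 apache:latest

Each container gets its own writable layer on top of the shared image, so changes made in one are invisible to the other.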
Where can I find the files?
If you just want to poke around in the container filesystem, you can start the container in interactive mode, and run a shell instead of whatever it normally runs (like Apache). For instance, if you have an image apache:latest, you can do this:
docker run --rm -it apache:latest bash
This will run an instance of apache:latest, but instead of launching Apache, it will run a bash shell and drop you into it.
The --rm flag is convenient for cases like this. It tells Docker to remove the running container when its process exits. That way for a "just looking at something" container like this one, it cleans up after itself.
The -it is actually two flags. -i is interactive mode, and -t allocates a terminal. This is a common flag to pass when you want to directly interact with the container.
Once inside, you can use the usual commands to look at files and directory listings. Note that many containers are stripped-down, though. You don't always have all of the tools you are used to having. Things like ls in Linux are typically there, but a lot of things will not be.
Simply run exit when you're done looking around.
Looking around while the process is running
You can also look at the container while Apache is running. First start it normally.
docker run -d apache:latest
This will return a container ID. You can also get the ID from docker ps. Then you can get a shell inside the container with that ID by using docker exec.
docker exec -it <container_id> bash
Now you have a shell inside the container, with Apache still running in it.
