Best practice for spinning up container-based (development) environments - docker

OCI containers are a convenient way to package a suitable toolchain for a project, so that development environments are consistent and new project members can start quickly by simply checking out the project and pulling the relevant containers.
Of course I am not talking about projects that simply need a C++ compiler or Node.js. I am talking about projects that need specific compiler packages that don't work on anything newer than Fedora 22, projects with special tools that need to be installed manually into strange places, working on multiple projects whose tools are not co-installable, and so on. For these kinds of things it is easier to have a container than to follow twenty installation steps and then pray that the bits left over from the previous project don't break things for you.
However, starting a container with a compiler to build a project requires quite a few options on the docker (or podman) command line. Besides the image name, usually (a combined invocation is sketched after this list):
mount of the project working directory
user id (because the container should access the mounted files as the user running it)
if the tool needs access to some network resources, it might also need
some credentials, via environment or otherwise
ssh agent socket (mount and environment variable)
if the build process involves building docker containers
docker socket (mount); buildah may work without special setup though
and if it is a graphical tool (e.g. an IDE)
X socket mount and environment variable
--ipc host to make shared memory work
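Pulled together, a typical invocation might look roughly like the following sketch; the registry, image name, and paths are placeholders, and only the options a given tool actually needs would be kept:
# project dir, uid mapping, ssh agent, docker socket, X11; image name is a placeholder
docker run --rm -it \
  --volume "$PWD":/work --workdir /work \
  --user "$(id -u):$(id -g)" \
  --volume "$SSH_AUTH_SOCK":/ssh-agent --env SSH_AUTH_SOCK=/ssh-agent \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --volume /tmp/.X11-unix:/tmp/.X11-unix --env DISPLAY="$DISPLAY" \
  --ipc host \
  registry.example.com/project/toolchain:latest \
  make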
And then it can get more complicated due to other factors. E.g. if the developers are in different departments and don't have access to the same docker repository, their images may be named differently, because docker does not support symbolic names for repositories (podman does, though).
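With podman, for example, the differing names can be hidden behind a short-name alias in a registries.conf.d drop-in, so everyone refers to the same short name; the path and names below are just examples:
# /etc/containers/registries.conf.d/project.conf
[aliases]
  "project/toolchain" = "registry-dept-a.example.com/project/toolchain"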
Is there some standard(ish) way to handle these options or is everybody just using ad-hoc wrapper scripts?

I use the Visual Studio Code Remote - Containers extension to connect the source code to a Docker container that holds all the tools needed to build the code (e.g. npm modules, Ruby gems, ESLint, Node.js, Java). The container contains all the "tools" used to develop/build/test the source code.
Additionally, you can put the VSCode extensions into the Docker image to keep the VSCode IDE tooling portable as well.
https://code.visualstudio.com/docs/remote/containers#_managing-extensions
You can provide a Dockerfile in the source code for newcomers to build the Docker image themselves or attach VSCode to an existing Docker container.
If you need to run a server inside the Docker container for testing purposes, you can expose a port on the container via VSCode, and start hitting the server inside the container with a browser or cURL from the host machine.
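A minimal .devcontainer/devcontainer.json along these lines can express the Dockerfile, the preinstalled extensions, and the forwarded port (with a current version of the extension; the extension ID and port number are just examples):
{
  // built from a Dockerfile checked into the repo
  "name": "project-dev",
  "build": { "dockerfile": "Dockerfile" },
  // extensions installed inside the container so the IDE tooling travels with it
  "customizations": {
    "vscode": { "extensions": ["dbaeumer.vscode-eslint"] }
  },
  // port of a test server running inside the container
  "forwardPorts": [3000]
}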
Be aware of the known limitations of the Visual Studio Code Remote - Containers extension. The one that impacts me the most is the beta support for Alpine Linux; I have often noticed that some of the popular Docker Hub images are based on Alpine.

Related

Creating a development container for multiple PCs

I want to create an image that contains all the dependencies needed for development, like Java, Maven, Node, etc. I want to create that image and then deploy it on different PCs at the same time.
I wanted to know whether this is possible with Docker, and whether you could share some guide or information on how to do it. I want the image to contain the dependencies, but I also want each deployment to remain unique, e.g. keep its own configuration. I only want the image to provide a fast environment for programming. Thanks in advance.
The advantage of Docker is that containers share the host system's kernel while being encapsulated in their own environment with their own network adapter. That means Docker is fast, because you don't need to emulate hardware and an operating system as well.
But here comes the key point for you:
Since it does not emulate/contain the OS, you just can't make it executable on every environment.
You need to choose the common path and tell all users that you are using Linux containers, which already covers Unix/Mac/... For Windows users there should be a note that they need WSL (Windows Subsystem for Linux) installed. That is how Windows can run a Linux CLI in parallel and stay compatible as well.
For your dependencies:
You can build a container or compose file that contains Java, Node, ... and all the environment stuff - Docker itself just needs to be compatible with the host via that WSL/Linux-container setup.
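A minimal sketch of such a shared image, assuming an Ubuntu base and the usual distribution packages, might look like this:
# Dockerfile - shared development toolchain (base image and package versions are assumptions)
FROM ubuntu:22.04
RUN apt-get update && \
    apt-get install -y openjdk-17-jdk maven nodejs npm && \
    rm -rf /var/lib/apt/lists/*
# per-developer configuration stays outside the image: mount it at run time
WORKDIR /workspace
CMD ["bash"]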
So that was a lot about Docker. The same applies to Kubernetes/minikube/... whatever you want to use locally -> of course you need the correct installation for the Windows/Linux target, and if you use Linux containers and require Windows machines/servers to have WSL, you can install a Linux Kubernetes as well and be consistent everywhere.

Open VS Code from inside a docker container

Is it possible to run code someFile.js from inside a docker container, and have it open in VS Code?
Why do I want to do this? Because vue dev tools allows you to open a vue component from within the browser. This is especially helpful for new devs that want to quickly track down components and open them in the editor.
Unfortunately - since my dev server is running inside a docker container - this functionality doesn't work, because the editor is opened from within the dev server.
Might be worth noting, I'm using Visual Studio Code Remote - Containers.
So to narrow the question further:
How can I launch VS Code from a docker container, so that Vue dev tools can open that file in my local editor?
Yes, if you don't mind running your Vue tools inside the docker container as well. You have to set up a .devcontainer.json file specifying the Dockerfile, image, or docker-compose file to use to build the container. It will create the container for you and automatically mount your project directory by default, but there are a lot of alternative configuration options as well.
This means you'd open VS Code and basically your whole IDE would be in the docker container. You could call vue tools from the VS Code terminal, including calls to code.
I've been doing this with some tensorflow stuff for the last 6 weeks or so. It was a little confusing at first, but now I really like it.
One challenge I've encountered so far is that if you are deploying your image as a deliverable, using a container as a dev environment can cause some dev-tool creep into the image (like including dev tools in your Dockerfile that you need in development but don't want in the deployed image). There are probably good ways to deal with this but I haven't explored them all yet.
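One common way to keep the deliverable clean (not something explored in the answer above) is a multi-stage Dockerfile: the dev stage carries the tooling, the final stage copies only the build output. Stage names, base images, and paths here are assumptions:
# dev stage: everything needed to build and lint
FROM node:20 AS dev
WORKDIR /app
COPY . .
RUN npm ci && npm run build

# deliverable stage: runtime only, no dev tooling
FROM node:20-slim
WORKDIR /app
COPY --from=dev /app/dist ./dist
COPY --from=dev /app/package*.json ./
RUN npm ci --omit=dev
CMD ["node", "dist/index.js"]
I believe a recent devcontainer.json can then point at the dev stage via "build": { "target": "dev" }, so the editor gets the tooling while the published image stays slim.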
Another note: I can't seem to find the docs, but I think the recommended way is to use WSL2-backed Docker, and then do all your docker mounting and docker client invocations from the WSL2 filesystem rather than from Windows. I guess that if WSL2 and Docker share the same VM, the mounted file systems are faster between WSL2 and Docker than between Windows and Docker. This has worked well for me so far...
I've managed to adapt this dockerized version of VS Code to our restrictive runtime environment (OpenShift), although it does assume a connection to the internet, so extensions and the IntelliSense ML model had to be preinstalled:
https://hub.docker.com/r/codercom/code-server

Use VSCode remote development on docker image without local files

Motivation
As of now, we are using five docker containers (MySQL, PHP, static...) managed by docker-compose. We only need to access one of them. We currently keep a local copy of all the data and sync it from Windows to the container, but that is very slow, and VSCode on Windows sometimes randomly locks files, causing git rebase origin/master to end in very unpleasant ways.
Desired behaviour
Use VSCode Remote Development extension to:
Edit files inside the container without any mirrored files on Windows
Run git commands (checkout, rebase, merge...)
Run build commands (make, ng, npm)
Still keep Windows, as for many developers it is the preferred platform.
Question
Is it possible to develop inside a docker container using VSCode?
I have tried to follow the official guide, but it seems to require us to have mirrored files. We also use WSL.
As #FSCKur points out, this is the exact scenario VSCode dev containers are supposed to address, but on Windows I've found the performance to be unusable.
I've settled on running VSCode and docker inside a Linux VM on Windows, and have a 96% time saving in things like running up a server and watching code for changes, making this setup my preferred way now.
The standardisation of devcontainer.json and being able to use GitHub Codespaces if you're away from your normal dev machine make this whole setup a pleasure to use.
See https://stackoverflow.com/a/72787362/183005 for a detailed timing comparison and setup details.
This sounds like exactly what I do. My team uses Windows on the desktop, and we develop a containerised Linux app.
We use VSCode dev containers. They are an excellent solution for the scenario.
You may also be able to SSH to your docker host and code on it, but in my view this is less good, because you want to keep all customisation "contained" - I have installed a few quality-of-life packages in my dev container which I'd prefer to keep out of my colleagues' environments and off the docker host.
We have access to the docker host, so we clone our source on the docker host and mount it through. We also bind-mount folders from the docker host for SQL and Redis data - but that could be achieved with docker volumes instead. IIUC, the workspace folder itself does have to be a bind-mount - in fact, no alternative is allowed in the devcontainer.json file. But since you need permission anyway on the docker daemon, this is probably achievable.
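To illustrate that setup, a devcontainer.json bind-mounting a clone that lives on the docker host could look roughly like this (the image name and host paths are assumptions):
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // clone on the docker host, mounted as the workspace
  "workspaceMount": "source=/srv/src/myapp,target=/workspaces/myapp,type=bind",
  "workspaceFolder": "/workspaces/myapp",
  // extra bind mounts for data directories (docker volumes would also work)
  "mounts": [
    "source=/srv/data/mysql,target=/var/lib/mysql,type=bind"
  ]
}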
All source code operations happen in the dev container, i.e. in Linux. We commit and push from there, we edit our code there. If we need to work on the repo on our laptops, we pull it locally. No rcopy, no SCP - github is our "sync" mechanism. We previously used vagrant and mounted the source from Windows - the symlinks were an absolute pain for us, but probably anyone who's tried mounting source code from Windows into Linux will have experienced pain over some element or other.
VSCode in a dev container is very similar to the local experience. You will get bash in the terminal. To be real, you probably can't work like this without touching bash. However, you can install PSv7 in the container, and/or a 'better' shell (opinion mine) such as zsh.

How to automate application deployment when using LXD containers?

How should applications be scripted/automatically deployed when in LXD containers?
For example, is the best way to deploy applications in LXD containers to use a bash script (which deploys an application)? How can this bash script be executed inside the container by running a command on the host?
Are there any tools/methods for doing this in a way similar to Docker recipes?
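For reference, the "run a command on the host, execute inside the container" part can be done with LXD's own CLI; the container name and script path below are just examples:
lxc file push ./deploy.sh mycontainer/root/deploy.sh
lxc exec mycontainer -- bash /root/deploy.sh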
In my case, I use Ansible to:
build the LXD containers (web, database, redis for example).
connect to the containers and deploy the services and code needed.
you can build your own images, for example with the services and/or code already deployed, and build specific containers from these images.
I was doing this before LXD had Ansible support (Ansible 2.2). I prefer to use SSH instead of the lxd connection when I connect to the containers to deploy services/code. The containers come with a profile where I had set up my SSH public key (to have direct SSH connection by keys... no passwords).
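As a sketch of the lxd-connection alternative mentioned above (in current Ansible the plugin ships in community.general; container names, hosts, and the package are assumptions):
# inventory.ini - inventory hostnames must match the container names
[lxd_containers]
web ansible_connection=community.general.lxd
db  ansible_connection=community.general.lxd

# site.yml - deploy a service inside the containers
- hosts: lxd_containers
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present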
Take a look at my open source project on Bitbucket, devops_lxd_containers. It includes:
Scripts to build lxd image templates including Apache, tomcat, haproxy.
Scripts to demonstrate custom application image builds such as Apache hosting and key/value content and haproxy configured as a router.
Code to launch the containers and map ports so they are accessible to the larger network
Code to configure haproxy as a layer 7 proxy to route HTTP requests between boxes and containers based on URI prefix routing and on where it previously deployed and mapped ports.
At the higher level it accepts a data-driven spec and will deploy an entire environment composed of many containers spread across many hosts, hooking them all up to act as a cohesive whole via a layer 7 proxy.
Extensive documentation showing how I accomplished each major step using code snippets before automating.
Code to support zero-outage upgrades, using the layer 7 ability to gracefully bleed off old connections while accepting new connections at the new version.
The entire system is built on the premise that image building is best done in layers. We build an updated Ubuntu image. From it we build a hardened Ubuntu image. From that we build a basic Apache image. From that we build an application-specific image like our apacheKV sample. The goal is to never rebuild anything more than once and to re-use common functionality, such as the basicJDK, as the source for all JDK-dependent images, so we avoid having duplicate code in any location. I have strived to keep image/template creation completely separate from deployment and port mapping. The exception is that I could not complete creation of the layer 7 routing image until we knew everything about how the other images would be mapped.
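The layering itself can be done with plain lxc commands, publishing each stage as the base image of the next; the container names and the hardening script here are hypothetical:
lxc launch ubuntu:22.04 build-hardened              # start from an updated Ubuntu image
lxc file push ./harden.sh build-hardened/root/harden.sh
lxc exec build-hardened -- bash /root/harden.sh
lxc stop build-hardened
lxc publish build-hardened --alias hardened-ubuntu  # becomes the base of the next layer
lxc launch hardened-ubuntu build-apache
lxc exec build-apache -- apt-get install -y apache2
lxc stop build-apache
lxc publish build-apache --alias basic-apache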
I've been using HashiCorp Packer with the Ansible provisioner, using ansible_connection = lxd.
Some notes here for constructing a template:
When iterating through local files on your host system you may need to use ansible_connection = local (e.g. for stat & friends).
Using local_action in Ansible with the lxd connection is still an action inside the container when using stat (but not with include_vars & the lookup function for files).
Using lots of debug messages in Ansible is helpful to know which local environment Ansible is actually operating in.
I'm surprised no one here mentioned Canonical's own tool for managing LXD.
https://juju.is
It is super simple and well supported; the only caveat is that it requires you to turn off IPv6 on the LXD/LXC side of things (in the network bridge).
snap install juju --classic
juju bootstrap localhost
From there you can learn about juju models and deploy machines or prebaked images like Ubuntu:
juju deploy ubuntu

Docker, I have one folder that contains the application server. What can be used as a container?

I want to ask: if I have one folder that contains the application server (Axis2, Tomcat, WSO2, MongoDB, and a JMS consumer), what can be used as a container?
Is Docker an application installer? Does it package the entire application so that a single installer file is produced, for example server.exe for Windows or server.deb for Ubuntu?
Could you help explain it?
Docker as an application installer?
No, Docker is a platform which manages containers (isolated user/process/disk machines running on the host kernel), around building, shipping and running them (Containers as a Service).
The best practice is to isolate each part of your global service in its own container, both because of the PID 1 zombie-reaping issue (detailed in "Use of Supervisor in docker") and in terms of ease of management and updates.
If each component only represents a Tomcat, a MongoDB, a..., each one is easier to manage/debug, instead of having one giant container.
Also, you can stop/update one without necessarily impacting all the others.
The installation-like part is rather the description of your environment (both in terms of OS and of the applications you want to add to a container) in a Dockerfile: a description of what your environment will need to run.
That helps build an image (a sort of archive of all the files you need), from which you docker run a container.
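As a tiny illustration of that cycle, assuming the official Tomcat image and a hypothetical webapp.war:
# Dockerfile
FROM tomcat:9
COPY ./webapp.war /usr/local/tomcat/webapps/

# build an image from it, then run a container
docker build -t my-tomcat-app .
docker run -d -p 8080:8080 my-tomcat-app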
Right now, those containers only run as Linux machines on Linux kernel hosts (or on Windows, through a Linux VM).
You don't yet have pure Windows images/containers that run on Windows (it is in progress, with Windows Server 2016).
So can you just take what you have in one giant folder and put it in a docker container?
Not directly. The goal of a Dockerfile is to describe how you would install what you need.
Then you docker build, and from the image you get, you docker run.
But in order for docker to manage the lifecycle of that container correctly, it is best if the container is limited to one process (instead of trying to run everything, like a webapp server, a MongoDB, and so on, in the same container space).
That means:
describing each component of your system in a separate Dockerfile (building separate images)
running those containers in a way that they see each other and can communicate with each other (a docker-compose sketch follows below).
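A docker-compose sketch of that idea, with image tags and service names as placeholders:
# docker-compose.yml - one process per container, wired together on a shared network
services:
  app:
    image: tomcat:9
    ports:
      - "8080:8080"
    depends_on:
      - mongo
  mongo:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
volumes:
  mongo-data: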
You have an example of a complex multi-component system in my project: b2d.
