Creating a development container for multiple PCs - Docker

I want to create an image that contains all the dependencies needed for development (Java, Maven, Node, etc.) and then deploy it to several different PCs at the same time.
I would like to know whether this is possible with Docker, and whether you could point me to a guide or some information on how to do it. The image should contain the shared software, but each machine should still remain unique and keep its own configuration. I only want the image to give me a fast, ready-to-use programming environment. Thanks in advance.

The advantage of Docker is that a container shares the kernel of the host system while being encapsulated in its own environment with its own network interface. That makes Docker fast, because you don't need to emulate hardware or a whole operating system.
But here comes the catch for you:
Since Docker does not emulate or contain the full OS, you can't simply make one image executable in every environment.
The common approach is to tell all users that you are using Linux containers, which already covers Linux, macOS and so on. For Windows users there should be a note that they need WSL (Windows Subsystem for Linux) installed; that is how Windows can run a Linux environment in parallel and stay compatible as well.
For your dependencies (a minimal image is sketched below):
You can build an image, or a Compose setup, that contains Java, Node, ... and all the environment stuff; only Docker itself needs to be compatible with the host via that WSL/Linux-container arrangement.
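A minimal sketch of such an image (the base image choice, versions and tool list are assumptions, adjust them to your stack):
FROM eclipse-temurin:17-jdk
# add the shared build tooling (this image is Ubuntu-based, so apt-get is available)
RUN apt-get update && apt-get install -y --no-install-recommends maven nodejs npm git \
 && rm -rf /var/lib/apt/lists/*
# the project working directory gets bind-mounted here at run time
WORKDIR /workspace
CMD ["bash"]
Each developer keeps machine-specific configuration outside the image, for example by bind-mounting their project and their own settings: docker run -it -v "$PWD":/workspace -v "$HOME/.m2":/root/.m2 dev-image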
So that was a lot about Docker. The same goes for Kubernetes/minikube/... or whatever you want to use locally: of course you need the correct installation for the Windows/Linux target, and if you use Linux containers and require Windows machines/servers to have WSL, you can install a Linux Kubernetes as well and stay consistent everywhere.

Does Docker always need an operating system as the base image?

I have heard that Docker doesn't need a separate OS on Linux, because it shares the host OS, but that on Windows it uses Hyper-V to run a Linux virtual machine so that it can run Linux software on it.
But I get confused about the FROM stage in the Dockerfile; all the guides show something like this:
FROM ubuntu:18.04
COPY . /usr/local/bin
RUN make
CMD /usr/local/bin/yourapp
I can understand these steps: first you need an OS, then you deploy your application, and finally you run your app or whatever.
But what does the FROM stage really mean?
Does it always need an OS? Does the nginx Docker image have an OS in it?
If I want to build my own app, I write it, I compile it and I run it; but does my own app need an OS? If not, what should I write in the FROM stage?
I saw a picture which said that a Docker container does not need an OS but uses the host OS, yet docker build always seems to need an OS.
The containers on a host share the (host's) kernel but each container must provide (the subset of) the OS that it needs.
In Windows, there's a 1:1 mapping of kernel:OS but, with Linux, the kernel is bundled into various OSs: Debian, Ubuntu, Alpine, SuSE, CoreOS etc.
The FROM statement often references an operating system but it need not and it is often not necessary (nor a good idea) to bundle an operating system in a container. The container should only include what it needs.
The NGINX image uses Debian (Dockerfile).
In some cases, the container process has no dependencies beyond the kernel. In these cases, a special FROM scratch may be used that adds nothing else. It's an empty image (link).
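A minimal sketch of that case (the binary name is hypothetical, and the binary must be statically linked since scratch provides no libraries):
FROM scratch
# copy a statically linked binary into an otherwise empty image and run it
COPY myapp /myapp
ENTRYPOINT ["/myapp"]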
No, it's not like that. To create any Docker image using a Dockerfile, you need to start with a base image. That base image can be anything, even an empty image. In the Dockerfile in your example the FROM line says ubuntu, which means it uses Ubuntu as the base image. It is not always necessary to have an OS as the base image.
Follow this link - https://linuxhint.com/create_docker_image_from_scratch/
It will clear up your doubts about base images.
Now I got my answer.
The FROM stage imports software, but not an OS with a kernel.
It just provides a platform for your application; the ubuntu, debian or centos you write in the FROM stage is just software, and the real kernel has no relationship with them.
So if your application can run on its own, like hello-world, as just a binary that doesn't rely on any other library, you don't need them. But mostly you do need an OS image, because it has the libraries you need.
No, the FROM stage is not providing the operating system to the image. The kernel is always provided by the host system where you are running the container. The FROM stage provides the initial file system, i.e. files, directories, pre-installed software etc. for the new image. You can also start FROM scratch, which is like a blank slate.
The FROM line does NOT necessarily have to point to another OS:
It can be any other image, or it can be FROM scratch.
Containers on a host share the host kernel, so you can think of a container as a master process using the host kernel.
People generally see HTTPD, NGINX etc. using Debian as the container OS, since the Debian image is very thin, serves the purpose of isolation and runs as an independent server.
You can even create an HTTPD or NGINX image without using any OS and tag it with your own version :-)

Best practice for spinning up container-based (development) environments

OCI containers are a convenient way to package a suitable toolchain for a project, so that development environments are consistent and new project members can start quickly by simply checking out the project and pulling the relevant containers.
Of course I am not talking about projects that simply need a C++ compiler or Node.js. I am talking about projects that need specific compiler packages that don't work with anything newer than Fedora 22, projects with special tools that need to be installed manually into strange places, working on multiple projects whose tools are not co-installable, and such. For this kind of thing it is easier to have a container than to follow twenty installation steps and then pray that the bits left over from the previous project don't break things for you.
However, starting a container with a compiler to build a project requires quite a few options on the docker (or podman) command line. Besides the image name, usually (a typical invocation is sketched after this list):
- a mount of the project working directory
- the user id (because the container should access the mounted files as the user running it)
- if the tool needs access to some network resources, it might also need:
  - some credentials, via the environment or otherwise
  - the ssh agent socket (mount and environment variable)
- if the build process involves building docker containers:
  - the docker socket (mount); buildah may work without special setup though
- and if it is a graphical tool (e.g. an IDE):
  - the X socket mount and environment variable
  - --ipc host to make shared memory work
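A typical ad-hoc invocation covering most of these options might look roughly like this (the image name and mount paths are placeholders):
# hypothetical one-off invocation; adjust the image name and paths to your project
docker run --rm -it \
  --user "$(id -u):$(id -g)" \
  --volume "$PWD":/workspace --workdir /workspace \
  --volume "$SSH_AUTH_SOCK":/ssh-agent --env SSH_AUTH_SOCK=/ssh-agent \
  --volume /var/run/docker.sock:/var/run/docker.sock \
  --env DISPLAY --volume /tmp/.X11-unix:/tmp/.X11-unix --ipc host \
  my-toolchain-image bash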
And then it can get more complicated by other factors. E.g. if the developers are in different departments and don't have access to the same docker repository, their images may be called differently, because docker does not support symbolic names of repositories (podman does though).
Is there some standard(ish) way to handle these options or is everybody just using ad-hoc wrapper scripts?
I use the Visual Studio Code Remote - Containers extension to connect the source code to a Docker container that holds all the tools needed to build the code (e.g. npm modules, Ruby gems, ESLint, Node.js, Java). The container contains all the "tools" used to develop/build/test the source code.
Additionally, you can also put the VSCode extensions into the Docker image to help keep VSCode IDE tools portable as well.
https://code.visualstudio.com/docs/remote/containers#_managing-extensions
You can provide a Dockerfile in the source code for newcomers to build the Docker image themselves or attach VSCode to an existing Docker container.
If you need to run a server inside the Docker container for testing purposes, you can expose a port on the container via VSCode, and start hitting the server inside the container with a browser or cURL from the host machine.
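A minimal .devcontainer/devcontainer.json for such a setup might look like this (the name, port and extension ID are examples, and the exact keys can vary between versions of the extension):
{
  "name": "my-project",
  "build": { "dockerfile": "Dockerfile" },
  "forwardPorts": [3000],
  "customizations": {
    "vscode": { "extensions": ["dbaeumer.vscode-eslint"] }
  }
}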
Be aware of the known limitations of the Visual Studio Code Remote - Containers extension. The one that impacts me the most is the beta support for Alpine Linux. I have often noticed that some of the popular Docker Hub images are based on Alpine.

Why provide a Linux distro as a Dockerfile base when my host has all the software I need installed?

I want to start writing a Docker image. I have a .NET Core 2.0 Web API service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking this question because most of the examples I have seen include a base image. Most of the time it's something very generic, like "FROM ubuntu". I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc) with which to install the tools you need to download a file, run software (that presumably needs to be linked to a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system such as the resolver that are provided in userland by your distro and not shared from the host in a Docker install.
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.
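For example, rather than relying on whatever the host has installed, a .NET Core service image would typically bring its own runtime from an official base image (the tag, paths and DLL name below are placeholders):
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
# copy the published output produced by `dotnet publish`
COPY ./publish/ .
ENTRYPOINT ["dotnet", "MyWebApi.dll"]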

Docker is great for run-anywhere, but what about the machines that host Docker?

I am wondering how to make the machines that host Docker easily replaceable. I would like something like a Dockerfile that contains instructions on how to set up the machine that will host Docker. Is there a way to do that?
The naive solution would be to create an official "docker host" binary image to install on new machines, but I would like something that is reproducible and transparent like a Dockerfile.
Tools like Vagrant, Puppet or Chef seem useful, but they appear to be aimed at virtual machine provisioning and they all seem to require setting up some sort of "master node" server. I am not going to be spinning machines up and tearing them down regularly, so a master server is a waste of a server; I just want something reproducible in case I need to set up or replace a machine.
This is basically what docker-machine does for you: https://docs.docker.com/machine/overview/
Other "orchestration" systems will make this automated and easier as well.
There are lots of solutions to this with no real one size fits all answer.
Chef and Puppet are the popular configuration management tools that typically use a centralized server. Ansible is another option that typically runs without a server and just connects over ssh to configure the host. All three of these work very similarly, so if your concern is simply managing the CM server, Ansible may be the best option for you.
For VMs, Vagrant is the typical solution, and it can be combined with other tools like Ansible to provision the VM after creating it.
In the cloud space, there's tools like Terraform or vendor specific tools like CloudFormation.
Docker is working on a project called Infrakit to deploy infrastructure the way compose deploys containers. It includes hooks for several of the above tools, including Terraform and Vagrant. For your own requirements, this may be overkill.
Lastly, for designing VM images, Docker recently open-sourced its Moby project, which creates a VM image containing a minimal container OS, the same one used under the covers in Docker for Windows, Docker for Mac, and possibly some of the cloud hosting providers.
We automate Docker installation on hosts using Ansible + Jenkins. Given the proper SSH access, provisioning new Docker hosts is a matter of triggering a Jenkins job.
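Under the hood such a job usually boils down to a few idempotent steps; a minimal manual sketch (assuming a supported Linux distribution) using Docker's convenience install script:
# download and run Docker's install script, then allow the login user to run docker without sudo
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh
sudo usermod -aG docker "$USER"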

Docker, I have one folder that contains the application server. What can be used as a container?

I want to ask: if I have one folder that contains the application servers (Axis2, Tomcat, WSO2, MongoDB and a JMS consumer), what can be used as a container?
Is Docker like an application installer, something that packages the entire application so that a single file is then used as the installer, for example server.exe for Windows or server.deb for Ubuntu?
Could you help explain this?
Docker as an application installer?
No, Docker is a platform which manages containers (isolated user/process/disk environments running on the host kernel), built around building, shipping and running them (Containers as a Service).
The best practice is to isolate each part of your global service in its own container, both because of the PID 1 zombie-reaping issue (detailed in "Use of Supervisor in docker") and for ease of management and updating.
If each container only represents a Tomcat, a MongoDB, a ..., each one is easier to manage and debug than one giant container.
You can also stop or update one without necessarily impacting all the others.
The installation-like part is rather the description of your environment (both in terms of the OS and of the applications you want to add to a container) in the Dockerfile: a description of what your environment needs in order to run.
That helps you build an image (a sort of archive of all the files you need), from which you docker run a container.
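In practice that workflow is just two commands (the image name and port are examples):
# build an image from the Dockerfile in the current directory, then start a container from it
docker build -t myapp:latest .
docker run --rm -p 8080:8080 myapp:latest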
Right now, those containers only run as Linux machines on Linux kernel hosts (or on Windows, through a Linux VM).
There are not yet pure Windows images/containers that run on Windows (it is in progress, with Windows Server 2016).
So can you just take what you have in one giant folder and put it in a docker container?
Not directly. The goal of a Dockerfile is to describe how you would install what you need.
Then you docker build, and from the image you get, you docker run.
But in order for Docker to manage the lifecycle of that container correctly, it is best if the container is limited to one process (instead of trying to run everything, like a web-app server, a MongoDB and so on, in the same container space).
That means:
- describing each component of your system in a separate Dockerfile (building separate images)
- running those containers in a way that lets them see and communicate with each other (see the sketch below)
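The second point can be as simple as putting the containers on a shared user-defined network (image tags are examples):
# containers on the same user-defined network can reach each other by container name
docker network create app-net
docker run -d --name mongo --network app-net mongo:4.4
docker run -d --name webapp --network app-net -p 8080:8080 tomcat:9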
You have an example of a complex multi-component system in my project: b2d.
