Docker - Install CI Server on a remote host

I have a remote host that is already running Ubuntu. I want to create a Dockerfile that would help me run a Continuous Integration server like TeamCity on this remote host.
I understand that I create a Dockerfile from a base image like Ubuntu. But I do not need another Ubuntu filesystem on an Ubuntu host. How can I handle this situation?

If you need all the userspace files of Ubuntu, then this is simply how Docker operates: in order to guarantee that you can lift your container off an Ubuntu machine and run it on a different Linux distribution, Docker keeps its own copy of everything above the kernel. That copy is shared amongst every container based on Ubuntu, but it's still a couple of hundred megabytes of disk space.
If you don't need so much from Ubuntu, then you can start with a much smaller image such as busybox.
You could also create a fairly empty container image and map parts of your Ubuntu disk into the container using the -v option. But then you won't have everything you need inside the container.
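As a rough sketch of that last approach (the host path and image here are only examples), mapping a host directory into a container looks like this:
# /opt/shared on the Ubuntu host appears as /data inside the container
docker run -it -v /opt/shared:/data ubuntu /bin/bash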

Related

Can I use the computer power on a different machine for docker?

I use Docker locally for development. I run a few containers for Redis, Postgres, the frontend compilation and the backend compilation. The frontend and backend containers map files from my local machine into the containers, where a process runs that auto-compiles. I can then access the backend server and frontend web server from the services in the Docker containers hosting them.
My backend can be very resource-intensive, as I'm developing a task that processes a large amount of time-series data. It can take about 5-10 minutes on my machine. I'm using a 15-inch MacBook Pro as my local machine, and running Docker plus my development setup is really pushing my machine to its limits. I'm considering running Docker on another Linux PC I have and connecting to it from my MacBook Pro.
I use CircleCI quite a bit, and they have a setup with Docker where the CI containers you run don't actually run Docker themselves but are networked out to a separate dedicated machine. The only issue is that mapping volumes doesn't work too well.
How can I set this up in docker so that I can run docker commands locally that run on a separate machine?
Any ideas how I can map the directories to the other machine?
You can use SSH to run commands on another machine:
ssh user@server docker run hello-world
I would recommend against mapping volumes, as that doesn't work well. Instead, I'd simply copy the data you need to the server:
scp -r directory-to-copy/* user@server:/destination-to-copy-into
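If both your local Docker CLI and the remote engine are recent enough (roughly 18.09 or later) and you have SSH key access set up, you can also point the local client at the remote daemon; the hostname below is just a placeholder:
# run every local docker command against the remote engine over SSH
export DOCKER_HOST=ssh://user@server
docker ps    # lists containers on the remote machine, not on your laptop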

Do I still need to install Node.js or Python via docker container file when the OS is installed with python/node.js already?

I am trying to create the Dockerfile (image file) for the web application I am creating. Basically, the web application is written in Node.js and Vue.js. In order to create a Docker container for the application, I followed the documentation from Vue.js for creating a Dockerfile. The steps given are working fine. I just wanted to clarify my understanding of this part.
link:- https://cli.vuejs.org/guide/deployment.html#docker-nginx
If the necessary package (Node/Python) is installed in the OS (not in the container), would the container be able to pick up the npm scripts and execute the Python scripts as well? If yes, is it really dependent on the host OS's software packages?
Please help me with the understanding.
Yes, you need to install Node or Python or whatever software your application needs in your container. The reason is that the container should be able to run on any host machine that has Docker installed, regardless of how the host machine is set up or what software it has installed.
It might be a bit tedious at first to make sure that your Dockerfile installs all the software that is needed, but it becomes very useful when you want to run your container on another machine. Then all you have to do is type docker run and it should work!
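A minimal sketch of what such a Dockerfile might look like for a Node.js app (the base image tag, file layout, and start script here are assumptions about your project):
FROM node:16                 # Node comes from the image, not from the host OS
WORKDIR /app
COPY package*.json ./
RUN npm install              # dependencies are baked into the image
COPY . .
CMD ["npm", "run", "serve"]  # or whatever your project's start script is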
Like David said above, Docker containers are isolated from your host machine and should be treated as a completely different machine/host. The way containers communicate with other containers, or sometimes with the host, is through network ports.
One "exception" to the isolation between the container and the host is that the container can sometimes write to files in the host in order to persist data even after the container has been stopped. You can use volumes or mounts to allow containers to write to files on the host.
I would suggest the Docker Overview for more information about Docker.

I'm still confused by Docker containers and images

I know that containers are a form of isolation between the app and the host (the managed running process). I also know that container images are basically the package for the runtime environment (hopefully I got that correct). What's confusing to me is when they say that a Docker image doesn't retain state. So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart? Why would I use a database in a Docker container?
It's also difficult for me to grasp LXC. On another question page I see:
LinuX Containers (LXC) is an operating system-level virtualization method for running multiple isolated Linux systems (containers) on a single control host (LXC host)
What does that exactly mean? Does it mean I can have multiple versions of Linux running on the same host as long as the host supports LXC? What else is there to it?
LXC and Docker are quite different, but both are container technologies.
There are two types of containers:
1. Application containers: their main purpose is to package an application and its dependencies. These are Docker containers (lightweight containers). They run as a process on your host and do the work you want; they don't need a full OS image or a boot-up process, so they come and go in a matter of seconds. Running multiple processes/services inside a single Docker container is discouraged; it can be done, but it is laborious. Resources (CPU, disk, memory, RAM) are shared with the host.
2. System containers: these are fat containers, meaning they are heavier and need an OS image to launch themselves. At the same time they are not as heavy as virtual machines; they are very similar to VMs but differ a bit in architecture.
For example, with Ubuntu as the host machine and LXC installed and configured on it, you can run a CentOS container, an Ubuntu container (with a different version), RHEL, Fedora or any other Linux flavour on top of the Ubuntu host (see the sketch after this paragraph). You can also run multiple processes inside an LXC container. Here too, resources are shared.
So if you have a large application running in one LXC container that requires more resources, and another application running in a second LXC container that requires less, the container with the smaller requirement will yield resources to the container with the larger requirement.
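As a rough illustration of that CentOS-on-Ubuntu scenario using the classic LXC tools (the container name is a placeholder, and the available distributions/releases depend on what the download template currently offers):
# create and start a CentOS 7 system container on an Ubuntu host
sudo lxc-create -t download -n centos-test -- -d centos -r 7 -a amd64
sudo lxc-start -n centos-test
sudo lxc-attach -n centos-test   # get a shell inside the CentOS container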
Answering Your Question:
So if I create a Docker image with a database (like PostgreSQL), wouldn't all the data get wiped out when I stop the container and restart?
You shouldn't create a database Docker image with data baked into it (this is not recommended).
You run/create a container from an image and you attach/mount the data to it.
So when you stop/restart a container, the data never gets lost as long as you attach that data to a volume, since the volume resides somewhere other than the Docker container (maybe an NFS server, or the host itself).
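For example, with the official PostgreSQL image the data directory can be attached to a named volume so the data survives container restarts (the container name, volume name and password here are only placeholders):
docker volume create pgdata
# the volume holds /var/lib/postgresql/data, so stopping or removing the container does not delete the data
docker run -d --name my-postgres -e POSTGRES_PASSWORD=secret -v pgdata:/var/lib/postgresql/data postgres:15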
Does it mean I can have multiple versions of Linux running on the same host as long as the host support LXC? What else is there to it?
Yes, you can do this. We run LXC containers in our production environment.

Stack difference between VirtualBox and Docker for Windows

Let's say I want to run a Linux application on a Windows laptop. (Let's say the application is Mongo, and it MUST be the Linux build of Mongo.) I have 2 options:
I can use VirtualBox to run a Linux virtual machine with the application
I can use Docker for Windows to run a Linux Docker image with the application
My question is: which solution is expected to have better performance? A VirtualBox virtual machine has well-known overhead, while a Docker instance is a process with low overhead. But between the Windows laptop and the Docker instance, as far as I understand, Docker for Windows puts an intermediate virtual machine running Linux (a Hyper-V VM?).
The stack looks similar for both options. Could I say that one option has definitely better performance/resource requirements, or does it depend on the specific details?
I would suggest using Docker instead of VirtualBox.
Docker is more or less platform-independent: if you later want to run on Mac, Linux or Windows, you only need to copy a few files from one place to another to set everything up, whereas with VirtualBox you need to copy the whole image or reconfigure everything.
Docker provides built-in support for all kinds of base images, which helps you get a speedy development setup.
With Docker, you can destroy or re-run an image with one or a few commands.
Docker provides an easy way to map local folders; with VirtualBox you need to configure that yourself.
VirtualBox is heavy compared with Docker.
With Docker, you always get a fresh, clean environment, which matters if you decide to use Continuous Deployment.
Network mapping (exposing ports) and many more things are easily available with Docker.
So again, lastly: go with Docker :)
Hope this gives you a clear idea. Please let me know if you need any help setting up a Docker environment for your development.

Which Docker base image should be used to install Apps in a container without any additional OS?

I am running a Docker daemon on my GUEST OS which is CentOS. I want to install software services on top of that in an isolated manner and I do not need another OS image inside my Docker container.
I want to have a Docker container with just the additional binaries and libraries for the software application I am going to install.
Is there a "whiteglove/blank" base image in Docker I can use? I want a very lean container that uses as a starting point what my GUEST OS has to offer. Is that possible?
What you're asking for isn't possible out-of-the-box with Docker. Each Docker image has its own root filesystem, which needs to have some sort of OS installed.
Your options are:
Use a minimal base image, such as the BusyBox image. This will give you the absolute minimum you need to get a container running.
Use the CentOS base image, in which case your container will be running the same or very similar OS.
The reason Docker images are like this is because they're meant to be portable. Any Docker image is meant to run anywhere Docker is running, regardless of the operating system. This means that the Docker image must contain an entire root filesystem and OS installation.
What you can do if you need stuff from the host OS is share a directory using Docker volumes. However, this is generally meant to be used for mounting data directories, and it still necessitates the Docker image having an OS.
That said, if you have a statically-linked binary that has absolutely no dependencies, it becomes easy to create a very minimal image. This is called a "microcontainer", and Go in particular is well-suited to producing these. Here is some further reading on microcontainers and how to produce them.
One other option you could look into if all you want is the resource management part of containers is using lxc-execute, as described in this answer. But you lose out on all the other nice Docker features as well. Unfortunately, what you're trying to do is just not what Docker is built for.
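If BusyBox gives you enough of a userland, the image can be little more than the base plus your own files; a minimal sketch (the script name is just an example):
FROM busybox
COPY hello.sh /hello.sh       # your own script or binary is the only addition
CMD ["sh", "/hello.sh"]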
As I understand Docker, when you use a base image you do not really install an additional OS.
It's just a directory structure of sorts with preinstalled programs, or we can say the filesystem of an actual base-image OS.
In most cases [click this link for the exception], Docker itself [the Docker engine] runs in a Linux VM when used on Mac and Windows.
If you are confused about virtualization: there is no virtualization inside a Docker container. Containers run in user space on top of the host operating system's kernel, so the containers and the host OS share the same kernel.
So, to summarize:
Consider the host OS to be Windows or Mac.
Docker, when installed, runs inside a Linux VM on these host OSes. [use this resource for more info]
The base Linux images inside the Docker containers then use this Linux VM as their host OS, not the native Windows or Mac.
On Linux, the base Linux images inside the Docker containers directly use the host OS, which is Linux itself, without any virtualization.
The base image inside a Docker container is just a snapshot of that Linux distribution's programs and tools.
The base image makes use of the host kernel (which in all three cases is Linux).
Hence, there is no virtualization inside a container, but Docker may use a single parent Linux virtual machine to run itself [the Docker engine] inside it.
Conclusion:
When you use a base image in Docker, there is no additional OS installed inside the container; just a copy of a filesystem with minimal programs and tools is created.
From Docker's best practices:
Whenever possible, use current Official Repositories as the basis for your image. We recommend the Debian image since it’s very tightly controlled and kept extremely minimal (currently under 100 mb), while still being a full distribution.
What you're asking for is completely against the idea of using Docker containers. You don't want any dependency on your GUEST OS; if you have one, your Docker image won't be portable.
When you create a container, you want it to run on any machine that runs Docker, be it CentOS, Ubuntu, Mac, or Microsoft Azure :)
Ideally, there is no advantage in your base container OS having anything to do with your host OS.
For any container, you need at least a root filesystem; that is why you need to use a base image that provides one. Your idea is not completely against the container paradigm of usage: as opposed to VMs, we want containers to be minimal, without much repetition of elements that they could leverage from the underlying OS.
Following the links of Rohan Singh, I found some related info that doesn't generally contradict, but relates to the core idea of the question:
The base image for all Docker images is the scratch image. It has essentially nothing in it. This may sound useless, but you can actually use it to create the smallest possible image for your application, if you can compile your application to a static binary with zero dependencies like you can with Go or C.
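A sketch of that scratch idea, assuming a statically linked Go binary built with CGO disabled (the stage names and file names are illustrative):
# build stage produces a static binary; the final image is scratch plus that one file
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /app main.go

FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]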

Resources