Building a docker image on EC2 for web application with many dependencies - docker

I am very new to Docker and have some very basic questions. I was unable to get my doubts clarified elsewhere and hence am posting them here. Pardon me if the queries are very obvious. I know I lack some basic understanding of images, but I had a hard time finding an easy-to-understand explanation of the whole topic.
Problem at hand:
I have my application running on an EC2 node (r4.xlarge). It is a web application with a LOT of dependencies (system dependencies + other libraries etc.). I would like to create a Docker image of my machine so that I can easily run it whenever I launch a new EC2 instance.
Questions:
Do I need to build the Docker image from scratch, or can I use some base image?
If I can use a base image, which one do I select? (It is hard to know the OS version on the EC2 machine, and hence I am not sure which base image to start from.)
I referred to this documentation:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/docker-basics.html#install_docker
But it builds from an Ubuntu base image.
The above example has instructions on installing Apache (and other things needed for the application). Let's say my application needs server X to be installed + 20 system dependencies + 10 other libraries.
Ex:
yum install gcc
yum install gfortran
wget <abc>
When I create a Dockerfile, do I need to specify all the installation instructions like the above? I thought creating an image was like taking a copy of your existing machine. What is the Dockerfile supposed to contain in this case?
Pointers to some good documentation on building a Docker image on EC2 for a web app with dependencies would be very useful too.
Thanks in advance.

First, if you want to move toward Docker, then I would suggest using AWS ECS, which is specially designed for Docker containers and has auto-scaling and load-balancing features.
As far as your questions are concerned:
You need a Dockerfile that installs all the packages and the application that are already installed on your EC2 instance. As far as the base image is concerned, I would recommend Alpine; it is the base used by many official Docker images.
Why Alpine?
Alpine describes itself as:
Small. Simple. Secure. Alpine Linux is a security-oriented,
lightweight Linux distribution based on musl libc and busybox.
https://nickjanetakis.com/blog/the-3-biggest-wins-when-using-alpine-as-a-base-docker-image
https://hub.docker.com/_/alpine/
Let's say my application needs server X to be installed + 20 system
dependencies + 10 other libraries.
So you need to make a Dockerfile that installs everything you mentioned.
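For instance, a minimal sketch against an Alpine base could look like this (the package names come from your example commands, and their availability in Alpine's repositories is an assumption):
FROM alpine:3.8
# Alpine uses apk rather than yum; --no-cache avoids keeping the package index in the image
RUN apk add --no-cache gcc gfortran wget
# fetch any extra artifacts the app needs, e.g.:
# RUN wget <abc>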
Again, I would suggest ECS for Docker-based applications, because ECS is designed for Docker, whereas EC2 is not.
CONTAINERIZE EVERYTHING
Amazon ECS lets you easily build all types of
containerized applications, from long-running applications and
microservices to batch jobs and machine learning applications. You can
migrate legacy Linux or Windows applications from on-premises to the
cloud and run them as containerized applications using Amazon ECS.
https://aws.amazon.com/ecs/
https://aws.amazon.com/getting-started/tutorials/deploy-docker-containers/
https://caylent.com/containers-kubernetes-docker-swarm-amazon-ecs/

You can use a base image; you specify it with the first line of your Dockerfile, with FROM.
The base OS of the EC2 instance doesn't matter for the container. That's the point of containers: you can run Linux on Windows, Arch on Debian, whatever you want.
Yes, dependencies that don't exist in your base image will need to be specified and installed. (Depending on the default package manager of the base image you are working from, you might use dpkg, yum, or apt-get.)
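For example, the FROM line decides which package manager the RUN instructions should call (a sketch; the packages are just the ones from the question):
# with a Debian/Ubuntu base image, apt-get is the package manager:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y gcc gfortran wget
# with a yum-based base image such as CentOS, the equivalent would be:
# FROM centos:7
# RUN yum install -y gcc gfortran wget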

Related

Why provide a Linux distro as a Dockerfile base when my host has all the software I need installed?

I want to start writing a Docker image. I have a .NET Core 2.0 Web API service that I have deployed to an Amazon Linux machine. It runs fine, but I would like to automate the build and deployment process a bit.
As far as I am concerned, there is no need for a parent image for the image I need to build. I might grab some files from a location, run some dotnet CLI commands, and run the service using Apache as a reverse proxy. I don't really see the need for a parent image in any of that.
I am asking this question because most of the examples I have seen include a base image. Most of the time it's something very generic, like "FROM ubuntu". I have read that most images will include a parent image. According to Docker's documentation:
A parent image is the image that your image is based on. It refers to the contents of the FROM directive in the Dockerfile. Each subsequent declaration in the Dockerfile modifies this parent image. Most Dockerfiles start from a parent image, rather than a base image. However, the terms are sometimes used interchangeably.
What exactly is the point of inheriting from Ubuntu? Even the Docker docs suggest using Debian "since it’s very tightly controlled and kept minimal". Does that just ensure that your Linux machine has an Ubuntu distribution? Does it even matter if I am using Amazon Linux but use the Debian image as my base?
A Docker image runs in a set of filesystem namespaces which are unconnected from the host's except where you've chosen to bind-mount a volume. This means that tools installed on the host are unavailable to the container: Just because the host runs Amazon Linux doesn't mean that the userspace commands Amazon Linux provides (and the libraries those commands use to run) are available to the guests.
Without a Linux distro available inside the container, you wouldn't have a package management tool (yum, apt-get, etc) with which to install the tools you need to download a file, run software (that presumably needs to be linked to a libc, a copy of OpenSSL, or other shared components). There are also runtime parts of a working Linux system such as the resolver that are provided in userland by your distro and not shared from the host in a Docker install.
Using a base image ensures that you have tools available inside your container -- and it ensures that that container will work consistently on any Linux system with a compatible kernel and hardware architecture.
It's possible in theory to bind-mount many of the tools from the host (as by exposing all of /usr as a volume), but doing so would defeat many of the advantages Docker offers in portability.
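For the .NET Core 2.0 service in the question, that means starting from a parent image that ships the runtime; a rough sketch (the image tag, publish folder, DLL name, and port are assumptions) might be:
# the parent image provides the .NET Core 2.0 runtime plus a minimal Debian userland
FROM microsoft/dotnet:2.0-runtime
WORKDIR /app
# copy the output of `dotnet publish` into the image
COPY ./publish/ ./
# the service listens on port 5000; Apache on the host can reverse-proxy to it
EXPOSE 5000
ENTRYPOINT ["dotnet", "MyApi.dll"]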

Docker for non-code deployments?

I am trying to help a sysadmin group reduce server & service downtime on the projects they manage. Their biggest issue is that they have to take down a service, install the upgrade or configuration change, and then restart it and hope it works.
I have heard that docker is a solution to this problem, but usually from developer circles in the context of deploying their node/python/ruby/c#/java, etc. applications to production.
The group I am trying to help is using vendor software that requires a lot of configuration and management. Can docker still be used in this case? Can we install any random software on a container? Then keep that in a private repository, upgrade versions, etc.?
This is a Windows environment, if that makes any difference.
Docker excels at stateless applications. You can use it for applications with persistent data, but that requires the use of volumes.
Can docker still be used in this case?
Yes, but it depends on the application. It should be installable headless, plus a couple of other fairly specific requirements. (E.g. talking to third-party servers to get a license can create issues.)
Can we install any random software on a container?
Yes... but remember that when the container is recreated, that software will be gone. It's better to bake it into an image and then deploy it. See my example below.
Then keep that in a private repository, upgrade versions, etc.?
Yes.
Here is an example pipeline:
Create a Dockerfile for the OS and what steps it takes to install the application. (Should be headless)
Build the image (at this point, it's called an image, not a container)
Test the image locally by creating a local container. This container is what has the configuration data such as environment variables, the volumes for persistent data it needs, etc.
If it satisfies the local developer's wants, then you can either:
Let your build servers create the image and publish it to an internal Docker registry (best practice), or
Let your local developer publish it to an internal Docker registry.
At that point, your next-level environments can pull the image down from the Docker registry, configure it, and create the container.
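As a rough sketch of what those steps translate to on the command line (the registry hostname, image name, environment variable, and volume path are all placeholders):
# step 2: build the image from the Dockerfile in the current directory
docker build -t registry.internal.example.com/vendor-app:1.0 .
# step 3: test it locally, passing configuration and persistent-data volumes at run time
docker run -d --name vendor-app-test -e LICENSE_SERVER=licsrv01 -v vendor-data:/var/lib/vendorapp registry.internal.example.com/vendor-app:1.0
# steps 4/5: publish to the internal registry so other environments can pull it
docker push registry.internal.example.com/vendor-app:1.0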
In short, it will require a lot of elbow grease but is possible.
Can we install any random software on a container?
Generally yes, but you can have many problems with legacy software that was developed to run on bare metal.
First, there can be persistence problems, but those can be solved using volumes.
Second, a program that works well on a full OS may not work as well in a container. Containers differ somewhat from VMs or bare metal; for example, due to the missing init process, some containers suffer from zombie-process issues. You can read about other differences here.
Docker is a big win for stateless apps, but some heavy legacy apps may not work well inside containers and should be tested thoroughly before being used in production.
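A quick sketch of both mitigations mentioned above, using an illustrative image name and data path:
# keep the application's state in a named volume so it survives container replacement
docker volume create legacy-data
# --init runs a tiny init process inside the container that reaps zombie children
docker run -d --init --name legacy-app -v legacy-data:/opt/legacy/data legacy-app:latest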

Using docker to run a distributed computation

I just came across Docker and was looking through its docs to figure out how to use it to distribute a Java project across multiple nodes while keeping the distribution platform-independent, i.e. the nodes can be running any platform. Currently I'm sending classes to different nodes and running them there, with the assumption that those nodes have the same environment as the client. I couldn't quite figure out how to do this; any suggestions would be greatly appreciated.
I do something similar. In my humble opinion Docker or not is not your biggest problem. However, using Docker images for this purpose can and will save you a lot of headaches.
We have a build pipeline where a very large Java project is built using Maven. The outcome of this is a single large JAR file that contains the software we need to run on our nodes.
But some of our nodes also need to run some 3rd-party software such as ZooKeeper and Cassandra. So after the Maven build we use packer.io to create a Docker image that contains all needed components, which ends up on a web server that can be reached only from within our private cloud infrastructure.
If we want to roll out our system we use a combination of Python scripts that talk with the OpenStack API and create virtual machines on our cloud, and Puppet, which performs the actual software provisioning inside the VMs. Our VMs are CentOS 7 images, so what Puppet actually does is add the Docker yum repos, install Docker through yum, pull in the Docker image from our repository server, and finally use a custom bash script to launch our Docker image.
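The launch script at the end of that chain can stay very small; a sketch along these lines (the repository URL, image name, and port are placeholders):
#!/bin/bash
# pull the latest build of the image from the internal repository server
docker pull repo.internal.example.com/ourapp:latest
# remove any previous instance, then start the new one
docker rm -f ourapp 2>/dev/null || true
docker run -d --name ourapp -p 8080:8080 repo.internal.example.com/ourapp:latest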
For each of these steps there are certainly even more elegant ways of doing it.

what should be packed as a docker image?

I just got started with Docker.
I'm really confused about what should be packed as a Docker image.
On Docker Hub, I can find complete OSes as Docker images: ubuntu, centos...
as well as popular platforms and databases like nodejs, mongodb...
It seems to me Docker Hub is just like a software repository.
Should everything be packed as an image? How about just a command-line tool like ls, cd, or git? They are also software; what qualifies as a Docker image?
Please help clarify.
Docker will provide some isolation between the programs in the image, and your host environment. In a docker image, one may package anything from a single binary, to a full environment (everything but the linux kernel).
See it as a convenience. It provides a convenient form to deploy programs that require a complex environment that may conflict with the programs installed on the host. For instance, if you're trying to package a webapp (e.g. some blog software), the docker container will allow you to ship your application code, along with a tested version of its interpreter (php, python, etc), a compatible webserver, and maybe a database environment -- together. From the perspective of the user installing your container/app, nothing else than the container is needed to run the app. It's all self-contained, and it's simpler than setting up a virtual machine.
If your binaries in the image depend on an ls command, then you'd include that as well. Generally, in the image, you include a binary (the entry point), as well as all its dependencies.
If you're familiar with chroots, you may see a docker image as a fancy chroot where the network, and process address space are also isolated, in addition to the file system.
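At the "single binary" end of that spectrum, an image can be as small as this sketch (the binary name is a placeholder, and it assumes the binary is statically linked):
# an empty filesystem: only what is copied in exists inside the image
FROM scratch
# a single statically linked binary and nothing else
COPY mytool /mytool
ENTRYPOINT ["/mytool"]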
You can think of Docker Hub as an app store.

How do I create docker image from an existing CentOS?

I am new to docker.io and not sure if this is beyond the scope of Docker. I have an existing CentOS 6.5 system. I am trying to figure out how to create a Docker image from a CentOS Linux system I already have running. I would like to basically clone this existing system so I can port it to another cloud provider. I was able to create a Docker image from a base CentOS image, but I want to basically clone my existing system and use docker.io going forward.
Am I stuck with creating a base CentOS image from scratch and configuring it for Docker from there? This might be more of a VirtualBox/Vagrant thing, but I am interested in docker.io.
Looks like I need to start with a base CentOS image and create a Dockerfile with all the add-ons I need... Think I'm getting there now...
Cloning a system that is up and running is certainly not what Docker is intended for. Instead, Docker is meant to develop your OS and server installation together with the app or service, making DevOps even more DevOpsy. By starting with a clean CentOS image, you will be sure to install only what you need for the service, and have everything under control. You actually don't want all the other stuff that might produce incompatibilities. So, the answer here is that you definitely should approach the problem the other way around.
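In practice that means a Dockerfile that re-declares only what the service needs; a sketch for a CentOS base (the package list, paths, and start command are placeholders):
FROM centos:6
# install only what the service actually needs
RUN yum install -y httpd php php-mysql
# add the application code
COPY ./app /var/www/html
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]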
