Redis on Docker (Windows) - Custom Config File

I have a need to use an in-memory caching solution for scaled out application servers.
Redis seems like the best choice from my research.
I've used Linux before (Ubuntu), but I don't consider myself an expert at all.
So I'm considering just going with a Windows install.
The Windows port isn't official and seems to be abandoned.
I figured this would be a good time to learn all about Docker, as I've seen it grow in popularity over the past couple of years.
I have got to the point where I can install Docker and run some containers with no issue.
For redis, I can do the following from a command line with no issue:
docker run -p 6379:6379 redis
But this is needed for a production system, and I'm just not sure whether I'm following best practices or how I should supply my own config file. I also want to make sure I have high availability (master/slave).
Can anyone help me get on the right track?

There's going to be a performance overhead running Redis in Docker for Windows, since Docker for Windows runs inside a virtual machine and Redis then runs within that.
Also bear in mind that '-p 6379:6379' will expose Redis to the outside network, not just your host machine. Maybe that's what you want, maybe not.
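For the custom config and master/slave part of the question, here is a minimal sketch using the official redis image (the host path, container names and --link wiring are placeholders for illustration, not a production recipe):

# Mount a custom redis.conf from the host and point redis-server at it
docker run -d --name redis-master -p 6379:6379 -v /path/on/host/redis.conf:/usr/local/etc/redis/redis.conf:ro redis redis-server /usr/local/etc/redis/redis.conf

# Start a second container as a replica of the first (the pre-5.0 option name is slaveof)
docker run -d --name redis-slave --link redis-master:redis-master redis redis-server --slaveof redis-master 6379

For real production high availability you would normally look at Redis Sentinel or Redis Cluster on top of this, rather than a single master/slave pair.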

Related

Multiple images inside one container

So, here is the problem, I need to do some development and for that I need following packages:
MongoDb
NodeJs
Nginx
RabbitMq
Redis
One option is that I take a Ubuntu image, create a container and start installing them one by one and done, start my server, and expose the ports.
But this can easily be done in VirtualBox as well, and it would not be using the power of Docker. So instead I have to start building my own image with these packages. Now here is the question: if I start writing my Dockerfile and place the commands to download Node.js (and the others) inside it, this again becomes the same thing as virtualization.
What I want is to start from Ubuntu and keep adding references to MongoDB, NodeJS, RabbitMQ, Nginx and Redis inside the Dockerfile, and finally expose the respective ports.
Here are the queries I have:
Is this possible? That is, adding references to other images inside the Dockerfile when you are starting FROM one base image.
If yes then how?
Also is this the correct practice or not?
How do you do these kinds of things in Docker?
Thanks in advance.
Keep images light. Run one service per container. Use the official images on docker hub for mongodb, nodejs, rabbitmq, nginx etc. Extend them if needed. If you want to run everything in a fat container you might as well just use a VM.
You can of course do crazy stuff in a dev setup, but why spend time setting up something that has zero value in a production environment? What if you need to scale up one of the services? How do you set memory and CPU constraints on each service? ...and the list goes on.
Don't make monolithic containers.
A good start is to use docker-compose to configure a set of services that can talk to each other. You can make a prod and dev version of your docker-compose.yml file.
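As a rough sketch of what that could look like (the service names, images and environment variable are illustrative, not a definitive setup), a docker-compose.yml might be:

version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    depends_on:
      - app
  app:
    build: .                           # your Node.js app, built from your own Dockerfile
    environment:
      - MONGO_URL=mongodb://mongo:27017/app
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db            # keep data outside the container
  rabbitmq:
    image: rabbitmq
  redis:
    image: redis
volumes:
  mongo-data:

A single docker-compose up then starts all of the services on a shared network, each in its own container.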
Getting into the right frame of mind
In a perfect world you would run your containers in a clustered environment in production, to be able to scale your system and have concurrency, but that might be overkill depending on what you are running. It's at least good to have this in the back of your head, because it can help you make the right decisions.
Some points to think about if you want to be a purist:
How do you get persistent volume storage across multiple hosts?
A reverse proxy / load balancer should probably be the entry point into the system, talking to the containers over the internal network.
Is my service even able to run in a clustered environment (multiple instances of the container)?
You can of course do dirty things in dev such as mapping in host volumes for persistent storage (and many people who use docker standalone in prod do that as well).
Ideally we should separate Docker in dev and Docker in prod. Docker is a fantastic tool during development, as you can have redis, memcached, postgres, mongodb, rabbitmq, node or whatnot up and running in minutes and share that compose setup with the rest of the team. Docker in prod can be a completely different beast.
I would also like to add that I'm generally against the fanaticism that "everything should be running in docker" in prod. Run services in docker when it makes sense. It's also not uncommon for larger companies to make their own base images. This can be a lot of work and will require maintenance to keep up with security fixes etc. It's not necessarily the first thing you jump on when starting with docker.

Advantages of Docker layer over plain OS

I'm not really good at administrative tasks. I need a couple of Tomcat, LAMP and node.js servers behind nginx. To me it seems really complicated to set all of that up on the system directly, so I'm thinking about containerizing the servers: install Docker and create an nginx container, a node.js container, and so on.
I expect it to be easier to manage; only the routing to the front nginx may be a bit of a hassle. It will also make it easy to back up, add servers, etc., not to forget remote deployment and management, and the repeatability of the server setup. Separation will probably also shield me from the recurring problem of completely breaking the server by changing some init script, screwing up some app server setup, and so on.
Is my expectation correct that Docker will abstract me a little more from the "raw" system administration?
Side question: is there an administrative GUI somewhere that I can run to easily deploy, start/stop and interconnect the containers?
UPDATE
I found nice note here
By containerizing Nginx, we cut down on our sysadmin overhead. We will no longer need to manage Nginx through a package manager or build it from source. The Docker container allows us to simply replace the whole container when a new version of Nginx is released. We only need to maintain the Nginx configuration file and our content.
Yes, Docker will do this for you, but that does not mean you will no longer administer the OS for the services you run.
It's more that Docker simplifies that management, because you:
do not need to pick one specific OS for all of your services, which would otherwise force you to side-install a service because it has not been released for the OS of your choice, or leave you with the wrong version, and so on. Instead, Docker lets you pick the right OS or OS version (Debian wheezy, jessie, or Ubuntu 12.x, 14.x, 16.x) for the service in question - or even Alpine.
Also, Docker offers you pre-made images, so you do not need to build the image for nginx, mysql, nodejs and so on yourself. You find those on https://hub.docker.com
Docker makes it very easy and convenient to remove a service again, without littering your system over time.
Docker gives you better "mobility": you can easily move the stack or replicate it on a different host - you do not need to reconfigure the host and hope it ends up "the same".
With Docker you do not need to think about the convergence of containers during their lifetime or across stack improvements, since they are recreated from the image again and again - from scratch, no convergence.
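As a concrete illustration of the "replace the whole container" idea from the quoted note (the host paths are assumptions):

# Only the config and content live on the host; the nginx binary comes from the image
docker run -d --name web -p 80:80 -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -v /srv/nginx/html:/usr/share/nginx/html:ro nginx

# When a new nginx version is released: pull, throw the old container away, recreate it
docker pull nginx
docker rm -f web
docker run -d --name web -p 80:80 -v /srv/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -v /srv/nginx/html:/usr/share/nginx/html:ro nginx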
But Docker also has cons:
It adds complexity, since you might end up running "more microservices". You might need service discovery and a live configuration system, and you need to understand the storage system (volumes) quite a bit.
Docker does not "remove" the OS layer, it just makes it simpler. You still need to maintain it.
Volumes in general might not feel as simple as local file storage (it depends on what you choose).
GUI
I think the thing that most closely matches what you'd define as a "GUI" is Rancher (http://rancher.com/) - it's more than a GUI, it's a complete Docker server management stack. Steep learning curve at first, a lot of gain afterwards.
You will still need to manage the docker host OS. Operations like:
Adding Disks from time to time.
Security Updates
Rotating Logs
Managing Firewall
Monitoring via SNMP/etc
NTP
Backups
...
Docker Advantages:
Rapid application deployment
Portability across machines
Version control and component reuse
Lightweight footprint and minimal overhead
Simplified maintenance
...
Docker Disadvantages:
Adds Complexity (Design, Implementation, Administration)
GUI tools available, some of them are:
Kitematic -> windows/mac
Panamax
Lorry.io
docker ui
...
Recommendation: start learning the Docker CLI, as the GUI tools don't have all the nifty CLI features.
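To follow up on that recommendation, a handful of everyday CLI commands worth learning first (the container name "web" is just a placeholder):

docker ps -a                       # list containers, including stopped ones
docker logs -f web                 # follow the logs of a container named "web"
docker exec -it web bash           # open a shell inside a running container
docker stats                       # live CPU/memory usage per container
docker stop web && docker rm web   # stop and remove a container
docker images                      # list local images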

How can Docker be used for a multi-layered application?

I am trying to understand how Docker can be used to dockerize a multi-layered application.
My Tomcat application needs mongodb, mysql, redis, solr and rabbitmq. I have been playing with Docker for a couple of weeks now. I am able to install and use mongo/mysql containers, but I am not getting how I can completely ship the application using Docker. I have a few questions.
How should the images be organised? Should I have one image that has all the components installed, or separate images (like one for tomcat, one for mongo, one for mysql, etc.) and start those containers using a bash script outside of Docker?
What is the Docker way of maintaining multiple containers at once? Meaning, say I have multiple containers (mongo, mysql, tomcat, etc.) that need to work together to run my application - is there a built-in way of dealing with this so that one command/script does it all?
Suppose I dockerize my application: how can I manage the various routine tasks that need to be performed, like incremental code deployment, database patches, etc.? Currently we are using Vagrant, and we also use Fabric along with Vagrant for various tasks. For example, after vagrant up we use fab tasks for all kinds of routine things like code deployment, db refresh, adding volumes, starting/stopping services, etc. What would be Docker's way of doing this?
With Vagrant, if a VM crashes due to high CPU etc., the host system is not affected. But I see Docker eating up a lot of host resources. Can we put limits on that, say no more than one CPU core for a given container, etc.?
Because we use Vagrant, most of the questions above are in that context. When I started with Docker I thought of it as a kind of virtualization technology that could replace our huge Vagrant-based infra. Please correct me if I am wrong.
I advise you to look at docker-compose:
you'll be able to define an architecture of your application
you can then easily build it and run it (with one command)
pretty much same setup for dev and prod
For microservices, composition etc. I won't repeat what has already been posted here.
For container resource allocation:
docker run has various resource control options (backed by Linux cgroups); see my gist here:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
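For instance, a sketch of constraining a container at run time (the values and image are arbitrary; --cpus needs a reasonably recent Docker, older releases use --cpuset-cpus/--cpu-shares instead):

# Cap the container at one CPU core and 512 MB of RAM
docker run -d --name app --cpus="1.0" --memory="512m" tomcat

# On older Docker versions: pin to core 0 and give a relative CPU weight instead
docker run -d --name app --cpuset-cpus="0" --cpu-shares=512 --memory="512m" tomcat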

Is it possible/sane to develop within a Docker container?

I'm new to Docker and was wondering if it was possible (and a good idea) to develop within a docker container.
I mean create a container, execute bash, install and configure everything I need, and start developing inside the container.
The container becomes then my main machine (for CLI related works).
When I'm on the go (or when I buy a new machine), I can just push the container, and pull it on my laptop.
This solves the problem of having to keep and synchronize your dotfiles.
I haven't started using Docker yet, so is this realistic, or something to avoid (disk space problems and/or push/pull timing issues)?
Yes. It is a good idea, with the correct set-up. You'll be running code as if it was a virtual machine.
The Dockerfile configuration for creating a build system is not polished and will not expand shell variables, so pre-installing applications may be a bit tedious. On the other hand, after building your own image with the users and working environment you need, it won't be necessary to build it again. Plus, you can mount your own file system with the -v parameter of the run command, so you have the files you need both on your host and in the container. It's versatile.
sudo docker run -t -i -v /home/user_name/Workspace/project:/home/user_name/Workspace/myproject <image-name>
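For the "build your own image with new users and a working environment" part, a minimal Dockerfile sketch could look like this (the base image, packages and user name are only examples):

FROM ubuntu:16.04

# Basic development tooling
RUN apt-get update && apt-get install -y \
    build-essential git vim curl \
    && rm -rf /var/lib/apt/lists/*

# Non-root user matching the one on the host
RUN useradd -m -s /bin/bash user_name
USER user_name
WORKDIR /home/user_name/Workspace

CMD ["bash"]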
I'll play the contrarian and say it's a bad idea. I've done work where I've tried to keep a container "long running" and have modified it, but then accidentally lost it or deleted it.
In my opinion containers aren't meant to be long running VMs. They are just meant to be instances of an image. Start it, stop it, kill it, start it again.
As Alex mentioned, it's certainly possible, but in my opinion goes against the "Docker" way.
I'd rather use VirtualBox and Vagrant to create VMs to develop in.
Docker containers for development can be very handy. Depending on your stack and preferred IDE, you might want to keep the editing part outside, on the host, and instead mount the directory with the sources from the host into the container, as per Alex's suggestion. If you do so, beware of potential performance issues on Mac OS X with boot2docker.
I would not expect much from a workflow of pushing images around to sync between dev environments. IMHO keeping Dockerfiles together with the code and syncing via SCM is a more straightforward direction to start with. I also keep supporting Makefiles to build the image(s) / run the container(s) in the same place.

What would be a good docker webdev workflow?

I have a hunch that docker could greatly improve my webdev workflow - but I haven't quite managed to wrap my head around how to approach a project adding docker to the stack.
The basic software stack would look like this:
Software
Docker image(s) providing custom LAMP stack
Apache with several modules
MYSQL
PHP
Some CMS, e.g. Silverstripe
GIT
Workflow
I could imagine the workflow to look somewhat like the following:
Development
Write a Dockerfile that defines a LAMP-container meeting the requirements stated above
REQ: The machine should start apache/mysql right after booting
Build the docker image
Copy the files required to run the CMS into e.g. ~/dev/cmsdir
Put ~/dev/cmsdir/ under version control
Run the docker container, and somehow mount ~/dev/cmsdir to /var/www/ on the container
Populate the database
Do work in /dev/cmsdir/
Commit & shut down docker container
Deployment
Set up remote host (e.g. with ansible)
Push container image to remote host
Fetch cmsdir-project via git
Run the docker container, pull in the database and mount cmsdir into /var/www
Now, this looks all quite nice on paper, BUT I am not quite sure whether this would be the right approach at all.
Questions:
While developing locally, how would I get the database to persist between reboots of the container instance? Or would I need to run sql-dump every time before spinning down the container?
Should I have separate container instances for the db and the apache server? Or would it be sufficient to have a single container for above use case?
If using separate containers for database and server, how could I automate spinning them up and down at the same time?
How would I actually mount /dev/cmsdir/ into the containers /var/www/-directory? Should I utilize data-volumes for this?
Did I miss any pitfalls? Anything that could be simplified?
If you need database persistence independent of your CMS container, you can use one container for MySQL and one container for your CMS. That way your MySQL container can keep running while you redeploy your CMS as often as you want, independently.
For development, another option is to map the MySQL data directories from your host/development machine using data volumes. This way you can manage the data files for MySQL (in Docker) using git (on the host) and "reload" the initial state any time you want (before starting the MySQL container).
Yes, I think you should have a separate container for db.
I am using just a basic script:
#!/bin/bash
JOB1=$(docker run ... /usr/sbin/mysqld)
JOB2=$(docker run ... /usr/sbin/apache2)
echo "MySql=$JOB1, Apache=$JOB2"
Yes, you can use data volumes (the -v switch). I would use this for development. You can mount them read-only, so no changes will be made to the directory if you don't want them (your app should store its data somewhere else anyway).
docker run -v /home/user/dev/cmsdir:/var/www/cmsdir:ro image /usr/sbin/apache2
Anyway, for the final deployment I would build an image using a Dockerfile with ADD cmsdir /var/www/cmsdir (the source path has to live inside the build context).
I don't know :-)
You want to use docker-compose. Follow the tutorial here. Very simple. Seems to tick all your boxes.
https://docs.docker.com/compose/
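A rough docker-compose.yml sketch along those lines (image versions, paths and credentials are illustrative assumptions, not the tutorial's):

version: '2'
services:
  db:
    image: mysql:5.6
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=cms
    volumes:
      - db-data:/var/lib/mysql        # survives container rebuilds
  web:
    build: .                          # your Apache/PHP image
    ports:
      - "80:80"
    volumes:
      - ./cmsdir:/var/www:ro          # mount the CMS sources from the host
    depends_on:
      - db
volumes:
  db-data:

With that in place, docker-compose up starts the db and web containers together and docker-compose stop / docker-compose down takes them down again, which covers question 3, and the named db-data volume covers the persistence question.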
I understand this post is over a year old at this time, but I have recently asked myself very similar questions and have several great answers to your questions.
You can set up a MySQL Docker instance and have the data persist in a stateless data container, i.e. the data container does not need to be actively running.
Yes, I would recommend having separate instances for your web server and your database. This is the power of Docker.
Check out this repo I have been building. Basically it is as simple as make build & make run and you can have a web server and database container running locally.
You use the -v argument when running the container for the first time; this will bind a specific folder on the host into the container.
I think your ideas are great and it is currently possible to achieve all that you are asking.
Here is a turn key solution achieving all of the needs you have listed.
I've put together an easy to use docker compose setup that should match your development workflow requirements.
https://github.com/ehyland/docker-silverstripe-dev
Main Features
Persistent DB
Your choice of HHVM + NGINX or Apache2 + PHP5
Debug and set breakpoints with xDebug
The README.md should be clear enough to get you started.
