How to develop applications with Docker when each app is in a separate repository - docker

The stack consists of a few applications/microservices that need to be connected to run locally in development, and each is in its own repository
E.g. frontend, db, api
If each app has its own Dockerfile and a docker-compose.yml that lists the required services to run that one application, what practices are recommended for developing the whole stack?

This is exactly what we do at work.
Front end: Angular running on Apache (prod) or Node (dev)
All bog-standard requests are handled normally; any request whose URL contains /imanapicall is proxied to the API container.
This is standard practice. The FE container is stateless.
We have Node running the API; it is stateless and simply requests data from the DB and sends it back to the front end.
We have Node running Restify, though Express is more popular.
Most people use MongoDB, but we use some more unusual DB stuff.
The important thing is to expose ports between containers and make sure firewalls aren't being a pain. For dev purposes you probably want to expose ports for all containers to the host as well, so you can debug more easily by, for example, hitting an Express endpoint directly to make sure it's returning what you want.
PS: statelessness is important to support scaling, so you can introduce a load balancer and not worry about which server a request hits. Only the DB container holds state.
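As a rough sketch of that layout for local development (image names, ports and the Mongo stand-in are placeholders I've assumed, not the actual setup described above), a compose file that also publishes every port to the host for easier debugging might look like:

# Hypothetical sketch of the fe/api/db layout for local development.
services:
  fe:
    image: myorg/fe:latest        # Angular served by Node in dev (placeholder image name)
    ports:
      - "4200:4200"               # published to the host so you can hit it directly
  api:
    image: myorg/api:latest       # Node/Restify API (placeholder image name)
    ports:
      - "3000:3000"               # published so you can hit an endpoint directly while debugging
    environment:
      DB_HOST: db                 # containers reach each other by service name on the compose network
  db:
    image: mongo:4                # stand-in image; the answer above uses a different DB
    ports:
      - "27017:27017"             # optional: expose to the host for dev inspection only

In production you would typically publish only the front-end/proxy port and keep the API and DB on the internal network.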
FURTHER TO YOUR COMMENT ...
Each tier (db, api, etc) has its own git repo.
Each git repo has an automated Jenkins job that does a build (on a push to the repo) and on success pushes a new docker image.
We then have another git repo that is responsible for pulling it all together. This repo basically consists of a Docker Compose file that pulls all the relevant containers and runs them.
Job done.
This gives a simple overview. If you have any more detailed questions, feel free to ask.
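A minimal sketch of what that aggregator repo's compose file might contain, assuming the Jenkins jobs push images to a registry under hypothetical names and tags:

# Hypothetical aggregator compose file; registry, image names and tags are placeholders.
services:
  db:
    image: registry.example.com/myorg/db:latest
  api:
    image: registry.example.com/myorg/api:latest
    depends_on:
      - db
  fe:
    image: registry.example.com/myorg/fe:latest
    depends_on:
      - api
    ports:
      - "80:80"                   # only the front end is published to the host

Running docker-compose pull followed by docker-compose up -d in that repo brings the whole stack up from the latest pushed images, which is also what the FINE DETAIL section below relies on.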
FINE DETAIL ...
During development of the DB tier, no difficulties arise.
However, the API tier, for example, depends on the DB tier, so a Docker container for the DB tier needs to be running while you develop the API tier. You can use Compose for this as well.
The front end tier relies on both the DB and API tiers.
It's best to use a generic solution during development that allows all containers to be stood up in their latest Docker image state, ignoring those that are irrelevant for current purposes.
For example,
When developing the front end, bring up all 3 containers from the latest images. Ignore the front end container and use your front end development environment as usual. Point the front end development environment to the API container. Hope this makes sense.
Ignoring the containers that are irrelevant for the development of the tier you are working on means you can use a common approach when bringing up Docker containers for all tiers, without having a specific solution for each.
Hope this makes sense; it's not the easiest thing to explain!

Related

Why are multi-container Docker apps built?

Can somebody explain it with some examples? Why are multi-container Docker apps built, when you can contain your whole app in a single Docker container?
When you make a multi-container app, you have to deal with networking. Isn't it easier to run a single image in a single container rather than two images in two containers?
There are several good reasons for this:
It's easier to reuse prebuilt images. If you need MySQL, or Redis, or an Nginx reverse proxy, these all exist as standard images on Docker Hub, and you can just include them in a multi-container Docker Compose setup. If you tried to put them into a single image, you'd have to install and configure them yourself.
The Docker tooling is built for single-purpose containers. If you want the logs of a multi-process container, docker logs will generally print out the supervisord logs, which aren't what you want; if you want to restart one of those containers, the docker stop; docker rm; docker run sequence will delete the whole thing. Instead with a multi-process container you need to use debugging tools like docker exec to do anything, which is harder to manage.
You can upgrade one part without affecting the rest. Upgrading the code in a container usually involves building a new image, stopping and deleting the old container, and running a new container from the new image. The "deleting the old container" part is important, and routine; if you need to delete your database to upgrade your application, you risk losing data.
You can scale one part without affecting the rest. More applicable in a cluster environment like Docker Swarm or Kubernetes. If your application is overloaded (especially in production) you'd like to run multiple copies of it, but it's hard to run multiple copies of a standard relational database. That essentially requires you to run these separately, so you can run one proxy, five application servers, and one database.
Setting up a multi-container application shouldn't be especially difficult; the easiest way is to use Docker Compose, which will deal with things like creating a network for you.
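As an illustration, a minimal Compose file (the application image name is a placeholder, and the nginx service would still need a proxy config mounted in) that reuses standard Hub images alongside your own application:

# Hypothetical example: one application plus off-the-shelf images from Docker Hub.
services:
  app:
    image: myorg/app:latest       # your own application image (placeholder name)
    environment:
      REDIS_HOST: redis           # other services are reachable by their service names
      MYSQL_HOST: mysql
  redis:
    image: redis:6
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  proxy:
    image: nginx:1.21             # reverse proxy in front of the app (proxy config not shown)
    ports:
      - "80:80"

Compose puts all four services on one shared network, and docker logs, restarts, and upgrades then work per service rather than per monolith.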
For the sake of simplicity, I would say you should run only one application with a public entry point (like an API) in a single container. This approach is in fact recommended by the official Docker documentation.
Microservices
Because of this constraint, you cannot run microservices that each require their own entry point inside a single Docker container.
That is really more a discussion of the advantages of a monolithic application vs. microservices.
Database
Even if you decide to run only a monolithic application, you still need to connect some database to it. As you noticed, Docker has an additional network-configuration layer, so if you want to run the database and the application locally, the easiest way is to use docker-compose to run both images (the database and your application) inside one automatically configured network.
# Application definition
application: <your app definition>
# Database definition
database:
  image: mysql:5.7
In my example, you can just connect to your DB at database:<port> from your main app (plus credentials, eventually) and it will work.
Scalability
However, why should we split the database image from the application image? In one word - scalability. For development purposes, you want a local DB, perhaps with Docker because it is handy. For production purposes, you will put the application image somewhere to run (Kubernetes, Docker Swarm, Azure App Services, etc.). To handle multiple requests at the same time, you want to run multiple instances of your application. But what about the database? You cannot connect to an internal instance of the DB hosted in the same container, because the other instances of your app in other containers would each have a completely different set of data (with no synchronization).
Most often you will elect to use a separate database server - whether running in a container or as a fully managed database (like Azure Cosmos DB or MongoDB Atlas) - with configuration, scaling, and synchronization dedicated to the DB only. Your app then just needs the proper URL for it. Most cloud providers expose such services out of the box, so you don't have to handle the configuration yourself.
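As a sketch of that split (Swarm-style deploy keys; the application image name is a placeholder), you scale only the stateless application and keep a single database service:

# Hypothetical sketch: multiple stateless app replicas, one shared database.
services:
  application:
    image: myorg/app:latest       # placeholder image name
    deploy:
      replicas: 5                 # honoured by "docker stack deploy" on Swarm
    environment:
      DB_HOST: database           # every replica talks to the same database service
  database:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    deploy:
      replicas: 1                 # state lives in one place
    volumes:
      - dbdata:/var/lib/mysql     # persist the data outside the container
volumes:
  dbdata: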
Easy to change
The last but not least argument is about changing the initial setup over time. You might change the database provider or upgrade the image version in the future (such things are required from time to time). When you separate images, you can modify one without touching the others. This decreases the cost of maintenance significantly.
Also, you can add additional services very easily - a different logging aggregator? No problem. An additional microservice running out of the box? Easy.

How can I use Docker Hub for .Net Core projects despite a US-sanctions block?

I am from Iran. Because of US sanctions it is very hard to use Docker on my server. But we really need microservices; as time goes on our project is getting bigger and bigger, and we need something to manage the complexity.
I can't connect to Docker Hub from my server in Iran, so I need to set up a proxy every time I want to pull an image from Docker Hub. During that time my server does not respond to users. It is funny that one of the reasons I want to modernize the system (with .NET Core, microservices, Docker, ...) is to avoid server issues like being down or unresponsive.
Could I solve this by looking at alternatives to Docker in .NET Core?
docker != microservice.
Docker helps you deploy multiple services on an orchestrator (e.g. Kubernetes), but you can also deploy your monolith in a single Docker container...
Depending on where you want to deploy your application, you can use a framework/programming model like Azure Service Fabric, or you can just create multiple ASP.NET Core web apps that represent your microservices and deploy them to IIS. In the case of the latter, you probably want some kind of API gateway in place so the client (your MVC application) doesn't need to know each endpoint URL.
The solution to my problem was to use Docker together with the Docker Registry (the open-source, self-hosted counterpart of Docker Hub); both are open source. This works around the sanctions limitation.
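A minimal sketch of running the open-source registry yourself (the port and volume path are just example values):

# Hypothetical sketch: self-hosted Docker Registry as a local image store.
services:
  registry:
    image: registry:2             # the open-source registry image
    ports:
      - "5000:5000"
    volumes:
      - ./registry-data:/var/lib/registry   # persist pushed images on the host

Application images can then be tagged and pushed as, e.g., myserver:5000/myapi, so day-to-day pulls stay inside your own network.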

Automated deployment of a dockerized application on a single machine

I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:
I manually install all services like the database on the VM
a bash script scheduled by crontab polls a remote git repository for changes every N minutes
if there were changes, it would simply restart all services using supervisord (job queue, web, etc)
Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to be stuck in analysis paralysis deciding between the available options for production deployment - I have looked at Kubernetes, Swarm, docker-compose, container registries, etc.
I am looking for a recipe that will keep me productive with a single-machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) are more important for now. I want to consider 2 options - when the VM already exists and when a new bare VM can be allocated specifically for this application.
I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production, and if so, what does the entire process look like from bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment, or is it overkill?
I wonder if docker-compose is a reasonable choice for a simple web application.
It can be, sure, if the development time is best spent focused on the web application and less on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works OK with hot reloads or port forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the work of creating an application suitable for use in a clustered environment is the work of containerizing the application. So if the app already works under docker-compose, there is a high likelihood that you can take the Docker image that is constructed on behalf of docker-compose and roll it out to the cluster.
Do people use it in production
I hope not; I am sure there are people who use docker-compose to run in production, just like there are people that use Windows batch files to deploy, but don't be that person.
Do people use Kubernetes or Swarm for a simple single-machine deployment, or is it overkill?
Similarly, don't be a person that deploys the entire application on a single virtual machine or be mentally prepared for one failure to wipe out everything that you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application, web, queuing, and persistence all in one fell swoop.
Now whether deploying kubernetes for your situation is "overkill" or not depends on whether you get benefit from the other things that kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, using a small number of Nodes for a large number of hosted applications (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config file or ports or whatever). I can keep going, because kubernetes is truly magical; but, as many people will point out, it is not zero human cost to successfully run a cluster.
Many companies I have worked with are shifting their entire production environment towards Kubernetes. That makes sense because all cloud providers are currently pushing Kubernetes, and we can be quite positive that Kubernetes is the future of cloud-based deployment. If your application is meant to run in any private or public cloud, I would personally choose Kubernetes as the operating platform for it. If you plan to add additional services, you will easily be able to connect them and scale your infrastructure with a growing number of requests to your application. However, if you already know that you do not expect to scale your application, a Kubernetes cluster may be overpowered for running it, although Google Cloud etc. make it fairly easy to set up such a cluster with a few clicks.
Regarding an automated development workflow for Kubernetes, you can take a look at my answer to this question: How to best utilize Kubernetes/minikube DNS for local development

Multiple images inside one container

So here is the problem: I need to do some development, and for that I need the following packages:
MongoDb
NodeJs
Nginx
RabbitMq
Redis
One option is to take an Ubuntu image, create a container, install them one by one, start my server, and expose the ports.
But this could just as easily be done in a virtual machine, and it would not use the power of Docker. So instead I would have to start building my own image with these packages. Now here is the question: if I start writing my Dockerfile and place the commands to install Node.js (and the others) inside it, this again becomes the same thing as virtualization.
What I want is to start from Ubuntu and keep adding the references to MongoDB, Node.js, RabbitMQ, Nginx, and Redis inside the Dockerfile, and finally expose the respective ports.
Here are the queries I have:
Is this possible? Like adding the references of other images inside the Dockerfile when you are starting FROM one base image.
If yes, then how?
Also, is this the correct practice or not?
How do you do this kind of thing in Docker?
Thanks in advance.
Keep images light. Run one service per container. Use the official images on Docker Hub for MongoDB, Node.js, RabbitMQ, Nginx, etc. Extend them if needed. If you want to run everything in one fat container you might as well just use a VM.
You can of course do crazy stuff in a dev setup, but why spend time setting up something that has zero value in a production environment? What if you need to scale up one of the services? How do you set memory and CPU constraints on each service? ...and the list goes on.
Don't make monolithic containers.
A good start is to use docker-compose to configure a set of services that can talk to each other. You can make a prod and dev version of your docker-compose.yml file.
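A sketch of that split, using the packages from the question (the application image name and paths are placeholders; the override-file mechanism is standard Compose behaviour):

# docker-compose.yml - base definition shared by dev and prod (app image name is a placeholder)
services:
  app:
    image: myorg/node-app:latest
  nginx:
    image: nginx:1.21
    ports:
      - "80:80"
  mongo:
    image: mongo:4
  rabbitmq:
    image: rabbitmq:3-management
  redis:
    image: redis:6

# docker-compose.override.yml - dev-only additions, picked up automatically by "docker-compose up"
services:
  app:
    build: .                      # build from local source instead of pulling (assumes a Dockerfile here)
    volumes:
      - ./src:/usr/src/app        # host volume for live code changes
  mongo:
    ports:
      - "27017:27017"             # expose the DB to the host for debugging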
Getting into the right frame of mind
In a perfect world you would run your containers in a clustered environment in production to be able to scale your system and have concurrency, but that might be overkill depending on what you are running. It's at least good to have this in the back of your head because it can help you make the right decisions.
Some points to think about if you want to be a purist:
How do you get persistent volume storage across multiple hosts?
A reverse proxy / load balancer should probably be the entry point into the system and talk to the containers over the internal network.
Is my service even able to run in a clustered environment (multiple instances of the container)?
You can of course do dirty things in dev such as mapping in host volumes for persistent storage (and many people who use docker standalone in prod do that as well).
Ideally we should separate Docker in dev and Docker in prod. Docker is a fantastic tool during development, as you can have Redis, Memcached, Postgres, MongoDB, RabbitMQ, Node or whatnot up and running in minutes, and share that compose setup with the rest of the team. Docker in prod can be a completely different beast.
I would also like to add that I'm generally against the fanaticism that "everything should be running in docker" in prod. Run services in docker when it makes sense. It's also not uncommon for larger companies to make their own base images. This can be a lot of work and will require maintenance to keep up with security fixes etc. It's not necessarily the first thing you jump on when starting with docker.

Container delivery on amazon ecs

I'm using Amazon ECS to auto-deploy my containers to uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx together in the same container?
And do the same thing with the other front-end clients?
Or do I have to write a big task definition to bring together all my containers (db, nginx, php, api, clients) :( - but that would mean redeploying my whole infrastructure on every push to uat/prod.
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx; you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
Do not use ECS - it's too crude. I was using it as the platform for our staging/production environments and had odd problems during deployments - sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option - the Docker Cloud service. It's a newer tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a node. Select the Amazon EC2 instance type and the storage, security group, and other parameters. The new instance will contain the installed Docker software and a management container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the container group you require. You can define different scaling/distribution models for your containers using specific Stackfile options such as the deployment strategy; see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1.
Define ELB configurations in AWS for your new instances.
P.S. I'm not a member of Docker team and I like other AWS services :).
Here are my two cents on the topic. The question is not really specific to ECS; it applies to anybody deploying their apps on Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link (or, nowadays, a shared Docker network) between the nginx and API containers. This allows the nginx process to talk to the API container without the API container exposing its ports to the host.
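A small compose sketch of that split (image names are placeholders): on a shared network the nginx container reaches the API by service name, and only nginx publishes a host port.

# Hypothetical sketch: nginx fronting the API; only nginx is reachable from outside.
services:
  nginx:
    image: myorg/nginx-proxy:latest   # nginx configured to proxy_pass to http://api:3000 (config not shown)
    ports:
      - "80:80"                       # the only published port
    depends_on:
      - api
  api:
    image: myorg/api:latest           # no "ports:" entry - reachable only on the internal network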
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container keeps running, and they restart it dynamically if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you lose the advantage of auto-healing for the other process.
Also, moving from nginx to ELB is not a straightforward swap; you may have redirects and other things configured in nginx which are not available on ELB (as of this writing).
If you also need the ELB, there is no harm in forwarding the requests from the ELB to the nginx port.

Resources