I'm new to AWS Copilot and containers in general (frontend-focused engineer helping out with DevOps at my job) and am trying to figure out how to deploy multiple containers at once. The project includes a web container, deployed through Copilot as a Load Balanced Web Service, and a worker container, deployed as a Backend Service. The two containers share a code base, so when I update one, I need to update the other, especially when there is a database migration. When I run copilot deploy, however, it only gives me the option to choose one service. How would I synchronize the deployments?
You can deploy the two services in parallel by using Copilot's pipelines feature. Tons of info here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/ ... let us know if you have more questions not answered by the docs!
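If it helps, the heart of that setup is a small pipeline manifest that `copilot pipeline init` generates for you. A rough sketch (the repo URL, branch, and stage names below are placeholders, and the exact file path depends on your Copilot version):

```yaml
# copilot/pipelines/<name>/manifest.yml (placeholder path)
name: my-app-pipeline
version: 1
source:
  provider: GitHub
  properties:
    branch: main
    repository: https://github.com/your-org/your-repo
stages:
  - name: test
  - name: prod
    requires_approval: true
```

By default each stage deploys all the services in your workspace, so the Load Balanced Web Service and the Backend Service go out together on each push; `copilot pipeline deploy` (or `copilot pipeline update` on older versions) creates the underlying CodePipeline.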
Thanks!
I'm using React for the frontend with an Nginx load balancer, and Laravel for the backend with MongoDB.
In the old architecture, the code is pushed to GitHub in separate frontend and backend repos.
We still don't use Docker and Kubernetes; I want to introduce them in the new architecture. We use a private cloud server, so we are restricted from deploying on AWS/Azure/GCP/etc.
Please share your architecture plan and implementation ideas for a better approach to microservices!
My current thinking (roughly sketched below):
First, write a Dockerfile for the React project and one for the Laravel project,
then push the images to a private Docker registry (e.g. Docker Hub).
Install Docker and Kubernetes on the VM.
Deploy container 1 = React and container 2 = Laravel from those images,
and deploy container 3 = Nginx and container 4 = MongoDB from the stock public images.
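Roughly what I picture the React/Laravel deployment step looking like as a Kubernetes Deployment, using the Laravel one as an example; the image name, port, and labels here are made up, and I'm not sure this is right:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: laravel-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: laravel-api
  template:
    metadata:
      labels:
        app: laravel-api
    spec:
      containers:
        - name: laravel-api
          # image pushed to the private registry in the earlier step (placeholder name)
          image: registry.example.com/myapp/laravel-api:1.0.0
          ports:
            - containerPort: 9000
```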
Some of my questions:
How to make the connections between the containers?
How to pull a new image into a container when a new version is released?
How to set up replicas for a disaster recovery plan?
How to monitor errors and performance?
How to build the pipeline? (Most important.)
How to set up dev, staging, and production environments?
This is more of a planning question. Most of the tasks can be automated by the developer/DevOps engineer, except for a few administrative tasks like monitoring and environment creation.
Still, this can be a shared responsibility, or handled by a dedicated team if one is available to manage the product/services.
You can use GitLab, which can attach directly to a Kubernetes provider and cut out several manual build steps.
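For illustration, a .gitlab-ci.yml for one of the repos could look roughly like this; the registry URL, image name, and Deployment name are placeholders, and it assumes the runner is set up for Docker-in-Docker and has access to your cluster:

```yaml
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t registry.example.com/myapp/laravel-api:$CI_COMMIT_SHORT_SHA .
    - docker login -u "$REGISTRY_USER" -p "$REGISTRY_PASSWORD" registry.example.com
    - docker push registry.example.com/myapp/laravel-api:$CI_COMMIT_SHORT_SHA

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # roll the Deployment over to the freshly pushed image
    - kubectl set image deployment/laravel-api laravel-api=registry.example.com/myapp/laravel-api:$CI_COMMIT_SHORT_SHA
  environment: production
```

A pipeline along these lines also covers the "new release" question: every push builds a fresh image tag and tells the cluster to roll over to it.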
I have a web application consisting of a few services - web, DB and a job queue/worker. I host everything on a single Google VM and my deployment process is very simple and naive:
I manually install all services like the database on the VM
a bash script scheduled by crontab polls a remote git repository for changes every N minutes
if there were changes, it would simply restart all services (job queue, web, etc.) using supervisord
Now, I am starting a new web project where I enjoy using docker-compose for local development. However, I seem to be stuck in analysis paralysis deciding between the available options for production deployment: I have looked at Kubernetes, Swarm, docker-compose, container registries, and so on.
I am looking for a recipe that will keep me productive with a single machine deployment. Ideally, I should be able to scale it to multiple machines when the time comes, but simplicity and staying frugal (one machine) is more important for now. I want to consider 2 options - when the VM already exists and when a new bare VM can be allocated specifically for this application.
I wonder if docker-compose is a reasonable choice for a simple web application. Do people use it in production, and if so, what does the entire process look like from a bare VM to rolling out an updated application? Do people use Kubernetes or Swarm for a simple single-machine deployment, or is it overkill?
I wonder if docker-compose is a reasonable choice for a simple web application.
It can be, sure, if the development time is best spent focused on the web application and less on the non-web stuff such as the job queue and database. The other asterisk is whether the development environment works OK with hot reloads or port forwarding and that kind of jazz. I say it's a reasonable choice because 99% of the work of creating an application suitable for use in a clustered environment is the work of containerizing it. So if the app already works under docker-compose, there's a high likelihood you can take the Docker image that docker-compose builds and roll it out to the cluster.
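For a stack like yours (web, DB, job queue/worker), the compose file you would carry from development into whatever you deploy on might look roughly like this; Postgres and Redis are assumptions, as are the ports and the worker command:

```yaml
version: "3.8"
services:
  web:
    build: .
    ports:
      - "80:8000"
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      QUEUE_URL: redis://queue:6379/0
    depends_on:
      - db
      - queue
  worker:
    build: .
    command: ["python", "worker.py"]   # placeholder entrypoint for the job worker
    environment:
      DATABASE_URL: postgres://app:app@db:5432/app
      QUEUE_URL: redis://queue:6379/0
    depends_on:
      - db
      - queue
  queue:
    image: redis:7
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: app
      POSTGRES_DB: app
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```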
Do people use it in production
I hope not; I am sure there are people who use docker-compose to run in production, just like there are people that use Windows batch files to deploy, but don't be that person.
Do people use Kubernetes or Swarm for a simple single-machine deployment or is it overkill?
Similarly, don't be a person who deploys the entire application on a single virtual machine, or else be mentally prepared for one failure to wipe out everything that you value. That's part of what clustering technologies are designed to protect against: one mistake taking down the entirety of the application, web, queuing, and persistence all in one fell swoop.
Now whether deploying kubernetes for your situation is "overkill" or not depends on whether you get benefit from the other things kubernetes brings aside from mere scaling. We get benefit from developer empowerment, log aggregation, CPU and resource limits, the ability to take down one Node without introducing any drama, secrets management, configuration management, and using a small number of Nodes for a large number of hosted applications (unlike creating a single virtual machine per deployed application because the deployments have no discipline over the placement of config files or ports or whatever). I can keep going, because kubernetes is truly magical; but, as many people will point out, running a cluster successfully is not zero human cost.
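Two of those benefits, resource limits and secrets management, amount to only a few lines in a pod spec. A hypothetical fragment (names are placeholders, and the web-secrets Secret is assumed to already exist in the cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2   # placeholder image
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
          env:
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: web-secrets   # assumed to exist in the cluster
                  key: db-password
```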
Many companies I have worked with are shifting their entire production environment towards Kubernetes. That makes sense because all cloud providers are currently pushing Kubernetes, and we can be quite confident that Kubernetes is the future of cloud-based deployment. If your application is meant to run in any private or public cloud, I would personally choose Kubernetes as the operating platform for it. If you plan to add additional services, you will easily be able to connect them and scale your infrastructure with a growing number of requests to your application. However, if you already know that you do not expect to scale your application, it may be overkill to use a Kubernetes cluster to run it, although Google Cloud etc. make it fairly easy to set up such a cluster with a few clicks.
Regarding an automated development workflow for Kubernetes, you can take a look at my answer to this question: How to best utilize Kubernetes/minikube DNS for local development
I have a microservice with about 6 separate components.
I am looking to sell instances of this microservice to people who need dedicated versions of it for their project.
Docker seems to be the solution to doing this as easily as possible.
What is still very unclear to me: is it possible to use Docker to deploy whole instances of the microservice within a cloud service like GCP or AWS?
Or is this something more specific to the cloud provider itself?
Basically, in the end, I'd like to be able to, via code, start up a whole new instance of my microservice within its own network, with each component able to talk to the others.
One big problem I see is assigning IPs to the containers so that they will find each other, independent of which network they are in. Is this even possible, or is it not yet feasible with current cloud technology?
Thanks a lot in advance, I know this is a big one...
This is definitely feasible and is nowadays one of the most popular ways to ship and deploy applications. However, the deployment procedure varies slightly depending on the cloud provider you choose.
The good news is that packaging your microservices with Docker is independent of the cloud provider you use. You basically need to package each component in a Docker image and deploy those images to a cloud platform.
All popular cloud platforms nowadays support deploying Docker containers. In addition, you can use frameworks such as Docker Swarm or Kubernetes on these platforms to orchestrate the microservice deployment.
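As a sketch of the "dedicated instance per customer" idea on Kubernetes (all names here are made up): give each customer their own namespace, and let components find each other by Service name rather than by IP, which addresses the addressing concern in the question.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: customer-acme
---
apiVersion: v1
kind: Service
metadata:
  name: orders
  namespace: customer-acme
spec:
  selector:
    app: orders
  ports:
    - port: 80
      targetPort: 8080
# Other components in the same namespace reach this one at http://orders
# (or http://orders.customer-acme.svc.cluster.local from elsewhere in the
# cluster), so no fixed container IPs are ever assigned by hand.
```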
I’m using Amazon ECS to auto deploy my containers on uat/production.
What is the best way to do that?
I have a REST API with several front-end clients.
Should I package my API and nginx together in the same container?
And do the same thing for the other front-end clients?
Or do I have to write one big task definition to bring together all my containers (db, nginx, php, api, clients)? :( But that would mean redeploying my whole infrastructure on every push to uat/prod.
I'm very confused.
I would avoid including too much in a single container. Try and distill your containers down to one process doing one thing. If all you're doing is serving up a REST API for consumption by your front end, just put the essential pieces in for that and no more.
In my experience you also want your ECS tasks to be able to handle failure gracefully and restart, and the more complicated your containers are the harder this is to get right.
Depending on your requirements, I would look into using an ELB instead of nginx; you can have your ECS cluster point at an ELB and not have to deal with that piece at all.
Do not use ECS - it's too crude. I was using it as a platform for our staging/production environments and had odd problems during deployments - sometimes it worked well, sometimes not (with the same Docker images). ECS does not provide a clear model of container deployment and maintenance.
There is another good, stable, and predictable option - the Docker Cloud service. It's a newer tool (formerly Tutum) that was acquired by Docker. I switched our CI/CD to use it and we're happy with it.
Bind your Amazon user credentials to your Docker Cloud account. Docker Cloud uses the AWS (or other provider) API to create the appropriate compute instances.
Create a Node. Select the Amazon EC2 instance type and the storage, security group, and other parameters. The new instance will come with Docker installed and a management container that handles messages from Docker Cloud (deploy, destroy, and others).
Create a Stackfile, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/. A Stackfile is a definition of the group of containers you require. You can define different scaling/distribution models for your containers using specific Stackfile options like the deployment strategy, see https://docs.docker.com/docker-cloud/apps/stack-yaml-reference/#deployment-strategy-1; there's a sketch after these steps.
Define ELB configurations in AWS for your new instances.
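For illustration, a minimal Stackfile might look something like this; the image names, ports, and container counts are placeholders:

```yaml
api:
  image: 'myorg/api:latest'
  target_num_containers: 2
  deployment_strategy: high_availability   # spread the containers across nodes
lb:
  image: 'dockercloud/haproxy:latest'
  links:
    - api
  ports:
    - '80:80'
```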
P.S. I'm not a member of Docker team and I like other AWS services :).
Here are my two cents on the topic. The question is not really specific to ECS; it applies to anybody deploying their apps with Docker.
I would suggest separating the containers: one for nginx and one for the API.
If they need to be co-located on the same instance, on ECS you can define them as part of the same task, and on Kubernetes you can make them part of the same pod.
Define a Docker link between the nginx and the API containers. This allows the nginx process to talk to the API container without the API container exposing its ports to the host.
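Expressed as a docker-compose file just for illustration (image names and ports are placeholders; an ECS task definition or Kubernetes pod spec would carry the equivalent settings), the shape is:

```yaml
version: "3.8"
services:
  nginx:
    image: nginx:1.25
    ports:
      - "80:80"          # the only port published to the host
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - api
  api:
    image: myorg/api:latest   # placeholder image
    expose:
      - "3000"           # reachable from nginx as http://api:3000, not from the host
```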
One advantage of using container platforms such as Kubernetes and ECS is that they ensure each container stays running and dynamically restart it if one of the processes/containers goes down.
Separating the containers allows these platforms to monitor both processes separately. When you combine the two into one container, the Docker container can only run with one of the processes in the foreground, so you lose the auto-healing advantage for the other process.
Also, moving from nginx to an ELB is not always straightforward; you may have redirects and other things configured in nginx that are not available on an ELB (as of today).
If you also need an ELB, there is no harm in forwarding requests from the ELB to the nginx port.
The stack consists of a few applications/microservices that need to be connected to run locally in development, and each is in its own repository.
E.g. frontend, db, api
If each app has its own Dockerfile and a docker-compose.yml that lists the services required to run that one application, what practices are recommended for developing the whole stack?
This is exactly what we do at work.
Front end angular running on Apache (prod) or node (dev)
All bog-standard requests are handled normally; all requests to the API container have /imanapicall in the URL and are proxied to the API container based on that prefix.
This is standard practice. The FE container is stateless.
We have Node running the API; it is stateless and simply requests data from the DB and sends it back to the front end.
We have Node running Restify, but Express is more popular.
Most people use MongoDB, but we use some weird DB stuff.
The important thing is to expose ports between containers and make sure firewalls aren't being a pain. For dev purposes you probably also want to expose all containers' ports to the host so you can debug more easily, for example by hitting an Express endpoint directly to make sure it's giving you what you want.
PS: statelessness is important to support scaling, so I can introduce a load balancer and not worry about which server a request hits. Only the DB container holds state.
FURTHER TO YOUR COMMENT ...
Each tier (db, api, etc) has its own git repo.
Each git repo has an automated Jenkins job that does a build (on a push to the repo) and on success pushes a new docker image.
We then have another git repo that is responsible for pulling it all together. This repo basically consists of a docker-compose file that pulls all the relevant images and runs the containers.
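Something roughly like this, with the registry and image names made up:

```yaml
version: "3.8"
services:
  db:
    image: registry.example.com/myapp/db:latest
  api:
    image: registry.example.com/myapp/api:latest
    depends_on:
      - db
    ports:
      - "3000:3000"
  frontend:
    image: registry.example.com/myapp/frontend:latest
    depends_on:
      - api
    ports:
      - "80:80"
```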
job done.
This gives a simple overview. If you have any more detailed questions, feel free to ask.
FINE DETAIL ...
During development of the db tier no difficulties arise.
However, the api tier, for example, depends on the db tier, so a Docker container for the db tier will need to be running when you develop the api tier. You can use compose for this too.
The front end tier relies on both the db and api tiers.
It's best to use a generic approach during development that stands up all containers from their latest images and lets you ignore the ones that are irrelevant for your current purposes.
For example,
When developing the front end, bring up all three containers from the latest images. Ignore the front end container and use your front end development environment as usual, pointed at the api container.
Ignoring the containers that are irrelevant to the tier you are working on means you can use a common approach when bringing up Docker containers for all tiers, without needing a specific solution for each.
Hope this makes sense; it's not the easiest thing to explain!