My question concerns a Kubernetes setup for development purposes. The app consists of 4 services (a React front end, an Express backend, Nginx for routing, and Neo4j as the database). The Neo4j database is deployed in Google Cloud (not by me, and it is not maintained by me either), but all the other services currently run locally while I develop the app. What I want to achieve is to start up and run all of those services at once with a single simple command, as is possible in the docker-compose world (through docker-compose up).
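The closest I have gotten so far is to keep all the manifests in one folder and tie them together with kustomize, so that a single kubectl apply -k . brings everything up, similar to docker-compose up. A rough sketch (the file names are just placeholders I made up):

    # kustomization.yaml - one entry point for the whole dev stack
    apiVersion: kustomize.config.k8s.io/v1beta1
    kind: Kustomization
    resources:
      - react-deployment.yaml
      - express-deployment.yaml
      - nginx-deployment.yaml
      - neo4j-external-service.yaml  # Service pointing at the Neo4j instance in Google Cloud

Is that the idiomatic way to do it, or is there something closer to docker-compose for local development?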
Related
I need to deploy a Django REST Framework application using Docker and Git, with Nginx, on AWS Lightsail. Could I get a general guideline on how to start the process and what the next steps are, or any tutorials to learn from?
For anyone still looking for advice:
There are two main options to choose from for deploying your dockerized Django application to AWS Lightsail:
A regular Lightsail VPS. I found this tutorial very helpful for setting up a dockerized Django application there (a minimal compose sketch is at the end of this answer).
A Lightsail Container Service. Setting up your Django Docker application with the Lightsail Container Service seems relatively straightforward if you follow their instructions.
There are, however, a few points to take into consideration when choosing between the two options:
Setting up and getting your containers to run is easier on the Lightsail Container Service, since you don't have to set up your VM to run Docker and its tooling.
The Lightsail Container Service itself also makes it easy to scale up to multiple nodes, instead of having to scale up your VM(s).
The only big downside is that the Lightsail Container Service is considerably more expensive. You can read more about why people think this is the case here:
Reddit - Lightsail containers vs VPS
Serverfault
Good luck to anyone starting out with Lightsail, Docker and Django :)
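As an illustration of the VPS route, everything usually ends up in one docker-compose.yml on the instance. A minimal sketch, assuming a Django project called myproject served by gunicorn behind Nginx (the names and paths are mine, not from the tutorial):

    version: "3"
    services:
      web:
        build: .
        command: gunicorn myproject.wsgi:application --bind 0.0.0.0:8000
        env_file: .env
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro   # proxies to web:8000
        depends_on:
          - web

On the Container Service you skip all of this on the VM side and only describe the container images and their public endpoint.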
I have a SPA dockerized with a single Dockerfile (the server side is Kotlin with Spring Boot, the front end is TypeScript with React) and I am trying to host that Docker image on GCP as a web app.
At first I thought Cloud Run could be appropriate, but it seems that Cloud Run is a serverless service and not meant for hosting a web app. I understand there are several options: App Engine (flexible environment), Compute Engine and Kubernetes Engine.
Considering the story above, may I ask the GCP community which one to choose for these purposes:
Hosting a Docker image stored in Container Registry
The app should be publicly deployed, i.e. everyone can access it via a browser like any other website
The deployed Docker image needs to connect to Cloud SQL to persist its data
Planning to use Cloud Build for CI/CD
Any help would be very appreciated. Thank you!
IMO, you should avoid what you propose (Kubernetes, Compute Engine and App Engine Flex) and (re)consider Cloud Run and App Engine Standard.
If you have a container, App Engine Standard can't run it directly, but you can simply deploy your code and let App Engine Standard build and deploy its own container (with your code inside).
My preference is Cloud Run, and it's perfectly suited for web apps, as long as:
You only perform processing on request (no background processes, no long-running operations of more than 60 minutes)
You don't need to store data locally (store data in external services instead, such as databases or object storage)
I also recommend that you split your front end and your back end:
Deploy your front end on App Engine Standard or on Cloud Storage
Deploy your back end on Cloud Run (and thus in a container; see the sketch below)
Put an HTTPS load balancer in front of both to remove CORS issues and to expose only one URL (behind your own domain name)
The main advantages are:
If you serve your files from Cloud Storage, you can leverage caching and thus reduce both cost and latency. The same goes if you use the CDN capacity of the load balancer. If you host your front end on Cloud Run or any other compute system, you will use CPU just to serve static files, and you will pay for that CPU/memory needlessly.
Separating the front end and the back end gives you the ability to evolve each part independently, redeploying only the part that has changed instead of the whole application.
The proposed pattern is an enterprise-grade pattern: starting from around $16 per month, you can scale high and globally. You can also activate a WAF on the load balancer to increase security and attack prevention.
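To make the back-end part concrete, here is a minimal sketch of a Cloud Run service declared as YAML (PROJECT, REGION and INSTANCE are placeholders; deployable with gcloud run services replace service.yaml). It also shows how the Cloud SQL connection is attached:

    # service.yaml - Cloud Run service for the backend container
    apiVersion: serving.knative.dev/v1
    kind: Service
    metadata:
      name: backend
    spec:
      template:
        metadata:
          annotations:
            # attach the Cloud SQL instance to each revision
            run.googleapis.com/cloudsql-instances: PROJECT:REGION:INSTANCE
        spec:
          containers:
            - image: gcr.io/PROJECT/backend   # the image from Container Registry
              ports:
                - containerPort: 8080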
So, if you agree with all that, what are your next questions?
I would like to deploy a Rails API app to AWS Elastic Beanstalk and noticed that there are two options for Docker.
Single container
Multi-container
I think a single container is enough for this app; however, I was wondering when multi-container is the right choice. If I wanted to deploy two Rails apps (one an API app and the other an admin app) to a single EC2 instance, would that be the case?
Well... not really. Multicontainer, as the name suggests, has more than one container within the overall definition (done with the Dockerrun.aws.json file). You can still deploy just one container with whatever application you want; say Django, a Python-based framework, where the API and the admin panel both sit within one application.
But you may want to deploy your application behind a reverse proxy, say Nginx, so there's a need for a second container. That's the case where you would use Multicontainer. The main advantage of Multicontainer is that the containers can talk to each other over a local network with DNS host mapping, so your Nginx container can proxy_pass to any application by its container name, like just "backend", where the Rails or Django application lives.
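A minimal Dockerrun.aws.json sketch of that Nginx-plus-backend pairing (image names, ports and memory values are placeholders):

    {
      "AWSEBDockerrunVersion": 2,
      "containerDefinitions": [
        {
          "name": "backend",
          "image": "myorg/rails-api",
          "essential": true,
          "memory": 256
        },
        {
          "name": "nginx",
          "image": "nginx",
          "essential": true,
          "memory": 128,
          "portMappings": [
            { "hostPort": 80, "containerPort": 80 }
          ],
          "links": ["backend"]
        }
      ]
    }

With the links entry in place, the Nginx config can simply use proxy_pass http://backend:3000; (or whatever port your app listens on).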
I have two apps that I would like to combine somehow with Docker and Docker Compose. The apps are:
API
This one I have been able to get running on Docker.
Consists of the following containers: web (a Rails app), Postgres, and Redis.
Scraping app
This app scrapes other websites, makes sure the data is consistent, and sends it to the API. This is the app I don't know how to get running on Docker.
It's a Node app and would consist of the following containers: web (a Sails app), MongoDB, Redis, and the API.
My question is whether it's possible to write the Dockerfile or docker-compose.yml for the scraping app such that it is linked to the API app, which itself is linked to at least two other containers. Or do I have to manually boot the API app before booting the scraping app?
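To make it concrete, what I have in mind is one compose file describing both apps, something like this (the service names are mine, and I'm not sure the cross-app link is legal):

    # docker-compose.yml covering both apps
    api:
      build: ./api
      links:
        - api_postgres
        - api_redis
    api_postgres:
      image: postgres
    api_redis:
      image: redis
    scraper:
      build: ./scraper
      links:
        - scraper_mongo
        - scraper_redis
        - api              # the scraper would reach the API at hostname "api"
    scraper_mongo:
      image: mongo
    scraper_redis:
      image: redis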
I want to migrate my current deployment to Docker. It relies on a MongoDB service, a Redis service, a Postgres server and a Rails app, and I have already created a Docker container for each, but I have doubts when it comes to starting and linking them. In development I'm using Fig, but I don't think it was meant to be used in production. To take my deployment to production level, what mechanism should I use to auto-start and link containers together? My deployment uses a single Docker host that already runs Ubuntu, so I can't use CoreOS.
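For reference, my fig.yml currently looks roughly like this (trimmed):

    web:
      build: .
      links:
        - pg
        - mongo
        - redis
    pg:
      image: postgres
    mongo:
      image: mongo
    redis:
      image: redis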
Linking containers in production is a tricky thing. It hardwires the IP addresses of the dependent containers, so if you ever need to restart a container or launch a replacement (like upgrading the version of MongoDB), your Rails app will not work out of the box with the new container and its new IP address.
This other answer explains some available alternatives to linking.
Regarding starting the containers, you can use any deployment tool to run the required docker commands (Capistrano can easily do that). After that, Docker can restart the containers after a reboot (for example via a --restart=always policy).
You might need a watcher process to restart containers if they die, just as you would have one for a normal Rails app.
Services like Tutum and Dockerize.it can make this simpler. As far as I know, Tutum will not deploy to your servers. Dockerize.it will, but is very rough (disclaimer: I'm part of the team building it).
You can convert your Fig configuration to CoreOS-formatted systemd configuration files with fig2coreos. Google Compute Engine supports CoreOS, or you can run CoreOS on AWS or your cloud provider of choice. fig2coreos also supports deploying to CoreOS in Vagrant for local development.
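The output is one systemd unit per container; roughly, each generated unit looks something like this hand-written sketch (names and images are illustrative, not actual fig2coreos output):

    [Unit]
    Description=redis container
    After=docker.service
    Requires=docker.service

    [Service]
    # the leading "-" means: don't fail if there is no old container to remove
    ExecStartPre=-/usr/bin/docker rm -f redis
    ExecStart=/usr/bin/docker run --rm --name redis redis
    ExecStop=/usr/bin/docker stop redis
    Restart=always

    [Install]
    WantedBy=multi-user.target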
CenturyLink (the fig2coreos authors) have an example blog post here:
"This blog post will show you how to bridge the gap between building complex multi-container apps using Fig and deploying those applications into a production CoreOS system."
EDIT: If you are constrained to an existing host OS, you can use QEMU ("a generic and open source machine emulator and virtualizer") to host a CoreOS instance. Instructions are available from the CoreOS team.