Deploy a microservices self-hosted app to Docker Hub - docker

I have a full-stack microservices app with this file structure:
---app name
------web-ui
------service1
------service2
------service3
The services are Java-based, and the app uses two databases (SQLite and MongoDB) plus other infrastructure including Apache Kafka, Elasticsearch, and Eureka.
I want this to work as a self-hosted app, kind of like Nextcloud or Jellyfin, where anyone can go to Docker Hub, pull the whole thing, and run it with docker run as if it were a single image.
How can I go about dockerizing the whole thing and deploying it to Docker Hub?
I'm new to Docker and the whole ecosystem, so I've been digging around, but I got confused.
For example, can I deploy it to Docker Hub using Docker Compose?
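For concreteness, a minimal sketch of what a Compose file for this layout could look like; the service paths, ports, and Docker Hub namespace here are placeholders, not from the original project:

```
# docker-compose.yml (sketch; image names and ports are assumptions)
services:
  web-ui:
    build: ./web-ui
    image: yourhubuser/appname-web-ui:latest
    ports:
      - "8080:8080"
    depends_on:
      - service1
  service1:
    build: ./service1
    image: yourhubuser/appname-service1:latest
    depends_on:
      - mongodb
      - kafka
  mongodb:
    image: mongo:6            # off-the-shelf image, pulled as-is
  kafka:
    image: bitnami/kafka:3.6  # off-the-shelf image, pulled as-is
```

With both build: and image: set, `docker compose build` followed by `docker compose push` publishes your own images to Docker Hub; the off-the-shelf images (MongoDB, Kafka, and so on) stay on their upstream repositories. End users then run `docker compose up` against your published compose file rather than a single `docker run`, since a multi-service stack cannot be pushed as one image. Note that SQLite is file-based and embedded in whichever service uses it, so it does not need its own container.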

Related

Deploying Container on GCP

I am trying to deploy this app: https://github.com/taigaio/taiga-docker. It is a collection of various images, and it uses docker-compose to bring up the containers. My understanding is that it cannot be run as a single Docker image from a GCP Artifact Registry. Does it need a VM, perhaps?
My question is whether there is a way to deploy this as an image in a serverless fashion on GCP or any other cloud platform. Any pointers/help are much appreciated.
You can either use Cloud Run (the most serverless way) or a VM.
On Cloud Run you can deploy a single image as a Service (Cloud Run terminology); if you have more than one image, you can deploy multiple Services and make them talk to each other.
On a VM, it would be much as if you were deploying on your personal laptop.
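A hedged sketch of that multi-Service setup; the service, project, and region names are placeholders:

```
# Deploy each image as its own Cloud Run Service
gcloud run deploy backend  --image gcr.io/PROJECT_ID/backend  --region us-central1
gcloud run deploy frontend --image gcr.io/PROJECT_ID/frontend --region us-central1

# Each Service gets a stable HTTPS URL; read the backend's URL and
# hand it to the frontend so the two can talk to each other
BACKEND_URL=$(gcloud run services describe backend \
  --region us-central1 --format 'value(status.url)')
gcloud run deploy frontend --image gcr.io/PROJECT_ID/frontend \
  --region us-central1 --set-env-vars "BACKEND_URL=${BACKEND_URL}"
```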

Is it possible to deploy two different Docker images within the same Cloud Run service

I've built an app that uses two homemade microservices, each microservice having its own Dockerfile.
When I build it locally I use docker-compose for practical reasons.
Currently, when I deploy to Cloud Run I use commands like
docker tag xxx
docker push xxx
Then I select the image I want to deploy on Cloud Run.
As I understand it, docker-compose build just builds two images (one for each Dockerfile) and then places them within the same network, which allows some practical connections between these two APIs.
Is it possible to do something similar on Cloud Run without having to deploy each image as a different service?
PS: For business reasons I can't host my code directly on Cloud Source Repositories, it has to be on Azure
It is not possible to deploy two different Docker images to the same Cloud Run service.
Cloud Run works in the following way:
You build a container image and upload it to Google Container Registry.
You deploy to Cloud Run with that container image.
Your service is automatically scaled up and down to a specific number of container instances depending on your incoming requests. Each instance runs the container image.
In summary, Cloud Run takes a user's container and executes it on Google infrastructure, handling the instantiation of instances (scaling) of that container.
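A minimal sketch of that build-upload-deploy loop, with placeholder project and service names:

```
# 1. Build the container image locally
docker build -t gcr.io/PROJECT_ID/my-service:v1 .

# 2. Upload it to Google Container Registry
docker push gcr.io/PROJECT_ID/my-service:v1

# 3. Deploy the image to Cloud Run; scaling is handled automatically
gcloud run deploy my-service \
  --image gcr.io/PROJECT_ID/my-service:v1 \
  --platform managed \
  --region us-central1
```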
Please note, Cloud Run is designed to run websites, REST API backends, back-office administration, etc., and a single service does not support a microservices architecture (different servers running in different containers).
For your scenario, you can deploy multiple services in Cloud Run or use other Google products such as Cloud SQL, Datastore, Spanner, or Bigtable.
Note: you can't deploy two containers in the same service; however, you can deploy a container that contains multiple processes, as explained in this article written by a Googler. A rough sketch of that pattern follows below.
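The file names, script, and ports below are assumptions for illustration; the key constraint is that only the process listening on $PORT receives Cloud Run traffic:

```
# Dockerfile (illustrative): bundle both processes into one image
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN chmod +x start.sh
CMD ["./start.sh"]
```

```
#!/bin/sh
# start.sh: start the internal helper in the background, then exec the
# public-facing server on $PORT, the only port Cloud Run routes traffic to
python helper.py --port 8081 &
exec python server.py --port "${PORT:-8080}"
```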

Kubernetes on Docker for Windows -> AKS/EKS

With the Kubernetes orchestrator now available in the stable version of Docker Desktop for Win/Mac, I've been playing around with running an existing compose stack on Kubernetes locally.
This works fine, e.g., docker stack deploy -c .\docker-compose.yml myapp.
Now I want to go to the next step of running this same application in a production environment using the likes of Amazon EKS or Azure AKS. These services expect proper Kubernetes YAML files.
My question is: what's the best way to get these files? More specifically:
Presumably, docker stack is performing some conversion from Compose YAML to Kubernetes YAML 'under the hood'. Are there documentation or source code links explaining what is going on here, and can that converted YAML be exported?
Or should I just be using Kompose?
It seems that running the above docker stack deploy command against a remote context (e.g., AKS/EKS) is not possible and that one must deploy with kubectl instead. Can anyone confirm?
docker stack deploy with a Compose file to Kube only works on Docker's Kubernetes distributions - Docker Desktop and Docker Enterprise.
With the recent federation announcement you'll be able to manage AKS and EKS with Docker Enterprise, but using them directly means you'll have to use Kubernetes manifest files and kubectl.
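If you go the Kompose route, the conversion and deployment are roughly as follows; the file and directory names here are just defaults, not from the question:

```
# Convert the Compose file into Kubernetes manifests...
kompose convert -f docker-compose.yml -o k8s/

# ...then apply them against the remote AKS/EKS cluster with kubectl
kubectl apply -f k8s/
```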

How to deploy a Docker container to Cloud Foundry?

My application consists of two separate Docker containers: one a Grails-based web application and the second a RESTful Python Flask application. Both Docker containers sit on my local computer. They are not hosted on Docker Hub; they are proprietary and I don't want to host them publicly.
I would like to try Cloud Foundry to deploy these docker containers and see how it works. However, from the documentation I get a sense that Cloud Foundry doesn't support deploying docker containers sitting on a local machine.
Question
Is there a way to deploy Docker containers sitting on a local computer to Cloud Foundry? If not, what is a way to securely host the containers somewhere from which CF can fetch them?
Is Cloud Foundry capable of running a Docker container that is a Python Flask application?
One option is to not use Docker images at all and just push your code directly, one of the nice features of CF. PCF comes with a Python buildpack which should automatically detect your Flask app.
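A sketch of that buildpack route, run from the Flask app's directory; the app name is a placeholder, and I'm assuming the buildpack detects Python from a requirements.txt in the project root:

```
# Push the source directly; the python buildpack stages and runs it
cf push my-flask-api
```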
Another option would be to run your own trusted Docker registry, push your images there, and then, when you push your app, tell it to grab the images from your registry. If you google "cloud foundry docker registry" you get the following useful results worth checking out:
https://github.com/cloudfoundry-community/docker-registry-boshrelease
http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html#caveats
https://docs.pivotal.io/pivotalcf/1-7/opsguide/docker-registry.html
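Once a private registry is in place, pushing from it looks roughly like this; the names and registry host are placeholders, and Docker support must be enabled once per foundation by an admin:

```
# One-time, admin-level: allow Docker images to run on Diego
cf enable-feature-flag diego_docker

# Push the image from the private registry; the password is read from
# the CF_DOCKER_PASSWORD environment variable
CF_DOCKER_PASSWORD=secret cf push grails-web \
  --docker-image registry.example.com/grails-web:latest \
  --docker-username deploy-user
```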

Google Cloud Container: Create a Docker container from a Dockerfile

Is it possible to create a Docker container in a Google Container Engine cluster using a Dockerfile, building the image on the fly and deploying it in the cluster, rather than creating an image first, uploading it to Google Container Registry, and then using it from there?
That feels cumbersome; there should be a way to create containers in the cluster directly from a Dockerfile.
It is not possible to do this in Google Container Engine. Google Container Engine is designed to help orchestrate container deployment and does not itself provide a source -> deployment workflow.
You may want to look at Google App Engine or OpenShift 3 (which is built on Kubernetes) as a more fully featured platform-as-a-service offering.
You can also build this type of tooling on top of a Google Container Engine cluster yourself as all of the building blocks are available.
One service to take a look at when constructing a workflow is Google Container Builder, which can simplify the process of building a container from source and pushing it to Google Container Registry.
It is currently a fairly low level service, but offers some advantages for environments where it may be impractical to run docker build locally.
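A sketch of that Container Builder flow; project and image names are placeholders, and in newer gcloud releases the equivalent command is `gcloud builds submit`:

```
# Build remotely from the local source + Dockerfile and push the
# result to Google Container Registry in one step
gcloud container builds submit --tag gcr.io/PROJECT_ID/my-app:v1 .

# Run the pushed image on the Container Engine cluster
kubectl run my-app --image=gcr.io/PROJECT_ID/my-app:v1 --port=8080
```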
