I am trying to deploy this app: https://github.com/taigaio/taiga-docker . The project is a collection of various images and uses docker-compose to create the containers. It is my understanding that this cannot be run as a single Docker image from a GCP Artifact Registry repo. Does it need a VM, perhaps?
My question is whether there is a way to deploy this as an image in a serverless fashion on GCP or any other cloud platform. Any pointers/help would be much appreciated.
You can either use Cloud Run (the most serverless way) or run it on a VM.
On Cloud Run you can deploy a single image as a Service (Cloud Run terminology); if you have more than one image, you can deploy multiple Services and make them talk to each other (see the sketch below).
Or on a VM, which would be much like deploying on your personal laptop: install Docker and docker-compose and run the stack as-is.
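For the Cloud Run route, a minimal sketch might look like the following, assuming the Taiga images have already been pushed to Artifact Registry (the registry paths, project, and region are hypothetical):
gcloud run deploy taiga-back --image=us-central1-docker.pkg.dev/my-project/taiga/taiga-back:latest --region=us-central1
gcloud run deploy taiga-front --image=us-central1-docker.pkg.dev/my-project/taiga/taiga-front:latest --region=us-central1 --allow-unauthenticated
The stateful pieces of the stack (e.g. the PostgreSQL database) would typically live outside Cloud Run, for example in Cloud SQL.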
Related
I have built a Docker image that, when run, registers itself as a GitHub runner. This runner will, amongst other things, be used to build and push images to GitHub Container Registry. I don't want to deploy the containers to GKE or Compute Engine, as I don't want the overhead of managing those resources. I would prefer to deploy the containers to Google Cloud Run. I've scoured the docs for help but I can't seem to find the answers to the following questions:
Can I run 'docker in docker' when the container is deployed to GCP Cloud Run?
How do I specify the volume mount required when deploying the container to Google Cloud Run, i.e. the usual mapping with docker run would be:
-v /var/run/docker.sock:/var/run/docker.sock
I have never tested it, but it's possible that the current Cloud Run sandbox prevents this kind of use. And I don't really see the use case for this!
You can't mount volumes in Cloud Run; it's stateless. You only have an in-memory file system in the /tmp directory (and because it is in memory, size your Cloud Run instance's memory accordingly). You can connect your instance to third-party products such as Google Cloud Storage or databases, but no volumes can be mounted on Cloud Run (for now).
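Since the in-memory /tmp comes out of the instance's memory allocation, here is a hedged example of sizing that at deploy time (the service and image names are hypothetical):
gcloud run deploy my-service --image=gcr.io/my-project/my-image --memory=1Gi --region=us-central1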
If you have these requirements, you can maybe have a look at GKE Autopilot and deploy your container directly on fully managed Kubernetes.
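A rough sketch of the Autopilot route, assuming the image is already pushed to a registry (the cluster, project, and image names are hypothetical); note that Docker-in-Docker usually needs a privileged container or the host's Docker socket, so check that your cluster's policies allow it:
gcloud container clusters create-auto runner-cluster --region=us-central1
gcloud container clusters get-credentials runner-cluster --region=us-central1
kubectl create deployment github-runner --image=gcr.io/my-project/github-runner:latest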
I've built an app that uses two homemade microservices, each microservice having its own Dockerfile.
When I build it locally I use docker-compose for practical reasons.
Currently, when I deploy to Cloud Run I use commands like
docker tag xxx
docker push xxx
Then I select the image I want to deploy on Cloud Run.
As I understand it, docker-compose build just builds the two images (one for each Dockerfile), and docker-compose up then places the containers on the same network, which allows convenient communication between the two APIs.
Is it possible to do something similar on Cloud Run without having to deploy each image to a different service?
PS: For business reasons I can't host my code directly on Cloud Source Repositories; it has to be on Azure.
It is not possible to deploy two different Docker images to a single Cloud Run service.
Cloud Run works in the following way:
You build a container image and upload it to Google Container Registry
Deploy to Cloud Run with the container image.
Your service is automatically scaled up and down to a specific number of container instances depending on your incoming requests. Each container will run the container image.
Summary: Cloud Run takes a user's container and executes it on Google infrastructure, handling the instantiation of instances (scaling) of that container.
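A minimal sketch of that flow for one of your two images (the project, image, and service names are hypothetical):
gcloud auth configure-docker
docker build -t gcr.io/my-project/service-a:1.0 ./service-a
docker push gcr.io/my-project/service-a:1.0
gcloud run deploy service-a --image=gcr.io/my-project/service-a:1.0 --region=us-central1 --allow-unauthenticated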
Please note, Cloud Run is designed to run websites, REST API backends, back-office administration, etc., and a single service does not support a multi-container setup (different servers each running in their own container).
For your scenario, you can deploy multiple services in Cloud Run and/or use other Google products such as Cloud SQL, Datastore, Spanner or Bigtable.
Note: You can't deploy 2 containers in the same service; however, you can deploy a container that contains multiple processes, as explained in this article written by a Googler.
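If you go the multi-process route from that article, the usual pattern is a small entrypoint script that starts one service in the background and the main one in the foreground on $PORT; a hypothetical sketch (the binary names and internal port are made up):
#!/bin/sh
set -e
./service-b --port 8081 &                  # internal API, reachable at localhost:8081
exec ./service-a --port "${PORT:-8080}"    # main process listens on Cloud Run's $PORT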
I run RStudio Server mostly from an EC2 instance. However, I'd also like to run it from a cluster at work. They tell me that I can set up Docker with RStudio and make it run. Now, I'd also like the RStudio instances on both EC2 and the work cluster to have the same packages and the same versions available. Any idea how I can do this? Can I have both versions point to a Dropbox folder? In that case, how can I mount a Dropbox folder?
You should set up a Docker repository on Docker Hub or in AWS EC2 Container Registry (ECR), which pairs with EC2 Container Service (ECS). ECS is a managed service that allows you to easily deploy Docker containers onto a cluster of one or more EC2 instances running the ECS agent (an AWS program that lets the cluster work with ECS). The Dockerfile should install all packages that you need at image build time. I suggest referencing the AWS ECS documentation, which includes a walkthrough to get you going very quickly (assuming you have an idea of how Docker works): https://aws.amazon.com/documentation/ecs/
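A hedged sketch of the build-and-push step, assuming a Dockerfile (e.g. based on rocker/rstudio) that installs your pinned R packages; the account ID, region, and repository name are hypothetical:
aws ecr create-repository --repository-name rstudio
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker build -t rstudio:1.0 .
docker tag rstudio:1.0 123456789012.dkr.ecr.us-east-1.amazonaws.com/rstudio:1.0
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/rstudio:1.0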
You should then always run from that Docker image, whether you are running on a local or remote machine. One key advantage of Docker is that it keeps your application's environment the same (assuming you use the same build of the image) regardless of the host environment.
I am not sure why you would not always run on ECS (we have multiple analysts using RStudio, and ECS lets us provision CPU/memory resources for each one, as well as autoscale as needed). You could install Docker on EC2 and manage it that way, but it is probably easier to just install the ECS agent (or use the ECS-optimized EC2 AMI which has it preinstalled; the docs above walk through configuring it) and use ECS to launch RStudio services.
My application is composed of two separate Docker containers: one being a Grails-based web application and the second being a RESTful Python Flask application. Both Docker images sit on my local computer. They are not hosted on Docker Hub. They are proprietary and I don't want to host them publicly.
I would like to try Cloud Foundry to deploy these docker containers and see how it works. However, from the documentation I get a sense that Cloud Foundry doesn't support deploying docker containers sitting on a local machine.
Question
Is there a way to deploy Docker containers sitting on a local computer to Cloud Foundry? If not, what is a way to securely host the containers somewhere from which CF can fetch them?
Is Cloud Foundry capable of running a Docker container that is a Python Flask application?
One option you have is to not use Docker images at all and just push your code directly, which is one of the nice features of CF. PCF comes with a Python buildpack that should automatically detect your Flask app.
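A minimal sketch of the buildpack route (the app name is hypothetical); run it from the directory that contains your Flask code:
cf push flask-api -m 256M
The Python buildpack looks for a requirements.txt (or setup.py) to install your dependencies.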
Another option would be to run your own trusted Docker registry, push your images there, and then, when you push your app, tell it to grab the images from your registry (a push sketch follows the links below). If you google "cloud foundry docker registry" you get the following useful results you should check out:
https://github.com/cloudfoundry-community/docker-registry-boshrelease
http://docs.pivotal.io/pivotalcf/1-8/adminguide/docker.html#caveats
https://docs.pivotal.io/pivotalcf/1-7/opsguide/docker-registry.html
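Once the images are in a registry that CF can reach, the push itself is straightforward; a rough sketch, assuming a private registry (the hostnames, image names, and credentials below are hypothetical):
cf enable-feature-flag diego_docker     # an admin may first need to enable Docker support
export CF_DOCKER_PASSWORD='registry-password-here'   # hypothetical registry credential
cf push grails-web --docker-image registry.example.com/grails-web:1.0 --docker-username deploy-user
cf push flask-api --docker-image registry.example.com/flask-api:1.0 --docker-username deploy-user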
I am currently trying to port a service across to ASP.NET Core 1.0 and get it up and running in a local Kubernetes cluster, or even on a single node (Kubernetes master and 1 minion). I have successfully managed the first part and had my service running on Kestrel using Docker within a Boot2Docker VM and also on CentOS 7. I am now trying to get my container up and running in Kubernetes. I have been trawling Google for a guide to doing this, and everywhere I turn it seems a rather convoluted task. Has anyone else achieved this, and do you have any useful guides/links?
You are on the right path, just a few additional steps:
Package your app into a Docker image: use the aspnet base image and add your code (https://hub.docker.com/r/microsoft/aspnet/)
Push your image up to a docker repo
Deploy that image to your cluster
The basic rule of thumb is: just get your app dockerized, then you can run it in k8s.
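A rough sketch of steps 2 and 3, assuming kubectl already points at your cluster (the image name, registry, and ports are hypothetical; Kestrel listens on 5000 by default):
docker build -t myrepo/aspnet-service:1.0 .
docker push myrepo/aspnet-service:1.0
kubectl create deployment aspnet-service --image=myrepo/aspnet-service:1.0
kubectl expose deployment aspnet-service --type=NodePort --port=80 --target-port=5000
kubectl get pods                  # check the pod comes up
kubectl get svc aspnet-service    # find the NodePort to reach the service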