Access the Rails console of an app deployed in Google Cloud Run

We deployed a Rails app to Google Cloud Run using their managed platform. The app is working fine and is able to serve requests.
Now we want to get access to the rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that currently Cloud Run supports only HTTP requests. If no other way is possible, I'll have to consider something like Rails web console.

I think you cannot.
I'm familiar with Cloud Run but I'm not familiar with Rails.
I assume you'd need to be able to shell into a container in order to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to connect you to the container so that you could do this.
Cloud Run does not (appear to) permit this. I think it's a potentially useful feature request for the service. For those containers that contain shells, this would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be serviced by multiple, load-balanced containers and so you'd want to be able to console into any of these.
However, you may wish to consider alternative mechanisms for managing your app: monitoring, logging, tracing, etc. These mechanisms should provide you with sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle" whereby, instead of nurturing individual containers (is one failing?), you nurture the containers holistically (is the service comprising many containers failing?).
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
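A minimal sketch of that, assuming your image lives in Container Registry as gcr.io/PROJECT/app (a placeholder) and your app ships the usual Rails binstubs:

docker pull gcr.io/PROJECT/app
# Open a console in a throwaway container instead of starting the web server;
# you may also need to pass DATABASE_URL or other secrets your app expects.
docker run --rm -it -e RAILS_ENV=production gcr.io/PROJECT/app bin/rails console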

Related

Why multi-container docker apps are built?

Can somebody explain it with some examples? Why are multi-container Docker apps built when you can contain your whole app in a single Docker container?
When you make a multi-container app you have to deal with networking. Isn't it easier to run a single image as a single container rather than two images as two containers?
There are several good reasons for this:
It's easier to reuse prebuilt images. If you need MySQL, or Redis, or an Nginx reverse proxy, these all exist as standard images on Docker Hub, and you can just include them in a multi-container Docker Compose setup. If you tried to put them into a single image, you'd have to install and configure them yourself.
The Docker tooling is built for single-purpose containers. If you want the logs of a multi-process container, docker logs will generally print out the supervisord logs, which aren't what you want; if you want to restart one of those containers, the docker stop; docker rm; docker run sequence will delete the whole thing. Instead, with a multi-process container, you need to use debugging tools like docker exec to do anything, which is harder to manage.
You can upgrade one part without affecting the rest. Upgrading the code in a container usually involves building a new image, stopping and deleting the old container, and running a new container from the new image. The "deleting the old container" part is important, and routine; if you need to delete your database to upgrade your application, you risk losing data.
You can scale one part without affecting the rest. More applicable in a cluster environment like Docker Swarm or Kubernetes. If your application is overloaded (especially in production) you'd like to run multiple copies of it, but it's hard to run multiple copies of a standard relational database. That essentially requires you to run these separately, so you can run one proxy, five application servers, and one database.
Setting up a multi-container application shouldn't be especially difficult; the easiest way is to use Docker Compose, which will deal with things like creating a network for you.
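A minimal sketch of what that might look like (the image, port, and environment names are placeholders, not taken from your app):

version: "3.8"
services:
  app:
    build: .                         # your single-purpose application image
    ports:
      - "8080:8080"
    environment:
      DATABASE_HOST: db              # the service name below doubles as a hostname
    depends_on:
      - db
      - cache
  db:
    image: mysql:5.7                 # prebuilt image reused from Docker Hub
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - dbdata:/var/lib/mysql        # data survives container replacement
  cache:
    image: redis:6
volumes:
  dbdata:

Compose creates a network for these services automatically, so app can reach MySQL at db:3306 and Redis at cache:6379, and each container can be rebuilt, restarted, or scaled on its own.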
For the sake of simplicity, I would say you should run only one application with a public entry point (like an API) in a single container. This approach is actually recommended by the official Docker documentation.
Microservices
Because of this one-application-per-container constraint, you cannot run microservices that each require their own entry point inside a single Docker container.
It becomes more a discussion of the advantages of a monolithic application vs. microservices.
Database
Even if you decide to run only the monolithic application, you still need to connect some database to it. As you noticed, Docker has an additional network-configuration layer, so if you want to run the database and the application locally, the easiest way is to use docker-compose to run both images (the database and your application) inside one automatically configured network.
# Application definition
application: <your app definition>
# Database definition
database:
  image: mysql:5.7
In my example, you can just connect to your DB from your main app via database:<port> (plus credentials eventually) and it will work.
Scalability
However, why should we split the database image from the application image? In one word: scalability. For development purposes, you want to have your local DB, maybe with Docker because it is handy. For production purposes, you will put the application image to run somewhere (Kubernetes, Docker Swarm, Azure App Services, etc.). To handle multiple requests at the same time, you want to run multiple instances of your application. But what about the database? You cannot connect to an internal instance of the DB hosted in the same container, because other instances of your app in other containers would have a completely different set of data (without synchronization).
Most often you will elect to use a separate database server - whether running in a container or as a fully managed database (like Azure CosmosDB or Mongo Atlas) - with configuration, scaling, and synchronization dedicated to the DB only. Your app just needs to know the proper URL for it. Most cloud providers expose such services out of the box, so you don't have to worry about the configuration yourself.
Easy to change
The last but not least argument is about changing the initial setup over time. You might change the database provider, or upgrade the version of an image in the future (such things are required from time to time). When you separate images, you can modify one without touching the others. This decreases the cost of maintenance significantly.
Also, you can add additional services very easily - a different logging aggregator? No problem. An additional microservice running out of the box? Easy.

Can you run a sandbox container within a Cloud Run container?

Let's say I would like to let the user upload some Python or Bash script, execute it in Cloud Run, and get the result back. To do this I would create a Cloud Run service with a service account that has no permissions to access project resources. I would also run the script within a nested container so the user cannot interfere with the server code and manipulate consecutive requests from other users.
How would I make gvisor runsc or some other sandbox runtime available within the container running on Cloud Run?
I found some resources mentioning using the privileged flag on the original container, but that is not possible with Cloud Run. Also, I cannot find any information on how to run rootless containers with runsc. Let me know if I am on the right track, if this is even possible with Cloud Run, or if I should use another service.
Thank you.
Currently, Cloud Run (fully managed) itself runs on a gVisor sandbox, so its support for the low-level Linux APIs needed to create further container environments using cgroups or Linux namespaces is probably not going to be there.
However, since gVisor is technically a user-space sandboxing technology (though I'm not sure what level of privileges it requires), you might be able to run a gVisor sandbox inside gVisor, though I would not hold my hopes high as it's probably not designed for that. I'm guessing that the gVisor sandbox does not provide the ptrace capabilities needed for nested sandboxes to work, though you can probably ask this on gVisor's own GitHub repository.
For a use case like this, I recommend checking out Cloud Run for Anthos on GKE: it's a similar developer experience to Cloud Run, but it runs your applications on GKE nodes (which are GCE VMs) that have the full Linux system call suite available to them. Since the Kubernetes pod spec is available there, you can actually create privileged containers, run VMs inside them, etc.
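For illustration only, this is roughly how a privileged container is requested in a plain Kubernetes pod spec on GKE (the pod and image names are placeholders; whether your workload should really be privileged is a separate question):

apiVersion: v1
kind: Pod
metadata:
  name: sandbox-runner                       # placeholder name
spec:
  containers:
    - name: runner
      image: gcr.io/PROJECT/sandbox-runner   # placeholder image
      securityContext:
        privileged: true                     # possible on GKE nodes, not on fully managed Cloud Run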
Usually containers themselves are supposed to be the sandbox, so attempting to create further sandboxes (like you asked earlier) is going to be a lot of platform-dependent work, even if you can get it running somehow.

Docker (Compose? Swarm?) How to run a health check before exposing a container

I have a web app (.NET Core) running in a Docker container. If I update it under load, it won't be able to handle requests until there is a gap in the traffic. This might be a bug in my app or in .NET; I am looking for a workaround for now. If I hit the app with a single HTTP request before exposing it to the traffic, though, it works as expected.
I would like to get this behaviour:
On the running server, get the latest release of the container image.
Launch the new container, detached from the network.
Run a health check on it; if the health check fails, stop.
Remove the old container.
Attach the new container and start processing traffic.
I am using Compose at the moment, and have somewhat limited knowledge of Docker infrastructure. The problem should be something well understood, yet I've failed to find anything on Google about the topic.
It kind of sounds like Kubernetes at this stage, but I would like to keep it as simple as possible.
The thing I was looking for is blue/green deployment, and it is quite easy to search for.
E.g.
https://github.com/Sinkler/docker-nginx-blue-green
https://coderbook.com/#marcus/how-to-do-zero-downtime-deployments-of-docker-containers/
Swarm has a feature which could be useful as well: https://docs.docker.com/engine/reference/commandline/service_update/
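For reference, the health-check half of this flow can be declared in Compose/Swarm itself; a minimal sketch, assuming a hypothetical app image and a /health endpoint (both placeholders):

version: "3.8"
services:
  app:
    image: myapp:latest                  # placeholder image
    ports:
      - "80:80"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:80/health"]   # assumes curl and a /health endpoint exist
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 10s
    deploy:                              # honoured by Swarm (docker stack deploy), ignored by plain docker-compose
      update_config:
        order: start-first               # start the new task, wait for it to be healthy, then stop the old one
        failure_action: rollback

With that in place, a docker service update to a new image starts the replacement container, waits for its health check to pass, and only then stops the old one; if the check fails, the update is rolled back. That is essentially the blue/green behaviour described above.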

Docker, Jenkins and Rails - Setup for running specs on a typical Rails stack

I would like a Jenkins master and slave setup for running specs on standard Rails apps (PostgreSQL, Sidekiq/Redis, RSpec, capybara-webkit - a common Rails stack), using Docker so it can be put on other machines as well. I've got a few good stationary machines collecting dust.
Can anybody share an executable docker jenkins rails stack example?
What prevents that from being done?
Preferably with a master-slave setup too.
Preface:
After days online, following several tutorials with no success, I am about to abandon the project. I have a basic understanding of Docker, docker-machine, Docker Compose and volumes, and I have a Docker registry with a few simple apps.
I know next to nothing about Jenkins, but I've used Docker pretty extensively on other CI platforms. So I'll just write about that. The level of difficulty is going to vary a lot based on your app's dependencies and quirks. I'll try and give an outline that's pretty generally useful, and leave handling application quirks up to you.
I don't think the problem you describe should require you to mess about with docker-machine. docker build and docker-compose should be sufficient.
First, you'll need to build an image for your application. If your application has a comprehensive Gemfile, and not too many dependencies relating to infrastructure etc (e.g. files living in particular places that the application doesn't set up for itself), then you'll have a pretty easy time. If not, then setting up those dependencies will get complicated. Here's a guide from the Docker folks for a simple Rails app that will help get you started.
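As a rough sketch of what such an image often looks like (the Ruby version, packages, and paths here are illustrative assumptions, not taken from your app):

FROM ruby:2.6                        # pick the Ruby version your Gemfile expects
RUN apt-get update -qq && apt-get install -y nodejs postgresql-client
WORKDIR /myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]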
Once the image is built, push it to a repository such as Docker Hub. Log in to Docker Hub and create a repo, then use docker login and docker push <image-name> to make the image accessible to other machines. This will be important if you want to build the image on one machine and test it on others.
It's probably worth spinning off a job to run your app's unit tests inside the image once the image is built and pushed. That'll let you fail early and avoid wasting precious execution time on a buggy revision :)
Next you'll need to satisfy the app's external dependencies, such as Redis and postgres. This is where the Docker Compose file comes in. Use it to specify all the services your app needs, and the environment variables etc that you'll set in order to run the application for testing (e.g. RAILS_ENV).
You might find it useful to provide fakes of some non-essential services such as in-memory caches, or just leave them out entirely. This will reduce the complexity of your setup, and be less demanding on your CI system.
The guide from the link above also has an example Compose file, but you'll need to expand on it. The most important thing to note is that the name you give a service (e.g. db in the example from the guide) is used as a hostname that the other containers can resolve. As @tomwj suggested, you can search Docker Hub for common images like postgres and redis and find them pretty easily. You'll probably need to configure a new Rails environment with the new hostnames and so on in order to get all the service hostnames configured correctly.
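A minimal sketch of such a Compose file (the image tag, DATABASE_HOST and REDIS_URL variables are assumptions that your database.yml and initializers would have to read):

version: "3.8"
services:
  app:
    image: myorg/myapp:latest        # the image you built and pushed earlier
    environment:
      RAILS_ENV: test
      DATABASE_HOST: db              # the service names below become hostnames
      REDIS_URL: redis://redis:6379/1
    depends_on:
      - db
      - redis
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: password
  redis:
    image: redis:5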
You're starting all your services from scratch here, including your database, so you'll need to migrate and seed it (and any other data stores) on every run. Because you're starting from an empty postgres instance, expect that to take some time. As a shortcut, you could restore a backup from a previous version before migrating. In any case, you'll need to do some work to get your data stores into shape, so that your test results give you useful information.
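In practice that usually boils down to running a couple of commands through Compose before the test step; a sketch, assuming your application service is called app as above:

docker-compose run --rm app bin/rails db:create db:migrate db:seed
# Optionally restore a dump first to save time (host, user, database, and path are placeholders):
# docker-compose run --rm app bash -c "pg_restore -h db -U postgres -d app_test /tmp/backup.dump"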
One of the tricky bits will be getting Capybara to run inside your application Docker image, which won't have any X displays by default. xvfb (X Virtual Frame Buffer) can help with this. I haven't tried it, but building on top of an image like this one may be of some help.
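If you go the xvfb route, the usual pattern is to install it into the image and wrap the test command with xvfb-run; a sketch for a Debian-based image:

# In the Dockerfile:
#   RUN apt-get update -qq && apt-get install -y xvfb
# In the CI step:
docker-compose run --rm app xvfb-run -a bundle exec rspec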
Best of luck with this. If you have the time to persist with it, it will really help you learn about what your application really depends on in order to work. It certainly did for me and my team!
There's quite a lot to unpack in that question; this is a guide on how to get started and where to look for help.
In short, there's nothing preventing it, although it's reasonably complex and bespoke to set up - hence no off-the-shelf solution.
Assuming your aim is to have Jenkins build, deploy to Docker, and then test a Rails application in a Dockerised environment:
Provision the stationary machines; I'd suggest using Ansible Galaxy roles.
Install Jenkins
Install Docker
Setup a local Docker registry
Set up the Docker environment. The way to bring up multiple containers is to use Docker Compose; this will allow you to bring up the DB, Redis, Rails, etc. using the public Docker Hub images.
Create a Jenkins pipeline
Build the Rails app Docker image; this will contain the Rails app.
Deploy the application; this updates the application in the Docker Swarm from the local Docker registry.
Test: run the tests against the application that is now running. (A rough sketch of these pipeline steps follows below.)
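A rough sketch of the shell steps such a pipeline might run (the registry address, image tag, service name, and test URL are placeholders):

# Build the Rails app image
docker build -t registry.local:5000/railsapp:${BUILD_NUMBER} .
# Push it to the local Docker registry
docker push registry.local:5000/railsapp:${BUILD_NUMBER}
# Deploy: roll the Swarm service over to the new image
docker service update --image registry.local:5000/railsapp:${BUILD_NUMBER} railsapp_web
# Test against the application that is now running
curl -f http://swarm-manager:8080/health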
I've left out the Jenkins master/slave config because if you're only running on one machine you can increase the number of executors. E.g. the master can execute more jobs at the expense of speed.

How Docker can be used for multi-layered application?

I am trying to understand how Docker can be used to dockerize a multi-layered application.
My Tomcat application needs MongoDB, MySQL, Redis, Solr and RabbitMQ. I have been playing with Docker for a couple of weeks now. I am able to install and use Mongo/MySQL containers, but I am not getting how I can completely ship the application using Docker. I have a few questions.
How should the images be structured? Should I have one image that has all the components installed, or separate images (one for Tomcat, one for Mongo, one for MySQL, etc.) and start those containers using a bash script outside of Docker?
What is the Docker way of maintaining multiple containers at once? Say I have multiple containers (Mongo, MySQL, Tomcat, etc.) that need to work together to run my application. Is there any built-in way of dealing with this so that one command/script does it?
Suppose I dockerize my application: how can I manage the various routine tasks that need to be performed, like incremental code deployment, database patches, etc.? Currently we are using Vagrant, and we also use Fabric along with Vagrant for various tasks. For example, after vagrant up we use fab tasks for all kinds of routine things like code deployment, DB refresh, adding volumes, and starting/stopping services. What would be Docker's way of doing this?
With Vagrant, if a VM crashes due to high CPU etc., the host system is not affected. But I see Docker eating up a lot of host resources. Can we put limits on that, say not more than one CPU core for a given container?
Because we use Vagrant, most of the questions above are in that context. When I started with Docker, I thought of Docker as a kind of virtualization technology that could be a replacement for our huge Vagrant-based infra. Please correct me if I am wrong.
I advise you to look at docker-compose:
you'll be able to define the architecture of your application
you can then easily build it and run it (with one command)
it's pretty much the same setup for dev and prod (see the sketch after this list)
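A minimal sketch for the stack described in the question (image tags, ports, and passwords are illustrative assumptions; the application image is a placeholder):

version: "3.8"
services:
  app:
    build: .                         # your Tomcat application image
    ports:
      - "8080:8080"
    depends_on: [mongo, mysql, redis, solr, rabbitmq]
  mongo:
    image: mongo:4.2
  mysql:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
  redis:
    image: redis:5
  solr:
    image: solr:8
  rabbitmq:
    image: rabbitmq:3-management

docker-compose up -d then brings the whole stack up with one command, and the app container reaches each service by its name (mongo, mysql, redis, solr, rabbitmq).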
For microservices, composition, etc., I won't re-post on this.
For container resource allocation:
docker run has various resource control options (using cgroups); see my gist here:
https://gist.github.com/afolarin/15d12a476e40c173bf5f
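For example (the image name is a placeholder; exact flags depend on your Docker version):

# Cap a container at roughly one CPU core and 512 MB of RAM
docker run --cpus="1.0" --memory="512m" my-tomcat-app
# Older Docker versions use --cpuset-cpus, --cpu-shares and -m for similar limits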
