I would like to know if there is a way to deploy Cloud Run to multiple regions at once. Currently there is only one option, specifying a single region like this:
gcloud run deploy --image gcr.io/shaale-one-development/testservice testservice --platform managed --region=us-central1
Is there a possibility to deploy to multiple regions, say:
gcloud run deploy --image gcr.io/shaale-one-development/testservice testservice --platform managed --region=us-central1,asia-south1
Currently I am not specifying the region in the command and am choosing the regions later. Since we have predefined regions, deploying to all of them at once would save time.
No.
You can simply repeat the command for each region:
REGIONS=(
"us-central1"
"asia-south1"
)
for REGION in "${REGIONS[@]}"
do
gcloud run deploy ... --region=${REGION}
done
You can run gcloud run deploy asynchronously using --async to parallelize the loop, but this complicates checking for success as you'll need to iterate over the returned operations.
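For what it's worth, here is a minimal sketch of parallelizing without --async, by running each synchronous deploy as a background shell job; the service name and flags are placeholders copied from the question and may need adjusting:
pids=()
for REGION in "${REGIONS[@]}"
do
  # each deploy runs in the background; $! is the PID of the job just started
  gcloud run deploy testservice --image gcr.io/shaale-one-development/testservice --platform managed --region=${REGION} &
  pids+=($!)
done
# waiting on each PID surfaces the exit status of the corresponding deploy
for pid in "${pids[@]}"
do
  wait "$pid" || echo "deploy with PID $pid failed"
done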
If you are a Terraform user, you can use this module to deploy to multiple regions. See official documentation
Related
I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check if there is one, and how to correlate its digest to the image we've deployed, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it both automatically provisioned in my own cloud account and also running the manual steps and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company as it is already live.
I suspect it is quite a bit of an old version (unless it auto-updates?), seeing as the GTM server-side docker repository has had frequent updates.
Being new to the whole business of container images with Docker, I figured I could use Cloud Shell to check it that way, but it seems that when setting up the specific App Engine instance with the shell script provided (located here), it doesn't really "load" a Docker image as if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our Appengine instance?
To check which Docker images your App Engine Flex instance uses, SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or you can connect by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully connected to your instance, run the docker images command to list your Docker images.
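As a rough sketch, once connected you could compare the local image against the published one by its digest; the image name below assumes the container was deployed from gcr.io/cloud-tagging-10302018/gtm-cloud-image, so substitute whatever docker images actually lists:
# list the images present on the instance
sudo docker images
# print the repo digest(s) of the GTM image for comparison with the registry
sudo docker inspect --format '{{.RepoDigests}}' gcr.io/cloud-tagging-10302018/gtm-cloud-image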
I know that it is possible to execute multiple commands in Kubernetes; I've seen Multiple commands in Kubernetes. But what I want to know is how to execute multiple commands simultaneously:
command: ["/bin/sh","-c"]
args: ["command one; command two"]
Here I need both command one and command two to execute in parallel, as command one starts a server instance and command two starts another server.
In my Docker environment I specify one command and then exec into the container and start the other. But in a Kubernetes deployment that won't be possible. What should I do in this situation?
I will be using a Helm chart, so if there is any trick related to Helm charts, I can use that as well.
Fully agree with @David Maze and @masseyb.
Writing this answer as a community wiki just to index it for future researchers.
You are not able to execute multiple commands simultaneously. You should create a few similar but separate deployments and use different commands in each.
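As a hedged sketch of what that could look like, here are two stripped-down Deployments that differ only in the command they run; the names, image, and commands are placeholders:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-one                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: server-one}
  template:
    metadata:
      labels: {app: server-one}
    spec:
      containers:
      - name: app
        image: myrepo/myimage:latest  # placeholder image
        command: ["/bin/sh", "-c"]
        args: ["command one"]         # first server process
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-two                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels: {app: server-two}
  template:
    metadata:
      labels: {app: server-two}
    spec:
      containers:
      - name: app
        image: myrepo/myimage:latest  # same placeholder image
        command: ["/bin/sh", "-c"]
        args: ["command two"]         # second server process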
Running gcloud beta run deploy --image gcr.io/mynippets-dev/web:latest when the gcloud project is set to 'mysnippets-dev' returns the following:
ERROR: (gcloud.beta.run.deploy) Google Cloud Run Service Agent must have permission to read the image, gcr.io/mynippets-dev/web:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that [mynippets-dev/web] is not in project [mysnippets-dev]. Permission must be granted to the Google Cloud Run Service Agent from this project
It should be noted that both the GCR image and the Cloud Run account live in project 'mysnippets-dev'. But for some reason, it thinks this is a cross-project deployment, perhaps treating 'mynippets-dev/web' (with the /web, i.e. the GCR repository) as the project.
I can also repro the same issue in the Cloud Run UI.
Deployment should succeed.
This looks like it is most likely a typo: mynippets-dev vs. mysnippets-dev (missing an 's').
Cloud Run interprets this as a cross-project deployment, which is allowed, but requires sufficient permissions.
If this isn't intended to be a cross-project deployment, it should succeed with this command:
gcloud beta run deploy --image gcr.io/mysnippets-dev/web:latest
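If you want to double-check before redeploying, a quick sanity check along these lines might help (assuming the image really does live in mysnippets-dev):
# confirm which project gcloud is currently pointed at
gcloud config get-value project
# confirm the image actually exists in that project's registry
gcloud container images list-tags gcr.io/mysnippets-dev/web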
I have a Node.JS based application consisting of three services. One is a web application, and two are internal APIs. The web application needs to talk to the APIs to do its work, but I do not want to hard-code the IP address and ports of the other services into the codebase.
In my local environment I am using the nifty envify Node.JS module to fix this. Basically, I can pretend that I have access to environment variables while I'm writing the code, and then use the envify CLI tool to convert those variables to hard-coded strings in the final browserified file.
I would like to containerize this solution and deploy it to Kubernetes. This is where I run into issues...
I've defined a couple of ARG variables in my Docker image template. These get turned into environment variables via RUN export FOO=${FOO}, and after running npm run-script build I have the container I need. OK, so I can run:
docker build . -t residentmario/my_foo_app:latest --build-arg FOO=localhost:9000 --build-arg BAR=localhost:3000
And then push that up to the registry with docker push.
My qualm with this approach is that I've only succeeded in punting the hard-coded variables into the container image. What I really want is to define the paths at pod initialization time. Is this possible?
Edit: Here are two solutions.
PostStart
Kubernetes comes with a lifecycle hook called PostStart. This is described briefly in "Container Lifecycle Hooks".
This hook fires as soon as the container reaches the ContainerCreated status, i.e. the container is done being pulled and is fully initialized. You can then use the hook to jump into the container and run arbitrary commands.
In our case, I can create a PostStart event that, when triggered, rebuilds the application with the correct paths.
Unless you created a Docker image that doesn't actually run anything (which seems wrong to me, but let me know if this is considered an OK practice), this does require some duplicate work: stopping the application, rerunning the build process, and starting the application up again.
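For reference, a minimal sketch of what that hook might look like in the pod spec; the rebuild script path and env var value are hypothetical stand-ins for whatever regenerates the bundle with the right paths:
containers:
- name: my-foo-app                       # placeholder container name
  image: residentmario/my_foo_app:latest
  env:
  - name: FOO
    value: "foo-service:9000"            # placeholder service address
  lifecycle:
    postStart:
      exec:
        command: ["/bin/sh", "-c", "./scripts/rebuild.sh"]  # hypothetical rebuild script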
Command
Per the comment below, this event doesn't necessarily fire at the right time. Here's another way to do it that's guaranteed to work (and hence, superior).
A useful Docker container ends with some variant on a CMD serving the application. You can overwrite this run command in Kubernetes, as explained in the "Define a Command and Arguments for a Container" section of the documentation.
So I added a command to the pod definition that ran a shell script that (1) rebuilt the application using the correct paths, provided as an environment variable to the pod and (2) started serving the application:
command: ["/bin/sh"]
args: ["./scripts/build.sh"]
Worked like a charm.
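Fleshed out a little, the relevant part of the pod spec might look like this; the env var value and script path are placeholders mirroring the setup above:
containers:
- name: my-foo-app
  image: residentmario/my_foo_app:latest
  env:
  - name: FOO
    value: "foo-service:9000"   # injected at pod creation instead of baked into the image
  command: ["/bin/sh"]
  args: ["./scripts/build.sh"]  # rebuilds with $FOO, then starts serving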
I am trying to dockerize my production rails application.
Currently the app is configured using Ansible and deployed using Capistrano.
I researched different Docker deployment strategies and am thinking of getting rid of Capistrano and using Docker with docker-compose instead.
I am writing a Dockerfile to configure and deploy the app, but it is somewhat complex to deploy the app the way Capistrano does, as deploy.rb uses a few rake tasks for pre-deployment steps like creating directories, setting the app name, and fetching a few variables.
How can I duplicate the cap tasks in the Dockerfile, or is there a way to use the current cap rake tasks in the Dockerfile or in a running Docker container?
Now is a good time to step back and consider if the benefits of Docker outweigh the added complexity, for your situation. Assuming it is, here are a few suggestions on how to make these components work together better.
While Ansible is a configuration management system, it's also designed for orchestration (that is, running commands across a series of remote machines). This has some cross-over with Capistrano, and as such, you may find it useful to port your Capistrano tasks to Ansible and eliminate a tool (and thus complexity) from your stack. This would likely come about from creating a deploy.yaml playbook that you run to deploy your application.
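As a rough, assumption-heavy sketch of such a playbook, where the host group, directories, and docker-compose invocation are placeholders for whatever your cap tasks currently do:
# deploy.yaml - hypothetical replacement for the Capistrano deploy flow
- hosts: app_servers                 # placeholder inventory group
  become: true
  vars:
    release_dir: /srv/myapp          # placeholder deploy directory
  tasks:
    - name: Create deployment directories (was a cap pre-deploy task)
      ansible.builtin.file:
        path: "{{ release_dir }}"
        state: directory
        mode: "0755"

    - name: Pull updated images
      ansible.builtin.command: docker-compose pull
      args:
        chdir: "{{ release_dir }}"

    - name: Start or restart the stack
      ansible.builtin.command: docker-compose up -d
      args:
        chdir: "{{ release_dir }}"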
Docker also overlaps responsibilities with Ansible, but in a different area, configuration. Any part of your system configuration that's necessary for the app can be configured inside the container using the Dockerfile, rather than on a system-wide level using Ansible.
If you have rake tasks that set up the application environment, you can put them in a RUN command in the Dockerfile. Keep in mind, however, that these will only be executed when you build the image, not when you run it.
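For instance, a hypothetical fragment of such a Dockerfile; the base image, rake task names, and server command are placeholders for whatever deploy.rb and your app actually use:
# placeholder base image
FROM ruby:3.2
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . .
# run the former pre-deployment rake tasks at image build time, not at container start
# (the task names below are hypothetical stand-ins for what deploy.rb invokes)
RUN bundle exec rake app:create_directories app:set_app_name
# placeholder server command
CMD ["bundle", "exec", "puma", "-C", "config/puma.rb"]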
Generally speaking, I view it this way: Docker sets up a container that has everything required to run one piece of your app (including a specific checkout of your code). Ansible configures the environment in which you run the containers and manages all the work to update them and put them in the right places.