Deploying Cloud Run via YAML gives "Revision named 'yourservicename-00001-soj' with different configuration already exists"

When using the following command to deploy a new Cloud Run revision,
gcloud run services replace service.yaml
the deployment fails with this error:
ERROR: (gcloud.run.services.replace) ALREADY_EXISTS: Revision named 'yourservicename-00001-soj' with different configuration already exists.
This occurs when you have followed Google's documentation, which instructs you to pull down the current service's YAML description into a file, make edits, and then redeploy it.

This is because the documentation is wrong, or Google's service has regressed since it was authored. The exported YAML pins the name of the currently deployed revision in spec.template.metadata.name, so redeploying a changed configuration under that same revision name is rejected with ALREADY_EXISTS.
Edit the YAML, remove spec.template.metadata.name, and try again.
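As an illustration, a trimmed service.yaml might look roughly like the sketch below (the service name, image, and annotation are placeholders); the line to delete is the pinned revision name under spec.template.metadata:
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: yourservicename
spec:
  template:
    metadata:
      # name: yourservicename-00001-soj   <- remove this pinned revision name before redeploying
      annotations:
        autoscaling.knative.dev/maxScale: '5'
    spec:
      containers:
        - image: gcr.io/your-project/your-image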

Related

Cannot create docs for components in Backstage - Docker error

I am trying to display docs stored in a repository created by a Backstage component on the Backstage /docs page UI, but when I try to access the docs I get the following error:
Building a newer version of this documentation failed. Error: "Failed to generate docs from C:\\Users\\Admin\\AppData\\Local\\Temp\\backstage-enprxk into C:\\Users\\Admin\\AppData\\Local\\Temp\\techdocs-tmp-W6iVab; caused by Error: Docker container returned a non-zero exit code (1)"
Files in my repository: the docs folder contains only index.md, and mkdocs.yml has:
nav:
  - Home: index.md
I was getting similar issues working on a local POC of Backstage. The biggest problem was that I needed to install pip, python, mkdocs, and mkdocs-techdocs-core (i.e. pip3 install mkdocs-techdocs-core). If you have done that and then followed everything in this documentation, then it should start working. Hope that helps. I spent a couple of days trying to get past these types of errors.
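As a sketch of that local setup (assuming Python 3 and pip3 are already available on the machine running Backstage):
pip3 install mkdocs mkdocs-techdocs-core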
For me, the issue was fixed by the change below, since the Docker-based generator was not working inside my container in Kubernetes.
I changed app-config.yaml:
techdocs:
  builder: 'local' # Alternatives - 'external'
  generator:
    runIn: 'local' # changed from 'docker' to 'local' here

Deploying code on lambda failed using serverless

I was trying to deploy code to Lambda using serverless deploy and got the error below. I tried multiple solutions available online, but none worked.
Error -
Serverless: Packaging service...
Serverless Error ---------------------------------------
The specified bucket does not exist
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: darwin
Node Version: 8.12.0
Serverless Version: 1.31.0
When you deploy your Serverless application, it uses the service attribute (defined in your serverless.yaml) as a unique identifier of your application in CloudFormation.
That said, you may run into a conflict if you change the name of the bucket without removing the stack. For example:
You deploy your application with the bucket called myBucket.
CloudFormation will be created considering this info.
You change this name to myBucketPlus and try to deploy.
Serverless will try to clean up myBucketPlus from the last deploy before pushing the new one.
But wait! myBucketPlus does not exist.
Since you did not describe exactly what you did, I tried to give an example, but it could be something else.
You could also try removing the stack and deploying again, as shown below.
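For instance, with the Serverless CLI (run from the project directory; this is just a sketch of the teardown-and-redeploy flow):
sls remove
sls deploy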
The best way to resolve this issue is:
Execute the command below to see the service information, which includes the S3 bucket name, region, endpoint info, etc.; you only need the bucket name and region in this case.
sls info -v
Create the bucket in the intended region (see the example below).
Done.
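For example, assuming sls info -v reports a deployment bucket named my-service-deployment-bucket in us-east-1 (both placeholders), the bucket can be recreated with the AWS CLI:
aws s3 mb s3://my-service-deployment-bucket --region us-east-1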

How to read other config information into a dropwizard service

I am building a Dropwizard service which will connect to multiple data sources, including MySQL and Elasticsearch. All the MySQL settings can be defined in the YAML config file, which gets read in after running from the command line.
But what about other settings that I need to read in for other data sources that I will connect to myself, for example Elasticsearch? Where can I define those settings?
I thought I could add another command-line Command, which I tried, but I can only run a single command (from the command line) at a time, so I can't run both the 'server' command and my custom command 'custom', which is followed by my own config file for Elasticsearch.
How can I introduce settings, either individually or from a file, that are defined at run time (not hard-coded)?
Thanks
Anton
Check out the Dropwizard Core documentation on adding custom configuration.
You'd create an ElasticSearchFactory class similar to the MessageQueueFactory in the example, reference it in your Configuration (which is in turn referenced by your Application), and then the options you need can be added to your main YAML configuration.
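A minimal sketch of that approach, assuming the classic io.dropwizard.Configuration base class (the class name, field names, and YAML key below are illustrative, not from the original answer):
// ElasticSearchFactory.java - plain settings holder bound from the YAML config
import com.fasterxml.jackson.annotation.JsonProperty;
import javax.validation.constraints.NotEmpty;

public class ElasticSearchFactory {
    @NotEmpty
    private String host;

    private int port = 9200;

    @JsonProperty
    public String getHost() { return host; }

    @JsonProperty
    public void setHost(String host) { this.host = host; }

    @JsonProperty
    public int getPort() { return port; }

    @JsonProperty
    public void setPort(int port) { this.port = port; }
}

// MyServiceConfiguration.java - the Configuration referenced by your Application
import com.fasterxml.jackson.annotation.JsonProperty;
import io.dropwizard.Configuration;
import javax.validation.Valid;
import javax.validation.constraints.NotNull;

public class MyServiceConfiguration extends Configuration {
    @Valid
    @NotNull
    private ElasticSearchFactory elasticsearch = new ElasticSearchFactory();

    @JsonProperty("elasticsearch")
    public ElasticSearchFactory getElasticSearchFactory() { return elasticsearch; }

    @JsonProperty("elasticsearch")
    public void setElasticSearchFactory(ElasticSearchFactory factory) { this.elasticsearch = factory; }
}
The Elasticsearch settings then live in the same YAML file that is passed to the server command, for example:
elasticsearch:
  host: localhost
  port: 9200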

Google Cloud Storage: Output path does not exist or is not writeable

I am trying to follow this simple Dataflow example from the Google Cloud site.
I have successfully installed the Dataflow pipeline plugin and the gcloud SDK (as well as Python 2.7). I have also set up a project on Google Cloud and enabled billing and all the necessary APIs, as specified in the instructions above.
However, when I go to the run configurations and change the Pipeline Arguments tab to select BlockingDataflowPipelineRunner, after creating a bucket and setting my project ID, hitting run gives me:
Caused by: java.lang.IllegalArgumentException: Output path does not exist or is not writeable: gs://my-cloud-dataflow-bucket
at com.google.cloud.dataflow.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.verifyPathIsAccessible(DataflowPathValidator.java:79)
at com.google.cloud.dataflow.sdk.util.DataflowPathValidator.validateOutputFilePrefixSupported(DataflowPathValidator.java:62)
at com.google.cloud.dataflow.sdk.runners.DataflowPipelineRunner.fromOptions(DataflowPipelineRunner.java:255)
at com.google.cloud.dataflow.sdk.runners.BlockingDataflowPipelineRunner.fromOptions(BlockingDataflowPipelineRunner.java:82)
... 9 more
I have used my terminal to execute 'gcloud auth login' and I can see in the browser that I am successfully logged in.
I am really not sure what I have done wrong here. Can anyone confirm whether this is a known issue with the Dataflow pipeline and Google Cloud Storage buckets?
Thanks!
I had a similar issue with GCS bucket permissions, though I certainly had write permission and could upload files into the bucket.
What solved the problem for me was acquiring the roles/dataflow.admin role for the project I was submitting the pipeline to.
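For reference, granting that role to the account submitting the pipeline can be done with gcloud (the project ID and account below are placeholders):
gcloud projects add-iam-policy-binding my-project-id --member="user:you@example.com" --role="roles/dataflow.admin"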
When submitting pipelines to the Google Cloud Dataflow Service, the pipeline runner on your local machine uploads files, which are necessary for execution in the cloud, to a "staging location" in Google Cloud Storage.
The pipeline runner on your local machine seems to be unable to write the required files to the staging location provided (gs://my-cloud-dataflow-bucket). It could be that the location doesn't exist, or that it belongs to a different GCP project than you authenticated against, or that there are more specific permissions set on that bucket, etc.
You can start debugging the issue with the gsutil command-line tool. For example, try running gsutil ls gs://my-cloud-dataflow-bucket to list the contents of the bucket, then try an upload via the gsutil cp command. This will perhaps produce enough information to root-cause the issue you are facing.
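For example (the local file name is just a placeholder):
gsutil ls gs://my-cloud-dataflow-bucket
gsutil cp some-local-file.txt gs://my-cloud-dataflow-bucket/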
Try providing the zone parameter; it worked in my case with a similar error. And of course, export the GOOGLE_APPLICATION_CREDENTIALS environment variable before running your app.
...
-Dexec.args="--runner=DataflowRunner \
--gcpTempLocation=gs://bucket/tmp \
--zone=bucket-zone \
...
Got the same error. Fixed it by setting GOOGLE_APPLICATION_CREDENTIALS, using a key file with write permissions, in ~/.bash_profile on Mac.
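In other words, something like the following line in ~/.bash_profile (the key path is a placeholder):
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-service-account.json"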
I realised I needed to use a specific acl command via gsutil. Setting my account to have owner permissions did not do the job. Instead using:
gsutil acl set public-read-write gs://my-bucket-name-here
worked in this case. Hope this helps someone!

Can we run the sample-app that comes with the BB Push SDK without app.id?

I am trying to deploy the sample-app WAR file that comes with the Push SDK, but was not able to deploy the application successfully. Steps followed so far:
1) I was able to configure PushSDK.properties and log4j.xml, but didn't change the value of ${sampleapp.appid} in sample-app-context.xml.
2) Then made the WAR file using cmd.
3) Deployed it on the server.
During deployment, the following error appeared:
Invalid bean definition with name 'registerListeners' defined in class path resource [sample-app-context.xml]: Could not resolve placeholder 'sampleapp.appid'
I tried to register on the link but was unsuccessful.
I have just started down the path of using the Push SDK, but from what I've read I conclude that you need to get your development registration completed at least before you can run the sample code, unless you're using your own BES.

Resources