Is it expected that I can see the project "Dataflow: Readonly Artifacts (DO NOT DELETE)" in my Developers Console? Ever since I got alpha access to CDF last month, it has been visible.
I also noticed that charges appeared on that project even before we started to test and run jobs (which was literally just a few hours ago!). My understanding is that CDF is free in alpha, but that you must pay for any services used (e.g., BigQuery, GCS). However, I would expect those charges to show up in my other project, because I specify that project name/ID when actually creating the Pipeline.
Those charges are not assessed to you as a Dataflow user -- they are charged to the project owner, which is the Cloud Dataflow team. As part of whitelisting for Dataflow, you've been granted read-only access to the project to allow you to access specific artifacts during Alpha (such as the custom GCE image for Dataflow).
I am currently working on a project where we have 8 instances across different geographies. We use BMC RLM ( https://docs.bmc.com/docs/ReleaseLifecycleMgt/50 ) for deployment automation.
However, to keep track of our deployment IDs we are using Excel.
Deployments start in the Dev environment, move to the QA environment (each region has its own QA and schedule), and ultimately move to Production.
What I wanted to know is: is there any tool that you use to keep track of deployments?
We tried using SharePoint, but it is rather limited in features.
We would ideally like a workflow whereby developers submit a request with the Dev deployment ID. The workflow goes to the Release Approver for the QA environment; once he/she approves, the QA testing team gets notified.
Please let us know if anybody else has faced this kind of issue and which tools you used for it.
We have recently started a public preview of Reliza Hub ( https://relizahub.com ), which aims to solve this problem.
Some of the functionality you are asking for (particularly approvals) is not there yet, but it's coming. Tracking functionality and mapping of instances to releases are there already.
We are adding documentation as we go; so far, the Version Increment workflow is documented here, and the functionality to send release data to Reliza Hub is documented in our client's GitHub repository.
We would be happy to provide support and discuss incomplete or missing features via our new Reddit channel, r/Reliza.
I have two variations of a site based off a primary enrollment site. A demo of the primary enrollment site is currently set up and running on a remote server using Docker. I'm trying to figure out what steps are needed to move both enrollment site variants, A and B, up to the remote server for testing and review purposes.
The first variation (branch A) was built from the primary app as master, and the second (branch B) was built as a very small variation on the initial variant A (think a single file updated from branch A).
So far I understand that I'll have to set up a unique database for each of A and B so that Docker stores app data according to which enrollment site is running (e.g., enroll-db-A and enroll-db-B). Running both sites from this host will also require specifying a unique port in the Dockerfile and docker-compose file, since the plan is to keep the primary demo site available through the server's default port.
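For illustration, I'm imagining a minimal docker-compose along these lines; the directory names, images, and host ports (8081/8082) are hypothetical placeholders, and Postgres is assumed as the app database:

# docker-compose.yml -- names, ports, and images are placeholders
version: "3"
services:
  enroll-a:
    build: ./enroll-a              # variant A's build context
    ports:
      - "8081:80"                  # host port 8081 -> container port 80
    environment:
      DATABASE_HOST: enroll-db-a
    depends_on:
      - enroll-db-a
  enroll-db-a:
    image: postgres:13
    environment:
      POSTGRES_DB: enroll_a
      POSTGRES_PASSWORD: example   # placeholder; use a real secret
  enroll-b:
    build: ./enroll-b              # variant B's build context
    ports:
      - "8082:80"                  # distinct host port for variant B
    environment:
      DATABASE_HOST: enroll-db-b
    depends_on:
      - enroll-db-b
  enroll-db-b:
    image: postgres:13
    environment:
      POSTGRES_DB: enroll_b
      POSTGRES_PASSWORD: example

With something like this, the primary demo site keeps the server's default port while both variants run side by side, each against its own database.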
What I'm confused about is how to actually move the files needed for both variants up to the remote server. I obviously want to minimize the number of files transferred to the remote to handle serving all our sites, and variants A and B both largely depend on files from the primary enrollment app root. So, is it sufficient to simply move the updated and necessary config files for A and B into new directories on the remote server, with the primary enrollment site's directory one level up as the parent of each variant directory?
To paraphrase my manager: there's probably some way to make this work, though it's not worthwhile.
My concern in posting this mostly had to do with the apparent number of redundant files that would be pushed up to the remote web server after creating two simple variants of an original. For our demonstration purposes, having two highly similar repos in addition to the original base loaded onto the web server is not a significant concern.
So, for now, my need to have this question answered can be deferred indefinitely.
The documentation at https://developers.google.com/actions/deploy/release-environments states "To handle release channels in your fulfillment, you should provide different fulfillment URLs (for example, one fulfillment URL for the beta and another URL for the production version of your Action)." However, there are no instructions on how this should be accomplished.
When I created my Actions on Google project, a Firebase project was created to which I upload JavaScript that supports those actions via requests to our backend service. That Firebase project provides the URL used by my Beta release for fulfillment. I now need to create an Alpha project that points to a different Firebase project to which I will upload new versions of support for requests to different versions of our backend service. I do not see a way to accomplish this. Do I need to create an entirely new Actions on Google project that has its own URL for fulfillment or is there some better way to accomplish this task?
I tried manually creating a separate Firebase project to host the Alpha code, but that did not work. I later learned that when you create an Actions on Google project, it is intimately connected to the Firebase project created for it and cannot be pointed to another.
The problem is all in the configuration space of Actions on Google and Firebase. There is no code to show.
I would expect some approach similar to that provided by the Alexa Developer Console and the Amazon Lambda Management Console to be available. In that approach, I have Alpha, Beta, and Production versions of the Alexa Skill, and each of them points to a different version of the Lambda function, each of which has an appropriate value indicating the environment the function is executing in. This allows me to route requests to the correct backend service (alpha, beta, production).
I don't see a way to accomplish that in the Actions on Google/Firebase world.
If you are using Dialogflow, the Actions on Google release levels have corresponding environments, so you should be able to set a different fulfillment URL for each environment to point at a different project.
I first created a Google API project in the Google Developer Console and configured an OAuth 2.0 client ID as credentials in order to let my C# projects access Google Drive.
However, as I'm new to developing with Google APIs, I am not able to work out when I should create new credentials or projects.
Should I create multiple credentials (maybe one credential per project?), or is it actually fine to use the same credential for multiple projects? What's the purpose of creating more than one credential?
If you are creating different applications, then you should create a separate project in the Google Developer Console for each, with its own set of credentials.
The reason for this is to ensure that you don't run into any issues with quota. It also allows Google to track who is using their data and how much.
You should also consider that when you define your project and create credentials, you are giving it a name. When users authenticate your application, they are granting "Super app one" access to their data; if you use this client with "Super app two", they won't know who actually has access to their data.
My personal rules
Each application is its own project in the Google Developer Console.
Within that project, I create a client ID for each of the local, test, and production environments (see the sketch below).
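As a rough illustration of that second rule, here is a minimal C# sketch, assuming the Google.Apis.Drive.v3 NuGet package; the client_secret_*.json file names and token folders are hypothetical stand-ins for the per-environment client IDs:

using System.Threading;
using System.Threading.Tasks;
using Google.Apis.Auth.OAuth2;
using Google.Apis.Drive.v3;
using Google.Apis.Services;
using Google.Apis.Util.Store;

class DriveFactory
{
    // One client secret file per environment, each downloaded from its own
    // client ID in the console -- the file names here are placeholders.
    static string SecretsFile(string env) => $"client_secret_{env}.json";

    // env is "local", "test", or "production"
    public static async Task<DriveService> CreateAsync(string env)
    {
        UserCredential credential = await GoogleWebAuthorizationBroker.AuthorizeAsync(
            GoogleClientSecrets.FromFile(SecretsFile(env)).Secrets,
            new[] { DriveService.Scope.DriveReadonly },
            "user",
            CancellationToken.None,
            new FileDataStore($"tokens_{env}"));   // token cache kept per environment

        return new DriveService(new BaseClientService.Initializer
        {
            HttpClientInitializer = credential,
            ApplicationName = "Super app one"      // name sent with API requests
        });
    }
}

Because each environment loads its own client secrets and token store, switching environments never mixes credentials.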
Update from comment about project creation quota
My current project quota: "You have 37 projects remaining in your quota."
You can always request additional projects; do it early, as from what I remember it took a week or so to get them. See Project quota requests.
Update: on creating multiple credentials for the same project
Like I mentioned before, if you use the same project's credentials in different applications, you are misleading your users. The fields I have marked with arrows denote an APPLICATION; they are specific to the application requesting access and are part of the project itself. All of the clients created under that project are going to use the same consent screen. If you use it for two different applications, you are, in my opinion, misleading the users about which application they are granting access to their data.
You may also be misleading Google, as I believe the TOS requires one project per application making requests, hence the consent screen having the application name and a link to the application contacts. However, I think I need to read through the TOS doc again to make sure this is a requirement.
You are also more likely to hit quota limits. A lot of the limits are project-based, not credential-based, so if you have two applications reading from the same API with two different credentials created under the same project, you are going to hit the quota a lot faster than if you had created a separate project for each application.
Example: Google Analytics allows a maximum of 50,000 requests per project per day.
Same project:
Application one makes 20,000 requests.
Application two makes 30,000 requests.
Both application one and application two are now blocked from making requests for the rest of the day, as in total they have hit 50,000 requests.
Two separate projects:
Application one makes 30,000 requests.
Application two makes 50,000 requests.
Application two is now blocked for the rest of the day, as it made 50,000 requests. Application one continues to work until it has also hit 50,000 requests.
I'm proposing a SaaS solution to a prospective client to avoid the need for local installation and upgrades. The client uploads their input data as needed and downloads the outputs, so data backup and maintenance is not an issue, but continuity of the online software service is a concern for them.
Code escrow would appear to be overkill here and probably of little value. I was wondering: is there an option along the lines of providing a snapshot image of a cloud server that includes a working version of the app, with that image in the client's possession for use in an emergency where they can no longer access the software?
This would need to be as close to a point-and-click solution as possible - say a one-page document with a few steps that a non-web-savvy IT person can follow - for starting up the backup server image and being able to use the app. If I were to create a private AWS EBS snapshot / AMI that includes a working version of the application, and they created an AWS account for themselves, might they be able to kick that off easily enough?
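As a rough sketch (not a tested runbook), I imagine the one-page document might boil down to a few AWS CLI commands like these, where the AMI ID, account ID, key pair, and instance type are all placeholders:

# One-time, run by me: share the private AMI with the client's AWS account
aws ec2 modify-image-attribute --image-id ami-0123456789abcdef0 \
    --launch-permission "Add=[{UserId=111122223333}]"

# Run by the client: launch an instance from the shared AMI
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type t3.small --key-name client-key

# Run by the client: look up the public address to browse to
aws ec2 describe-instances \
    --query "Reservations[].Instances[].PublicDnsName"

The EC2 console's launch-instance wizard covers the same steps point-and-click, which may be friendlier for a non-web-savvy IT person.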
Update: the app is on Heroku at the moment, so hopefully it would be pretty straightforward to get it running on Amazon EC2.
Host their app at any major PaaS provider, such as Engine Yard or Heroku. Check their code into a private GitHub repository that you can make them the owner of. That way they have access to the source code and can create a new instance quickly using the repository as the source.
I don't see the need to create an entire service mirror for a Rails app, unless there are specific configuration needs that can't be contained in the project or handled through Capistrano.