Google Cloud Bigtable - slow response time from outside of a Google project - OAuth

When all the code is running inside a Google Cloud project, performance is as expected.
However, during development I connect my laptop to a test Bigtable instance in a Google project, and each query takes 2-4 seconds to run.
The response times are similar when I trigger commands using the cbt CLI.
Is there a known reason for this overhead? Perhaps it's how auth needs to be done for external connections?
On startup I see the following logs:
Opening connection for projectId ..., instanceId ..., on data host bigtable.googleapis.com, admin host bigtableadmin.googleapis.com.
Bigtable options: BigtableOptions{dataHost=bigtable.googleapis.com, adminHost=bigtableadmin.googleapis.com, ..., appProfileId=, userAgent=hbase-1.4.1, credentialType=DefaultCredentials, port=443, dataChannelCount=32, retryOptions=RetryOptions{retriesEnabled=true, allowRetriesWithoutTimestamp=false, statusToRetryOn=[DEADLINE_EXCEEDED, UNAVAILABLE, UNAUTHENTICATED, ABORTED], initialBackoffMillis=5, maxElapsedBackoffMillis=60000, backoffMultiplier=2.0, streamingBufferSize=60, readPartialRowTimeoutMillis=60000, maxScanTimeoutRetries=3}, bulkOptions=BulkOptions{asyncMutatorCount=2, useBulkApi=true, bulkMaxKeyCount=125, bulkMaxRequestSize=1048576, autoflushMs=0, maxInflightRpcs=320, maxMemory=143183052, enableBulkMutationThrottling=false, bulkMutationRpcTargetMs=100}, callOptionsConfig=CallOptionsConfig{useTimeout=false, shortRpcTimeoutMs=60000, longRpcTimeoutMs=600000}, usePlaintextNegotiation=false}.
Refreshing the OAuth token
Are there any options I can consider, other than using the Bigtable emulator? I had some trouble getting that running a while back, so I must try again.
Thanks,
Brent

As Solomon said above, please open a Google Cloud support ticket to resolve this.
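While waiting on support, one common source of per-query latency worth ruling out is creating a new client (and therefore a new OAuth token fetch and TLS channel) for every query; from a laptop outside Google's network, that setup cost is paid on every call. A minimal sketch of the reuse pattern with the Python Bigtable client (project, instance, and table IDs are placeholders, and Application Default Credentials are assumed to be configured):

    from google.cloud import bigtable

    # Create the client once at process startup and reuse it for every query;
    # each new client re-fetches an OAuth token and opens a new TLS channel.
    client = bigtable.Client(project="my-project")             # placeholder ID
    table = client.instance("my-instance").table("my-table")   # placeholder IDs

    def lookup(row_key: bytes):
        # Reusing `table` keeps the authenticated channel warm between queries.
        row = table.read_row(row_key)
        return None if row is None else row.cells

    print(lookup(b"some-row-key"))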

Related

Can Google Cloud Run be used to run a continuously-listening Python script?

I'd like to run a Python script in the cloud. It would use Tweepy Streaming to continuously listen for Tweets containing certain keywords. So it needs to run uninterrupted, 24/7.
Would Google Cloud Run be suitable for this use case?
The Quotas and Limits page mentions that requests time out after 60 minutes at most, but I don't know exactly what this means.
Thank you.
No, it would not be a good choice. Serverless infrastructure provided by products like Cloud Run and Cloud Functions is generally assumed to expand and contract server instances on demand, and server instances are never guaranteed a long uptime. If you absolutely require 24/7 uninterrupted operation of some background task not tied to an event or HTTP request, you should use a different cloud product, such as App Engine or Compute Engine.
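To make the mismatch concrete: a Tweepy stream is one blocking call that holds the process open indefinitely, rather than work done in response to an incoming request. A minimal sketch (assuming Tweepy 3.x; credentials and keywords are placeholders):

    import tweepy

    class KeywordListener(tweepy.StreamListener):
        def on_status(self, status):
            # Called by Tweepy for each tweet matching the filter.
            print(status.text)

    # Placeholder credentials from the Twitter developer console.
    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

    stream = tweepy.Stream(auth=auth, listener=KeywordListener())
    # filter() blocks forever: the process must stay up 24/7 for this to work,
    # which is exactly what serverless platforms do not guarantee.
    stream.filter(track=["keyword1", "keyword2"])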
"some background task not tied to an event or HTTP request"
Isn't what the OP wants? Merely to listen for tweets 24/7? Detecting a tweet is an event and an HTTP request. Cloud Run and Cloud Functions can be triggered 24/7 via its URL endpoints.
On the Cloud Functions page, if you scroll down, there is a section called "Integration with third-party services and APIs".
I quote this section:
"... capabilities such as sending a confirmation email after a successful Stripe payment or responding to Twilio text message events"
Listening for tweets counts too. So it seems Google Cloud Functions/Cloud Run can be used for the OP's use case.

Does Cloud Run add location-aware request headers similar to App Engine?

App Engine requests have location-aware headers (X-AppEngine-Country, X-AppEngine-Region, X-AppEngine-City) automatically added. Does Cloud Run have something similar?
This is (will be) possible with Google Cloud HTTP(S) Load Balancer via user-defined headers.
However, putting your Cloud Run service behind the load balancer is currently in alpha, so you cannot try this out today. You can wait for a while, or if you're willing to try the alpha out and give feedback, please contact me. #ahmetbtodo
AFAIK, these Google custom header values don't exist today. However, in the current headers you can find the IP of the originating requester (here in IPv6):
forwarded: for="2a01:cb14:af0:b500:ccf6:1a91:1713:b48";proto=https
x-forwarded-for: 2a01:cb14:af0:b500:ccf6:1a91:1713:b48
You can use external services to resolve the exact location from that IP.
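For instance, inside the Cloud Run service you can read the forwarded IP from the request headers and pass it to whatever geolocation service you prefer. A minimal sketch (Flask; the geolocation lookup itself is left as a placeholder):

    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/")
    def index():
        # X-Forwarded-For may hold a comma-separated chain of proxies;
        # the first entry is the original client IP.
        client_ip = request.headers.get("X-Forwarded-For", "").split(",")[0].strip()
        # Placeholder: call a geolocation API of your choice with client_ip
        # to derive country/region/city.
        return f"client ip: {client_ip}"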

Send Docker entrypoint logs to an app in real time

I'm looking for ideas on how to send the Docker logs of each run to my application in real time.
I want to build a feature similar to Netlify or Vercel, where all build logs are shown on the UI in real time, but for my Node application. Let me know if you have done this already or know how it can be achieved.
You can achieve this with Vercel and Log Drains.
Log Drains make it easy to collect logs from your deployments and forward them to archival, search, and alerting services by sending them via HTTPS, HTTP, TLS, and TCP once a new log line is created.
At the time of writing, we currently support 3 types of Log Drains:
JSON
NDJSON
Syslog
Along with Log Drains, we are introducing two new open-source integrations with logging services for you to start using them today: LogDNA and Datadog.
Install the integration: https://vercel.com/integrations?category=logging
See the announcement blog post: https://vercel.com/blog/log-drains
Note that Vercel does not allow Docker deployments, but does support Serverless Functions.
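If you are running the containers yourself (rather than on Vercel, which as noted does not allow Docker deployments), one approach is to tail a container's log stream and forward each line to your application over HTTP. A minimal sketch using the Docker SDK for Python (the container name and endpoint URL are placeholders):

    import docker
    import requests

    client = docker.from_env()
    container = client.containers.get("my-build-container")  # placeholder name

    # follow=True keeps the stream open, yielding log output as it is produced.
    for line in container.logs(stream=True, follow=True):
        # Forward each log line to the app that renders the build UI.
        requests.post(
            "https://my-app.example.com/logs",  # placeholder endpoint
            json={"line": line.decode("utf-8", errors="replace")},
            timeout=5,
        )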

Server timeout and SFTP timeout. What to do?

For the last 12 hours, the website (a WordPress site) hosted on Google Cloud Platform has had a timeout issue. After 60 seconds of trying to load the website, the following message appears: "The connection has timed out".
When trying to connect with SFTP, the same issue occurs.
What should I do to resolve this?
Since two different services stopped working at the same time, it sounds like a networking issue. There is a timeout, which means the server is not answering the requests at all.
What to do?
I would proceed with these general troubleshooting steps; if you want, you can update your question with the results of these commands to continue the troubleshooting.
First of all, I would check if you are able to ping the external/public IP of the instance.
I would check if the firewall rules allow TCP 80/443 and TCP 22 (see the connectivity sketch after this list). Note that on GCP you need to create the rule and assign the tag to the machine from its detail page if the rule does not apply to the whole network.
Are you able to ssh into the instance?
I would check if the processes are actually listening: netstat -tuplen
If you are able to log in to the machine, do you have access to the internet? Are you able to ping an external IP? If not, what about an internal IP?
I would go to the "activity" page of your Google Cloud Console to check which actions have been taken while the instance was still running.
I would also check the shell history on the Linux machine to see whether any commands were run that changed its network configuration.
Note that if you cannot SSH into the machine, you can always access it through the serial console by setting a password for your username with a startup script.
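As a quick way to run the reachability checks above from your own machine, you can probe the relevant TCP ports directly. A minimal sketch in Python (the IP address is a placeholder for the instance's external IP):

    import socket

    INSTANCE_IP = "203.0.113.10"  # placeholder: your instance's external IP

    # SSH, HTTP, and HTTPS respectively; in this scenario all three time out.
    for port in (22, 80, 443):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(5)
        try:
            s.connect((INSTANCE_IP, port))
            print(f"port {port}: open")
        except OSError as exc:
            print(f"port {port}: {exc}")
        finally:
            s.close()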
UPDATE
I had the opportunity to take a look into the project; the machine was stopped due to an issue with the billing account (it was closed) after the free trial period ended.
I would suggest going through the documentation regarding upgrading the billing account again.
If you still have some doubts or questions after performing these operations, you can file a case at this link with the billing team, and they will help you solve the issue.

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels, and set up a video call if I manually create hard-coded tokens for the app. However, if I want to use the PHP server (as described here https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios), then I always get an error and it can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address, which seems to produce a 500 Internal Server Error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this, I decided to try the Node backend instead (listed under the other server-side languages alongside PHP) and I had it running in 2 minutes! I used the exact same credentials as the ones I was using in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue, just try Node instead of PHP. Woohoo!
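Whichever backend language you pick, its job is simply to mint an access token with the right grant for the client SDK. For reference, a minimal sketch of that step with the twilio Python helper library (the SIDs, key, and identity are placeholders; older library versions named the grant IpMessagingGrant rather than ChatGrant):

    from twilio.jwt.access_token import AccessToken
    from twilio.jwt.access_token.grants import ChatGrant

    # Placeholder credentials from the Twilio console.
    ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    API_KEY = "SKxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
    API_SECRET = "your_api_secret"
    SERVICE_SID = "ISxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

    token = AccessToken(ACCOUNT_SID, API_KEY, API_SECRET, identity="alice")
    token.add_grant(ChatGrant(service_sid=SERVICE_SID))

    # The JWT string the iOS client uses to authenticate.
    print(token.to_jwt())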
