I am using the Geolocation module on my local system, and it is not able to ping the client location; access is blocked. It gives me this error: "No location data found. Reason: PERMISSION_DENIED."
When I use the Geolocation module on SimplyTestMe, it works fine.
Kindly suggest a fix.
I have a Node.js app in a Docker container that I'm trying to deploy to Google Cloud Run.
I want my app to be able to read/write files from my GCS buckets that live under the same project, and I haven't been able to find much information around it.
This is what I've tried so far:
1. Hoping it works out of the box
A.k.a. initializing without credentials, like in App Engine.
const { Storage } = require('@google-cloud/storage');
// ...later in an async function
const storage = new Storage();
// This line throws the exception below
const [file] = await storage.bucket('mybucket')
.file('myfile.txt')
.download()
The last line throws this exception:
{ Error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500
at Gaxios._request (/server/node_modules/gaxios/build/src/gaxios.js:85:23)
2. Hoping it works out of the box after granting the Storage Admin IAM role to my Cloud Run service account.
Nope. No difference from the previous attempt.
3. Copying my credentials file as a cloudbuild.yaml step:
...
- name: 'gcr.io/cloud-builders/gsutil'
args: ['cp', 'gs://top-secret-bucket/gcloud-prod-credentials.json', '/www/gcloud-prod-credentials.json']
...
It copies the file just fine, but then the file is not visible from my app. I'm still not sure where exactly it was copied to, but listing the /www directory from my app shows no trace of it.
4. Copying my credentials file as a Docker build step
Wait, but for that I need to authenticate gsutil, and for that I need the credentials.
So...
What options do I have without uploading my credentials file to version control?
This is how I managed to make it work:
The code for initializing the client library was correct. No changes here from the original question. You don't need to load any credentials if the GCS bucket belongs to the same project as your Cloud Run service.
I learned that the service account [myprojectid]-compute@developer.gserviceaccount.com (a.k.a. the "Compute Engine default service account") is the one used by default for running the Cloud Run service, unless you specify a different one.
I went to the Service Accounts page and made sure that the mentioned service account was enabled (mine wasn't, this was what I was missing).
Then I went to the IAM page, edited the permissions for the mentioned service account, and added the Storage Object Admin role.
More information in this article: https://cloud.google.com/run/docs/securing/service-identity
I believe the correct way is to change to a custom service account that has the desired permissions. You can do this under the 'Security' tab when deploying a new revision.
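If you go the custom-service-account route, the two steps (granting the role, then deploying with that identity) can be sketched roughly with gcloud; every name below (my-project, my-run-sa, my-service, my-image) is a placeholder, not taken from the question:

```
# Grant the account access to objects in the project's buckets
gcloud projects add-iam-policy-binding my-project \
  --member=serviceAccount:my-run-sa@my-project.iam.gserviceaccount.com \
  --role=roles/storage.objectAdmin

# Deploy the revision with that identity
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --service-account my-run-sa@my-project.iam.gserviceaccount.com
```

With this in place, the client library's Application Default Credentials pick up the service identity automatically, so the code still needs no credentials file.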
We have search implemented using Lucene.Net, with the indexes stored in an Azure Storage folder. A few days ago we moved our web application from an Azure Cloud Service to an Azure App Service.
If we run it locally it works as expected, and it also works in the Cloud Service, but when we published our web application to the Azure App Service
we got the exception below:
System.UnauthorizedAccessException: Access to the path 'D:\AzureDirectory' is denied.
I tried to update the AzureDirectory and Azure Storage packages, but it's not working.
Any ideas?
Thanks,
The solution was to change Lucene.Net.Store.Azure.AzureDirectory's CacheDirectory path to D:/Home/AzureDirectory:
new AzureDirectory(cloudStorageAccount, containerName, FSDirectory.Open(new DirectoryInfo("D:/Home/AzureDirectory")))
As you mentioned I had no d:\ access
tried to update AzureDirectory
As David Makogon mentioned, in an Azure Web App we have no access to create or access the D:\AzureDirectory folder. You can get more info from the Azure Web App sandbox documentation. The following is a snippet from that document:
File System Restrictions/Considerations
Applications are highly restricted in terms of their access of the file system.
Home directory access (d:\home)
Every Azure Web App has a home directory stored/backed by Azure Storage. This network share is where applications store their content. This directory is available for the sandbox with read/write access.
According to the exception you mentioned, it seems that some code wants to access the folder D:\AzureDirectory, but that folder does not exist in the Azure Web App. You could also remotely debug your web app in Azure to find the related code; for more details, please refer to Remote debugging web apps.
You don't have d:\ access. In Web Apps, your app lives under d:\home (more accurately d:\home\site).
Also - fyi this isn't "Azure Storage" - that term refers to blob storage.
I generated the SDK for my application in Kaa, and the application worked correctly. After that I changed the Bootstrap server host address, and as I understand it, I need to regenerate the SDK in order to use the new Bootstrap server address. This works, but is there a way to change the Bootstrap server address in an already generated SDK?
Currently, the Control service embeds a list of available Bootstrap services into the SDK (using a properties file for Java implementation, a header file for C++, etc.) during the SDK generation, and the SDK doesn't provide an API to override that list, so you can't change it.
Currently, if you need to change the Bootstrap server host - you need to regenerate the SDK.
For production, we recommend that you use DNS names that map to the IP addresses of the concrete nodes running the Bootstrap services; this allows you to manage the Bootstrap servers' IP addresses and helps avoid SDK regeneration.
If you are running on Linux, change the following line in the /etc/kaa-node/conf/kaa-node.properties file to your host:
transport_public_interface=YOUR_HOST
You should restart the kaa-node service and regenerate the client SDK after you change the properties file.
I have a service (Web Service Proxy) running on DataPower, and I am able to test the service from SoapUI.
The client application/service is trying to pull the WSDL from a URL like http://host:port/uri?WSDL
It is mandatory for them to pull the WSDL from that URL to develop their code.
I have uploaded the WSDL and shared
http://host:port/system/dpViewer/ServiceName.wsdl?filename=local:/Path/ServiceName.wsdl
but they were still not able to access the URL from their system.
We checked connectivity between the two systems; everything is working fine.
Any help?
You can't access it using
http://host:port/system/dpViewer/ServiceName.wsdl?filename=local:/Path/ServiceName.wsdl
because that URL is internal, for your own reference, and it opens the file on the management/admin IP. (In most deployments, a different IP is used for transactions.)
http://host:port/uri?WSDL is possible in DataPower.
Please follow the steps below in your Web Service Proxy (WSP):
Edit the front-side handler (HTTP source handler).
Enable the 'GET method'.
Apply the changes and save the config.
By default, the 'GET method' is not enabled in a WS-Proxy. Because it is disabled, all WSDL GET requests are rejected by DataPower.
After this, you should be able to access the WSDL using the URL.
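Once the GET method is enabled, a quick check from the client side is a plain HTTP GET against the service address (host, port, and uri are placeholders for your own front-side handler values):

```
curl "http://host:port/uri?WSDL"
```

If the handler still rejects GETs, this returns an error instead of the WSDL document.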
I currently have a website up and running that uses the Twitteroauth classes (Abraham's) and the Twitter API. It is all working as it should; however, I also want to run the project on my localhost for debugging/coding purposes, so that I am not in danger of messing up my live version.
I am wondering if it is possible to run Twitteroauth on localhost. I know there are ways to manipulate the callback URL of the application on Twitter's site, but I do not want to do that, as my live version needs the callback URL.
I hope this makes sense and I hope there is a solution out there.
Thank you.
It is possible to run Twitteroauth on localhost. Make a copy of your project files from your website and copy them to your localhost.
You should see a file "config.php" where you define the callback URL:
define('OAUTH_CALLBACK', 'http://localhost/path_to_callback.php');
Twitter will redirect you to your callback.php file, and this works on localhost.
To run Abraham Williams' twitteroauth locally, you can create another test application on Twitter so that it doesn't disturb the live version. Then, instead of using localhost/ in your address bar, use the IP address of your localhost; making this mistake cost me a lot of time.
Example:
Use 127.0.0.1/twittertest/index.php, not localhost/twittertest/index.php, in your address bar.
You'll need to set the same OAuth callback in your Twitter application.