How do I add Analytics to my existing WSO2 IS (WSO2 Identity Server) Docker deployment?

I have deployed WSO2 IS on my k8s cluster using the Dockerfile at https://github.com/wso2/docker-is/blob/5.7.x/dockerfiles/ubuntu/is-analytics/base/Dockerfile, and it's working fine.
Now the requirement has changed to track login stats (successful/unsuccessful/failed attempts etc.), and I've discovered that the Analytics support is the way to do it. But I am not quite sure how to add this module to my Dockerfile.
Can someone list the steps to install WSO2 IS with Analytics?
I have downloaded the wso2is-analytics-5.7.0 zip, but I am not sure what else in the Dockerfile (from the link above) needs to change other than:
"ARG WSO2_SERVER=wso2is-analytics"
Edit: going once more through the WSO2 IS docs at https://docs.wso2.com/display/IS570/Prerequisites+to+Publish+Statistics
Step 03: Configure Event Publishers - are these steps all optional if WSO2 IS is already deployed?
I ask because the page says: "In a fresh WSO2 IS pack, you can view all the event publishers related to WSO2 IS Analytics in the <IS_HOME>/repository/deployment/server/eventpublishers directory."
Expected result:
A working WSO2 IS with an Analytics dashboard to track login success/failure attempts.
Thanks for your support, I appreciate it!
Maurya (novice on WSO2 IS)

The latest IS Analytics (5.7.0) ships several profiles, and here you need the following two:
Worker - consumes events from IS and processes them. (The events come from the event publishers on the IS side. In the documentation, Steps 3 and 4 are optional; they are there for further understanding and are not needed for an initial deployment.)
[1] https://hub.docker.com/r/wso2/wso2is-analytics-worker
Dashboard - used to view the statistics.
[2] https://hub.docker.com/r/wso2/wso2is-analytics-dashboard
[3] https://docs.wso2.com/display/IS570/Accessing+the+Analytics+Dashboard
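As a minimal sketch of running the two profiles next to IS with docker-compose (the image names come from the Docker Hub pages above, but the 5.7.0 tags and the port mappings are assumptions based on the default WSO2 Stream Processor ports, so verify them against the docs):

version: "3"
services:
  analytics-worker:
    image: wso2/wso2is-analytics-worker:5.7.0
    ports:
      - "7612:7612" # Thrift receiver port the IS event publishers send to (assumed default)
  analytics-dashboard:
    image: wso2/wso2is-analytics-dashboard:5.7.0
    ports:
      - "9643:9643" # HTTPS port of the analytics dashboard (assumed default)

IS itself then just needs its event publishers pointed at the worker's host and port.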


Google Sheets Add-on error: authorisation is required to perform that action

I have an add-on that worked for years inside my domain/company until Google decided to change stuff.
I republished it, and now I can run it but nobody else in the company can.
The error they receive is:
"Authorisation is required to perform that action".
I cannot pinpoint exactly where the error occurs, because the GCP log only tells me the function, not the line, but most of the time the error seems to appear when showing a sidebar.
I do not use any kind of API, just GAS, but "just in case" I added these scopes on the OAuth consent screen: .../auth/script.container.ui and .../auth/spreadsheets.
In the Google Workspace Marketplace SDK OAuth Scopes I've just left the defaults.
I also tried adding this in appsscript.json (at top level):
"oauthScopes": [
"https://www.googleapis.com/auth/script.container.ui",
"https://www.googleapis.com/auth/script.external_request",
"https://www.googleapis.com/auth/script.scriptapp",
"https://www.googleapis.com/auth/spreadsheets",
"https://www.googleapis.com/auth/userinfo.email"
]
What else can I try?
Update: as requested in the comments, here's the offending code:
// client side
google.script.run
  .withSuccessHandler()
  .withFailureHandler(failureHandler) // failureHandler gets called
  .aServerFunc();
// server side
function aServerFunc() {
  Logger.log('REACHED'); // 'REACHED' never appears in the Cloud logs!
  var docProp = PropertiesService.getDocumentProperties();
  return docProp.getProperty(key);
}
So I guess the problem is that nobody but me can run google.script.run in an add-on!
Update 2:
I've removed the PropertiesService calls so it's just a blank function on the server. So it's clear that nobody but me can run google.script.run.
Update 3:
As requested in the comments, here are the steps I followed to publish the add-on:
I created a Google Cloud project, then configured the OAuth consent screen (with the same scopes as appsscript.json - see the list above), then in the Google Workspace Marketplace SDK I set the script ID and deployment number and the same scopes, and published.
It turns out the add-on was just fine!
It's just this 4-year-old bug that Google refuses to fix:
If the user is logged in with multiple accounts, the default one will be used.
If the default account is non-domain and the add-on is restricted to a domain, the add-on will fail to authorise.
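As a small client-side mitigation (a sketch only; failureHandler is the same name used in the code above, and the message wording is just illustrative), the failure handler can at least point affected users at this multi-account problem:

function failureHandler(err) {
  // Authorisation failures here are often the multi-login bug described above:
  // the user's *default* Google account is outside the restricted domain.
  alert('The add-on could not authorise. If you are signed in to several ' +
    'Google accounts, please retry in a browser profile or incognito window ' +
    'where your company account is the only one.\n\n' + (err && err.message));
}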

What is the cause of the Security Constraints Not Satisfied error when using sam deploy --guided?

I am attempting to follow the Hello World example for deploying an AWS Serverless Application, but I get a "Security Constraints Not Satisfied" error when using sam deploy --guided. I'm pressing Enter at each prompt to accept the defaults, per the tutorial.
The curious bit, to me, is that if I use the AWS Toolkit extension for VS Code to deploy the app, it works fine, so I don't think it has anything to do with my IAM permission config; but I'm new to this, so I'm not ruling it out.
Recently the guided deploy was updated to include a prompt confirming if you were ok with not having any authorization defined. At the same time, a check was added that would fail the guided deploy if you answer 'No'. (See the relevant part of the commit here.)
This means that, as of this commit, you can't go through the AWS Hello World tutorial by responding with Enter to accept the default options.
To successfully deploy from the command line you'll need to confirm 'Yes' that you're ok with not having any authorization defined, and then it should work as expected.
During your sam deploy --guided, you need to answer 'y' at the question asking whether you are OK with the function having no authorization defined.
OR
You can run sam deploy (without --guided), which skips that prompt.
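For reference, the relevant part of the guided flow looks roughly like this (the exact prompt wording differs between SAM CLI versions, so treat it as a sketch):

$ sam deploy --guided
Configuring SAM deploy
======================
    Stack Name [sam-app]:
    AWS Region [us-east-1]:
    ...
    HelloWorldFunction may not have authorization defined, Is this okay? [y/N]: y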

How to authenticate to Cloud Storage from a Docker app on Cloud Run

I have a Node.js app in a Docker container that I'm trying to deploy to Google Cloud Run.
I want my app to be able to read/write files in my GCS buckets that live under the same project, and I haven't been able to find much information about how to do that.
This is what I've tried so far:
1. Hoping it works out of the box
A.k.a. initializing without credentials, like in App Engine.
const { Storage } = require('@google-cloud/storage');
// ...later, in an async function
const storage = new Storage();
// This line throws the exception below
const [file] = await storage.bucket('mybucket')
  .file('myfile.txt')
  .download();
The last line throws this exception:
Error: Could not refresh access token: Unsuccessful response status code. Request failed with status code 500
    at Gaxios._request (/server/node_modules/gaxios/build/src/gaxios.js:85:23)
2. Hoping it works out of the box after granting the Storage Admin IAM role to my Cloud Run service account.
Nope. No difference from the previous attempt.
3. Copying my credentials file as a cloudbuild.yaml step:
...
- name: 'gcr.io/cloud-builders/gsutil'
  args: ['cp', 'gs://top-secret-bucket/gcloud-prod-credentials.json', '/www/gcloud-prod-credentials.json']
...
It copies the file just fine, but afterwards the file is not visible from my app. I'm still not sure where exactly it was copied to, but listing the /www directory from my app shows no trace of it.
4. Copying my credentials file in a Docker step
Wait, but for that I need to authenticate gsutil, and for that I need the credentials.
So...
What options do I have without uploading my credentials file to version control?
This is how I managed to make it work:
The code for initializing the client library was correct. No changes here from the original question. You don't need to load any credentials if the GCS bucket belongs to the same project as your Cloud Run service.
I learned that the service account [myprojectid]-compute@developer.gserviceaccount.com (a.k.a. the "Compute Engine default service account") is the one used by default for running the Cloud Run service, unless you specify a different one.
I went to the Service Accounts page and made sure that the mentioned service account was enabled (mine wasn't, this was what I was missing).
Then I went here, edited the permissions for the mentioned service account and added the Storage Object Admin role.
More information in this article: https://cloud.google.com/run/docs/securing/service-identity
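As a sketch, the same role grant can also be done from the CLI (the project ID and the service account's project number are placeholders here):

gcloud projects add-iam-policy-binding my-project \
  --member serviceAccount:123456789-compute@developer.gserviceaccount.com \
  --role roles/storage.objectAdmin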
I believe the correct way is to change to a custom service account that has the desired permissions. You can do this under the 'Security' tab when deploying a new revision.
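For example (the service, image, and account names are placeholders), a new revision can be deployed under a custom service account from the CLI as well:

gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --service-account my-sa@my-project.iam.gserviceaccount.com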

Cannot deploy stream through spring-cloud-dataflow-server in sap-cloud-foundry

I deployed spring-cloud-dataflow-server-cloudfoundry to SAP Cloud Foundry with the environment variables below:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_URL: https://api.cf.sap.hana.ondemand.com
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_ORG: {org}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SPACE: {space}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_DOMAIN: {domain}
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_USERNAME: username
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_PASSWORD: password
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_SKIP_SSL_VALIDATION: false
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_STREAM_SERVICES: mq
I then imported the stream starter apps using bulk import, and created a stream using "time-source-rabbit-1.3.0.RELEASE.jar" and "log-sink-rabbit-1.3.0.RELEASE.jar".
But I cannot deploy the stream.
The status finally ends up as "partial", and the apps' runtime status is failed.
My questions are:
1. Can spring-cloud-dataflow-server-cloudfoundry be used in SAP Cloud Foundry the way I'm using it?
2. When deploying a stream in Cloud Foundry through the spring-cloud-dataflow-server-cloudfoundry dashboard, are there any other properties I need to set?
Thanks in advance.
Looking at the manifest.yml, it appears that org, space, and domain weren't replaced with SAP-CF-specific values. Pay attention to the following note in the reference guide:
"Now we can configure the app. The following configuration is for Pivotal Web Services. You need to fill in {org}, {space}, {email} and {password} before running these commands."
If you have replaced them with your environment-specific values, the next step is to check the SCDF server's logs; if the deployment failed, they will contain the particular details as to why.
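For example, to pull the server's recent logs from the CF CLI (the app name dataflow-server is a placeholder; use whatever name you pushed the server under):

cf logs dataflow-server --recent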
Now to answer your questions.
For #1, it is hard to say without logs or environment details. We don't actively test against the SAP distribution of Cloud Foundry, but as long as the distribution is compatible with Diego 1.7.1 or newer, it should work. We also publish the CF-compatible versions on the project site; perhaps that could be useful for comparing against the SAP CF environment and its foundation versions.
For #2, no, you don't need any other properties.

Bluemix - IBM Containers problems for US South?

Anybody having problems with IBM Containers on US South in Bluemix?
Containers report "Data currently unavailable" on the dashboard, and if I try to list or start a container I get this error:
Catalog Error
BXNUI0513E: The attempt to retrieve containers failed because a problem occurred contacting IBM Containers. Try again later. If the problem continues, go to Support. For other help options, see the Bluemix Docs.
If I switch to the UK site, I can create and use containers.
I just recently tried out a Docker container with an sshd and it was running fine for 5-6 hours. However, it then seems that part of the Container service in Bluemix broke, and I've not been able to access it for the past 24 hours.
Regards,
Mikael
For trial accounts you can create containers in only one space, and this error sometimes occurs when the user tries to create a container in another region. Unfortunately, since you're using 'pay as you go', in this case you have to open a support request, using one of the following methods, in order to engage the IBM Containers team to investigate your issue:
Use the Support Widget. It is available from the user avatar in the upper right corner of the main Bluemix UI. After opening the support widget panel, select Get Help > Get In Touch, select the type of assistance you need, and then fill out the support form.
Use the Support Site 'Get Help' form. This form is available on a separate site that is made available for ticket submission when you cannot log in to Bluemix and access the Support Widget. Go to http://ibm.biz/bluemixsupport and fill in the support request form.
EDIT: I saw that you opened a Support ticket and the issue was fixed. It was an issue related to your specific organization.
Just a small note. Hopefully Containers in Dallas are now working well again. In addition, I wanted to note that we strongly discourage the use of sshd in containers for security reasons. The good news is that shell access is at your fingertips via the cf ic exec <container id> /bin/bash command (your container may need just bash or /bin/sh, YMMV).
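For example (the container ID is a placeholder; the -i and -t flags mirror docker exec, and exact options may vary by plugin version):

cf ic exec -it 1a2b3c4d /bin/bash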
