Hue Impersonation without a wildcard

I'm trying to resolve a "User: hue is not allowed to impersonate hue" error in my Bigtop Hadoop setup, but I'm trying to avoid wildcarding the hadoop.proxyuser.hue.groups setting the way all the examples show.
Is it possible to run Hue without having to open up all the groups?

Turns out I just needed to add a hue user on the NameNode.
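For reference, a minimal sketch of scoped proxyuser settings in core-site.xml; the host and group names here are placeholders for your own, and this assumes the hue user exists on the NameNode:

```xml
<!-- core-site.xml on the NameNode: allow the hue user to impersonate
     only members of a specific group, from a specific host, instead of "*" -->
<property>
  <name>hadoop.proxyuser.hue.hosts</name>
  <value>hue-server.example.com</value>
</property>
<property>
  <name>hadoop.proxyuser.hue.groups</name>
  <value>hue-users</value>
</property>
```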


How to alter byCategory projection via API?

I want to set my $byCategory system projection to last in EventStore. Is there any way to do this automatically? Either via a startup configuration or an API call?
In the web UI it is found at this URL on localhost:
http://localhost:2113/web/index.html#/projections/http://localhost:2113/projection/$by_category
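I can't say which command corresponds to the option on that edit screen, but system projections can be driven from a script at startup via EventStore's projections HTTP API rather than through the UI. A minimal sketch with Python's requests, assuming the default admin credentials and the standard projection commands (enable/disable/reset):

```python
import requests

BASE = "http://localhost:2113"
AUTH = ("admin", "changeit")  # EventStore's default admin credentials

# The projection name must be URL-encoded: "$by_category" -> "%24by_category"
name = "%24by_category"

# Standard projection commands are POSTs to /projection/{name}/command/{cmd},
# e.g. enable, disable, reset.
resp = requests.post(f"{BASE}/projection/{name}/command/enable", auth=AUTH)
resp.raise_for_status()

# The projection's current state can be inspected with a GET:
print(requests.get(f"{BASE}/projection/{name}/statistics", auth=AUTH).json())
```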

Twilio IP Messaging token issue

I'm setting up an iOS app to use the IP Messaging and video calling APIs. I'm able to connect, create channels, and set up a video call if I manually create hard-coded tokens for the app. However, if I want to use the PHP server (as described here https://www.twilio.com/docs/api/ip-messaging/guides/quickstart-ios) then I always get an error and it can't connect anymore.
I'm attaching a screenshot of what I see when I hit the http://localhost:8080 address, which seems to produce a 500 Internal Server Error on this URL: https://cds.twilio.com/v2/Streams
Thanks so much!
After much time spent on this I decided to try the Node backend instead (listed under the other server-side languages alongside PHP) and I had it running in 2 minutes! I used the exact same credentials as the ones I was using in the PHP config file, so either my PHP environment has something strange or the PHP backend needs some fixing. In any case, I'm able to move forward using the Node backend, so if you run into the same issue just try Node instead of PHP. Woohoo!
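For anyone debugging the server side: whichever backend you pick, the token endpoint just has to mint an Access Token carrying an IP Messaging grant. A rough equivalent sketched in Python (assuming the twilio helper library of that era; IpMessagingGrant was later renamed ChatGrant), which can help sanity-check what the PHP config should produce:

```python
from twilio.jwt.access_token import AccessToken
from twilio.jwt.access_token.grants import IpMessagingGrant  # later renamed ChatGrant

# Credentials from the Twilio console; these values are placeholders.
ACCOUNT_SID = "ACxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
API_KEY_SID = "SKxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
API_KEY_SECRET = "your_api_key_secret"
SERVICE_SID = "ISxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # IP Messaging service SID

def make_token(identity: str, device_id: str) -> str:
    token = AccessToken(ACCOUNT_SID, API_KEY_SID, API_KEY_SECRET, identity=identity)
    token.add_grant(IpMessagingGrant(service_sid=SERVICE_SID,
                                     endpoint_id=f"MyApp:{identity}:{device_id}"))
    # to_jwt() may return bytes rather than str depending on the library version
    return token.to_jwt()
```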

Grafana graph share without requiring login

I'm trying to use a direct link to a graph in Grafana to save the PNG image to disk.
This relies on cookies with saved login data, so a user who has accessed Grafana before doesn't have to log in, but anyone else does.
I can't use the auth.anonymous settings in the grafana.ini file, because that would make everything available, which is not good.
Any hints on how to handle this situation?
Thanks in advance
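One way to avoid depending on cookies at all is to authenticate the request with a Grafana API key and fetch the rendered PNG directly. A rough sketch with Python's requests; the render path and parameters differ between Grafana versions, so treat these as placeholders:

```python
import requests

GRAFANA = "http://grafana.example.com:3000"  # placeholder host
API_KEY = "eyJrIjoi..."                      # a Grafana API key (Viewer role is enough)

# Panel render endpoint; the exact path varies by Grafana version.
url = f"{GRAFANA}/render/dashboard-solo/db/my-dashboard"
params = {"panelId": 2, "from": "now-6h", "to": "now", "width": 1000, "height": 500}

resp = requests.get(url, params=params,
                    headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()

with open("graph.png", "wb") as f:
    f.write(resp.content)
```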

Is it possible to have Centralised Logging for ElasticBeanstalk Docker apps?

We have a custom Docker web app running in an Elastic Beanstalk Docker container environment.
We would like the application logs to be available for viewing outside, without downloading them through instances or the AWS console.
So far none of the solutions below has been acceptable. Maybe someone has achieved centralised logging for Elastic Beanstalk Dockerized apps?
Solution 1: AWS Console log download
Not acceptable: it requires downloading and extracting the logs every time, and it is not real-time.
Solution 2: S3 + Elasticsearch + Fluentd
Fluentd does not have a plugin to retrieve logs from S3.
There is an excellent S3 plugin, but it is only for log output to S3, not for input of logs from S3.
Solution 3: S3 + Elasticsearch + Logstash
Cons: it can only pull all logs from the entire bucket, or nothing.
The problem lies with the Elastic Beanstalk S3 log storage structure. You cannot specify a file name pattern; it's either all logs or nothing.
Elastic Beanstalk saves logs on S3 in a path containing random instance and environment ids:
s3.bucket/resources/environments/logs/publish/e-<random environment id>/i-<random instance id>/my.log
Logstash's s3 plugin can only be pointed at resources/environments/logs/publish/. Pointing it at environments/logs/publish/*/my.log does not work,
which means you cannot pull a particular log and tag/type it so it can be found in Elasticsearch. Since AWS saves logs from all your environments and instances in the same folder structure, you cannot even choose the instance.
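To illustrate the limitation, a sketch of the only kind of s3 input that works here (the bucket name is a placeholder); prefix is a literal key prefix, so there is no way to express a per-instance or per-file pattern:

```ruby
# logstash.conf -- the s3 input accepts only a literal key prefix
input {
  s3 {
    bucket => "my-eb-logs-bucket"                      # placeholder
    prefix => "resources/environments/logs/publish/"   # literal prefix only
    region => "us-east-1"
    # No glob support: "environments/logs/publish/*/my.log" matches nothing,
    # so every environment's and instance's logs arrive mixed together.
  }
}
output {
  elasticsearch { hosts => ["localhost:9200"] }
}
```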
Solution 4: AWS CloudWatch Console log viewer
It is possible to forward your custom logs to the CloudWatch console. To achieve that, put configuration files in the .ebextensions path of your app bundle:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.cloudwatchlogs.html
There's a file called cwl-webrequest-metrics.config which allows you to specify log files along with alerts, etc.
Great!? Except that the configuration file format is neither YAML, XML, nor JSON, and it's not documented. There are absolutely zero mentions of that file or its format, either on the AWS documentation website or anywhere on the net.
And getting one log file to appear in CloudWatch is not simply a matter of adding a configuration line.
The only possible way to get this working seems to be trial and error. Great!? Except that for every attempt you need to re-deploy your environment.
There's only one reference on how to make this work with a custom log: http://qiita.com/kozayupapa/items/2bb7a6b1f17f4e799a22 I have no idea how that person reverse-engineered the file format.
cons:
CloudWatch does not seem to be able to split logs into columns when displaying them, so you can't easily filter by priority, etc.
The AWS console log viewer does not auto-refresh to follow logs.
Nightmare of an undocumented configuration file format, with no way of testing it. Trial and error requires re-deploying the whole instance.
Perhaps an AWS Lambda function is applicable?
Write some JavaScript that dumps all the notifications, then see what you can do with those.
After an object is written, you could rename it within the same bucket?
Or notify your own log-management service about the creation of a new object?
Lots of possibilities there...
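A minimal sketch of that idea in Python (the handler name and target prefix are assumptions): a Lambda subscribed to the bucket's s3:ObjectCreated events copies each new log object under a flat, predictable key so a downstream shipper can watch one stable location:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by s3:ObjectCreated:* notifications on the EB log bucket."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # e.g. resources/environments/logs/publish/e-abc123/i-0def456/my.log
        filename = key.rsplit("/", 1)[-1]

        # Copy to a flat prefix within the same bucket (placeholder layout).
        # Assumes the event notification is filtered to the publish/ prefix,
        # otherwise this copy would retrigger the Lambda in a loop.
        s3.copy_object(
            Bucket=bucket,
            CopySource={"Bucket": bucket, "Key": key},
            Key=f"centralized/{filename}",
        )
```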
I've started using Sumologic for the moment. There's a free trial and then a free tier (500 MB/day, 7-day retention). I'm not out of the trial period yet, and my EB app does literally nothing (it's just a few HTML pages served by Nginx in a Docker container). It looks like it could get expensive once you hit any serious amount of logs, though.
It works OK so far. You need to create an IAM user that has access to the S3 bucket you want to read from, and then it sucks the logs over to the Sumologic servers and does all the processing and searching over there. A bit fiddly to set up, but I don't really see how it could be simpler, and it's reasonably well documented.
It lets you provide different path expressions with wildcards, then assign a "sourceCategory" to those different paths. You then use those sourceCategories to filter your log searching to a specific type of logging.
My plan long-term is to use something like your solution 3, but this got me going in very short order so I can move on to other things.
You can use a multicontainer environment, sharing the log folder with another Docker container running the tool of your preference to centralize the logs. In our case we connected Apache Flume to move the files to HDFS. Hope this helps you with this.
The easiest method I found to do this was using Papertrail via rsyslog and .ebextensions; however, it is very expensive if you log everything.
The good part is that with rsyslog you can essentially send your logs anywhere, and you are not tied to Papertrail.
Example ebextension:
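A sketch of what such an .ebextensions file typically looks like; the Papertrail host and port are placeholders for your own account's values:

```yaml
# .ebextensions/rsyslog-papertrail.config (sketch; placeholder host/port)
files:
  "/etc/rsyslog.d/90-papertrail.conf":
    mode: "000644"
    owner: root
    group: root
    content: |
      # Forward everything to Papertrail (or any other syslog endpoint)
      *.* @logsN.papertrailapp.com:XXXXX

commands:
  01_restart_rsyslog:
    command: service rsyslog restart
```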
I've found Loggly to be the most convenient.
It is a hosted service, which might not be what you want. However, if you check out their setup page you can see a number of ways your situation is supported (Docker-specific solutions, as well as around 10 Amazon-specific options). Even if Loggly isn't to your taste, you can look at those solutions and easily see how some of them could be applied to almost any centralized logging solution you might use or write.

How Do I Get "Me" URLs Using a Daemon in Microsoft Graph

I have a Microsoft Teams daemon service app with application-level permissions. I can call GET on URLs like https://graph.microsoft.com/v1.0/users and it works fine.
However, if I want to get something like https://graph.microsoft.com/v1.0/users/me/onenote/notebooks/getRecentNotebooks, I get InvalidAccessToken even though I have permissions enabled for it in my daemon app. I've tried removing the me from the URL and replacing it with my tenant address, users, users/uid, myOrganization, etc., but I get a 400 error in those cases. No combination I've tried seems to work.
What is the proper format for calling urls like this for a daemon service app with application-level permissions?
In an application context, 'me' doesn't mean anything. There's no authenticated user, so the access token doesn't give your application access from the viewpoint of a particular user.
Instead of /users/me/, call Graph with /users/{user-guid}/ or /users/{user-principal-name}. For example, https://graph.microsoft.com/v1.0/users/ce900c69-ac07-450a-ad2f-f56e42ee621e/onenote or https://graph.microsoft.com/v1.0/users/jon@doe.com/onenote
Docs: https://learn.microsoft.com/en-us/graph/api/user-get?view=graph-rest-1.0&tabs=http
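A minimal end-to-end sketch in Python with the msal library, assuming a client-credentials app registration with the relevant application permission (e.g. Notes.Read.All) granted; the tenant/client values are placeholders:

```python
import msal
import requests

TENANT_ID = "your-tenant-id"     # placeholders
CLIENT_ID = "your-app-client-id"
CLIENT_SECRET = "your-client-secret"
USER = "jon@doe.com"             # a user GUID also works

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# ".default" requests whatever application permissions were granted in the portal.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

# With app-only tokens, personal notebooks can't be included.
resp = requests.get(
    f"https://graph.microsoft.com/v1.0/users/{USER}"
    "/onenote/notebooks/getRecentNotebooks(includePersonalNotebooks=false)",
    headers={"Authorization": f"Bearer {result['access_token']}"},
)
print(resp.json())
```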
Edit:
Also check out the Graph Explorer for samples of calls like this.
