Winston for Sam CLI - console.log

When using our SAM configuration on our local machine, logs are written to a file via the command:
sam local start-api --env-vars env.json --log-file logs.txt
This stores logs in the following format:
2022-07-20T00:11:04.600Z ZKY6ca3d-8004-4098-9a1d-a6e78134284e INFO Creating new token.
2022-07-20T00:11:04.812Z 97F6ca3d-8004-4098-9a1d-a6e78134284e INFO Token Authorised
This shows the timestamp, the Lambda execution's unique ID, the log level and the message.
We are looking to integrate Winston into our codebase. However, when we log using Winston, the Lambda execution ID is not printed.
(screenshot: the Winston log output, missing the execution ID)
I can add the timestamp and log level; however, I am unsure how to include the Lambda execution ID as part of the Winston method of logging.
Is there any way around this?

Related

Slack Conversations API conversations.kick returning "channel_not_found" for a public channel

I am writing a Slack integration that can boot certain users out of public channels when certain conditions are met. I have added several OAuth scopes to the bot token, including the following:
channels:history
channels:manage
channels:read
chat:write
chat:write.public
groups:write
im:write
mpim:write
users:read
I am writing my bot in Python using the slack-bolt library and asyncio. However when I try to invoke this code:
await app.client.conversations_kick(channel=channel_id, user=user_id)
I get the following error:
slack_sdk.errors.SlackApiError: The request to the Slack API failed. (url: https://www.slack.com/api/conversations.kick)
The server responded with: {'ok': False, 'error': 'channel_not_found'}
I know for a fact that both the channel_id and user_id arguments I'm passing in are valid. The channel ID I'm using is the string C01PAE3DB0A. I know it is valid because I can use the very same value for channel_id in the following API call:
response = await app.client.conversations_info(channel=channel_id)
And when I call conversations_info like that, I get all of the information about my channel. (The same is true for calling users_info with the user_id - it returns successfully.) So why is it that when I pass my valid channel_id parameter to conversations_kick, I consistently receive this channel_not_found error? What am I missing?
So I got in touch directly with Slack support about this and they confirmed that there is a bug on their end. Specifically, the bug is that I should have received a restricted_action error response instead of a channel_not_found response. Apparently this is a known issue that is on their backlog.
The reason the API call would (try to) return this restricted_action error is simply because there is a workspace setting that, by default, prevents non-admins from kicking people out of public channels. Furthermore, this setting can only be changed by the workspace owner - one tier above admins.
But assuming you are the owner of the Slack workspace, you simply have to log into the Settings & Permissions page (screenshot omitted):
And then you have to change the setting labeled "People who can remove members from public channels" from "Workspace admins and owners only (default)" to "Everyone, except guests."
Once I made that change, my API calls started succeeding.

HOWTO Fluent Bit OUTPUT to multiple Kinesis Firehose on multiple AWS accounts

I'm trying to send the same logs to multiple Kinesis Firehose streams in multiple AWS accounts via Fluent Bit v1.8.12. How do I use the role_arn property of the kinesis_firehose OUTPUT correctly? I'm able to send to Firehose A but not Firehose B. Also, role A in AWS account A can assume role B in AWS account B.
(diagram: the same logs shipped to a Firehose in account A and a Firehose in account B)
This is the Fluent Bit OUTPUT configuration:
[OUTPUT]
    Name             kinesis_firehose
    Match            aaa
    region           eu-west-1
    delivery_stream  a
    time_key         time
    role_arn         arn:aws:iam::11111111111:role/role-a

# THIS ONE DOES NOT WORK
[OUTPUT]
    Name             kinesis_firehose
    Match            bbb
    region           eu-west-1
    delivery_stream  b
    time_key         time
    role_arn         arn:aws:iam::22222222222:role/role-b
The fluent-bit pod logs say:
[2022/06/21 15:03:12] [error] [aws_credentials] STS assume role request failed
[2022/06/21 15:03:12] [ warn] [aws_credentials] No cached credentials are available and a credential refresh is already in progress. The current co-routine will retry.
[2022/06/21 15:03:12] [error] [signv4] Provider returned no credentials, service=firehose
[2022/06/21 15:03:12] [error] [aws_client] could not sign request
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records to b
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send log records
[2022/06/21 15:03:12] [error] [output:kinesis_firehose:kinesis_firehose.1] Failed to send records
The problem was that I didn't know which role the fluent-bit pod was assuming. Enabling fluent-bit debug logs helped me find out.
It turned out that fluent-bit assumes a particular role x that carries many EKS policies. I added a policy to this role x that lets it assume both role a (which can write to Kinesis in AWS account A) and role b (which can write to Kinesis in AWS account B). No changes were made to the Fluent Bit configuration.
(diagram of the resulting role-assumption chain omitted)
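Concretely, the extra policy attached to role x might look something like the sketch below, reusing the account IDs and role names from the question. This is illustrative IAM JSON, not a verified policy; note that each target role's trust policy must also name role x as a principal allowed to assume it:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": [
        "arn:aws:iam::11111111111:role/role-a",
        "arn:aws:iam::22222222222:role/role-b"
      ]
    }
  ]
}
```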

Daloradius attribute issue: sql: Failed to create the pair: Unknown name

I followed steps in the link below to build a freeradius with daloradius on my CentOS 8 VM:
https://computingforgeeks.com/install-freeradius-and-daloradius-on-centos-rhel-8/
On daloradius, I created a new vendor and a new attribute as below:
(screenshot: new vendor and attribute)
Then I also created a user as below:
(screenshots: user configuration)
However, when I try to access the radius server with the command below:
radtest tester1 1111 192.168.123.87 0 secret1234
I get "Access-Reject".
From the radius.log, I could see error below:
"Wed Mar 2 03:27:39 2022 : Auth: (2) Login incorrect (sql: Failed to create the pair: Unknown name "Caswell-CW_group"): [tester1] (from client my_lan port 0)"
I have checked on my radius server via MariaDB and can see the following:
(screenshot: database contents)
So I cannot understand why I get Access-Reject and why SQL says "Unknown name".
If I delete the check attribute in the user account "tester1", I get "Access-Accept".
Am I missing anything in my settings on the radius server?
I have found the answer to my issue.
To add a new vendor and attributes, I needed to add a new dictionary file under /usr/share/freeradius/. For example, to add a new vendor "fortesting" with a new attribute "testing attribute", I needed to create a file called "dictionary.fortesting" and then modify "/usr/share/freeradius/dictionary" to include it.
After doing the steps above, I restarted the freeradius service, logged in to the daloradius web UI, and created the same vendor and attributes there. At that point I was able to use the new vendor and attributes for my freeradius users, and "radtest" returned "Access-Accept".
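For the hypothetical "fortesting" vendor, the dictionary file might look like the sketch below (the vendor ID 12345 and the attribute number are placeholders; use your own vendor's values):

```
# /usr/share/freeradius/dictionary.fortesting
VENDOR          fortesting          12345
BEGIN-VENDOR    fortesting
ATTRIBUTE       Testing-Attribute   1       string
END-VENDOR      fortesting
```

and the include line added to /usr/share/freeradius/dictionary would be:

```
$INCLUDE dictionary.fortesting
```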

Export dashboard URL from job

I have a notebook running as a job in Azure Databricks. The results are shown in a Databricks dashboard. I want the dashboard URL to be sent to the team when the run is finished.
How do I retrieve the URL of the dashboard for the current run ?
I know the job ID, I managed to get the base of the URL with
dbutils.notebook.entry_point.getDbutils().notebook().getContext().browserHostName().toString()
and I found one can get the Run ID with
dbutils.notebook.entry_point.getDbutils().notebook().getContext().currentRunId().toString()
but the URL should contain the "Run", which is different from the "Run ID". Furthermore, the URL doesn't display a dashboard without some UUID I don't know how to get. Where can I get this information?
Here is my solution:
import json

run_id = None
url = None
try:
    context = json.loads(dbutils.notebook.entry_point.getDbutils().notebook().getContext().toJson())
    run_id = context["tags"]["idInJob"]
    url = f"https://XXX.azuredatabricks.net/?o=YYY#job/11/run/{run_id}/dashboard/ZZZ"
except Exception:
    pass
I figured out that XXX, YYY and ZZZ do not change across runs; you will find them by looking at the URL of your sample dashboard.
run_id and url will stay None if the notebook is launched in interactive mode.

Unable to issue identity in Hyperledger Composer

I am trying to issue an identity to a participant that already exists in the network.
return this.bizNetworkConnection.connect(this.cardname)
.then((result) => {
let email = 'user@gmail.com',
username = email.split('@')[0];
this.businessNetworkDefinition = result;
return this.bizNetworkConnection.issueIdentity('org.test.Person#user@gmail.com', username);
})
.then((result) => {
console.log(`userID = ${result.userID}`);
console.log(`userSecret = ${result.userSecret}`);
})
I expect that I will see the userID and the userSecret logged on the console but I am getting errors as described below.
Following the developer tutorial on their documents:
If I use the card name PeerAdmin@hlfv1 in the connect function above, I get the error "Error trying to ping. Error: Error trying to query business network. Error: Missing \"chaincodeId\" parameter in the proposal request".
If I use the card name admin@tutorial-network in the connect function above, I get the error "fabric-ca request register failed with errors [[{\"code\":400,\"message\":\"Authorization failure\"}]]".
For option 1, I know the network name is missing in the given card, while option 2 means that the admin has no rights to issue an identity. However, I cannot seem to find any documentation directing me on how to use either to achieve my objective. Any help is highly welcome.
While I have listed the javascript code I am using to achieve the same, I would not mind if anyone can explain what I am missing using the composer cli.
see https://hyperledger.github.io/composer/latest/managing/identity-issue.html
You would definitely use the admin@tutorial-network card, as PeerAdmin does not have authority to issue identities (admin does).
Did you already do: 1) a composer card import -f networkadmin.card (per the tutorial)? 2) a composer network ping -c admin@tutorial-network to use the card (now in the card store) and thereby populate the admin's credentials (certificate/private key)?
Only at that point would admin be recognised as the identity able to issue further identities. Is it possible you spun up a new dockerized CA server at some stage since you did the import etc.?
What happens if you issue a test identity through the command line (using admin@tutorial-network)? Does it fail?
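The sequence above (connect with the admin card, ping to prime the credentials, then issue) can be sketched in JavaScript as below. This assumes composer-client is installed and the admin@tutorial-network card has already been imported; the `issueFor` helper and its argument names are hypothetical, not Composer API names:

```javascript
// Build the fully-qualified participant identifier, e.g.
// 'org.test.Person#user@gmail.com' (namespace.Type#identifier).
function participantId(namespace, type, identifier) {
  return `${namespace}.${type}#${identifier}`;
}

// Hypothetical helper: connect, prime credentials, then issue an identity.
async function issueFor(cardName, participant, userName) {
  const { BusinessNetworkConnection } = require("composer-client"); // assumed installed
  const connection = new BusinessNetworkConnection();
  await connection.connect(cardName); // 'admin@tutorial-network', not PeerAdmin
  await connection.ping();            // populates the admin's certificate/private key
  const { userID, userSecret } = await connection.issueIdentity(participant, userName);
  console.log(`userID = ${userID}`);
  console.log(`userSecret = ${userSecret}`);
  await connection.disconnect();
  return { userID, userSecret };
}
```

A usage call would then look like `issueFor('admin@tutorial-network', participantId('org.test', 'Person', 'user@gmail.com'), 'user')`.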