Event is None when using a Test Event in AWS Lambda with a Docker image

Wondering if anyone has come across the following and has a solution.
I have an AWS Lambda function using a (Python) Docker image/container. I am trying to test it with a Test Event in the console. When I print the event in the function handler (the second line of the function), the event is None. I know the function is being called, because the first line is a print("String") that I do see in the logs.
Any ideas?

Related

How to catch the output of a process or a command

I'm trying to write a rule whose condition depends on the output of a script.sh. I have tried several approaches, but without success.
I searched your documentation but didn't find anything that helps. I tried several evt and proc fields, but none of them gave me any information.
In fact, this is the rule I'm experimenting with while trying to find a workaround:
- rule: FIM Custom rule
  desc: Testing rule
  condition: access_log_files and (evt.type=close)
  output: Test result (proc_name=%proc.name command=%proc.cmdline evt_type=%evt.type evt.args =%evt.args syslog_.facility_str=%syslog.facility.str syslog_message=%syslog.message)
  priority: WARNING
Please note that I'm running Falco in Docker with the latest image.
When I run the command logger test on the Ubuntu host, I receive the following message on the stdout of the Falco Docker container:
{"hostname":"dc95654c63c3","output":"01:21:29.759239580: Warning Test result (proc_name=python3 command=python3 /usr/lib/ubuntu-advantage/timer.py evt_type=close evt.args =res=0 syslog_.facility_str= syslog_message=)","priority":"Warning","rule":"FIM Custom rule","source":"syscall","tags":[],"time":"2022-12-17T01:21:29.759239580Z", "output_fields": {"evt.args":"res=0 ","evt.time":1671240089759239580,"evt.type":"close","proc.cmdline":"python3 /usr/lib/ubuntu-advantage/timer.py","proc.name":"python3","syslog.facility.str":null,"syslog.message":null}}
So please tell me what I can do.
Thanks
In order to feed Falco with external sources of events (those that are not kernel syscalls) you need to use a Falco plugin. There are plugins to obtain events from Kubernetes, AWS CloudTrail, or even from GitHub. However, there is no plugin that I know of to obtain information from the standard output of a program or from syslog.
Given the open nature of the Falco project, anyone in the community can contribute such a plugin, so I invite you to join the Falco Slack channel and ask around, or even write your own plugin.

How to store the execution status of a SpecFlow scenario when executed using Command Prompt

I'm trying to initiate the execution on a remote machine, for which I want to get the execution status of a SpecFlow scenario when it is executed using Command Prompt.
Note: I'm achieving this using a TcpClient & TcpListener approach.
In this approach, I would like to pass the scenario tag (a test case ID such as #1234) via the TcpClient; the TcpListener will listen for that tag number and pass it down so that I can execute the scenario via the command prompt.
Here, I would like to get the scenario execution status (passed, failed, pending, ...) so that I can pass it back to the TcpClient and complete the end-to-end testing.
Thank you in advance.
Since you are running the tests from the command prompt, you can check the error level (the exit code of the test runner process). The following link might be helpful: Batch Files - Error Handling
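Since the listener side is already C#, here is a minimal sketch of the same idea there: launch the runner, read its exit code (the value cmd exposes as %ERRORLEVEL%), and report the result back over TCP. The runner command, filter syntax, host name, and port below are placeholders, and the 0-means-all-passed convention should be verified against the runner you actually use.

// Hedged sketch: run the tests for one scenario tag and send the result back over TCP.
using System;
using System.Diagnostics;
using System.Net.Sockets;
using System.Text;

class ScenarioStatusReporter
{
    static void Main()
    {
        string tag = "1234"; // in your setup this would come from the TcpListener

        var psi = new ProcessStartInfo
        {
            FileName = "dotnet",
            Arguments = $"test MySpecs.csproj --filter Category={tag}",
            UseShellExecute = false
        };

        using (var runner = Process.Start(psi))
        {
            runner.WaitForExit();
            // Test runners conventionally return 0 when all tests passed and non-zero
            // otherwise; this is the same value a batch file sees as %ERRORLEVEL%.
            string status = runner.ExitCode == 0 ? "Passed" : "Failed";

            // Send the status back to the machine that requested the run
            // (host name and port are placeholders).
            using (var client = new TcpClient("client-machine", 5000))
            using (var stream = client.GetStream())
            {
                byte[] payload = Encoding.UTF8.GetBytes(status);
                stream.Write(payload, 0, payload.Length);
            }
        }
    }
}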

Call Windows Context Menu Entry Directly

I need to run a self-made context menu entry via cmd.
The command is stored in
"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\testCommand\command"
and contains
C:\Path\convert.exe %1 test1|test2
The problem is that Windows seems to call the program associated with the command differently the first time. I don't know why, and I can't figure out how to avoid this.
So I want to call the program myself the first time, before the user can call it.
If I execute the program myself directly via cmd it runs correctly, but if Windows executes it using the context menu entry it behaves differently. After the first time it runs as expected.
I couldn't find anything similar using Google and Stack Overflow.
What's going on here? I also tried to run it using
RunDll32.EXE URL.DLL,FileProtocolHandler "C:\path\convert.exe"
but couldn't add the required parameters.
Please help me.
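This doesn't explain the first-run difference, but for the "call the program myself the first time" part, here is a hedged sketch that reads the stored command straight from the CommandStore key shown above and runs it once with its parameters; the file substituted for %1 is only an example.

// Hedged sketch: read the verb's command line from the registry and run it once ourselves.
using System.Diagnostics;
using Microsoft.Win32;

class WarmUpRun
{
    static void Main()
    {
        const string key = @"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\CommandStore\shell\testCommand\command";

        // The (Default) value holds e.g.:  C:\Path\convert.exe %1 test1|test2
        string command = Registry.GetValue(key, "", null) as string;
        if (command == null) return;

        // Substitute the %1 placeholder with a real file, as Explorer would (path is an example).
        string expanded = command.Replace("%1", @"C:\temp\sample.dat");

        // Split the executable from its arguments. This assumes the exe path contains
        // no spaces, as in the value above; otherwise parse a quoted path properly.
        int firstSpace = expanded.IndexOf(' ');
        if (firstSpace < 0) return;

        var psi = new ProcessStartInfo
        {
            FileName = expanded.Substring(0, firstSpace),
            Arguments = expanded.Substring(firstSpace + 1),
            // Start the exe directly so cmd.exe never interprets the | in the arguments as a pipe.
            UseShellExecute = false
        };
        Process.Start(psi)?.WaitForExit();
    }
}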

Parse Server Cloud Code Does Not Work

Hey guys, I have a very confusing issue at hand. I want to state that I have looked through every resource I could find, including on here, about getting custom Cloud Code functions to work.
I am hosting Parse Server on Heroku with my database on mLab.
I can successfully call the 'hello' Cloud Code function.
I cannot successfully call any custom function, even one that just prints something to the console.
Below is the process I have been using while trying to get my Cloud Code functions to work.
Open main.js and add the Cloud Code:
Parse.Cloud.define('testParagraph', function(req, res) {
  console.log("received......... this is a console log for a test function that will print out a paragraph as a test");
  res.success('Hi, this is the start of a new test function that will print out a paragraph');
});
Commit the change to git
Push the change to git
Restart the Heroku server
Run the app & call the Cloud Code from the iOS app in Swift
Result:
Every time I get error 141, Invalid Function; however, I can call 'hello' successfully, just not any custom function.
Edit 2: I have discovered that I am unable to update any Cloud functions. That is, while I can successfully call the "hello" function, if I make a change to that function, push it to git, and restart Heroku, the change is not applied. This leads me to believe that there must be something wrong with either the link to my main.js, or it is being uploaded somewhere else and the correct main.js is not being called... Any insight would be helpful.
I solved this issue: my git branch was behind HEAD, and therefore any changes I made did not become active on my Heroku server. I merged my branches, and this solved my problem of not being able to run custom functions. Now I am able to run custom functions, but I still get error 141 when I try to query my database; since that is a new problem, I am marking this as solved and asking a new question.

How to tell if process is run by the Service Control Manager

I have a few Windows Services written in C# that I have set up to support being run from the command line as a console app if a specific parameter is passed. This works great, but I would love to be able to detect whether the app is being run by the Service Control Manager or from a command line.
Is there any way to tell at runtime if my app was started by the SCM?
Environment.UserInteractive will return false if the process is running under the SCM.
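For example, a minimal sketch of that check (MyService and StartFromConsole are stand-ins for your own service class and whatever console entry point you add to it):

// Environment.UserInteractive is false when the process was started by the SCM,
// so Main can branch between console mode and the service runtime.
using System;
using System.ServiceProcess;

public class MyService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // real start-up work goes here
    }

    protected override void OnStop()
    {
        // real shutdown work goes here
    }

    // Public wrapper so the console path can reuse the same start-up code.
    public void StartFromConsole(string[] args) => OnStart(args);
}

static class Program
{
    static void Main(string[] args)
    {
        if (Environment.UserInteractive)
        {
            // Launched from a command prompt by a logged-on user: run as a console app.
            var service = new MyService();
            service.StartFromConsole(args);
            Console.WriteLine("Running as a console app. Press Enter to exit...");
            Console.ReadLine();
        }
        else
        {
            // Launched by the Service Control Manager: let it drive OnStart/OnStop.
            ServiceBase.Run(new MyService());
        }
    }
}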
The SCM will call your OnStart method, so you could mark that event and make sure when you run from the command line, you don't call OnStart. Or, you could check the startup parameters to see how the application was started.
In C, the function StartServiceCtrlDispatcher() will fail with ERROR_FAILED_SERVICE_CONTROLLER_CONNECT when the process was not started by the SCM. This is the best way in C; I wonder if C# exposes any of this?
ERROR_FAILED_SERVICE_CONTROLLER_CONNECT
This error is returned if the program is being run as a console application rather than as a service. If the program will be run as a console application for debugging purposes, structure it such that service-specific code is not called when this error is returned.
