I'm trying to write a rule whose condition depends on the output of a script.sh. I have tried several approaches, but without success.
I searched your documentation but didn't find anything that helps me. I tried several evt and proc fields, but none of them gave me the info I need.
This is the rule I'm experimenting with while trying to find a workaround:
- rule: FIM Custom rule
  desc: Testing rule
  condition: access_log_files and (evt.type=close)
  output: Test result (proc_name=%proc.name command=%proc.cmdline evt_type=%evt.type evt.args =%evt.args syslog_.facility_str=%syslog.facility.str syslog_message=%syslog.message)
  priority: WARNING
Please consider that I'm running Falco on Docker with the latest image.
When I execute the command logger test on the Ubuntu host, I receive this message on the stdout of the Falco Docker container:
{"hostname":"dc95654c63c3","output":"01:21:29.759239580: Warning Test result (proc_name=python3 command=python3 /usr/lib/ubuntu-advantage/timer.py evt_type=close evt.args =res=0 syslog_.facility_str= syslog_message=)","priority":"Warning","rule":"FIM Custom rule","source":"syscall","tags":[],"time":"2022-12-17T01:21:29.759239580Z", "output_fields": {"evt.args":"res=0 ","evt.time":1671240089759239580,"evt.type":"close","proc.cmdline":"python3 /usr/lib/ubuntu-advantage/timer.py","proc.name":"python3","syslog.facility.str":null,"syslog.message":null}}
So please tell me what I can do.
Thanks
In order to feed Falco with external sources of events (those that are not kernel syscalls) you'd need to use a Falco plugin. There are plugins to obtain events from Kubernetes, AWS CloudTrail, or even from GitHub. However, there is no plugin that I know of to obtain information from the standard output of a program or from syslog.
Due to the open nature of the Falco project, anyone in the community can contribute such a plugin, so I invite you to join the Falco Slack channel and ask around, or even write your own plugin.
Does anyone know if this is possible?
All I can find in the docs is a reference to enabling Docker experimental features, but not the Kubernetes experimental features.
I tried this, but I still get an error.
k alpha debug -it exchange-pricing-865d579659-s8x6d --image=busybox --target=exchange-pricing-865d579659-s8x6d
error: ephemeral containers are disabled for this cluster (error from server: "the server could not find the requested resource").
Thanks
I had the same intent (as have others in this feature request). After several hours of trial and error, I finally found a way to do so.
Steps:
Depending on which file you're trying to edit, you may need to fully shut down Docker Desktop and restart WSL. (Right-click the tray icon and choose "Quit Docker Desktop", then run wsl --shutdown, then run wsl.)
Open the [...]/kubeadm/manifests folder, in the Docker filesystem.
On Windows, navigate Windows Explorer to:
For Docker Desktop 4.2.0: \\wsl$\docker-desktop-data\version-pack-data\community\kubeadm\manifests
For Docker Desktop 4.11.0: \\wsl$\docker-desktop-data\data\kubeadm\manifests
Open the kube-controller-manager.yaml, kube-apiserver.yaml, and kube-scheduler.yaml files, adding the line below:
spec:
  containers:
  - command:
    [...]
    - --feature-gates=EphemeralContainers=true   # <-- add this line
Start Docker Desktop again.
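Once Docker Desktop is back up, a quick sanity check that the flag actually landed might look like this (a sketch; kube-apiserver-docker-desktop is the mirror-pod name on my install and could differ on yours):
# Look for the feature-gates flag in the running API server pod
kubectl -n kube-system describe pod kube-apiserver-docker-desktop | grep feature-gates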
It looks so easy when it's already figured out, huh? Well trust me, it was a pain to find out.
Some of the slowdowns I hit:
It took me quite a while to even find those manifest files. (I eventually found them using grepWin, searching through the whole \\wsl$\docker-desktop-data folder for any matches of a line I grabbed from the kube-apiserver-docker-desktop pod's config, which I viewed using Lens.)
Once I found it, I got confused by this documentation. When I read FEATURE STATE: Kubernetes v1.22 [alpha], I thought that meant you needed version 1.22 or higher of Kubernetes for the feature to be available. This caused a huge wild goose chase where I tried to change the version of Kubernetes that was being launched in Docker Desktop, which Docker Desktop didn't seem to like. (in retrospect, the issue may have just been the minor one in point 3 below...)
When I first made changes to the manifest files, I was using Notepad++. And despite my liking Notepad++, it's apparently not quite as smart as VS Code in the following regard: it does not automatically detect the indentation type for YAML files. Thus, when I pressed tab to create an indent, so I could add the new flag to the argument list, it added it as a tab character rather than spaces. This caused Kubernetes to fail to read the file. That might not be so bad if Kubernetes gave a sane error message for that, but instead it merely gave the message unexpected EOF. And I didn't even see that error message at first because it was not being propagated to the kube-controller-manager-docker-desktop pod (which was the only relevant one that wasn't immediately erroring/closing). Anyway, I didn't realize this was the problem at the time, so...
I decided to try bypassing the manifest-files and applying my modification to the etcd data-store directly. In retrospect, this was not a good idea, because the etcd data-store is pretty complex, the tooling is substandard, and the documentation is substandard. I spent a ton of time just trying to figure out how to send commands to read and write data to it (eventually managed to do so by calling etcdctl within the etcd-docker-desktop pod). I spent further time still writing up a NodeJS script capable of reading all the data as JSON, storing it in a dump file, and being able to write changes to entries back despite there being 3+ levels of quoting involved (I eventually was able to use stdin to pass the value rather than as part of the command string, to avoid quotation-mark-inception). After all the work on etcd reading/writing above, I found it didn't work anyway because Kubernetes invariably "breaks" if anyone else writes to its etcd data-store. (even if you write the exact same value that had been there before -- as verified by comparing the dumps before and after)
After all of the above, I decided to have one last go at just adding the flags to the mentioned manifest files. I was still getting the startup failure/error, but at the very end, I decided I wanted to see exactly what about my changes was causing Kubernetes to reject them. So I tried commenting out my added line; the error remained. I thought maybe it was a checksum-based rejection then. But then I thought, maybe the YAML parser that Kubernetes is using is just outdated and is finicky about what comments it is able to recognize. So I tried moving the comment around to different places, and was puzzled when the manifest was accepted just by moving the comment to the root level. I moved it back to various locations, with it working and not working, until I thought to try making the line "half-indented" since it's "in-between" the working and non-working versions. That's when I noticed the line had a tab as its indent. And then it hit me; are the other lines also using tabs? I checked, and nope, they were using spaces. And that's when I realized I had wasted the last few hours on something I coulda just fixed with a simple indent change.
The moral of the story for some is that YAML is a bad configuration format, because it makes it easy to make trivial errors like this. But I actually place the blame more on whatever parser Kubernetes is using for the YAML files; it is unacceptable that a YAML parser would encounter an indentation mismatch and give a message so generic as unexpected EOF. I don't know what the identity of that YAML parser is, but I'm tired enough of the subject that I'm not even going to look into it right now. If one of you finds it, please make an issue report for it -- perhaps including this story as a real-world example of the pain that ambiguous error messages can cause.
Since Ephemeral Containers is still an alpha feature, it is disabled by default.
As you can read here, for this to work, it requires the EphemeralContainers feature gate to be enabled, and Kubernetes client and server version v1.16 or later.
As to the 2nd requirement, I assume both your Kubernetes server and client versions are v1.16 or later, but it looks like, for the time being, the 1st requirement cannot be met on Docker Desktop. According to this issue, it currently doesn't support enabling feature gates.
However, you may still try to SSH to your master node and edit the following files:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
by adding inside the command section:
--feature-gates=EphemeralContainers=true
Then you need to delete those pods so they are recreated with new settings applied. You'll find them by running:
kubectl get pods -n kube-system
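For example, once you have the names (a sketch; the pod name suffixes come from your control-plane node's name, so adjust them accordingly):
# Delete the control-plane pods so the kubelet recreates them with the new flag
kubectl -n kube-system delete pod kube-apiserver-<node-name> kube-scheduler-<node-name>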
I have an environment in our company which hosts RabbitMQ 3.6.1 and Erlang 19.3. When I try to create a queue using the RabbitMQ Management UI, I get the error below. I can create exchanges and vhosts fine. It is only when I try to create queues that I get the error. I also tried to write a utility to create queues using the HTTP API, but even that fails.
Upon some more research I stumbled upon this article https://groups.google.com/d/msg/rabbitmq-users/pa1UtLbbvOE/3OlgKgMBAgAJ which says Erlang 19 is not compatible with RabbitMQ 3.6.3 and lower. Can someone please confirm my findings?
The error I am getting is
Got response code 500 with body {"error":"Internal Server Error","reason":"{error,\n {exit,\n {{function_clause,\n [{rabbit_queue_location_validator,module,\n [\"random\"],\n [{file,\"src/rabbit_queue_location_validator.erl\"},\n {line,50}]},\n {rabbit_queue_location_validator,validate_strategy,1,\n [{file,\"src/rabbit_queue_location_validator.erl\"},\n {line,38}]},\n {rabbit_queue_master_location_misc,get_location_mod_by_config,\n 1,\n [{file,\"src/rabbit_queue_master_location_misc.erl\"},\n {line,88}]},\n {rabbit_queue_master_location_misc,get_location,1,\n [{file,\"src/rabbit_queue_master_location_misc.erl\"},\n {line,51}]},\n {rabbit_amqqueue,declare,6,\n [{file,\"src/rabbit_amqqueue.erl\"},{line,300}]},\n {rabbit_channel,handle_method,3,\n [{file,\"src/rabbit_channel.erl\"},{line,1331}]},\n {rabbit_channel,handle_cast,2,\n [{file,\"src/rabbit_channel.erl\"},{line,455}]},\n {gen_server2,handle_msg,2,\n [{file,\"src/gen_server2.erl\"},{line,1049}]}]},\n {gen_server,call,\n [<0.27627.105>,\n {call,\n {'queue.declare',0,<<\"Test\">>,false,true,false,false,false,\n []},\n none,<0.15368.105>},\n infinity]}},\n [{gen_server,call,3,[{file,\"gen_server.erl\"},{line,212}]},\n {rabbit_mgmt_util,'-amqp_request/5-fun-0-',4,\n [{file,\"src/rabbit_mgmt_util.erl\"},{line,579}]},\n {rabbit_mgmt_util,with_channel,5,\n [{file,\"src/rabbit_mgmt_util.erl\"},{line,598}]},\n {rabbit_mgmt_util,http_to_amqp,5,\n [{file,\"src/rabbit_mgmt_util.erl\"},{line,526}]},\n {webmachine_resource,resource_call,3,\n [{file,\"src/webmachine_resource.erl\"},{line,186}]},\n {webmachine_resource,do,3,\n [{file,\"src/webmachine_resource.erl\"},{line,142}]},\n {webmachine_decision_core,resource_call,1,\n [{file,\"src/webmachine_decision_core.erl\"},{line,48}]},\n {webmachine_decision_core,accept_helper,1,\n [{file,\"src/webmachine_decision_core.erl\"},{line,612}]}]}}\n"}
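For reference, the queue creation over the management HTTP API was essentially of this shape (a sketch assuming the default credentials, the default vhost, and a queue named Test):
# Declare a durable queue via the management HTTP API
curl -u guest:guest -X PUT -H 'content-type: application/json' -d '{"durable": true, "auto_delete": false, "arguments": {}}' http://localhost:15672/api/queues/%2F/Test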
The RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on Stack Overflow.
In your case, the error is happening here. Did you create a queue-master-locator policy with the value of random? If so, I recommend clearing the policy to see if that resolves the issue.
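If you are not sure whether such a policy exists, something like this can list and clear it (a sketch; the policy name queue-location is only an example):
# List policies on the default vhost, then clear the offending one
rabbitmqctl list_policies -p /
rabbitmqctl clear_policy -p / queue-location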
I also recommend upgrading to the latest version (3.6.12). The version you are using is very old.
Thanks to @Luke Bakken for pointing me to the RabbitMQ mailing list.
I managed to fix the problem by changing the configuration of the queue master location strategy to <<"random">>.
Please see this link for more info
https://groups.google.com/d/msg/rabbitmq-users/XUbtu4UxbHQ/3y-PvO0oBAAJ
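In case it helps others, this is roughly how the running value can be checked and what the config entry ends up looking like (a sketch; the key name queue_master_locator comes from the 3.6.x docs, and the restart step is an assumption since the config file is only read at node start):
# Show the value the node is actually running with
rabbitmqctl eval 'application:get_env(rabbit, queue_master_locator).'
# In rabbitmq.config the entry should look like
#   [{rabbit, [{queue_master_locator, <<"random">>}]}].
# then restart the broker so the file is re-read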
I'm trying to add Neo4j 3.0 to my tests for the neo4j gem and I'm having trouble with the server getting killed in a Travis CI container. Pre-3.0 works just fine, but when I use 3.0 it seems to get killed. There seems to be plenty of memory (when I run Neo4j locally it uses 300-400 MB). I get a warning from Neo4j saying:
WARNING: Max 30000 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
That makes me think that it's getting killed because of too many open files. I'm not sure if there's a way to increase the open file limit for Neo4j in a Travis container, and I have a number of jobs, so I don't want to slow things down by running sudo: true. Did Neo4j 3.0 change to require more open files (the documentation doesn't seem to imply that it did)?
EDIT:
My .travis.yml file:
This is how I do it, and it works fine for me for 2.3 and 3.0, including a push to Docker Hub.
https://github.com/maxdemarzi/neo_travis
https://travis-ci.org/maxdemarzi/neo_travis
I think our memory allocation is messing things up. One thing that is unusual in your (Travis's) setup is that there is twice the amount of swap memory compared to RAM, and that the amount of memory reported as available is very large.
Try specifying the amount of memory in your config files. See http://neo4j.com/docs/operations-manual/current/#performance-tuning for more details, but essentially add the following to your config:
In neo4j.conf:
dbms.memory.pagecache.size=1G
and in neo4j-wrapper.conf:
dbms.memory.heap.max_size=1000
dbms.memory.heap.initial_size=1000
The memory limits are set quite low to guarantee that Travis doesn't kill the process, and I suspect that the tests don't need much in terms of memory.
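For the Travis case, a minimal way to apply these before the server starts might look like the following (a sketch; the neo4j-community-3.0.1 directory name is an assumption based on a tarball install):
# Append the memory settings to the config files before launching Neo4j
echo 'dbms.memory.pagecache.size=1G' >> neo4j-community-3.0.1/conf/neo4j.conf
echo 'dbms.memory.heap.max_size=1000' >> neo4j-community-3.0.1/conf/neo4j-wrapper.conf
echo 'dbms.memory.heap.initial_size=1000' >> neo4j-community-3.0.1/conf/neo4j-wrapper.conf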
I'm trying to run a Rails app on IBM Bluemix and load test it with Blitz.io. When I access the app in my browser, everything is fine. When Blitz tries to access it, however, the app crashes. The log entry looks like this:
2014-12-20T16:26:45.55-0500 [RTR] OUT **[my app name]**.mybluemix.net - [20/12/2014:21:26:43+0000] "GET / HTTP/1.1" 200 12784 "-" "blitz.io; e970e720c4f22c94f7d822731652a745#130.160.6.54" 75.126.70.42:54311 x_forwarded_for:"-" vcap_request_id:ba32f5d0-e157-4229-61f5-13eb7ab3d2d0 response_time:2.182336949 app_id:1e6ad01b-c7b4-4f57-8d9d-8d333807bb15
2014-12-20T16:26:46.60-0500 [App/0] ERR /home/vcap/app/vendor/ruby2.0.0/lib/ruby/2.0.0/webrick/server.rb:284: [BUG] object allocation during garbage collection phase
What does this mean? I'm at a bit of a loss on how to debug this, or even where the problem lies. Is it a problem with my app code? A configuration problem?
I'm not sure whether I've included enough of the error log to be helpful here. The rest is here:
http://pastebin.com/Jv6jUksv
You can specify the Ruby version you want to run for your application in the Gemfile. The Ruby buildpack in Bluemix supports Ruby v2.1.x, v2.2 and more.
But I guess the likely cause of the error is that your app is exceeding the memory quota allocated to it. Bluemix is based on Cloud Foundry, which will kill the app instance if it consumes more memory than allocated. You can increase the memory allocated to your application by specifying the "-m" option when you do "cf push". For example:
cf push -m 1G
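To see how close the instance gets to its quota around the time of the crash, the CLI can help (a sketch; my-app stands for your application name):
cf app my-app        # shows per-instance memory usage against the quota
cf events my-app     # shows recent app events, including crashes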
You can also raise a ticket or ask a question in the developer forum on Bluemix support for a speedier resolution of this issue (if Ruby 2.0.0 is the problem and a more recent version works fine for you):
https://developer.ibm.com/bluemix/support/
Is there a way you can see the memory usage around the time of this error message?
I've gotten the error
[BUG] object allocation during garbage collection phase
using Ruby 1.8.7 in an environment with explicit memory restrictions (probably similar to that of IBM Bluemix) when exceeding those memory restrictions. My memory is limited by a PBS directive.
For me, the error occurs when parsing a large amount of JSON, where the json gem requires more memory than the limit allows for that particular JSON string.