I'm currently experimenting with Falco (a runtime security monitoring solution for containers).
I'm working locally on a Mac (Catalina 10.15.3) with Helm (v3.1.2), Docker Desktop (v2.2.0.5 (43884)), and Kubernetes (v1.15.5) running.
I tried deploying Falco with Helm as suggested on their installation page, but the pod ends up in a crash loop.
Here are the logs I get:
$ kubectl logs falco-xxxxx
* Setting up /usr/src links from host
ln: failed to create symbolic link '/usr/src//host/usr/src/*': No such file or directory
* Unloading falco-probe, if present
* Running dkms install for falco
Error! echo
Your kernel headers for kernel 4.19.76-linuxkit cannot be found at
/lib/modules/4.19.76-linuxkit/build or /lib/modules/4.19.76-linuxkit/source.
* Running dkms build failed, couldn't find /var/lib/dkms/falco/0.20.0+d77080a/build/make.log
* Trying to load a system falco-probe, if present
* Trying to find precompiled falco-probe for 4.19.76-linuxkit
Found kernel config at /proc/config.gz
* Trying to download precompiled module from https://s3.amazonaws.com/download.draios.com/stable/sysdig-probe-binaries/falco-probe-0.20.0%2Bd77080a-x86_64-4.19.76-linuxkit-f9de4c19ddd4080798f0e14972190e35.ko
curl: (22) The requested URL returned error: 404 Not Found
Download failed, consider compiling your own falco-probe and loading it or getting in touch with the Falco community
Thu Apr 2 15:39:47 2020: Falco initialized with configuration file /etc/falco/falco.yaml
Thu Apr 2 15:39:47 2020: Loading rules from file /etc/falco/falco_rules.yaml:
Thu Apr 2 15:39:48 2020: Loading rules from file /etc/falco/falco_rules.local.yaml:
Thu Apr 2 15:39:49 2020: Unable to load the driver. Exiting.
Thu Apr 2 15:39:49 2020: Runtime error: error opening device /dev/falco0. Make sure you have root credentials and that the falco-probe module is loaded.. Exiting.
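To double-check the headers problem myself, I reproduced the check dkms performs (a hand-rolled sketch of my own, not Falco's actual script):

```shell
# Reproduce the headers check from the log: dkms needs either the
# build or the source tree for the running kernel under /lib/modules
KERNEL=$(uname -r)
if [ -d "/lib/modules/$KERNEL/build" ] || [ -d "/lib/modules/$KERNEL/source" ]; then
  echo "headers found for $KERNEL"
else
  echo "headers missing for $KERNEL"
fi
```

On the linuxkit VM that Docker Desktop uses, this reports the headers as missing, which matches the dkms error above.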
I looked around a bit and didn't find a clue, so I'm taking the liberty of asking here before trying to compile the probe myself.
Do you have any idea how to fix this issue?
I am trying to build my own Linux image using Buildroot in Docker with GitLab CI. Everything goes fine until Buildroot starts downloading the linux repository; then I get the error below.
>>> linux d0f5c460aac292d2942b23dd6199fe23021212ad Downloading
Doing full clone
Cloning into bare repository 'linux-d0f5c460aac292d2942b23dd6199fe23021212ad'...
Looking up git.ti.com ... done.
Connecting to git.ti.com (port 9418) ... 198.47.28.207 done.
fatal: The remote end hung up unexpectedly
fatal: early EOF
fatal: index-pack failed
--2023-01-05 11:53:37-- http://sources.buildroot.net/linux-d0f5c460aac292d2942b23dd6199fe23021212ad.tar.gz
Resolving sources.buildroot.net (sources.buildroot.net)... 104.26.1.37, 172.67.72.56, 104.26.0.37, ...
Connecting to sources.buildroot.net (sources.buildroot.net)|104.26.1.37|:80... connected.
HTTP request sent, awaiting response... 404 Not Found
2023-01-05 11:53:37 ERROR 404: Not Found.
package/pkg-generic.mk:73: recipe for target '/builds/XXX/XXX/output/build/linux-d0f5c460aac292d2942b23dd6199fe23021212ad/.stamp_downloaded' failed
make: *** [/builds/XXX/XXX/output/build/linux-d0f5c460aac292d2942b23dd6199fe23021212ad/.stamp_downloaded] Error 1
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
When I build the image without Docker, this repository downloads without problems, and a while ago the same build inside Docker also worked. Could it be a problem of a poorer network connection? This package is bigger than the others.
You are using a custom git repo (git.ti.com) that is not working and that Buildroot knows nothing about.
For this reason, you cannot expect a mirror copy to be available on sources.buildroot.net: Buildroot only mirrors the packages distributed within it.
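To confirm that the repo itself (and not your build setup) is the problem, you can probe from the CI runner whether the git daemon port is reachable at all. This is a quick hand-rolled check of my own; the host and port are taken from the log above:

```shell
# Probe TCP connectivity to the git daemon port used by the failing
# clone; uses bash's /dev/tcp pseudo-device, so bash is invoked explicitly
HOST=git.ti.com
PORT=9418
if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$HOST/$PORT" 2>/dev/null; then
  echo "$HOST:$PORT reachable"
else
  echo "$HOST:$PORT unreachable"
fi
```

If this prints "unreachable" from the runner but "reachable" from your workstation, the difference is the runner's network, not Buildroot.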
For the past few months I have successfully been using my Mac to connect to a Windows SSTP VPN for work using Homebrew... until now.
I followed the instructions here:
Windows SSTP VPN - connect from Mac
and use this command:
sudo /usr/local/sbin/sstpc --log-stderr --cert-warn --user <user> --password <password> <server> usepeerdns require-mschap-v2 noauth noipdefault defaultroute refuse-eap noccp
Now, out of nowhere, I get an error:
Mar 24 12:20:50 sstpc[5481]: Could not complete write of frame
Mar 24 12:20:50 sstpc[5481]: Could not forward packet to pppd
Mar 24 12:20:50 sstpc[5481]: Could not complete write of frame
Mar 24 12:20:50 sstpc[5481]: Could not forward packet to pppd
Mar 24 12:20:51 sstpc[5481]: Connection was aborted, Reason was not known
**Error: Connection was aborted, Reason was not known, (-1)
The number in brackets (sstpc[nnnn]) varies; it isn't always what is shown above.
I've tried updating Homebrew, reinstalling sstp-client from Homebrew, and restarting my computer.
What else can I try?
There is a serious bug in sstp-client 1.0.14 which causes this [1]; you likely need to downgrade to 1.0.13.
Unfortunately, Homebrew does not have tagged versions for sstp-client, so it is a little more involved: you will need to create a local tap so you can pin the version:
$ brew uninstall sstp-client
$ brew tap-new mymac/local
$ brew extract --version 1.0.13 sstp-client mymac/local
$ brew install mymac/local/sstp-client@1.0.13
Now it should work as before.
[1] https://sourceforge.net/p/sstp-client/discussion/1499218/thread/d485651bda/?limit=25#268f/038f/4b89/f7be/ffd5
I'm trying to set up an OrientDB distributed configuration with Docker, but I'm getting an error when starting the second node:
2015-10-09 17:14:14:066 WARNI
[node1444321499719]->[[node1444321392311]] requesting deploy of
database 'testDB' on local server... [OHazelcastPlugin] 2015-10-09
17:14:14:117 INFO [node1444321499719]<-[node1444321392311] received
updated status node1444321499719.testDB=SYNCHRONIZING
[OHazelcastPlugin] 2015-10-09 17:14:14:119 INFO
[node1444321499719]<-[node1444321392311] received updated status
node1444321392311.testDB=SYNCHRONIZING [OHazelcastPlugin] 2015-10-09
17:14:15:935 WARNI [node1444321499719] moving existent database
'testDB' located in '/orientdb/databases/testDB' to
'/orientdb/databases/../backup/databases/testDB' and get a fresh copy
from a remote node... [OHazelcastPlugin] 2015-10-09 17:14:15:936 SEVER
[node1444321499719] error on moving existent database 'testDB' located
in '/orientdb/databases/testDB' to
'/orientdb/databases/../backup/databases/testDB'. Try to move the
database directory manually and retry
[OHazelcastPlugin][node1444321499719] Error on starting distributed
plugin
com.orientechnologies.orient.server.distributed.ODistributedException:
Error on moving existent database 'testDB' located in
'/orientdb/databases/testDB' to
'/orientdb/databases/../backup/databases/testDB'. Try to move the
database directory manually and retry
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.backupCurrentDatabase(OHazelcastPlugin.java:1007)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.requestDatabase(OHazelcastPlugin.java:954)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installDatabase(OHazelcastPlugin.java:893)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.installNewDatabases(OHazelcastPlugin.java:1426)
at com.orientechnologies.orient.server.hazelcast.OHazelcastPlugin.startup(OHazelcastPlugin.java:184)
at com.orientechnologies.orient.server.OServer.registerPlugins(OServer.java:979)
at com.orientechnologies.orient.server.OServer.activate(OServer.java:346)
at com.orientechnologies.orient.server.OServerMain.main(OServerMain.java:41)
I don't get this error when starting the OrientDB cluster without Docker.
Also, I can move the directory manually inside the container:
[root@64f6cc1eba61 orientdb]# mv -v /orientdb/databases/testDB
/orientdb/databases/../backup/databases/testDB
'/orientdb/databases/testDB' ->
'/orientdb/databases/../backup/databases/testDB'
'/orientdb/databases/testDB/distributed-config.json' ->
'/orientdb/databases/../backup/databases/testDB/distributed-config.json'
removed '/orientdb/databases/testDB/distributed-config.json' removed
directory: '/orientdb/databases/testDB' [root@64f6cc1eba61 orientdb]#
ls -l /orientdb/databases/../backup/databases/testDB total 4
-rw-r--r--. 1 root root 455 Oct 9 11:32 distributed-config.json [root@64f6cc1eba61 orientdb]#
I'm using OrientDB version 2.1.3
This was reported and fixed:
https://github.com/orientechnologies/orientdb/issues/4891
Set the 'distributed.backupDirectory' variable to a specific directory and the issue should be gone.
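For example, you can pass it as a JVM system property when starting the server. This is a sketch under assumptions: the backup path is arbitrary, and I'm assuming the server start script picks up extra JVM options from an environment variable such as ORIENTDB_SETTINGS; any way of passing the property to the server JVM should work.

```shell
# Sketch (assumed mechanism): set distributed.backupDirectory via a JVM
# system property; /tmp/orientdb-backup is an arbitrary writable path
export ORIENTDB_SETTINGS="-Ddistributed.backupDirectory=/tmp/orientdb-backup"
./bin/server.sh
```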
By the way, in our experience running OrientDB distributed in Docker is currently a no-go:
- Docker does not support multicast yet; you can work around it, but it's painful. The main problem, though:
- Docker doesn't reuse IP addresses on restart, so a container restart gives it a new IP address, and this messes up your cluster big time.
We abandoned using OrientDB distributed with Docker until Docker fixes both issues (I believe both are on the roadmap).
If you experience otherwise, I'm happy to hear your thoughts.
Following the instructions from the uWSGI SPDY router docs, I didn't have much luck.
I've tried it on a Vagrant instance (Linux precise64 3.2.0-23-generic #36-Ubuntu SMP Tue Apr 10 20:39:51 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux) under a virtual environment.
$ python --version
Python 2.7.9
Issuing the following command, derived from the manual, does fire up a server:
$ uwsgi --master --https2 addr==0.0.0.0:8443,cert=/home/vagrant/server.crt,key=/home/vagrant/server.key,spdy=asdf --module werkzeug.testapp:test_app --thunder-lock --socket=/tmp/uwsgi.sock --shared-socket :8443 -H /vagrant/venv/
Note that I'm forwarding host 8422 to guest 8443.
The problem is that checking https://spdy.localhost:8442/ doesn't show any of the Werkzeug variables described in the manual (SPDY, SPDY.version). The UWSGI_ROUTER variable has the value "http", if that's of any significance.
$ openssl version
OpenSSL 1.0.1 14 Mar 2012
Werkzeug Version 0.10.4
uwsgi.version '2.0.10'
I've made sure that Python doesn't produce an insecure-platform warning.
The OpenSSL version seems to be OK according to the manual.
There are no warnings or meaningful info messages in the log.
It just doesn't seem to use SPDY routing.
What might be the cause of this?
When I run rebar generate to generate a node using reltool, it fails with this error message:
ERROR: Unable to generate spec: read file info /usr/lib/erlang/man/man1/gserialver.1.gz failed
Why does that happen, and what can I do about it?
I'm running Debian squeeze (6.0.6), if that helps.
rebar prints Unable to generate spec when it gets an error message from reltool; the rest of the message comes directly from reltool. In this case, reltool is trying to get file info for various files in the Erlang directory, but fails because gserialver.1.gz is a dangling symlink.
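You can list any such dangling links yourself; this is a small helper of my own (not part of rebar or reltool), using find's `-xtype l` test, which matches symlinks whose target is missing:

```shell
# List symlinks whose target no longer exists anywhere under a directory
# tree; -xtype l checks the type *after* following the link, so it only
# matches broken symlinks here
find_dangling() {
  find "$1" -xtype l
}
# e.g.: find_dangling /usr/lib/erlang/man
```

Any path this prints under the Erlang tree is a candidate for the same reltool failure.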
You might wonder why gserialver.1.gz is installed under /usr/lib/erlang. It actually isn't, but Debian creates /usr/lib/erlang/man as a symlink to /usr/share/man:
$ ls -l /usr/lib/erlang/man
lrwxrwxrwx 1 root root 15 Nov 15 12:19 /usr/lib/erlang/man -> ../../share/man
So the real culprit is /usr/share/man/man1/gserialver.1.gz, which is installed by the package gcj-jre-headless. There is a bug report about that which claims this has been fixed; however, if that's not the case on your box, here is a command that will move the file out of the way and make rebar happy:
sudo dpkg-divert --divert /var/gserialver.1.gz --rename /usr/share/man/man1/gserialver.1.gz