SYSLOG-NG: Sending the same log to two different indexes in Elasticsearch - parsing

I'm trying to send the same log flow to two different Elasticsearch indexes, because each index is meant for users with different roles.
I also use a file destination. Here is a sample of the logs:
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] input/input.go:152 Run input
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sonda filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:191 Start next scan
2021-02-12T14:00:00+01:00 192.168.89.222 <30>1 2021-02-12T14:00:01+01:00 sensor filebeat 474 - - 2021-02-12T14:00:01.364+0100 DEBUG [input] log/input.go:421 Check file for harvesting: /opt/zeek/logs/current/weird.log
When I use only one destination for elasticsearch-http, either of the two configured, everything works fine, but when I use both destinations, syslog-ng fails to start and systemctl complains.
Here is my /etc/syslog-ng/syslog-ng.conf file:
@version: 3.27
@include "scl.conf"
options {
    chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
    dns_cache(no); owner("root"); group("adm"); perm(0640);
    stats_freq(0); bad_hostname("^gconfd$");
};
source s_net {
    udp(
        ip(0.0.0.0)
        port(514)
        flags(no-parse)
    );
};
log {
    source(s_net);
    destination(d_es);
    destination(d_es_other_index); ######## comment this to avoid the error
    destination(d_file);
};
template t_demo_filetemplate {
    template("${ISODATE} ${HOST} ${MESSAGE}\n");
};
destination d_file {
    file("/tmp/test.log" template(t_demo_filetemplate));
};
destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};
destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        type("")
        user("elastic")
        password("password")
        batch_lines(128)
        batch_timeout(10000)
        timeout(100)
        template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
        time-zone("UTC")
        tls(
            ca-file("/root/elastic_certs/elastic-ca.crt")
            cert-file("/root/elastic_certs/elastic.crt")
            key-file("/root/elastic_certs/elastic.key")
            peer-verify(no)
        )
    );
};
The error I get when using two elasticsearch destinations (journalctl -xe seems to show no relevant info):
# systemctl restart syslog-ng.service
Job for syslog-ng.service failed because the control process exited with error code.
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
And my syslog-ng info:
$ syslog-ng --version
syslog-ng 3 (3.27.1)
Config version: 3.22
Installer-Version: 3.27.1
Revision: 3.27.1-3build1
Compile-Date: Jul 30 2020 17:56:17
Module-Directory: /usr/lib/syslog-ng/3.27
Module-Path: /usr/lib/syslog-ng/3.27
Include-Path: /usr/share/syslog-ng/include
Available-Modules: syslogformat,afsql,linux-kmsg-format,stardate,affile,dbparser,geoip2-plugin,afprog,kafka,graphite,riemann,tfgetent,json-plugin,cef,hook-commands,basicfuncs,disk-buffer,confgen,timestamp,http,afamqp,mod-python,tags-parser,pseudofile,system-source,afsocket,afsnmp,csvparser,afstomp,appmodel,cryptofuncs,examples,afmongodb,add-contextual-data,afsmtp,afuser,xml,map-value-pairs,kvformat,redis,secure-logging,sdjournal,pacctformat
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on
Is there any way of sending to these two Elasticsearch indexes at the same time?

You can check the exact error message in the journal logs, as suggested by systemctl:
See "systemctl status syslog-ng.service" and "journalctl -xe" for details.
Alternatively, you can start syslog-ng in the foreground:
$ syslog-ng -F --stderr
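You can also validate the configuration without starting the service at all; with a standard installation this only checks the syntax and exits:
$ syslog-ng --syntax-only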
You probably have a persist-name collision due to the matching elasticsearch-http() URLs: syslog-ng generates a persist name from the driver's parameters (here, the URL), so two destinations pointing at the same URLs end up with identical persist names and the configuration fails to initialize. Please try adding the persist-name() option with 2 unique names, for example:
destination d_es {
    elasticsearch-http(
        index("syslog-ng-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es")
    );
};
destination d_es_other_index {
    elasticsearch-http(
        index("otherindex-${YEAR}-${MONTH}-${DAY}")
        url("https://192.168.89.44:9200/_bulk" "https://192.168.89.144:9200/_bulk")
        # ...
        persist-name("d_es_other_index")
    );
};

Related

Error: getDatabaseDefaults() failed. Do dumpStack() to see details

While following this tutorial to create an OAM container:
https://docs.oracle.com/en/middleware/idm/access-manager/12.2.1.4/tutorial-oam-docker/
I am faced with the following error:
------------ log start -------------
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
DOMAIN_HOME=/u01/oracle/user_projects/domains/access_domain
INFO: Admin Server not configured. Will run RCU and Domain Configuration Phase...
Configuring Domain for first time
Start the Admin and Managed Servers
Loading RCU Phase
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
jdbc_url=jdbc:oracle:thin:@oamDB:1521
Creating Domain 1st execution
RCU has already been loaded.. skipping
Domain Configuration Phase
/u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/dockertools/create_domain.py -oh /u01/oracle -jh /u01/jdk -parent /u01/oracle/user_projects/domains -name access_domain -user weblogic -password weblogic1 -rcuDb oamDB:1521 -rcuPrefix OAM04 -rcuSchemaPwd oamdb1234 -isSSLEnabled true
Cmd is /u01/oracle/oracle_common/common/bin/wlst.sh -skipWLSModuleScanning /u01/oracle/dockertools/create_domain.py -oh /u01/oracle -jh /u01/jdk -parent /u01/oracle/user_projects/domains -name access_domain -user weblogic -password weblogic1 -rcuDb oamDB:1521 -rcuPrefix OAM04 -rcuSchemaPwd oamdb1234 -isSSLEnabled true
Initializing WebLogic Scripting Tool (WLST) ...
Welcome to WebLogic Server Administration Scripting Shell
Type help() for help on available commands
create_domain.py called with the following inputs:
INFO: sys.argv[0] = /u01/oracle/dockertools/create_domain.py
INFO: sys.argv[1] = -oh
INFO: sys.argv[2] = /u01/oracle
INFO: sys.argv[3] = -jh
INFO: sys.argv[4] = /u01/jdk
INFO: sys.argv[5] = -parent
INFO: sys.argv[6] = /u01/oracle/user_projects/domains
INFO: sys.argv[7] = -name
INFO: sys.argv[8] = access_domain
INFO: sys.argv[9] = -user
INFO: sys.argv[10] = weblogic
INFO: sys.argv[11] = -password
INFO: sys.argv[12] = weblogic1
INFO: sys.argv[13] = -rcuDb
INFO: sys.argv[14] = oamDB:1521
INFO: sys.argv[15] = -rcuPrefix
INFO: sys.argv[16] = OAM04
INFO: sys.argv[17] = -rcuSchemaPwd
INFO: sys.argv[18] = oamdb1234
INFO: sys.argv[19] = -isSSLEnabled
INFO: sys.argv[20] = true
INFO: Creating Admin server...
INFO: Enabling SSL PORT for AdminServer...
Creating Node Managers...
Will create Base domain at /u01/oracle/user_projects/domains/access_domain
Writing base domain...
Base domain created at /u01/oracle/user_projects/domains/access_domain
Extending domain at /u01/oracle/user_projects/domains/access_domain
Database oamDB:1521
Apply Extension templates
Extension Templates added
Extension Templates added
Deleting oam_server1
The default oam_server1 coming from the oam extension template deleted
Deleting oam_policy_mgr1
The default oam_server1 coming from the oam extension template deleted
Configuring JDBC Templates ...
Configuring the Service Table DataSource...
fmwDatabase jdbc:oracle:thin:@oamDB:1521
Getting Database Defaults...
Error: getDatabaseDefaults() failed. Do dumpStack() to see details.
Error: runCmd() failed. Do dumpStack() to see details.
Problem invoking WLST - Traceback (innermost last):
File "/u01/oracle/dockertools/create_domain.py", line 513, in ?
File "/u01/oracle/dockertools/create_domain.py", line 124, in createOAMDomain
File "/u01/oracle/dockertools/create_domain.py", line 328, in extendOamDomain
File "/u01/oracle/dockertools/create_domain.py", line 259, in configureJDBCTemplates
File "/tmp/WLSTOfflineIni6456738277719198193.py", line 267, in getDatabaseDefaults
File "/tmp/WLSTOfflineIni6456738277719198193.py", line 19, in command
Failed to build JDBC Connection object:
at com.oracle.cie.domain.script.jython.CommandExceptionHandler.handleException(CommandExceptionHandler.java:69)
at com.oracle.cie.domain.script.jython.WLScriptContext.handleException(WLScriptContext.java:3085)
at com.oracle.cie.domain.script.jython.WLScriptContext.runCmd(WLScriptContext.java:738)
at sun.reflect.GeneratedMethodAccessor141.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
com.oracle.cie.domain.script.jython.WLSTException: com.oracle.cie.domain.script.jython.WLSTException: Got exception when auto configuring the schema component(s) with data obtained from shadow table:
Failed to build JDBC Connection object:
Domain Creation Failed.. Please check the Domain Logs
------------ log end -------------
I am using the following docker run command for the admin server:
docker run -d -p 7001:7001 --name oamadmin --network=OamNET --env-file /home/oam/oracle/oam-admin.env --shm-size="8g" --volume /home/oam/oracle/user_projects:/u01/oracle/user_projects oam:12.2.1.4
oam-admin.env content:
DOMAIN_NAME=access_domain
ADMIN_USER=weblogic
ADMIN_PASSWORD=weblogic1
ADMIN_LISTEN_HOST=oamadmin
ADMIN_LISTEN_PORT=7001
CONNECTION_STRING=oamDB:1521
RCUPREFIX=OAM04
DB_USER=sys
DB_PASSWORD=oamdb1234
DB_SCHEMA_PASSWORD=oamdb1234
The Oracle database is created using:
docker run -d --name oamDB --network=oamNET -p 1521:1521 -p 5500:5500 -e ORACLE_PWD=db1 -v /home/oam/user/host/dbtemp:/opt/oracle/oradata --env-file /home/oam/oracle/env.txt -it --shm-size="8g" -e ORACLE_EDITION=enterprise -e ORACLE_ALLOW_REMOTE=true oamdb:19.3.0
I am able to connect to the DB using docker.
I have also executed the following successfully:
alter user sys identified by oamdb1234 container=all;
Containers running in docker:
oam@botrosubuntu:~/oracle$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d620fef9ddfc oamdb:19.3.0 "/bin/sh -c 'exec $O…" 9 days ago Up 3 hours (healthy) 0.0.0.0:1521->1521/tcp, 0.0.0.0:5500->5500/tcp oamDB
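As a basic sanity check (a diagnostic sketch only, using the container names from the question), you can verify from the admin container that the database hostname resolves and the listener answers on port 1521:
docker exec -it oamadmin bash -c 'echo > /dev/tcp/oamDB/1521 && echo reachable'
If this fails, the two containers are probably not attached to the same Docker network.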

Error in adding 4th organization in to Hyperledger Fabric 2.0

I am new to Fabric 2.0 and recently installed all the samples. I was able to run test-network without an issue with 2 orgs. Then I followed the addOrg3 directory to add a 3rd organization and join the channel I created earlier.
The fun part came when I wanted to add a 4th organization. What I did was copy the addOrg3 folder and rename almost everything in each file to represent the 4th organization. I even assigned a new PORT for this organization. However, I am seeing the following error.
I've also added the following in Scripts/envVar.sh
export PEER0_ORG4_CA=${PWD}/organizations/peerOrganizations/org4.example.com/peers/peer0.org4.example.com/tls/ca.crt
And added the following in envVarCLI.sh:
elif [ $ORG -eq 4 ]; then
    CORE_PEER_LOCALMSPID="Org4MSP"
    CORE_PEER_TLS_ROOTCERT_FILE=$PEER0_ORG4_CA
    CORE_PEER_ADDRESS=peer0.org4.example.com:12051
    CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/organizations/peerOrganizations/org4.example.com/users/Admin@.../msp
I have also added step1Org4.sh and step2Org4.sh, basically following addOrg3's structure.
What steps do you follow to add additional organizations? Please help.
"No such container: Org4cli"
Sorry for the formatting, since I wasn't able to put it into code style, but here is the output from running the command "./addOrg4.sh up":
Add Org4 to channel 'mychannel' with '10' seconds and CLI delay of '3' seconds and using database 'leveldb'
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/cryptogen
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
##########################################################
############ Create Org4 Identities ######################
##########################################################
+ cryptogen generate --config=org4-crypto.yaml --output=../organizations
org4.example.com
+ res=0
+ set +x
Generate CCP files for Org4
Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/../../bin/configtxgen
##########################################################
####### Generating Org4 organization definition #########
##########################################################
+ configtxgen -printOrg Org4MSP
2020-05-29 13:33:04.609 EDT [common.tools.configtxgen] main -> INFO 001 Loading configuration
2020-05-29 13:33:04.617 EDT [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 002 Loaded configuration: /Desktop/blockchain/BSI/fabric-samples/test-network/addOrg4/configtx.yaml
+ res=0
+ set +x
###############################################################
####### Generate and submit config tx to add Org4 #############
###############################################################
Error: No such container: Org4cli
ERROR !!!! Unable to create config tx
In your addOrg4.sh you have a condition check like this:
CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
    echo "Bringing up network"
    Org4Up
fi
If you have already run addOrg3.sh up, CONTAINER_IDS will always have a value (for example: 51b4ad60d812): it is the container ID of Org3cli, so the Org4Up function will never be called. A simple fix is to comment out the check like this:
# CONTAINER_IDS=$(docker ps -a | awk '($2 ~ /fabric-tools/) {print $1}')
# if [ -z "$CONTAINER_IDS" -o "$CONTAINER_IDS" == " " ]; then
echo "Bringing up network"
Org4Up
# fi
This will bring up the Org4cli container you are missing.
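To confirm the fix, you can check that the CLI container actually exists before the config tx step runs (a quick sanity check, not part of the original scripts):
docker ps --filter name=Org4cli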
First check whether the container is up or not; if it is up, then the CLI where the command is executed is probably not bootstrapped with the Org4 details.
I have added a 4th organization to a three-org Hyperledger Fabric network. First, you have to create the Org4 artifacts (the crypto yaml and the Org4 docker file, including the Org4 CLI), and then follow the manual (step-by-step) process for adding a new organization from the official documentation:
https://hyperledger-fabric.readthedocs.io/en/release-2.0/channel_update_tutorial.html
Omit the process of editing the scripts (step1Org3.sh, ...) because the workflow for adding a 4th or new org is slightly different, so you would spend a lot of time just modifying the scripts.
I will write an article about adding a new (4th) org on Medium and will paste the link here too.
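For reference, the manual flow in that tutorial starts by fetching the current channel configuration from inside a CLI container. A sketch of that first step (the flags follow the tutorial's test-network defaults, and $ORDERER_CA is assumed to be set in the CLI environment):
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel --tls --cafile $ORDERER_CA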

How to capture `systemctl status` and `journalctl -xe` from a kitchen run inside a Jenkins job

I have a Jenkins job running kitchen converge that is giving the following error:
---- Begin output of /bin/systemctl restart docker ----
STDOUT:
STDERR: Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
---- End output of /bin/systemctl restart docker ----
I'm looking for a way to capture the output of both commands above so I can diagnose what's wrong with the cookbook in that environment.
Running kitchen converge locally gives no error and I don't have access to the Jenkins node where this job is running.
I could get output by adding the following code:
require 'mixlib/shellout'

# On run failure, capture the output of systemctl and journalctl
# and write it to the Chef log so it shows up in the Jenkins console.
Chef.event_handler do
  on :run_failed do
    systemctl = Mixlib::ShellOut.new("/bin/systemctl status docker.service")
    systemctl.run_command
    Chef::Log.info "Recipe failed miserably"
    Chef::Log.info systemctl.stdout
    Chef::Log.info systemctl.stderr

    journalctl = Mixlib::ShellOut.new("/bin/journalctl -xe")
    journalctl.run_command
    Chef::Log.info journalctl.stdout
    Chef::Log.info journalctl.stderr
  end
end
This works, but it doesn't seem optimal.
You didn't specify which Chef resource executes the commands above, but generally speaking, run chef-client with the debug log level:
$ chef-client --log_level debug
You can achieve this by setting the log level in the Kitchen provisioner:
---
provisioner:
  name: chef_zero
  log_level: :debug
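Alternatively, for a one-off run you can raise the log level from the Test Kitchen command line (this controls Kitchen's own logging; the provisioner setting above is what gets passed to chef-client):
$ kitchen converge --log-level debug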

Monitor a service running on a port other than 80 in Nagios

How do we monitor a remote service running on a machine using Nagios?
I have created a cfg file as follows:
define command {
    command_name check_http
    command_line /usr/lib64/nagios/plugins/check_http -H $HOSTADDRESS$ -p 8082
}
Now when I reload the configuration file, it throws following error:
Warning: Duplicate definition found for command 'check_http' (config file '/etc/nagios/servers/cfbase-prod.cfg', starting on line 19)
Error: Could not add object property in file '/etc/nagios/servers/cfbase-prod.cfg' on line 20.
Error processing object config files!
I am not able to figure out what the problem is.
Please help!
The basic problem is that the command_name value conflicts with the original/standard check_http command. You have (at least) a couple of choices:
Set a unique command_name, e.g. check_http_8082.
Define a command to check http on an arbitrary port that gets passed as an argument. E.g.
define command {
    command_name check_http_port
    command_line /usr/lib64/nagios/plugins/check_http -H $HOSTADDRESS$ -p $ARG1$
}
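A service definition can then pass the port after the ! separator. A hypothetical example (host name and service description are illustrative):
define service {
    use                 generic-service
    host_name           cfbase-prod
    service_description HTTP on port 8082
    check_command       check_http_port!8082
}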

How to use Docker in sbt-native-packager 0.8.0-M2 with Play

I am trying to build a Docker image on a Play 2.2 project. I am using Docker version 1.2.0 on Ubuntu Linux.
My Docker-specific settings in Build.scala look like this:
dockerBaseImage in Docker := "dockerfile/java:7"
maintainer in Docker := "My name"
dockerExposedPorts in Docker := Seq(9000, 9443)
dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
Generated Dockerfile:
FROM dockerfile/java:latest
MAINTAINER
ADD files /
WORKDIR /opt/docker
RUN ["chown", "-R", "daemon", "."]
USER daemon
ENTRYPOINT ["bin/device-guides"]
CMD []
The output looks like dockerBaseImage is being ignored, and the default (dockerfile/java:latest) is not handled correctly:
[project] $ docker:publishLocal
[info] Wrote /..../project.pom
[info] Step 0 : FROM dockerfile/java:latest
[info] ---> bf7307ff060a
[info] Step 1 : MAINTAINER
[error] 2014/10/07 11:30:12 Invalid Dockerfile format
[trace] Stack trace suppressed: run last docker:publishLocal for the full output.
[error] (docker:publishLocal) Nonzero exit value: 1
[error] Total time: 2 s, completed Oct 7, 2014 11:30:12 AM
[project] $ run last docker:publishLocal
java.lang.RuntimeException: Invalid port argument: last
at scala.sys.package$.error(package.scala:27)
at play.PlayRun$class.play$PlayRun$$parsePort(PlayRun.scala:52)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at play.PlayRun$$anonfun$play$PlayRun$$filterArgs$2.apply(PlayRun.scala:69)
at scala.Option.map(Option.scala:145)
at play.PlayRun$class.play$PlayRun$$filterArgs(PlayRun.scala:69)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:97)
at play.PlayRun$$anonfun$playRunTask$1$$anonfun$apply$1.apply(PlayRun.scala:91)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:35)
at scala.Function7$$anonfun$tupled$1.apply(Function7.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
[trace] Stack trace suppressed: run last compile:run for the full output.
[error] (compile:run) Invalid port argument: last
[error] Total time: 0 s, completed Oct 7, 2014 11:30:16 AM
What needs to be done to make this work?
I am able to build the image using Docker from the command line:
docker build --force-rm -t device-guides:1.0-SNAPSHOT .
Packaging/publishing settings are per-project settings, rather than per-build settings.
You were using a Build.scala style build, with a format like this:
object ApplicationBuild extends Build {
  val main = play.Project(appName, appVersion, libraryDependencies).settings(
    ...
  )
}
The settings should be applied to this main project. This means that you call the settings() method on the project, passing in the appropriate settings to set up the packaging as you wish.
In this case:
object ApplicationBuild extends Build {
  val main = play.Project(appName, appVersion, libraryDependencies).settings(
    dockerBaseImage in Docker := "dockerfile/java:7",
    maintainer in Docker := "My name",
    dockerExposedPorts in Docker := Seq(9000, 9443),
    dockerExposedVolumes in Docker := Seq("/opt/docker/logs")
  )
}
To reuse similar settings across multiple projects, you can either create a val of type Seq[sbt.Setting], or extend sbt.Project to provide the common settings. See http://jsuereth.com/scala/2013/06/11/effective-sbt.html for some examples of how to do this (e.g. Rule #4).
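For instance, a minimal sketch of the shared-val approach (the val name is illustrative):
val commonDockerSettings: Seq[sbt.Setting[_]] = Seq(
  maintainer in Docker := "My name",
  dockerExposedPorts in Docker := Seq(9000, 9443)
)
Each project can then apply these with .settings(commonDockerSettings: _*).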
This placement of settings is not necessarily clear if one is used to using build.sbt-type builds instead, because in that file, a line that evaluates to an SBT setting (or sequence of settings) is automatically appended to the root project's settings.
You executed the wrong command; I didn't see it the first time:
run last docker:publishLocal
Remove the run last:
docker:publishLocal
Now your Docker image builds as expected.
