Graylog: How to import Apache log files into the Graylog server

I have a specific need: knowing how to "import" log files I receive from someone into Graylog. My need is not about 'sending' logs or configuring a collector that will send logs to Graylog.
I need to know if I can copy a TAR with logs onto the Graylog server and render its contents via the Graylog web UI.
I have read many blogs, and I am having difficulty finding guidance for my specific need.
Your help is greatly appreciated.

As far as I know it is not possible to import logs directly, but you can use fluentd (http://www.fluentd.org/guides/recipes/graylog2) to read log files.
BUT if you want to send log files from Apache to Graylog, try this: add the following line to your apache2.conf:
LogFormat "{ \"version\": \"1.1\", \"host\": \"%V\", \"short_message\": \"%r\", \"timestamp\": %{%s}t, \"level\": 6, \"_user_agent\": \"%{User-Agent}i\", \"_source_ip\": \"%a\", \"_duration_usec\": %D, \"_duration_sec\": %T, \"_request_size_byte\": %O, \"_http_status\": %s, \"_http_request_path\": \"%U\", \"_http_request\": \"%U%q\", \"_http_method\": \"%m\", \"_http_referer\": \"%{Referer}i\" }" graylog2_access
and add the following line to your virtual host file:
CustomLog "|/bin/nc -u syslogserver.example.de 50520" graylog2_access
Also take a look here: https://serverfault.com/questions/310695/sending-logs-to-graylog2-server

You could try the community edition of nxlog. With nxlog you can load your log files with im_file, parse them a bit, and get them into GELF format, which should make them easier to search in Graylog2. If you set SavePos and ReadFromLast to FALSE, it will pull in the entire log file any time you kick off nxlog, regardless of when the log happened, or even if it has already been entered into Graylog2 before.
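A minimal nxlog.conf sketch of that approach, assuming the community edition with its default module set (the file path and the Graylog host/port are placeholders, adjust for your environment):

<Extension gelf>
    Module  xm_gelf
</Extension>

# Read the whole file from the beginning on every start (SavePos/ReadFromLast FALSE)
<Input apache_in>
    Module        im_file
    File          "/var/log/apache2/access.log"
    SavePos       FALSE
    ReadFromLast  FALSE
</Input>

# Ship the events to Graylog as GELF over UDP
<Output graylog_out>
    Module      om_udp
    Host        graylog.example.com
    Port        12201
    OutputType  GELF
</Output>

<Route apache_to_graylog>
    Path apache_in => graylog_out
</Route>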

Why isn't telegraf reading environment variables?

My goal is to put my telegraf config into source control. To do so, I have a repo in my user's home directory with the appropriate config file which has already been tested and proven working.
I have added the path to the new config file in the "default" environment variables file:
/etc/default/telegraf
like this:
TELEGRAF_CONFIG_PATH="/home/ubuntu/some_repo/telegraf.conf"
... as well as other required variables such as passwords.
However, when I attempt to run
telegraf --test
it says "No config file specified, and could not find one in $TELEGRAF_CONFIG_PATH", etc.
Further, if I force it with
telegraf --test --config /home/ubuntu/some_repo/telegraf.conf
then the process fails because it is missing the other required variables.
Questions:
What am I doing wrong?
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
What am I doing wrong?
See what user:group owns /etc/default/telegraf. This file is better used when running telegraf as a service via systemd. Additionally, if you run env do you see the TELEGRAF_CONFIG_PATH variable? What about your other variables? If not, then you probably need to source the file first.
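For example, a quick check (assuming a standard package install):
$ ls -l /etc/default/telegraf
$ env | grep TELEGRAF
If the grep prints nothing, the variables are not set in your current shell, and the file needs to be sourced first.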
Is there not also a way of specifying a config directory too (I would like to break my file down into separate input files)?
Yes! Take a look at all the options of telegraf with telegraf --help and you will find:
--config-directory <directory> directory containing additional *.conf files
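For example, keeping the main file from the question and splitting the inputs out into a directory (the conf.d path is an assumption):
$ telegraf --config /home/ubuntu/some_repo/telegraf.conf --config-directory /home/ubuntu/some_repo/conf.d --test
Every *.conf file in that directory is loaded in addition to the main configuration file.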
Perhaps as an alternative to all of this... is there not a way of specifying additional configuration files to be included from within the default /etc/telegraf/telegraf.conf file? (I've been unable to find any mention of this in documentation).
That is not the route I would suggest going down. Check out the --config-directory option I mentioned above.
Ok, after a LOT of trial and error, I figured everything out. For those facing similar issues, here is your shortcut to the answer:
Firstly, remember that when adding variables to the /etc/default/telegraf file, the service must be reloaded for them to take effect. For example, using systemd on Ubuntu, that requires a restart.
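For example (assuming the default unit name installed by the telegraf package):
$ sudo systemctl restart telegraf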
You can verify that the variables have been loaded successfully using this:
$ sudo strings /proc/<pid>/environ
where <pid> is the "Main PID" from the telegraf status output
Secondly, when testing (e.g. telegraf --test), then (this is the part that is not necessarily intuitive and isn't documented) you will have to ALSO load the same environment variables into the current user's shell (e.g. export var=value), such that running
$ env
shows the same results as the previous command.
Hint: sourcing the environment file directly, as sketched below, is a good method, rather than setting each variable manually.
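A minimal sketch of that, assuming a bash shell:
$ set -a                        # auto-export every variable assigned while this is on
$ source /etc/default/telegraf  # loads TELEGRAF_CONFIG_PATH and the other variables
$ set +a
$ telegraf --test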

When I include this adminAuth code in the Node-RED settings.js, I get the following error. When I comment it out, it's fine. What's wrong?

The entire settings file is the default settings file produced by Node-RED, with the addition of:
//COMMENT TEST HERE
adminAuth: require("./node_modules/node-red-contrib-okta/user-authentication")({
    oktaAPIToken: process.env.OKTA_TOKEN,
    oktaAPIUrl: process.env.OKTA_URL, // Okta API URL
    groups: [
        {
            groupID: process.env.OKTA_GROUP_ID, // Okta group ID
            permissions: '*'
        }
    ]
}),
When I include this code, I get this error:
docker-node-red-web-1 | 2022-06-07T20:24:24: PM2 log: App [node-red:0] exited with code [0] via signal [SIGINT]
docker-node-red-web-1 | 2022-06-07T20:24:24: PM2 log: App [node-red:0] starting in -fork mode-
docker-node-red-web-1 | 2022-06-07T20:24:24: PM2 log: App [node-red:0] online
When I do not include it, everything works smoothly. What is going wrong?
I can include the authentication code if it helps anyone responding, but I doubt it's really the root of the issue.
Thanks
There isn't enough information in the question to answer this.
The settings.js needs to be a properly formatted JavaScript source file. The easiest way to test this is to manually load the settings file in Node.js.
Assuming you have a local directory mounted as /data which contains the settings.js file, do the following:
cd into the volume directory
export all the required environment variables
run node
type require('./settings.js') and press Enter
This should give you an exact error message for what is wrong with your settings.js file.
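Concretely, that session might look like this (the variable values are placeholders for your real secrets):
$ cd /data
$ export OKTA_TOKEN=xxxx OKTA_URL=https://example.okta.com OKTA_GROUP_ID=yyyy
$ node
> require('./settings.js')
If the file is malformed, node prints the exact parse or require error instead of the exported settings object.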
The solution ended up being the location of certain directories!
A little hard to explain over text, but some things had to be shuffled around.

Perfino agent is not logging to a file even when I use the logFile directive

I do something like this:
-javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,name=${HSTNAMESHORT}-${APPNAME},group=${YBENV}/${HSTNAMESHORT},logMBean=10,logFile=${LOG_DIR}/perfinologs/${HSTNAMESHORT}-${APPNAME}.log
Basically I want the log files to be created in the app's log directory, not in the user's home directory,
but it seems like the log file isn't being created, either with the logFile argument or without it!
I'm using Java 11, if that makes any difference.
Found the answer - I had a competing Java agent that was loading before it.
After I changed the order, both Java agents worked.
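For illustration, the fix amounts to reordering the -javaagent flags on the JVM command line so that perfino loads first (the other agent's path and the trailing perfino options here are placeholders):
# before: the other agent loaded first and the perfino log never appeared
java -javaagent:/opt/other/agent.jar -javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,... -jar app.jar
# after: perfino loads first and writes its log file
java -javaagent:/usr/local/lib/perfino/perfino.jar=server=ybperfino,... -javaagent:/opt/other/agent.jar -jar app.jar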

How can you store the logs generated by running Flume in debug mode in a file?

When I run Flume using the command:
bin/flume-ng agent --conf conf --conf-file flume.conf --name agentName -Dflume.root.logger=INFO,console
it runs, listing all its log data on the console. I would like to store all this log data (Flume's log data) in a file. How do I do it?
You need to make a custom build of Flume which uses log4j2.
You configure log4j2 to use a rolling file appender that rolls every minute (or whatever the latency is that you desire) to a spooling directory.
You configure Flume to use a SpoolingDirectorySource against that spooling directory.
You can't use a direct Flume appender (such as what's in log4j2) to log Flume because you will get into deadlock.
You can't use log4j1 with a rolling file appender because it has a concurrency defect which means it may write new messages to an old file and the SpoolingDirectorySource then fails.
I can't remember if I tried the Log4j appender from Flume with this setup. That appender does not have many ways to configure it and I think it will cause you problems if the subsequent agent you're trying to talk to is down.
Another approach might be to patch log4j1 and fix that concurrency defect (there's a variable that needs to be made volatile).
(Yes, setting this up is a little frustrating!)
Don't run with -Dflume.root.logger=INFO,console; without that console override, Flume will log to ./logs by default.
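For example, assuming the stock conf/log4j.properties that ships with Flume (which defines a LOGFILE appender writing under ./logs):
bin/flume-ng agent --conf conf --conf-file flume.conf --name agentName -Dflume.root.logger=INFO,LOGFILE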

Removing nonexistent-file errors from logs in Symfony 1.4

In the log file of my Symfony project I have errors for nonexistent files, and after a short time I end up with a big log file (> 50 MB). I don't know what I can do to resolve this problem; I have never worked with this kind of problem before.
php symfony log:rotate [--history="..."] [--period="..."] application env
You can find this documented at:
http://www.symfony-project.org/reference/1_4/en/16-Tasks
On Unix systems, you can run this as a cron job, with the same formatting that you would use for any other task.
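For example, a crontab entry might look like this (the project path, the application name frontend, and the environment prod are assumptions):
# rotate logs daily at midnight, keeping 10 old files, with a 7-day period
0 0 * * * /usr/bin/php /var/www/myproject/symfony log:rotate --history=10 --period=7 frontend prod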
