PDF report in zabbix 2.2.9 - monitoring

I am using Zabbix for the first time and have successfully configured it. The next very important step is to generate a report and download it as a PDF. There are a few links that describe how to do this, but I am unable to get the desired output.
https://www.zabbix.com/forum/showthread.php?t=24998
Please help with some solutions.

zabbix-dynamic-pdf-report
The zabbix-dynamic-pdf-report module allows us to generate PDF reports. Once implemented, we can generate reports for a 'Host' or 'Host Group' over a time range of 'Hour', 'Day', 'Week', 'Month', or 'Year'.
Old reports can be retrieved from the 'Old Reports' section.
Implementation Dependencies
php5-curl
php5-json
sudo apt-get install php5-curl php5-json
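To verify the extensions are actually loaded, a quick check (this assumes the PHP CLI uses the same extension set as Apache):
php -m | grep -E 'curl|json'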
Clone the git repo that contains the module
cd /opt/
git clone https://github.com/SandipSingh14/Zabbix_
Configure zabbix-dynamic-pdf-report to match your Zabbix server
vim Zabbix_/zabbix-dynamic-pdf-report/config.inc.php
<?php
//CONFIGURABLE
# zabbix server info (user must have API access)
$z_server = 'http://zabbix.example.com/';
$z_user = 'admin';
$z_pass = 'zabbix';
# Temporary directory for storing pdf data and graphs - must exist
$z_tmp_path = './tmp';
# Directory for storing PDF reports
$pdf_report_dir = './report';
# Root URL to reports
$pdf_report_url = "./report";
# paper settings
$paper_format = 'A4'; // formats supported: 4A0, 2A0, A0 -> A10, B0 -> B10, C0 -> C10, RA0 -> RA4, SRA0 -> SRA4, LETTER, LEGAL, EXECUTIVE, FOLIO
$paper_orientation = 'portrait'; // formats supported: portrait / landscape
# time zone - see http://php.net/manual/en/timezones.php
$timezone = 'Asia/Calcutta';
# Logo used in PDF - may be empty
# TODO: Specify image size!
$pdf_logo = './images/zabbix.png';
$company_name = 'Zabbix';
//DO NOT CHANGE BELOW THIS LINE
$z_tmp_cookies = "/tmp/";
$z_url_index = $z_server ."index.php";
$z_url_graph = $z_server ."chart2.php";
$z_url_api = $z_server ."api_jsonrpc.php";
$z_login_data = "name=" .$z_user ."&password=" .$z_pass ."&autologin=1&enter=Sign+in";
?>
Change into the module directory and create the tmp and report directories
cd Zabbix_/zabbix-dynamic-pdf-report
mkdir tmp report
Change the Zabbix auth call from user.authenticate to user.login; this is required to log in to the Zabbix server
sed -i 's,user.authenticate,user.login,g' inc/ZabbixAPI.class.php
sed -i 's,user.authenticate,user.login,g' inc/ZabbixAPI.class.php.org
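At this point it is worth confirming that the credentials from config.inc.php can actually authenticate against the API. A quick sanity check with curl against the standard Zabbix JSON-RPC endpoint, using the example URL and credentials from the config above:
curl -s -X POST -H 'Content-Type: application/json-rpc' -d '{"jsonrpc":"2.0","method":"user.login","params":{"user":"admin","password":"zabbix"},"id":1}' http://zabbix.example.com/api_jsonrpc.php
A successful response contains a session token in the "result" field; an error here means the report module will not be able to log in either.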
Copy zabbix-dynamic-pdf-report into /usr/share/zabbix/
cp -r /opt/Zabbix_/zabbix-dynamic-pdf-report /usr/share/zabbix/
Restart apache
service apache2 restart
Procedure to Generate Reports
Open the Generate PDF Report page:
http://<zabbix-server>/zabbix-dynamic-pdf-report/index.php
Once it is open, choose the report type ('HOST' or 'HOSTGROUP') and select the host name or host group name from the dropdown.
Choose the report range ('LAST' or 'CUSTOM') and select the time period of the report from the dropdown.
If you select 'CUSTOM' as the report range, you can pick the exact time period for which you want to generate the report.
Click the 'GENERATE' button and your report will be generated.
The module also keeps old reports, i.e. you can view the reports you have generated earlier.

Related

Is it possible to automatically save playlists (to files) in Rhythmbox

Besides the question in the title, I would like to explain my motivation; maybe there is another solution for my situation.
I work at different stations of a small local network. I usually work at station 3, where I listen to music while I work and where I add new songs to my playlists.
If, for a couple of days, I have to work at station 5, I would like to listen to the music saved in one of my playlists. In order to do so, I have to save the playlist to a file at station 3 and then import it at station 5, but sometimes I forget to do it, and when I'm already at station 5 I have to go back to station 3 and save the playlist.
So, one part is the question asked in the title, and another would be how to automatically update or import the saved playlist (at station 5, or any other).
Thanks.
OK, here is how I solved my issue. First I have to explain how my network is set up:
5 computers in the network. Station 1 is the "file server", providing this service via NFS (all computers in the network are Linux). Stations 2 to 5 mount directories as set in the "/etc/fstab" file, for example:
# File server
fileserv:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Music /home/LOCALUSER/Music nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Video /home/LOCALUSER/Video nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Downloads /home/LOCALUSER/Downloads nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
fileserv:/home/REMOTEUSER/Images /home/LOCALUSER/Images nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
NOTE: if your server is not in the /etc/hosts file, you can use its IP instead, like:
192.168.1.1:/home/REMOTEUSER/Documents /home/LOCALUSER/Documents nfs4 rsize=8192,wsize=8192,timeo=14,intr,_netdev 0 0
etc...
With the above in mind: on station 3 I have set up an hourly cron job that runs the following command. (I could not find a way to execute a script on logout; I usually just turn off the machine, which does not run the script. If I put the script in rc6.d, the problem is that station 3's root user is not allowed on station 1 (the file server), and the "local user" of station 3 is already logged out.)
crontab -l
# m h dom mon dow command
0 * * * * cp /home/USER/.local/share/rhythmbox/playlists.xml /home/USER/Documents/USER/musiclists/
To recover the playlists from station 3, I created the following script on station 5:
File: .RhythmboxPlaylists.sh
#!/bin/sh
### Modify variables as needed
REMUS="USER" #Remote user
LOCUS="USER" #Local user
### Rhythmbox play list location saved from station 3
ORIGPL="/home/$LOCUS/Documents/$LOCUS/musiclists/playlists.xml"
#### Local Rhythmbox play list location
DESTPL="/home/$LOCUS/.local/share/rhythmbox/playlists.xml"
### DO NOT MODIFY FROM THIS LINE DOWN
sed -i "s/home\/$REMUS\//home\/$LOCUS\//g" $ORIGPL
mv $ORIGPL $DESTPL
Set the file as executable
chmod +x .RhythmboxPlaylists.sh
Add the following line:
sh $HOME/.RhythmboxPlaylists.sh
at the end of .bashrc to run it at user login (then save .bashrc).
Then, when I open Rhythmbox in station 5 I have the same playlists with the same songs as in station 3.
I finally came up with a partial solution. It is partial because it covers only the "automatically saving Rhythmbox playlists to files" part. I still don't know how to automatically load playlists from files into Rhythmbox... let's see the script I've created (which you can run either when starting or when shutting down your system):
File: playlist.sh
#!/bin/sh
#Variables [Replace USER with your Linux user and set playlistDir to whatever suits you best]
playlistXml="/home/USER/.local/share/rhythmbox/playlists.xml"
playlistDir="/home/USER/musiclists"
# Create a file per list
xmlstarlet sel -t -v 'rhythmdb-playlists/playlist/@name' -nl "$playlistXml" |
while read name; do
xmlstarlet sel -t --var name="'$name'" -v 'rhythmdb-playlists/playlist[@name = $name]' "$playlistXml" > "$playlistDir/$name.pls"
#Delete empty lines from generated files
sed -i "/^$/d" "$playlistDir/$name.pls"
#Add line numbers to define file number
cat -n "$playlistDir/$name.pls" > tmp
mv tmp "$playlistDir/$name.pls"
#Add file header
songs=$(wc -l < "$playlistDir/$name.pls")
sed -i "1i \[playlist\]\nX-GNOME-Title=$name\nNumberOfEntries=$songs" "$playlistDir/$name.pls"
done
#Format playlist
sed -i -r "s/^\s+([0-9]+)\s+file:(.*)$/File\1=file:\2\nTitle\1=/g" $playlistDir/*.pls
Set the file as executable: chmod +x playlist.sh
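For reference, the file this script produces for a hypothetical playlist named "Rock" with two songs would look roughly like this (the song paths are made up; the layout follows from the header and sed rewrites above):
[playlist]
X-GNOME-Title=Rock
NumberOfEntries=2
File1=file:///home/USER/Music/song-one.mp3
Title1=
File2=file:///home/USER/Music/song-two.mp3
Title2=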
I have implemented another, user-based solution. For this to work you need to log into the different workstations with the same user.
Close Rhythmbox on the stations/users involved.
In the user directory located on the file server create a new subdirectory, let's call it rhythmbox.
Inside the newly created rhythmbox subdirectory, create two new subdirectories, cache and share.
From the workstation where you usually manage Rhythmbox, that is, where you create and maintain playlists, move the Rhythmbox cache to the file server cache directory:
# mv $HOME/.cache/rhythmbox //file-server/home/USER/rhythmbox/cache/
Move the Rhythmbox shared directory to the file server:
# mv $HOME/.local/share/rhythmbox //file-server/home/USER/rhythmbox/share/
Where the original directories were, create symbolic links:
a1. # cd $HOME/.cache/
a2. # ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox
b1. # cd $HOME/.local/share/
b2. # ln -s //file-server/home/USER/rhythmbox/share/rhythmbox
On the other stations, remove the Rhythmbox cache and share directories and replace them with the symbolic links.
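On each of those stations this amounts to something like the following (same paths as above; adjust USER and the server path to your setup):
# rm -rf $HOME/.cache/rhythmbox $HOME/.local/share/rhythmbox
# ln -s //file-server/home/USER/rhythmbox/cache/rhythmbox $HOME/.cache/rhythmbox
# ln -s //file-server/home/USER/rhythmbox/share/rhythmbox $HOME/.local/share/rhythmbox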
Then, the next time you open Rhythmbox from any station, logging in with the same user, the application will access the same data, so the settings and playlists will be the same on all stations.

massaging packages that install files outside of nix-store

I'm using nix package-manager on macOS (Sierra).
My intention is to write a nix expression that will install the existing fish nix package along with the Bass fish plugin.
There are no existing expressions in nixpkgs for Bass, but the git repo contains a Makefile. This Makefile attempts to copy files to the $HOME dir. This is a problem as installing files outside of the nix-store is clearly not desirable and $HOME is not set when I build my package.
I can recognise why it's not desirable for nix packages to install files outside of the nix-store - in functional programming terms it's akin to a side-effect. But I'm also not clear on how to solve my problem:
By default Fish requires plugins such as Bass to be installed under $HOME/.config/fish/. Fish does provide a means to customise the config path by specifying the environment variable XDG_CONFIG_HOME. So I was thinking of doing something like this:
Create an expression for Bass patching the Makefile to install the files under $out.
Create an expression that installs fish and uses Bass as a build input. Use wrapProgram to set XDG_CONFIG_HOME pointing to the Bass install path in the nix-store.
Does this sound like the right approach? Are there alternative/better ways of solving this?
Thanks
This is the solution that I have gone with:
Expression for bass:
nix_local/pkgs/fish_plugins/bass/default.nix
{stdenv, fetchFromGitHub}:
let
  version = "0.0.1";
in
stdenv.mkDerivation rec {
  name = "bass-${version}";
  src = fetchFromGitHub {
    owner = "edc";
    repo = "bass";
    rev = "1fbf1b66f52026644818016015b8fa9e0f639364";
    sha256 = "12bp8zipjbikasx20yz29ci3hikw0ksqlbxbvi2xgi4g6rmj7pxp";
  };
  patchPhase = ''
    substituteInPlace Makefile --replace \
      "~/.config/fish" \
      $out/.config/fish
  '';
}
Expression for fish_with_config:
nix_local/pkgs/fish_with_config/default.nix
{stdenv, fish, bass, makeWrapper}:
let
  version = "0.0.1";
in
stdenv.mkDerivation rec {
  name = "fish-with-config-${version}";
  src = ./.;
  buildInputs = [fish bass makeWrapper];
  installPhase = ''
    mkdir -p $out/.config/fish/functions
    cp -r $src/.config/* $out/.config
    cp -r ${bass}/.config/fish/functions/* \
      $out/.config/fish/functions/
    mkdir -p $out/bin
    ln -s ${fish}/bin/fish $out/bin/fish
    wrapProgram $out/bin/fish --set XDG_CONFIG_HOME "$out/.config"
  '';
}
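For completeness, one way to build and install the result (this assumes you wire the two expressions together with callPackage; the exact entry point is up to you):
nix-build -E 'with import <nixpkgs> {}; callPackage ./nix_local/pkgs/fish_with_config { bass = callPackage ./nix_local/pkgs/fish_plugins/bass {}; }'
nix-env -i ./result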
The Fish program is wrapped so that its config can be stored in the nix-store. This enables us to symlink the functions from Bass and also copy any additional config files from the local $src dir. Additional plugins could be symlinked in the same way.
The local src dir for the derivation contains the following files:
pkgs/fish_with_config
├── .config
│   └── fish
│       ├── fishd.8c8590486f8c
│       └── functions
└── default.nix
The .config/fish/fishd.8c8590486f8c file is a "universal variable file" which Fish requires in order to operate. In a standard Fish installation this file is stored under ~/.config/fish/ and is created the first time you enter interactive mode. The contents of this file would typically change over time as users interact with Fish settings.
The fish_with_config derivation stores the Fish config in the nix-store, which means it can't be modified at a later date (it is not writable). This means all the config settings need to be done up front, as any attempt by the user to modify the settings will result in permission errors - this is obviously a little inconvenient, but not a show stopper for me.
It's probably worth noting that the universal variable file may change with different releases of Fish, so if I were to build fish_with_config with a newer version of Fish I would first determine its default content by running fish in a nix-shell and inspecting the auto-generated file under ~/.config/fish/.
In summary, the above works nicely: I have access to bass and any additional user-defined functions I choose to "bake in" (pkgs/fish_with_config/.config/fish/functions).
If you see anything that could be improved or handled more idiomatically let me know.

Interpreting Fortify results file (.fpr) through command line

As part of automating the process of running secure code analysis, I have a Jenkins job which uses the sourceanalyzer command line tool to generate an .fpr results file. At the moment I'm opening this results file in Audit Workbench application to view the results and check if there's any newly introduced issues etc, and generating a report from there in PDF/XML format.
Does anyone know if it is possible to invoke Audit Workbench through the command line and generate a report on the issues, which we could then leverage through a Jenkins script and also mail out as results? Looking online, the command-line usage seems to stop at the FPR generation stage.
Thanks in advance!
There is a command-line utility to generate a report from the FPR file.
Currently there are two report generators: Legacy and BIRT. The BIRT report engine was introduced into Audit Workbench with version 4.40.
Here is an example using the BIRT Report engine to generate a DISA STIG report
BIRTReportGenerator -template "DISA STIG" -source HelloWorld_second.fpr
-output BirtReport.pdf -format PDF -showSuppressed --Version "DISA STIG 3.9"
-UseFortifyPriorityOrder
Using the legacy one is a little more involved. The command is:
ReportGenerator -format pdf -f LegacyReport.pdf -source HelloWorld_second.fpr
-template DisaStig3.10.xml -showSuppressed -showHidden
You can either use one of the predefined template reports located in the <SCA Install Dir>/Core/config/reports directory, or generate one using the Report Wizard and save the template, which gets stored in the C:\Users\<USER>\AppData\Local\Fortify\config\AWB-XX.XX\reports\ directory on Windows.
On Linux/Mac, look at the configuration file <SCA Install Dir>/Core/config/fortify.properties for the com.fortify.WorkingDirectory property; this is where the reports will be stored.
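For the Jenkins part of the question, either generator can be called from a shell build step right after sourceanalyzer produces the FPR. A rough sketch (file names and the template are only examples; WORKSPACE is the standard Jenkins workspace variable, and mailing the result depends on whatever mail step your Jenkins setup already uses):
ReportGenerator -format pdf -f "${WORKSPACE}/scan_report.pdf" -source "${WORKSPACE}/scan_results.fpr" -template DisaStig3.10.xml -showSuppressed
Swap -format pdf for another format the generator supports if you need XML output.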
@SBurris,
If you don't want to show suppressed/hidden issues, is it just -hideSuppressed and -hideHidden?
Also, is there a way to add custom filters to hide things like the "<none>" entries from STIG/SANS/OWASP, like the filters you can create in the AWB GUI?
Basically, I need a command (or commands) to merge two FPRs and then compare them based on what is newly found in the scanned code vs. the old FPR.
Merge should be:
FPRUtility -merge -project <newest_scan.fpr> -source <previous_scan.fpr> -f <BUILDXX_MergedWith_BUILDXY.fpr>
The custom filter I need after the merge is:
"[OWASP Top 10 2013]:!<none> OR [SANS Top 25 2011]:!<none> OR [STIG 3.9]:!<none> AND [Detected On]:!/^/"
Where the Detected On field is a custom tag that I need to carry through from the previous FPR file into the newly merged one.
AND THEN output the report from that newly merged fpr in pdf and xml format to a location/filename I specify. Something along the lines of:
~AWB_Installation_Dir/bin/ReportGenerator -format pdf -f [BUILDXX_MergedWith_BUILDXY].pdf -source output.fpr
-template DisaStig3.10.xml -hideSuppressed -hideHidden
Obviously this can be a multitude of commands as long as we can get it back to Bamboo. Any help would be greatly appreciated. Thanks.
FPRUtility interprets the space-separated conditions in the -information -search -query ... parameter by applying the boolean AND operator. To obtain a union of 2 conditions A || B, I figured I could intersect negations of other conditions that complement the former: !C && !D (where A || B || C || D always holds true). I.e., to find all high and critical issues, I use
FORTIFY_ROOT\jre\bin\java -d64 -Xmx4096M -jar FORTIFY_ROOT\Core\lib\exe\fpr-utility-exe.jar -project APP_VER_DATE.fpr -information -search -query "[OWASP Top 10 2017]:A [fortify priority order]:!low [fortify priority order]:!medium" -categoryIssueCounts -listIssues > issues.txt
In the case of an audit, I figured I needed the older report generation utility to include suppressed issues (and their comments):
sed -e 's/\(IssueListing limit=\)"[^"]\+"/\1"-1"/' -i "FORTIFY_ROOT/Core/config/reports/DeveloperWorkbook.xml"
cmd /c call ReportGenerator -template DeveloperWorkbookAll.xml -format pdf -source APP_VER_DATE.fpr -showSuppressed -f "APP_VER_DATE_with_suppressed.pdf"

Fortify, how to start analysis through command

How can we generate a Fortify report using the command line on Linux?
In the command, how can we include only certain folders or files for analysis, and how can we specify the location to store the report, etc.?
Please help....
Thanks,
Karthik
1. Step#1 (clean cache)
You need to plan the scan structure before starting:
scanid = 9999 (can be anything you like)
ProjectRoot = /local/proj/9999/
WorkingDirectory = /local/proj/9999/working
(this dir gets huge; you need to "rm -rf ./working && mkdir ./working" before every scan, or byte code piles up underneath this dir and consumes your hard disk fast)
log = /local/proj/9999/working/sca.log
source='/local/proj/9999/source/src/**.*'
classpath='local/proj/9999/source/WEB-INF/lib/*.jar; /local/proj/9999/source/jars/**.*; /local/proj/9999/source/classes/**.*'
./sourceanalyzer -b 9999 -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -clean
It is important to specify ProjectRoot; if you do not override this system default, everything will be put under your home directory in .fortify.
The sca.log location is very important: if Fortify does not find this file, it cannot find the byte code to scan.
You can alter the ProjectRoot and WorkingDirectory once and for all if you are the only user (FORTIFY_HOME/Core/config/fortify_sca.properties).
In that case, your command line would be ./sourceanalyzer -b 9999 -clean
2. Step#2 (translate source code to byte code)
nohup ./sourceanalyzer -b 9999 -verbose -64 -Xmx8000M -Xss24M -XX:MaxPermSize=128M -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+UseParallelGC -Dcom.fortify.sca.ProjectRoot=/local/proj/9999/ -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -source 1.5 -classpath '/local/proj/9999/source/WEB-INF/lib/*.jar:/local/proj/9999/source/jars/**/*.jar:/local/proj/9999/source/classes/**/*.class' -extdirs '/local/proj/9999/source/wars/*.war' '/local/proj/9999/source/src/**/*' &
Always run it as a Unix background job (&): in case your session to the server times out, it will keep working.
-classpath: put all your known classpath entries here for Fortify to resolve the function calls. If a function is not found, Fortify will skip the source code translation, so that part will not be scanned later. You will get poor scan quality but the FPR looks good (few issues reported). It is important to have all dependency jars in place.
-extdirs: put all directories/files you don't want to be scanned here.
In the last section, the files between ' ' are your source.
-64 means use 64-bit Java; if not specified, 32-bit will be used and the max heap should be < 1.3 GB (-Xmx1200M is safe).
The -XX: options have the same meaning as when launching an application server; only use these to control the class heap and garbage collection. This is to tweak performance.
-source is the Java version (1.5 to 1.8).
3. Step#3 (scan with rulepack, custom rules, filters, etc)
nohup ./sourceanalyzer -b 9999 -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=/local/proj/9999 -Dcom.fortify.WorkingDirectory=/local/proj/9999/working -logfile /local/proj/9999/working/sca.log -scan -filter '/local/other/filter.txt' -rules '/local/other/custom/*.xml' -f '/local/proj/9999.fpr' &
-filter: the file name must be filter.txt; any rule GUID in this file will not be reported.
-rules: these are the custom rules you wrote. The HP rulepack is in the FORTIFY_HOME/Core/config/rules directory.
-scan: keyword to tell the Fortify engine to scan the existing scan ID. You can skip step #2 and only do step #3 if you did not change code and just want to play with different filters/custom rules.
4. Step#4 Generate PDF from the FPR file (if required)
./ReportGenerator -format pdf -f '/local/proj/9999.pdf' -source '/local/proj/9999.fpr'
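Putting the four steps together, a minimal wrapper script might look like this (same build ID and paths as the examples above; the JVM tuning flags from step #2 are left out and the classpath is trimmed to one entry for brevity):
#!/bin/sh
BUILD_ID=9999
ROOT=/local/proj/$BUILD_ID
# clean out the working directory and the previous build
rm -rf $ROOT/working && mkdir -p $ROOT/working
./sourceanalyzer -b $BUILD_ID -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working -logfile $ROOT/working/sca.log -clean
# translate source to byte code
./sourceanalyzer -b $BUILD_ID -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working -logfile $ROOT/working/sca.log -source 1.5 -classpath "$ROOT/source/WEB-INF/lib/*.jar" "$ROOT/source/src/**/*"
# scan with the rulepacks and write the FPR
./sourceanalyzer -b $BUILD_ID -64 -Xmx8000M -Dcom.fortify.sca.ProjectRoot=$ROOT/ -Dcom.fortify.WorkingDirectory=$ROOT/working -logfile $ROOT/working/sca.log -scan -f $ROOT/$BUILD_ID.fpr
# generate the PDF report
./ReportGenerator -format pdf -f $ROOT/$BUILD_ID.pdf -source $ROOT/$BUILD_ID.fpr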

wkhtmltopdf attempting to load from http rather than file

Here's an odd little problem that's led me to post my first question on SO. I am using wkhtmltopdf to convert an HTML document to a PDF as part of a Rails app. To do so, I am rendering the Rails web page to a static HTML file in a temp directory, copying a static header, footer and images to the same temp directory, then executing wkhtmltopdf using "system".
This works perfectly in Development and Test environments. In my Staging env, it does not. I suspected permissions at first, but the first couple of parts of that process (creating the HTML static files and copying them to the directory) are working. I can run wkhtmltopdf from the command line in that temp directory and get the expected outcome. Finally, I ran wkhtmltopdf via both "system" and backticks through the Rails console in staging environment, and here's what I get as output:
> `wkhtmltopdf --footer-html tmp/invoices/footer.html --header-html tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in tmp/invoices/test.html tmp/invoices/this.pdf`
Loading pages (1/6)
QPainter::begin(): Returned false ] 10%
Error: Unable to write to destination
Error: Failed loading page http://tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore) => ""
Notice that last bit. I'm pointing to local files, but it's looking for them via http. OK, I think, maybe I need to be explicit and feed it the file:// protocol so it doesn't look for http. So I try this:
> system("wkhtmltopdf --footer-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file://Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf")
Loading pages (1/6)
Error: Failed loading page file://library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html (sometimes it will work just to ignore this error with --load-error-handling ignore)
=> false
Notice that this one fails with a lowercase "l" on Library. What the heck? (And no, it doesn't get any better with the recommendation to ignore the error with that switch.)
Any ideas? Is there a Rails or Ruby setting that would cause system commands to get rewritten? Is there an option I can add to wkhtmltopdf to make sure it loads from local file? I'm quite baffled. Thanks!
I have had success when using the absolute file path (notice the extra slash after the file://)
wkhtmltopdf --footer-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/footer.html --header-html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/header.html -s Letter -L 0in -R 0in -T 0.5in -B 1in file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/test.html file:///Library/Server/Web/Data/Sites/intranet-staging/current/tmp/invoices/this.pdf
The format is the same on Windows:
Unix path
file:///absolute/path/to/file
Windows path
file:///C:/absolute/path/to/file
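If you are building the command in a shell, one way to avoid hand-writing the absolute URLs is to let the shell expand them (this assumes realpath is available on the box; the output is left as a plain file path):
wkhtmltopdf --footer-html "file://$(realpath tmp/invoices/footer.html)" --header-html "file://$(realpath tmp/invoices/header.html)" -s Letter -L 0in -R 0in -T 0.5in -B 1in "file://$(realpath tmp/invoices/test.html)" tmp/invoices/this.pdf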
In the latest wicked_pdf 0.11 I found one bug.
Example
C:\Ruby193\lib\ruby\gems\1.9.1\gems\wicked_pdf-0.11.0\lib>wicked_pdf.rb
On line 198 I changed:
options[hf][:html][:url] = "file://#{tf.path}" to options[hf][:html][:url] = "file:///#{tf.path}" (changed // to ///).
After the change, wicked_pdf worked again.
Take a look at the wicked_pdf gem.
You can add a PDF MIME type, and then for whatever page you want rendered as a PDF, just tack .pdf onto the URL.
I am using this in prod and it works quite well.
No need to call wkhtmltopdf directly.
