Logging in vernemq plugin - erlang

I am trying to implement logging on connected users in my vernemq client using erlang. From documentation, I found that this could be bad, due to the scalability of the client and the assumption that there might be a lot of clients connecting and disconnecting. This is not my case, I will just have a bunch of clients but a lot of messages. Anyway, to my question. Is it possible to change the log file when using error_logger? Or should I use a different module for logging? Log file can be in any location if it had to, but I need it separated from vernemq's console.log. A follow-up question would be, can I somehow get a floating window on logs? I don't need to keep logs from previous year and I don't want to manually clean them every day or week or something like that.
Thanks for any responses

From OTP21 on, you should use logger instead of error_logger, although the error_logger API is kept for compatibility (it just uses logger under the hood).
With logger, which you can configure with the system configuration, you can use file backends such as logger_std_h (check the example configurations).
In logger_std_h you can set file rotation.

Related

Logging directly to standard output

Where I work, we are migrating our entire infrastructure which was until now based on monolithic services that ran directly on a windows/linux VM to a docker based architecture that will be orchestrated by Kubernetes.
One of the things that came to my mind is how we would handle logs in this new infrastructure.
Up until now, each app had its own way of handling logs, some were using log4net/log4j to write to file system, some were writing to GrayLog via a dedicated library.
The main problem I have with that is that one of the core ideas of programming micro-services in a Docker environment is that every service should assume as little as possible about the rest of services or the platform.
So basically I was looking into how I can abstract the logging process from the application, make it independent from the rest of the infrastructure.
One interesting thing that I found was that you could write the logs to standard output (stdout) and then configure Kubernetes to pull these logs and direct them to a centralised storage or a centralised logging server (like GrayLog) https://kubernetes.io/docs/concepts/cluster-administration/logging/
I have several concerns with this approach. For one, I haven't seen too many companies that do it; most popular logging solutions are to use a dedicated library to log to filesystem.
I am also concerned about how it might impact performance, some languages block if you write to stdout, whereas when you use a standard logging library, the logs are queued.
So what about services that output massive amount of user related logs?
I was interested about what you think, I didn't see this approach used widely, maybe there is reason for that.
Logging to whatever stream (File, stdout, GrayLog...) can either be synchronous (blocking) or asynchronous (non-blocking). Inherently, that has nothing to do with the medium you log to per se. It is true that using System.out.println in Java will result in heavy thread-contention.
All the major logging frameworks (like log4j) provide you with the means to log in an asynchronous fashion to every medium that you like.
Your perception of not many companies doing this I think is wrong. Logging to stdout and configuring your underlying architecture to forward logs somewhere is the de facto standard of all PaaS/containerized applications.
So my tip is going to be: Log to stdout using a good logging framework which ensures asynchronous usage of the stream. For the rest you'll probably be fine.

Is the Broker able to Block unwanted topic spammers?

I have a MQTT environment like this:
There is one (gray) sensor and one Observer that are related by the topic room/temp. So far so good: the sensor can publish and the Observer can get the info as it should.
The issue I have now is: I need to block IN THE BROKER a 2nd undesired client (the orange one) that comes and starts to publish into the same topic. As far as I know, MQTT is loosely coupled, so the observer doesn't care who is pushing the temp values, but I find a security flaw when someone hacks my environment and publishes nonsense, triggering my alarms...
Any suggestions?
I am using eMQTTd by the way, and according to this there is nothing in the etc/emqttd.config file I can do to avoid that...
Thanks!
I only have experience with Mosquitto but, from a quick read of the document linked, it looks like there are several ways you could achieve this.
I am unclear if you are talking about an incidental problem here--i.e. bad information is being accidentally sent--or if you are protecting against an active threat.
If you are concerned with incidental overwriting of a value, then the simple clientid solution on (pg. 38) would work.
But my impression is that it would still be transmitted in the clear and thus be of little use to you if you are facing an actual adversary (hacker etc.). If that is your concern, simply set up SSL and remove all non-SSL listeners. (See pg. 24). That should limit all traffic to an encrypted channel. Then if you wish add password / user authentication (pg. 38) to complete the security.
Alternatively, depending on your configuration, you could block unapproved ip addresses at the firewall level (i.e. block access to the port that your broker is listening on to all addresses except for the temperature sensor) or using eMQTTd's built in ACL facility (pg. 25). That would be less secure than a full SSL setup but depending upon your needs it might be enough.
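Added example, not part of the original answer and not eMQTTd code: a small Python model of first-match publish ACLs, showing why an allow-all rule lets the orange client publish to room/temp and how a per-user rule plus a final deny stops it. The usernames and rule tuples are hypothetical, and a username only carries weight once the broker authenticates clients as the answer describes.

```python
def topic_matches(topic_filter, topic):
    """MQTT filter matching: '+' matches one level, '#' matches the rest."""
    f_levels, t_levels = topic_filter.split("/"), topic.split("/")
    for i, level in enumerate(f_levels):
        if level == "#":
            return True                      # multi-level wildcard
        if i >= len(t_levels):
            return False                     # filter is longer than the topic
        if level != "+" and level != t_levels[i]:
            return False
    return len(f_levels) == len(t_levels)

# Rule shape (hypothetical): (action, who, access, topic filters),
# where who is "all" or ("user", name) and access is publish/subscribe/pubsub.
ALLOW_ALL = [("allow", "all", "pubsub", ["#"])]

LOCKED_DOWN = [
    ("allow", ("user", "sensor-gray"), "publish", ["room/temp"]),
    ("allow", ("user", "observer"), "subscribe", ["room/#"]),
    ("deny", "all", "pubsub", ["#"]),        # everything else is refused
]

def authorize(rules, username, access, topic):
    """First matching rule wins; no match means deny."""
    for action, who, rule_access, filters in rules:
        if who != "all" and who != ("user", username):
            continue
        if rule_access not in (access, "pubsub"):
            continue
        if any(topic_matches(f, topic) for f in filters):
            return action == "allow"
    return False

if __name__ == "__main__":
    for name, rules in (("allow-all", ALLOW_ALL), ("locked-down", LOCKED_DOWN)):
        for user in ("sensor-gray", "intruder-orange"):
            ok = authorize(rules, user, "publish", "room/temp")
            print(f"{name:12} {user:16} publish room/temp -> {ok}")
```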

Printing from one Client to another Client via the Server

I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My pc ClientX initiates the process using the web browser to access a ServerY on the internet and the printer is connected to a ClientZ (may be yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server etc.
3. I have the specific details of ClientZ, IP, Port, etc.
4. It'll be completely a server side application (and no client-side on ClientZ) with ASP.NET & C#
- so, is it possible? If yes, please give some clue. Thanks in advance.
This is kind of too big of a question for SO, but basically what you need to do is:
1. upload files to the server -- trivial
2. do some stuff to figure out if they are allowed to print the document -- trivial to hard depending on scope
3. add items to a queue for printing and associate them with a user/session -- easy
4. render and print the document -- trivial to hard depending on scope
5. notify the user that the document has been printed
6. handling errors
The big unknowns here are scope. If this is for a school project you probably don't have to worry about billing or queue priority in step two. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty in step 4 depends directly on what formats you are going to support, as many formats are going to require document-specific libraries or applications. There are also security considerations here if this is a commercial product since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do it. You can post back to the html page, but depending on how long it's going to take for a job to complete it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
sorry this is so general, but you are asking a pretty general and difficult to answer question.

Tool for parsing SMTP logs that finds bounces

Our web application sends e-mails. We have lots of users, and we get lots of bounces. For example, user changes company and his company e-mail is no longer valid.
To find bounces, I parse SMTP log file with log parser. The logs come from Microsoft SMTP server.
Some bounces are great, like 550+#5.1.0+Address+rejected+user#domain.com. There is user#domain.com in bounce.
But some do not have e-mail in error message, like 550+No+such+recipient.
I have created simple Ruby script that parses logs (uses log parser) to find which mail caused something like 550+No+such+recipient.
I am just surprised that I could not find a tool that does it. I have found tools like Zabbix and Splunk for log analysis, but they look like overkill for such simple task.
Anybody knows a tool that would parse SMTP logs, find bounces and e-mails that cause them?
As far as I can see, log file analysis is really only useful to detect mails which are rejected at the SMTP session level. What about bounces which occur after the remote MTA has accepted a message for delivery but subsequently fails to deliver it?
We use the following set up to detect and classify all bounces after delivery to the remote MTA.
All outgoing mails are given a unique return-path header which, when decoded, identifies the recipient email address and the particular mailing.
An Apache James server which receives mail returned to the returned-path address.
A custom mailet, developed in Java and executing within Apache James which decodes the to address, sends the email text to boogietools bounce studio for bounce type classification and then persists the results to our database.
It works very, very well. We are able to detect permanent hard bounces and transient soft bounces which are further classified into very granular bounce types such as spam rejections, out of office replies etc.
This article is exactly what you are looking for. It is based on the great tool log parser.
Log parser is a powerful, versatile tool that provides universal query access to text-based data such as log files, XML files and CSV files, as well as key data sources on the Windows® operating system such as the Event Log, the Registry, the file system, and Active Directory®. You tell Log Parser what information you need and how you want it processed. The results of your query can be custom-formatted in text based output, or they can be persisted to more specialty targets like SQL, SYSLOG, or a chart. Most software is designed to accomplish a limited number of specific tasks. Log Parser is different... the number of ways it can be used is limited only by the needs and imagination of the user. The world is your database with Log Parser.
You don't want to parse the logs to try and identify bounces. You will have both false negatives and false positives if you just look at logs.
Bounces might be generated downstream from the server you deliver to. They will look like successful deliveries in your outgoing server logs.
The naive pattern match for bounces in incoming logs (from the null sender, to one of your VERP-ed addresses) will be inaccurate. There are a few reasons why:
There will be delay warnings mixed in with actual failure bounces.
Most Out-of-Office and similar autoresponders use the null sender to prevent battlin-bots syndrome.
Similarly, challenge-response systems (like *spit* boxbe.com) tend to use the null sender.
Your VERP-ed sender addresses, if they are persistent per recipient, will get harvested by spammers and come back as either spam targets or backscatter.
So, sadly, the only reliable way to do it is to examine the bounce messages themselves. Most of them will have a "report/delivery-status" MIME part as per RFC1894, and depending on your language of choice there are probably libraries or modules to help with other bounce formats. The only one I have direct experience with is the Perl Mail::DeliveryStatus::BounceParser module, which works well enough.
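Added example, not part of the original answer: reading the delivery-status part of a bounce with Python's standard email package, separating hard failures from delay warnings and returning nothing for an autoresponder that carries no DSN. Both messages are constructed samples, and the verdict labels are this example's own.

```python
import email

# Constructed sample bounce in the RFC 1894 layout (multipart/report).
SAMPLE_DSN = """\
From: MAILER-DAEMON@mx.example.net
Subject: Delivery Status Notification
Content-Type: multipart/report; report-type=delivery-status; boundary="b1"

--b1
Content-Type: text/plain

Delivery report for your message.
--b1
Content-Type: message/delivery-status

Reporting-MTA: dns; mx.example.net

Final-Recipient: rfc822; {rcpt}
Action: {action}
Status: {status}
--b1--
"""

# Constructed out-of-office reply: no delivery-status part at all.
AUTOREPLY = "Auto-Submitted: auto-replied\nSubject: Out of office\n\nBack on Monday.\n"

def classify_dsn(raw):
    """Return (recipient, action, status, verdict) per reported recipient."""
    results = []
    for part in email.message_from_string(raw).walk():
        if part.get_content_type() != "message/delivery-status":
            continue
        blocks = part.get_payload()          # one Message per header block
        for block in blocks if isinstance(blocks, list) else []:
            if "Final-Recipient" not in block:
                continue  # per-message fields (Reporting-MTA), not a recipient
            rcpt = block["Final-Recipient"].split(";", 1)[-1].strip()
            action = block.get("Action", "").strip().lower()
            status = block.get("Status", "").strip()
            if action == "failed" and status.startswith("5."):
                verdict = "hard-bounce"
            elif action == "delayed" or status.startswith("4."):
                verdict = "transient"        # delay warning, not a failure
            else:
                verdict = "other"
            results.append((rcpt, action, status, verdict))
    return results
```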
I like logParser. When I need to parse for something very specific or custom or using regular expressions, I use biterScripting. They actually have some sample scripts that I used to get started. One is at http://www.biterscripting.com/Download/SS_WebLogParser.txt.
I based a bounce counter program on this post, only to find out later that this method doesn't actually work for high-volume senders because SMTP logs are not in sequential order. There's more about it in my blog post: Email Bounce Detection in SMTP Logs and Why It Is Impossible.

How should I monitor potential threats to my site?

By looking at our DB's error log, we found that there was a constant stream of almost successful SQL injection attacks. Some quick coding avoided that, but how could I have set up a monitor for both the DB and Web server (including POST requests) to check for this? By this I mean if there are off the shelf tools for script-kiddies, are there off the shelf tools that will alert you to their sudden random interest in your site?
Funnily enough, Scott Hanselman had a post on UrlScan today which is one thing you could do to help monitor and minimize potential threats. It's a pretty interesting read.
UrlScan does seem like a nice option for iis6 and 7; I also found: dotDefender for pay which also covers Apache or IIS 5-7, and I had found an SQL Injection sanitation ISAPI
It is also worth noting in light of a recent widespread SQL Injection attempt that disallowing your webapp's db user account from querying the system tables (in MS SQL Server it's sysobjects and syscolumns) is a good idea.
I think this thread warrants more free solutions for Apache and other web servers.
Unfortunately intrusion detection was not what I had in mind, so sgfree isn't exactly a web site attack monitor, unless I'm not understanding how it works.
If you could go back and modify your app code, I'd suggest getting log4j/log4net integrated into the application. From there you could write code that would check a form field or URL (say at the global.asax level for .NET apps) and make a log entry when malicious code is detected.
The nice thing about log4j/log4net is that you can configure an e-mail/pager/SMS type appender so as soon as the malicious attempt was caught, you would be notified.
I'm in the process of merging some log4net code into our CMS system we have and I'm looking to do just this in light of the influx of ASPRox attacks that have been coming our way.
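Added example, not part of the original answer: the same idea (check request input, log on a match, let a log handler raise the alert) shown with Python's logging module rather than log4net. The rule set, logger name and sample requests are constructed for illustration, and ListHandler stands in for an e-mail or pager handler.

```python
import logging
import re
from urllib.parse import urlsplit, unquote_plus

# Illustrative rules only; matching is for alerting, not for blocking or fixing.
RULES = {
    "union-select": re.compile(r"\bunion\b.{0,40}\bselect\b", re.I | re.S),
    "declare-cast-exec": re.compile(r"\bdeclare\s+@\w+.*\bexec\s*\(", re.I | re.S),
    "system-tables": re.compile(r"\bsys(objects|columns)\b", re.I),
    "quote-comment": re.compile(r"'\s*(--|/\*)"),
}

class ListHandler(logging.Handler):
    """Collects alerts in memory; swap for logging.handlers.SMTPHandler to e-mail."""
    def __init__(self):
        super().__init__(level=logging.WARNING)
        self.alerts = []

    def emit(self, record):
        self.alerts.append(record.getMessage())

log = logging.getLogger("sqli_monitor")  # hypothetical logger name
alerts = ListHandler()
log.addHandler(alerts)

def inspect_request(client_ip, url, body=""):
    """Return the names of rules hit by the decoded query string or form body."""
    parts = urlsplit(url)
    text = unquote_plus(parts.query) + "\n" + unquote_plus(body)
    hits = [name for name, rx in RULES.items() if rx.search(text)]
    for name in hits:
        # Log the rule, client and path, never the raw payload (log injection).
        log.warning("possible SQLi rule=%s ip=%s path=%r", name, client_ip, parts.path)
    return hits

# Constructed sample requests: one benign, two resembling injection attempts.
BENIGN = "/products?id=42&q=select+a+plan"
STACKED = ("/page.asp?id=1;DECLARE%20@S%20VARCHAR(99);"
           "SET%20@S=CAST(0x44%20AS%20VARCHAR(99));EXEC(@S);--")
UNION = "/item?id=1%27%20UNION%20SELECT%20name%20FROM%20sysobjects--"
```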
Monitoring web and DB access logs should alert you to things like this, but if you want a more fully featured alert system I would suggest some kind of IDS/IPS. You'll need a spare machine though, and a switch that can do port mirroring.
If you have those then an IDS is a cheap way of monitoring your traffic for many intrusion attempts (there will be lots). Snort (www.snort.org) based IDSes are excellent, and there are some free fully packaged versions available. One I have used is StrataGuard (http://sgfree.stillsecure.com/), and it can be configured as an IDS (Intrusion Detection System) or as an IPS (Intrusion Prevention System). It's free to use if your traffic does not exceed 5Mbps.
If you do go with an IDS/IPS I'd advise you to let it run as a simple IDS for a month or so, before you allow it to prevent attacks.
This may be overkill, but if you have a spare machine lying around it can't hurt to have an IDS running passively.
You can set up your system to kick out some error message that then makes a JSON or http call to a system that will monitor, report (log) and send out any kind of alert such as SMS/email or a phone call.
Check out developer.alertcaster.com
Especially if you need to monitor multiple simultaneous events, which it sounds like you have going on, this might be a good fix.
