I have a Linux box running Ubuntu 18.04.3 with a working fail2ban configuration (as on all my hosts).
In this case I set up a Docker container that acts as an SFTP server for several users. The container runs rsyslogd and writes login events to /var/log/auth.log; the container's /var/log is mounted on the host at /myapp/log/sftp.
So I created a second sshd jail with this snippet in jail.local:
[myapp-sftp]
filter=sshd
enabled = true
findtime = 1200
maxretry = 2
mode = aggressive
backend = polling
logpath=/myapp/log/sftp/auth.log
The log file /myapp/log/sftp/auth.log definitely exists and is filled with plenty of failed login attempts, from myself and others.
But the jail never triggers; no found entries ever show up in fail2ban.log.
I have already reset the fail2ban database and have no clue what might be wrong.
I tried backend = polling as well as the default pyinotify.
Checking with fail2ban-regex shows that the filter matches:
# fail2ban-regex /myapp/log/sftp/auth.log /etc/fail2ban/filter.d/sshd.conf
Running tests
=============
Use failregex filter file : sshd, basedir: /etc/fail2ban
Use maxlines : 1
Use datepattern : Default Detectors
Use log file : /myapp/log/sftp/auth.log
Use encoding : UTF-8
Results
=======
Failregex: 268 total
|- #) [# of hits] regular expression
| 3) [64] ^Failed \S+ for invalid user <F-USER>(?P<cond_user>\S+)|(?:(?! from ).)*?</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
| 4) [29] ^Failed \b(?!publickey)\S+ for (?P<cond_inv>invalid user )?<F-USER>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?(?: ssh\d*)?(?(cond_user): |(?:(?:(?! from ).)*)$)
| 6) [64] ^[iI](?:llegal|nvalid) user <F-USER>.*?</F-USER> from <HOST>(?: port \d+)?(?: on \S+(?: port \d+)?)?\s*$
| 21) [111] ^<F-NOFAIL>Connection from</F-NOFAIL> <HOST>
`-
Ignoreregex: 0 total
Date template hits:
|- [# of hits] date format
| [642] {^LN-BEG}(?:DAY )?MON Day %k:Minute:Second(?:\.Microseconds)?(?: ExYear)?
`-
Lines: 642 lines, 0 ignored, 268 matched, 374 missed
[processed in 0.13 sec]
Missed line(s): too many to print. Use --print-all-missed to print all 374 lines
and
# fail2ban-client status myapp-sftp
Status for the jail: myapp-sftp
|- Filter
| |- Currently failed: 0
| |- Total failed: 0
| `- File list: /myapp/log/sftp/auth.log
`- Actions
|- Currently banned: 0
|- Total banned: 0
`- Banned IP list:
# cat /var/log/fail2ban.log | grep myapp
2019-08-21 10:35:33,647 fail2ban.jail [649]: INFO Creating new jail 'wippex-sftp'
2019-08-21 10:35:33,647 fail2ban.jail [649]: INFO Jail 'myapp-sftp' uses pyinotify {}
2019-08-21 10:35:33,664 fail2ban.server [649]: INFO Jail myapp-sftp is not a JournalFilter instance
2019-08-21 10:35:33,665 fail2ban.filter [649]: INFO Added logfile: '/wippex/log/sftp.log' (pos = 0, hash = 287d8cc2e307c5f427aa87c4c649ced889d6bf6a)
2019-08-21 10:35:33,689 fail2ban.jail [649]: INFO Jail 'myapp-sftp' started
I never get the expected "Found" entries, let alone a ban.
Any ideas are welcome.
# fail2ban-server -V
Fail2Ban v0.10.2
Copyright (c) 2004-2008 Cyril Jaquier, 2008- Fail2Ban Contributors
Copyright of modifications held by their respective authors.
Log sample from /myapp/log/sftp/auth.log:
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Failed password for invalid user mapp from 95.85.16.178 port 41766 ssh2
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Received disconnect from 95.85.16.178 port 41766:11: Normal Shutdown, Thank you for playing [preauth]
Aug 21 14:03:13 a9ede63166d9 sshd[202]: Disconnected from 95.85.16.178 port 41766 [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Connection from 95.85.16.178 port 34722 on 172.17.0.3 port 22
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Invalid user mapp from 95.85.16.178 port 34722
Aug 21 14:03:49 a9ede63166d9 sshd[204]: input_userauth_request: invalid user mapp [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: error: Could not get shadow information for NOUSER
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Failed password for invalid user mapp from 95.85.16.178 port 34722 ssh2
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Received disconnect from 95.85.16.178 port 34722:11: Normal Shutdown, Thank you for playing [preauth]
Aug 21 14:03:49 a9ede63166d9 sshd[204]: Disconnected from 95.85.16.178 port 34722 [preauth]
Problem is "solved". The docker container simply used a different timezone than the host and the logfile timestamps didnt contain the timezone.
So fail2ban assumed the timestamps were written in the same timezone as it´s running environment (on host) and didn´t interprete "old" log entries (2 hr. diff).
See https://github.com/fail2ban/fail2ban/issues/2486
I simply set the host timezone to UTC now - but will try now to set rsyncd to use a timezoned dateformat
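For the rsyslog side, a minimal sketch, assuming the container writes auth.log through rsyslog's default file output (the file path and selector below are illustrative): the built-in RSYSLOG_FileFormat template emits RFC 3339 timestamps with an explicit UTC offset (e.g. 2019-08-21T14:03:13.123456+02:00), which fail2ban's default date detectors should be able to parse without guessing the timezone.
# /etc/rsyslog.conf (or a drop-in under /etc/rsyslog.d/) inside the container
# Use the high-precision RFC 3339 template for all file outputs
$ActionFileDefaultTemplate RSYSLOG_FileFormat
# Keep writing auth events to the mounted log file
auth,authpriv.*    /var/log/auth.log
After restarting rsyslogd in the container, only newly written lines get the new timestamp format, so the jail will start matching from that point on regardless of the host timezone.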
I want to connect my Neo4j project server to py2neo in Jupyter.
I actually have two problems:
My Neo4j Browser connects fine with bolt://localhost:11004, username: neo4j, password: password.
But I am not able to connect to this server through py2neo in a Jupyter notebook.
The Python code is the following:
graphdb = Graph("bolt://localhost:11004", secure=True, auth=('neo4j', 'password'))
I am getting the following error:
KeyError Traceback (most recent call last)
~/conda3/lib/python3.6/site-packages/py2neo/database.py in __new__(cls, uri, **settings)
87 try:
---> 88 inst = cls._instances[key]
89 except KeyError:
KeyError: '0611fb007d1a660e26e66e58777225de'
During handling of the above exception, another exception occurred:
ServiceUnavailable Traceback (most recent call last)
<ipython-input-41-2d6567e9c5ba> in <module>()
3 # default uri for local Neo4j instance
4 dict_params=dict(secure=True)
----> 5 graphdb = Graph(**dict_params)
~/conda3/lib/python3.6/site-packages/py2neo/database.py in __new__(cls, uri, **settings)
303 def __new__(cls, uri=None, **settings):
304 name = settings.pop("name", "data")
--> 305 database = Database(uri, **settings)
306 if name in database:
307 inst = database[name]
~/conda3/lib/python3.6/site-packages/py2neo/database.py in __new__(cls, uri, **settings)
95 auth=connection_data["auth"],
96 encrypted=connection_data["secure"],
---> 97 user_agent=connection_data["user_agent"])
98 inst._graphs = {}
99 cls._instances[key] = inst
~/conda3/lib/python3.6/site-packages/neo4j/v1/api.py in __new__(cls, uri, **config)
131 for subclass in Driver.__subclasses__():
132 if parsed.scheme == subclass.uri_scheme:
--> 133 return subclass(uri, **config)
134 raise ValueError("URI scheme %r not supported" % parsed.scheme)
135
~/conda3/lib/python3.6/site-packages/neo4j/v1/direct.py in __new__(cls, uri, **config)
71
72 pool = DirectConnectionPool(connector, instance.address, **config)
---> 73 pool.release(pool.acquire())
74 instance._pool = pool
75 instance._max_retry_time = config.get("max_retry_time", default_config["max_retry_time"])
~/conda3/lib/python3.6/site-packages/neo4j/v1/direct.py in acquire(self, access_mode)
42
43 def acquire(self, access_mode=None):
---> 44 return self.acquire_direct(self.address)
45
46
~/conda3/lib/python3.6/site-packages/neo4j/bolt/connection.py in acquire_direct(self, address)
448 if can_create_new_connection:
449 try:
--> 450 connection = self.connector(address, self.connection_error_handler)
451 except ServiceUnavailable:
452 self.remove(address)
~/conda3/lib/python3.6/site-packages/neo4j/v1/direct.py in connector(address, error_handler)
68
69 def connector(address, error_handler):
---> 70 return connect(address, security_plan.ssl_context, error_handler, **config)
71
72 pool = DirectConnectionPool(connector, instance.address, **config)
~/conda3/lib/python3.6/site-packages/neo4j/bolt/connection.py in connect(address, ssl_context, error_handler, **config)
702 raise ServiceUnavailable("Failed to resolve addresses for %s" % address)
703 else:
--> 704 raise last_error
~/conda3/lib/python3.6/site-packages/neo4j/bolt/connection.py in connect(address, ssl_context, error_handler, **config)
692 log_debug("~~ [RESOLVED] %s -> %s", address, resolved_address)
693 try:
--> 694 s = _connect(resolved_address, **config)
695 s, der_encoded_server_certificate = _secure(s, address[0], ssl_context, **config)
696 connection = _handshake(s, resolved_address, der_encoded_server_certificate, error_handler, **config)
~/conda3/lib/python3.6/site-packages/neo4j/bolt/connection.py in _connect(resolved_address, **config)
582 _force_close(s)
583 if error.errno in (61, 99, 111, 10061):
--> 584 raise ServiceUnavailable("Failed to establish connection to {!r} (reason {})".format(resolved_address, error.errno))
585 else:
586 raise
ServiceUnavailable: Failed to establish connection to ('127.0.0.1', 7687) (reason 111)
What I want to know is:
1) How exactly is the connection between Neo4j and py2neo made in py2neo v4?
2) Do I always have to make a local connection, or can I connect to a remote Neo4j server?
3) If I can connect to my Neo4j server, will the py2neo queries I run from my Jupyter notebook be reflected in the Neo4j database as well?
From the last line of the error, it looks like it is trying to connect on the default Bolt port (7687) rather than 11004.
I would suggest you use this format instead of the full URI:
graphdb = Graph(scheme="bolt", host="localhost", port=11004,
secure=True, auth=('neo4j', 'password'))
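Once the connection succeeds you can run a quick sanity check; a minimal sketch (the query and alias are just examples) that also covers question 3, because py2neo executes every query against the same database that Neo4j Browser shows:
# quick sanity check: count the nodes currently on the server
# any writes you make through py2neo go to this same database,
# so they will be visible in Neo4j Browser immediately
print(graphdb.run("MATCH (n) RETURN count(n) AS nodes").data())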
ERROR:
Feb 14 14:09:04 es1 postfix/smtp[16443]: connect to mx3.hotmail.com[65.54.188.94]:25: Connection timed out
Feb 14 14:09:34 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[104.44.194.231]:25: Connection timed out
Feb 14 14:10:04 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[207.46.8.167]:25: Connection timed out
Feb 14 14:10:34 es1 postfix/smtp[16443]: connect to mx2.hotmail.com[65.55.37.104]:25: Connection timed out
Feb 14 14:11:04 es1 postfix/smtp[16443]: connect to mx1.hotmail.com[65.55.92.136]:25: Connection timed out
Feb 14 14:11:04 es1 postfix/smtp[16443]: 228D519C06D: to=<xxxx#hotmail.com>, relay=none, delay=395818, delays=395668/0.01/150/0, dsn=4.4.1, status=deferred (connect to mx1.hotmail.com[65.55.92.136]:25: Connection timed out)
I host a mail server on CentOS 6 with Postfix/Dovecot. I can receive mail from outside, but I cannot send mail to the outside.
Things I've done:
Added an SPF record to DNS; it also validates successfully at http://www.kitterman.com/spf/validate.html?
v=spf1 ip4:x.x.x.x -all
Note:
I've changed the default port 25 to 26 due to an ISP blocking issue, by adding this line to /etc/postfix/master.cf:
26 inet n - n - - smtpd
Your ISP is probably blocking outbound port 25. It's very common. Your SPF record and inbound SMTP port make no difference here. I suggest you contact your ISP.
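You can confirm the diagnosis by testing outbound connectivity directly from the mail server; a quick sketch (the target hosts are just examples):
# if this hangs and times out, outbound port 25 is being filtered
telnet mx1.hotmail.com 25
# for comparison, the submission port is usually left open by ISPs
telnet smtp.gmail.com 587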
I have the log below and am trying to parse it into the indicated columns: 1 as Date, 2 as Time, 3 as Task, 4 as Error_Line, and 5 (all remaining columns) as Error_Message.
|1 | |2 | |3 | |4 | |5 |
09-15-16 05:23:45 B:VVBN 09064 Port 22 Device 10400 Remote 44 13331 Link Up RP2016
09-15-16 05:23:44 A:QAWE 09064 Port 22 Device 10400 Remote 44 13331 Link Up RP2016
09-15-16 05:23:44 B:VVBN 13425 Port 22 Device 10400 Remote 44 13331 Receive Time Error: 24666 23270 1396 69
09-15-16 05:23:43 B:QAWE 13372 Port 22 Device 10400 Remote 44 13331 Send Time Error: 444 1888 1444 69
09-15-16 05:23:43 A:VVBN 13425 Port 22 Device 10400 Remote 44 13331 Receive Time Error: 24666 23270 1396 69
09-15-16 05:23:43 A:CCBE 13372 Port 22 Device 10400 Remote 44 13331 Send Time Error: 444 1888 1444 69
09-15-16 05:21:56 B:VVBN 07270 Port 22 Device 10400 Remote 44 13331 AT Timer Expired
09-15-16 05:21:56 A:CCBE 07270 Port 22 Device 10400 Remote 44 13331 AT Timer Expired
Here is my script:
logs = LOAD '/data/test_log.txt' USING PigStorage(' ') AS (date: chararray, time: chararray, task: chararray, line_error: int, error_message: chararray);
date = GROUP logs BY date;
counts = FOREACH date GENERATE COUNT($4) as count;
DUMP counts;
Note that there is a single space between columns, except for five spaces between columns 3 and 4.
The script above works fine for the date but not for the last column, Error_Message.
I am trying to get this output bag:
(09-15-16,05:23:45,B:VVBN,09064,Port 22 Device 10400 Remote 44 13331 Link Up RP2016)
(09-15-16,05:23:44,A:QAWE,09064,Port 22 Device 10400 Remote 44 13331 Link Up RP2016)
:
:
I just need the first four columns kept separate; all remaining columns in the log file should be merged into column 5.
Any suggestions for getting the desired output?
You need to use MyRegExLoader, provided by Piggybank, to process custom log files.
logs = LOAD '/data/test_log.txt' USING org.apache.pig.piggybank.storage.MyRegExLoader('provide the regex');
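A possible regex for the sample lines above, as a rough sketch (the whitespace handling and the piggybank jar path are assumptions; MyRegExLoader returns every captured group as chararray, so cast line_error later if you need an int):
REGISTER /path/to/piggybank.jar;  -- path is illustrative
logs = LOAD '/data/test_log.txt'
       USING org.apache.pig.piggybank.storage.MyRegExLoader('^(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(\\S+)\\s+(.*)$')
       AS (date: chararray, time: chararray, task: chararray, line_error: chararray, error_message: chararray);
-- the first four groups each match one token and the fifth swallows the rest of the line,
-- giving tuples like (09-15-16,05:23:45,B:VVBN,09064,Port 22 Device 10400 Remote 44 13331 Link Up RP2016)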
Video upload and download work fine from a PC, but there are two problems when using iOS devices:
1. On an iOS 7.0 device the video cannot be uploaded; here is the log:
[DEBUG] Did open connection on socket 17
[DEBUG] Did connect
[DEBUG] Did start background task
[DEBUG] Connection received 528 bytes on socket 17
[DEBUG] Connection received 234 bytes on socket 17
[DEBUG] Connection received 46 bytes on socket 17
[DEBUG] Connection on socket 17 preflighting request "POST /upload" with 808 bytes body
[DEBUG] Connection on socket 17 processing request "POST /upload" with 808 bytes body
2015-12-14 10:58:34.523 GCDWebServer[1701:403846] [UPLOAD] /var/mobile/Containers/Data/Application/5F45B407-9DA8-43D1-AADF-07CF22B1C4B2/Documents/IMG_9675.mp4
[DEBUG] Connection sent 175 bytes on socket 17
[DEBUG] Connection sent 2 bytes on socket 17
[DEBUG] Did close connection on socket 17
[VERBOSE] [my localhost:80] iPad2(7.0) client:50692 200 "POST /upload" (808 | 177)
[DEBUG] Did open connection on socket 17
2015-12-14 10:58:34.577 GCDWebServer[1701:403846] album’s name == GCDWebServer
[DEBUG] Connection received 400 bytes on socket 17
[DEBUG] Connection on socket 17 preflighting request "GET /list" with 400 bytes body
[DEBUG] Connection on socket 17 processing request "GET /list" with 400 bytes body
[DEBUG] Connection sent 177 bytes on socket 17
[DEBUG] Connection sent 213 bytes on socket 17
[DEBUG] Did close connection on socket 17
[VERBOSE] [my localhost:80] iPad2(7.0) client's IP:50693 200 "GET /list" (400 | 390)
[DEBUG] Did open connection on socket 17
[DEBUG] Connection received 453 bytes on socket 17
[DEBUG] Connection on socket 17 preflighting request "GET /download" with 453 bytes body
[DEBUG] Connection on socket 17 processing request "GET /download" with 453 bytes body
[DEBUG] Connection sent 182 bytes on socket 17
[DEBUG] Did close connection on socket 17
[VERBOSE] [my localhost:80] iPad2(7.0) client's IP:50694 304 "GET /download" (453 | 182)
[DEBUG] Did disconnect
[DEBUG] Did end background task
2. On the iPad the video cannot be downloaded; here is the log:
[DEBUG] Did open connection on socket 17
[DEBUG] Did connect
[DEBUG] Did start background task
[DEBUG] Connection received 427 bytes on socket 17
[DEBUG] Connection on socket 17 preflighting request "GET /download" with 427 bytes body
[DEBUG] Connection on socket 17 processing request "GET /download" with 427 bytes body
[DEBUG] Connection sent 398 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[ERROR] Error while writing to socket 17: Broken pipe (32)
[DEBUG] Did close connection on socket 17
[VERBOSE] [my localhost:80] iPad2(7.0) client's IP:50690 200 "GET /download" (427 | 197006)
[DEBUG] Did open connection on socket 17
[DEBUG] Connection received 346 bytes on socket 17
[DEBUG] Connection on socket 17 preflighting request "GET /download" with 346 bytes body
[DEBUG] Connection on socket 17 processing request "GET /download" with 346 bytes body
[DEBUG] Connection sent 398 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[DEBUG] Connection sent 32768 bytes on socket 17
[ERROR] Error while writing to socket 17: Broken pipe (32)
[DEBUG] Did close connection on socket 17
[VERBOSE] [my localhost:80] iPad2(7.0) client's IP:50691 200 "GET /download" (346 | 131470)
[DEBUG] Did disconnect
[DEBUG] Did end background task
Do you know why? Thank you very much!
Sorry for my ignorance.