I have an MQTT environment like this:
There is one (gray) sensor and one observer related by the topic room/temp. So far so good: the sensor can publish and the observer gets the info as it should.
The issue I have now is that I need to block, IN THE BROKER, a second undesired client (the orange one) from publishing into the same topic. As far as I know, MQTT is loosely coupled, so the observer doesn't care who is pushing the temp values, but I see a security flaw if someone hacks my environment and publishes nonsense, triggering my alarms...
Any suggestions?
I am using eMQTTd by the way, and according to this there is nothing in the etc/emqttd.config file I can do to avoid that...
Thanks!
I only have experience with Mosquitto but, from a quick read of the document linked, it looks like there are several ways you could achieve this.
I am unclear if you are talking about an incidental problem here--i.e. bad information is being accidentally sent--or if you are protecting against an active threat.
If you are concerned with incidental overwriting of a value, then the simple clientid solution (pg. 38) would work.
But my impression is that it would still be transmitted in the clear and thus be of little use to you if you are facing an actual adversary (hacker etc.). If that is your concern, simply set up SSL and remove all non-SSL listeners (see pg. 24). That should limit all traffic to an encrypted channel. Then, if you wish, add password/user authentication (pg. 38) to complete the security.
Alternatively, depending on your configuration, you could block unapproved IP addresses at the firewall level (i.e. block access to the port your broker is listening on for all addresses except the temperature sensor) or use eMQTTd's built-in ACL facility (pg. 25). That would be less secure than a full SSL setup, but depending on your needs it might be enough.
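To give a feel for the ACL route, here is a sketch of what the rules could look like; the client id temp-sensor-01 is invented, and the exact file name and rule syntax depend on your eMQTTd version, so treat it as an illustration rather than a drop-in config:

    %% etc/acl.conf (illustrative only; check the syntax for your release)
    %% Let only the known sensor publish to room/temp, deny every other
    %% publisher on that topic, and leave subscriptions open.
    {allow, {client, "temp-sensor-01"}, publish, ["room/temp"]}.
    {deny,  all,                        publish, ["room/temp"]}.
    {allow, all, subscribe, ["room/temp"]}.
    {allow, all}.

Rules are typically matched top-down with the first match winning, which is why the specific allow comes before the deny.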
Effectively, I need to monitor a queue and swap its consumer over to a stub when the actual consumer has been disconnected for a length of time. I think I can get by just by seeing which IP addresses are connected to the queue in question. I've gone through the documentation but can't see how to do that in pymqi.
Cheers
What I'm after: a method that tells me the status of a queue, including IP addresses.
It's possible but not that simple.
The MQSC command that would give you that level of admin information is something like "DISPLAY QSTATUS(qname) TYPE(HANDLE)". That can be converted into PCF messages, which can then be sent, and the response parsed, using pymqi. Working with PCF is not the easiest thing, but it can be done.
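A rough pymqi sketch of that PCF route; the queue manager, channel, connection string, and queue name are placeholders, and this is just an outline of the usual PCFExecute pattern rather than something tested against your environment:

    import pymqi

    # placeholder connection details
    qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', '192.168.1.10(1414)')
    pcf = pymqi.PCFExecute(qmgr)

    # roughly: DISPLAY QSTATUS('MY.QUEUE') TYPE(HANDLE)
    args = {
        pymqi.CMQC.MQCA_Q_NAME: b'MY.QUEUE',
        pymqi.CMQCFC.MQIACF_Q_STATUS_TYPE: pymqi.CMQCFC.MQIACF_Q_HANDLE,
    }

    for handle_info in pcf.MQCMD_INQUIRE_Q_STATUS(args):
        # each dict describes one open handle; the connection name
        # (which carries the client's IP) is among the returned attributes
        print(handle_info)

    qmgr.disconnect()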
Alternatively, you could use the REST API if that's enabled on the qmgr; REST can be driven from just about any language environment, or even scripted with curl.
Depending on what you really need, perhaps simply doing an MQINQ to get the MQIA_OPEN_INPUT_COUNT / MQIA_OPEN_OUTPUT_COUNT values would be sufficient. You can then just see whether anyone has the queue open.
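If the open counts are all you need, the MQINQ route is much simpler; a sketch with the same placeholder connection details:

    import pymqi

    qmgr = pymqi.connect('QM1', 'DEV.APP.SVRCONN', '192.168.1.10(1414)')

    # open the queue for inquiry only and read the open-handle counters
    q = pymqi.Queue(qmgr, b'MY.QUEUE', pymqi.CMQC.MQOO_INQUIRE)
    open_for_input = q.inquire(pymqi.CMQC.MQIA_OPEN_INPUT_COUNT)
    open_for_output = q.inquire(pymqi.CMQC.MQIA_OPEN_OUTPUT_COUNT)

    # zero input handles for long enough means the real consumer is gone
    # and it is time to swap in the stub
    print(open_for_input, open_for_output)

    q.close()
    qmgr.disconnect()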
I am trying to implement logging of connected users in my VerneMQ client, written in Erlang. From the documentation I gathered that this could be a bad idea, because of scalability concerns and the assumption that there might be a lot of clients connecting and disconnecting. That is not my case: I will have just a handful of clients, but a lot of messages.
Anyway, to my question: is it possible to change the log file when using error_logger, or should I use a different module for logging? The log file can be in any location if it has to be, but I need it separated from VerneMQ's console.log. A follow-up question: can I somehow get a rolling window on the logs? I don't need to keep logs from the previous year, and I don't want to clean them up manually every day or week.
Thanks for any responses
From OTP 21 on, you should use logger instead of error_logger, although the error_logger API is kept for compatibility (it just uses logger under the hood).
With logger, which you can configure via the system configuration (sys.config), you can use file handlers such as logger_std_h (check the example configurations).
In logger_std_h you can set file rotation.
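As a minimal sketch of what that could look like in sys.config, assuming a recent OTP (the file and rotation options below need roughly OTP 21.3 or later) and with the handler id, path, and sizes as placeholders:

    %% sys.config
    [{kernel,
      [{logger_level, info},
       {logger,
        [{handler, user_log, logger_std_h,
          #{config => #{file => "/var/log/myapp/users.log",  %% placeholder path
                        max_no_bytes => 10485760,            %% rotate at ~10 MB
                        max_no_files => 5}}}]}]}].           %% keep 5 rotated files

The max_no_bytes / max_no_files pair gives you the rolling window you asked about, and logger also supports per-handler filters if you only want your own messages to land in this file.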
We've been dealing with constant attacks on our authentication URL; we're talking millions of requests per day. My guess is they are trying to brute-force passwords.
Whenever we blocked the IP with the server firewall, a few seconds later the attacks would start again from a different IP.
We ended up implementing a combination of throttling through rack-attack plus custom code to dynamically block the IPs in the firewall. But as we improved our software's security, so did the attackers, and now every request they make comes from a different IP: one call per IP, still several per second. Not as many as before, but still an issue.
Now I'm trying to figure out what else I can do to prevent this. We tried reCAPTCHA, but we quickly ran out of the monthly quota and then nobody could log in.
I'm looking into the Nginx rate limiter, but from what I can see it also keys on the IP. Considering they now rotate IPs for each request, is there a way to make this work?
Any other suggestions on how to handle this? Maybe one of you has been through the same thing.
Stack: Nginx and Rails 4, Ubuntu 16.
In your situation, the most effective way to prevent this attack is a CAPTCHA. You mentioned that you used reCAPTCHA and ran out of quota quickly, but if you develop the CAPTCHA yourself you will have unlimited CAPTCHA images.
As for the other prevention methods, such as blocking IPs: this is largely useless when the attackers use an IP pool. There are so many IPs (including the IPs of compromised IoT devices) that you cannot identify and block them all, even with a commercial threat-intelligence feed.
So my suggestions are:
Develop the CAPTCHA yourself and implement it on your API.
Identify and block the IPs you consider malicious.
Set up rules that inspect the User-Agent and Cookie headers of the HTTP requests (normal requests usually differ from attack traffic).
Use a WAF (if you have enough budget).
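On the Nginx rate-limiter question above: limit_req can key on any variable, not just $remote_addr, so rotating IPs don't defeat it if you pick a shared key. Below is a sketch assuming a Devise-style /users/sign_in path and an upstream named rails_app (both placeholders); a shared bucket will also throttle legitimate users while an attack is running, so tune the rate carefully:

    # inside the http {} block: one shared bucket for the login endpoint,
    # keyed on $server_name instead of $remote_addr
    limit_req_zone $server_name zone=login_global:1m rate=5r/s;

    server {
        listen 80;

        location /users/sign_in {
            # allow short bursts; excess requests are rejected (503 by default)
            limit_req zone=login_global burst=10 nodelay;
            proxy_pass http://rails_app;
        }
    }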
I have a strange scenario:
A web server / app server (Java) sends requests to many different satellite systems (on customer sites). Only the satellite systems can initiate connections, due to firewall rules.
The model I think should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
Is this possible and a stable architecture?
Are there better solutions? (We first had WebSockets in mind...)
Remark: we don't have to use Java on both ends. To be precise, on the customer sites we have Delphi, but we could bridge it somehow.
The model I think should be something like REQ/REP, but here the REQuester would have to bind and the REPlier would have to connect.
This will be problematic. When the server initiates the connection, it must be aware of all peers and their bind addresses. Not a big deal for a handful of peers, but for many peers changing constantly, it's a mess.
Only satellite systems can initiate connection due to firewall rules.
If that's the case, your mileage will vary with WebSockets; google around, lots of info on this.
Are there better solutions?
Well, with ZeroMQ, one solution that comes to mind to support client-initiated connections is this:
Server binds with ROUTER.
Clients connect with DEALER.
This approach offers bi-directional request/reply, does not block (asynchronous), and eliminates the client-side bind problem mentioned in your question. Here, the server binds, and either side can initiate the conversation.
I recommend reading this section in the guide; it covers extended async request/reply and message enveloping, which are important when using ROUTER/DEALER sockets.
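To make that shape concrete, here is a minimal single-process sketch in Python/pyzmq; the port and identity string are made up, and in your setup the ROUTER side would live on the Java app server while each Delphi satellite runs the DEALER side (or a bridge to it):

    import zmq

    ctx = zmq.Context()

    # app server: bind a ROUTER socket once
    router = ctx.socket(zmq.ROUTER)
    router.bind("tcp://*:5570")

    # satellite: connect a DEALER socket out through the firewall,
    # with a stable identity so the server can address it later
    dealer = ctx.socket(zmq.DEALER)
    dealer.setsockopt(zmq.IDENTITY, b"satellite-42")
    dealer.connect("tcp://127.0.0.1:5570")  # in production: the app server's address

    # the satellite speaks first, so the server learns its identity
    dealer.send(b"hello")
    identity, payload = router.recv_multipart()

    # from now on the server can push a request to that satellite at any time
    router.send_multipart([identity, b"status-request"])
    print(dealer.recv())                    # satellite sees b"status-request"

    dealer.send(b"status: all good")
    _, reply = router.recv_multipart()
    print(reply)

The explicit identity is what lets the server keep addressing a particular satellite without ever having to connect out to it.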
For my portfolio software I have been using fetchmail to read from a Google email account over IMAP, and life has been great. Thanks to the miracle of the IMAP IDLE extension, my triggers fire in near-real-time via server push, much sooner than periodic polling would allow otherwise.
In my basic .fetchmailrc setup, in which a brokerage customer's account emails trade notifications to a dedicated Gmail/Google Apps box, I've had:
poll imap.gmail.com proto imap user "youraddress#yourdomain-OR-gmail.com" pass "yoMama" keep nofetchall ssl idle mimedecode limit 29000 no rewrite mda "myCustomSpecialMDAhandler.sh %F %T"
Trouble is, now I need to support reading from multiple email boxes and hand the emails off to other specialized MDA scripts I wrote. No problem, just add more poll lines to .fetchmailrc, right? Well, that doesn't work when the other accounts also use imap.gmail.com. What ends up happening is that while one account reads fine (not necessarily the first one listed, though usually yes), the other gets "socket error" all day and its emails remain untouched and unread. I can't figure out why, and I'm not even sure whether some mechanism at imap.gmail.com is involved, e.g. limiting each host to one IMAP connection. That doesn't seem right, since I have kept IMAP connections to many separate Gmail & Google Apps accounts from the same client (like Thunderbird) for years and never noticed this exclusivity problem.
I haven't tried launching multiple fetchmail daemons with separate -f config files (assuming they wouldn't conflict), or deploying one or more getmail-style fetchers in addition. I'm still trying to avoid that kind of mess; it doesn't scale as the number of boxes I have to monitor grows.
I don't have the reference offhand, but somewhere in fetchmail's docs I recall reading that idle is not so much an IMAP feature as an optional fetchmail trick, which has a (nasty for me) side effect of blocking all other defined accounts from polling until the connection is cut off by some external event or timeout. So at least that would vindicate Google.
Credit to Carl's Whine Rack blog for some tips.
For now I periodically run killall fetchmail; fetchmail -f fetcher.$[$RANDOM % $numaccounts].rc from crontab to cycle through the accounts, each defined individually in fetcher.1.rc, fetcher.2.rc, etc. That's acceptable while email events are relatively infrequent.
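For what it's worth, here is a sketch of that rotation as a small wrapper script driven by cron, using a round-robin counter instead of $RANDOM so no account gets starved; the file names and account count are assumptions:

    #!/bin/sh
    # rotate-fetchmail.sh -- cycle through fetcher.1.rc .. fetcher.N.rc
    numaccounts=3
    state="$HOME/.fetchmail-rotation"

    i=$(cat "$state" 2>/dev/null || echo 0)
    i=$(( (i % numaccounts) + 1 ))
    echo "$i" > "$state"

    killall fetchmail 2>/dev/null
    fetchmail -f "$HOME/fetcher.$i.rc"

A crontab entry such as */5 * * * * $HOME/rotate-fetchmail.sh then cycles one account every five minutes.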