How can I verify that OctoPrint can't/won't turn my Raspberry Pi into malware?

I don't mean any offense, but as I was setting up my OctoPrint, a skeptical colleague of mine pointed out that it wanted to reach out to check for automatic software updates, creating a broad surface area for potential attackers.
After all, the Raspberry Pi is a device inside my home network, and I worry what might happen if it downloaded and ran code designed to find other vulnerable devices on my network.
I suppose I could read the open source code, but I don't know what the software delivery story is.
Planning to donate to Gina Häußge's Patreon to ask directly.

You can turn off OctoPrint's auto-update feature. It is also open source, so you can modify its code to never contact the Internet.
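Short of modifying the code, the update check lives in the bundled Software Update plugin, which can be disabled like any other plugin. A minimal sketch, assuming a standard install where the config file sits at ~/.octoprint/config.yaml (verify the plugin identifier against your own install):
    # ~/.octoprint/config.yaml -- disable the bundled Software Update plugin
    # so OctoPrint never checks GitHub for new releases
    plugins:
      _disabled:
        - softwareupdate
Updates and plugin installs then become a manual job, exactly as the quote below warns.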

Quoting Gina Häußge:
As with any software that you install on your machines, there are no guarantees that it can't be abused. OctoPrint's update mechanism utilizes Github Releases via HTTPS only, and I require anyone with commit access to the repository to have two factor authentication enabled. That should make it fairly unlikely to get any rogue releases pushed via the official update mechanism. You can also just deny OctoPrint access to the internet altogether, it will run just fine. Keep in mind though that you'll need to take care of updates and plugin installs and such manually then. Speaking of plugins, you should obviously also not install anything that you find somewhere on the net. I do my best to audit plugins that get registered on the official repository, but I cannot guarantee that their authors have 2FA and such enabled for their repositories... All I can tell you is, I do my best, spend a lot of thought on security and if push comes to shove you can always read the code yourself.

Related

How do I manage syslog data from a firewall?

I want to capture firewall syslog data for analysis. What are the best practices for this? 300 MB+ of data is generated every 10 minutes, so I'm not sure dumping it into a database would be a feasible approach.
Any recommendations?
There are many tools available for this. I'd recommend LogZilla (since I work there), but a few other popular solutions are Splunk, ELK, and LogLogic. You would need to set up a server to receive the events, then configure your device to send its logs to that server. This would allow you to search those messages, as well as configure alerts for service-impacting events. Managing your logs is an important part of network administration and has many benefits, so do your homework to determine your needs before selecting a solution.
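To give a feel for the receiving side, syslog is just datagrams (traditionally UDP port 514), so a minimal collector is easy to sketch in Python. This is an illustration only, not a replacement for the tools above: there is no rotation, no buffering, and no parsing of the RFC 3164/5424 fields.
    # minimal_syslog_sink.py - sketch of a UDP syslog collector
    # Listens on port 514 (ports below 1024 need root) and appends
    # each raw message to a file, one line per event.
    import socketserver
    LOG_FILE = "firewall.log"
    class SyslogHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # for UDP servers, self.request is a (data, socket) pair
            data = self.request[0].strip().decode("utf-8", errors="replace")
            with open(LOG_FILE, "a") as f:
                f.write(f"{self.client_address[0]} {data}\n")
    if __name__ == "__main__":
        with socketserver.UDPServer(("0.0.0.0", 514), SyslogHandler) as server:
            server.serve_forever()
At 300 MB per 10 minutes you would want buffered writes and rotation rather than an open/append per datagram, which is exactly the kind of work the dedicated tools above do for you.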
After much research into various free and paid solutions, we settled on https://www.graylog.org. Setup on DigitalOcean was very straightforward, primarily because of the strong API support, among other good features. It also has log rotation settings that help keep the log size under control.
Hope this helps.

No connection between Freeboard's data source for Orion and Context Broker

I've been trying to connect Freeboard to visualize context information from OCB; however, I came across difficulties that prevent me from receiving any data from it. My thinking is that there is a problem connecting Freeboard to OCB, because there are no new entries in OCB's subscription list, and the datasource in Freeboard shows that it has never been updated.
OCB runs as a Docker container. Freeboard runs on the Docker host.
I tried setting the IP to the one I extracted from Docker with:
sudo docker inspect --format '{{ .NetworkSettings.IPAddress }}' orion1
It gave me 172.17.0.3, but that didn't work either. I guess it shouldn't have anyway, because I can communicate with OCB at localhost:1026 as long as I do it via cURL or Insomnia: I can push new entities, update them, and so on.
The accumulation server that had not been working (link here) is OK right now. But the thing is, I add the subscription myself, and I can't run the accumulation server on localhost (the loopback interface), but rather on another available interface; I then add the IP of that interface to the subscription payload that I send to OCB. Maybe there is a conflict with Freeboard somewhere.
The issue here was a lack of CORS support. The easy solution is simply to enable CORS when launching Orion Context Broker, as described here.
I had conducted quite a lot of (actually unnecessary) research on this topic and came up with an over-the-top solution to the problem, which is described in this GitHub post: a proxy-server approach to the issue. I wanted to propose adding CORS support to Orion Context Broker, and was pleasantly surprised to find out it was already implemented.
There are posts like this, this, and this which were very helpful in solving the case.
However, I have two requests. I guess #fgalan is the go-to person right now regarding the back end and documentation of OCB and peripheral software.
Can a stronger emphasis be put on the CORS and Access-Control-Allow-Origin solution? The reasoning behind this is that it gives a seamless connection between OCB and any front-end application or site (e.g. Freeboard) running in a web browser. It shouldn't be so hidden that I came across the solution to my problems only by accident while looking for something else; it belongs in a walkthrough or some other visible place in the documentation. I spent two weeks trying to solve this, and in the end went for the over-the-top, unnecessary solution while the easy, accessible one was right under my nose. The good thing is that I am well connected on Stack Overflow and GitHub, so it was resolved, but there are probably people who gave up on Freeboard after a slip like this. That's a shame, because for now there is no better open-source piece of visualization software than Freeboard. And the problem does not only concern Freeboard; as I said, it concerns many more front-end applications and solutions. If we follow FIWARE's way of thinking, these things should be handled differently.
The FIWARE datasource plugin for Freeboard is not worth a dime at the moment. As #fgalan pointed out in a comment, it was developed for the v1 version of the Orion Context Broker API and has not been updated; therefore it's far more complicated than it's supposed to be. As the OCB documentation fairly points out, the v1 approach is not really RESTful. After conducting a short code review of the OCB plugin for Freeboard, I can say it's not worth using. As far as I understand it should still work, because OCB still accepts v1 requests (though it doesn't work for me anyway), but those requests are deprecated. In my opinion a new post on the topic should appear (I'm not sure whom to contact about it), because the current state of things is a bit misleading. What's the point of using a deprecated piece of software that spreads bad habits for interacting with OCB?
The solution, in my opinion, is simple: just use the JSON datasource in Freeboard. I understand the motivation behind creating a dedicated datasource plugin for Freeboard in 2015, when there was no RESTful v2 version of the OCB API, but there is one now, so why not use it? I have used it ever since I got rid of the CORS difficulties, and it works pretty well in my opinion. Freeboard, as I said earlier, provides great opportunities while being easy to set up and maintain. It should not be abandoned so easily.
By using a GET request for a JSON payload in Freeboard, we now have full access to query context from OCB. It doesn't need any POST methods as long as we use Freeboard as it is supposed to be used (by querying for data to visualize). Throw in
?options=keyValues
at the end of the request's URL and we've got ourselves a really smart and compact way of visualizing data coming from the Broker.
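For anyone who wants to test the same query outside Freeboard first, here is a sketch of the equivalent v2 request in Python (assuming Orion is reachable at localhost:1026 as above; the entity type "Room" is just a placeholder):
    # query_ocb.py - sketch: fetch context data from Orion via NGSI v2
    # options=keyValues flattens each entity into plain attribute/value
    # pairs, the compact form Freeboard's JSON datasource can consume.
    import requests
    OCB = "http://localhost:1026"
    resp = requests.get(
        f"{OCB}/v2/entities",
        params={"type": "Room", "options": "keyValues"},  # "Room" is a placeholder type
    )
    resp.raise_for_status()
    for entity in resp.json():
        print(entity["id"], entity)
The same URL, with ?options=keyValues appended, is what goes into Freeboard's JSON datasource.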
That's just the way I think it should be resolved. The last update on this topic being from 2015 is just not enough, in my opinion, especially when better methods of accessing context data from OCB have been developed since.

Can I use the ESP-01 (ESP8266) to connect securely to an MQTT broker?

The latest ESP-WROOM-02 supports TLS 1.2 over AT commands (I got this confirmed by Espressif). However, I would like to use an unmodified ESP-01 to connect to an MQTT broker using TLS 1.2. Is it possible to use the ESP-01? Does it use the same firmware or codebase? I can't seem to find concrete answers.
Note that my app runs on another MCU (unavoidable). In principle I could reflash the ESP module, but that would add a step to the production process, plus yet another development environment. An advantage would be that the ESP-01 firmware version would be strictly known.
I've seen that many advise reflashing the ESP with Arduino-derived firmware (i.e. using WiFiClientSecure) and thus avoiding the AT commands entirely (indeed, I found no library that works with them specifically and reliably).
Any advice greatly appreciated.
If you're concerned about security, then ESP8266 family modules (such as the ESP-01, ESP-WROOM-02, D1, NodeMCU) are likely not a practical choice.
They don't provide a mechanism for encrypting credentials on the device or a way to ensure that no one has altered the code that's running, and you end up in a situation like this one: https://thehackernews.com/2016/01/doorbell-hacking-wifi-pasword.html
However, the ESP32 does provide those features, and it also allows you to make a secure MQTT connection. While it's more expensive than the ESP-01, it's still pretty affordable (about $6 on AliExpress).
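To illustrate how little code a TLS MQTT connection takes on an ESP32, here is a sketch using MicroPython's umqtt.simple client; the broker address and credentials are placeholders, Wi-Fi is assumed to be up already, and exactly which TLS options are honoured varies between MicroPython builds:
    # mqtt_tls_sketch.py - MicroPython on ESP32: MQTT over TLS
    # Assumes the Wi-Fi connection is already established.
    from umqtt.simple import MQTTClient
    client = MQTTClient(
        client_id="esp32-demo",
        server="broker.example.com",  # placeholder broker
        port=8883,                    # standard MQTT-over-TLS port
        user="demo",
        password="secret",
        ssl=True,                     # wrap the socket in TLS
    )
    client.connect()
    client.publish(b"sensors/demo", b"hello over TLS")
    client.disconnect()
One caveat: without supplying server certificate material (via ssl_params), many builds will encrypt the connection but not authenticate the broker, so check what your firmware actually validates before relying on this.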
The doorbell hack example is just stupid. Why didn't they add a password for the access point connection?

Network Debugging iOS Application

Is there a library or plugin out there that will allow me to capture all network requests made by my iOS application (in DEBUG mode only of course) and store the requests in a file?
I am aware of Pony Debugger, but it only lets you view the network requests in the Chrome Developer Tools dashboard. I want an actual file I can process at a later time.
I find myself using Charles (http://www.charlesproxy.com/) fairly often. You can use it to proxy the simulator, or you can turn the proxy on and route your iOS device through it. It's also nice for simulating network problems like latency and slower speeds (you can use Network Link Conditioner as well, but it's a pain to keep going back to the Settings app). They have a free trial so you can decide if it's right for you.
If you're using AFNetworking, there is a library, AFHTTPRequestOperationLogger, that will allow you to easily print all requests and responses to the console. I know that's not a file, but with a little work you could pull the logs off your device for later inspection.
Lastly, if those options don't suit your fancy, you can roll your own solution with NSURLProtocol. It probably wouldn't be much work, and there is a ton of information out there on it. NSHipster did a write-up a couple of years ago: http://nshipster.com/nsurlprotocol/.

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual CPU/memory/network usage.
Any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, web services, file servers, etc.).
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionality, so you know where optimizations are most beneficial.
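To make a few of the checks above concrete, here is a sketch in Python using the third-party psutil package; the process name, thresholds, and database endpoint are all made up for illustration:
    # health_sketch.py - sample checks: process up, resource usage, external dependency
    import socket
    import psutil  # third-party: pip install psutil
    APP_NAME = "myapp"                       # hypothetical process name
    DB_HOST, DB_PORT = "db.internal", 5432   # hypothetical database endpoint
    def app_is_running() -> bool:
        return any(p.info["name"] == APP_NAME
                   for p in psutil.process_iter(["name"]))
    def resources_look_sane() -> bool:
        return (psutil.cpu_percent(interval=1) < 90
                and psutil.virtual_memory().percent < 90)
    def database_reachable() -> bool:
        try:
            with socket.create_connection((DB_HOST, DB_PORT), timeout=3):
                return True
        except OSError:
            return False
    if __name__ == "__main__":
        for name, check in [("running", app_is_running),
                            ("resources", resources_look_sane),
                            ("database", database_reachable)]:
            print(name, "OK" if check() else "ALERT")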
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be best
visualizations - it's good to know what is going on and to draw some knowledge from the data
Because we didn't find a suitable solution, we started to write our own. We finally ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
We are also working on examples of basic infrastructure monitoring using AlertGrid.
This is such an open-ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once one lays out the physical monitoring of the systems, one can address checks specific to a given system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system-specific. They can also usually be derived reactively when responding to physical measurements: the hard drive filled up, maybe because the web server logs filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up its monitoring system.
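A sketch of what the first few physical checks could look like in Python; the host names and URL are placeholders, and ping is shelled out to the system binary (Linux/macOS flags) for simplicity:
    # physical_checks.py - sketch of basic host-level monitoring probes
    import shutil
    import subprocess
    import urllib.request
    HOSTS = ["web1.example.com", "db1.example.com"]  # placeholder machines
    URL = "https://www.example.com/"                 # placeholder content check
    def pingable(host: str) -> bool:
        # -c 1: send a single probe
        return subprocess.run(["ping", "-c", "1", host],
                              capture_output=True).returncode == 0
    def serving_content(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return resp.status == 200 and len(resp.read()) > 0
        except OSError:
            return False
    def disk_free_fraction(path: str = "/") -> float:
        usage = shutil.disk_usage(path)
        return usage.free / usage.total
    if __name__ == "__main__":
        for host in HOSTS:
            print(host, "up" if pingable(host) else "DOWN")
        print("content", "ok" if serving_content(URL) else "FAILING")
        print(f"disk free: {disk_free_fraction():.0%}")
Ideally the content check would run from an external network, per point 2.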
Minimum: make sure it is running :)
However, some other information would be very useful: for example, the CPU load, RAM usage, and (on multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be cool to be able to see the window title of the app; maybe check every 2-3 minutes whether it changed and save it. A list of files open by the application could also be very useful, but it is not a must.
I think this is fairly simple: monitor so that you are warned early enough, before something goes wrong. That means monitoring both dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum, you want to know that the system is healthy. What counts as 'healthy' is subjective: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is producing correct results, and so on?
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need visibility all the way down to the data output. If you only need to know down to the level of 'are these machines up', it saves you from having to walk an inexperienced end user through what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are just looking too hard into data results. I particularly liked Nagios when I was looking around but we needed more than it could easily show so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system, memory / cpu spikes, etc...
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
Infrastructure monitoring covers servers plus MS Exchange Server, Apache, IIS, and so forth.
Application monitoring covers user machines and the specific programs they use to do their jobs, and/or servers plus the data-moving/back-end applications they run to keep the data flowing.
Sometimes it's hard to draw the line; an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking through a website). All that data is fed into a monitoring tool. In the past, I've exposed JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
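As a sketch of the "fake user" idea, here is a synthetic login transaction in Python; the URL, credentials, and endpoint are hypothetical, and a real probe would feed the timing into a monitoring tool rather than print it:
    # synthetic_login.py - sketch of an end-to-end synthetic transaction
    import time
    import requests  # third-party: pip install requests
    BASE = "https://app.example.com"  # hypothetical application URL
    def synthetic_login() -> float:
        # Emulate a user logging in; return elapsed seconds or raise.
        start = time.monotonic()
        with requests.Session() as s:
            r = s.post(f"{BASE}/login",
                       data={"user": "probe", "password": "probe-secret"},
                       timeout=10)
            r.raise_for_status()
            # a real probe would also assert on the response content here
        return time.monotonic() - start
    if __name__ == "__main__":
        try:
            print(f"login took {synthetic_login():.2f}s")
        except requests.RequestException as e:
            print("ALERT: synthetic login failed:", e)
Run on a schedule, the elapsed time and pass/fail become exactly the kind of raw data a tool like Tivoli Monitoring turns into health and performance charts.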
