ZABBIX to control access to wi-fi? - wifi

I have a wi-fi network and must manage Internet access for mobile devices (smartphones and tablets). I need per-user configurable permission levels, traffic limits, speed limits, connection-time limits, the ability to generate reports of visited sites, to block some specific sites, and a few other things.
A friend recommended Zabbix for this, but it seems to me that he did not understand what the tool can do; I would use RADIUSdesk or something similar.
So, who is wrong? Me or him?

Zabbix is a monitoring tool, not a Wi-Fi management tool. You could use it, but you would need to code a lot of the Wi-Fi management functionality yourself. It's definitely not a good idea.

Related

How do I access my Cisco router details from an iOS mobile device?

Is it possible to access my Cisco router details like name, model, IP address, connection status, etc. from my iOS device?
I'm even ready to write a small iOS app to get all the router details.
Since I have just started learning iOS, I don't know if any library already exists for this task.
If my router stops working or hangs, I would also like to try restarting it from my phone.
If example code exists, it would be very useful.
Cisco already has Android and iOS apps for the same functions, but I don't want to use those apps; I want to write my own with a limited feature set.
(http://www.addictivetips.com/mobile/cisco-connect-express-manage-router-settings-remotely-android-ios/)
Thanks,
Accessing network gear is best done by using SNMP. Cisco has extremely rich management/monitoring capabilities via SNMP and all of their MIBs are publicly available here.
Almost all Cisco gear supports the SNMPv2-SMI MIB (the 1.3.6.1.2.1 OID) so querying things like sysName, sysLocation, sysContact, sysDescription, sysUpTime should be very easy. This MIB even supports tables for listing all the interfaces and IP addresses and has a whole lot of other things that might be of interest to you.
If you have SNMP write access on the device then you can even make config changes and perform management functions like rebooting or bringing an interface up/down.
There are a few SNMP libraries for Objective-C, and I think Net-SNMP is the most popular (it's not .NET even though the name suggests that).
If you are new to SNMP then I suggest starting simple by querying easy objects like 1.3.6.1.2.1.1.5 (sysName) and 1.3.6.1.2.1.1.6 (sysLocation) before trying to jump into tables like 1.3.6.1.2.1.2.2 (ifTable).
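If you want to sanity-check those OIDs before writing any Objective-C, it can help to query them from a desktop first. Here is a minimal sketch, assuming Python with the classic pysnmp hlapi installed; the router address and the 'public' read-only community string are placeholders for your own values:

```python
# Minimal SNMP GET sketch (classic pysnmp hlapi, pip install pysnmp).
# 192.0.2.1 and the 'public' community string are placeholders.
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

error_indication, error_status, error_index, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData('public', mpModel=1),        # SNMPv2c read-only community
        UdpTransportTarget(('192.0.2.1', 161)),    # router address, standard SNMP port
        ContextData(),
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.5.0')),  # sysName.0
        ObjectType(ObjectIdentity('1.3.6.1.2.1.1.6.0')),  # sysLocation.0
    )
)

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')
```

The same OIDs carry over unchanged to whichever SNMP library you end up using from Objective-C.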
Remember, you don't have to stick with the standard MIBs; you can download all of the custom ones that are particular to your device, which will give you an incredible amount of flexibility.
You could use a screen-scraping technique to telnet or ssh to the Cisco device and parse the "show version" output. This will give you some of the information you need. For others, like IP addresses, you can use "show ip interface brief", "show cdp neighbors" etc. as you need.
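For illustration, here is a rough sketch of that screen-scraping idea using Python's paramiko library; the address and credentials are placeholders, and some Cisco devices only offer an interactive shell rather than a plain exec channel, so treat it as a starting point rather than a finished solution:

```python
# Sketch: SSH to a Cisco device and capture "show version" output.
# Hostname and credentials are placeholders; requires `pip install paramiko`.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # fine for a lab, not production
client.connect('192.0.2.1', username='readonly', password='secret', timeout=10)

try:
    stdin, stdout, stderr = client.exec_command('show version')
    output = stdout.read().decode()
    print(output)   # parse model, uptime, software version, etc. from this text
finally:
    client.close()
```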
Keep security in mind: make sure that telnet/ssh credentials are adequately protected in your app's settings, and try to restrict your commands to those that do not need privileged access on the Cisco device.
Be aware that Cisco devices have a small pool of available VTYs, and every telnet/ssh access from your app will use up one VTY. So if you have, for example, 30 people wanting to access the device simultaneously from their apps, some of those instances are not going to get access to the device.
If this is a concern, SNMP is a better and more scalable option, as suggested in the previous answer. Make sure that you (a) have a read-only community string configured on the device, and (b) use only the read-only community string from the app.

Very simple pub/sub web app or script that interacts well with OSX/iOS?

I'm designing some OS X/iOS apps that I'd like to have share a resource hosted on a webserver. I would like some sort of web app or script that can store a list of subscribers and notify them when the resource is updated. (The obvious goal here is to avoid having every app poll the webserver for updates.)
The only trick here is that I'd like a significant number of clients (say, a dozen) to be subscribed for updates on a 24/7 basis. I'm not sure if it's a good idea for all of the clients to maintain a live connection... I imagine that many web service providers will be happy about their webserver maintaining a dozen persistent connections (especially if they're virtually always idle).
(Edit) I looked into the Apple Push Notification service (APNs), but it's not the right solution for my problem. APNs requires an Entrust SSL certificate and some heavy interaction with the Apple Push Notification service itself. My project is much simpler and more lightweight: I just need a script that says, "Upon receiving data from Device A, push it out to Devices B/C/D" (presuming those devices are somehow accessible... either through a persistent connection or some other technique).
What's the absolute simplest way of providing this mechanism?
The "simplest way" probably means different things to different people. If you're not a fan of locking yourself into third party services then there's a veritable plethora of app frameworks and open source tools you could use to build something yourself. But this is hardly 'simple' if web app development isn't your strong point.
There are several 'off the shelf' services available for real-time messaging on iOS; bear in mind I'm just listing the ones I know from memory, and there are other alternatives. Pusher and PubNub both offer real-time messaging services for mobile apps, along with ready-to-go SDKs. You can interface with them to send messages bi-directionally via sockets (so similar to how APNS works, but with considerably more control).
You could use these services with your own device/user management system, or you could use a 'backend as a service' provider such as Parse or StackMob - you may not need this step; it depends on how complex your intended app/integration is.
XMPPFramework has a publish–subscribe module (for XEP-0060) which works with most XMPP servers. I've even adapted it to work with Chat Server which comes with Snow Leopard.
If you already have an XMPP server this might be worth doing; otherwise it's kind of a heavyweight solution.

Monitor all network traffic going in and out of a specific computer/IP address

I'm looking for a tool for Windows or Mac that allows me to monitor (ideally in a simple way) the traffic going in and out of a computer on my network.
Long story short, the residence where I live reserves the right to monitor the internet connection (and doesn't allow us to switch to another provider).
This annoys me on a personal level (I don't like the possibility of people checking what I do without my knowledge, as a general rule, regardless of what I do) but also on a professional level (I sometimes work from home).
I'm using/trying out VPN providers (JAP, VyprVPN...) to avoid all this. It works fine for HTTP connections (if I run IP traces I end up in Germany, the US, the UK...), but I'm not sure about other applications, such as online games and instant messaging software, that use different ports.
So my question is: how can I tell whether my internet traffic is going through my VPN connection or not?
Wireshark will do that for you on Windows and Linux (not sure about Mac). On Windows it uses the WinPcap library and wraps it in a nice UI so you can monitor the packets you are interested in. It lets you listen on specific interfaces or on all of them, so you can make sure your packets are going out via the right interface.
If you don't want them monitoring your internet usage, a VPN is a good solution. A VPN will encrypt all of your network traffic between your computer and the VPN gateway; essentially you'd be surfing the web via a proxy, and your landlords wouldn't be able to determine what you are doing.
Assuming you are using a real VPN, and not just a browser-based proxy solution, the VPN should encrypt and tunnel all of your network traffic. This includes anything coming out of any port on your computer, not just HTTP traffic.
When you install a VPN on your computer, it creates a virtual network device, and all of the traffic gets tunneled through it to the VPN gateway. You can verify this by looking at your computer's routing tables. Some VPNs allow for split traffic (split tunneling), e.g. traffic to certain destinations gets tunneled through the VPN while the rest goes in the clear, but this is the rarity; most VPNs will tunnel all of your traffic, which seems to be what you are looking for.
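As a quick complement to reading the routing tables, you can also ask the OS which local address it would pick for outbound traffic. This is just a sketch; the 8.8.8.8 target is an arbitrary public address, and no packets are actually sent by connect() on a UDP socket:

```python
# Rough check of which local interface carries default-route traffic.
# With a full-tunnel VPN up, this should print the VPN adapter's address
# rather than your LAN address.
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.connect(('8.8.8.8', 53))   # arbitrary public destination, nothing is transmitted
    print('Outbound traffic would use local address:', s.getsockname()[0])
finally:
    s.close()
```

If that prints your LAN address while the VPN is connected, some traffic is likely bypassing the tunnel; Wireshark on the physical interface, as suggested above, should otherwise show nothing but encrypted tunnel packets.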
Just make sure that your VPN uses encryption; there are some that don't, which would defeat the whole purpose of the VPN.

Dedicated web server for my application(s)?

I am looking for a dedicated server because shared webhosting solutions have some limitations.
I am going to start with one application (web server + DB), but in the future I will need more resources for more applications. I am starting small, so the price is very important right now; quality is more important, though.
The requirements are roughly as follows (not sure what I've forgotten):
scalable hardware resources (memory, HDD, bandwidth)
linux/unix based
able to install programs
ssh
ssl/https
backup solution?
unlimited number of outgoing emails
'simple scripts' ?
server user management
Update
Does the location of the server matter, given that I want to target my 'visitors' worldwide?
Well, I don't know where you are from and whether it matters to you where the server is located. But I am very happy with the Swiss-based hostfactory (I host some ecommerce solutions there). The support team reacts very fast, and you get full control of the server (RDP access on Windows, shell access on Linux).
Check it out here: hostfactory
Hardware resources are scalable via the web interface.
Yes - location matters. If you are going with just one server location, you need to make your best guess as to where most of your visitors are going to come from.
The plumbing of the internet tends to be US centric, so if you are not sure, and have no legal restrictions on where your data can live, that may be your best (and often cheapest) option.
I went for Linode.

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionality, so you know where optimizations are most beneficial.
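As a concrete illustration of several of the points above, the application itself can expose a small health endpoint for a monitoring system to poll. This is only a sketch using Python's standard library; the database and queue checks are placeholders, not a real implementation:

```python
# Minimal /health endpoint sketch using only the standard library.
# The individual checks (database, pending tasks) are placeholders.
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

START_TIME = time.time()

def check_database():
    return True   # placeholder: replace with a real connectivity check

def pending_tasks():
    return 0      # placeholder: replace with a real queue-depth check

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != '/health':
            self.send_error(404)
            return
        status = {
            'uptime_seconds': int(time.time() - START_TIME),
            'database_ok': check_database(),
            'pending_tasks': pending_tasks(),
        }
        body = json.dumps(status).encode()
        self.send_response(200 if status['database_ok'] else 503)
        self.send_header('Content-Type', 'application/json')
        self.send_header('Content-Length', str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == '__main__':
    HTTPServer(('0.0.0.0', 8080), HealthHandler).serve_forever()
```

Anything that can poll a URL (including most of the infrastructure tools mentioned elsewhere in this thread) can then alert on a non-200 response or on missing fields.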
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and - in my opinion - they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be best
visualizations - it's good to know what is going on and to draw some knowledge from the data
Because we didn't find a suitable solution, we started to write our own. We eventually ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it to:
monitor scheduled tasks (cron jobs) - see the heartbeat sketch after this list
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
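The cron-job case in particular boils down to a heartbeat pattern that works with any alerting service: the job reports in when it finishes, and the monitor raises an alert when the reports stop arriving. A rough sketch follows; the endpoint URL, the payload fields, and run_nightly_backup are made-up placeholders, not AlertGrid's actual API:

```python
# Sketch of a cron-job heartbeat. The endpoint URL and payload fields are
# hypothetical placeholders, not any particular service's real API.
import json
import urllib.request

def run_nightly_backup():
    pass   # placeholder for the real work the cron job does

def send_heartbeat(job_name):
    payload = json.dumps({'source': job_name, 'status': 'ok'}).encode()
    request = urllib.request.Request(
        'https://monitoring.example.com/heartbeat',   # placeholder endpoint
        data=payload,
        headers={'Content-Type': 'application/json'},
    )
    urllib.request.urlopen(request, timeout=10)

if __name__ == '__main__':
    run_nightly_backup()
    send_heartbeat('nightly-backup')
```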
This is such an open-ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
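A couple of those physical checks (points 1 and 4 above) are easy to script. This is only a sketch; the hostnames and the 10% threshold are placeholders:

```python
# Rough sketch of two of the physical checks above: reachability and disk space.
# Hostnames and the threshold are placeholders.
import shutil
import subprocess

HOSTS = ['web01.example.com', 'db01.example.com']   # machines hosting the site

def is_pingable(host):
    # One ICMP echo request; '-c' works on Linux/macOS (use '-n' on Windows).
    result = subprocess.run(['ping', '-c', '1', host],
                            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result.returncode == 0

def disk_free_percent(path='/'):
    usage = shutil.disk_usage(path)
    return 100.0 * usage.free / usage.total

for host in HOSTS:
    print(host, 'reachable' if is_pingable(host) else 'DOWN')

if disk_free_percent('/') < 10:
    print('WARNING: less than 10% disk space left on /')
```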
Once one lays out the physical monitoring of the systems, one can address the checks that are specific to a particular system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system specific. They can also often be derived reactively when responding to physical measurements: the hard drive fills up, and maybe it turns out the web server logs filled up because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is how many sites end up setting up their monitoring systems.
Minimum: make sure it is running :)
However, some other things would be very useful. For example, the CPU load, RAM usage, and (on multiuser systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be nice to be able to see the 'window title' of the app - maybe check every 2-3 minutes whether it changed and save it. Also, a list of files opened by the application could be very useful, but it is not a must.
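Most of that per-process information (CPU, RAM, network connections, open files) can be collected by an agent running on the machine. Here is a rough sketch using Python's psutil library (my assumption, not something the answer above requires), with the target process name as a placeholder:

```python
# Sketch: per-process CPU, memory, connections, and open files via psutil
# (pip install psutil). The target process name is a placeholder.
import psutil

TARGET = 'myapp'   # placeholder process name

for proc in psutil.process_iter(['pid', 'name']):
    if proc.info['name'] != TARGET:
        continue
    with proc.oneshot():   # batch the attribute reads for efficiency
        print('pid:', proc.pid)
        print('cpu %:', proc.cpu_percent(interval=1.0))
        print('rss MB:', proc.memory_info().rss / (1024 * 1024))
        print('connections:', len(proc.connections()))
        print('open files:', len(proc.open_files()))
```

The window-title idea is OS-specific and would need platform APIs on the client machines; it is not something psutil covers.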
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. What counts as 'healthy' is subjective: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is producing correct results, and so on?
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need visibility all the way down to the data output. If you only need to know whether the machines are up, that saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are just looking too hard into data results. I particularly liked Nagios when I was looking around but we needed more than it could easily show so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system, memory / cpu spikes, etc...
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be user machines and the specific programs that they use to do their jobs, and/or servers plus the data-moving/backend applications that they run to keep the data flowing
Sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking through a website). All of that data would be fed into a monitoring tool. In the past, I've instrumented applications with JMX, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
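In the same spirit as those "fake user" scripts, a synthetic transaction can be as small as timing one scripted request and reporting pass/fail plus latency. A hedged sketch, assuming Python's requests library and a made-up login URL:

```python
# Sketch of a synthetic end-to-end check: time a login request and report
# pass/fail plus latency. The URL and credentials are made-up placeholders.
import time
import requests

LOGIN_URL = 'https://app.example.com/login'

start = time.time()
try:
    response = requests.post(LOGIN_URL,
                             data={'user': 'synthetic', 'password': 'secret'},
                             timeout=15)
    ok = response.status_code == 200
except requests.RequestException:
    ok = False
elapsed_ms = (time.time() - start) * 1000

print(f'login check: {"PASS" if ok else "FAIL"} in {elapsed_ms:.0f} ms')
# In a real setup you would push these two values into the monitoring tool
# (a JMX attribute, a script adapter, a heartbeat endpoint, etc.) instead of printing.
```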
