Need VPN tool to simulate US states (Mississippi and Alabama) - geolocation

I am a QA engineer and need to test location detection. When a user first comes to the site, the 'Location' (by state) shown should be detected from their IP and displayed. I am in MO, and need to test whether users in Mississippi and Alabama see their own states. I've tried multiple VPN tools, free and paid, and they all have a limited set of states you can select. Any suggestions are welcome. Maybe there is another way to test it?
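One alternative worth considering, assuming the site derives the state from the client IP via a geo-IP lookup: have the dev team configure the staging build to trust the X-Forwarded-For header, then drive requests with spoofed source IPs from a test script instead of a VPN. A minimal sketch under that assumption - the URL is hypothetical, and the two IPs are documentation-range placeholders you would replace with addresses your geo-IP database actually maps to Mississippi and Alabama:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GeoSpoofTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Placeholder IPs -- substitute addresses that your geo-IP database
        // maps to Mississippi and Alabama respectively.
        String[] testIps = {"198.51.100.10", "203.0.113.20"};
        for (String ip : testIps) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://staging.example.com/")) // hypothetical test URL
                    .header("X-Forwarded-For", ip) // only honored if the app trusts this header
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(ip + " -> HTTP " + response.statusCode());
            // Assert on response.body() that the expected state name is displayed.
        }
    }
}
```

This only works in an environment you control; production should never trust X-Forwarded-For from arbitrary clients.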

Semantics of an HL7-enabled point-of-care device - is this the right way to do it?

I am implementing automated HL7 v2.7 reporting of observations on a point-of-care device. The way this works is by sending an "ORU^R30 Unsolicited Point-Of-Care Observation Message without Existing Order - Place an Order" message to what I'm assuming will be a laboratory information system (LIS), or an associated channel in an integration engine. I'm currently planning to have the device ask for the IP/port of the LIS and MPI (and their associated connections) on first set-up - our device will communicate over TCP/LLP.
Is this the smart way to do all this? I've never worked with HL7 or any kind of HIS before.
I appreciate any possible insight. This isn't the stuff you can learn about in the standard, and I don't think I can just email Epic and ask them how they design EHR/HIS systems.
Thanks!
Message Content: ORU^R30 is not a commonly used message type, but the structure is close enough to R01 that most systems will be able to receive it. Focus on collecting as much patient demographic data as possible, plus the visit number - or better yet, scan both from the patient's wristband barcode. You must have the patient and visit to file the observations.
Transmission: It's safest to just do MLLP over TCP; it will speed up your installs because that's what everybody else does. The alternative is having the health system write something custom to receive the data, usually via the interface engine.
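For reference, MLLP framing is just the HL7 payload wrapped in a vertical-tab byte (0x0B) at the start and a file separator plus carriage return (0x1C 0x0D) at the end. A minimal sketch of a sender - the synchronous ACK read and lack of retry logic are simplifications:

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class MllpSender {
    // MLLP framing bytes from the HL7 lower layer protocol.
    private static final byte START_BLOCK = 0x0B;     // <VT>
    private static final byte END_BLOCK = 0x1C;       // <FS>
    private static final byte CARRIAGE_RETURN = 0x0D; // <CR>

    public static String send(String host, int port, String hl7Message) throws Exception {
        try (Socket socket = new Socket(host, port)) {
            OutputStream out = socket.getOutputStream();
            out.write(START_BLOCK);
            out.write(hl7Message.getBytes(StandardCharsets.US_ASCII));
            out.write(END_BLOCK);
            out.write(CARRIAGE_RETURN);
            out.flush();

            // Read the ACK back until the end-of-block marker.
            InputStream in = socket.getInputStream();
            StringBuilder ack = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != END_BLOCK) {
                if (b != START_BLOCK) ack.append((char) b);
            }
            return ack.toString();
        }
    }
}
```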
Network: It sounds like you're thinking of putting the connection info on the device. This is probably a bad idea; I would build some kind of aggregator service that actually sends data to the EHR, so that you don't have to deal with multiple devices trying to get through firewalls, etc.

How to prevent gaming of website rewards for new visitors

I'm about to embark on a website build where a company wants to reward new visitors with a gift. The gift has some monetary value, and I'm concerned about the site being gamed. I'm looking for ways to help reduce the chance that any one person can drain the entire gift inventory.
The plans call for an integration with Facebook, so authenticating with your FB credentials will provide at least a bit of confidence that a new visitor is actually a real person (assuming that scripting the creation of hundreds of FB accounts and then authenticating with them is no simple task).
However, there is also a requirement to reward new visitors who do not have FB accounts, and this is where I'm looking for ideas. An email verification system by itself won't cut it, because it's extremely easy to obtain a practically unlimited number of email addresses (me+1@gmail.com, me+2@gmail.com, etc.). I've been told that asking for a credit card number is too much of a barrier.
Are there some fairly solid strategies or services for dealing with situations like this?
EDIT: The "gift" is virtual - like a coupon
Ultimately, this is an uphill, losing battle. If there is an incentive to beat the system, someone will try, and they will eventually succeed. (See, for example: every DRM scheme ever implemented.)
That said, there are strategies to reduce the ease of gaming the system.
I wouldn't really consider FB accounts to be that secure. The barrier to creating a new FB account is probably negligibly higher than creating a new webmail account.
Filtering by IP address is bound to be a disaster. There may be thousands of users behind a proxy on a single IP address (cough, AOL), and a scammer could employ a botnet to distribute each account request to a unique IP. It is likely to be more trouble than it is worth to preemptively block IPs, but you could analyze the requests later - for example, before actually sending the reward - to see if there's a lot of suspicious behavior from an IP.
Requiring a credit card number is a good start, but you've already ruled that out. Also consider that one individual can have 10 or more card numbers between actual credit cards, debit cards, and one-time-use card numbers.
Consider sending a verification code via SMS to PSTN numbers. This will cost you some money (a few cents per message), but it also costs a scammer a decent amount to acquire a large number of phone numbers to receive those messages. (Depending on the value of your incentive, the cost of a prepaid SIM may make it cost-prohibitive.) Of course, if a scammer already has many SMS-receiving PSTN numbers at his disposal, this won't work.
First thing I wonder is if these gifts need to be sent to a physical address. It's easy to spoof 100 email addresses or FB accounts but coming up with 100 clearly unique physical addresses is much harder, obviously.
Of course, you may be giving them an e-coupon or something, so an address might not be an option.
Once upon a time I wrote a pretty intense anti-gaming script for a contest judging utility. While this was many months of development and is far too complex to describe in great detail, I can outline the basic features of the script:
For one, we logged every detail we could when a user applied for the contest. It was pretty easy to catch obvious similarities between accounts by factoring the average time between logins/submissions against a group of criteria (like IP, browser, etc. - all things that can be spoofed, so by themselves they are unreliable). In addition, I compared account credentials for obvious gaming - like acct1@yahoo.com, acct2@yahoo.com, etc. - using a combination of Levenshtein distance (not solely reliable on its own) and a parsing script that broke apart the various details of the credentials and looked for patterns.
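To illustrate the Levenshtein piece: a minimal sketch (not the original script) that flags pairs of addresses whose local parts are within a small edit distance of each other.

```java
import java.util.ArrayList;
import java.util.List;

public class AccountSimilarity {
    // Classic dynamic-programming Levenshtein edit distance.
    static int levenshtein(String a, String b) {
        int[][] d = new int[a.length() + 1][b.length() + 1];
        for (int i = 0; i <= a.length(); i++) d[i][0] = i;
        for (int j = 0; j <= b.length(); j++) d[0][j] = j;
        for (int i = 1; i <= a.length(); i++) {
            for (int j = 1; j <= b.length(); j++) {
                int cost = a.charAt(i - 1) == b.charAt(j - 1) ? 0 : 1;
                d[i][j] = Math.min(Math.min(d[i - 1][j] + 1, d[i][j - 1] + 1),
                                   d[i - 1][j - 1] + cost);
            }
        }
        return d[a.length()][b.length()];
    }

    // Flag pairs of addresses whose local parts are suspiciously close.
    public static List<String> flagSimilar(List<String> emails, int threshold) {
        List<String> flagged = new ArrayList<>();
        for (int i = 0; i < emails.size(); i++) {
            for (int j = i + 1; j < emails.size(); j++) {
                String a = emails.get(i).split("@")[0];
                String b = emails.get(j).split("@")[0];
                if (levenshtein(a, b) <= threshold) {
                    flagged.add(emails.get(i) + " ~ " + emails.get(j));
                }
            }
        }
        return flagged;
    }
}
```

With a threshold of 1 or 2, acct1@yahoo.com and acct2@yahoo.com come out flagged; as noted above, this only feeds a probability score, it doesn't act on its own.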
Depending on the scores of each test, we assigned a probability of gaming as well as a list of possible account matches. Then it was up to the admins to exclude them from the results.
You could go on for months refining your algorithm and never get it perfect. That's why my script only flagged accounts and did not take any automatic action.
Since you're talking about inventory, can we therefore assume your gift is an actual physical item?
If so, then delivery of the gift will require a physical address for delivery - requiring unique addresses (or, allowing duplicate addresses but flagging those users for manual review) should be a good restriction.
My premise is this: While you can theoretically run a script to create 100s of Facebook or Google accounts, exercising physical control over hundreds of distinct real world delivery locations is a whole different class of problem.
I would suggest a more 'real world' solution instead of all the security: make it clear that it is one coupon per address - a physical (delivery and/or payment) address. Then just do as you want; maybe limit it by email or something for the looks of it, but in the end, limit it per real end user, not per person receiving the coupon.

Printing from one Client to another Client via the Server

I don't know if it sounds crazy, but here's the scenario -
I need to print a document over the internet. My PC (ClientX) initiates the process, using the web browser to access ServerY on the internet, and the printer is connected to ClientZ (which may be yours).
1. The document is stored on ServerY.
2. ClientZ is purely a client; no IIS, no print server, etc.
3. I have the specific details of ClientZ, IP, Port, etc.
4. It'll be completely a server-side application (with no client-side piece on ClientZ), built with ASP.NET & C#.
- So, is it possible? If yes, please give me some clue. Thanks in advance.
This is kind of too big a question for SO, but basically what you need to do is:
upload files to the server -- trivial
do some stuff to figure out if they are allowed to print the document -- trivial to hard depending on scope
add items to a queue for printing and associate them with a user/session -- easy
render and print the document -- trivial to hard depending on scope
notify the user that the document has been printed
handle errors
The big unknowns here are scope: if this is for a school project, you probably don't have to worry about billing or queue priority in step two. If it's for a commercial product, billing can be a significant subsystem in itself.
The difficulty in step 4 depends directly on which formats you are going to support, as many formats require document-specific libraries or applications. There are also security considerations here if this is a commercial product, since it isn't safe to try to render all types of files.
Notifications can be easy or hard depending on how you want to do them. You can post back to the HTML page, but depending on how long it's going to take for a job to complete, it might be nice to have an email option as well.
You also need to think about errors. What is going to happen when paper or toner runs out or when someone tries to print something on A4 paper? Someone has to be notified so that jobs don't just build up.
On the server I would run just the user interaction piece on the web and have a "print daemon" running as a service to manage getting the documents printed and monitoring their status. I would use WCF to do IPC between the two.
Within the print daemon you are going to need a set of components to print different kinds of documents. I would make one assembly per type (or cluster of types) and load them into your service as plugins using MEF.
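As a rough illustration of that plugin pattern - sketched here in Java with ServiceLoader, since in .NET MEF's [Export]/[Import] composition plays the same role; the DocumentPrinter interface is invented for the example:

```java
import java.util.ServiceLoader;

// Contract each document-type printer plugin implements (hypothetical interface).
interface DocumentPrinter {
    boolean supports(String mimeType);
    void print(byte[] document, String printerName) throws Exception;
}

public class PrintDaemon {
    public static void dispatch(byte[] document, String mimeType,
                                String printerName) throws Exception {
        // ServiceLoader discovers implementations listed under
        // META-INF/services on the classpath, much like MEF composes
        // exported parts from a plugin directory.
        for (DocumentPrinter printer : ServiceLoader.load(DocumentPrinter.class)) {
            if (printer.supports(mimeType)) {
                printer.print(document, printerName);
                return;
            }
        }
        throw new IllegalArgumentException("No plugin handles " + mimeType);
    }
}
```

The point of the pattern is that adding support for a new document format means dropping in a new plugin, not touching the daemon.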
Sorry this is so general, but you are asking a pretty general and difficult-to-answer question.

Android wifi connection

Hi members! I am Boniface M, a beginner in Android (university student).
My question: I am planning to develop an Android app/middleware that will act as a grid service, i.e. an app for grid computing. The application needs to be installed on 1..n devices. In the connection, one device must act as a server for all the others. Communication between the devices is via Wi-Fi, under the permission of the server device, which is determined by a certain algorithm [no problem here].
The problem is: should I use a database to keep track of all the services a device is running that are accessible to other devices, or is there a way I can keep all this information directly and retrieve it on request from an app installed on another device?
Also, how can I share files via Wi-Fi, the way Bluetooth does?
Thanks....
You're asking many questions in one, and I'm actually unsure what you mean overall. Here are a few links that are sure to be of some use...
http://developer.android.com/reference/android/os/Build.html - This class is good for finding out information about the device you're running on.
http://developer.android.com/reference/android/location/Criteria.html - Criteria might be useful; it lets you know what location-based services you have running.
Other than that, if you're looking to see if particular things are running check out this question: How to check if a service is running on Android?
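The gist of the answers there is to walk ActivityManager's running-service list. A minimal sketch - note that getRunningServices is deprecated on newer Android versions, where apps can only see their own services:

```java
import android.app.ActivityManager;
import android.app.ActivityManager.RunningServiceInfo;
import android.content.Context;

public class ServiceUtils {
    // Returns true if a service with the given class name is currently running.
    public static boolean isServiceRunning(Context context, Class<?> serviceClass) {
        ActivityManager manager =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        for (RunningServiceInfo info : manager.getRunningServices(Integer.MAX_VALUE)) {
            if (serviceClass.getName().equals(info.service.getClassName())) {
                return true;
            }
        }
        return false;
    }
}
```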
If you're looking to keep a central hub of which devices have what available, etc., you're going to need a middle man, I suspect. If it were me, I'd make HTTP requests to a server - to PHP scripts I had written - which would then read/write a MySQL database to get information about other devices.
If you want to share files via Wi-Fi, you're going to need something like an FTP server on the phone. There's an app, swiFTP, which does this to some degree (phone -> PC), but the concept should be the same. Take a look at it - it's a starting point! http://www.tested.com/news/how-to-transfer-files-wirelessly-to-your-android-phone/53/
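That said, you don't strictly need full FTP: if both devices are on the same Wi-Fi network, a plain TCP socket is enough to move a file between them. A minimal, hypothetical sketch (the port number is arbitrary; on Android, run this off the main thread):

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class WifiFileShare {
    // One device listens and streams a file to whoever connects.
    public static void serveFile(File file, int port) throws IOException {
        try (ServerSocket server = new ServerSocket(port);
             Socket client = server.accept();
             InputStream in = new FileInputStream(file);
             OutputStream out = client.getOutputStream()) {
            copy(in, out);
        }
    }

    // The other device connects and writes the bytes to disk.
    public static void receiveFile(String host, int port, File dest) throws IOException {
        try (Socket socket = new Socket(host, port);
             InputStream in = socket.getInputStream();
             OutputStream out = new FileOutputStream(dest)) {
            copy(in, out);
        }
    }

    private static void copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) out.write(buf, 0, n);
    }
}
```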
Again, I'm unsure EXACTLY what you're looking to do, but hopefully all of that is of some help. If it's not, leave me a comment and I'll try to assist you further.
Hope it helps!

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (ms-exchange, apache, etc.) sufficient or do individual user applications, web sites, and databases also need to be monitored?
If the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual CPU/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionality, so you know where optimizations are most beneficial (a sketch follows).
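For that last point, usage tracking can start as simply as thread-safe per-feature counters that you dump on a schedule. A minimal sketch (the class and method names are illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Thread-safe per-feature usage counters; dump them on a schedule or on demand.
public class FeatureUsage {
    private static final Map<String, LongAdder> counts = new ConcurrentHashMap<>();

    public static void record(String feature) {
        counts.computeIfAbsent(feature, k -> new LongAdder()).increment();
    }

    public static void report() {
        counts.forEach((feature, n) ->
                System.out.println(feature + ": " + n.sum() + " uses"));
    }
}
```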
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and - in my opinion - they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be best
visualizations - it's good to know what is going on and draw some insight from the data
Because we didn't find a suitable solution, we started to write our own. We finally ended up with a running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment, we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open-ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the DB; see the sketch after this list.)
5. Have these machines been backed up? When was the last time?
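Items 1 and 4 are easy to automate with standard JDK calls; a minimal sketch (note that isReachable may fall back from ICMP to a TCP echo depending on OS privileges):

```java
import java.io.File;
import java.net.InetAddress;

public class PhysicalChecks {
    // Check 1: is the host reachable within the given timeout?
    public static boolean isPingable(String host, int timeoutMs) throws Exception {
        return InetAddress.getByName(host).isReachable(timeoutMs);
    }

    // Check 4: remaining usable disk space on a given volume, in gigabytes.
    public static long freeSpaceGb(String path) {
        return new File(path).getUsableSpace() / (1024L * 1024 * 1024);
    }
}
```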
Once one lays out the physical monitoring of the systems, one can address those specific to a system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system-specific. They can also usually be derived reactively when responding to physical measurements: if a hard drive fills up, maybe the web server logs grew because a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up its monitoring.
Minimum: make sure it is running :)
However, some other stuff would be very useful. For example: the CPU load, RAM usage, and (in multiuser systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be cool to be able to see the window title of the app - maybe checking every 2-3 minutes whether it has changed and saving it. A list of files open by the application could also be very useful, but it is not a must.
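If the application can self-report, the load and memory numbers are cheap to get from the JVM's own management beans. A minimal sketch (assumes a Java application; other runtimes have equivalents):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.OperatingSystemMXBean;

public class ProcessVitals {
    public static void report() {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // System load average over the last minute (-1 if unavailable on this platform).
        System.out.println("load avg: " + os.getSystemLoadAverage());
        System.out.println("heap used: "
                + mem.getHeapMemoryUsage().getUsed() / (1024 * 1024) + " MB");
    }
}
```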
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. What counts as healthy is subjective: is it that the computers are up, that the needed resources exist, that data is flowing through the system, that the data is producing correct results, etc.?
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need to check all the way down to the data output. If you only need to know whether the machines are up, that saves you from trying to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you don't need to dig too deeply into data results. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically, we also watch for peculiarities in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
Infrastructure monitoring would be servers plus MS Exchange Server, Apache, IIS, and so forth.
Application monitoring would be user machines and the specific programs they use to do their jobs, and/or servers plus the data-moving/backend applications they run to keep the data flowing.
Sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and then have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g., emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've used JMX for applications, which flowed into Tivoli Monitoring's JMX adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's script adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
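As a concrete example of emitting events at major business components over JMX, a minimal standard MBean might look like the sketch below. The OrderHealth name and metrics are invented for illustration; JMX's standard-MBean convention requires the interface to be named <Class>MBean:

```java
import java.lang.management.ManagementFactory;
import java.util.concurrent.atomic.AtomicLong;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean contract: attribute getters become JMX attributes.
interface OrderHealthMBean {
    long getOrdersProcessed();
    long getOrdersFailed();
}

public class OrderHealth implements OrderHealthMBean {
    private final AtomicLong processed = new AtomicLong();
    private final AtomicLong failed = new AtomicLong();

    // Call these from the business components as events occur.
    public void recordSuccess() { processed.incrementAndGet(); }
    public void recordFailure() { failed.incrementAndGet(); }

    @Override public long getOrdersProcessed() { return processed.get(); }
    @Override public long getOrdersFailed() { return failed.get(); }

    public static OrderHealth register() throws Exception {
        OrderHealth health = new OrderHealth();
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        // The domain/name here is a made-up example.
        server.registerMBean(health, new ObjectName("myapp:type=OrderHealth"));
        return health;
    }
}
```

Once registered, any JMX-aware monitoring tool (Tivoli's JMX adapter, JConsole, etc.) can poll these counters and chart them.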
