tricky POS printer design question [closed] - printing

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I'm looking at designing an application to run on POS terminals in addition to the software already installed. I'd like it to receive POS printer commands and intercept and modify some of them. So, for example, when a receipt is printed, we'd like to add a custom reference number in the middle of it without having to modify the third-party POS applications.
I'd love to hear people's suggestions on the best way to approach this; having read through the POS specs, it doesn't seem trivial.

I think the solution will be to do this outside of the POS app but communicate at a level it understands: print. Something that looks like a printer to the POS, captures the data, reformats it, and then sends it on would do the trick, or something that interfaces with the OS (let's say Windows) at the printer-port level.
On Windows we use a custom port monitor we created to capture and route this data; it's something we use internally, so I wouldn't suggest it for you as it has some bugs. A similar solution is RedMon. It could either solve the problem outright or give you ideas on how to accomplish it. Once the data is captured, you launch a process against it.
Alternatively, if it's going over the network, you can always set up something that monitors port 9100 (RAW) or 515 (LPR) to intercept the data (see the sketch at the end of this answer).
Lastly, if it's Windows and you don't want to create something as low-level as RedMon, you can always use named pipes. You'd run a service application that monitors a named pipe. The printer from the POS would have its port set to 'local', and the port name would be of the form \\.\pipe\ followed by your pipe's name. That would allow the application to communicate directly with your service and thus launch a process.
You could have multiple named-pipe/RedMon/network ports set up, each with a unique associated output, to direct the data to the correct device on the other side.
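To make the port 9100 option concrete, here is a minimal intercept-and-forward sketch. It happens to use Apple's Network framework, so treat it purely as an illustration of the shape (on Windows you'd do the same thing with whatever socket API you prefer); the printer address is an assumption, and the place where you'd splice your reference number into the print data stream is marked with a comment.

```swift
import Foundation
import Network

// Sketch of a tiny intercept-and-forward proxy for RAW (port 9100) printing.
// "192.168.1.50" stands in for the real printer's address (assumption).
let realPrinter = NWEndpoint.Host("192.168.1.50")
let rawPort: NWEndpoint.Port = 9100

let listener = try! NWListener(using: .tcp, on: rawPort)
listener.newConnectionHandler = { client in
    let upstream = NWConnection(host: realPrinter, port: rawPort, using: .tcp)
    client.start(queue: .main)
    upstream.start(queue: .main)

    func pump() {
        client.receive(minimumIncompleteLength: 1, maximumLength: 65_536) { data, _, isComplete, _ in
            if let data = data {
                // This is where you would scan the incoming print stream and splice in
                // the extra reference line before forwarding it to the real printer.
                let modified = data
                upstream.send(content: modified, completion: .contentProcessed { _ in })
            }
            if isComplete { upstream.cancel(); client.cancel() } else { pump() }
        }
    }
    pump()
}
listener.start(queue: .main)
RunLoop.main.run()
```

The POS software would then be pointed at this proxy's address instead of the real printer; replies from the printer aren't relayed back in this sketch.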

Related

What is the best way to share real-time data between multiple devices for a short time frame in iOS [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 months ago.
I have an idea for an application where I would like the following to happen:
1. A group of people / their devices create a sharing session.
2. They can set a time frame for this session (it could be one hour or one day, but not long term).
3. Once the session is created, they go through their day capturing data, which I want to be shared automatically between all the devices.
4. At the end of the session, each device can choose which of the collected data it would like to keep locally, regardless of who collected it.
5. Once each device has saved what it wants, the shared storage is removed.
What I am struggling with is the best technology for sharing the data. I would rather not have notifications each time data is shared.
I have looked at Multipeer Connectivity as a solution, where each bit of data would get sent to each device and stored locally, but the biggest drawback is the inability to maintain a session in the background, which is likely where the application would reside for the majority of the time period.
Any direction or areas to research would be greatly appreciated. Note: at least to start with, I would be looking to implement this in iOS.
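For reference, the Multipeer Connectivity setup described above looks roughly like the sketch below; the service type and the auto-accepted invitations are assumptions for illustration, and the session it creates is exactly what gets suspended once the app moves to the background.

```swift
import MultipeerConnectivity
import UIKit

// Rough sketch of a Multipeer Connectivity sharing session. The service type
// "data-share" and the automatic invitation handling are assumptions only.
final class ShareSession: NSObject, MCSessionDelegate, MCNearbyServiceAdvertiserDelegate {
    private let peerID = MCPeerID(displayName: UIDevice.current.name)
    private lazy var session = MCSession(peer: peerID, securityIdentity: nil, encryptionPreference: .required)
    private lazy var advertiser = MCNearbyServiceAdvertiser(peer: peerID, discoveryInfo: nil, serviceType: "data-share")

    func start() {
        session.delegate = self
        advertiser.delegate = self
        advertiser.startAdvertisingPeer()   // stops working once the app is backgrounded
    }

    // Send a captured item to every connected device.
    func broadcast(_ data: Data) throws {
        try session.send(data, toPeers: session.connectedPeers, with: .reliable)
    }

    // Another device would run an MCNearbyServiceBrowser and invite this peer;
    // here the invitation is accepted automatically so the group forms without prompts.
    func advertiser(_ advertiser: MCNearbyServiceAdvertiser, didReceiveInvitationFromPeer peerID: MCPeerID,
                    withContext context: Data?, invitationHandler: @escaping (Bool, MCSession?) -> Void) {
        invitationHandler(true, session)
    }

    // Incoming data lands here; each device would persist it locally.
    func session(_ session: MCSession, didReceive data: Data, fromPeer peerID: MCPeerID) { /* persist */ }
    func session(_ session: MCSession, peer peerID: MCPeerID, didChange state: MCSessionState) {}
    func session(_ session: MCSession, didReceive stream: InputStream, withName streamName: String, fromPeer peerID: MCPeerID) {}
    func session(_ session: MCSession, didStartReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, with progress: Progress) {}
    func session(_ session: MCSession, didFinishReceivingResourceWithName resourceName: String, fromPeer peerID: MCPeerID, at localURL: URL?, with error: Error?) {}
}
```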
I would imagine a temporary folder on a web server (kinda like shared cloud folders) would be among the better solutions to this.
It doesn't require physical proximity to the other participants; they could be in entirely different countries and it wouldn't matter. There's nothing stopping you from adding such a limitation anyway, however.
It would be far easier to implement tech-wise, especially considering cross-platform support.
You escape many logistical problems, such as storage space on individual devices. For example, what happens when a participant wants to keep some data, but there isn't even enough space to see a preview of what they're missing out on?
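A minimal sketch of the client side of that temporary shared-folder idea, assuming a hypothetical backend with create/upload/list/delete routes (the base URL and paths below are made up) and iOS 15+ for the async URLSession APIs:

```swift
import Foundation

// Hedged sketch of the "temporary folder on a web server" approach from the client side.
// All endpoint paths are hypothetical; any minimal HTTP backend would do.
struct SharedSessionClient {
    let baseURL = URL(string: "https://example.com/api")!   // hypothetical server

    // Ask the server to create a session folder; it returns a session ID to share.
    func createSession() async throws -> String {
        var request = URLRequest(url: baseURL.appendingPathComponent("sessions"))
        request.httpMethod = "POST"
        let (data, _) = try await URLSession.shared.data(for: request)
        return String(decoding: data, as: UTF8.self)
    }

    // Push a captured item into the shared folder as soon as it is created.
    func upload(_ item: Data, to sessionID: String) async throws {
        var request = URLRequest(url: baseURL.appendingPathComponent("sessions/\(sessionID)/items"))
        request.httpMethod = "POST"
        _ = try await URLSession.shared.upload(for: request, from: item)
    }

    // At the end of the time frame, list everything so each device can pick what to keep.
    func listItems(in sessionID: String) async throws -> [String] {
        let url = baseURL.appendingPathComponent("sessions/\(sessionID)/items")
        let (data, _) = try await URLSession.shared.data(from: url)
        return try JSONDecoder().decode([String].self, from: data)
    }

    // Once everyone has saved locally, the session folder is deleted server-side.
    func closeSession(_ sessionID: String) async throws {
        var request = URLRequest(url: baseURL.appendingPathComponent("sessions/\(sessionID)"))
        request.httpMethod = "DELETE"
        _ = try await URLSession.shared.data(for: request)
    }
}
```

Each device would call listItems at the end of the time frame, download whatever it wants to keep, and then the session folder gets removed via closeSession (or by a server-side timer).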

How do I create a client-server network in Swift, or how do I send a request to another iPhone in the app? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
I'm trying to create an app like Uber, and I'm having trouble with the iPhone-to-iPhone connection. How am I to send a request to another iPhone saying "I am your driver!"? Am I to have riders become accepted and add them to some database of riders where drivers can see them? Basically, I just want a little explanation of the ways I can use Swift to connect iPhones; any help is appreciated.
Looks like you've got a decently long way to go, but let's break this down.
Despite how it may seem, phones don't usually talk directly to each other. In these circumstances, an app will contact a central server in order to get information about things around them. The phone (in your situation) would likely contact the server and request a list of nearby drivers and locations. The server would then send a feed of nearby drivers and their locations so that the phone could display the locations of the drivers.
When you request a ride, your phone will tell the server its current location, and potentially the targeted location. A lot of work is done behind the scenes in the server to schedule a driver to pick you up. The server is keeping track of where a given driver is, how many other clients he has in queue, how long the driver would take to get to you, among many other factors. Once it figures out which driver would be best able to serve you, it will contact that driver and tell him to start moving toward you.
Then the server will contact you saying that it has found a driver, and then will send you the feed as to where that driver is in his progress to get you.
So to more directly answer your question, you'll need to start by setting up a server to do a lot of the work behind the scenes. You can write a server backend in Swift using Vapor, but server-side Swift is in its infancy. I'd also recommend looking into Ruby on Rails using the Ruby programming language, or Node.js using JavaScript. But none of these are trivial matters.
Given the nature of your question, the problem you're attempting to solve is certainly a lot more difficult than you've anticipated. But don't let that stop you from asking questions like these.
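To give a flavour of what one of those backend endpoints might look like in Vapor (mentioned above), here is a hedged sketch; the route, the Driver model, and the nearestDrivers helper are all hypothetical stand-ins for your own matching logic.

```swift
import Vapor

// Hypothetical model returned to the rider's phone as JSON.
struct Driver: Content {
    let id: UUID
    let latitude: Double
    let longitude: Double
}

func routes(_ app: Application) throws {
    // GET /drivers?lat=...&lon=... -> list of nearby drivers for the rider's map.
    app.get("drivers") { req -> [Driver] in
        let lat = try req.query.get(Double.self, at: "lat")
        let lon = try req.query.get(Double.self, at: "lon")
        return nearestDrivers(latitude: lat, longitude: lon)
    }
}

// Placeholder for the real "which drivers are close and available" logic.
func nearestDrivers(latitude: Double, longitude: Double) -> [Driver] {
    return []
}
```

The iPhone app would hit that endpoint with URLSession and decode the JSON; the ride-request flow would be another POST endpoint built the same way.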

Observing user interactions [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 9 years ago.
I wish to track a set of data my educational app generates based on user challenges. Every nth challenge I want the app to send these metrics to my server so I can observe various things about the app.
Further, and most importantly, I need to uniquely identify each instance of my app so that I can watch the trends of a single user. I wish to persist this number through the life of the user's interaction with my program in an anonymous kind of way, and have it persist over multiple removals/installations on the same device.
Bonus points for your opinion on the standard method of reporting these metrics to a web server. XML? JSON? Simple NSURLs?
Bonus points for links to relevant Apple Documentation.
DISCLAIMER: (due to past experiences...)
I am relatively new to stack overflow. If this post doesn't conform to the standards of this site, please explain why before voting me off of the island.
You can't tie a device to a user unless you set up a username/password combination. Nothing else will work if you expect to handle app removal, reinstallation, or device upgrades.
As for the preferred data type: my preference is JSON, but that's just a preference and you'll get lots of differing replies, hence it's a somewhat pointless question.
Take a look at this link. It explains which identifiers are constant, when, and in which situations they are not. He talks about identifierForVendor and advertisingIdentifier, which are now the only supported unique identifiers you can access. They took away UDID tracking as well as the MAC address method. You can still get the device serial number, but that method uses code that will get your app rejected by Apple's App Store review process.
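As a concrete illustration of the JSON route using identifierForVendor as the anonymous instance ID, here is a hedged sketch; the endpoint URL and metric keys are made up, and note that identifierForVendor resets if all of the vendor's apps are removed from the device, so it does not fully cover the removal/reinstall case the question asks about.

```swift
import UIKit

// Sketch: report a handful of metrics as JSON, keyed by an anonymous instance ID.
// The URL and the metric names are hypothetical.
func reportMetrics(challengesCompleted: Int, averageScore: Double) {
    guard let deviceID = UIDevice.current.identifierForVendor?.uuidString,
          let url = URL(string: "https://example.com/metrics") else { return }

    let payload: [String: Any] = [
        "instance_id": deviceID,
        "challenges_completed": challengesCompleted,
        "average_score": averageScore
    ]

    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: payload)

    // Fire-and-forget upload every nth challenge; inspect the response in real code.
    URLSession.shared.dataTask(with: request).resume()
}
```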

Recommended alternative to webkit for server-sent events on iOS [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 7 years ago.
I would like to receive Server-Sent Events in my native iOS app; however, I am not using WebKit/Safari. From what I've found, NSURLConnection is a poor fit as it chunks the response. I've also looked at the ZTWebSocket library (obviously nice, but I'm seeking SSE, not WebSockets). Would CocoaAsyncSocket be appropriate, or is it limited to pure TCP socket communication?
I have a sneaking suspicion that I am missing something obvious, or there'd be a library or sample for this already. Thanks in advance.
SSE is an HTTP technology in the sense that it is bound to an open HTTP connection. CocoaAsyncSocket provides raw TCP/UDP sockets and does not know anything about HTTP. So no, CocoaAsyncSocket won't give you SSE, as you suspected.
I don't know of any standalone implementation of SSE (in the spirit of standalone WebSocket implementations), which is maybe what you are searching for. I also don't know whether that would make sense at all, since SSE delivers messages in the form of DOM events, which make the most sense in the context of HTML, as far as I can see.
If all you want to achieve is sending messages to your iOS app and you are free in the choice of technology, raw sockets would do. But WebSockets are more likely to suit your needs, depending on what you want. Take a look at SocketRocket.
After some more research on this, it's my opinion that the best way to implement Server-Sent Events on iOS without WebKit is to use a customized NSURLConnection/NSURLRequest toolset. I settled on ASIHTTPRequest. This library allows you to explicitly control the persistence attribute on the connection object (essential), handle data as it is received over the stream, store responses (e.g. in local files), etc.
Not to mention it contains lots of other handy extensions/customizations in the realm of networking: an improved Reachability observer, a simplified API for async requests, a queuing feature, even the ability to load entire web pages (CSS, JS and all).
On the server side, I'm using cyclone-sse (Tornado) and nginx (as a reverse proxy). Pretty exciting: now I can see my SSEs pushed simultaneously to both my iOS simulator and a browser subscriber. Cyclone even handles all the connections and gives me an API that supports simple POSTs for message pushes (it also supports AMQP and Redis).
Long story short, ASIHTTPRequest was a perfect solution for me.
Try this simple library which is written in Swift:
https://github.com/hamin/eventsource.swift
The API is super simple. It uses NSURLConnection for now.
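For completeness, on current iOS versions a bare-bones SSE client can also be put together directly on top of URLSession's data delegate, without WebKit or third-party libraries. The sketch below is only an approximation: it parses simple "data:" lines rather than the full EventSource spec, and reconnection and Last-Event-ID handling are omitted.

```swift
import Foundation

// Minimal SSE client sketch: keep an HTTP connection open and parse "data:" lines
// as chunks arrive via the URLSession data delegate.
final class SSEClient: NSObject, URLSessionDataDelegate {
    private var buffer = ""
    private lazy var session = URLSession(configuration: .default, delegate: self, delegateQueue: nil)
    var onMessage: ((String) -> Void)?

    func connect(to url: URL) {
        var request = URLRequest(url: url)
        request.setValue("text/event-stream", forHTTPHeaderField: "Accept")
        request.timeoutInterval = .infinity   // keep the HTTP connection open
        session.dataTask(with: request).resume()
    }

    // Chunks arrive here as the server pushes events; events end with a blank line.
    func urlSession(_ session: URLSession, dataTask: URLSessionDataTask, didReceive data: Data) {
        buffer += String(decoding: data, as: UTF8.self)
        while let range = buffer.range(of: "\n\n") {
            let event = buffer[..<range.lowerBound]
            buffer.removeSubrange(..<range.upperBound)
            for line in event.split(separator: "\n") where line.hasPrefix("data:") {
                onMessage?(String(line.dropFirst(5)).trimmingCharacters(in: .whitespaces))
            }
        }
    }
}
```

Usage would be creating an SSEClient, assigning onMessage, and calling connect(to:) with the stream URL.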

HCI challenges of Web 2.0 [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
What are the HCI challenges of Web 2.0?
Here are a few more:
Clear privacy options
Facebook has repeatedly changed the way it deals with content ownership and privacy. (See here, here and here.) Aside from the obvious PR gaffes, this has also demonstrated the difficulty users have understanding privacy.
Geeks like us are familiar with ideas of inheritance and groups. Heck, many of us work explicitly with permission structures when dealing with files on *nix systems. To most users though, it's not clear who can see what or why.
Service Interoperability
On the desktop we're used to being able to chain together tools to get the outcome we want. A simple example would be dragging image thumbnails from a file explorer to an image editor. We'd expect that to work, but it doesn't on the web.
The Flock browser goes some way to overcome this shortfall, as does the Google Docs web clipboard, but interaction between web services is still a long way off what we expect from the desktop.
Accessibility
Web 1.0 was primarily text based, so the main accessibility issues were easy to fix: stuff like text as images and tables for layout, which both affect screen-readers used by the blind.
As the content of the web gets richer (more images, video and audio), the chances get larger that someone will be excluded from it. Moreover, making video and audio accessible is much harder than making text or images accessible, so it's much less likely to be done.
Lastly, Web 2.0 introduced a whole new problem for accessibility: dynamic content. How should screen-readers (for example) deal with new content appearing on a page after an AJAX query? WAI-ARIA aims to address these issues, but it still requires the web designer to implement it.
Hope this was useful.
There are plenty, as I see it:
Different screen resolutions.
Different hardware capabilities. (mobile; touch; desktop; laptop; soon orientation too.)
Localized content.
Location based.
With HTML5 upcoming: hardware acceleration, native APIs, localStorage, offline support.
