First, thanks everyone.
Background: I sell consumable items in my application, and users purchase them through in-app purchase (IAP).
The problem: the network connection drops before my application receives the transaction in the updatedTransactions callback. As a result, my server never gets the receipt data to verify, and the user never receives the virtual currency they paid for.
Could anyone tell me how to solve this problem, or give me some tips? Thanks very much.
It's the standard client-server problem. If the connection between client and server is severed (due to a timeout or other reasons), the common approach is to retry the request. But if your API calls are not idempotent, and calling an API multiple times affects the state of your system that many times, then you have to do something a bit more clever. Some options you have:
Have a local database. When a purchase happens, first update the state in your local DB, then lazily sync the DB from the client to the server; I hear Core Data and SQLite are excellent for this. The user is not aware of it, and since the DB is local the UI will be extra snappy for the user.
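To make that first option concrete, here is a rough sketch in Python with the standard sqlite3 module (purely for illustration; on an iOS client this would be Core Data or SQLite, and the table and column names are just made up for the example):

    import sqlite3

    db = sqlite3.connect("purchases.db")
    db.execute("""CREATE TABLE IF NOT EXISTS pending_purchases (
                    transaction_id TEXT PRIMARY KEY,   -- StoreKit transaction identifier
                    receipt        TEXT NOT NULL,      -- receipt data to verify server-side
                    synced         INTEGER DEFAULT 0)""")

    def record_purchase(transaction_id, receipt):
        # Persist locally first, so nothing is lost if the network drops right after payment.
        db.execute("INSERT OR IGNORE INTO pending_purchases (transaction_id, receipt) VALUES (?, ?)",
                   (transaction_id, receipt))
        db.commit()

    def sync_pending(send_to_server):
        # Call this on app launch / when the network comes back; send_to_server is your own upload function.
        rows = db.execute("SELECT transaction_id, receipt FROM pending_purchases WHERE synced = 0").fetchall()
        for tx_id, receipt in rows:
            if send_to_server(tx_id, receipt):
                db.execute("UPDATE pending_purchases SET synced = 1 WHERE transaction_id = ?", (tx_id,))
                db.commit()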
The second approach: in the case of a failed HTTP call, you keep retrying until the call succeeds.
In case the API is non-idempotent, you need the concept of a token: an API call carrying the same token, made multiple times, is first checked on the server side to see whether the initial call succeeded, and the work is executed again only if it failed. This is very important in banking solutions, for example. Imagine multiple debits from your bank account due to timeouts and a client programmed to keep retrying!
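A minimal sketch of the server side of that token idea, using the StoreKit transaction identifier as the idempotency key (Python and sqlite3 here purely for illustration; receipt verification against Apple is omitted, and the table name is made up):

    import sqlite3

    db = sqlite3.connect("server.db")
    db.execute("""CREATE TABLE IF NOT EXISTS credited_transactions (
                    transaction_id TEXT PRIMARY KEY,
                    user_id        TEXT NOT NULL)""")

    def credit_purchase(user_id, transaction_id, receipt):
        # 1. Verify the receipt with Apple here (omitted in this sketch).
        # 2. Idempotency check: if this transaction was already credited, do nothing.
        already = db.execute("SELECT 1 FROM credited_transactions WHERE transaction_id = ?",
                             (transaction_id,)).fetchone()
        if already:
            return "already credited"   # the client can safely retry this call any number of times
        db.execute("INSERT INTO credited_transactions (transaction_id, user_id) VALUES (?, ?)",
                   (transaction_id, user_id))
        # ...grant the virtual currency to user_id here, in the same DB transaction as the insert...
        db.commit()
        return "credited"

The client just keeps retrying until it gets one of those two responses; either way the currency is granted exactly once.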
This is all I am able to think of right now. Give it a spin and tell us what worked for you...
Our system has two servers: one (S1) runs processing and data storage (basically the DB), and the other (WS) is a web server.
There are two types of events that can happen in the system:
User A pings User B. In this case we check whether User B is logged in and push a notification to User B's client through SignalR. This works.
Services run constantly on S1 and generate new data that concerns multiple users. My goal is that as soon as new data important for User A is generated, a SignalR notification is immediately dispatched to User A's client, provided they are logged in.
It is not clear to me how to design this second part. My current thought is to start an indefinitely running process on the web server that monitors our database, checks whether new records were generated for a given user, and then pushes a SignalR message.
That would be fine, but we now have 10k users logged in, and I don't think running 10k threads to monitor activity is the right decision.
Basically, my question is: what is the proper way to design a SignalR-based notification mechanism driven by events that do not originate on our web server?
I would use a service bus or message queue, for example the free RabbitMQ: https://www.rabbitmq.com/
You can proxy the messages directly to the clients using this proxy library (I'm the author).
Docs here: https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/wiki
Demo https://github.com/AndersMalmgren/SignalR.EventAggregatorProxy/tree/master/SignalR.EventAggregatorProxy.Demo.MVC4
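As a rough sketch of the flow (Python and the pika RabbitMQ client here purely for illustration; the proxy library linked above is the .NET piece that would actually forward events to SignalR clients, and the queue and field names are made up; S1 and WS would each open their own connection, combined here only to keep the sketch short):

    import json
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue="user_notifications", durable=True)

    # On S1: whenever a service generates data that concerns a user, publish an event
    # instead of expecting the web server to poll the database for it.
    def publish_event(user_id, payload):
        channel.basic_publish(
            exchange="",
            routing_key="user_notifications",
            body=json.dumps({"user_id": user_id, "payload": payload}),
            properties=pika.BasicProperties(delivery_mode=2),  # persist the message
        )

    # On the web server (WS): a single consumer instead of 10k polling threads.
    def on_message(ch, method, properties, body):
        event = json.loads(body)
        # Look up the SignalR connection(s) for event["user_id"] and push the payload to them here.
        ch.basic_ack(delivery_tag=method.delivery_tag)

    channel.basic_consume(queue="user_notifications", on_message_callback=on_message)
    channel.start_consuming()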
You can also set up a SQL dependency that triggers a message to your SignalR clients:
http://techbrij.com/database-change-notifications-asp-net-signalr-sqldependency
This link is the one that I based my code on.
A couple of things to watch for in the setup of the query: you cannot use three-part table names; the two-part [dbo].[Case] form used below is the allowed style.
"SELECT [CMRID],
[SolutionID],
[CreateDT],
[ModifyDT]
**FROM [dbo].[Case]**
WHERE [ModifyDT] > " + LastExecutionDateTime;
Also, and this is very important, you MUST re-register the dependency and its event handler every time the dependency fires; if not, it will work the first time and then stop working.
I hope this helps you.
I am developing a new warehouse integration for the company I work for, as there was no existing solution.
I have gotten almost every feature to work, including fulfillment requests and stock requests, and I have even registered a carrier service for real-time shipping rates. However, for some reason I cannot get the /fetch_tracking_numbers call to fire from Shopify. According to the documentation:
"Once per hour Shopify will make a request to this endpoint if there are any completed fulfillments awaiting tracking numbers from the remote fulfillment service."
I have added logs to the endpoint so I can troubleshoot it, but it seems that Shopify never makes this call to the server.
If I visit the URL myself I can trigger the code (logs and all), but it doesn't look like Shopify ever does.
During the install I made sure to provide a valid callback URL (that's why fetch_stock works fine) and set the tracking support field to true, but still nothing.
One way to be sure would be to set a product variant's fulfillment service to your custom fulfillment company, then complete a bogus order for the product and fulfill it. Once you have fulfilled it, Shopify will poll your endpoint for orders and their tracking numbers. It works fine for me... but I am thinking maybe you're waiting for Shopify while nothing is actually fulfilled.
I am probably too late to answer this, but Shopify will make calls to this endpoint only if:
1) you have "completed" fulfillments, and
2) the tracking numbers for those completed fulfillments are pending.
You need to mark a fulfillment "complete" after shipping all of the order's line items.
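For what it's worth, a rough sketch of such an endpoint with request logging (Flask, purely for illustration; the order_ids parameter and the response shape are my recollection of the fulfillment-service docs, so double-check them against Shopify's documentation, and lookup_tracking is a hypothetical helper you would implement yourself):

    import logging
    from flask import Flask, request, jsonify

    app = Flask(__name__)
    logging.basicConfig(level=logging.INFO)

    @app.route("/fetch_tracking_numbers")
    def fetch_tracking_numbers():
        # Log every hit so you can tell whether Shopify is actually polling you.
        order_ids = request.args.getlist("order_ids[]")
        app.logger.info("fetch_tracking_numbers polled for orders: %s", order_ids)
        tracking = {order_id: lookup_tracking(order_id) for order_id in order_ids}
        return jsonify({"tracking_numbers": tracking,
                        "message": "Successfully received the tracking numbers",
                        "success": True})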
Hey, I'm developing an iOS application that communicates with an external web service to make various kinds of requests.
I'm aware of Murphy's law ("anything that can go wrong will go wrong"), and that made me think about timeouts. Currently my application does not handle the situation where a request actually completes on the server but times out on the client. How should I handle such situations?
Without cooperation from the service provider there's not a lot you can do. If your app sees a timeout, it cannot deduce from that whether the request actually completed or not: it could be that it worked and something in the infrastructure failed to deliver the response, or it could be that it failed and hence you saw no timely response.
There are some actions you can take that will help the user. I assume you have available the details of the request you attempted to send; your app should keep those locally. You are then in a position to do some useful things:
Some service authors allow you to safely submit the same request twice. So just resubmit: if it previously worked, the service will just say "yep, already done that, here are the details"; if not, it will just do the work as normal.
Some service authors allow you to query the status of a previous request, so you can determine what has been done and what has not (see the sketch below).
In some cases there is no IT-system way to deal with the problem, and the user will need to contact a help desk or call centre. Here, having the details of what was previously attempted can be very useful.
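As an illustration of the "query the status, then retry" option, here is a language-agnostic sketch written in Python for brevity; the URLs, the request_id field, and the status endpoint are hypothetical and depend entirely on what your service provider exposes:

    import requests

    def submit_with_recovery(payload, request_id):
        try:
            resp = requests.post("https://api.example.com/orders",
                                 json={**payload, "request_id": request_id}, timeout=10)
            return resp.json()
        except requests.Timeout:
            # A timeout does NOT mean failure: ask the service what actually happened first.
            status = requests.get(f"https://api.example.com/orders/{request_id}/status", timeout=10)
            if status.ok and status.json().get("completed"):
                return status.json()   # it did go through; do not submit it a second time
            # Provably not processed (or status unknown): retry, or hand the saved request
            # details to the user / help desk as described above.
            raise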
I have a model that sends an HTTP request to an external web service on creation, in order to look up some information to add before the record is saved.
Currently I'm doing this in a before_create callback. I recently learned that before/after callbacks happen within database transactions.
Am I opening myself up to any issues, such as limiting DB throughput, by doing this? Would it be better to commit the record before sending the HTTP request and then update the record when it returns?
As long as you keep a transaction open, all the locks it has acquired remain active. If you have a call to an external source that may stall you for a long period of time, be sure not to hold any unrelated locks in the same transaction.
In other words: don't put anything else into the same transaction.
If you don't mind the new row being visible before you look up the additional information, you might just commit and later update the row.
Or fetch the information from the external web service before you even start the transaction. That would be the cleanest/fastest solution for the database.
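To make the ordering explicit, a small sketch (plain Python with psycopg2 and the requests library rather than Rails, purely for illustration; the table, columns, and URL are made up): the external call happens before the transaction is opened, so no locks are held while waiting on the network.

    import psycopg2
    import requests

    # 1. Call the external web service first, outside any database transaction.
    info = requests.get("https://api.example.com/lookup", timeout=5).json()

    # 2. Then open a short transaction that only does the insert.
    conn = psycopg2.connect("dbname=shop")
    with conn:                       # the transaction starts here and commits on exit
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO things (name, extra_info) VALUES (%s, %s)",
                ("example", info.get("value")),
            )
    # Locks were held only for the duration of the INSERT, not the HTTP call.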
See also: PostgreSQL lock types, and how to view locks.
When a user completes an order at my online store, he gets an email confirmation.
Currently we're sending this email via Gmail (which we chose over sendmail for greater portability) after we authorize the user's credit card and before we show him a confirmation message (i.e., synchronously).
It's working fine in development, but I'm wondering if this will cause a problem in production. Will it require making the user wait too long? Will many simultaneous Gmail connections get us in trouble? Any other general caveats?
If sending the emails synchronously will be a problem, could someone recommend an asynchronous solution? (Is ar_mailer any good?)
The main issue I can think of is that Gmail limits the amount of email you can send daily, so if you get too many orders a day it might break.
As they say:
"In an effort to fight spam and prevent abuse, Google will temporarily disable your account if you send a message to more than 500 recipients or if you send a large number of undeliverable messages. If you use a POP or IMAP client (Microsoft Outlook or Apple Mail, e.g.), you may only send a message to 100 people at a time. Your account should be re-enabled within 24 hours."
http://mail.google.com/support/bin/answer.py?hl=en&answer=22839
I would recommend using sendmail on your server so you have greater control over what's going on and don't depend on another service, especially since sendmail is not really complicated to set up.
The internet is not as resilient as some people would have you believe: the link between you and Gmail will break at some point, or Gmail will go offline, causing the user to think that they have not paid successfully.
I would put some kind of queue in place; sendmail sounds acceptable, and you can't build your site now around where it 'might' be hosted in the future.
Ryan
If the server waits for the email to be sent before giving the user any feedback, then any problem connecting to the mail server (timeouts, server down, etc.) would make the user's request time out too, and they wouldn't be told anything about the status of their order. So I believe you should really do this asynchronously.
Also, you should check whether doing this is even allowed by Gmail's TOS; if not, check whether it is allowed under one of their paid subscriptions. There is also surely a limit on the number of outgoing emails you may send within a given timeframe, so if you expect your online store to be successful you may hit that limit and run into some nasty issues. If you're not self-hosting the site, check whether your host offers email servers (several plans include them for free); using your host's mail server would then be the most obvious choice.
FACT: Gmail crashes. Not often, but it happens, and you can't control it or test it.
The simplest quick fix is to start a separate thread or fork a subprocess to send the email. Yes, problems will likely arise from using Gmail, and I really have no input on that versus the alternatives. But from a design perspective, there's just no reason to make the user wait for that process to complete.
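A minimal sketch of that idea in Python (purely for illustration; in Rails you would reach for ar_mailer or a background-job worker instead, and the SMTP host and credentials below are placeholders):

    import queue
    import smtplib
    import threading
    import time
    from email.message import EmailMessage

    outbox = queue.Queue()

    def mail_worker():
        while True:
            msg = outbox.get()
            try:
                with smtplib.SMTP("smtp.gmail.com", 587) as smtp:
                    smtp.starttls()
                    smtp.login("store@example.com", "app-password")  # placeholder credentials
                    smtp.send_message(msg)
            except Exception:
                time.sleep(30)        # crude back-off; log this loudly in a real setup
                outbox.put(msg)       # put it back and try again later
            finally:
                outbox.task_done()

    threading.Thread(target=mail_worker, daemon=True).start()

    def queue_confirmation(to_addr, order_id):
        msg = EmailMessage()
        msg["From"], msg["To"] = "store@example.com", to_addr
        msg["Subject"] = "Order %s confirmed" % order_id
        msg.set_content("Thanks for your order!")
        outbox.put(msg)               # returns immediately; the user never waits on Gmail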
From a testing perspective, this might be where a proxy pattern comes in handy. It might be easy for you to directly invoke Gmail to send a message. Make it harder: put in a proxy object that does the mailing for you and that you can turn off (because heaven knows you can't make Gmail crash for testing purposes). Then make your team watch what happens in the event of an email malfunction by turning off the proxy and trying to complete an order. If you are doing it synchronously, all the plagues mentioned here by other posters will rear their heads. If you are doing it asynchronously, you should be able to let it fail silently (from the user's perspective; from your perspective there should be enormous logging statements, text messages in the middle of the night, and possibly a mild electric current arcing across the surface of someone's skin).