I am developing an application for a customer where I would need to get live rates for EUR/USD, XAU/USD, etc.
After a quick Google search, I concluded that I won't be able to get what I am looking for. The few clean JSON data solutions are updated once per hour and allow around 100,000 requests per month ( far fewer than the ~2.6 million seconds a month has ).
Surprisingly for me, even these "poor" services cost several hundred bucks per month.
I am not opening this just to complain, but to ask whether somebody could help me export data from a MetaTrader Terminal to XML, or any other format; any broker provides that info, updated every second, on demo accounts without delays.
With that I would be able to create my own API and use that data in my application. If that works correctly, I am willing to create an endpoint, open to anyone; it's a shame that in 2017 there are no free live feeds for at least the main markets. I am sure it would help many developers.
FOREX? Each data-feed is Broker / LP-provider specific ( i.e. different )
There is no "globally universal" XAUUSD stream, there are many streams, valid for respective Broker / LP-provider Terms & Conditions.
While technically doable ( republishing the received stream of events ), some T&Cs will explicitly, legally bind you not to re-distribute the data from any feed you receive from the FOREX Trade Execution venue.
Why so expensive?
A principal business understanding is needed here. If one produces some service ( a Market Data Feed in this case ), such an undertaking has to be justified by some type of exchange of value, if one wishes to receive someone else's product.
Except for some extremely altruistic exceptions ( which are typically very well funded indirectly, from some other business-domain, thus not requiring the Product / Service Consumers to share and pay the fair portion of the accumulated costs of that Product / Service creation + operations + maintenance ) and except for some ultra-idealistic illusions ( that did not happen to prove their ability to survive on their own, in the real world ),
there is no such thing as a free dinner ( as Britons are cited to say ).
So, given some Product / Service represents any non-zero value, it is fair to pay some price to exchange one's funds for such a Product / Service, which allows the Consumer to benefit from the value received in exchange for her/his funds.
Meaning, if such a Product / Service is not considered absolutely useless ( as a synthetic RNG or a demo account just mimicking a Real Market Data Feed might appear in the real world's FOREX Market Data Feed context ), then there is a price to pay for it.
Surprised? Why?
If one pays nothing, why would one expect to receive anything more than a symmetrically equivalent value of that zero paid ( a value of zero, or less, if costs are taken into account )?
Trying to exchange for zero typically brings nothing useful or reliable - on principle.
The world just works this way. There is no exception, no excuse. Even the "non-profit" NPOs know very well that the real world works this way, and they share the price to pay in exchange for the externally provided goods and services they consume inside their NPO programmes.
You can open a demo or real ( it does not matter ) account at any MT4 broker and consume quotes natively using the MetaQuotes datafeed.
Have a look at an example published at: https://github.com/CPlugin/CPlugin.PlatformWrapper.MetaTrader4DataFeed
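If you do go that route, serving the collected quotes through your own endpoint (as the question intends) can be a very small piece of code. Below is a minimal sketch, assuming some MT4-side script (not shown here) keeps rewriting a quotes.csv file with symbol,bid,ask,timestamp rows; the file name, the CSV layout, and the port are all assumptions made for illustration only.

    # Minimal sketch: serve the latest quotes that an MT4-side script (hypothetical,
    # not shown) writes to a CSV file as JSON over HTTP. Stdlib only.
    import csv
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    QUOTES_CSV = "quotes.csv"  # assumed layout per row: symbol,bid,ask,timestamp

    def latest_quotes():
        quotes = {}
        with open(QUOTES_CSV, newline="") as f:
            for symbol, bid, ask, ts in csv.reader(f):
                quotes[symbol] = {"bid": float(bid), "ask": float(ask), "time": ts}
        return quotes

    class QuoteHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(latest_quotes()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), QuoteHandler).serve_forever()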
I am using rtfxd.com to get high quality forex data at 1 sec frequency. They cover over 40 currency pairs and the price is currently peanuts. You can purchase a login (currently only $3) and connect via ssh and start receiving data instantly.
Related
In our company, we want to develop a simple feature flag system to work alongside our API Gateway. Since we expect no more than 100 req/s on the system, we decided that third-party solutions would probably be costly (as they require time and effort to understand), and it would also be a fun effort to build it on our own. Here are the requirements:
We have the list of users and the latest app version they are using. We want to apply three kinds of policies to their usage of some application feature:
Everybody can use the feature
Only users with a version higher than a given version can use the feature
Some random percentage of users can use the feature (this is for A/B testing in our future plans)
These policies are dynamic and can change (maybe we want to remove the policy or edit it in the future)
So, the first problem is how to store the policies?
I think we can describe each policy with at most three fields: feature, type, and specification. So, for instance, something like A 2 1.5 means that feature A is only active for users on version 1.5 or above, A 3 10 means feature A is active for a random 10 percent of users, and finally A 1 means that feature A should be active for all users.
Although we can describe a policy in a meaningful manner this way, I can't see how I can use these in a system. For instance, how can I write these policies in an RDBMS or in Redis?
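To make the first problem concrete, here is a small sketch of one possible relational layout for those policies, using Python's stdlib sqlite3; the table name, column names, and the choice to key on the feature are assumptions, not the only way to do it. A Redis equivalent could simply be one hash per feature (e.g. HSET feature:A type 2 spec 1.5).

    # Sketch only: one possible relational layout for the (feature, type, spec)
    # policies described above, using the stdlib sqlite3 module.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE feature_policy (
            feature TEXT NOT NULL,     -- e.g. 'A'
            type    INTEGER NOT NULL,  -- 1 = everyone, 2 = minimum version, 3 = percentage
            spec    TEXT,              -- '1.5' for type 2, '10' for type 3, NULL for type 1
            PRIMARY KEY (feature)
        )
    """)
    conn.executemany(
        "INSERT OR REPLACE INTO feature_policy (feature, type, spec) VALUES (?, ?, ?)",
        [("A", 2, "1.5"),   # feature A only for versions >= 1.5
         ("B", 3, "10"),    # feature B for a random 10 percent of users
         ("C", 1, None)],   # feature C for everyone
    )
    conn.commit()
    print(conn.execute("SELECT * FROM feature_policy").fetchall())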
The second problem is how to apply these policies in the real world. In our architecture, we want each request to first visit this service, and then (if authorized) it can proceed to the core services. Ideally, we want to call an endpoint of this service and have it return a simple yes/no (boolean) response. For simplicity, let's assume we get a user id and a feature like 146 A (user 146 wants to use feature A), and we want to determine whether 146 is allowed to use it or not.
I thought of two approaches. The first is real-time processing: as a request comes in, we do a database call to fetch enough data to determine the result. The second approach is to precompute these conditions beforehand; for instance, when a new policy is added we can apply that policy to all user records and store the results somewhere, so that when a request comes in we can simply get the list of all features the user has access to and check whether the requested feature is in the list.
The first approach requires a lot of DB calls and computation for each request, and the second one is probably much more complicated to implement and needs to query all users whenever a new policy is added.
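For the second problem, the per-request ("real-time") check itself is cheap once the policy row has been fetched. Below is a hedged sketch of that evaluation, reusing the (type, specification) encoding from above; the version parsing and the stable-hash trick for the percentage rollout (so a given user keeps the same A/B assignment between requests) are my own additions, not something stated in the question.

    # Hedged sketch of the "real-time" evaluation path: fetch the policy, then
    # decide in memory whether the user may use the feature.
    import hashlib

    def version_at_least(user_version, min_version):
        as_tuple = lambda v: tuple(int(p) for p in v.split("."))
        return as_tuple(user_version) >= as_tuple(min_version)

    def in_percentage(user_id, feature, percent):
        # Stable per user+feature, so the same user always gets the same answer.
        digest = hashlib.sha256(f"{user_id}:{feature}".encode()).hexdigest()
        return int(digest, 16) % 100 < percent

    def is_allowed(user_id, user_version, feature, policy):
        ptype, spec = policy            # e.g. (2, "1.5") for "A 2 1.5"
        if ptype == 1:
            return True
        if ptype == 2:
            return version_at_least(user_version, spec)
        if ptype == 3:
            return in_percentage(user_id, feature, int(spec))
        return False

    # Example: user 146 on app version 1.6 asks for feature A (requires >= 1.5).
    print(is_allowed(146, "1.6", "A", (2, "1.5")))  # True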
These are our two main problems. I tried to simplify things so that it becomes a more generic problem. I would appreciate it if you could share your thoughts on each of them.
For my internship, I need to implement a blockchain-based solution to manage a drug supply chain. Managing this supply chain involves tracking and tracing (geolocating) a drug along the chain, but also monitoring the storage temperature to check whether the cold chain is respected. For that I also intend to use IoT, where a device will feed information to the blockchain solution. However, I have a few questions that I can't find answers to easily.
The first one is that I don't know whether I should use Ethereum or not, since each time a new block is added (the block representing the "real-time" update of the product's information) I will have to spend money. Is there any solution for that? Or do I need to create my own blockchain with JavaScript?
The second question is that I absolutely don't know where to begin in order to implement IoT on the blockchain. I searched on research sites, but they only talk about it without presenting any example...
The third one is more a confirmation than a question: I want to know whether my idea of using IoT to track and manage products in a supply chain can work at a wide scale, since the bigger a blockchain gets, the slower blocks are added because of the consensus mechanism. That means my "real-time" tracking may not truly be "on time", since there would be a waiting period before each block is added to the blockchain. If that time is just a few seconds to minutes, there is no problem, but because the number of blocks will increase rapidly due to the real-time tracking (I was planning 1 block each minute for every storage facility or transport vehicle), I worry that this scalability problem makes it impracticable.
Thanks in advance to anybody who helps me answer these questions.
#1) Whether you use Ethereum or another blockchain that may be better suited for this purpose is completely up to you. I expect you will get a lot of opinionated answers on this. Ethereum is certainly the most popular blockchain for a use like this, but that does not mean it is the best for your app. Over the last several years we have seen many new blockchains with lower/no fees, faster block times, and increased scalability. I would suggest doing some research into various "Supply Chain", "Enterprise", and "Business" blockchains, as these are likely the type you are looking for and will cost very little in blockchain fees due to them not being as widely used as Ethereum is.
#2) You will have to settle on a blockchain before you can start prototyping or looking for examples, as each one will be different. For storing "log" data for your application there are generally 2 options: storing data in a smart contract on Ethereum (or an Ethereum-like blockchain), or storing data in the OP_RETURN field of a transaction on Bitcoin or a Bitcoin-based blockchain. The latter is likely easier to get started with and simpler to understand: you just put the data in a transaction and send it (to yourself, even).
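As a rough illustration of the OP_RETURN approach (not tied to any specific chain or library), a common pattern is to keep the full sensor reading off-chain and anchor only a short hash of it in the transaction's data field; the field names below are invented, and the ~80-byte figure in the comment is an assumption about a Bitcoin-like chain's standard relay policy.

    # Rough sketch of preparing an IoT reading for an OP_RETURN-style data field:
    # store the full reading off-chain and anchor only a compact hash on-chain.
    import hashlib
    import json
    import time

    reading = {
        "batch_id": "LOT-2021-0042",    # hypothetical drug batch identifier
        "temp_c": 4.7,                  # cold-chain temperature sample
        "lat": 48.8566, "lon": 2.3522,  # geolocation of the shipment
        "ts": int(time.time()),
    }

    payload = json.dumps(reading, sort_keys=True).encode()
    digest = hashlib.sha256(payload).digest()  # 32 bytes, well under typical ~80-byte OP_RETURN limits
    op_return_hex = digest.hex()

    print("store off-chain :", payload.decode())
    print("anchor on-chain :", op_return_hex)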
#3) Yes, there are special-purpose blockchains created exactly for this purpose, meant to ingest large amounts of data and able to scale to meet the needs of an application like the one you describe. Some blockchains have block times of 1 minute or less, meaning that on average you could update the data every minute if you were willing and able to pay the blockchain fee to include the data in each new block (personally I would suggest a longer interval, such as every 5-10 minutes).
You can use Emercoin NVS technology and the Emercoin (testnet) blockchain to upload your data into it. By creating a "name label" with the name_new command, you can thereafter upload a chain of modifications to your name's value. The blockchain has the commands name_show (shows the most recently uploaded value) and name_history (shows the whole chain of uploaded values). You can view/debug your uploaded values in the Emercoin Testnet Explorer, NVS tab.
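For illustration only, a sequence like the following could drive those NVS commands from a script through the wallet's command-line client; the binary name, the name_update call, the argument order, and the lifetime parameter are assumptions on my part, so check the wallet's own help output before relying on them.

    # Illustrative sketch: calling the NVS commands mentioned above via the
    # Emercoin wallet CLI (binary name and argument order are assumptions).
    import json
    import subprocess

    def emc(*args):
        out = subprocess.check_output(["emercoin-cli", "-testnet", *args])
        return out.decode().strip()

    name = "supply:LOT-2021-0042"                                 # hypothetical record name
    emc("name_new", name, json.dumps({"temp_c": 4.7}), "30")      # create the name label
    emc("name_update", name, json.dumps({"temp_c": 5.1}), "30")   # append a modification
    print(emc("name_show", name))      # most recently uploaded value
    print(emc("name_history", name))   # the whole chain of uploaded values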
Regarding "use money".
I can give you (or anyone else) 100 test EMCs for free. Just write your tEMC address in the comments here. 100 tEMCs will be enough for ~100,000 records on the testnet, which I think is more than enough for your test tasks.
If you need to use your service in production, you need to use the "real blockchain" (with high trust), not the testnet. Anyway, EMCs are very cheap right now, and you can buy 100 EMCs for only ~$5. I think this is not a big deal for your organization.
Ask more questions, and I will be happy to assist you here with this technology.
Because you have a permissioned / private environment that does not transfer value or store value, a blockchain specialised in value transfer, like Ethereum, is not a good choice.
Choosing a blockchain that is specialised in untrusted value writes simplifies creating your product a lot. Some good choices include:
BigchainDB
HyperLedger SawTooth - comes with a stock supply chain example
I recently began using the YouTube Data v3 API for a program that I'm writing, purely for personal use. To give a brief summary of what it does, it checks the live chat from my most recent (usually ongoing) livestream and performs actions based on certain keywords entered in chat (essentially commands for people to use from live chat). In order to do that, however, I have to constantly send requests to get a refreshed live chat. As it is now, it sends requests at 1-second intervals. I recently did a livestream to test out my program, and it only took about 25 minutes for me to reach the daily quota limit of 10,000 units/day.
The request is: youtube.liveChatMessages().list(liveChatId=liveChatId, part="snippet")
It seems like every request I make costs 6 units, according to the math (10,000 units / 6 units per request ≈ 1,666 requests, which is under 28 minutes at one request per second). I want to be able to host livestreams of up to 3 hours, which would require a significant quota increase. I'm aware that there is an option to fill out a form to request additional quota. However, it asks for business information such as a business name, business website, business mailing address, etc. Like I said before, I'm doing this for my own use only. I'm in no way part of a business, and just made my program as a personal project. Does anyone know if there's any way to apply for additional quota as an individual/hobbyist? If not, do you think just putting n/a in those fields would be acceptable? I did find another post where someone else had the exact same problem, but no one was able to give a helpful answer. Any advice would be greatly appreciated.
Unfortunately, and although only tangentially related, it seems Google is in it for the money here. I also tried to do something similar myself (a very basic chat bot that just reads the chat messages), and although some other users on the net got somewhat different results, they all have one thing in common: following the docs on how it should be done, they all poll at an interval of about once a second (that's the timeout you get back as part of the response to a poll for new messages). I, along with a few others, got at most about 5 minutes when polling once a second; some others, like you, got a few more minutes out of it. I changed the interval by hand in increments of 5 seconds: 5, 10, 15, etc. - you get the picture. I can't remember which value I finally settled on, but I was only able to get about 2 1/2 hours with a rather long polling interval of just once every 10 seconds or so - still easily enough for a simple chat bot that only reads the chat. But replying as well would have at least doubled the usage and hence halved the time.
It's already a pain to get it working as an individual, as just setting up the required OAuth authentication requires one to provide at least basic information like a fixed callback URL and some legal and policy details. I always ended up having it rejected with the standard reply "Your project seems to be for internal use only." I was even able to get G Suite working (before it required payment) to set up an "internal" project (only possible if the account belongs to a G Suite organization account), but after I set up the OAuth login I got the error that the private account I wanted to use the bot on was not part of the organization and hence couldn't be used. TL;DR: just a useless waste of time.
Having been at this for several months now, I'd say there's just no way to get it done as a private individual for personal use. Yes, one can just set it up and have the required review rejected (as it uses the YouTube Data API scopes), but one is still stuck with that 10,000 units/day quota. Building your own powerful tool capable of doing more than just polling once every 10 to 30 seconds, with just a minimum of interaction, doesn't get you any further than a few minutes, maybe one or two hours if you're lucky. If you want more, you have to set up a business and pay for it. Simple and short: Google wants you to pay for that service.
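For what it's worth, the least painful way I found to stretch the quota was simply to respect (and pad) the interval the API hands back. Below is a rough sketch of such a polling loop, assuming the google-api-python-client youtube object from the question is already built and authorised; the pollingIntervalMillis and nextPageToken fields do come back in the list response, but the extra 10-second floor is just the workaround described above, nothing official.

    # Rough sketch: poll live chat no faster than the API suggests, padded to a
    # minimum interval to stretch the daily quota. `youtube` is assumed to be an
    # already-authorised google-api-python-client service object.
    import time

    def handle_message(snippet):
        print(snippet.get("displayMessage", ""))   # your keyword/command logic goes here

    def poll_chat(youtube, live_chat_id, min_interval_s=10.0):
        page_token = None
        while True:
            response = youtube.liveChatMessages().list(
                liveChatId=live_chat_id,
                part="snippet",
                pageToken=page_token,
            ).execute()
            for item in response.get("items", []):
                handle_message(item["snippet"])
            page_token = response.get("nextPageToken")
            suggested = response.get("pollingIntervalMillis", 0) / 1000.0
            time.sleep(max(suggested, min_interval_s))  # pad the interval to save quota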
As Mixer has officially been announced to shut down on July 22nd, you have exactly these two options:
Use one of the publicly available services like Streamlabs, Nightbot, etc. They're backed by their respective "businesses" and therefore don't seem to have those quota limits (although I just found some complaints about Streamlabs from April - about one month before you posted this question - where they admitted to having reached their limits; I don't know whether they have solved it yet).
Don't use YouTube for streaming, but rather Twitch - Twitch doesn't have these limits, and anybody is free to set up an API token either on the main account or on a second bot account (which is also explicitly explained in their docs). The downsides are of course the objective sacrifices one has to accept: a) viewers only get the streamer's source quality until one reaches at least affiliate status, b) capped at max 1080p60 with only 6,000 kbit/s, c) only short-term VOD storage.
I myself wanted to use YouTube as my main platform (and currently do, but without my own tooling at the moment), including my own bot and such, as streaming on YouTube has some advantages over Twitch; but as YouTube wants me to pay for what others (namely Twitch) offer for free (although overall not at the same quality), it's an easy decision to make. Mixer looked promising, as it also offered quite a few neat features (overall better quality than Twitch, lower latency), but the requirements to get partner status were very high (2,000 followers along with another insanely high number to reach), and Mixer itself was just a tiny platform (I took the time to count all the streamers and viewers: only a few hundred streamers and a few tens of thousands of viewers; the whole platform had fewer viewers than some big Twitch channels on their own) - and now it's been announced that it will soon be shut down anyway.
Hope this gives you some insight into what a small streamer has to consider and put up with when choosing a platform. After everything I experienced, my advice is: either do it like all the others - stream on Twitch and use YouTube as an archive to export to from Twitch (although Twitch STILL doesn't have auto-export of the latest VOD implemented, but I guess that could be done with a small script) - or, if you want to stay on YouTube, use an existing bot like Nightbot or one of the other services like Streamlabs.
If you get any other information on how to convince Google to increase the limit as an individual please let us know.
Valence allows doing this, but I wondered whether there are limitations to automating user account setup and enrollment in this way. We're a relatively small institution, with perhaps hundreds of enrollments in a given term scattered over several weeks, so I don't think there'd be a problem with real-time events. But I wondered what the implications might be for a larger university that might have thousands of enrollments updating all the time. There would certainly be spikes of activity as a term reached its official start.
The issue of performance is a complex one, with many inter-dependent factors. I can't comment on the impact of the hardware hosting your back-end LMS, but obviously, higher-performance hardware and network deployment will result in higher-performance interaction between clients and the LMS.
If your needs are "hundreds of creations scattered over several weeks", that is certainly within comfortable expected performance of a back-end service.
At the moment, each user-creation request must be done separately, and if you want to pre-provide a password for a user, then it takes two calls (you can, in one call, nudge the LMS to send a "password set" email to the new user, but if you want to manually set the password for the user, you have to do that in a separate call after the user record gets created). The individual calls are pretty light-weight, being a simple HTTP POST with response back to provide the created user record's details.
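For illustration, the two-call sequence might look roughly like the sketch below. The route paths, payload fields, and version segment are assumptions modelled on typical Valence (Brightspace) Learning Platform routes, and the ID/key request signing that Valence requires is omitted entirely, so treat this as the shape of the flow rather than a working client.

    # Loose sketch of the two-call sequence: create the user, then set a password.
    # Routes, fields, and the API version are assumptions; Valence request signing
    # is omitted for brevity.
    import requests

    BASE = "https://lms.example.edu"   # hypothetical LMS host
    VER = "1.0"                        # hypothetical Learning Platform API version

    def create_user_with_password(session, user_data, password):
        # Call 1: create the user record.
        created = session.post(f"{BASE}/d2l/api/lp/{VER}/users/", json=user_data)
        created.raise_for_status()
        user_id = created.json()["UserId"]

        # Call 2: set the initial password for the newly created user.
        pw = session.put(f"{BASE}/d2l/api/lp/{VER}/users/{user_id}/password",
                         json={"Password": password})
        pw.raise_for_status()
        return user_id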
There are roadmapped plans to expand support for batch operations, and batched user-creation is certainly a logical possibility for improvement.
General Problem
How do retail establishments constrain activation for gift cards, or those pre-paid phone/debit cards?
They must have a system in place that keeps you from calling in to activate cards that haven't been scanned through the register, and I assume there must be a standard solution built into the retail ERP/accounting systems. It probably involves web services or EDI.
Specific Problem
I ask all this because one of my clients wants me to develop a product that you get into by purchasing a $30 card at a retail store. The card has a unique number on it. Once you've purchased a card and activated it via a web site, coupons for restaurants and so on are emailed to you periodically.
However, if someone were to steal a bunch of cards or figure out the numbering sequence, we don't want the cards to work.
Presumably, this is a solved problem because retailers are doing this with the products above (pre-paid phone cards, etc).
I can think of a number of ways to solve this problem, however I need to provide the "standard" solution that the retailers expect, so that the product will snap into their infrastructure in the normal way.
Thanks a lot!
I've worked on a few of these types of systems and they all basically work the same way. The card has a # encoded into the magnetic strip (it could also be a barcode). That's usually all that's on the card itself. Cards are then activated at time of purchase.
Here's the basic flow:
Customer comes in and purchases a card:
The card is swiped and/or scanned.
A call is made to an on-line system (usually via some type of webservice call). It includes the card #, the amount they are activating it with, maybe a bit of additional information (e.g. invoice #), and possibly something like the previous transaction #.
If the call is successful, you get back a transaction ID #.
If the call fails, there is usually some protocol you are supposed to follow (sometimes handled during the daily settlement process). Things like retrying the activation, or running a query to determine if the last transaction went through.
If it was successful, the card is now active.
So basically, the card is worthless until it's activated. At that point it becomes "live" and has money associated with it. That is, back on some server somewhere is a database that has this card #, when/where it was activated, amounts, etc.
There is usually some functionality to generate an "end of day" transaction report to help you reconcile your numbers (what your system says vs what they have recorded).
Since cards are centrally managed it becomes easy for them to flag cards if they were stolen (not that it matters since they have $0 value until they have been activated).
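Pulled together, the activation call from the flow above might be wrapped like the hedged sketch below; the URL, field names, and response shape are invented for illustration, since every processor defines its own webservice contract and failure protocol.

    # Not a real gateway API -- just a sketch of the activation call outlined above:
    # send card #, amount and invoice #, get back a transaction ID, retry on failure.
    import requests

    ACTIVATION_URL = "https://processor.example.com/cards/activate"  # hypothetical endpoint

    def activate_card(card_number, amount_cents, invoice_no, retries=2):
        payload = {"card": card_number, "amount": amount_cents, "invoice": invoice_no}
        for attempt in range(retries + 1):
            try:
                resp = requests.post(ACTIVATION_URL, json=payload, timeout=5)
                resp.raise_for_status()
                return resp.json()["transaction_id"]   # card is now live
            except requests.RequestException:
                if attempt == retries:
                    # Follow the processor's failure protocol, e.g. queue the card
                    # for the daily settlement / last-transaction inquiry.
                    raise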
I found out through other sources that there are about eight card processing services that integrate with the various retail locations.
Each retail location uses one. When a card scans through the register, the retailer notifies the card processing service (unlocking the PIN so that it can be activated), and then presumably the card processing service notifies us via an API call.
Then, when the customer goes to activate their card, we can tell which ones have scanned through the register (because they are unlocked). In this way, we get around problems surrounding stolen cards or guessed pin numbers.
The names of a few of these networks are:
Blackhawk Networks
InCom
Coin Star
I had the joy of working on one of these systems right out of college. Depending on how they handle their processing - end-of-day batch versus weekly report - it could cause quite a lot of problems. One of the things I saw was that if the person who had the card, whether legitimately or not, managed to make a bunch of purchases that were each less than the day's starting balance, but by the end of the day added up to more than the starting balance, all the purchases would go through. Not very fun when the company had to swallow upwards of 100 dollars per user per day.
In terms of security, make the company that you interface with responsible for purchases. From what I have seen, that is the best way of handling this, because that is what they are there for. Hope that helps in some roundabout way.
You have to be careful; I have seen this play out in retailers: the customer in front of you is asked by the cashier whether they have a loyalty card. The customer says no, but the customer behind them offers their card and it gets swiped, collecting points for something they did not buy...
They thus get points at the expense of the customer who has no card, and it skews/distorts the record of that cardholder's personal shopping history in the system's database with items they did not actually buy.
In short, there is no foolproof way of getting around that other than asking for a retina scan or fingerprint that identifies the customer. Some customers would be cautious about joining a club because of their privacy... that is another thing to keep in mind... not all of them will have a loyalty card...
Hope this helps,
Best regards,
Tom.
I believe that the most secure solution possible is to have a server that generates and prints (or exports) the card numbers. When a customer is interested in purchasing a gift card, it is scanned at the register and the register notifies the server that the card has been approved (probably with the credentials of the cashier).
Then when input on your website, the website checks with the card server to see if the card number is valid and approved.
Then, stolen cards are not approved. If someone figures out the numbering scheme, you may be screwed, so it is recommended that the numbers be random, with enough digits to make guessing them unreasonable (perhaps combined with something similar to a CV2 code).
This is similar to how debit cards work: card number/CV2 generated ("server") -> shipped to customer -> customer activates via phone ("register", with the "credentials" being their SSN or similar) -> customer then uses at a store and the store contacts the card company's server
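As a small sketch of the "unguessable number plus short verification code" part of this answer, Python's secrets module is enough; the lengths and the approval flag are arbitrary illustrative choices.

    # Sketch: generate a card number that is infeasible to guess, plus a short
    # CV2-like code, using the stdlib secrets module.
    import secrets

    def new_card():
        number = "".join(secrets.choice("0123456789") for _ in range(16))  # printed on the card
        cv2 = "".join(secrets.choice("0123456789") for _ in range(3))      # separate short code
        return number, cv2

    # The card server stores (number, cv2, approved=False); the register flips
    # approved to True at purchase; the website checks number, cv2, and approval.
    print(new_card())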
I know that Intuit Quickbooks Point of Sale offers a service like this (complete with an API), you could look them up.
I like HalfBrain's solution. I'd also imagine they have certain pieces of security in mind, like a single IP address (or some other criterion) with more than some number of failed activation attempts getting flagged for apparently attempting to probe the system.