Bloomberg real-time data with lot sizes

I am trying to download real-time trading data from Bloomberg using the API.
So far I can get bid / ask / last prices successfully, but on some exchanges (like Canada) quote sizes are given in lots.
I can of course query the lot sizes with the reference data API and store them for every security in a database or something like that, but converting the size for every quote tick is a very "expensive" operation, since ticks arrive every second and sometimes more often.
So is there any other way to achieve this?

Why do you need to multiply each value by the lot size? As long as the lot size is constant, each quote is comparable, and any computation can be implemented using the exchange values. Any results can be scaled in a presentation layer if necessary.
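If you do end up needing absolute share counts, one low-cost approach is to fetch the lot size for each security once via the reference data service, cache it in memory, and apply the multiplication only in the presentation layer rather than on every tick. A minimal TypeScript sketch (the field name ROUND_LOT_SIZE and the helper fetchReferenceField are placeholders, not verified Bloomberg mnemonics):

// Cache of lot sizes, filled once per security from the reference data
// service so that nothing is requested on the tick path.
const lotSizes = new Map<string, number>();

// Placeholder for a one-off reference data request (e.g. via //blp/refdata).
// Replace with your real lookup; the field name is an assumption.
async function fetchReferenceField(security: string, field: string): Promise<number> {
  return 100; // many North American equities trade in board lots of 100
}

async function getLotSize(security: string): Promise<number> {
  const cached = lotSizes.get(security);
  if (cached !== undefined) return cached;
  const lot = await fetchReferenceField(security, "ROUND_LOT_SIZE");
  lotSizes.set(security, lot);
  return lot;
}

// On the tick path, keep and compute with the raw exchange values;
// scale only when a size is actually shown to the user.
function displaySize(security: string, rawQuoteSize: number): number {
  return rawQuoteSize * (lotSizes.get(security) ?? 1);
}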

Related

AlphaVantage API Technical Indicators: Do they use only information of the past?

I am writing because I found no public documentation or code that resolves this doubt. I have been using the AlphaVantage APIs for a project on stock market prediction with machine learning. I use many of the technical indicators in the AlphaVantage library, and many of them operate on rolling sequences (windows) of data points (e.g. moving averages).
However, many financial libraries tend to update the values they previously computed for some of these indicators, by using windows that retain future information relative to the point in time the indicator refers to. Obviously, that would represent "hidden" information that a predictive system relying only on past or present information, like mine, should not have access to.
Hence, I was wondering whether this is also the case for the AlphaVantage library. I manually checked many indicators for the same stock (and repeated the process for many stocks) several days apart, and I did not find any inconsistencies in the values for the common dates (the only difference being that the more recent versions of those technical indicators have new points, reflecting the newer evolution of the price).
I would be very pleased if anybody could help me resolve this.
Most indicators use a look-back window of quote values, including the current price, to calculate current indicator values. Many also include previously calculated indicator values as a basis for current indicator values. A few even recalculate older indicator values based on new price information.
For this last scenario, looking at the AlphaVantage library, I don't see any indicators there that would recalculate older values based on newer data. If you're seeing indicator values change, it's probably due to revisions or updates to the underlying quote history.
I have a rather large .NET library of indicators, so I’m familiar with which kinds behave that way, due to the mathematics.
Some examples of indicators with retroactive recalculation are ZigZag and Williams Fractal. The reason they do this is that they find local high and low points, which can't be verified without several confirming bars of data. In other words, you cannot mark a high point until several lower bars occur thereafter.
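To make that concrete, here is a minimal TypeScript sketch (not taken from any particular library) of a Williams Fractal high: the value at bar i depends on the two bars after it, which is exactly why such indicators appear to revise recent values as new data arrives.

// A simplified Williams Fractal high at index i: the high is strictly
// greater than the two highs before it and the two highs after it.
// Note the dependence on bars i+1 and i+2 -- the fractal cannot be
// confirmed until two further bars exist.
function isFractalHigh(highs: number[], i: number): boolean {
  if (i < 2 || i > highs.length - 3) return false; // not enough confirming bars yet
  return (
    highs[i] > highs[i - 1] &&
    highs[i] > highs[i - 2] &&
    highs[i] > highs[i + 1] &&
    highs[i] > highs[i + 2]
  );
}

// The high at index 2 only becomes a fractal once indices 3 and 4 exist.
console.log(isFractalHigh([1, 2, 5, 3, 2], 2)); // true
console.log(isFractalHigh([1, 2, 5, 3], 2));    // false: not yet confirmable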

Does highcharts (or vaadin) have built-in abilities to use an algorithm like Ramer–Douglas–Peucker?

Using Vaadin's charts (which ultimately use Highcharts), I'm trying to plot a line graph with over 10,000 points. It actually works reasonably quickly (a couple of seconds to plot). However, I wonder if it can be much faster: I came across a similar problem when using JavaFX charts and discovered that people have implemented a solution using the Ramer–Douglas–Peucker algorithm to reduce the number of data points in such a manner that the reduction is barely noticeable to the human eye when the data is graphed. (Here's the original answer from SO: Performance issue with JavaFX LineChart with 65000 data points.)
So, does highcharts already have such built in functionality? If not, does Vaadin? Or, do I need to recreate this logic in Vaadin, which means I ultimately need to find a Java library for the Ramer–Douglas–Peucker algorithm....
Unfortunately, Highcharts doesn't have the Ramer–Douglas–Peucker algorithm built in. However, it has a Boost module which allows rendering thousands of points in milliseconds.
The Boost module allows certain series types to be rendered by WebGL instead of the default SVG. This allows hundreds of thousands of data points to be rendered in milliseconds. In addition to the WebGL rendering, it saves time by skipping the processing and inspection of the data wherever possible.
API reference:
https://api.highcharts.com/highcharts/boost
Demo:
https://www.highcharts.com/demo/line-boost
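For reference, enabling boost is mostly a matter of loading the module and, if you want it to kick in for a single series, setting a threshold. A minimal sketch (the container id and data are placeholders; option names are from the boost documentation linked above):

import Highcharts from 'highcharts';
// Depending on your Highcharts version you may instead need the factory form:
//   import boost from 'highcharts/modules/boost'; boost(Highcharts);
import 'highcharts/modules/boost';

const points = Array.from({ length: 10000 }, (_, i) => [i, Math.sin(i / 100)]);

Highcharts.chart('container', {
  boost: {
    useGPUTranslations: true, // do coordinate translation on the GPU
  },
  series: [{
    type: 'line',
    data: points,
    boostThreshold: 1, // boost this series regardless of its length
  }],
});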
What's more, using Highstock you can use dataGrouping.
Data grouping is the concept of sampling the data values into larger blocks in order to ease readability and increase performance of the JavaScript charts.
API reference:
https://api.highcharts.com/highstock/series.line.dataGrouping
Demo:
https://www.highcharts.com/stock/demo/data-grouping
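And a minimal Highstock sketch of dataGrouping (the grouping units are just an example; see the API reference above for the full set of options):

import Highcharts from 'highcharts/highstock';

// One point per minute for roughly a month of data.
const start = Date.UTC(2020, 0, 1);
const data = Array.from({ length: 43200 }, (_, i) => [start + i * 60000, Math.random()]);

Highcharts.stockChart('container', {
  series: [{
    type: 'line',
    data,
    dataGrouping: {
      enabled: true,
      approximation: 'average', // how the points in each block are aggregated
      units: [['minute', [1, 15]], ['hour', [1]], ['day', [1]]],
    },
  }],
});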
With most of the chart types, when you use Vaadin Charts it actually makes more sense to apply an algorithm that reduces the number of data points in your server-side logic, e.g. when copying data from the original data source to the DataSeries you use in the chart. This not only reduces rendering time, but also the time needed to load the data into the browser.
I would also recommend calculating a linear regression as an additional two-point DataSeries on the server side, if such a thing is needed.
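If you do decide to thin the data (server side for Vaadin, or in the browser), the Ramer–Douglas–Peucker algorithm itself is only a couple of dozen lines. A TypeScript sketch that ports directly to Java, where epsilon is the largest deviation (in data units) you are willing to discard:

type Point = { x: number; y: number };

// Perpendicular distance from p to the line through a and b.
function perpendicularDistance(p: Point, a: Point, b: Point): number {
  const dx = b.x - a.x;
  const dy = b.y - a.y;
  const len = Math.hypot(dx, dy);
  if (len === 0) return Math.hypot(p.x - a.x, p.y - a.y);
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer–Douglas–Peucker: recursively keep only the points that deviate
// from the straight line between the endpoints by more than epsilon.
function rdp(points: Point[], epsilon: number): Point[] {
  if (points.length < 3) return points;
  const first = points[0];
  const last = points[points.length - 1];
  let maxDist = 0;
  let index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpendicularDistance(points[i], first, last);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [first, last]; // everything in between is "close enough"
  const left = rdp(points.slice(0, index + 1), epsilon);
  const right = rdp(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right); // drop the duplicated split point
}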

PHAdjustmentData size limitation

Since the adjustment data used when saving a PHAsset can be completely user-defined, are there any memory size limitations for it?
For instance, can I store something like masks or layers (so basically multiple bitmaps) in it?
The documentation for PHAdjustmentData.init notes (emphasis mine):
Because Photos limits the size of adjustment data, you should keep your edit information short and descriptive. Don’t use image data to describe an edit—instead, save only the minimal information that is needed to recreate the edit.
Your app must provide a non-empty NSData object for the data parameter. If you cannot provide relevant data to describe an edit, you may pass data that encodes an NSUUID object.
In other words, adjustment data isn't for Photoshop-style files that encode an edit in terms of the actual new pixels. Remember, the main idea for adjustment data is that you can use it to revert, reconstruct, and alter whatever work was performed in the last user edit.
Adjustment data is most commonly used for saying things like "blur filter # 20px + darken by 20% + crop to (100,100,300,400)". For more complicated edits, you'll have to get more creative — for example, for an effect that the user paints on with a brush, you can probably record the brush strokes (plus brush radius and any other parameters) in a lot less data than you'd use to store bitmaps.
And failing all that, if you have edits that can only be described using data too large to fit into adjustment data, notice that tip Apple left about using a UUID — you can store your data externally to Photos, and use adjustment data to store a key that lets you look up an edit in your external storage. (Of course, you'll then have to make sure that your external storage is in all the places that iCloud Photo Library syncs to, and have a way to fall back gracefully if you can't access it, etc...)
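To make the contrast concrete, a hypothetical, purely illustrative edit description for a brush-painted effect might look like the object below before being serialized into the NSData you hand to PHAdjustmentData. It is sketched here as a TypeScript object for illustration; in an iOS app you would build the equivalent Swift structure and encode it, and the format is entirely app-defined, not anything Apple specifies.

// Hypothetical, app-defined edit description: parameters and brush strokes,
// not pixels. Serialized (e.g. as JSON or a property list) it stays tiny,
// whereas storing the painted mask as a bitmap would not.
const editDescription = {
  version: 1,
  filters: [
    { name: 'blur', radiusPx: 20 },
    { name: 'darken', amount: 0.2 },
  ],
  crop: { x: 100, y: 100, width: 300, height: 400 },
  brushStrokes: [
    { radiusPx: 12, points: [[0.21, 0.40], [0.23, 0.41], [0.26, 0.43]] }, // normalized coordinates
  ],
};

const serialized = JSON.stringify(editDescription); // a few hundred bytes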
Oh, and as for what the upper limit is... it's possible that Apple doesn't publish the limit because it might be subject to change across devices, iCloud account status, OS versions, or some other factor. So even if you've found a limit experimentally, it might not always hold true.
I did a few tests and it seems 2 MB is the answer. It would be good to have an official statement, though...

Reverse geocoding services

I'm working on a project that returns information based on the user's location. I also want to display the user's town in text (no map) so they can change it if it's not accurate.
If things go well I hope this will be more than a small experiment, so can anyone recommend a good reverse geocoding service with the least restrictions? I notice that Google/Yahoo have a limit to the number of daily queries along with other usage terms. I basically need to take latitude and longitude and convert them to a city/town (which I presume cannot be done using the HTML5 Geolocation API).
Geocoda just launched a geocoding and spatial database service and offers up to 1K queries a month free, with paid plans starting at $49 for 25,000 queries/month. SimpleGeo just closed their Context API so you may want to look at Geocoda or other alternatives.
You're correct, the browser geolocation API only provides coordinates.
I use SimpleGeo a lot and recommend them. They offer 10K queries a day free, then US$0.25 per 1K calls after that. Their Context API is what you're going to want; it pretty much does what it says on the tin. It works server-side and client-side (without requiring you to draw a map, as Google does).
GeoNames can also do this and allows up to 30K "credits" a day; different queries expend different credit amounts. The free service has highly variable performance, while the paid service is more consistent. I've used them in the past, but I don't use them much anymore because of the difficulty of automatically dealing with their data, which is more "pure" but less meaningful to most people.
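As a concrete example, GeoNames has a findNearbyPlaceName endpoint that maps coordinates to the nearest populated place. A minimal sketch (you need to register a free username, and the exact response fields are worth double-checking against the GeoNames docs):

// Reverse-geocode a coordinate to a town name with GeoNames.
// Register a username at geonames.org and substitute it below.
async function townFromCoords(lat: number, lng: number): Promise<string> {
  const url =
    'https://secure.geonames.org/findNearbyPlaceNameJSON' +
    `?lat=${lat}&lng=${lng}&username=YOUR_GEONAMES_USERNAME`;
  const response = await fetch(url);
  const body = await response.json();
  const place = body.geonames?.[0];
  return place ? `${place.name}, ${place.countryName}` : 'unknown';
}

townFromCoords(45.4215, -75.6972).then(console.log); // e.g. "Ottawa, Canada"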

Tools for mapping time series data

I'm looking for suggestions/examples of tools or APIs that enable the mapping of large amounts of time series data into an intensity map.
The data includes dimensions for country, series, and year. Here's an example http://spreadsheets.google.com/ccc?key=t9ZwziZAgy768ZTXDEg8Maw&authkey=CPn0pdoH&hl=en_GB&ui=1
UUorld is a good choice if you want to create videos that show data changing over time. They're heavy on the 3-D, but I found some examples of what appear to be 2-D intensity maps in the gallery. The trial version is free and does not expire.
For static images, I love indiemapper. It's very simple to use and has beautiful color palettes and typography options. It also has 16 different map projections, if you're into that. The free trial is 30 days.
The caveat with these (and other mapping software) is that you may have to convert your data into a certain format, depending on what it is now. For example, indiemapper takes shapefiles, KML, and GPX as input.
Try GeoCommons. You would need to reformat your spreadsheet a bit, but once you get it in there you can join it to country boundaries, create an interactive temporal map, and embed it wherever you want. Everything is web-based, so there is no need to download anything.
