I have started using IB (Interactive Brokers) in combination with IBridgePy, and I was wondering whether it is possible to somehow perform backtests. Does anyone know how to do this?
IB doesn't have a ready-made backtesting/replay tool. Basically, you have to download historical data and run it through your strategy.
IB doesn't offer a real backtesting environment for its Python API, so you have to build your own. Split it into two steps: step 1 is collecting the historical data, and step 2 is feeding your strategy with that data.
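As a sketch of those two steps: assume step 1 is already done and the historical bars sit in a plain list; step 2 is then just a replay loop. Everything below (the `Bar` type, the toy moving-average crossover strategy) is illustrative and not part of IBridgePy or the IB API:

```python
# Minimal backtest replay loop: feed historical bars through a strategy,
# track cash and position, and mark to market at the end.
# The Bar type and the SMA-crossover strategy are illustrative only.

from collections import namedtuple

Bar = namedtuple("Bar", ["close"])

def backtest(bars, fast=3, slow=5, cash=10000.0):
    position = 0          # shares currently held
    closes = []
    for bar in bars:
        closes.append(bar.close)
        if len(closes) < slow:
            continue      # not enough history for the slow average yet
        fast_ma = sum(closes[-fast:]) / fast
        slow_ma = sum(closes[-slow:]) / slow
        if fast_ma > slow_ma and position == 0:    # enter long
            position = int(cash // bar.close)
            cash -= position * bar.close
        elif fast_ma < slow_ma and position > 0:   # exit
            cash += position * bar.close
            position = 0
    # mark any open position to market at the last close
    return cash + position * bars[-1].close
```

A real version would also model commissions and slippage, but the skeleton stays the same: loop over bars, hand each one to the strategy, and account for the fills.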
By the way, in TWS you can use the Portfolio Builder. It's an easy-to-use tool for testing simple strategies. Check here: https://www.interactivebrokers.com/de/index.php?f=15968&ns=T
I think it's very helpful for the first steps. For more advanced strategies you have to use the API, as mentioned above.
IBridgePy does not provide a backtest function. You can only collect historical and fundamental data after you subscribe to IB's specific data feeds. However, I can suggest six popular backtesting frameworks for Python:
PyAlgoTrade
bt - Backtesting for Python
Backtrader
pysystemtrade
Zipline
Quantopian
You may choose one of them depending on your needs.
We are using Grafana to display graphs over OpenTSDB for our performance test results. We have a use case in which we would like to compare a test's metrics with benchmark results or with a different timestamp. Is there a way to do this comparison?
I know it is possible with the Graphite data source using the timeShift function, but I'm not sure about OpenTSDB.
You can try the "graph-compare-panel" plugin. It's very easy to use, but it only supports some versions of Grafana.
It looks like there is a timeShift function for OpenTSDB. The issue for the timeshift feature request has been closed, and this commit adds the timeShift/shift function.
On the other hand, the function is undocumented, and there is an open issue where it doesn't seem to work.
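If the undocumented timeShift turns out not to work, one fallback is to do the comparison client-side: query the same metric once for the current window and once for a window shifted back a week via OpenTSDB's HTTP `/api/query` endpoint, then overlay the two series. A minimal sketch of the payload construction (the metric name, aggregator, and one-week shift are illustrative assumptions):

```python
# Build two OpenTSDB /api/query payloads: the current window and a
# baseline window shifted back by `shift_ms`. POST each payload to
# http://<tsdb-host>:4242/api/query and overlay the results client-side.

WEEK_MS = 7 * 24 * 3600 * 1000

def build_queries(metric, start_ms, end_ms, shift_ms=WEEK_MS):
    """Return (current, baseline) request bodies for /api/query."""
    def payload(start, end):
        return {
            "start": start,
            "end": end,
            "queries": [{"aggregator": "avg", "metric": metric}],
        }
    return payload(start_ms, end_ms), payload(start_ms - shift_ms,
                                              end_ms - shift_ms)
```

Before plotting, add `shift_ms` back to the baseline's timestamps so both series line up on the same time axis.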
On a related note, this is fairly easy to accomplish in ATSD, which supports the OpenTSDB protocol and provides a storage plugin for Grafana.
https://axibase.com/products/axibase-time-series-database/visualization/widgets/baselines/
time-offset = 1 week
Here's another example, a bit more artistic:
A more advanced example is to overlay multiple series using relative elapsed time. I don't know if this is relevant for your use case, but we're working on it.
Disclaimer: I work for the company developing Axibase Time Series Database.
I need to implement a software module that is able to retrieve the topology of an autonomous system.
Looking at the various protocols implemented in Cisco routers, I concluded that the only two alternatives for obtaining the topology are SNMP and OSPF.
The first one is a workaround and I don't want to use it, which leads to OSPF.
I haven't found a usable library in C, Java, or Python; this one (http://www.ospf.org/) is probably the most complete, but it comes without documentation and I don't have enough time to analyze all the code.
So I found Quagga, which can implement a software OSPF router; it seems the perfect alternative since it can work with both real networks and networks simulated in GNS3.
But is it possible to obtain the OSPF routing table from Quagga, given that everything is done from the command line?
These are my conclusions and doubts. If someone can suggest something better or help me with the next step, it would be appreciated, since I'm stuck at the moment.
Use Quagga's ospfclient feature. There is already an example in the ospfclient directory (see ospfclient.c) which shows how to retrieve the LSA database from a quagga/ospfd instance. For this solution to work, you need to attach a PC to one of your OSPF backbone routers and configure quagga/ospfd on it to learn the routes; then you start your ospfclient to retrieve any information you need.
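If linking against the ospfclient C API is too heavy, a lighter command-line route is to have Quagga's vtysh dump the LSA database and parse the text. This is only a sketch: it assumes vtysh is runnable with sufficient privileges on the Quagga host, and that the dump uses the usual "show ip ospf database" table layout (Link ID / ADV Router columns); the parser is deliberately minimal:

```python
# Dump and parse Quagga's OSPF LSA database via vtysh instead of the
# ospfclient C API. Parser is a sketch: it only pulls the Link ID and
# ADV Router columns out of table rows that start with two IPv4 addresses.

import re
import subprocess

def ospf_database_raw():
    """Run vtysh non-interactively and return the raw LSA database dump."""
    return subprocess.run(
        ["vtysh", "-c", "show ip ospf database"],
        capture_output=True, text=True, check=True,
    ).stdout

ROW = re.compile(r"^(\d+\.\d+\.\d+\.\d+)\s+(\d+\.\d+\.\d+\.\d+)\s")

def parse_router_lsas(text):
    """Yield (link_id, adv_router) pairs from the dump's table rows."""
    for line in text.splitlines():
        m = ROW.match(line.strip())
        if m:
            yield m.group(1), m.group(2)
```

From the (link ID, advertising router) pairs you can start reconstructing the adjacency graph; for full topology detail you would also need to decode the per-LSA link records, which is where ospfclient.c remains the more complete reference.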
Can you please help me understand the difference between the SWIFT InterAct and FileAct protocols?
The two products are designed to handle different types of financial processing.
SWIFT InterAct is designed for solutions where a real-time gross settlement system is in play, better known by its acronym RTGS. This type of processing is near real-time. In the US, Fedwire, for example, is an RTGS solution.
SWIFT FileAct is used for time-insensitive processing, i.e. batch processing. In such a system, financial transactions are queued up and then batch-processed at a particular time. For example, ACH is a batch-processed system.
Though both protocols handle financial transactions, the difference lies in how the processing is handled.
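The two processing models can be contrasted with a conceptual sketch. This is not SWIFT message handling in any form; it only illustrates per-transaction gross settlement versus queued net settlement, with all names invented for the example:

```python
# Conceptual contrast of the two settlement models, not SWIFT code.

def settle_rtgs(balances, payments):
    """RTGS-style: each payment settles individually, immediately, gross."""
    for sender, receiver, amount in payments:
        balances[sender] -= amount
        balances[receiver] += amount
    return balances

def settle_batch(balances, payments):
    """Batch-style: queue everything, then post each party's net once."""
    net = {}
    for sender, receiver, amount in payments:
        net[sender] = net.get(sender, 0) - amount
        net[receiver] = net.get(receiver, 0) + amount
    for party, delta in net.items():   # one posting per party at the cutoff
        balances[party] += delta
    return balances
```

Both end with the same balances; the practical difference is that the RTGS path moves the full gross amount of every payment as it arrives, while the batch path moves only each party's net position once per cycle.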
If you are looking for more information about OpenStack Swift, kindly follow the developer documentation at http://docs.openstack.org/developer/swift/ for details.
I was planning to write a recommender which treats preferences differently depending on contextual information (the time the preference was made, the device used to make it, ...).
Within the Mahout in Action book and the code examples shipped with Mahout I can't seem to find anything related. In some examples there's metadata (a.k.a. content) used to express user or item similarity, but that's not what I'm looking for.
I wonder if anyone has already made an attempt to do something similar with Mahout?
Edit:
A practical example could be that the current session is on a mobile device, and this should cause a boost (rating * 1.1) for all preferences tracked on mobile devices and a drop (rating * 0.9) for preferences tracked differently.
...
Another example could be that some ratings are collected implicitly and others explicitly. How would I keep track of this fact without coding it directly into the tracked value, and how would I use that information when calculating the scores?
I would say one approach is to use the Rescorer class to do just that, but my guess is that this is what you are referring to when you say that's not what you are looking for.
Another approach would be to pre-process your entire data set, adjusting the preferences according to your needs, before using Mahout to generate recommendations.
If you provide more detail on how you expect to use your data to modify preferences, people here will be able to help even further.
I have been working with Carmen (http://carmen.sourceforge.net/) for a while now, and I really like the software, but I need to make some changes inside the source code.
I am therefore interested in any student reports/projects that have worked with Carmen, or any documentation of the source code.
I have been reading the documentation on the Carmen webpage, but with all due respect, I think the literature there is a bit outdated and insufficient.
ROS is the new hot navigation toolkit for robotics. It has a professional development group and a very active community. The documentation is okay, but it's the best I've seen among robotic operating systems.
There are a lot of student project teams that are using it.
Check it out at www.ros.org
I'll be more specific on why ROS is awesome...
Built-in visualizer/simulator, rviz
- It has a record function that records all of the messages published by nodes; this lets you take in a lot of raw data, store it in a "ros bag", and play it back later when you need to test your AI but want to sit in your bed.
Built-in navigation capabilities
- All you have to do is write the publishers of data for your sensors.
- It has standard messages that you need to fill out so that the stack has enough information.
There is an Extended Kalman Filter, which is pretty awesome because I didn't want to write one. I'm currently implementing it; I'll let you know how that turns out.
It also has built-in message levels; by that I mean you can change which severity of print messages gets printed at runtime, which is fairly handy for debugging.
There's a robot monitor node that you can publish the status of your sensors to and it bundles all of that information into a GUI for your viewing pleasure.
There are some basic drivers already written. For example SICK lidars are supported right out of the box.
There is also a built in transform function, to help you move everything to the right coordinate system.
ROS was made to run across multiple computers, but can work on just one.
Data transfer is handled over TCP ports.
I hope that's more helpful.