Algorithmic trading software safety guards [closed]

I'm working on an automated trading system. What sorts of safeguards should I have in place?
The main idea I have is to have multiple pieces checking each other.
I will have a second, independent process that connects to the same trading account and monitors simple invariants: that the total net position does not exceed a certain limit, that there are no more than N orders in any 10-minute window, and that no more than M positions are open simultaneously. It can also verify that the actual open positions match what the strategy process thinks it holds. As a bonus, I could run this checker on a different machine and network provider.
Together with the checks inside the main strategy itself, this should ensure that no matter what weird bug occurs, nothing really bad can happen.
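Roughly, the checker loop I have in mind looks like this (a sketch only: the Broker interface, its methods, and the limit values are placeholders I made up, not a real API):

    // Hypothetical sketch: Broker stands in for whatever account API the venue exposes.
    interface Broker {
        double netPosition();              // signed total exposure across all instruments
        int ordersSince(long epochMillis); // orders submitted since the given time
        void killSwitch();                 // cancel everything, flatten, disconnect
    }

    public class Watchdog {
        static final double MAX_NET_POSITION = 1_000_000; // placeholder limit
        static final int MAX_ORDERS_PER_10_MIN = 50;      // placeholder limit

        static void check(Broker broker) throws InterruptedException {
            while (true) {
                boolean tooBig  = Math.abs(broker.netPosition()) > MAX_NET_POSITION;
                boolean tooFast = broker.ordersSince(System.currentTimeMillis() - 600_000)
                        > MAX_ORDERS_PER_10_MIN;
                if (tooBig || tooFast) {
                    broker.killSwitch(); // halt everything and page a human
                }
                Thread.sleep(5_000); // poll every few seconds
            }
        }
    }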
Any other things I should monitor and be aware of?

A lot of algorithmic trading systems use ESP/CEP (event-stream processing / complex event processing) engines to make trading decisions based on market activity (tracking VWAP being the canonical example).
But perhaps you could create a stream from the algorithm's own activity and have an ESP/CEP system act as a watchdog over it: if the algo starts trading too much within a rolling 10-minute window, the watchdog could send a message to your middleware to shut down the FIX connection, and so on. It would also be wise to monitor the major indexes you are trading against to see whether the market is going through a particularly volatile moment; algos that trade well during periods of relatively low volatility can quickly run amok when a market starts to crash.
Esper is an open-source ESP engine for Java and .NET that is worth checking out.
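As a rough sketch of the idea against Esper's classic (pre-6.x) Java API, the order-rate rule might look like this; the OrderEvent class, the 100-order threshold, and the reaction are all assumptions for illustration:

    import com.espertech.esper.client.*;

    public class AlgoWatchdog {
        // Hypothetical event the algo publishes for every order it sends.
        public static class OrderEvent {
            private final String symbol;
            public OrderEvent(String symbol) { this.symbol = symbol; }
            public String getSymbol() { return symbol; }
        }

        public static void main(String[] args) {
            Configuration config = new Configuration();
            config.addEventType("OrderEvent", OrderEvent.class);
            EPServiceProvider engine = EPServiceProviderManager.getDefaultProvider(config);

            // Fire whenever more than 100 orders fall inside a rolling 10-minute window.
            EPStatement stmt = engine.getEPAdministrator().createEPL(
                "select count(*) as cnt from OrderEvent.win:time(10 min) having count(*) > 100");

            stmt.addListener((newEvents, oldEvents) -> {
                // Placeholder reaction: here you would tell the middleware
                // to drop the FIX session.
                System.err.println("Runaway algo? " + newEvents[0].get("cnt")
                        + " orders in the last 10 minutes");
            });

            engine.getEPRuntime().sendEvent(new OrderEvent("ACME")); // the algo feeds events here
        }
    }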

Related

Is it practical to use Erlang for embedded development? [closed]

If so, what is the storage and memory footprint?
EDIT
I have done some research on this but failed to find useful information. The site http://www.erlang-embedded.com/ doesn't help at all. The blog post http://www.1011ltd.com/web/blog/post/embedded_erlang was a little helpful, but it would be nice to hear from people with more experience.
EDIT 2
The hardware I intend to use for Erlang has 32 MB of flash storage for the system and 512 MB of RAM. It is dual-core at 400 MHz per core and runs Linux 2.6.18.
EDIT 3
The motivation behind my interest in Erlang is to solve concurrency problems gracefully. On the project I work on, we have some complex middleware that is not robust and is hard to understand and extend. Of course you can write great concurrent software in C, but Erlang just seems like a better tool for this problem domain.
What is embedded for you?
In my world it's a system with less than 1 MB of flash and typically ~64 KB of RAM. In my world there are C compilers and sometimes C++ compilers, but nobody has ever heard of an Erlang compiler for such a system (and nobody has missed one).
But if embedded for you means Windows CE or Linux running on non-PC hardware with more than 64 MB of RAM and 1 GB of flash, then there should be no problem with Erlang.
I would echo the sentiment that the question is vague. But, ...
Not trying to troll, but I think the answer is either "Yes!!" or "No!!" depending on your assumptions about the hardware and on what problems you are trying to solve that aren't easily solved by something more standard like C (i.e., why aren't you using something like C? There must be a reason: reducing code size, needing hot upgrades, {erlang_value_prop, n}, etc.).
Under a certain set of criteria, the answer seems to be "yes". Evidence includes:
EMBEDDED ERLANG? ABSOLUTELY (http://www.1011ltd.com/web/blog/post/embedded_erlang)
Its embedded use in ATM switches and other telecom equipment
There is (or was) an embedded-Erlang group on Google
I think Ulf Wiger has an Embedded Erlang slide-deck as part of his work with Erlang Solutions
etc.
No.
Many embedded systems don't have Erlang compilers, while all have C compilers and most have C++ compilers.
Erlang lacks the low-level access an embedded system requires.
It's certainly possible to run Erlang on a cluster of Raspberry Pis, but that isn't really an embedded device.

Why are Reference Counting GCs Stigmatised? [closed]

I once read somewhere about a common thought amongst idealistic yet 'lazy' programmers trying their hand at implementing programming languages. It was as follows:
'I know, I'll do an easy-to-implement and quick-to-write reference counting GCer, and then re-implement it as a real GCer when I get the time.'
Naturally, this reimplementation never occurs.
However, I question why such a reimplementation is needed. Why are mark-and-sweep collectors of the incremental and concurrent variety seen as superior to the allegedly antiquated approach taken by languages like Perl 5 and Python? (Yes, I'm aware Python augments this approach with a mark-and-sweep collector.)
Circular references are the first topic to come up in such a discussion. Yes, they can be a pain (see recursive coderefs in Perl and the fix involving multiple assignments and reference weakening). Yes, it's not elegant when a coder has to constantly watch for references of that ilk.
Is the alternative any better, though? We can discuss fine-grained implementation details for eternity, but the fact is that most mark-and-sweep GC implementations have the following problems:
Non-deterministic destruction of resources, leading to code that's hard to reason about and too verbose (see IDisposable in .NET or the try/finally replacements in many other languages; a sketch follows this list).
Additional complexity from different categories of garbage (short-lived, long-lived, and everything in between), with such complexity seemingly required for reasonable performance.
Either another thread is required, or the execution of the program needs to be periodically halted to perform the collection.
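To make the first point concrete, here is roughly what one of those try/finally replacements looks like in Java: because a tracing collector makes no promise about when the object dies, the lifetime of the underlying file handle has to be spelled out by hand (the file name is a placeholder):

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    public class DeterministicCleanup {
        public static void main(String[] args) throws IOException {
            // Under reference counting the reader could be freed the moment its
            // count hits zero; under mark-and-sweep we must close it explicitly.
            try (BufferedReader in = new BufferedReader(new FileReader("data.txt"))) {
                System.out.println(in.readLine());
            } // close() runs here, deterministically, regardless of the GC
        }
    }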
Are the downsides of mark-and-sweep justified as a fix for the problems of reference counting, which can be mitigated with weak references?

Are there any profilers that run on the .NET Micro Framework? [closed]

I have a project that runs on the .NET Micro Framework (or NETMF) and am looking for a profiler. So far none of the ones I've tried will run on NETMF. Does anyone know of a profiler that will?
A week with no answers.
You have a tough problem, which is getting good measurement data in a very small footprint world.
My company, Semantic Designs, provides profilers for a variety of languages, including C# in a number of variations.
Our (C#) timing profilers can handle multiple threads of execution but require additional space per function call to track the data. It is unclear whether you need this capability, and unclear whether you have the room to capture it.
Our counting profilers require only enough space to store a count for each basic block (kept in an array), plus some additional code space for the instrumentation. Typically you need a counter slot for every 4-5 lines of code. This is likely your best bet.
You'll likely have to build some custom support machinery; in particular, in small embedded environments, our customers usually have to build a small bit of code that exports the count array content to a disk file. If you can achieve that, you can get profiling data.
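Purely as an illustration of the shape of the technique (this is not our actual instrumentation, and the real target would be C# on NETMF rather than the Java shown here), block counting plus the export step look roughly like this:

    import java.io.IOException;
    import java.io.PrintWriter;

    public class BlockCounters {
        // One slot per instrumented basic block; the instrumenter assigns the indices.
        static final int[] COUNTS = new int[256];

        // The instrumenter injects a call like this at the entry of each block.
        static void hit(int blockId) { COUNTS[blockId]++; }

        // The small bit of custom support machinery: export the count array
        // to a file so it can be pulled off the device and analyzed on the host.
        static void dump(String path) throws IOException {
            try (PrintWriter out = new PrintWriter(path)) {
                for (int i = 0; i < COUNTS.length; i++) {
                    if (COUNTS[i] > 0) out.println(i + "," + COUNTS[i]);
                }
            }
        }

        public static void main(String[] args) throws IOException {
            hit(0); hit(0); hit(1); // stand-ins for instrumented code paths
            dump("counts.csv");
        }
    }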

Which is the best data-warehousing tool to learn in the present market? [closed]

I am graduating soon in electrical engineering.
I would like to learn a data-warehousing tool. Which of the following would you suggest I learn to help me advance my career, bearing in mind I don't have a computer science degree?
Business Objects
Informatica
Hyperion
DataStage
Cognos
Data warehousing is becoming more and more commoditized. Those products are just tools (and the tools you list address very separate areas - ETL and business intelligence).
If you are looking to make a career in data warehousing, you really need to get a solid basis in the theory and principles - particularly modelling philosophies and warehouse development lifecycle practices (and dealing with the business stakeholders) - read Inmon and Kimball.
Typically, data warehousing is completely different from the regular software lifecycle. In DW, you build the system AND THEN you get the requirements. Seriously. The point is to model your DW as best you can, get it into the users' hands, and then refactor.
ETL is about as exciting as it sounds, and BI spans a wide range of things from reporting to dashboards to data mining and decision support - and the tools vary in capabilities.
I guess my point is that learning any particular tool is not really going to advance your career, except to let you check a box that might get you a job. Advancing your career comes from solving people's problems well and from becoming familiar with as many technologies, tools, and techniques as it takes to do that.

Grails: enterprise level Grails [closed]

I am trying to persuade my boss to use Grails.
I tell him it is the most productive way to implement our shopping web site, but he has doubts about its scalability as traffic grows.
So, can you give examples of enterprise-level web sites with notable traffic that are implemented in Grails? Also, is there anything I have to take into account when building such an enterprise-level, potentially high-traffic site?
Note: we expect about 10K hits daily.
Take a look at the Grails Success Stories. The most popular sites are probably Sky.com and mp3.walmart.com.
Groovymag has some good information on this, but it costs $5 per issue. One issue has both an interview with a developer from Sky, a very large site that uses Grails, and material on implementing an e-commerce site with Grails.
The main point I took from the interview with one of the Sky.com developers is that they have no problem scaling to millions of page views through smart use of caching. Although your site may get 10,000 views a day, most of those views should not need to touch the database. You can cache information about each product so that viewing the site requires few queries. This reduces the load on your database and makes GORM less of a potential bottleneck.
I have not been able to find out how GORM performs under heavy load, but if worst comes to worst you could write your performance-critical database code using pure JDBC calls and put it in a service.
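For example, a performance-critical lookup might bypass GORM like this (plain Java for the sketch; in a real Grails app it would be a Groovy class under grails-app/services with the dataSource injected by Spring, and the product table and columns here are made up):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class ProductLookupService {
        private final DataSource dataSource; // injected by the container

        public ProductLookupService(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        // Hot-path query written against plain JDBC, bypassing GORM entirely.
        public int stockLevel(long productId) throws SQLException {
            String sql = "select stock from product where id = ?";
            try (Connection c = dataSource.getConnection();
                 PreparedStatement ps = c.prepareStatement(sql)) {
                ps.setLong(1, productId);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getInt(1) : 0;
                }
            }
        }
    }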
If you do the math, 10K daily hits is less than one hit per second, even if all 10K happened during a 3-hour peak window (10,000 / 10,800 s is about 0.93 hits/s; spread over a full day it is about 0.12 hits/s). Even assuming you meant page renders and not hits, you are talking about a really minuscule amount of traffic.
