I'm trying to calculate my fill rate at the end of the day, computed as orders sent divided by trades.
The problem is that when I send a bracket order it is split into three trades: one for the limit, one for the stop loss and one for the take profit, and some of them get cancelled when another one executes.
I also have orders that I cancel automatically after some time if they haven't been executed.
When I call ib.orders() I get all the orders of the day.
How can I get only the orders that were never filled, i.e. the ones where I tried to buy but couldn't get them and they got cancelled?
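A minimal sketch of one way to get there with ib_insync, assuming the connection stays up for the whole session (the filter conditions are my assumptions, not a documented recipe): ib.trades() pairs every order sent this session with its status, order.parentId marks the stop-loss/take-profit children of a bracket, and orderStatus.filled tells you whether a cancelled order got any fill before it died.

```python
from ib_insync import IB

ib = IB()
ib.connect('127.0.0.1', 7497, clientId=1)

# Every order sent this session, paired with its current status.
trades = ib.trades()

# Keep only parent/entry orders: the stop-loss and take-profit children
# of a bracket carry the parent's id in order.parentId, so skip them.
entries = [t for t in trades if t.order.parentId == 0]

# Cancelled with zero filled quantity = tried to buy but never got it.
unfilled = [t for t in entries
            if t.orderStatus.status == 'Cancelled'
            and t.orderStatus.filled == 0]

sent = len(entries)
fill_rate = (sent - len(unfilled)) / sent if sent else 0.0
print(f'sent={sent}  unfilled={len(unfilled)}  fill rate={fill_rate:.2%}')
```

Counting only parent orders keeps the auto-cancelled stop/take-profit legs of already-filled brackets from deflating the fill rate.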
I get data from our CMS that shows all actions of staff within that system.
My challenge is to show in a chart only the first iteration of each story as published, and the hour in which it occurred.
A single story can be published multiple times during a day.
Using COUNTUNIQUEIFS I can get the number of unique stories per hour:
=COUNTUNIQUEIFS(Sheet0!I2:I60000,Sheet0!G2:G60000,"NL_Stories/Ready",Sheet0!E2:E60000,"Webpub",Sheet0!B2:B60000,"08")
=COUNTUNIQUEIFS(Sheet0!I2:I60000,Sheet0!G2:G60000,"NL_Stories/Ready",Sheet0!E2:E60000,"Webpub",Sheet0!B2:B60000,"09")
Etc.
However, if a story is published in the period from 8am-9am ("08" in column B) and then published again between 9am-10am ("09" in column B), it is counted in both hours.
How can I limit this to just the first time the story is published, excluding it from all the other hours?
I have attached a spreadsheet with two tabs, one with the raw data and one with what I currently do.
Any assistance appreciated
https://docs.google.com/spreadsheets/d/1V-kZyUUfXtaf6pMYDCxNSUjjWclOrUcW2Pk2y678RZo/edit?usp=sharing
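One approach that might work (a sketch, assuming story IDs are in column I of Sheet0 and the rows are in chronological order): add a helper column in Sheet0, say J, that flags only the first qualifying row for each story using an expanding COUNTIFS, then sum that flag per hour instead of counting uniques:
=IF(COUNTIFS($I$2:I2,I2,$G$2:G2,"NL_Stories/Ready",$E$2:E2,"Webpub")=1,1,0)
=SUMIFS(Sheet0!J2:J60000,Sheet0!B2:B60000,"08")
The first formula goes in Sheet0!J2 and is filled down; because its ranges grow one row at a time, it returns 1 only on a story's earliest qualifying row, so a story republished in a later hour is never counted twice.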
I'm trying to build a per-visit average page load time metric in Adobe Analytics for use in analysis such as average page load time versus conversion rate (Example: Users with avg load time between 0-1s convert at X rate, users between 1-2s convert at Y rate, users between 2-3s convert at Z rate, etc).
We currently have page load time implemented as an event and an eVar on all pages, being captured in ms (ex: On loading the home page, we'll see eVar10=1782 and event10=1782). The eVar is set as a text string set to expire on hit with most recent allocation, while the event is set as an "up is bad" always record numeric with participation enabled.
My first instinct was simply a calculated metric with the event divided by Page Views, but that aggregates at too high a level (the grand total of all load times is divided by the grand total of page views). I tried throwing in various summation functions, but they all ended up equally garbage.
Is it possible to build Average-Per-Visit metrics in AA? Is my implementation even going in the right direction?
Could you do something like this? It might not be exactly what you want, but the calculated metric might help. If you divide your page load time metric by "page load time instances" you get an average, and if you have an eVar for session ID or username, etc., you can then find the average load time per session or per user. In the example below I use Day as the dimension, but you could use whatever eVar you want, like username or session ID.

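A sketch of the calculated metric being described, using the numbering from the question (treating "instances" as the Instances metric of the eVar10 page load time dimension, which is an assumption about the report suite):
Avg Page Load Time = Page Load Time (event10) / Page Load Time (eVar10 Instances)
Run the report with the session-ID or username eVar as the dimension and this metric as the column to get an average per session or per user, then bucket the results into 0-1s, 1-2s, 2-3s ranges for the conversion comparison.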
Came across this thread as I was looking for something similar. The way I went about this is:
Check the calculated metric here along with the breakdown.
You can create a Page Load Time / Page Views metric for each ECID and then break it down further by Visits. This will give you the average page load time for a user for a visit.
You can then cross-tab this with Orders. In my case visits which had orders typically
I have some payments (income and expense) which are added to Core Data, and every day I calculate the total and also show for how many consecutive days the payments total was positive.
The way I do it right now is to always get the total for the previous day and increment a counter if it's positive. This counter is saved using UserDefaults.
My issue is that when, let's say, the app is deleted and reinstalled, the counter is lost, so I am trying to find a way to calculate it dynamically every time, but I don't think reading all payments for all days is a good idea in terms of memory.
Another solution is maybe to save it using the Keychain?
Is there any other, more elegant method? I don't really like the idea of saving this counter.
Assuming you reset the counter if the balance goes negative, you just need to load the balances in reverse date order and count until you go negative?
Depending on how many records you have, it may not be performant to read it all in one go. If that's the case, read in batches of a manageable size (50 days, perhaps) and only fetch more data if you are still seeing positive balances; a sketch of that loop is below.
At some point, of course, you may just return "more than 100 days" as a valid response :-)
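A minimal sketch of that batched scan, written in Python for brevity (fetch_batch is a hypothetical stand-in for a Core Data fetch with a reverse-date sort descriptor, fetchOffset and fetchLimit):

```python
def consecutive_positive_days(fetch_batch, batch_size=50):
    """Count consecutive days, newest first, with a positive total.

    fetch_batch(offset, limit) stands in for a Core Data fetch sorted
    by date descending; it returns a list of daily totals, newest
    first, and an empty list once the history is exhausted.
    """
    count, offset = 0, 0
    while True:
        batch = fetch_batch(offset, batch_size)
        if not batch:
            return count  # ran out of history while still positive
        for total in batch:
            if total <= 0:
                return count  # streak broken, stop counting
            count += 1
        offset += batch_size
```

Because the scan stops at the first non-positive day, it usually touches only one small batch, so nothing needs to be persisted and the counter survives a reinstall.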
Dataset: I'm given the number of minutes individual customers use a product each day and am trying to cluster this data in order to find common usage patterns.
My question: How can I format the data so that, for example, a power user with high levels of use for a year looks the same as a different power user who has only been able to use the device for a month before I ended data collection?
So far I've turned each customer into an array where each cell is the number of minutes used that day. The array starts when the user first uses the product and ends after the user's first year of use. All entries must be double values (e.g. 200.0 minutes used) for the clustering model. I've considered setting all cells/days after the last day of data collection to either -1.0 or NULL. Is either of these a valid approach? If not, what would you suggest?
For the problem where you want both users to look the same (one who used the product a lot every day for a year, and the other who used it a lot for one month), create a new feature whose value is:
avg_usage per time_bin
time_bin can be a month, a day or another time bin which best fits your needs.
This way, a user who uses the product, let's say, 200 minutes per day for one year will get:
200 * 30 * 12 / 12 = 6000 minutes per month
and the other user, who joined just last month, will get exactly the same value for the exact same usage:
200 * 30 * 1 / 1 = 6000 minutes per month.
This way it doesn't matter when you started using the product; the only thing that matters is the usage rate.
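A minimal sketch of this normalization (numpy assumed; the function and variable names are hypothetical): average each user's minutes over the complete 30-day bins they were actually observed for, so the feature no longer depends on tenure and no -1.0/NULL padding is needed.

```python
import numpy as np

def avg_usage_per_bin(daily_minutes, days_per_bin=30):
    """Average minutes per 30-day bin over the days actually observed.

    daily_minutes holds one value per observed day, starting at first
    use and stopping at the end of data collection (no padding).
    """
    daily = np.asarray(daily_minutes, dtype=float)
    n_bins = len(daily) // days_per_bin
    if n_bins == 0:
        # Less than one complete bin observed: scale the daily mean up.
        return daily.mean() * days_per_bin
    # Drop the trailing partial bin and average the complete ones.
    binned = daily[: n_bins * days_per_bin].reshape(n_bins, days_per_bin)
    return binned.sum(axis=1).mean()

# Both users land on 6000 minutes per month, matching the arithmetic above:
year_user  = avg_usage_per_bin([200.0] * 365)   # 12 complete bins
month_user = avg_usage_per_bin([200.0] * 30)    # 1 complete bin
```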
One important thing you might take into consideration is that products may be forgotten for some time: for example, I don't use my computer while I'm away on vacation, and those days (maybe) say nothing about my general usage of the product. So, based on your data, product and intuition, you might consider removing gaps like the one I mentioned and not taking them into account in the calculation.
The amount of time a user has used your product could be a signal of something, but if he indeed only started some time ago and is still using it today, that may be something you need to take into consideration, and for that this average-binning technique may help.
I'm measuring how long users are logged into a service. Every minute, for each user, their new total online time is sent to InfluxDB. I'd like to graph, in Grafana, the cumulative online time for all users.
What kind of query would I need to do that? I initially thought that I'd want sum(onlineTime) grouped by time(1m), but I realized that sums the values within that timeframe rather than summing each user's running total, so when a user wasn't logged in the total would drop, because there were no data points for them.
I'm a bit confused about what I'm doing now. If I'm sending the wrong data, I can change that too.
So this depends on the time data you send back to InfluxDB.
If the value is the running total of time spent up to that instant, you would have to take the "last" value for each user and sum it across all users.
If the value is a small increment, you would have to add up these incremental values over the period of time.
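For the running-total case, a sketch of an InfluxQL subquery that might work in a Grafana panel (the measurement name "usage", field "onlineTime" and tag "user" are assumptions about your schema; $timeFilter is Grafana's time-range macro): take each user's last reported total per interval, then sum those across users.

```
SELECT sum("total")
FROM (
  SELECT last("onlineTime") AS "total"
  FROM "usage"
  WHERE $timeFilter
  GROUP BY time(1m), "user" fill(previous)
)
WHERE $timeFilter
GROUP BY time(1m)
```

fill(previous) carries a user's last known total through intervals where they sent no point, which addresses the drop you saw when someone wasn't logged in; remove it if you'd rather have logged-out users fall out of the sum.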