How can I limit a trigger to only one trade?

In Pine Script, how do I limit a trigger to only one trade/entry, so that after an exit the same trigger doesn't generate another entry?

I solved it by comparing the closed-trade count now with the count on the bar where the setup last fired:
b = ta.barssince(id2) // bars since the setup condition last fired
tradesActively = ((strategy.closedtrades - strategy.closedtrades[b]) < 1) // true while no trade has closed since that trigger
Variable id2 is the trade setup condition; gating entries on tradesActively prevents the same trigger from producing another entry after an exit.


Can you set a LIMIT in a Google Sheets QUERY by referencing a cell rather than a number?

When creating a query with LIMIT, I would rather reference a cell to set the limit automatically than keep editing a numeric literal. Here is an example of what I tried; obviously it didn't work! Any help would be greatly appreciated. =QUERY(Production,"SELECT * ORDER BY G DESC LIMIT (H11-H23)",1)
Use:
=QUERY(Production, "SELECT * ORDER BY G DESC LIMIT "&(H11-H23), 1)

How do you find the difference of a field key for two data points belonging to different timestamps?

I have the following data:
INSERT table1,adwords_id=123-456-7890 total_spending=0 1538377201000000000
INSERT table1,adwords_id=123-456-7890 total_spending=110 1538463601000000000
INSERT table1,adwords_id=123-456-7890 total_spending=120 1538550001000000000
And I want to write a query to find the difference of total_spending between two timestamps.
For example, let's say I want to find the difference of total_spending between 1538377201000000000 + 1h and 1538550001000000000 + 1h.
Doing it line by line, it would be:
v1 = SELECT last(total_spending) FROM table1 WHERE "adwords_id" = '123-456-7890' AND time < 1538377201000000000 + 1h
v2 = SELECT last(total_spending) FROM table1 WHERE "adwords_id" = '123-456-7890' AND time < 1538550001000000000 + 1h
And the answer would be v2 - v1.
How can I do this in one query? (so I can run this across many adwords_id)
The below should work for you.
You need to supply values for the #t1, #t2 and #adwordsId variables.
SELECT
(
SELECT TOP 1 total_spending
FROM Table1
WHERE adwords_id = #adwordsId
AND time < #t2
ORDER BY time DESC
) -
(
SELECT TOP 1 total_spending
FROM Table1
WHERE adwords_id = #adwordsId
AND time < #t1
ORDER BY time DESC
)
Assuming total_spending is cumulative for each adwords_id, I would try this:
SELECT last("total_spending") - first("total_spending") FROM Table1 WHERE time >= t1 AND time <= t2 GROUP BY "adwords_id"
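Assuming total_spending is cumulative, the last-minus-first grouping in that second answer can be sanity-checked in plain Ruby against the sample rows above (the array layout and variable names here are just assumptions for illustration):

```ruby
# Sample rows as [adwords_id, total_spending, timestamp_ns],
# mirroring the three data points in the question.
rows = [
  ["123-456-7890", 0,   1538377201000000000],
  ["123-456-7890", 110, 1538463601000000000],
  ["123-456-7890", 120, 1538550001000000000],
]

hour_ns = 3_600 * 10**9
t1 = 1538377201000000000 + hour_ns
t2 = 1538550001000000000 + hour_ns

# last - first of total_spending within [t1, t2], grouped by adwords_id
diffs = rows
  .select { |_, _, ts| ts >= t1 && ts <= t2 }
  .group_by { |id, _, _| id }
  .transform_values { |rs| rs.last[1] - rs.first[1] }

p diffs # => {"123-456-7890"=>10}
```

Note this window only contains the second and third points, so the result is 120 - 110 = 10; the per-timestamp "last before t" approach in the question would instead yield 120 - 0.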

Rails 4: how to calculate a sum inside an each loop

I'm trying to find a solution for calculating a sum inside an each loop in Rails.
I feel like I've tried hundreds of different approaches, but I couldn't quite figure it out yet.
For example:
The helper returns 2 ids for products (using a flash alert just to make it visible):
view_context.time_plus.each
returns 1,2
But when I combine it with the call and select the multiple option, it only returns the last value instead of the sum of both:
view_context.time_plus.each do |i|
flash[:alert] = [Service.find_by_price_id(i).time].sum
end
I see in logs call made for both value:
Service Load (0.2ms) SELECT `services`.* FROM `services` WHERE `services`.`price_id` = 0 LIMIT 1
Service Load (0.2ms) SELECT `services`.* FROM `services` WHERE `services`.`price_id` = 1 LIMIT 1
find_by_column always returns only one record.
You can use a where condition for multiple ids like this:
Model.where(column_name: array_of_ids)
If view_context.time_plus returns an array of ids:
Service.where(price_id: view_context.time_plus).sum(:time)
You can try this
flash[:alert] = view_context.time_plus.map{|i| Service.find_by_price_id(i).time}.sum
This should work
You can use the inject method:
sum = view_context.time_plus.inject(0) { |sum, i| sum + Service.find_by_price_id(i).time }
But if you are using Ruby on Rails, a better way is to use ActiveRecord and SQL:
sum2 = Service.where(price_id: [1,2]).sum(:time)
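To see why the original loop only shows the last value, here is the same pattern in plain Ruby, with the Service lookup stubbed by a hash (the ids and times are made-up values for illustration):

```ruby
# Stand-in for Service.find_by_price_id(i).time: a plain hash,
# so the accumulation logic can run without ActiveRecord.
time_by_price_id = { 1 => 30, 2 => 45 }
ids = [1, 2]

# The original loop assigns flash[:alert] anew on every pass,
# so only the last value survives:
last_only = nil
ids.each { |i| last_only = [time_by_price_id[i]].sum }

# Mapping first and summing once accumulates both values:
total = ids.map { |i| time_by_price_id[i] }.sum

p last_only # => 45
p total     # => 75
```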

PostgreSQL and ActiveRecord subselect for race condition

I'm experiencing a race condition in ActiveRecord with PostgreSQL where I'm reading a value then incrementing it and inserting a new record:
num = Foo.where(bar_id: 42).maximum(:number)
Foo.create!({
bar_id: 42,
number: num + 1
})
At scale, multiple threads will simultaneously read then write the same value of number. Wrapping this in a transaction doesn't fix the race condition because the SELECT doesn't lock the table. I can't use an auto increment, because number is not unique, it's only unique given a certain bar_id. I see 3 possible fixes:
Explicitly use a postgres lock (a row-level lock?)
Use a unique constraint and retry on fails (yuck!)
Override save to use a subselect, i.e.
INSERT INTO foo (bar_id, number) VALUES (42, (SELECT MAX(number) + 1 FROM foo WHERE bar_id = 42));
All these solutions seem like I'd be reimplementing large parts of ActiveRecord::Base#save! Is there an easier way?
UPDATE:
I thought I found the answer with Foo.lock(true).where(bar_id: 42).maximum(:number), but that uses SELECT ... FOR UPDATE, which isn't allowed on aggregate queries.
UPDATE 2:
I've just been informed by our DBA that even if we could do INSERT INTO foo (bar_id, number) VALUES (42, (SELECT MAX(number) + 1 FROM foo WHERE bar_id = 42));, it wouldn't fix anything, since the SELECT runs under a different lock than the INSERT.
Your options are:
Run in SERIALIZABLE isolation. Interdependent transactions will be aborted on commit as having a serialization failure. You'll get lots of error log spam, and you'll be doing lots of retries, but it'll work reliably.
Define a UNIQUE constraint and retry on failure, as you noted. Same issues as above.
If there is a parent object, you can SELECT ... FOR UPDATE the parent object before doing your max query. In this case you'd SELECT 1 FROM bar WHERE bar_id = $1 FOR UPDATE. You are using bar as a lock for all foos with that bar_id. You can then know that it's safe to proceed, so long as every query that's doing your counter increment does this reliably. This can work quite well.
This still does an aggregate query for each call, which (per next option) is unnecessary, but at least it doesn't spam the error log like the above options.
Use a counter table. This is what I'd do. Either in bar, or in a side-table like bar_foo_counter, acquire a row ID using
UPDATE bar_foo_counter SET counter = counter + 1
WHERE bar_id = $1 RETURNING counter
or the less efficient option if your framework can't handle RETURNING:
SELECT counter FROM bar_foo_counter
WHERE bar_id = $1 FOR UPDATE;
UPDATE bar_foo_counter SET counter = counter + 1
WHERE bar_id = $1;
Then, in the same transaction, use the generated counter row for the number. When you commit, the counter table row for that bar_id gets unlocked for the next query to use. If you roll back, the change is discarded.
I recommend the counter approach, using a dedicated side table for the counter instead of adding a column to bar. That's cleaner to model, and means you create less update bloat in bar, which can slow down queries to bar.
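The counter-table approach works because the increment is a single atomic read-modify-write under a row lock: each transaction blocks on the counter row, bumps it, and releases it on commit. A plain-Ruby analogy of that idea, with a Mutex standing in for the FOR UPDATE row lock (this only illustrates the locking semantics, not ActiveRecord usage):

```ruby
# Ten threads race to fetch-and-increment a shared counter.
# The Mutex plays the role of the database row lock, making
# read + increment a single atomic step per thread.
counter = 0
lock = Mutex.new
assigned = Queue.new

threads = 10.times.map do
  Thread.new do
    # Equivalent to: UPDATE bar_foo_counter SET counter = counter + 1
    #                WHERE bar_id = $1 RETURNING counter
    assigned << lock.synchronize { counter += 1 }
  end
end
threads.each(&:join)

numbers = []
numbers << assigned.pop until assigned.empty?
p numbers.sort # => [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Without the lock around the read-modify-write, two threads could read the same value and hand out duplicate numbers, which is exactly the race in the original read-then-create code.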

Performing multiple joins on the same association with Squeel

In my application, I have two models: Workflow and Step; a step belongs_to a workflow and a workflow has_many steps. Steps have an index and a boolean status ('completed').
I want to retrieve workflows whose step 1 is completed and step 2 is not, i.e. something like this in SQL:
SELECT * FROM workflows w
INNER JOIN steps s1 ON s1.workflow_id = w.id
INNER JOIN steps s2 ON s2.workflow_id = w.id
WHERE s1.index = 1 AND s1.completed = 1
AND s2.index = 2 AND s2.completed = 0
I've tried to express this query with Squeel, but it seems it does not allow multiple joins on the same association: I cannot find a way to name the joins, and when I type something like this: Workflow.joins{steps}.joins{steps}, the generated SQL is simply:
SELECT `workflows`.* FROM `workflows`
INNER JOIN `workflow_steps` ON `workflow_steps`.`workflow_id` = `workflows`.`id`
Any idea how I can achieve that?
I don't know if you're going to like it, but it's possible via a self-referencing join:
Workflow.joins{steps.workflow.steps}.
where{(steps.index == 1) & (steps.completed == true) &
(steps.workflow.steps.index == 2) & (steps.workflow.steps.completed == false)}.
uniq
