How to get a date range with frappe.db.get_list()?

I'm trying to get subscriptions ending in a given month using the DB API documented here.
I can get before a certain date with:
import datetime

end_period = datetime.date(2020, 12, 31)
frappe.db.get_list('Subscription', filters={
    'current_invoice_end': ['<', end_period]
})
But how would I specify before end_period and after start_period?
When I tried
frappe.db.get_list('Subscription', filters={
    'current_invoice_end': ['<', end_period],
    'current_invoice_end': ['>', start_period]
})
It treated it as "OR" and listed things outside of the range.
cross-posted at discuss.erpnext.com

You can quickly search for "between" in the ERPNext source to check implementations; it has been the only reliable source for me.
"holiday_date": ["between", (self.start_date, self.end_date)],
The solution you posted won't work because Python won't allow two keys with the same name in a dict; the second entry silently overwrites the first.
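Applying the same "between" filter to the case in the question, a minimal sketch (doctype and field names taken from the question; "between" maps to an inclusive SQL BETWEEN clause):

import datetime

start_period = datetime.date(2020, 12, 1)
end_period = datetime.date(2020, 12, 31)

# A single key whose value is a ["between", (start, end)] pair avoids
# the duplicate-key problem entirely.
subscriptions = frappe.db.get_list('Subscription', filters={
    'current_invoice_end': ['between', (start_period, end_period)]
})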
Another solution that will return a list could be
# sql_list returns a flat list of single values, so select only one column
holidays = frappe.db.sql_list('''select holiday_date from `tabHoliday`
    where parent = %(holiday_list)s
    and holiday_date >= %(start_date)s
    and holiday_date <= %(end_date)s''', {
    "holiday_list": holiday_list,
    "start_date": self.start_date,
    "end_date": self.end_date
})

You can also pass filters as a list:
frappe.db.get_list('Subscription', filters=[
    ['current_invoice_end', '<', end_period],
    ['current_invoice_end', '>', start_period]
])
I would avoid using direct SQL for this!

Related

Multi-select statement in Rails but trying to prevent duplicate queries for a specific attribute

I am building a Web API for a legacy database and there are some duplicate ids that I need to prevent from being queried.
An example of the duplicates looks like this:
"FIRST LAST": [
    [
        "1213.21",
        "HHLOG",
        "2020-09-18T14:43:00.000-05:00",
        "121748.0"
    ],
    [
        "1213.21",
        "HHLOG",
        "2020-09-18T16:30:00.000-05:00",
        "121748.0"
    ]
],
The "121748.0" is actually the pb_id you'll see below in the select statement. These are the type of duplicates I am looking to remove.
Here is my current select statement:
@nova_loads = OpsHeader
  .select(:pb_id, :pb_net_rev, :pb_bill_id, :pb_id, :pb_dt_canc)
I have tried a few different things to see if they would work:
@nova_loads = OpsHeader
  .select(Distinct(pb_id), :pb_net_rev, :pb_bill_id, :pb_id, :pb_dt_canc)
@nova_loads = OpsHeader
  .select(:pb_id, :pb_net_rev, :pb_bill_id, :pb_id, :pb_dt_canc).distinct
and a plethora of others. None of them are working and I am a little confused as to why.
What do I need to do to have a multi-select statement that can filter out duplicates based on a certain key value? In this scenario I will always want the first record of the two. The next record is always the duplicate, so that makes it a little easier... just not for me :D.
EDIT - Here is my full query:
@nova_loads = OpsHeader
  .select(:pb_id, :pb_net_rev, :pb_bill_id, :pb_id, :pb_dt_canc)
  .where('pb_net_rev > ?', 0.0)
  .where(pb_dt_canc: nil)
  .joins(ops_stop_rec: :driver_header)
  .select(:ops_driver1, :dh_first_name, :dh_last_name, :ops_delivered_time)
  .where(ops_stop_rec: { ops_delivered_time: '2020-09-18 00:00:00'..'2020-09-18 24:00:00' })
  .find_each(batch_size: 1000) do |loads, i|
    @load_objects.push(loads.dh_first_name + ' ' + loads.dh_last_name => [loads.pb_net_rev, loads.pb_bill_id, loads.ops_delivered_time, loads.pb_id])
  end
I've used distinct_on for this. Not fully documented, so might break some time in the future, but here goes:
OpsHeader.select(:pb_id, :pb_net_rev, :pb_bill_id, :pb_id, :pb_dt_canc).tap do |rel|
  rel.arel.distinct_on(OpsHeader.arel_table[:pb_id])
end
Also, distinct_on currently only works on Postgres.

Firebase Firestore Swift, Timestamp but server time?

With Firestore, I add a timestamp field like this
var ref: DocumentReference? = nil
ref = Firestore.firestore()
    .collection("something")
    .addDocument(data: [
        "name": name,
        "words": words,
        "created": Timestamp(date: Date())
    ]) { ...
        let theNewId = ref!.documentID
        ...
    }
That's fine and works great, but it's not really correct: I should be using the "server timestamp" that Firestore supplies.
Please note this is on iOS (Swift) and Firestore, not the Firebase Realtime Database.
What is the syntax to get a server timestamp?
The syntax you're looking for is:
"created": FieldValue.serverTimestamp()
This creates a token which itself has no date value. The value is assigned by the server when the write is actually written to the database, which could be much later if there are network issues, so keep that in mind.
Also keep in mind that because they are tokens, they can present different values when you read them, and you can configure how they should be interpreted:
doc.get("created", serverTimestampBehavior: .none)
doc.get("created", serverTimestampBehavior: .previous)
doc.get("created", serverTimestampBehavior: .estimate)
none will give you a nil value if the value hasn't yet been set by the server. For example, if you're writing a document that relies on latency-compensated returns, you'll get nil on that latency-compensated return until the server eventually executes the write.
previous will give you any previous values, if they exist.
estimate will give you a value, but it will be an estimate of what the value is likely to be. For example, if you're writing a document that relies on latency-compensated returns, estimate will give you a date value on that latency-compensated return even though the server has yet to execute the write and set its value.
It is for these reasons that dealing with Firestore's timestamps may require handling more returns by your snapshot listeners (to update tokens). A Swift alternative to these tokens is the Unix timestamp:
extension Date {
    var unixTimestamp: Int {
        return Int(self.timeIntervalSince1970 * 1_000) // millisecond precision
    }
}
"created": Date().unixTimestamp
This is definitely the best explanation of how the timestamps work (written by the same Doug Stevenson who posted an answer here): https://medium.com/firebase-developers/the-secrets-of-firestore-fieldvalue-servertimestamp-revealed-29dd7a38a82b
If you want a server timestamp for a field's value, use FieldValue.serverTimestamp(). This will return a token value that gets interpreted on the server after the write completes.

Azure Data Factory: get data for the "For Each" component from a query

The situation is as follows: I have a table in my database that receives about 3 million rows each day. We want to archive this table on a regular basis, so that only the 8 most recent weeks are in the table. The rest of the data can be archived to Azure Data Lake.
I already found out how to do this one day at a time. But now I want to run this pipeline each week for the first seven days in the table. I assume I should do this with the "For Each" component. It should iterate over the seven distinct dates that are present in the dataset I want to back up. This dataset is copied from the source table to an archive table beforehand.
It's not difficult to get the distinct dates with a SQL query, but how do I get the result of this query into an array that the "For Each" component can use?
The issue is solved thanks to a co-worker.
What we have to do is assign a parameter to the dataset of the sink. It does not matter how you name it, and you do not have to assign a value to it. But let's assume this parameter is called "Date".
After that you can use this parameter in the filename of the sink (also in the dataset) with "@dataset().Date".
After that you go back to the copy activity, and in the sink you assign the dataset property to @item().DateSelect. (DateSelect is the field name from the array that is passed to the For Each activity.)
See also the answer from Bo Xioa, which forms part of the solution.
This way it works perfectly. It's just a shame that this is not well documented.
You can use a Lookup activity to fetch the column content, and the output will look like:
{
    "count": "2",
    "value": [
        {
            "Id": "1",
            "TableName": "Table1"
        },
        {
            "Id": "2",
            "TableName": "Table2"
        }
    ]
}
Then you can pass the value array to the ForEach activity's items field using the pattern @activity('MyLookupActivity').output.value.
ref doc: Use the Lookup activity result in a subsequent activity
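For reference, a minimal sketch of what the ForEach definition might look like with its items field bound to the Lookup output (the activity names here are illustrative, and the inner copy activity is omitted):

{
    "name": "ForEachDate",
    "type": "ForEach",
    "typeProperties": {
        "items": {
            "value": "@activity('MyLookupActivity').output.value",
            "type": "Expression"
        },
        "activities": []
    }
}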
I post this as an answer, because the error does not fit into a comment :D
I have seen another option to accomplish this: executing a pipeline from another pipeline. That way I can define the dates to iterate over as a parameter of the second pipeline (learn.microsoft.com/en-us/azure/data-factory/…). But unfortunately this leads to the same result as just using the foreach parameter, because in the filename of my data lake file I have to use @{item().columname}. I can see in the monitoring view that the right values are passed in the iteration steps, but I keep getting an error:
{
    "errorCode": "2200",
    "message": "Failure happened on 'Sink' side. ErrorCode=UserErrorFailedFileOperation,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=The request to 'Unknown' failed and the status code is 'BadRequest', request id is ''. {\"error\":{\"code\":\"BadRequest\",\"message\":\"A potentially dangerous Request.Path value was detected from the client (:). Trace: cf3b4c3f-1681-4073-b225-17e1c07ec76d Time: 2018-08-02T05:16:13.2141897-07:00\"}} ,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The remote server returned an error: (400) Bad Request.,Source=System,'",
    "failureType": "UserError",
    "target": "CopyDancerDatatoADL"
}

How to concatenate 2 fields into one during query time in solr

I have a document in Solr which is already indexed and stored, like:
{
    "title": "Harry potter",
    "url": "http://harrypotter.com",
    "series": [
        "sorcer's stone",
        "Goblin of fire"
    ]
}
My requirement is that at query time, when I retrieve the document, it should concatenate two fields into one and give output like:
{
    "title": "Harry potter",
    "url": "http://harrypotter.com",
    "series": [
        "sorcer's stone",
        "Goblin of fire"
    ],
    "title_url": "Harry potter,http://harrypotter.com"
}
I know how to do it at index time by using a URP (update request processor), but I'm not able to understand how to achieve this at query time. Could anyone please help me with this? Any sample code for reference would be a great help. Thanks for your time.
The concat function is available in Solr 7:
http://localhost:8983/solr/col/query?...&fl=title,url,concat(title,url)
If you are on an older Solr, how difficult would it be to do this on the client side?
To concatenate, you can use concat(field1, field2).
There are many other functions for manipulating data while retrieving; you can see them here.
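As a rough client-side sketch of the Solr 7 approach over HTTP (the collection name col and the field names come from the examples above; the title_url alias for the function output is an assumption):

import requests

# Ask Solr to evaluate concat(title,url) and return it as a pseudo-field.
params = {
    "q": "*:*",
    "fl": "title,url,title_url:concat(title,url)",
}
resp = requests.get("http://localhost:8983/solr/col/query", params=params)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc.get("title_url"))  # the concatenated title and url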

Correct way to update map data type with cqerl

I am having a hard time coming up with the syntax for updating a map using cqerl. I have tried the following so far, and it doesn't work:
statement = "UPDATE keyspace SET data[?] = :data_value WHERE scope = ?;",
values = [{data,"Key Value"},{data_value, "Data Value",{scope, "Scope Value"}]
What am I doing wrong here?
Also, setting the TTL does not work:
statement = "INSERT INTO data(scope)
VALUES(?) USING ttl ?",
values = [{scope, "Scope Value"},{[ttl], 3650}]
Anyone, any idea?
Please note that you are using single quotes around the values, which in Erlang syntax indicates atoms. Based on the cqerl documentation, it doesn't expect atoms there (see cqerl data types).
For example try:
statement = "INSERT INTO data(scope)
VALUES(?) USING ttl ?",
values = [{scope, "Scope Value"},{[ttl], 3650}]
Based on a reply from the contributor on GitHub, it takes an atom, so '[ttl]' is the right way:
https://github.com/matehat/cqerl/issues/122
For updating a map, the correct way is with atoms in the values part:
statement = "UPDATE keyspace SET data[?] = ? WHERE scope = ?;",
values = [{'key(data)',"Key Value"},{'value(data)', "Data Value",{scope, "Scope Value"}]
