I have a field in the database which contains strings that look like 58XBF2022L1001390. I need to be able to query records that match the last letter (in this case 'L') and that match or resemble the last four digits.
The regular expression I've been using to find records that match the structure is \d{2}[A-Z]{3}\d{4}[A-Z]\d{7}. So far I've tried using a scope to refine the results, but I'm not getting any results. Here's my scope:
def self.filter_by_shortcode(input)
  q = input
  starting = q.slice!(0)
  ending = q
  where("field ~* ?", "\d{2}[A-Z]{3}\d{4}/[#{starting}]/\d{3}[#{ending}]\g")
end
Here are some more example strings and the substrings we would be searching by. Not every string stored in this database field matches this format, so we would need to first match the string using the regex provided, then search by substring.
36GOD8837G6154231
G4231
13WLF8997V2119371
V9371
78FCY5027V4561374
V1374
06RNW7194P2075353
P5353
57RQN0368Y9090704
Y0704
edit: added some more examples as well as substrings that we would need to search by.
I do not know Rails, but the SQL for what you want is relatively simple. Since your string is a fixed format, once that format is validated, simple concatenation of substrings gives your desired result.
with base(target, goal) as
( values ('36GOD8837G6154231', 'G4231')
, ('13WLF8997V2119371', 'V9371')
, ('78FCY5027V4561374', 'V1374')
, ('06RNW7194P2075353', 'P5353')
, ('57RQN0368Y9090704', 'Y0704')
)
select substr(target,10,1) || substr(target,14,4) target, goal
from base
where target ~ '^\d{2}[A-Z]{3}\d{4}[A-Z]\d{7}$';
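To wire this back into Rails: a minimal sketch of the same idea as a scope, assuming the column is literally named field (as in the question) and that input is always one uppercase letter followed by four digits, e.g. 'G4231':
def self.filter_by_shortcode(input)
  letter = input[0]      # 'G'
  digits = input[1..-1]  # '4231'
  # Validate the overall format, then pin the letter at position 10
  # and the four digits at the end of the string.
  where("field ~ ?", "^\\d{2}[A-Z]{3}\\d{4}#{letter}\\d{3}#{digits}$")
end
Unlike the scope in the question, this builds one complete pattern (no stray / delimiters or \g inside the Postgres regex) and avoids slice!, which mutates the caller's string. User-supplied input should still be validated before being interpolated into the pattern.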
I'm trying to concatenate the digits from a string that starts with 'CityName' into a separate string. I have the concatenation part; my issue is being able to access the matches from the regex.
I have a regex in Rails that looks like /CityName\s*(\d+)/i. I'm super new to regex and it's hard for me to wrap my head around the docs, but I'm assuming this regex will find any digits after 'CityName', case-insensitively, and the match is then interpolated if it matches an attribute on my model.
regex = /CityName\s*(\d+)/i
if line_1 =~ regex
  "C#{$1}"
  ...
end
But further along in the execution it's slowing down, because I have to iterate over a lot of records. I have a query in psql that will do the calculations I need; however, I'm having a hard time porting this regex logic over to it. My attempts so far look like:
CASE
when addr.line_1 ~* 'CityName\s*(\d+)' then 'C' || regex_matches('CityName\s*(\d+)')[0]
...
I'm having a hard time finding a solution to grab the first occurrence of the regex match. Thanks for any tips :D
EDIT: I am trying to grab the digits after 'CityName' from a string if that string contains 'CityName'.
Ultimately I need assistance with the regex and how to concatenate the digits with 'C'.
Your question is a bit unclear. Are you trying to add the digits to your selection or to filter records based on them?
If you just want to select them:
Address.select(%q{(regexp_matches(addr.line_1, 'CityName\s*(\d+)'))[1] as digits})
.map(&:digits)
If you want to filter based on them:
Address.where(%q{addr.line_1 ~ 'CityName\s*(\d+)'})
.map(&:line_1)
Also a few notes:
Selecting digits case-insensitively does not really make sense; digits do not have case.
PostgreSQL arrays start from 1 instead of 0.
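Putting those notes together, a sketch of how the filter and the 'C' concatenation could be combined in one query; it assumes an Address model whose table has a line_1 column, and c_digits is just an illustrative alias:
# The 'i' flag makes the pattern match case-insensitive, and the [1]
# index picks the capture group, since Postgres arrays are 1-based.
Address
  .where(%q{line_1 ~* 'CityName\s*\d+'})
  .select(%q{id, 'C' || (regexp_matches(line_1, 'CityName\s*(\d+)', 'i'))[1] AS c_digits})
  .map(&:c_digits)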
It seems you need a subquery or a WITH query:
SELECT tbl1.col1, sum(...), min(...)
FROM (SELECT ..., CASE ...your regex stuff... END col1 FROM ...) tbl1
GROUP BY 1;

WITH tbl1 AS (SELECT ..., CASE ...your regex stuff... END col1 FROM ...)
SELECT t.col1, sum(...) FROM tbl1 t GROUP BY 1;
If you need them regularly, you can also create a view from the query or create a temp table, which you can then use in later queries.
Got it! I was finally able to figure out the regex.
WHEN addr.line_1 ~* '(?i)CityName\s*(\d+)' THEN 'C' || (SELECT (regexp_matches(addr.line_1, '(?i)CityName\s*(\d+)'))[1])
The (?i) allowed case-insensitive matching for CityName, and then the concatenation worked. Thank you #ti6on for pointing out the index difference with Postgres :D
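For completeness, a sketch of how that CASE expression might slot into the WITH-query shape suggested above; apart from addr.line_1 and the regex, the table name, alias, and aggregate are illustrative only:
# Group rows by the extracted 'C' + digits value.
Address.find_by_sql(<<~'SQL')
  WITH coded AS (
    SELECT CASE
             WHEN addr.line_1 ~* '(?i)CityName\s*(\d+)'
               THEN 'C' || (regexp_matches(addr.line_1, '(?i)CityName\s*(\d+)'))[1]
           END AS city_code
    FROM addresses addr
  )
  SELECT city_code, count(*) AS address_count
  FROM coded
  GROUP BY 1
SQL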
I am having trouble with a DB query in a Rails app. I want to store various search terms (say 100 of them) and then evaluate them against a value dynamically. All the examples of SIMILAR TO or ~ (regex) in Postgres I can find use a fixed pattern within the query, whereas I want to look the pattern up from a row.
Example:
Table: Post
column term varchar(256)
(plus regular id, Rails stuff etc)
input = "Foo bar"
Post.where("term ~* ?", input)
So term is a VARCHAR column containing, in at least one row, the value:
^foo*$
Unless I put an exact match (e.g. "Foo bar" in term) this never returns a result.
I would also like to ideally use expressions like
(^foo.*$|^second.*$)
i.e. multiple search terms as well, so it would match with 'Foo Bar' or 'Search Example'.
I think this has to do with Ruby or ActiveRecord stripping something out? Or am I on the wrong track, and regex or SIMILAR TO can't be used with row values like this?
Alternative suggestions on how to do this also appreciated.
The Postgres regular expression match operators have the regex on the right and the string on the left. See the examples: https://www.postgresql.org/docs/9.3/static/functions-matching.html#FUNCTIONS-POSIX-TABLE
But in your query you're treating term as the string and 'Foo bar' as the regex (you've swapped them). That's why the only term that matches is the exact match. Try:
Post.where("? ~* term", input)
Inside my model's searchable block I index the time field added_at.
In the search block I added with(:added_at, nil), reindexed, and now inside the search object I have:
<Sunspot::Search:{:fq=>["-added_at_d:[* TO *]"]...}>
What is the meaning of this [* TO *]? Did something go wrong?
By adding with(:added_at, nil) you narrow the search results down to documents having no value in the field added_at, so we might expect the corresponding query filter to be defined as:
fq=>["added_at_d:null"] # not valid
The problem is that the Solr Standard Query Parser does not support searching a field for an empty/null value. In this situation the filter needs to be negated (excluding documents having any value in the field) so that the query remains valid.
The operator - can be used to exclude the field, and the wildcard character * can be used to match any value, so now we might expect the query filter to look like:
fq=>["-added_at_d:*"]
However, although the above is valid for the query parser, a range query should be preferred to prevent inconsistent behavior when using wildcards within negative subqueries.
Range Queries allow one to match documents whose field(s) values are
between the lower and upper bound specified by the Range Query. Range
Queries can be inclusive or exclusive of the upper and lower bounds.
A * may be used for either or both endpoints to specify an open-ended range query.
In the end there is nothing wrong with this filter, which ends up looking like:
fq=>["-added_at_d:[* TO *]"]
cf. Lucene Range Queries, Solr Standard Query Parser
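For reference, a minimal Sunspot sketch of the setup being discussed (the Post model name is illustrative):
class Post < ActiveRecord::Base
  searchable do
    time :added_at   # the indexed time field from the question
  end
end

# Restrict results to documents with no value in added_at; Sunspot
# translates this into the negated range filter shown above.
search = Post.search do
  with(:added_at, nil)
end
search.results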
I don't know the name for this kind of search, but I see that it's getting pretty common.
Let's say I have records with the following file names:
'order_spec.rb', 'order.sass', 'orders_controller_spec.rb'
If I search with the string 'oc' I would like the result to return 'orders_controller_spec.rb', due to matching the o in orders and the c in controller.
If the string is 'os' then I'd like all three to match: 'order_spec.rb', 'order.sass', 'orders_controller_spec.rb'.
If the string is 'oco' then I'd like 'orders_controller_spec.rb'.
What is the name for this kind of search and how would I go about getting this done in Postgresql?
This is called a subsequence search. One simple way to do it in Postgres is to use the LIKE operator (or several of the other options in those docs) and fill the spaces between your letters with a wildcard, which for LIKE is %. To match anything with an o followed by an s in the words column, that would look like this:
SELECT * FROM table WHERE words LIKE '%o%s%';
This is a relatively expensive search, but you can improve performance with a varchar_pattern_ops or text_pattern_ops index to support faster pattern matching.
CREATE INDEX pattern_index ON table (words varchar_pattern_ops);
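In a Rails app the pattern could be built from the user's input along these lines; the FileRecord model name is hypothetical, while the words column comes from the example above:
class FileRecord < ActiveRecord::Base
  # Subsequence search: "oc" becomes "%o%c%", i.e. a wildcard before,
  # between, and after each character of the query.
  def self.subsequence_search(query)
    pattern = "%#{query.chars.map { |c| sanitize_sql_like(c) }.join('%')}%"
    where("words LIKE ?", pattern)
  end
end

FileRecord.subsequence_search("oc")   # 'orders_controller_spec.rb'
FileRecord.subsequence_search("os")   # all three example file names
Using ILIKE instead of LIKE would make the match case-insensitive in Postgres.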
Is there an option in Neo4j to write a query with a WHERE clause that ignores non-Latin characters?
MATCH (places:Place)
WHERE (places.name =~ '.*(?ui)Fabergé.*')
RETURN places
I have a place named Fabergé in the graph, and I want to find it whether the user types Fabergé or Faberge, without the special character.
I'm not aware of an easy way to do this directly with a regex match in Cypher.
One possible workaround is to store the string in question in a normalized form in a second property, e.g. place.name_normalized, and then compare it with the normalized search string. Of course, normalization needs to be done on the client side; see another SO question on how to achieve this: Remove diacritical marks (ń ǹ ň ñ ṅ ņ ṇ ṋ ṉ ̈ ɲ ƞ ᶇ ɳ ȵ) from Unicode chars
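As a sketch of that client-side normalization in plain Ruby (the normalize_name helper is illustrative): decompose to NFD, strip the combining marks, and downcase, both when storing the extra property and when preparing the search string:
# "Fabergé" and "Faberge" normalize to the same value.
def normalize_name(name)
  decomposed = name.unicode_normalize(:nfd)   # é -> e + combining acute accent
  decomposed.gsub(/\p{Mn}/, '').downcase      # drop the combining marks, downcase
end

normalize_name('Fabergé')  # => "faberge"
normalize_name('Faberge')  # => "faberge"
The Cypher WHERE clause would then compare place.name_normalized against the normalized user input instead of using the regex.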