How do I query a list of users? - typeorm

If I want to query a list of users, I want to pass the parameters in dynamically. For example, I might query by username alone, or by a combination of username and userType. I don't know how to write this with TypeORM.

I guess what you are looking for is find options. Link to official documentation: TypeORM - Find Options
repository.findOne(id?: string | number | Date | ObjectID, options?: FindOneOptions<Entity>): Promise<Entity | undefined>;
The findOne function takes two parameters. The first defines how the record should be looked up, by id or by a column value. The second lets you pass extra options, such as fetching any relations the entity has.
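For the dynamic case in the question, a minimal sketch, assuming a User entity with username and userType columns (the entity path, column names, and the findUsers helper are illustrative; getRepository is the pre-0.3 TypeORM API):

import { getRepository } from "typeorm";
import { User } from "./entity/User"; // assumed entity location

// Build the where clause from only the filters that were actually provided;
// find() then returns every user matching all conditions in it.
async function findUsers(username?: string, userType?: string): Promise<User[]> {
    const where: { username?: string; userType?: string } = {};
    if (username !== undefined) where.username = username;
    if (userType !== undefined) where.userType = userType;
    return getRepository(User).find({ where });
}

// Usage: findUsers("alice") queries by username alone;
// findUsers("alice", "admin") queries by username and userType combined.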

Related

How to search product data by id and name in one query

I have an index named product in Elasticsearch with fields id, name, etc.
I want to search products by id or name, but my id field is an integer and name is text. I tried the following, but I get an error when searching by name:
"type":"number_format_exception","reason":"For input string: \"test\""
def self.search_by_name_or_id(query)
  __elasticsearch__.search({
    query: {
      bool: {
        must: {
          multi_match: {
            query: query,
            fields: ["id", "name"]
          }
        }
      }
    }
  })
end
Problem
The exception is clear: you are searching for test, a string, against the integer id field, and Elasticsearch cannot convert test to an integer. Had you searched for 10 or 100 instead, Elasticsearch would have converted the term to an integer and not thrown the exception.
Solution
You are trying to mix two things here. I am not sure about your design, but if your id field holds pure numbers, i.e. integers, then it's not possible to achieve this in a single query the way you are doing it.
If you can convert your id field to a string, the multi_match query will work perfectly fine. Otherwise, you first need to check in your application whether the search term can be converted to a number: 10 or 100 would work, but test10 or test100 would not, and there is no point searching for such terms in the id field anyway, since it is defined as an integer in ES and documents containing them would have been rejected at indexing time. Based on that check, you can construct the ES query so that it includes or excludes the id field in the multi_match.
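For example, a sketch of that application-side check on the model from the question (the numeric-only regex is illustrative, and String#match? needs Ruby 2.4+):

def self.search_by_name_or_id(query)
  # Only include the integer id field when the term is purely numeric;
  # otherwise searching it would raise number_format_exception again.
  fields = query.to_s.match?(/\A\d+\z/) ? ["id", "name"] : ["name"]
  __elasticsearch__.search({
    query: {
      bool: {
        must: {
          multi_match: {
            query: query,
            fields: fields
          }
        }
      }
    }
  })
end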

How to exclude the time field from Sumo Logic results?

How do I exclude the Time (_messagetime) metadata field from my result set?
I've tried:
fields -_messagetime
But it gives me the error
Field _messagetime not found, please check the spelling and try again.
Using:
fields -time
does not remove the field either.
Currently I'm getting around this by using an aggregate (count) that has no effect on the data.
[EDIT]
Here's an example query:
Removing the Message (_raw) works. But removing the time (_messagetime) doesn't.
These results are used as email alerts, so removing the Time field from the Display isn't really an option.
The easiest way is to just turn off the field in the field browser window on the left-hand side of the results.
The other option is to aggregate and then remove the aggregate field - even if you just aggregate on _raw (which is the raw message):
_sourceCategory=blah
| count by _raw
| fields -_count
If you're still having trouble, can you share the rest of your query?
Edit based on your new query:
*
| parse "Description=\"*\"" as Description
| parse "Date=\"*\"" as Date
| count by Description, Date, Action
| fields -_count
The Time field is there as a result of the timeslice operation, as far as I'm aware. The following should do the trick:
| fields - _timeslice

How to get values from GORM in map format?

I was wondering if it is possible to retrieve a list of elements, for example:
User.findByCompany(company)
as a map, like this: [companyName: user, companyName: user2, ...]
I am currently doing this with each, but I don't know if there is something that can be done at the data-retrieval stage to avoid iterating over the same results twice.
You can use groupBy instead of each
User.findAll().groupBy{it.company}
It's a bit long-winded, but if you select individual fields into a map in an executeQuery you'll get a list of maps, e.g.
User.executeQuery("select new map(u.username as username, u.whatever as whatever) from User u where u.company = ?", ['aCompany'])
User.findByCompany(company)*.properties
This will perform a spread over the results and put each user's properties map into a list.
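If you need the exact [companyName: user] shape from the question, collectEntries can build it in one pass; a sketch, assuming company has a name property and that company names are unique (later entries would overwrite earlier ones):

// Build [companyName: user] directly from the result list.
def usersByCompany = User.list().collectEntries { user ->
    [(user.company.name): user]
}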

Searching for Parse objects where an object's array property contains a specified string

I have a PFObject subclass which stores an array of strings as one of its properties. I would like to query for all objects of this class where one or more of these strings start with a provided substring.
An example might help:
I have a Person class which stores a firstName and lastName. I would like to submit a PFQuery that searches for Person objects that match on name. Specifically, a person should be considered a match if any 'component' of either the first or last name starts with the provided search term.
For example, the name "Mary Beth Smith-Jones" should be considered a match for beth and bet, but not eth.
To assist with this, I have a beforeSave trigger for the Person class that breaks down the person's first and last names into separate components (and also lowercases them). This means that my "Mary Beth Smith-Jones" record looks like this:
firstName: "Mary Beth"
lastName: "Smith-Jones"
searchTerms: ["mary", "beth", "smith", "jones"]
The closest I can get is to use whereKey:equalTo:, which will actually return matches when run against an array:
let query = Person.query()
query?.whereKey("searchTerms", equalTo: "beth")
query?.findObjectsInBackgroundWithBlock({ (people, error) -> Void in
    // Mary Beth is returned successfully
})
However, this only matches on full string equality; query?.whereKey("searchTerms", equalTo: "bet") does not return the record in question.
I suppose I could explode the names and store all possible sequential components as search terms (b, e, t, h, be, et, th, bet, eth, beth, etc.), but that is far from scalable.
Any suggestions for pulling these records from Parse? I am open to changing my approach if necessary.
Have you tried whereKey:hasPrefix: for this? I am not sure if this can be used on array values.
https://parse.com/docs/ios/guide#queries-queries-on-string-values
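If it does apply per array element, the query from the question would become something like this; an untested sketch, since the docs only describe hasPrefix for string values:

let query = Person.query()
query?.whereKey("searchTerms", hasPrefix: "bet")
query?.findObjectsInBackgroundWithBlock({ (people, error) -> Void in
    // If prefix matching works against array elements, "Mary Beth
    // Smith-Jones" should be returned for "bet" as well as "beth".
})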

Comparing values in two columns of two different Splunk searches

I am new to Splunk and am facing an issue comparing values in two columns across two different queries.
Query 1
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" A_to="**" A_from="**" | transaction call_id keepevicted=true | search "xyz event:" | table _time, call_id, A_from, A_to | rename call_id as Call_id, A_from as From, A_to as To
Query 2
index="abc_ndx" source="*/ jkdhgsdjk.log" call_id="**" B_to="**" B_from="**" | transaction call_id keepevicted=true | search " xyz event:"| table _time, call_id, B_from, B_to | rename call_id as Call_id, B_from as From, B_to as To
These are my two different queries. I want to compare each value in the A_from column with each value in the B_from column, and if a value matches, display those matching values of A_from.
Is it possible?
I have run the two queries separately, exported the results of each to CSV, and used the VLOOKUP function. But the problem is that exports are limited to a maximum of 10,000 rows, so I miss a lot of data, as my search returns more than 10,000 records.
Any help?
I haven't got any data to test this on at the moment; however, the following should point you in the right direction.
Once you have the table for the first query sorted out, you should pipe the search string into an appendcols command containing your second search string. This command allows you to run a subsearch and "import" its columns into your base search.
Once you have the two columns in the same table, you can use the eval command to create a new field that compares the two values and assigns a value as you desire.
Hope this helps.
http://docs.splunk.com/Documentation/Splunk/5.0.2/SearchReference/Appendcols
http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Eval
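Putting that together with the two queries from the question, a sketch might look like this (untested; note that appendcols aligns rows by position, so it assumes both searches return rows in the same order):

index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" A_to="**" A_from="**"
| transaction call_id keepevicted=true
| search "xyz event:"
| table _time, call_id, A_from, A_to
| appendcols [ search index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" B_to="**" B_from="**"
    | transaction call_id keepevicted=true
    | search "xyz event:"
    | table B_from, B_to ]
| eval matched_from=if(A_from == B_from, A_from, "no_match")
| table _time, call_id, matched_from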
I'm not sure why there is a need to keep this as two separate queries. Everything is coming from the same sourcetype, and is using almost identical data. So I would do something like the following:
index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**" (A_to="**" A_from="**") OR (B_to="**" B_from="**")
| transaction call_id keepevicted=true
| search "xyz event:"
| eval to=if(A_from == B_from, A_from, "no_match")
| table _time, call_id, to
This grabs all events from your specified index and source that have a call_id and either A_to and A_from or B_to and B_from. It then builds transactions from all of that and lets you filter on "xyz event:" (whatever that is).
It then creates a new field called 'to', which shows A_from when A_from == B_from and "no_match" otherwise (a placeholder, since you didn't specify what should be done when they don't match).
There is also a way to potentially tackle this without using transactions, although without more detail on the underlying data I can't say for sure. The basic idea is that if you have a common field (call_id in this case), you can just use stats to collect the values associated with that field instead of an expensive transaction command.
For example:
index="abc_ndx" index="abc_ndx" source="*/jkdhgsdjk.log" call_id="**"
| stats last(_time) as earliest_time first(A_to) as A_to first(A_from) as A_from first(B_to) as B_to first(B_from) as B_from by call_id
Using first() or last() doesn't actually matter if there is only one value per call_id (you could even use min(), max(), or avg() and get the same result). Perhaps this will help you get to the output you need more easily.
