Spring Data Elasticsearch: Save document where some fields are dynamic

I have a document I need to save in Elasticsearch where some fields are dynamic (held in a HashMap) and some fields I know in advance and can annotate with @Field and so on.
So something like this:
public class MyClass {
    private String fieldA;
    private String fieldB;
    private Map<String, Object> dynamicFields;
}
How would I do this? I don't know in advance which keys will be contained in dynamicFields.
I would also like to define the mapping for the values in dynamicFields, so for instance save dates as dates and not as timestamps.
Is dynamic field mapping the way to go? If so, where can I find examples for that?

Related

Entity Framework: best way to call another API

I'm building an API around a database, and as part of this we'll be calling another API and joining data from that API to the database for viewing, but not storing the data in the current database. My question is how best to model this.
In my data model I have
public class Server
{
    public Int32 ServerID { get; set; }
    public string ServerName { get; set; }
    ...
}
If I add API columns to the Server object, such as
public string ServerMemory
that of course returns an "Invalid column name" error, because this is data from the API that is not in the database table.
I see a few options
Create a view within the database structure which has some blank columns, and reference that within my data model.
Create another object within my data model and reference it as a virtual property, using something like the method mentioned here: https://jeremiahflaga.github.io/2020/02/16/entity-framework-6-error-invalid-column-name-_id/
Create another object and cast the Server object to it.
Is there another, simpler way to reference a foreign field within an object in a data model?
Thank you.
I found this, so for anyone else who needs to add a column to a data model that does not exist in the database: first add
using System.ComponentModel.DataAnnotations.Schema;
then you can use this:
[NotMapped]
public virtual string RoleDesc { get; set; }
Still curious if this is the best way. I guess that really revolves around your goals.
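Put together, a minimal sketch of how that might look on the Server model from this question (the external API client in the comment is made up purely for illustration):
using System.ComponentModel.DataAnnotations.Schema;

public class Server
{
    public Int32 ServerID { get; set; }
    public string ServerName { get; set; }

    // Comes from the other API, not from a database column, so EF ignores it.
    [NotMapped]
    public string ServerMemory { get; set; }
}

// Hypothetical usage after loading the entity:
// var server = db.Servers.Find(id);
// server.ServerMemory = externalApiClient.GetMemory(server.ServerName);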

Dapper.NET mapping with Data Annotations

So I have a class with a property like this:
public class Foo
{
    [Column("GBBRSH")]
    public static string Gibberish { get; set; }
    ...
}
For saving data, I have it configured so that the update/insert statements use a custom function:
public static string GetTableColumnName(PropertyInfo property)
{
    var type = typeof(ColumnAttribute);
    var prop = property.GetCustomAttributes(type, false);
    if (prop.Count() > 0)
        return ((ColumnAttribute)prop.First()).Name;
    return property.Name;
}
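For illustration only, here is one way a function like that might be plugged into building the save SQL; the table name, Id column and Dapper Execute call below are assumptions, not the poster's actual code:
// Build "GBBRSH = @Gibberish, ..." pairs for an UPDATE using the
// attribute-aware column lookup above (requires System.Linq and Dapper).
var setClauses = typeof(Foo).GetProperties()
    .Select(p => GetTableColumnName(p) + " = @" + p.Name);
string sql = "UPDATE FooTable SET " + string.Join(", ", setClauses) + " WHERE Id = @Id";
connection.Execute(sql, foo);   // Dapper binds the @-parameters from foo's properties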
This handles fine for saving, but I noticed that when I go to retrieve the data, it isn't actually pulling the data back for this particular column. The other data was retrieved, but the column in question was the only field whose data didn't come back.
1) Is there a way to perhaps use the GetTableColumnName function for the retrieval part of Dapper?
2) Is there a way to force Dapper.NET to throw an exception if a scenario like this happens? I really don't want to have a false sense of security that everything is working as expected when it actually isn't (I get that I'm using mapping that Dapper.NET doesn't use by default, but I do want to set it up in that manner).
edit:
I'm looking in the SqlMapper source of Dapper and found:
private static IEnumerable<T> QueryInternal<T>(params) // my knowledge of generics is limited, but how does this work without a where T : object?
{
...
while (reader.Read())
{
yield return (T)func(reader);
}
...
}
So I learned two things after finding this: read up on Func and read up on yield (I'd never used either before). My guess is that I need to pass reader.Read() to another function (one that checks against the column headers and populates the objects appropriately) and yield return that?
You could change your select statement to alias the column named in the Column attribute to the property name, e.g. "SELECT GBBRSH AS Gibberish", which provides the mapping between the attribute name and the POCO property name.
That way, Dapper will fill the matching properties, since it only requires your POCO's property names to match the exact names of the returned columns.
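A minimal sketch of that alias approach; the connection string, table and extra column names are made up for illustration (requires "using Dapper;" and "using System.Data.SqlClient;"):
using (var connection = new SqlConnection(connectionString))
{
    // Alias the physical column to the property name so Dapper's default
    // name-based mapping fills Gibberish without any custom lookup.
    var foos = connection.Query<Foo>(
        "SELECT GBBRSH AS Gibberish, OtherColumn FROM FooTable");
}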

How to print a list of objects in a Velocity template?

This is a pretty basic problem and I'm pretty sure I'm doing something wrong or making some assumption. Here goes.
I'm writing a Jira plugin, which uses the Velocity template system. I have a list of ResultRow objects where ResultRow is a class with a single member variable: String key:
public class ResultRow {
    public String key;
}
I have a list of these ResultRows:
List<ResultRow> rows = new ArrayList<ResultRow>();
ResultRow row = new ResultRow();
row.key = "foo";
rows.add(row);
Map<String, Object> velocityParams = new HashMap<String, Object>();
velocityParams.put("rows", rows);
return descriptor.getHtml("view", velocityParams);
and I am trying to list these rows in a template with the following:
#foreach ($row in $rows)
<tr><td>$row.key</td></tr>
#end
I want the output to be: foo. Maddeningly, the template system simply prints the literal string "$row.key" instead of the contents of key. To verify that "$row" is indeed an object, I used the template:
#foreach ($row in $rows)
<tr><td>$row</td></tr>
#end
and the result was as expected: com.domain.jira.ResultRow@7933f2c6.
I think maybe I'm missing some requirement for the class. Does it need to be defined in some special way to suggest to Velocity that certain members are usable in templates? Does Jira use some special funky version of Velocity that only works with certain objects?
I guess the answer is you cannot do what I was trying to do. You can call member methods but you can't access member variables, which means you'll need to add getters to your class. (Could've sworn I tried that. Ah well.)
Velocity does not expose fields, only methods. There are ways to change that:
You can create your own Uberspect class that allows access to public fields.
You can wrap the instance with a modified version of Velocity's FieldMethodizer that gives access to non-static fields.
You can add and use an instance of a "tool" class to your context, such as a subclass of VelocityTool's ClassTool.

Create flexible property names in the searchable block of Sunspot for Solr

In my searchable block, I have many values like this:
string :id
How can I programmatically create indexed properties (here, id) on the fly? Let's say I want to create a property for the current year, such as this:
string :<year>
Is this possible in Sunspot?
You can pass blocks instead of values, such as:
string { (self.year + 2).to_s}

What is the best way to store a user's filtered query params in a database table?

I have an ASP.NET MVC website. In my backend I have a table called People with the following columns:
ID
Name
Age
Location
... (a number of other cols)
I have a generic web page that uses model binding to query this data. Here is my controller action:
public ActionResult GetData(FilterParams filterParams)
{
    return View(_dataAccess.Retrieve(filterParams.Name, filterParams.Age, filterParams.location, ...));
}
which maps onto something like this:
http://www.mysite.com/MyController/GetData?Name=Bill ...
The dataAccess layer simply checks each parameter to see if it's populated and, if so, adds it to the WHERE clause of the database query. This works great.
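For context, a rough sketch of what that conditional filtering might look like; the Person type, the ExecuteQuery helper and the exact SQL building are made-up illustrations, not the actual data access code:
public IEnumerable<Person> Retrieve(string name, int? age, string location /* , ... */)
{
    var sql = "SELECT * FROM People WHERE 1 = 1";
    var parameters = new Dictionary<string, object>();

    if (!string.IsNullOrEmpty(name))
    {
        sql += " AND Name = @Name";
        parameters["Name"] = name;
    }
    if (age.HasValue)
    {
        sql += " AND Age = @Age";
        parameters["Age"] = age.Value;
    }
    // ...same pattern for Location and the other columns...

    return ExecuteQuery(sql, parameters);   // hypothetical helper that runs the parameterised query
}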
I now want to be able to store a user's filtered queries, and I am trying to figure out the best way to store a specific filter. Since some of the filters have only one param in the query string while others have 10+ fields, I can't figure out the most elegant way to store this query "filter info" in my database.
Options I can think of are:
Have a complete replica of the table (with some extra columns), call it PeopleFilterQueries, and populate each record with a FilterName and the value of each filter field (Name, etc.)
Store a table with just FilterName and a string where I store the actual querystring Name=Bill&Location=NewYork. This way I won't have to keep adding new columns if the filters change or grow.
What is the best practice for this situation?
If the purpose is to save a list of recently used filters, I would serialise the complete FilterParams object into an XML field/column after the model binding has occurred. By saving it into an XML field you're also giving yourself the flexibility to use XQuery and DML should the need arise at a later date for more performance-focused querying of the information.
public ActionResult GetData(FilterParams filterParams)
{
    // Perform action to get the information from your data access layer here
    var someData = _dataAccess.Retrieve(filterParams.Name, filterParams.Age, filterParams.location, ...);

    // Save the search that was used to retrieve later here
    _dataAccess.SaveFilter(filterParams);

    return View(someData);
}
And then in your DataAccess Class you'll want to have two Methods, one for saving and one for retrieving the filters:
public void SaveFilter(FilterParams filterParams){
var ser = new System.Xml.Serialization.XmlSerializer(typeof(FilterParams));
using (var stream = new StringWriter())
{
// serialise to the stream
ser.Serialize(stream, filterParams);
}
//Add new database entry here, with a serialised string created from the FilterParams obj
someDBClass.SaveFilterToDB(stream.ToString());
}
Then when you want to retrieve a saved filter, perhaps by Id:
public FilterParams GetFilter(int filterId)
{
    // Get the XML blob from your database as a string
    string filter = someDBClass.GetFilterAsString(filterId);
    var ser = new System.Xml.Serialization.XmlSerializer(typeof(FilterParams));
    using (var sr = new StringReader(filter))
    {
        return (FilterParams)ser.Deserialize(sr);
    }
}
Remember that your FilterParams class must have a default (i.e. parameterless) constructor, and you can use the [XmlIgnore] attribute to prevent properties from being serialised into the database should you wish.
public class FilterParams
{
    public string Name { get; set; }
    public string Age { get; set; }

    [XmlIgnore]
    public string PropertyYouDontWantToSerialise { get; set; }
}
Note: SaveFilter returns void and there is no error handling, for brevity.
Rather than storing the querystring, I would serialize the FilterParams object as JSON/XML and store the result in your database.
Here's a JSON Serializer I regularly use:
using System.IO;
using System.Runtime.Serialization.Json;
using System.Text;

namespace Fabrik.Abstractions.Serialization
{
    public class JsonSerializer : ISerializer<string>
    {
        public string Serialize<TObject>(TObject @object)
        {
            var dc = new DataContractJsonSerializer(typeof(TObject));
            using (var ms = new MemoryStream())
            {
                dc.WriteObject(ms, @object);
                return Encoding.UTF8.GetString(ms.ToArray());
            }
        }

        public TObject Deserialize<TObject>(string serialized)
        {
            var dc = new DataContractJsonSerializer(typeof(TObject));
            using (var ms = new MemoryStream(Encoding.UTF8.GetBytes(serialized)))
            {
                return (TObject)dc.ReadObject(ms);
            }
        }
    }
}
You can then deserialize the object and pass it to your data access code as per your example above.
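For example, a usage sketch along those lines; the SaveFilterToDB and GetFilterAsString calls simply mirror the placeholder names from the XML answer above and are not an existing API:
var serializer = new JsonSerializer();

// Save: serialise the bound FilterParams and store the resulting string.
string json = serializer.Serialize(filterParams);
someDBClass.SaveFilterToDB(json);

// Load: read the stored string back and rebuild the object.
FilterParams restored = serializer.Deserialize<FilterParams>(someDBClass.GetFilterAsString(filterId));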
You didn't mention the exact purpose of storing the filter.
If you insist on saving the filter into a database table, I would use the following table structure:
FilterId
Field
FieldValue
An example table might be
FilterId  Field     FieldValue
1         Name      Tom
1         Age       24
1         Location  IL
3         Name      Mike
...
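A sketch of how the bound FilterParams could be flattened into those rows; the SaveFilterField method is a hypothetical data-access call, not an existing API:
// Turn each populated FilterParams property into one (FilterId, Field, FieldValue) row.
foreach (var prop in typeof(FilterParams).GetProperties())
{
    var value = prop.GetValue(filterParams, null);
    if (value == null) continue;

    someDBClass.SaveFilterField(filterId, prop.Name, value.ToString());
}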
The answer is much simpler than you are making it:
Essentially you should store the raw query in its own table and relate it to your People table. Don't bother storing individual filter options.
Decide on a value to store (2 options)
Store the URL Query String
This would be beneficial if you like open API-style apps and want something you can pass nicely back and forth between the client and the server and re-use without transformation.
Serialize the Filter object as a string
This is a really nice approach if your purpose for storing these filters remains entirely server side, and you would like to keep the data closer to a class object.
Relate your People table to your Query Filters Table:
The best strategy here depends on what your intention and performance needs are. Some suggestions below:
Simple filtering (ex. 2-3 filters, 3-4 options each)
Use Many-To-Many because the number of combinations suggests that the same filter combos will be used lots of times by lots of people.
Complex filtering
Use One-To-Many, as there are so many possible individual queries that it is less likely they will be reused often enough to make the extra normalization and performance hit worth your while.
There are certainly other options but they would depend on more detailed nuances of your application. The suggestions above would work nicely if you are say, trying to keep track of "recent queries" for a user, or "user favorite" filtering options...
Personal opinion
Without knowing much more about your app, I would say (1) store the query string, and (2) use one-to-many related tables... if and when your app shows a need for further performance profiling or issues with refactoring filter params, then come back... but chances are, it won't.
GL.
In my opinion the best way to save the "filter" is to have some kind of JSON text string containing each of the "column names".
So you will have something in the db like
Table Filters
FilterId = 5 ; FilterParams = {'age' : '>18' , ...
JSON gives you a lot of capabilities, like using age as an array to apply more than one filter to the same "column", etc.
Also, JSON is something of a standard, so you can use these "filters" with another database some day, or simply display the filter or edit it in a web form. If you save the raw query string you will be tied to it.
Well, hope it helps!
Assuming that a nosql/object database such as Berkeley DB is out of the question, I would definitely go with option 1. Sooner or later you'll find the following requirements or others coming up:
Allow people to save their filters, label, tag, search and share them via bookmarks, tweets or whatever.
Change what a parameter means or what it does, which will require you to version your filters for backward compatibility.
Provide auto-complete functions over filters, possibly using a user's filter history to inform the auto-complete.
The above will be somewhat harder to satisfy if you do any kind of binary/string serialization, where you'll need to parse the results and then process them.
If you can use a NoSql DB, then you'll get all the benefits of a sql store plus be able to model the 'arbitrary number of key/value pairs' very well.
Have you thought about using Profiles? This is a built-in mechanism for storing user-specific info. From your description of the problem, it seems a fit.
Profiles In ASP.NET 2.0
I have to admit that the M$ implementation is a bit dated, but there is essentially nothing wrong with the approach. If you wanted to roll your own, there's quite a bit of good thinking in their API.
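For instance, assuming a string profile property named SavedFilter has been declared in web.config under the profile section, saving and reloading a serialised filter could look roughly like this (sketch only):
// Write the serialised filter (e.g. the JSON/XML string from above) to the current user's profile.
HttpContext.Profile["SavedFilter"] = serializedFilter;
HttpContext.Profile.Save();

// Later, read it back for the current user:
var savedFilter = (string)HttpContext.Profile["SavedFilter"];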
