I've created tests and now I'm struggling with the implementation using Stream. Please advise how the streams in the methods getOrdersCount() and getOrdersTotalValue() should look. Right now .collect is marked in red in both methods with the error "cannot resolve method collect in optional double". Why, and how do I fix this?
class ShopTestSuite {

    Shop shop = new Shop();
    Order order1 = new Order(12.30, LocalDate.of(2020, 12, 12), "marta123");
    Order order2 = new Order(67.89, LocalDate.of(2019, 1, 12), "Tomek_K");
    Order order3 = new Order(123.90, LocalDate.of(2020, 2, 2), "Sylwia");
    Order order4 = new Order(22.90, LocalDate.of(2020, 6, 20), "Sylwia");

    @Test
    public void shouldReturnOrdersCount() {
        // When
        Integer result = shop.getOrdersCount();
        // Then
        assertEquals(4, result);
    }

    @Test
    public void shouldOrdersTotalValue() {
        // When
        Double result = shop.getOrdersTotalValue();
        // Then
        assertEquals(146.8, result);
    }

    @BeforeEach
    public void initializeShop() {
        shop.addOrder(order1);
        shop.addOrder(order2);
        shop.addOrder(order3);
        shop.addOrder(order4);
    }
}
Shop class:
public class Shop {

    private Set<Order> orders = new HashSet<>();

    public double getOrdersTotalValue() {
        return this.orders
                .stream()
                .mapToDouble(Order::getValue).sum()
                .collect(Collectors.toSet()); // "cannot resolve method collect in optional double"
    }

    public int getOrdersCount() {
        return this.orders
                .stream()
                .count()
                .collect(Collectors.toSet()); // "cannot resolve method collect in optional double"
    }
}
I suppose you don't need collect(Collectors.toSet()) there: sum() and count() are terminal operations that already return a primitive double and long, so there is no stream left to collect:
public double getOrdersTotalValue() {
    return this.orders
            .stream()
            .mapToDouble(Order::getValue)
            .sum();
}

public int getOrdersCount() {
    return (int) this.orders
            .stream()
            .count();
}
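For completeness, here is a minimal compilable sketch of the class (field and method names follow the question; the addOrder method and imports are assumptions):

import java.util.HashSet;
import java.util.Set;

public class Shop {

    private final Set<Order> orders = new HashSet<>();

    public void addOrder(Order order) {
        orders.add(order);
    }

    public double getOrdersTotalValue() {
        // mapToDouble(...) yields a DoubleStream; sum() is a terminal operation returning a primitive double
        return orders.stream()
                .mapToDouble(Order::getValue)
                .sum();
    }

    public int getOrdersCount() {
        // count() returns a long; for a plain collection, size() is even simpler
        return orders.size();
    }
}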
In the following example one class property is of type GStrv.
With ObjectClass.list_properties() one can query the ParamSpec of every property, and with get_property() each property can be retrieved as a GLib.Value. How would I access a Value of type GStrv and convert it to a GLib.Variant?
My GLib version is slightly outdated, so I do not have the GLib.Value.to_variant() function available yet :( .
public class Foo: GLib.Object {
public GLib.HashTable<string, int32> bar;
public Foo() {
bar = new GLib.HashTable<string, int32>(str_hash, str_equal);
}
public string[] bar_keys { owned get { return bar.get_keys_as_array(); } }
}
int main() {
var foo = new Foo();
Type type = foo.get_type();
ObjectClass ocl = (ObjectClass) type.class_ref ();
foreach (ParamSpec spec in ocl.list_properties ()) {
print ("%s\n", spec.get_name ());
Value property_value = Value(spec.value_type);
print ("%s\n", property_value.type_name ());
foo.get_property(spec.name, ref property_value);
// next: convert GLib.Value -> GLib.Variant :(
}
foo.bar.set("baz", 42);
return 0;
}
Output:
bar-keys
GStrv
Using GLib.Value.get_boxed() seems to be working.
Example:
// compile simply with: valac valacode.vala
public class Foo: GLib.Object {
public GLib.HashTable<string, int32> bar;
public Foo() {
bar = new GLib.HashTable<string, int32>(str_hash, str_equal);
}
public string[] bar_keys { owned get { return bar.get_keys_as_array(); } }
}
public Variant first_gstrv_property_as_variant(Object obj)
{
Type class_type = obj.get_type();
ObjectClass ocl = (ObjectClass) class_type.class_ref ();
foreach (ParamSpec spec in ocl.list_properties ()) {
print ("%s\n", spec.get_name ());
Value property_value = Value(spec.value_type);
print ("%s\n", property_value.type_name ());
obj.get_property(spec.name, ref property_value);
// next: convert GLib.Value -> GLib.Variant
if(property_value.type_name () == "GStrv") {
return new GLib.Variant.strv((string[])property_value.get_boxed());
}
}
return new GLib.Variant("s", "No property of type GStrv found");
}
int main() {
var foo = new Foo();
print("%s\n", first_gstrv_property_as_variant(foo).print(true));
foo.bar.set("baz", 42);
print("%s\n", first_gstrv_property_as_variant(foo).print(true));
foo.bar.set("zot", 3);
print("%s\n", first_gstrv_property_as_variant(foo).print(true));
return 0;
}
Output:
bar-keys
GStrv
@as []
bar-keys
GStrv
['baz']
bar-keys
GStrv
['baz', 'zot']
In the generated C code this looks as follows:
_tmp18_ = g_value_get_boxed (&property_value);
_tmp19_ = g_variant_new_strv ((gchar**) _tmp18_, -1);
Passing -1 as the length to g_variant_new_strv() means the string array is treated as null-terminated. Inside g_variant_new_strv(), the g_strv_length() function is used to determine the length.
Hopefully it will be useful to someone else someday. :-)
Say my users subscribe to a plan. Is it possible, using Spring Cloud Gateway, to rate limit user requests based on the subscription plan? Given there are Silver and Gold plans, would it let Silver subscriptions have a replenishRate/burstCapacity of 5/10 and Gold ones 50/100?
I naively thought of passing a new instance of RedisRateLimiter (see below, where I construct one with the 5/10 settings) to the filter, but I would need to get information about the user from the request somehow in order to find out whether it is a Silver or a Gold plan.
@Bean
public RouteLocator myRoutes(RouteLocatorBuilder builder) {
    return builder.routes()
        .route(p -> p
            .path("/get")
            .filters(f ->
                f.requestRateLimiter(r -> {
                    r.setRateLimiter(new RedisRateLimiter(5, 10));
                }))
            .uri("http://httpbin.org:80"))
        .build();
}
Am I trying to achieve something that is even possible with Spring Cloud Gateway? What other products would you recommend checking out for this purpose, if any?
Thanks!
Okay, it is possible by creating a custom rate limiter on top of the RedisRateLimiter class. Unfortunately the class has not been designed for extensibility, so the solution is somewhat "hacky": I could only decorate the normal RedisRateLimiter and duplicate some of its code in there:
@Primary
@Component
public class ApiKeyRateLimiter implements RateLimiter {
private Log log = LogFactory.getLog(getClass());
// How many requests per second do you want a user to be allowed to do?
private static final int REPLENISH_RATE = 1;
// How much bursting do you want to allow?
private static final int BURST_CAPACITY = 1;
private final RedisRateLimiter rateLimiter;
private final RedisScript<List<Long>> script;
private final ReactiveRedisTemplate<String, String> redisTemplate;
@Autowired
public ApiKeyRateLimiter(
RedisRateLimiter rateLimiter,
@Qualifier(RedisRateLimiter.REDIS_SCRIPT_NAME) RedisScript<List<Long>> script,
ReactiveRedisTemplate<String, String> redisTemplate) {
this.rateLimiter = rateLimiter;
this.script = script;
this.redisTemplate = redisTemplate;
}
// These two methods are the core of the rate limiter.
// Their purpose is to come up with rate limits for a given API key (or user ID).
// It is up to the implementor to return limits based on the API key passed.
private int getBurstCapacity(String routeId, String apiKey) {
return BURST_CAPACITY;
}
private int getReplenishRate(String routeId, String apiKey) {
return REPLENISH_RATE;
}
public Mono<Response> isAllowed(String routeId, String apiKey) {
int replenishRate = getReplenishRate(routeId, apiKey);
int burstCapacity = getBurstCapacity(routeId, apiKey);
try {
List<String> keys = getKeys(apiKey);
// The arguments to the LUA script. time() returns unixtime in seconds.
List<String> scriptArgs = Arrays.asList(replenishRate + "", burstCapacity + "",
Instant.now().getEpochSecond() + "", "1");
Flux<List<Long>> flux = this.redisTemplate.execute(this.script, keys, scriptArgs);
return flux.onErrorResume(throwable -> Flux.just(Arrays.asList(1L, -1L)))
.reduce(new ArrayList<Long>(), (longs, l) -> {
longs.addAll(l);
return longs;
}) .map(results -> {
boolean allowed = results.get(0) == 1L;
Long tokensLeft = results.get(1);
Response response = new Response(allowed, getHeaders(tokensLeft, replenishRate, burstCapacity));
if (log.isDebugEnabled()) {
log.debug("response: " + response);
}
return response;
});
}
catch (Exception e) {
/*
* We don't want a hard dependency on Redis to allow traffic. Make sure to set
* an alert so you know if this is happening too much. Stripe's observed
* failure rate is 0.01%.
*/
log.error("Error determining if user allowed from redis", e);
}
return Mono.just(new Response(true, getHeaders(-1L, replenishRate, burstCapacity)));
}
private static List<String> getKeys(String id) {
String prefix = "request_rate_limiter.{" + id;
String tokenKey = prefix + "}.tokens";
String timestampKey = prefix + "}.timestamp";
return Arrays.asList(tokenKey, timestampKey);
}
private HashMap<String, String> getHeaders(Long tokensLeft, Long replenish, Long burst) {
HashMap<String, String> headers = new HashMap<>();
headers.put(RedisRateLimiter.REMAINING_HEADER, tokensLeft.toString());
headers.put(RedisRateLimiter.REPLENISH_RATE_HEADER, replenish.toString());
headers.put(RedisRateLimiter.BURST_CAPACITY_HEADER, burst.toString());
return headers;
}
@Override
public Map getConfig() {
return rateLimiter.getConfig();
}
@Override
public Class getConfigClass() {
return rateLimiter.getConfigClass();
}
@Override
public Object newConfig() {
return rateLimiter.newConfig();
}
}
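To actually differentiate Silver and Gold plans as asked in the question, the two lookup methods above could consult whatever store holds the user's subscription. A minimal sketch, assuming java.util.Map is imported; the plan table and the lookupPlan helper are hypothetical and not part of the original answer:

// Hypothetical plan table; in practice the plan would come from your user/subscription store.
private static final Map<String, int[]> PLAN_LIMITS = Map.of(
        "GOLD",   new int[] {50, 100},  // replenishRate, burstCapacity
        "SILVER", new int[] {5, 10});

private int getReplenishRate(String routeId, String apiKey) {
    return PLAN_LIMITS.getOrDefault(lookupPlan(apiKey), new int[] {REPLENISH_RATE, BURST_CAPACITY})[0];
}

private int getBurstCapacity(String routeId, String apiKey) {
    return PLAN_LIMITS.getOrDefault(lookupPlan(apiKey), new int[] {REPLENISH_RATE, BURST_CAPACITY})[1];
}

// Placeholder: map an API key to a plan name however your system stores subscriptions.
private String lookupPlan(String apiKey) {
    return "SILVER";
}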
So, the route would look like this:
@Component
public class Routes {
@Autowired
ApiKeyRateLimiter rateLimiter;
@Autowired
ApiKeyResolver apiKeyResolver;
@Bean
public RouteLocator theRoutes(RouteLocatorBuilder b) {
return b.routes()
.route(p -> p
.path("/unlimited")
.uri("http://httpbin.org:80/anything?route=unlimited")
)
.route(p -> p
.path("/limited")
.filters(f ->
f.requestRateLimiter(r -> {
r.setKeyResolver(apiKeyResolver);
r.setRateLimiter(rateLimiter);
} )
)
.uri("http://httpbin.org:80/anything?route=limited")
)
.build();
}
}
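The ApiKeyResolver wired in above is not shown in the answer. A minimal sketch of one, assuming the client sends its key in an X-API-KEY request header (the header name is an assumption):

import org.springframework.cloud.gateway.filter.ratelimit.KeyResolver;
import org.springframework.stereotype.Component;
import org.springframework.web.server.ServerWebExchange;
import reactor.core.publisher.Mono;

@Component
public class ApiKeyResolver implements KeyResolver {

    @Override
    public Mono<String> resolve(ServerWebExchange exchange) {
        // Assumption: the API key travels in an X-API-KEY header; fall back to "anonymous" if absent.
        String apiKey = exchange.getRequest().getHeaders().getFirst("X-API-KEY");
        return Mono.just(apiKey != null ? apiKey : "anonymous");
    }
}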
Hope it saves a work day for somebody...
I would like to know how I can return the node names instead of the node IDs in the Java console.
At the moment the console prints the complete records (node IDs, labels and all properties). The desired output would contain only the node names, which are the airport names.
My Java code looks like the following:
package com.routesNeo4j;
import org.neo4j.driver.v1.*;
import java.util.ArrayList;
import java.util.List;
/**
* Created by e on 11.06.17.
*/
public class Neo4JRouting implements AutoCloseable, Neo4J_Connector {
static Driver driver;
public Neo4JRouting(String startAirport, String destinationAirport, StatementResult shortestPath) {
driver = GraphDatabase.driver("bolt://ec2-13-58-101-13.us-east-2.compute.amazonaws.com:7687",
AuthTokens.basic("neo4j", "Einloggen_123"));
try(Session session = driver.session()) {
shortestPath = session.run("MATCH (a:" + startAirport.toLowerCase() + "), (b:" + destinationAirport.toLowerCase()
+ "), p = allShortestPaths((a)-[r*1..4]-(b)) UNWIND rels(p) AS rel RETURN nodes(p), sum(rel.weight) " +
"AS weight ORDER BY sum(rel.weight)");
List<Record> storeList = storeList(shortestPath);
while (shortestPath.hasNext()) {
System.out.println(shortestPath.next().toString());
}
System.out.println(storeList);
} catch (Exception e) {
e.printStackTrace();
}
}
public List<Record> storeList(StatementResult statementResult) {
List<Record> list = new ArrayList<>();
while (statementResult.hasNext()) {
list.add(statementResult.next());
}
return list;
}
@Override
public Driver runDriver(String user, AuthToken basicAuthToken) throws Exception {
return null;
}
@Override
public void close() throws Exception {
}
}
I am looking forward to your answers. Many thanks!
Every row you are returning contains a list of nodes and a weight. That's what you ask for in your query, and that's what you get. So you have to "unpack" that result into the format you desire.
A couple of code snippets to show what I mean:
StatementResult vResult = vSession.run(aCypher);
while (vResult.hasNext()) {
Record vRecord = vResult.next();
vMutator.pushNode("row");
for (Pair <String, Value> vListEntry : vRecord.fields()) {
process_listentry(vSession, vMutator, vListEntry.key(), vListEntry.value());
}
vMutator.popNode(); // row
}
and then in process_listentry :
private void process_listentry(Session vSession, IHDSMutator vMutator, String vKey, Value vValue) {
...
else if (vValue.type().equals(vSession.typeSystem().NODE())){
vMutator.pushNode(vKey);
vMutator.addNode("_id", Long.toString(vValue.asNode().id()));
for (String lLabel : vValue.asNode().labels()) {
vMutator.addNode("_label", lLabel);
}
for (String lKey : vValue.asNode().keys()) {
Value lValue = vValue.asNode().get(lKey);
process_listentry(vSession, vMutator, lKey, lValue);
}
vMutator.popNode();
}
...
but it does depend on what you ask for in the query, and thus on what you have to unpack ...
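Applied to the query in the question, the same idea reduces to walking the node list in each record and reading whatever property holds the airport name. A sketch, assuming each airport node has a name property (adjust to your data model):

while (shortestPath.hasNext()) {
    Record record = shortestPath.next();
    List<String> airportNames = new ArrayList<>();
    // "nodes(p)" is the column name produced by the RETURN clause above
    for (Value nodeValue : record.get("nodes(p)").values()) {
        airportNames.add(nodeValue.asNode().get("name").asString());
    }
    System.out.println(airportNames + " (weight: " + record.get("weight") + ")");
}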
Hope this helps,
Tom
I have many queries with many select fields and some nested entities. This is a simplified version of the nested entity structure:
public class OuterEntity{
private String name1;
private String name2;
private MiddleEntity middle;
//... get/set..
}
public class MiddleEntity{
private String surname1;
private String surname2;
private InnerEntity inner;
//... get/set..
}
public class InnerEntity{
private String nickname1;
private String nickname2;
//... get/set..
}
All the entities are in 1:n relationships, so I can write a single long query to fetch all the data. I want to avoid multiple queries that fetch each entity separately.
select
o.name1,
o.name2,
m.surname1,
m.surname2,
i.nickname1,
i.nickname2
from outertable o
join middletable m on m.id=o.middle
join innertable i on i.id=m.inner
I would like a RowMapper for this mapping that uses column-name aliases to populate and nest all the entities. Maybe I can describe the whole nesting path with column aliases:
select
o.name1 as name1,
o.name2 as name2,
m.surname1 as middle_surname1,
m.surname2 as middle_surname2,
i.nickname1 as middle_inner_nickname1,
i.nickname2 as middle_inner_nickname2
from outertable o
join middletable m on m.id=o.middle
join innertable i on i.id=m.inner
Do you think it is possible? Does JdbcTemplate provide something for this need?
I'm not asking anyone to code a new RowMapper for me; I just want to know whether something like this already exists, or whether there is a better solution, because I think this is a very common problem.
My current solution is to fetch the entities separately (one query per entity) and map them with BeanPropertyRowMapper. Another option would be to write a different RowMapper for each query, but I would use that only as a last resort because I would have to write many different mappers for what is common logic.
ORM frameworks like Hibernate are not an option for me.
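For comparison, the per-query mapper approach mentioned above would look roughly like this for the aliased select (a sketch; it assumes conventional getters/setters on the three entities shown earlier):

import java.sql.ResultSet;
import java.sql.SQLException;
import org.springframework.jdbc.core.RowMapper;

public class OuterEntityRowMapper implements RowMapper<OuterEntity> {

    @Override
    public OuterEntity mapRow(ResultSet rs, int rowNum) throws SQLException {
        InnerEntity inner = new InnerEntity();
        inner.setNickname1(rs.getString("middle_inner_nickname1"));
        inner.setNickname2(rs.getString("middle_inner_nickname2"));

        MiddleEntity middle = new MiddleEntity();
        middle.setSurname1(rs.getString("middle_surname1"));
        middle.setSurname2(rs.getString("middle_surname2"));
        middle.setInner(inner);

        OuterEntity outer = new OuterEntity();
        outer.setName1(rs.getString("name1"));
        outer.setName2(rs.getString("name2"));
        outer.setMiddle(middle);
        return outer;
    }
}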
I did not find anything for now; I tried to write a custom mapper based on the BeanPropertyRowMapper source:
import java.beans.PropertyDescriptor;
import java.math.BigDecimal;
import java.sql.Date;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import org.springframework.beans.BeanUtils;
import org.springframework.beans.BeanWrapper;
import org.springframework.beans.NotWritablePropertyException;
import org.springframework.beans.PropertyAccessorFactory;
import org.springframework.dao.DataRetrievalFailureException;
import org.springframework.jdbc.core.BeanPropertyRowMapper;
import org.springframework.jdbc.core.RowMapper;
import org.springframework.jdbc.support.JdbcUtils;
/**
 * @author tobia.scapin
 *
 * Bean RowMapper for nesting beans of 1:n entities; it uses query aliases to build the entity nesting.
 * Column aliases must match the bean property names exactly: respect case and do not use underscores inside field names.
 * The "id" column name/alias is used to check whether a nested entity should be null.
 *
 * example:
 * select
 * a.p1 as property1
 * b.id as entityname_id //<-- if this value is null, the entity will be null
 * b.p1 as entityname_property1
 * b.p2 as entityname_property2
 * c.id as entityname_subentity_id //<-- if this value is null, the subentity will be null
 * c.p1 as entityname_subentity_property1
 * from a,b,c
 *
 * @param <T>
 */
public class NestedBeanAliasRowMapper<T> implements RowMapper<T> {
private static final String NESTING_SEPARATOR = "_";
private static final String NULLIZER_FIELD = "id";
@SuppressWarnings("rawtypes")
private final static List<Class> TYPES;
static{
TYPES=Arrays.asList(new Class[]{ int.class, boolean.class, byte.class, short.class, long.class, double.class, float.class, Boolean.class, Integer.class, Byte.class, Short.class, Long.class, BigDecimal.class, Double.class, Float.class, String.class, Date.class});
}
private Class<T> mappedClass;
private Map<String, PropertyDescriptor> mappedFields;
private Map<String, PropertyDescriptor> mappedBeans;
@SuppressWarnings("rawtypes")
private Map<Class,NestedBeanAliasRowMapper> mappersCache=new HashMap<Class,NestedBeanAliasRowMapper>();
private Map<String,BeanProp> beanproperties=null;
public NestedBeanAliasRowMapper(Class<T> mappedClass) {
initialize(mappedClass);
}
/**
* Initialize the mapping metadata for the given class.
* #param mappedClass the mapped class
*/
protected void initialize(Class<T> mappedClass) {
this.mappedClass = mappedClass;
mappersCache.put(mappedClass, this);
this.mappedFields = new HashMap<String, PropertyDescriptor>();
this.mappedBeans = new HashMap<String, PropertyDescriptor>();
PropertyDescriptor[] pds = BeanUtils.getPropertyDescriptors(mappedClass);
for (PropertyDescriptor pd : pds) {
if (pd.getWriteMethod() != null) {
if(TYPES.contains(pd.getPropertyType()))
this.mappedFields.put(pd.getName(), pd);
else
this.mappedBeans.put(pd.getName(), pd);
}
}
}
@Override
public T mapRow(ResultSet rs, int rowNumber) throws SQLException {
ResultSetMetaData rsmd = rs.getMetaData();
int columnCount = rsmd.getColumnCount();
List<Integer> cols=new ArrayList<Integer>();
for (int index = 1; index <= columnCount; index++)
cols.add(index);
return mapRow(rs, rowNumber, cols, "", true);
}
@SuppressWarnings({ "unchecked", "rawtypes" })
public T mapRow(ResultSet rs, int rowNumber, List<Integer> cols, String aliasPrefix, boolean root) throws SQLException {
T mappedObject = BeanUtils.instantiate(this.mappedClass);
BeanWrapper bw = PropertyAccessorFactory.forBeanPropertyAccess(mappedObject);
ResultSetMetaData rsmd = rs.getMetaData();
if(rowNumber==0) beanproperties=new HashMap<String,BeanProp>();
for (int index : cols) {
String column = JdbcUtils.lookupColumnName(rsmd, index);
if(aliasPrefix!=null && column.length()>aliasPrefix.length() && column.substring(0, aliasPrefix.length()).equals(aliasPrefix))
column=column.substring(aliasPrefix.length()); //remove the prefix from column-name
PropertyDescriptor pd = this.mappedFields.get(column);
if (pd != null) {
try {
Object value = getColumnValue(rs, index, pd);
if(!root && NULLIZER_FIELD.equals(column) && value==null)
return null;
bw.setPropertyValue(pd.getName(), value);
}
catch (NotWritablePropertyException ex) {
throw new DataRetrievalFailureException("Unable to map column '" + column + "' to property '" + pd.getName() + "'", ex);
}
}else if(rowNumber==0 && column.contains(NESTING_SEPARATOR)){
String[] arr=column.split(NESTING_SEPARATOR);
column=arr[0];
PropertyDescriptor bpd = this.mappedBeans.get(column);
if(bpd!=null){
BeanProp beanprop=beanproperties.get(column);
if(beanprop==null){
beanprop=new BeanProp();
beanprop.setClazz(bpd.getPropertyType());
beanproperties.put(column, beanprop);
}
beanprop.addIndex(index);
}
}
}
if(!beanproperties.isEmpty()) for (String beanname : beanproperties.keySet()) {
BeanProp beanprop=beanproperties.get(beanname);
NestedBeanAliasRowMapper mapper=mappersCache.get(beanprop.getClazz());
if(mapper==null){
mapper=new NestedBeanAliasRowMapper<>(beanprop.getClazz());
mappersCache.put(beanprop.getClazz(), mapper);
}
Object value = mapper.mapRow(rs, rowNumber, beanprop.getIndexes(), aliasPrefix+beanname+NESTING_SEPARATOR, false);
bw.setPropertyValue(beanname, value);
}
return mappedObject;
}
protected Object getColumnValue(ResultSet rs, int index, PropertyDescriptor pd) throws SQLException {
return JdbcUtils.getResultSetValue(rs, index, pd.getPropertyType());
}
public static <T> BeanPropertyRowMapper<T> newInstance(Class<T> mappedClass) {
return new BeanPropertyRowMapper<T>(mappedClass);
}
@SuppressWarnings("rawtypes")
private class BeanProp{
private Class clazz;
private List<Integer> indexes=new ArrayList<Integer>();
public Class getClazz() {
return clazz;
}
public void setClazz(Class clazz) {
this.clazz = clazz;
}
public List<Integer> getIndexes() {
return indexes;
}
public void addIndex(Integer index) {
this.indexes.add(index);
}
}
}
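Usage would then follow the normal JdbcTemplate pattern. A sketch using the aliased query from the question, assuming a configured JdbcTemplate instance:

List<OuterEntity> result = jdbcTemplate.query(
        "select o.name1 as name1, o.name2 as name2, " +
        "m.surname1 as middle_surname1, m.surname2 as middle_surname2, " +
        "i.nickname1 as middle_inner_nickname1, i.nickname2 as middle_inner_nickname2 " +
        "from outertable o " +
        "join middletable m on m.id = o.middle " +
        "join innertable i on i.id = m.inner",
        new NestedBeanAliasRowMapper<>(OuterEntity.class));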
I need to allow my content pipeline extension to use a pattern similar to a factory. I start with a dictionary type:
public delegate T Mapper<T>(MapFactory<T> mf, XElement d);
public class MapFactory<T>
{
Dictionary<string, Mapper<T>> map = new Dictionary<string, Mapper<T>>();
public void Add(string s, Mapper<T> m)
{
map.Add(s, m);
}
public T Get(XElement xe)
{
if (xe == null) throw new ArgumentNullException(
"Invalid document");
var key = xe.Name.ToString();
if (!map.ContainsKey(key)) throw new ArgumentException(
key + " is not a valid key.");
return map[key](this, xe);
}
public IEnumerable<T> GetAll(XElement xe)
{
if (xe == null) throw new ArgumentNullException(
"Invalid document");
foreach (var e in xe.Elements())
{
var val = e.Name.ToString();
if (map.ContainsKey(val))
yield return map[val](this, e);
}
}
}
Here is one type of object I want to store:
public partial class TestContent
{
// Test type
public string title;
// Once test if true
public bool once;
// Parameters
public Dictionary<string, object> args;
public TestContent()
{
title = string.Empty;
args = new Dictionary<string, object>();
}
public TestContent(XElement xe)
{
title = xe.Name.ToString();
args = new Dictionary<string, object>();
xe.ParseAttribute("once", once);
}
}
XElement.ParseAttribute is an extension method that works as one might expect. It returns a boolean that is true if successful.
The issue is that I have many different types of tests, each of which populates the object in a way unique to the specific test. The element name is the key to MapFactory's dictionary. This type of test, while atypical, illustrates my problem.
public class LogicTest : TestBase
{
string opkey;
List<TestBase> items;
public override bool Test(BehaviorArgs args)
{
if (items == null) return false;
if (items.Count == 0) return false;
bool result = items[0].Test(args);
for (int i = 1; i < items.Count; i++)
{
bool other = items[i].Test(args);
switch (opkey)
{
case "And":
result &= other;
if (!result) return false;
break;
case "Or":
result |= other;
if (result) return true;
break;
case "Xor":
result ^= other;
break;
case "Nand":
result = !(result & other);
break;
case "Nor":
result = !(result | other);
break;
default:
result = false;
break;
}
}
return result;
}
public static TestContent Build(MapFactory<TestContent> mf, XElement xe)
{
var result = new TestContent(xe);
string key = "Or";
xe.GetAttribute("op", key);
result.args.Add("key", key);
var names = mf.GetAll(xe).ToList();
if (names.Count() < 2) throw new ArgumentException(
"LogicTest requires at least two entries.");
result.args.Add("items", names);
return result;
}
}
My actual code is more involved as the factory has two dictionaries, one that turns an XElement into a content type to write and another used by the reader to create the actual game objects.
I need to build these factories in code because they map strings to delegates. I have a service that contains several of these factories. The mission is to make these factory classes available to a content processor. Neither the processor itself nor the context it uses as a parameter has any known hooks to attach an IServiceProvider or equivalent.
Any ideas?
I needed to create a data structure essentially on demand, without access to the underlying classes, as they come from a third party, in this case XNA Game Studio. There is only one way I know of to do this... statically.
public class TestMap : Dictionary<string, string>
{
private static readonly TestMap map = new TestMap();
private TestMap()
{
Add("Logic", "LogicProcessor");
Add("Sequence", "SequenceProcessor");
Add("Key", "KeyProcessor");
Add("KeyVector", "KeyVectorProcessor");
Add("Mouse", "MouseProcessor");
Add("Pad", "PadProcessor");
Add("PadVector", "PadVectorProcessor");
}
public static TestMap Map
{
get { return map; }
}
public IEnumerable<TestContent> Collect(XElement xe, ContentProcessorContext cpc)
{
foreach(var e in xe.Elements().Where(e => ContainsKey(e.Name.ToString())))
{
yield return cpc.Convert<XElement, TestContent>(
e, this[e.Name.ToString()]);
}
}
}
I took this a step further and created content processors for each type of TestBase:
/// <summary>
/// Turns an imported XElement into a TestContent used for a LogicTest
/// </summary>
[ContentProcessor(DisplayName = "LogicProcessor")]
public class LogicProcessor : ContentProcessor<XElement, TestContent>
{
public override TestContent Process(XElement input, ContentProcessorContext context)
{
var result = new TestContent(input);
string key = "Or";
input.GetAttribute("op", key);
result.args.Add("key", key);
var items = TestMap.Map.Collect(input, context);
if (items.Count() < 2) throw new ArgumentNullException(
"LogicProcessor requires at least two items.");
result.args.Add("items", items);
return result;
}
}
Any attempt to reference or access the class, such as calling TestMap.Map.Collect, will create the underlying static instance if needed. I basically moved the code from LogicTest.Build to the processor. I also carry out any needed validation in the processor.
When I get to reading these classes I will have the ContentService to help.