In my project I use the latest Dart version with objectbox: ^1.0.0:
@Entity()
class Node {
  ...
  @Transient()
  final List<Edge> _edges = List<Edge>.empty(growable: true);
  final relEdges = ToMany<Edge>();
  ...
}

@Entity()
class Edge {
  ...
  @Transient()
  final List<Node> _nodes = List<Node>.empty(growable: true);
  @Backlink()
  final relNodes = ToMany<Node>();
  ...
}
After the nodes and edges are created, the nodes are assigned to the list of nodes in the edge object (and vice versa in the node object), and then in the DAO layer they are (re)applied to the ObjectBox ToMany relation.
Actual put:
main_test.dart:
// nodes and edge are created
node1.dao.create(node1);
node2.dao.create(node2);
node3.dao.create(node3);
edge.nodes.addAll([node1, node2]);
edge.dao.create(edge);
node1.dao.update(node1..edges.add(edge));
node2.dao.update(node2..edges.add(edge));
...
// removes edge from node2's edge list
node2.edges.removeWhere((element) {
  return element.uuid == edge.uuid;
});
node3.edges.add(edge);
// this placement also didn't change anything
//await node2.dao.update(node2);
//await node3.dao.update(node3);
// or remove(1) and add(node3)
edge.nodes.clear();
edge.nodes.addAll([node1, node3]);
await node2.dao.update(node2);
await node3.dao.update(node3);
await node1.dao.update(node1);
await edge.dao.update(edge);
...
edgeDao.dart
...
element.relNodes.clear();
element.relNodes.addAll(nodes);
...
nodeDao.dart
...
element.relEdges.clear();
element.relEdges.addAll(edges);
...
databaseConnector.dart:
// Box<Node> _box; is already initialized
...
// it is implemented the same way for the Edge class
// create works the same way
Future<void> update(Node element) async {
  _box.put(element);
}
...
The add operation works properly, and so does update, until I try to save the changed edge relations after the old node has been removed from the ToMany and a new one added (this crashes every further put operation). I get the following error from ObjectBox (three times):
>package:objectbox/src/native/bindings/helpers.dart 78:9 ObjectBoxNativeError.throwMapped
>package:objectbox/src/native/bindings/helpers.dart 50:48 throwLatestNativeError
>package:objectbox/src/native/bindings/helpers.dart 17:5 checkObx
>package:objectbox/src/native/box.dart 461:7 InternalBoxAccess.relRemove
>package:objectbox/src/relations/to_many.dart 195:33 ToMany.applyToDb.<fn>
>dart:collection _LinkedHashMapMixin.forEach
>package:objectbox/src/relations/to_many.dart 168:15 ToMany.applyToDb
>package:objectbox/src/native/box.dart 365:13 Box._putToManyRelFields.<fn>
>dart:collection _LinkedHashMapMixin.forEach
>package:objectbox/src/native/box.dart 362:37 Box._putToManyRelFields
> ...
>ObjectBoxException: 404 404: Unknown native error
Neither put() nor applyToDb() works. I even tried calling clear() and then addAll() on the ToMany from the list object. Any suggestions as to why this happens?
In my case, after a remove I have to put the entity so the ToMany relation change is applied; only then can I add a new value. When I find out why these operations don't work in my tests, I'll update the answer.
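For illustration, a minimal sketch of that workaround against the classes above (assuming an opened Store named store; uuid is the question's own field):

final edgeBox = store.box<Edge>();

// First remove the old node from the relation and put the entity,
// so the pending ToMany removal is applied on its own.
edge.relNodes.removeWhere((n) => n.uuid == node2.uuid);
edgeBox.put(edge);

// Only after that put has succeeded, add the new node and put again.
edge.relNodes.add(node3);
edgeBox.put(edge);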
I am working on an Xtext project where I have to customize the scope provider. I need to collect several possible candidates for the scope. The first part (getServiceInputs()) works fine, but the second one (addAll(sub.getSubRecipeParameters())) does not. Debugging showed that the parameters get removed from their original source (sub) and can therefore not be retrieved again. When I simply comment out the addAll line, the SubRecipeParameters remain in sub. I really don't know how to solve this and have already tried some workarounds. Anyone with an idea?
public class AutomationServiceDslScopeProvider extends AbstractAutomationServiceDslScopeProvider {

    @Override
    public IScope getScope(EObject context, EReference reference) {
        if (context instanceof ServiceInvocationParameter
                && reference == AutomationServiceDslPackage.Literals.LITERAL) {
            ServiceInvocationParameter invocationParameter = (ServiceInvocationParameter) context;
            ServiceInvocation serviceCall = (ServiceInvocation) invocationParameter.eContainer();
            ServiceDefinition calledService = serviceCall.getService();
            List<ServiceParameterDefinition> candidates = calledService.getServiceInputs();
            final EObject rootContainer = EcoreUtil.getRootContainer(context);
            List<SubRecipeDefinition> subs = EcoreUtil2.getAllContentsOfType(rootContainer, SubRecipeDefinition.class);
            for (SubRecipeDefinition sub : subs) {
                for (RecipeStep step : sub.getRecipeSteps()) {
                    if (step.getName() == serviceCall.getName()) {
                        candidates.addAll(sub.getSubRecipeParameters());
                    }
                }
            }
            return Scopes.scopeFor(candidates);
        }
        return super.getScope(context, reference);
    }
}
Thanks for any help!!
This is normal EMF behaviour if you move elements from one EList to another: containment references are exclusive, so adding an element to a second containment list removes it from its original container. The solution is to create a new list, e.g. new ArrayList<>(), and add the inputs there as well:
List<ServiceParameterDefinition> candidates = new ArrayList<>();
candidates.addAll(calledService.getServiceInputs());
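Applied to the method from the question, the fixed section would look roughly like this (a sketch; the name check is also written with equals(), the safer form, though that is not part of the original answer):

// Collect candidates in a plain ArrayList instead of the containment EList;
// adding to it only copies references, so the parameters stay inside `sub`.
List<ServiceParameterDefinition> candidates = new ArrayList<>();
candidates.addAll(calledService.getServiceInputs());
for (SubRecipeDefinition sub : subs) {
    for (RecipeStep step : sub.getRecipeSteps()) {
        if (step.getName().equals(serviceCall.getName())) {
            candidates.addAll(sub.getSubRecipeParameters());
        }
    }
}
return Scopes.scopeFor(candidates);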
Say I have this class:
void main() async {
  final example = ExampleClass();
  await example.waitOne();
  await example.waitOne();
  print('finished');
}

class ExampleClass {
  Future<void> waitOne() async {
    await Future.delayed(Duration(seconds: 1));
    print('1 second');
  }
}
This code works exactly as I expect it to. Its output is as follows:
1 second
1 second
finished
Then we have this code:
void main() async {
final example = ExampleClass();
await example
..waitOne()
..waitOne();
print('finished');
}
This code now uses cascade operators (..), and the output seems strange:
finished
1 second
1 second
The code skips the two futures and prints "finished" to the console first; then "1 second" gets printed twice at the same time (as Future.wait would do).
Why does Dart act in this way?
In your example with the cascade operator, adding await doesn't do anything: the cascade expression evaluates to the object itself, not to a Future, so there is nothing to await and finished is printed right away.
Remember that the result of the cascade operator is the original object that you used it on. That is, for var result = object..x()..y()..z(), result will be assigned the value of object, regardless of what x, y, or z return. The values returned by x(), y(), and z() are ignored. It's the equivalent of:
object.x();
object.y();
object.z();
var result = object;
Your case, which involves Futures, is no different:
final example = ExampleClass();
await example
  ..waitOne()
  ..waitOne();
So you're doing the equivalent of:
final example = ExampleClass();
example.waitOne(); // The returned Future is ignored.
example.waitOne(); // The returned Future is ignored.
await example; // Incorrectly using await on a non-Future.
(Note that enabling the unawaited_futures and await_only_futures lints would catch this mistake.)
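For reference, a minimal sketch of enabling those lints in the project's analysis_options.yaml (the standard Dart analyzer configuration file):

linter:
  rules:
    - unawaited_futures
    - await_only_futures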
To properly wait, you can't use the cascade operator; you will need to explicitly await the individual operations (see the snippet below). Also see the language issue "Prefix await is cumbersome to work with", which discusses possible changes to the language to support using await with member or cascade operators.
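Concretely, the fix is just the question's original sequential form:

final example = ExampleClass();
await example.waitOne(); // completes after one second
await example.waitOne(); // completes after another second
print('finished');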
After a little research, org.apache.jena.sparql.core.DatasetGraphMonitor looked like the way to go.
To my understanding, I have to create a DatasetGraph wrapped by the DatasetGraphMonitor, use this graph to create a Model, and all modifications to the model are then notified to my DatasetChanges object.
So that's what I'm doing:
//create a Dataset backed by TDB2
Dataset dataset = TDB2Factory.connectDataset(location);

//wrap the dataset with a DatasetGraphMonitor and obtain a DatasetGraph
DatasetGraph datasetGraph = new DatasetGraphMonitor(dataset.asDatasetGraph(), new DatasetChanges() {
    @Override
    public void start() {
    }

    @Override
    public void reset() {
    }

    @Override
    public void finish() {
    }

    @Override
    public void change(QuadAction qaction, Node g, Node s, Node p, Node o) {
        LOG.info("Dataset change: " + qaction);
    }
});

//create a model using the DatasetGraphMonitor's default graph as underlying graph
Model model = ModelFactory.createModelForGraph(datasetGraph.getDefaultGraph());

//run an insert SPARQL query to add new triples to the triplestore
//(this really is in a write transaction; maybe I'm oversimplifying here)
UpdateAction.parseExecute(sparqlQuery, model);
Well, you guessed it already: change never gets called.
Any idea what I'm doing wrong here? Thanks.
DatasetGraphMonitor is for monitoring actions on the dataset. Getting the default graph and making it a model doesn't trigger that machinery (if it did, you'd get a "not in transaction" exception). The returned graph goes straight to the core database.
Instead, either:

- Wrap the graph from datasetGraph.getDefaultGraph() with GraphWrapper and put the monitoring code on the various add/delete methods.
- Do the update (in a transaction) on the datasetGraph, as sketched below.
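For the second option, a minimal sketch (assuming the datasetGraph and sparqlQuery from the question; Txn wraps the update in a write transaction):

// Run the update against the monitored DatasetGraph itself so that the
// DatasetChanges callbacks fire for each added/deleted quad.
Txn.executeWrite(datasetGraph, () ->
    UpdateAction.parseExecute(sparqlQuery, datasetGraph));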
I ran into an issue where I was intermittently receiving an error:
An entity object cannot be referenced by multiple instances of IEntityChangeTracker
whenever trying to attach an entity to the DbContext.
UPDATE: Original post is below and is TL;DR. So here is a simplified version with more testing.
First I get the Documents collection. There are 2 items returned in this query.
using (UnitOfWork uow = new UnitOfWork())
{
    // uncommenting the line below resolves all errors
    // uow.Context.Configuration.ProxyCreationEnabled = false;

    // returns 2 documents in the collection
    documents = uow.DocumentRepository.GetDocumentByBatchEagerly(skip, take).ToList();
}
Scenario 1:
using (UnitOfWork uow2 = new UnitOfWork())
{
    // This errors ONLY if the original `uow` context is not disposed.
    uow2.DocumentRepository.Update(documents[0]);
}
This scenario works as expected. I can force the IEntityChangeTracker error by NOT disposing the original uow context.
Scenario 2:
Iterate through the 2 items in the documents collection.
foreach (Document document in documents)
{
    _ = Task.Run(() =>
    {
        using (UnitOfWork uow3 = new UnitOfWork())
        {
            uow3.DocumentRepository.Update(document);
        }
    });
}
Both items fail to attach to the DbSet with the IEntityChangeTracker error. Sometimes one succeeds and only one fails; I assume this has to do with the exact timing of the task scheduler. But even if they are attaching concurrently, they are different Document entities, so they shouldn't be tracked by any other context. Why am I getting the error?
If I uncomment ProxyCreationEnabled = false on the original uow context, this scenario works! So how are the entities still being tracked even though the context was disposed? And why is it a problem that they are dynamic proxies, given that they are not attached to or tracked by any context?
ORIGINAL POST:
I have an entity object called Document and its related collection of DocumentVersions.
In the code below, the document object and all related entities, including DocumentVersions, have already been eagerly loaded before being passed to this method, as I will demonstrate afterwards.
public async Task PutNewVersions(Document document)
{
    // get versions
    List<DocumentVersion> versions = document.DocumentVersions.ToList();

    for (int i = 0; i < versions.Count; i++)
    {
        UnitOfWork uow = new UnitOfWork();
        try
        {
            versions[i].Attempt++;
            //... make some API call that succeeds
            versions[i].ContentUploaded = true;
            versions[i].Result = 1;
        }
        finally
        {
            uow.DocumentVersionRepository.Update(versions[i]); // error hit in this method
            uow.Save();
        }
    }
}
The Update method just attaches the entity and changes the state. It is part of a GenericRepository class that all my Entity Repositories inherit from:
public virtual void Update(TEntity entityToUpdate)
{
    dbSet.Attach(entityToUpdate); // error is hit here
    context.Entry(entityToUpdate).State = EntityState.Modified;
}
The document entity, and all related entities are loaded eagerly using a method in the Document entity repository:
public class DocumentRepository : GenericRepository<Document>
{
    public DocumentRepository(MyEntities context) : base(context)
    {
        this.context = context;
    }

    public IEnumerable<Document> GetDocumentByBatchEagerly(int skip, int take)
    {
        return (from document in context.Documents
                    .Include(...)
                    .Include(...)
                    .Include(...)
                    .Include(...)
                    .Include(d => d.DocumentVersions)
                    .AsNoTracking()
                orderby document.DocumentKey descending
                select document).Skip(skip).Take(take);
    }
}
The method description for .AsNoTracking() says that "the entities returned will not be cached in the DbContext". Great!
Then why does the .Attach() method above think this DocumentVersion entity is already referenced in another IEntityChangeTracker? I assume this means it is referenced by another DbContext, i.e. the one calling GetDocumentByBatchEagerly(). And why does this issue only present intermittently? It seems to happen less often when I am stepping through the code.
I resolved this by adding the following line to the above DocumentRepository constructor:
this.context.Configuration.ProxyCreationEnabled = false;
I just don't understand why this appears to resolve the issue.
It also means that if I ever want to use the DocumentRepository for something else and leverage change tracking and lazy loading, I can't. There doesn't seem to be a per-query option to turn off dynamic proxies the way AsNoTracking() turns off tracking.
For completeness, here is how GetDocumentByBatchEagerly is used, to demonstrate that it runs with its own instance of UnitOfWork:
public class MigrationHandler
{
    UnitOfWork uow = new UnitOfWork();

    public async Task FeedPipelineAsync()
    {
        bool moreResults = true;
        do
        {
            // documents retrieved with AsNoTracking()
            List<Document> documents = uow.DocumentRepository.GetDocumentByBatchEagerly(skip, take).ToList();
            if (documents.Count == 0) moreResults = false;
            skip += take;

            // push each record into the TPL Dataflow pipeline
            foreach (Document document in documents)
            {
                // Entry point for the dataflow pipeline, which links to
                // a block that calls PutNewVersions()
                await dataFlowPipeline.DocumentCreationTransformBlock.SendAsync(document);
            }
        } while (moreResults);

        dataFlowPipeline.DocumentCreationTransformBlock.Complete();

        // await completion of each block at the end of the pipeline
        await Task.WhenAll(
            dataFlowPipeline.FileDocumentsActionBlock.Completion,
            dataFlowPipeline.PutVersionActionBlock.Completion);
    }
}
We have a problem making the list returned by the asList() method sorted.
We thought we could do this by extending the View class and overriding the asList method, but realized that View has a private constructor, so we could not do this.
Our other attempt was to fork the Google Dataflow code on GitHub and modify the PCollectionViews class to return a sorted list by using Collections.sort, as shown in the snippet below:
@Override
protected List<T> fromElements(Iterable<WindowedValue<T>> contents) {
    Iterable<T> itr = Iterables.transform(
        contents,
        new Function<WindowedValue<T>, T>() {
            @SuppressWarnings("unchecked")
            @Override
            public T apply(WindowedValue<T> input) {
                return input.getValue();
            }
        });

    LOG.info("#### About to start sorting the list !");
    List<T> tempList = new ArrayList<T>();
    for (T element : itr) {
        tempList.add(element);
    }
    Collections.sort((List<? extends Comparable>) tempList);
    LOG.info("##### List should now be sorted !");

    return ImmutableList.copyOf(tempList);
}
Note that we are now sorting the list.
This seemed to work when run with the DirectPipelineRunner, but when we tried the BlockingDataflowPipelineRunner, it didn't seem like the code change was being executed.
Note: we actually recompiled the Dataflow SDK and used it in our project, but this did not work.
How can we achieve this (a sorted list from the asList() method call)?
The classes in PCollectionViews are not intended for extension. Only the primitive view types provided by View.asSingleton, View.asIterable, View.asList, View.asMap, and View.asMultimap are supported.
To obtain a sorted list from a PCollectionView, you'll need to sort it after you have read it. The following code demonstrates the pattern.
// Assume you have some PCollection
PCollection<MyComparable> myPC = ...;

// Prepare it for side input as a list
final PCollectionView<List<MyComparable>> myView = myPC.apply(View.asList());

// Side input the list and sort it
someOtherValue.apply(
    ParDo.withSideInputs(myView).of(
        new DoFn<A, B>() {
            @Override
            public void processElement(ProcessContext ctx) {
                List<MyComparable> tempList =
                    Lists.newArrayList(ctx.sideInput(myView));
                Collections.sort(tempList);
                // do whatever you want with sorted list
            }
        }));
Of course, you may not want to sort it repeatedly, depending on the cost of sorting vs the cost of materializing it as a new PCollection, so you can output this value and read it as a new side input without difficulty:
// Side input the list, sort it, and put it in a PCollection
PCollection<List<MyComparable>> sortedSingleton = Create.<Void>of(null).apply(
    ParDo.withSideInputs(myView).of(
        new DoFn<Void, List<MyComparable>>() {
            @Override
            public void processElement(ProcessContext ctx) {
                List<MyComparable> tempList =
                    Lists.newArrayList(ctx.sideInput(myView));
                Collections.sort(tempList);
                ctx.output(tempList);
            }
        }));

// Prepare it for side input as a singleton
final PCollectionView<List<MyComparable>> sortedView =
    sortedSingleton.apply(View.asSingleton());

someOtherValue.apply(
    ParDo.withSideInputs(sortedView).of(
        new DoFn<A, B>() {
            @Override
            public void processElement(ProcessContext ctx) {
                ... ctx.sideInput(sortedView) ...
                // do whatever you want with sorted list
            }
        }));
You may also be interested in the unsupported sorter contrib module for doing larger sorts using both memory and local disk.
We tried to do it the way Ken Knowles suggested, but there's a problem for large datasets. If tempList is large (so sorting takes measurable time, on the order of n log n) and there are millions of elements in the someOtherValue PCollection, then we are unnecessarily re-sorting the same list millions of times. We should be able to sort ONCE and FIRST, before passing the list to someOtherValue.apply's DoFn.