Where in the Admin site of EventStore can I view my saved events? - eventstoredb

By the way how do you create a STREAM?
I use AppendToStreamAsync directly, is this right or shall I create a
stream first then append onto this stream?
I also tried performing some tests; using the methods below I can write events to EventStore but can't read them back.
And the most important question is: how do I view my saved events in the Admin site of EventStore?
Here is the code:
public async Task AppendEventAsync(IEvent @event)
{
    try
    {
        var eventData = new EventData(
            @event.EventId,
            @event.GetType().AssemblyQualifiedName,
            true,                                   // isJson
            Serializer.Serialize(@event),
            Encoding.UTF8.GetBytes("{}"));          // empty metadata
        var writeResult = await connection.AppendToStreamAsync(
            @event.SourceId.ToString(),
            @event.AggregateVersion,
            eventData);
        Console.WriteLine(writeResult);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
public async Task<IEnumerable<IEvent>> ReadEventsAsync(Guid aggregateId)
{
    var ret = new List<IEvent>();
    StreamEventsSlice currentSlice;
    long nextSliceStart = StreamPosition.Start;
    do
    {
        currentSlice = await connection.ReadStreamEventsForwardAsync(
            aggregateId.ToString(), nextSliceStart, 200, false);
        if (currentSlice.Status != SliceReadStatus.Success)
        {
            throw new Exception($"Aggregate {aggregateId} not found");
        }
        nextSliceStart = currentSlice.NextEventNumber;
        foreach (var resolvedEvent in currentSlice.Events)
        {
            ret.Add(Serializer.Deserialize(resolvedEvent.Event.EventType, resolvedEvent.Event.Data));
        }
    } while (!currentSlice.IsEndOfStream);
    return ret;
}

Streams are created automatically as you write events. You should follow the recommended naming convention though as it enables a few features out of the box.
await Connection.AppendToStreamAsync("CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba", expectedVersion, creds, eventData);
It is recommended to name your streams as "category-id" (where category in our case is the aggregate name), since we are using the DDD+CQRS pattern.
CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba
The stream matures as you write more events to the same stream name.
The first event's ID becomes the "aggregateId" in our case, and each new event ID after that is unique. The only way to recreate our aggregate is
to replay the events in sequence. If the sequence fails, an exception is thrown.
The reason to use this naming convention is that Event Store runs a few default internal projections for your convenience (the documentation about them is rather convoluted):
$by_category
$by_event_type
$stream_by_category
$streams
By Category
"By category" basically means there is a stream created by an internal projection; for our CustomerAggregate we subscribe to the $ce-CustomerAggregate stream, and we see every event in that "category" regardless of its ID. The event data contains everything we need thereafter.
We use persistent subscribers (small C# console applications) which are set up to work with $ce-CustomerAggregate. Persistent subscribers are great because they remember the last event your client acknowledged, so if the application crashes, you start it again and it continues from the last place it finished.
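For example, a rough sketch of wiring up such a subscriber with the same EventStore.ClientAPI connection used in the question could look like this (the group name and credentials are made up for illustration, and the exact delegate signatures vary slightly between client versions):
PersistentSubscriptionSettings settings = PersistentSubscriptionSettings.Create()
    .ResolveLinkTos()        // $ce-* streams contain links to the original events
    .StartFromBeginning();

// Create the subscription group once; this throws if the group already exists.
await connection.CreatePersistentSubscriptionAsync(
    "$ce-CustomerAggregate", "customer-readmodel", settings,
    new UserCredentials("admin", "changeit"));

// Connect a competing consumer to the group.
await connection.ConnectToPersistentSubscriptionAsync(
    "$ce-CustomerAggregate", "customer-readmodel",
    (subscription, resolvedEvent) =>
    {
        var json = Encoding.UTF8.GetString(resolvedEvent.Event.Data);
        Console.WriteLine($"{resolvedEvent.Event.EventType}: {json}");
        // with the default autoAck the client acknowledges for you;
        // otherwise call subscription.Acknowledge(resolvedEvent)
    },
    (subscription, reason, exception) => Console.WriteLine($"Dropped: {reason}"));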
This is where event store starts to shine and stand out from the other "event store implementations"
Viewing your events
The example with persistent subscribers is one way to set things up using code.
You cannot really view "all" your data in the admin site. The purpose of the admin site is to manage projections, manage users, see some statistics, create projections, and get a recent view of streams and events only. (If you know the IDs you can construct the URLs as you need them, but you can't search for them.)
If you want to see ALL the data, use the RESTful API with something like Postman. Maybe there is third-party software that can present it as a grid-like data source viewer, but I am unaware of any; it would probably also just hook into the REST API, and you could build your own visualiser this way quite quickly.
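For instance, a rough sketch of pulling a stream's Atom feed from code, assuming a default single node on http://localhost:2113 (the stream name is the one from the example above):
// needs System.Net.Http and System.Net.Http.Headers
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Accept.Add(
        new MediaTypeWithQualityHeaderValue("application/vnd.eventstore.atom+json"));

    // Page through the stream from event 0, 20 events at a time.
    var feed = await client.GetStringAsync(
        "http://localhost:2113/streams/CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba/0/forward/20?embed=body");
    Console.WriteLine(feed);
}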
Back to the client libraries: you can also always read all events from 0 - and incidentally, with DDD+CQRS you always read the aggregate's stream from 0 to rebuild its state. You can do the same for other requirements.
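A minimal sketch of that rebuild loop, reusing the ReadEventsAsync method from the question (the CustomerAggregate type and its Apply method are hypothetical):
var events = await ReadEventsAsync(aggregateId);
var customer = new CustomerAggregate(aggregateId);
foreach (var @event in events)
{
    customer.Apply(@event);   // each event mutates the aggregate's in-memory state
}
// "customer" now reflects every event written to its stream, in order.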
In some cases, looking at how to use snapshots makes replaying events a lot faster, if you have an extremely large stream to deal with.
Paradigm shift
Event Store has quite a learning curve and is a paradigm shift from conventional transactional databases. Event Store's best friend is CQRS - we use a slightly modified version of the CQRS Lite open source framework.
To truly appreciate Event Store you would need to understand DDD concepts and then dig into CQRS/ES - There are a few good YouTube videos and examples.

Related

Realm Swift: Question about Query-based public database

I’ve seen all around the documentation that Query-based sync is deprecated, so I’m wondering how should I got about my situation:
In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change right? How can I prevent that, if I only want the users to get a list of other users’ information at a certain point, without caching or re-synchronizing anything?
I know it's a possible duplicate of this question, but things have probably changed in four years.
The new MongoDB Realm gives you access to server level functions. This feature would allow you to query the list of existing users (for example) for a specific user name and return true if found or false if not (there are other options as well).
Check out the Functions documentation; there are some examples of how to call a function from macOS/iOS in the Call a Function section.
I don't know the use case or what your objects look like, but an example function to calculate a sum would look something like this. It sums the two elements passed in the array and returns the result:
your_realm_app.functions.sum([1, 2]) { sum, error in
    if let err = error {
        print(err.localizedDescription)
        return
    }
    if case let .double(x) = sum {
        print(x)
    }
}

How to optimize performance of Results change listeners in Realm (Swift) with a deep hierarchy?

We've been using Realm (Swift binding, currently version 3.12.0) since the earliest days of our project. In some early versions before 1.0, Realm provided change listeners for Results without actually giving changeSets.
We used this a lot in order to find out if a specific Results list changed.
Later the guys at Realm exchanged this API with changeSet providing methods. We had to switch and are now mistreating this API just in order to find out if anything in a specific List changed (inserts, deletions, modifications).
Together with RxSwift we wrote our own implementation of Results change listening which looks like this:
public var observable: Observable<Base> {
    return Observable.create { observer in
        let token = self.base.observe { changes in
            if case .update = changes {
                observer.onNext(self.base)
            }
        }
        observer.onNext(self.base)
        return Disposables.create(with: {
            observer.onCompleted()
            token.invalidate()
        })
    }
}
When we now want to have consecutive updates on a list we subscribe like so:
someRealm.objects(SomeObject.self).filter(<some filter>).rx.observable
.subscribe(<subscription code that gets called on every update>)
//dispose code missing
We wrote the extension on RealmCollection so that we can subscribe to List type as well.
The concept is equal to RxRealm's approach.
So now in our App we have a lot of filtered lists/results that we are subscribing to.
When data gets more and more we notice significant performance losses when it comes to seeing a change visually after writing something into the DB.
For example:
Let's say we have a Car Realm Object class with some properties and some 1-to-n and some 1-to-1 relationships. One of the properties is a Bool, namely isDriving.
Now we have a lot of cars stored in the DB and a bunch of change listeners with different filters listening to changes of the cars collection (collection observers listening for changeSets in order to find out if the list was changed).
If I take one car from some list and set its isDriving property from false to true (important: we do writes in the background), ideally the change listener fires quickly and I get the correct response to my write on the main thread almost immediately.
Added with edit on 2019-06-19:
Let's make the scenario still a little more real:
Let's change something down the hierarchy, let's say the tire manufacturer's name. Let's say a Car has a List<Tire>, a Tire has a Manufacturer and a Manufacturer has a name.
Now we're still listening to Results collection changes with some more or less complex filters applied.
Then we're changing the name of a Manufacturer which is connected to one of the tires, which is connected to one of the cars in that filtered list.
Can this still be fast?
Obviously, as the results/lists that change listeners are attached to get longer, Realm's internal change detection takes longer to calculate the differences and fires later.
So after a write we see the changes - in worst case - much later.
In our case this is not acceptable. So we are thinking through different scenarios.
One scenario would be to stop using .observe on lists/results and switch to Realm.observe, which fires every time anything changes in the Realm. That is not ideal, but it is fast because the change-calculation process is skipped.
My question is: What can I do to solve this whole dilemma and make our app fast again?
The crucial thing is the threading. We're always writing in the background due to our design, so the writes themselves should be very fast, but then that data needs to synchronize to the other threads where Realms are open.
In my understanding that happens after the change detection for all Results has run through, is that right?
So when I read on another thread, the data is only fresh after the thread sync, which happens after all notifications were sent out. But I am not currently sure whether the sync happens before that - which would be better - I haven't tested it yet.

Using Dart for HTML5 app, but want to load a file from the server-side

I'm new to Dart, and trying to create my first Dart web game. Sorry if I missed an answered question related to this. I did search, but wasn't having much luck.
To load a level, I would like to be able to read in a text file with the level data, then process that and use it to build the level. Unfortunately, I am running into the issue where dart:io and dart:html cannot both be loaded if you want to use the File() object. From what I can tell, dart:html's File() object is client-side, so it would not be able to open the text file I will have on the server.
Is there another way to read in a text file from the server? If not, do I need to set up a database just to store the game data, or is there a better option I'm not thinking about here?
In case it helps, the game data I'm working with currently is just a text file that gives a map of what the level will look like. For example:
~~~~Z~~~~
P
GGGGLLGGG
Each of those characters would denote a type of block to be placed in the level. It's not the best way to store levels, but it is pretty easy to create and easy to read in and process.
Thanks so much for the help!
If the file you are loading is a sibling of the index.html your game is loaded from, then you can just make an HTTP request to fetch the file.
To download web/level1.json you can use
Future<String> getGameData(String name) async {
  var response = await HttpRequest.getString('${name}.json');
  print(response);
  return response;
}
otherMethod() async {
  var data = await getGameData('level1');
}
See also https://api.dartlang.org/stable/1.24.3/dart-html/HttpRequest-class.html

how do i implement / build / create an 'in memory database' for my unit test

I started unit testing a while ago, and as it turned out I was doing more regression testing than unit testing, because I also included my database layer and thus went to the database every time.
So I implemented Unity to inject a fake database layer, but of course I still want to store some data, and the main advice was: "create an in-memory database".
But what is that, and how do I implement it?
The main question is: I think I have to fake the database layer, but doesn't that mean I end up building a 'simple database' myself? How can I keep it simple and not rebuild SQL Server just for my unit tests? :)
At the end of this question I'll describe the situation I found on the project I just started, and I was wondering if this is the way to go.
Michel
The current situation I've seen at this client is that test data is contained in XML files, and there is a 'fake' database layer that ties all the XML files together.
For the real database we're using Entity Framework, and that works very simply.
And now, in the 'fake' layer, I have to create all kinds of classes to load, save, persist, etc. the data.
It sounds weird that there is so much work in the fake layer, and so little in the real layer.
I hope this all makes sense :)
EDIT:
So I know I have to create a separate database layer for my unit tests, but how do I implement it?
Define an interface for your data access layer and have (at least) two implementations of it:
The real database provider, which will in turn run queries on an SQL database, etc.
An in-memory test provider, which can be prepopulated with test data as part of each unit test.
The advantage of this is that the modules making use of the data provider do not need to know whether the database is the real one or the test one, and hence more of the real code will be tested. The test database can be simple (like simple collections of objects) or complex (custom structures with indexes). It can also be a mocked implementation that asserts that it is being called appropriately as part of the test.
Additionally, if you ever need to support another data storage method (or different SQL database), you just need to write another implementation that conforms to the interface, and can be confident that none of the calling code will need to be reworked.
This approach is easiest if you plan for it from (or near) the start, so I'm not sure how easy it will be to apply to your situation.
What it might look like
If you're just loading and saving objects by id, then you can have an interface and implementations like (in Java-esque pseudo-code; I don't know much about asp.net):
interface WidgetDatabase {
    Widget loadWidget(int id);
    void saveWidget(Widget w);
    void deleteWidget(int id);
}
class SqlWidgetDatabase implements WidgetDatabase {
    Connection conn;
    // connect to database server of choice
    SqlWidgetDatabase(String connectionString) { conn = new Connection(connectionString); }
    public Widget loadWidget(int id) {
        conn.executeQuery("SELECT * FROM widgets WHERE id = " + id);
        Widget w = conn.fetchOne();
        return w;
    }
    // more methods that run simple sql queries...
}
class MemoryWidgetDatabase implements WidgetDatabase {
    Set<Widget> widgets;
    MemoryWidgetDatabase() { widgets = new HashSet<>(); }
    public Widget loadWidget(int id) {
        for (Widget w : widgets)
            if (w.getId() == id)
                return w;
        return null;
    }
    // more methods that find/add/delete a widget in the "widgets" set...
}
If you need to run other queries (such as batch selects based on more complex criteria), you can add methods for them to the interface.
Likewise for complex updates. Transaction support is possible for the real database implementation. I'm not sure how easy it is to build an in-memory DB capable of providing proper transaction support; to test it you'd need to "open" several "connections" to the same data set, and only apply updates to that shared data set when a transaction is committed.
I used SQLite as a fake DB for unit tests.
Why don't you use a mocking framework (like Moq or Rhino Mocks)? If you access your data through an interface, you can mock that interface and specify whatever you want to return in every test. Another approach is to have a separate environment for testing purposes, with a "real" database, where you run tests before taking your code to the production environment.
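For example, a small Moq sketch of the first approach - mocking the data interface - could look like this (the IProductRepository, Product and ProductService types are made up for illustration):
var repository = new Mock<IProductRepository>();
repository.Setup(r => r.GetById(42))
          .Returns(new Product { Id = 42, Name = "Widget" });

var service = new ProductService(repository.Object);   // inject the fake
var description = service.Describe(42);

Assert.AreEqual("Widget", description);
repository.Verify(r => r.GetById(42), Times.Once());   // the interaction itself can be asserted too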
Uhhh... if you're storing all your test data in XML files, you've just swapped one database for another. That is not an in-memory database. In PHP you would use something like this:
class MemoryProductDB {
    private $products;

    function __construct() {
        $this->products = array();
    }

    public function find($index) {
        return $this->products[$index];
    }

    public function save($product) {
        $this->products[$product['index']] = $product;
    }
}
You notice that all my data is stored in a memory array and is retrieved from a memory array. This is a simple In Memory Database.
IMHO, if you're using XML to store test data then you really haven't disconnected the dependencies from the model and the database effectively. No matter how complex your business rules are, when they touch the database, all they really are doing is CRUD (create, retrieve, update, and delete) functionality.
If what you're dealing with in the model is multiple objects from the database, then maybe you need to compose all those objects into a single object and have the model use that one object. An example would be an order composed of products. Don't retrieve products and then save products; retrieve orders and save orders, and have your model work on orders. The model shouldn't know anything about products.
This is called granularity of abstraction.
[Edit]
There was a very good question in the comments. When testing with an in-memory database we don't care how the select works in a database. The controller, first off, needs the database to count the number of records that could be accessed for paging. The IMDb (in-memory database) should just return a number; the controller should never care what that number is. Same with the actual records: hopefully all your controller is doing is displaying what it gets back from the IMDb.
[Edit]
You should never unit test your controllers with a live model and IMDb. The setup code for the IMDb will have a lot of friction. Instead, when unit testing a controller, you should test against a mock, stub, or fake model. The best use of an IMDb is during an integration test or when unit testing a model. Isn't an IMDb a fake?
My scenario is:
In my client I use a plug-in for a table, DataTables, with server-side processing.
The client GET requests items in the table, e.g. product.get(5, 10). The returned data will be JSON-encoded.
The model is responsible for forming the JSON from the information it retrieves through the gateway to the database. The gateway is just a facade over the database. I'm a mocker, so my gateway is a mock, not an in-memory gateway.
public function testSkuTable() {
    $skus = array(
        array('id' => '1', 'data' => 'data1'),
        array('id' => '2', 'data' => 'data2'),
        array('id' => '3', 'data' => 'data3'));
    $names = array(
        'id',
        'data');
    $start_row = $this->parameters['start_row'];
    $num_rows = $this->parameters['num_rows'];
    $sort_col = $this->parameters['sort_col'];
    $search = $this->parameters['search'];
    $requestSequence = $this->parameters['request_sequence'];
    $direction = $this->parameters['dir'];
    $filterTotals = 1;
    $totalRecords = 1;
    $this->gateway->expects($this->once())
        ->method('names')
        ->with($this->vendor)
        ->will($this->returnValue($names));
    $this->gateway->expects($this->once())
        ->method('skus')
        ->with($this->vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction)
        ->will($this->returnValue($skus));
    $this->gateway->expects($this->once())
        ->method('filterTotals')
        ->will($this->returnValue($filterTotals));
    $this->gateway->expects($this->once())
        ->method('totalRecords')
        ->with($this->vendor)
        ->will($this->returnValue($totalRecords));
    $expectJson = '{"sEcho": '.$requestSequence.', "iTotalRecords": '.$totalRecords.', "iTotalDisplayRecords": '.$filterTotals.', "aaData": [ ["1","data1"],["2","data2"],["3","data3"]] }';
    $actualJson = $this->skusModel->skuTable($this->vendor, $this->parameters);
    $this->assertEquals($expectJson, $actualJson);
}
You will notice that with this unit test I'm not concerned with what the data looks like; $skus doesn't even look anything like the actual table schema. Just that I return records. Here is the actual code for the model:
public function skuTable($vendor, $parameterList) {
    $startRow = $parameterList['start_row'];
    $numRows = $parameterList['num_rows'];
    $sortCols = $parameterList['sort_col'];
    $search = $parameterList['search'];
    if ($search == null) {
        $search = "";
    }
    $requestSequence = $parameterList['request_sequence'];
    $direction = $parameterList['dir'];
    $names = $this->propertyNames($vendor);
    $skus = $this->skusList($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $filterTotals = $this->filterTotals($vendor, $names, $startRow, $numRows, $sortCols, $search, $direction);
    $totalRecords = $this->totalRecords($vendor);
    return $this->buildJson($requestSequence, $totalRecords, $filterTotals, $skus, $names);
}
The first part of the method breaks out the individual parameters from the $parameterList that I get from the GET request. The rest are calls to the gateway. Here is one of the methods:
public function skusList($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction) {
    return $this->skusGateway->skus($vendor, $names, $start_row, $num_rows, $sort_col, $search, $direction);
}
I've been using in-memory SQLite for my unit tests; it's really useful.

Editing a big imported file on a second page

This is mostly a theoretical question, since I can actually implement it either way, but it confuses me a bit. Suppose I present a user with a page to select an Excel file, which is then uploaded to the server. Server code parses the file and presents the user with another page with many options. The user can select and deselect some of them, edit names, and then click OK - after which the server has to process only the selected options.
The question may be:
is it better to store the parsed file in Session?
is it better to push the parsed data to the client's page and then receive it back?
Here's an example:
public class Data
{
    public string Name { get; set; }           // shown to user, can be changed
    public bool Selected { get; set; }         // this is in the ViewModel but anyway
    public string[] InternalData { get; set; } // not shown to user
}
// 1st option is to receive data via POST
public ActionResult ImportConfirmed(IList<Data> postitems)
{
    // 2nd option is to receive only user changes via POST
    var items = Session["items"] as IList<Data>;
    items = items.Where(postitems of same name selected);
    items.ForEach(set name to postitems name);
}
Obviously option #1 has fewer side effects, since it does not rely on global state. But with option #2 we don't push loads of useless-to-the-user data to the client - and that can be a lot.
Of course this problem is not new, and as always, the answer is: it depends.
I have to admit, I don't have any exact question in mind. I can't even tell why I don't like the Session solution, which takes only a couple of additional lines of code. The reason I ask is that I've read about the Weblocks concept and was very impressed. So I tried to invent something similar in ASP.NET MVC and failed. Thus I wonder: is there any elegant way to deal with such situations? By elegant I mean something that doesn't show it uses Session, is easy to use, handles expirations (cleans up the Session if the user does not press the final "Save" button), etc. Something like:
var data = parse(filestream);
var confirmationPostData = ShowView("Confirm", data);
items = items.Where(confirmationPostData of same name selected);
items.ForEach(set name to confirmationPostData name);
Here ShowView actually sends the GET, waits for the user's POST, and returns. Kind of. I do not insist on this; I just show the approach that impressed me (in Weblocks - if I actually understood it correctly).
Does everyone just use Session in such cases? Or is there a better way (other than learning Lisp, which I have already started investigating to see if I can cope with it)? Maybe async actions in MVC 2 do it?
UPDATE: storing in a DB/temp files works; I do sometimes store in a DB. However, this needs a way to expire the data, since the user may simply abandon it (as simply as closing the browser). What I'm asking for is whether there is a proven and elegant way to solve this - not how to do it. An abstraction built on top of serialization, not tied to a particular DB/file implementation - something like that.
I'm not sure what the purpose of uploading the Excel file is, but I like to make all actions that affect the long term state of the application, for the user, persisted. For example, what if the user uploads the file, changes a couple of options, then goes to lunch. If you store the info in session, it may be gone when they get back, ditto for storing it in the page with hidden variables. What about storing it in a DB?
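One way to sketch the database idea with Entity Framework is a small staging table keyed by an id that you pass to the confirmation page; this also makes expiration a trivial cleanup job (the PendingImport entity and the db context are hypothetical):
public class PendingImport
{
    public Guid Id { get; set; }
    public string UserName { get; set; }
    public DateTime CreatedAt { get; set; }
    public string PayloadJson { get; set; }   // the parsed rows, serialized
}

// On upload: parse, serialize, save, and pass only the Id to the confirmation page.
// On confirm: load by Id, apply the user's edits, process, then delete the row.
// Abandoned uploads can be swept up by a scheduled cleanup:
var cutoff = DateTime.UtcNow.AddHours(-24);
db.PendingImports.RemoveRange(db.PendingImports.Where(p => p.CreatedAt < cutoff));
db.SaveChanges();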
I would store the file in the temp folder and only associate the name of the file with the user session, so that it can be processed later:
// Create a temp file in the Temp folder and return its name:
var tempFile = Path.GetTempFileName();
// write to the temp file and put the filename into the session
// so that the next request can fetch the file and process it
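A rough sketch of how the two requests might fit together in a controller (HttpPostedFileBase is the classic ASP.NET MVC upload type; the action names, session key and ParseExcel helper are made up):
[HttpPost]
public ActionResult Upload(HttpPostedFileBase excelFile)
{
    var tempFile = Path.GetTempFileName();
    excelFile.SaveAs(tempFile);              // persist the upload to disk
    Session["ImportTempFile"] = tempFile;    // remember where it is for the next request
    var options = ParseExcel(tempFile);      // hypothetical parser producing IList<Data>
    return View("Confirm", options);
}

[HttpPost]
public ActionResult ImportConfirmed(IList<Data> postitems)
{
    var tempFile = (string)Session["ImportTempFile"];
    try
    {
        // re-read the temp file and apply the selections/names from postitems ...
        return RedirectToAction("Index");
    }
    finally
    {
        System.IO.File.Delete(tempFile);     // always clean up the temp file
        Session.Remove("ImportTempFile");
    }
}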
There's a flaw with the GetTempFileName that I once fell into because I didn't read the documentation carefully. It says that the method will start throwing exceptions if you have more than 65535 files in the temp folder. So remember to always delete the temp file once you've finished processing it.
Another alternative to the temp folder would be to store the file in a database, but I am a little skeptical about storing files inside a relational database.
