Using Dart for an HTML5 app, but want to load a file from the server side

I'm new to Dart, and trying to create my first Dart web game. Sorry if I missed an answered question related to this. I did search, but wasn't having much luck.
To load a level, I would like to read in a text file with the level data, then process it and use it to build the level. Unfortunately, I am running into the issue that dart:io and dart:html cannot both be imported, so I can't use dart:io's File object. From what I can tell, dart:html's File object is client-side, so it would not be able to open the text file I will have on the server.
Is there another way to read in a text file from the server? If not, do I need to set up a database just to store the game data, or is there a better option I'm not thinking about here?
In case it helps, the game data I'm working with currently is just a text file that gives a map of what the level will look like. For example:
~~~~Z~~~~
P
GGGGLLGGG
Each of those characters would denote a type of block to be placed in the level. It's not the best way to store levels, but it is pretty easy to create and easy to read in and process.
Thanks so much for the help!

If the file you are loading is a sibling of the index.html your game is loaded from, then you can just make an HTTP request to fetch the file.
To download web/level1.json you can use
Future<String> getGameData(String name) async {
  var response = await HttpRequest.getString('${name}.json');
  print(response);
  return response;
}

otherMethod() async {
  var data = await getGameData('level1');
}
See also https://api.dartlang.org/stable/1.24.3/dart-html/HttpRequest-class.html
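If you keep the plain-text map format from the question, the fetched string is easy to turn into a grid. Here is a minimal sketch; the Level class and the .txt file name are assumptions, not part of the question:
class Level {
  // grid[row][col] holds one block character, e.g. 'G' or 'Z'.
  final List<List<String>> grid;
  Level(this.grid);
}

Level parseLevel(String text) {
  // Each non-empty line of the file is one row; each character is one block.
  var rows = text
      .split('\n')
      .where((line) => line.isNotEmpty)
      .map((line) => line.split(''))
      .toList();
  return new Level(rows);
}

loadLevel() async {
  var level = parseLevel(await HttpRequest.getString('level1.txt'));
  print(level.grid.length);
}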

Related

Where in the EventStore Admin site can I view my saved events?

By the way, how do you create a stream? I use AppendToStreamAsync directly; is this right, or should I create a stream first and then append to it?
I also tried performing some tests, but with the methods below I can write events to EventStore yet can't read events back from it.
And the most important question: how do I view my saved events in the EventStore Admin site?
Here is the code:
public async Task AppendEventAsync(IEvent @event)
{
    try
    {
        var eventData = new EventData(@event.EventId,
            @event.GetType().AssemblyQualifiedName,
            true,
            Serializer.Serialize(@event),
            Encoding.UTF8.GetBytes("{}"));
        var writeResult = await connection.AppendToStreamAsync(
            @event.SourceId.ToString(),
            @event.AggregateVersion,
            eventData);
        Console.WriteLine(writeResult);
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex);
    }
}
public async Task<IEnumerable<IEvent>> ReadEventsAsync(Guid aggregateId)
{
    var ret = new List<IEvent>();
    StreamEventsSlice currentSlice;
    long nextSliceStart = StreamPosition.Start;
    do
    {
        currentSlice = await connection.ReadStreamEventsForwardAsync(
            aggregateId.ToString(), nextSliceStart, 200, false);
        if (currentSlice.Status != SliceReadStatus.Success)
        {
            throw new Exception($"Aggregate {aggregateId} not found");
        }
        nextSliceStart = currentSlice.NextEventNumber;
        foreach (var resolvedEvent in currentSlice.Events)
        {
            ret.Add(Serializer.Deserialize(resolvedEvent.Event.EventType, resolvedEvent.Event.Data));
        }
    } while (!currentSlice.IsEndOfStream);
    return ret;
}
Streams are created automatically as you write events. You should follow the recommended naming convention, though, as it enables a few features out of the box.
await Connection.AppendToStreamAsync("CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba", expectedVersion, creds, eventData);
It is recommended to name your streams "category-id" (where the category, in our case, is the aggregate name), as we are using a DDD+CQRS pattern:
CustomerAggregate-b2c28cc1-2880-4924-b68f-d85cf24389ba
The stream grows as you write more events to the same stream name.
The first event's ID becomes the "aggregate ID" in our case, and each
event ID after that is unique. The only way to recreate our aggregate is
to replay its events in sequence; if the sequence fails, an exception is thrown.
The reason to use this naming convention is that Event Store runs a few default internal projections for your convenience. The documentation about them is rather convoluted; the projections are:
$by_category
$by_event_type
$stream_by_category
$streams
By Category
"By category" basically means there is a stream, created by an internal projection, that groups events by category. For our CustomerAggregate we subscribe to the $ce-CustomerAggregate stream, and we see only events of that "category" regardless of their IDs. The event data contains everything we need thereafter.
We use persistent subscribers (small C# console applications) which are set up to work with $ce-CustomerAggregate, as sketched below. Persistent subscribers are great because they remember the last event your client acknowledged, so if the application crashes, you can start it again and it resumes from the last place it finished.
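A minimal sketch of such a subscriber using the .NET client; the subscription group name and credentials are assumptions, and the group must have been created beforehand (via the Admin site or CreatePersistentSubscriptionAsync):
var credentials = new UserCredentials("admin", "changeit");

await connection.ConnectToPersistentSubscriptionAsync(
    "$ce-CustomerAggregate",
    "customer-readmodel",          // hypothetical subscription group
    (subscription, resolvedEvent) =>
    {
        // Project the event into your read model here.
        Console.WriteLine(resolvedEvent.Event.EventType);
        subscription.Acknowledge(resolvedEvent); // server remembers progress
    },
    userCredentials: credentials,
    autoAck: false);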
This is where Event Store starts to shine and stand out from other "event store implementations".
Viewing your events
The example with persistent subscribers is one way to set things up using code.
You cannot really view "all" your data in the Admin site. The purpose of the Admin site is to manage projections, manage users, see some statistics, create some projections, and get a recent view of streams and events only. (If you know the IDs you can construct the URLs as you need them, but you can't search for them.)
If you want to see ALL the data, then you use the RESTful API, for example with something like Postman. Maybe there is third-party software that can present the data in a grid-like viewer, but I am not aware of any; it would probably just hook into the REST API anyway, and you could create your own visualiser that way quite quickly.
Back to code: you can also always read all events from 0 using one of the client libraries. Incidentally, with DDD+CQRS you always read an aggregate's stream from 0 to rebuild its state, and you can do the same for other requirements.
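Using the ReadEventsAsync method from the question, that rebuild might look like the following sketch; CustomerAggregate and its Apply method are hypothetical domain types:
var aggregate = new CustomerAggregate();
foreach (var @event in await ReadEventsAsync(aggregateId))
{
    aggregate.Apply(@event); // re-applies each event in sequence
}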
In some cases, looking at snapshots can make replaying events a lot faster, if you have an extremely large stream to deal with.
Paradigm shift
Event Store has quite a learning curve and is a paradigm shift from conventional transactional databases. Event Store's best friend is CQRS; we use a slightly modified version of the CQRS Lite open source framework.
To truly appreciate Event Store you need to understand DDD concepts and then dig into CQRS/ES; there are a few good YouTube videos and examples.

Is it OK to open a DB4o file for query, insert, update multiple times?

This is the way I am thinking of using DB4o. When I need to query, I would open the file, read and close:
using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
{
    try
    {
        List<Pilot> pilots = db.Query<Pilot>().ToList<Pilot>();
    }
    finally
    {
        try { db.Close(); }
        catch (Exception) { }
    }
}
At some later time, when I need to insert, then
using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
{
    try
    {
        Pilot pilot1 = new Pilot("Michael Schumacher", 100);
        db.Store(pilot1);
    }
    finally
    {
        try { db.Close(); }
        catch (Exception) { }
    }
}
In this way, I thought I would keep the file tidier by only having it open when needed and closed most of the time. But I keep getting an InvalidCastException:
Unable to cast object of type 'Db4objects.Db4o.Reflect.Generic.GenericObject' to type 'Pilot'
What's the correct way to use DB4o?
No, it's not a good idea to work this way. db4o ObjectContainers are intended to be kept open all the time your application runs. A couple of reasons:
db4o maintains a reference system to identify persistent objects, so it can perform updates when you call #store() on an object that is already stored (instead of storing new objects). This reference system is closed when you close the ObjectContainer, so updates won't work.
Class metadata would have to be read from the database file every time you reopen it, and db4o would also have to analyze the structure of all persistent classes again when they are used. While both operations are quite fast, you probably don't want this overhead every time you store a single object.
db4o has very efficient caches for class and field indexes and for the database file itself. If you close and reopen the file, you take no advantage of them.
The way you have set up your code, there could also be failures when you work with multiple threads. What if two threads want to open the database file at exactly the same time? A db4o database file can be opened only once. It is possible to run multiple transactions and multiple threads against the same open instance, and you can also use Client/Server mode if you need multiple transactions.
Later on you may like to try Transparent Activation and Transparent Persistence. Transparent Activation lazily loads object members when they are first accessed. Transparent Persistence automatically stores all objects that were modified in a transaction. For Transparent Activation (TA) and Transparent Persistence (TP) to work you certainly have to keep the ObjectContainer open.
You don't need to worry about constantly having an open database file. One of the key targets of db4o is embedded use in (mobile) devices. That's why we have written db4o in such a way that you can turn your machine off at any time without risking database corruption, even if the file is still open.
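One common way to follow this advice is to open the ObjectContainer once at startup and share that single instance. A minimal sketch, where the Database class name and the file name are assumptions:
public static class Database
{
    // Opened once for the whole application lifetime.
    private static readonly IObjectContainer container =
        Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), "app.yap");

    public static IObjectContainer Instance
    {
        get { return container; }
    }

    // Call only at application shutdown.
    public static void Shutdown()
    {
        container.Close();
    }
}

// Usage: query and store against the same open instance.
var pilots = Database.Instance.Query<Pilot>().ToList<Pilot>();
Database.Instance.Store(new Pilot("Michael Schumacher", 100));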
Possible reasons why you are getting a GenericObject back instead of a Pilot object:
This can happen when the assembly name of the assembly that contains the Pilot class has changed between two runs, either because you let Visual Studio autogenerate the name or because you changed it by hand.
Maybe "db4o" is part of your assembly name? One of the recent builds was too agressive at filtering out internal classes. This has been fixed quite some time ago. You may like to download and try the latest release, "development" or "production" should both be fine.
In a presentation I once saw really weird symptoms when db4o ObjectContainers were opened in a "using" block. You probably want to work without that anyway and keep the db4o ObjectContainer open all the time.
It is OK to reopen the database multiple times. The problems are performance and losing object "identity". Also, you can't keep a reference to the result of a query and iterate it after closing the db (based on your code, it looks like you want to do that).
GenericObjects are instantiated when the class cannot be found.
Can you provide a full, minimalist, sample that fails for you?
Also, which db4o version are you using?
Best

Editing a big imported file on a second page

This is mostly a theoretical question, since I can actually implement it either way, but it confuses me a bit. Suppose I present a user with a page to select an Excel file, which is then uploaded to the server. Server code parses the file and presents the user with another page with many options. The user can select and deselect some of them, edit names, and then click OK, after which the server has to process only the selected options.
The question is:
1) is it better to store the parsed file in the Session?
2) is it better to push the parsed data to the client's page and then receive it back?
Here's an example:
public class Data
{
    public string Name { get; set; } // shown to user, can be changed
    public bool Selected { get; set; } // this is in the ViewModel, but anyway
    public string[] InternalData { get; set; } // not shown to user
}

// Option #2: receive the full data back via POST
public ActionResult ImportConfirmed(IList<Data> postitems)
{
    // Option #1: keep the parsed data in Session and receive only the user's changes via POST
    var items = Session["items"] as IList<Data>;
    items = items.Where(/* pseudocode: postitems of same name selected */);
    items.ForEach(/* pseudocode: set name to postitems name */);
}
Obviously option #2 has fewer side effects, since it does not rely on global state. But with option #1 we don't push loads of useless-to-user data to the client, and that can be a lot.
Of course this problem is not new, and as always, the answer is: it depends.
I have to admit, I don't have an exact question in mind. I can't even tell why I don't like the Session solution, which takes only a couple of additional lines of code. The reason I ask is that I've read about the Weblocks concept and was very impressed. So I tried to invent something similar in ASP.NET MVC and failed. Thus I wonder: is there an elegant way to deal with such situations? By elegant I mean something that doesn't show it uses Session, is easy to use, and handles expirations (cleans up the Session if the user never presses the final "Save" button). Something like:
var data = parse(filestream);
var confirmationPostData = ShowView("Confirm", data);
items = items.Where(/* confirmationPostData of same name selected */);
items.ForEach(/* set name to confirmationPostData name */);
Here ShowView actually sends a GET, waits for the user's POST, and returns. Kind of. I do not insist; I just show the approach that impressed me (in Weblocks, if I actually understood it correctly).
Does everyone just use Session in such cases? Or is there a better way (other than learning Lisp, which I have already started to investigate to see if I can cope with it)? Maybe async actions in MVC v2 do it?
UPDATE: storing in the DB or in temp files works; I do sometimes store in the DB. However, this needs a way to expire the data, since the user may simply abandon it (as simply as closing the browser). What I'm asking for is a proven and elegant way to solve this, not how to do it: an abstraction built on top of serialization, not tied to a particular DB/file implementation, something like this.
I'm not sure what the purpose of uploading the Excel file is, but I like to make all actions that affect the long-term state of the application persistent for the user. For example, what if the user uploads the file, changes a couple of options, then goes to lunch? If you store the info in session, it may be gone when they get back; ditto for storing it in the page with hidden variables. What about storing it in a DB?
I would store the file in the temp folder and only associate the name of the file with the user session, so that it can be processed later:
// Create a temp file in the Temp folder and return its name:
var tempFile = Path.GetTempFileName();
// write to the temp file and put the filename into the session
// so that the next request can fetch the file and process it
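Filling that comment in, a minimal sketch of the upload action; the action, parameter, and session key names are assumptions. Remember to delete the temp file once processing is done, for the reason below:
[HttpPost]
public ActionResult Upload(HttpPostedFileBase excelFile)
{
    // Save the upload to a temp file and keep only its path in session.
    var tempFile = Path.GetTempFileName();
    excelFile.SaveAs(tempFile);
    Session["uploadedExcelPath"] = tempFile;
    return RedirectToAction("ConfirmOptions");
}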
There's a flaw with GetTempFileName that I once fell into because I didn't read the documentation carefully: the method starts throwing exceptions if you have more than 65535 files in the temp folder. So remember to always delete the temp file once you've finished processing it.
An alternative to the temp folder would be to store the file in a database, but I am a little skeptical about storing files inside a relational database.

ASP.NET MVC - Sharing Session State Between Controllers

I am still mostly unfamiliar with Inversion of Control (although I am learning about it now), so if that is the solution to my question, just let me know and I'll get back to learning about it.
I have a pair of controllers which need to share a Session variable. Naturally, nothing too special has to happen because of how Session works in the first place, but this got me wondering what the cleanest way is to share related objects between two separate controllers. In my specific scenario I have an UploadController and a ProductController which work in conjunction with one another to upload image files. As files are uploaded by the UploadController, data about the upload is stored in the Session. After this happens I need to access that Session data in the ProductController. If I create a get/set property for the Session variable containing my upload information in both controllers, I'll be able to access that data, but at the same time I'll be violating all sorts of DRY, not to mention creating an (at best) confusing design where an object is shared and modified by two completely disconnected objects.
What do you suggest?
Exact Context:
A file upload View posts a file to UploadController.ImageWithpreview(), which reads in the posted file and copies it to a temporary directory. After saving the file, another class produces a thumbnail of the uploaded image. The paths to both the original file and the generated thumbnail are then returned with a JsonResult to a JavaScript callback, which updates some dynamic content in a form on the page that can be "Saved" or "Cancelled". Whether the uploaded image is saved or skipped, I need to either move or delete both it and the generated thumbnail from the temporary directory. To facilitate this, UploadController keeps track of all the uploaded files and their thumbnails in a Session-maintained Queue object.
Back in the View: after the form is populated with a generated thumbnail of the uploaded image, the form posts back to the ProductsController, where the selected file is identified (currently I store the filename in a hidden field, which I realize is a horrible vulnerability) and then copied out of the temp directory to a permanent location. Ideally, I would like to simply access the Queue I have stored in the Session, so that the form does not need to contain the image location as it does now. This is how I have envisioned my solution, but I'll eagerly listen to any comments or criticisms.
A couple of solutions come to mind. You could use a "SessionState" class that maps into the request and gets/sets the info as such (I'm doing this from memory so this is unlikely to compile and is meant to convey the point):
internal class SessionState
{
    public string ImageName
    {
        get { return (string)HttpContext.Current.Session["ImageName"]; }
        set { HttpContext.Current.Session["ImageName"] = value; }
    }
}
And then from the controller, do something like:
var sessionState = new SessionState();
sessionState.ImageName = "xyz";
/* Or */
var imageName = sessionState.ImageName;
Alternatively, you could create a controller extension method:
public static class SessionControllerExtensions
{
    public static string GetImageName(this IController controller)
    {
        return (string)HttpContext.Current.Session["ImageName"];
    }

    public static void SetImageName(this IController controller, string imageName)
    {
        HttpContext.Current.Session["ImageName"] = imageName;
    }
}
Then from the controller:
this.SetImageName("xyz");
/* or */
var imageName = this.GetImageName();
This is certainly DRY. That said, I don't particularly like either of these solutions, as I prefer to store as little data as possible, if any, in session. But if your intent is to hold onto all of this information without having to load or discern it from some other source, this is the quickest (dirtiest) way I can think of to do it. I'm quite certain there's a much more elegant solution, but I don't have all of the information about what you're trying to do and what the problem domain is.
Keep in mind that when storing information in the session, you will have to dehydrate/rehydrate the objects via serialization and you may not be getting the performance you think you are from doing it this way.
Hope this helps.
EDIT: In response to additional information
Not sure where you're looking to deploy this, but processing images in real time is a surefire way to be hit with a DoS attack. My suggestion is as follows, assuming this is public facing and anyone can upload an image:
1) Allow the user to upload an image. This image goes into a processing queue for background processing by the application or some service. Additionally, the name of the image goes into the user's personal processing queue, likely a table in the database. Information about background processing in a web app can be found under "Schedule a job in hosted web server".
2) Process these images and, while processing, display a "processing" graphic. You can have an AJAX request on the product page that checks for images being processed and tries to reload them every X seconds.
3) While an image is being "processed", the user can opt out of processing, assuming they're the one who uploaded the image. This is available either on the product page(s) that display the image or on a separate "user queue" view that allows them to remove the image from consideration.
So, you end up with some more domain objects and those objects are managed by the queue. I'm a strong advocate of convention over configuration so the final destination of the product image(s) should be predefined. Something like:
images/products/{id}.jpg or, if a collection, images/products/{id}/{sequence}.jpg.
You then don't need to know the destination in the form. It's the same for all images.
The queue then needs to know where the temp image was uploaded and what the product id was. The queue worker pops items from the queue, processes them, and stores them accordingly, as sketched below.
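A minimal sketch of such a worker; the queue type, job shape, and paths are assumptions:
public class UploadJob
{
    public int ProductId { get; set; }
    public string TempPath { get; set; }
}

public class ImageQueueWorker
{
    private readonly ConcurrentQueue<UploadJob> queue;

    public ImageQueueWorker(ConcurrentQueue<UploadJob> queue)
    {
        this.queue = queue;
    }

    public void ProcessPending()
    {
        UploadJob job;
        while (queue.TryDequeue(out job))
        {
            // Store the final image at the conventional, predefined path.
            var destination = Path.Combine("images/products", job.ProductId + ".jpg");
            File.Move(job.TempPath, destination);
        }
    }
}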
I know this sounds a little more "structured" than what you originally intended, but I think it's a little cleaner.
Is there complete equivalence between the UploadController and ProductController?
As files are uploaded by the UploadController, data about the upload is stored in the Session. After this happens I need to access that Session data in the ProductController.
As I read it, the UploadController needs read and write access to the upload data, while the ProductController needs only read access.
If that's true, then you can make it clear by using an immutable wrapper around the upload information and have the UploadController put that into the session.
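A minimal sketch of such a wrapper; the property names are assumptions based on the scenario:
// Immutable: the ProductController can read the upload info but not modify it.
public class UploadInfo
{
    private readonly string imagePath;
    private readonly string thumbnailPath;

    public UploadInfo(string imagePath, string thumbnailPath)
    {
        this.imagePath = imagePath;
        this.thumbnailPath = thumbnailPath;
    }

    public string ImagePath { get { return imagePath; } }
    public string ThumbnailPath { get { return thumbnailPath; } }
}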
The Session itself is by definition a public shared noticeboard; it decouples explicit relationships at the cost of allowing anyone to get and put. You could instead allow the ProductController to know about the UploadController and hence remove the need for passing the upload information via the session, but my instinct is that the upload info is of public interest, so using Session is reasonable.
I don't see any DRY violation here; we are explicitly trying to separate responsibilities.

AJAX pattern in Rails for submitting small chunks of data

I have a web page with lots of small images on it. In a typical scenario, the user clicks on an image and expects it to be replaced with a new one.
Requirements:
When the user clicks on an image, the controller should know about it immediately, in an Ajax way.
Some strings should be passed to the controller when the user clicks on the image.
The controller does its job and returns another image (which replaces the old one).
Along with the image, the controller returns a couple of extra strings (such as a completion status).
The web page updates the old image with the new one and also updates other parts with these new strings.
The number of images on a page varies, but potentially it can be a couple of dozen.
Question: what AJAX technique should be used here? I'm quite new to AJAX and don't feel solid with the patterns. Should it be JSON or something else?
Any code example would be very very welcome and helpful.
Thank you.
Well, it sounds like you need a click event observer on the image element. The image element could carry various custom attributes, such as imageid="2", and when the observed element is clicked you read those attributes and pass them along in an AJAX call. I'm not sure whether the image is known to the database or only available on the page itself (maybe via a back/previous button?). In either case, the AJAX call could either return JavaScript directly, which gets evaluated to update the DOM and replace the image's source with the new one, or it could return a JSON response, which gets parsed by the AJAX callback, which then updates the DOM. Returning JS code is the easiest, but I prefer to have all my JavaScript in one file and not scattered around, mixed with server-side code.
It really depends on what AJAX library you are using.
With jQuery, you might do something like this.
$("#buttonImage").click(function () {
var imageid = $(this).attr('imageid');
$.getJSON("/controller/get_image/" + imageid,
function(data){
$("#buttonImage").attr("src", data.imagesrc);
});
});
And your /controller/get_image/123 would return a JSON response like:
{ "imagesrc": "/my/image.jpg" }
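On the Rails side, the corresponding action might look like the following sketch; the controller name, route, and next_src method are assumptions:
class ImagesController < ApplicationController
  def get_image
    image = Image.find(params[:id])
    # next_src is a hypothetical method returning the replacement image URL
    render :json => { :imagesrc => image.next_src, :status => 'ok' }
  end
end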
As far as I know, the only browser-safe way to change an image is by assigning a new URL to its src attribute. If you return an image from a request that passes some parameters, it might prevent client-side caching of the images. For these reasons, I would handle the transfer of textual data and images separately.
The completion status can always be returned as the HTTP status text, but if more information is needed from the server, you can always return it in JSON or XML, JSON being the simpler.
The responsiveness could be improved by preloading images on the mouseover event.
