So here is the situation: I have a parent SWF that loads multiple child SWFs beneath it. I want to know if there is any way I can trust 100% that all of these child SWFs are mine.
For instance, when loading child SWF "b.swf" from "http://example.com/b.swf", is there any way I can guarantee that the SWF handed to me is mine, and not one that has been intercepted and modified with a tool like Fiddler before being passed along?
Something like checking its size or hash? I don't know; can any of you offer any help?
Well, you certainly can create a hash of the SWF you are going to load; MD5 is a commonly used algorithm for that. If, along with the URLs, you store hashes of the SWFs you are loading, it becomes very difficult to come up with a fake SWF that produces the same hash. (One caveat: MD5's collision resistance is considered broken these days, so prefer something from the SHA-2 family such as SHA-256 if you can; against a casual Fiddler-style tamperer, even MD5 raises the bar considerably.)
This mechanism is widely used for software distribution: many open-source projects publish MD5 checksums alongside their programs and installers, and Maven uses checksums to make sure the libraries it downloads are genuine. So that sounds like the way to go.
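The comparison step itself is only a few lines in most languages. Here is a minimal sketch in Delphi (assuming a recent RTL with the System.Hash unit; IsTrustedPayload is a hypothetical name, and the expected hash would be baked in alongside your URL list):

uses
  System.Classes, System.SysUtils, System.Hash;

// Hypothetical helper: hash the downloaded bytes and compare them with
// the hash shipped next to the URL; any in-flight change alters the hash.
function IsTrustedPayload(Data: TStream; const ExpectedMD5: string): Boolean;
var
  Hasher: THashMD5;
  Buf: TBytes;
begin
  Hasher := THashMD5.Create;
  Data.Position := 0;
  SetLength(Buf, Data.Size);
  if Length(Buf) > 0 then
    Data.ReadBuffer(Buf[0], Length(Buf));
  Hasher.Update(Buf);
  Result := SameText(Hasher.HashAsString, ExpectedMD5);
end;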
I need to manage a large number of diverse embedded resources (such as WAV files, pictures, etc.).
For images, we have things like TImageList, which makes it very easy and comfortable to embed, access, and use icons at runtime.
I wonder if there is an easy, comfortable library component for other resources too, such as WAV files, so I can access them just as easily, populate menus with them, or play them, for example:
PlayWav(WavLibrary.ItemIndex(1));
instead of
PlayWav('C:\Users\Documents\sounds\wav1.wav');
which is obviously error-prone and needs loads of extra handling (deploying the files to the right directory at install time, ensuring permissions, preventing them from being deleted or copied, enumerating them, and finally making sure to call the right path from the app at runtime).
Doing it with .rc files and resource streams, etc. (as in the sketch below) seems comparatively cumbersome.
If there is a general library component that can manage embedded resources of other kinds (not just WAV), I would also like to know about it.
If there isn't, how would you go about this in the easiest, most time-saving way?
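For reference, here is roughly what the resource-stream route looks like (a minimal sketch assuming a Windows VCL app; the resource name WAV1 and the sounds.rc file are made up, and error handling is omitted):

// sounds.rc, compiled into the EXE via {$R sounds.res}:
//   WAV1 RCDATA "wav1.wav"

uses
  System.Classes, Winapi.Windows, Winapi.MMSystem;

procedure PlayEmbeddedWav(const ResName: string);
var
  Res: TResourceStream;
begin
  Res := TResourceStream.Create(HInstance, ResName, RT_RCDATA);
  try
    // SND_MEMORY plays straight from the resource bytes; SND_SYNC keeps
    // the buffer alive until playback is done, so freeing below is safe.
    PlaySound(PChar(Res.Memory), 0, SND_MEMORY or SND_SYNC);
  finally
    Res.Free;
  end;
end;

// Usage: PlayEmbeddedWav('WAV1');

It is not a one-liner per sound, but a small wrapper like this gets you fairly close to the PlayWav(WavLibrary.ItemIndex(1)) style of call described above.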
What is the best way to load huge text-file data in Delphi? Is there any component that can load a text file super fast?
Let's say I have a text file that contains a database stored in fixed-length format.
It contains 150 fields, each at least 50 characters long.
1. I need to load it into memory
2. I need to parse it and probably store it in a memory dataset for processing
My questions:
1. Is it enough to use the TStringList.LoadFromFile method?
2. Is there a better component for manipulating the text file?
3. Should I use low-level reading from the text file?
Thank you in advance.
TStringList is never the optimal way of working with lots of text, but it's the simplest. If you've got small files on your hands, you can use TStringList without issues. Even with large files (though not huge ones), you might implement a first version of your algorithm using TStringList for testing purposes, because it's simple and easy to understand.
If your files are large, as they probably are since you call them "databases", you need to look into alternative technologies that will enable you to read only as much as you need from the database. Look into:
TFileStream
Memory-mapped files.
Don't bother with the old "file"-based APIs still available in Delphi; they're simply outdated.
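For illustration, a minimal memory-mapping sketch using the Windows API (error handling omitted; the record length is an assumption based on the question's 150 fields of 50 characters plus a CRLF):

uses
  Winapi.Windows, System.SysUtils;

function MapRecord(const FileName: string; RecNo: Integer): AnsiString;
const
  RecLen = 150 * 50 + 2; // assumed record size incl. trailing CRLF
var
  hFile, hMap: THandle;
  P: PAnsiChar;
begin
  hFile := CreateFile(PChar(FileName), GENERIC_READ, FILE_SHARE_READ, nil,
    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
  P := MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);
  try
    // The whole file is now addressable; a record is just an offset.
    SetString(Result, P + RecNo * RecLen, RecLen - 2);
  finally
    UnmapViewOfFile(P);
    CloseHandle(hMap);
    CloseHandle(hFile);
  end;
end;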
I'm not going to go into details on how to access text using those methods because we've recently had two similar questions on SO:
How Can I Efficiently Read the First Few Lines of Many Files in Delphi
and
Fast Search to see if a String Exists in Large Files with Delphi
Since you're working with fixed-length records, you can build an access class based on TList, with a TWriter and TReader that take your record layout into account. You'll have none of the overhead of a TStringList (not that that's a bad thing, but if you don't need it, why pay for it), and you can build record access directly into the class.
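A rough sketch of that idea (using a plain TFileStream for the seeking rather than TReader/TWriter, which are geared more toward Delphi's component-streaming format; the record length is again assumed from the question):

uses
  System.Classes, System.SysUtils;

type
  // Hands out one fixed-length record at a time via random access.
  TFixedRecordFile = class
  private
    FStream: TFileStream;
  public
    const RecLen = 150 * 50 + 2; // assumed: 150 fields x 50 chars + CRLF
    constructor Create(const FileName: string);
    destructor Destroy; override;
    function Count: Int64;
    function ReadRecord(RecNo: Integer): AnsiString;
  end;

constructor TFixedRecordFile.Create(const FileName: string);
begin
  FStream := TFileStream.Create(FileName, fmOpenRead or fmShareDenyWrite);
end;

destructor TFixedRecordFile.Destroy;
begin
  FStream.Free;
  inherited;
end;

function TFixedRecordFile.Count: Int64;
begin
  Result := FStream.Size div RecLen;
end;

function TFixedRecordFile.ReadRecord(RecNo: Integer): AnsiString;
begin
  // Seek straight to the record; nothing else is read or buffered.
  FStream.Position := Int64(RecNo) * RecLen;
  SetLength(Result, RecLen - 2);
  FStream.ReadBuffer(Result[1], Length(Result));
end;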
Ultimately it depends on what you are trying to accomplish with the data once you have it loaded into memory. While TStringList is easy to use, it isn't as efficient as "rolling your own".
However, efficiency of data manipulation may not be much of an issue, given that you are using text files to hold a database. If you just need to read the data in and make decisions based on it, the more flexible TList may be overkill.
I recommend sticking with TStringList if you find it convenient for your problem. Optimization is something that can be done later.
As for TStringList, the optimization is to declare a descendant class that overrides the TStrings.LoadFromStream method; by taking the structure of your files into account, you can make it practically as fast as possible.
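A minimal sketch of such a descendant, under the question's fixed-length assumption (the record size and ANSI encoding are guesses that a real version would need to confirm against the actual file layout):

uses
  System.Classes, System.SysUtils;

type
  TFixedRecStringList = class(TStringList)
  public
    procedure LoadFromStream(Stream: TStream); override;
  end;

procedure TFixedRecStringList.LoadFromStream(Stream: TStream);
const
  RecLen = 150 * 50 + 2; // assumed: 150 fields x 50 chars + CRLF
var
  Buf: TBytes;
  Offset: Integer;
begin
  // One bulk read instead of line-by-line scanning.
  SetLength(Buf, Stream.Size - Stream.Position);
  if Length(Buf) > 0 then
    Stream.ReadBuffer(Buf[0], Length(Buf));
  BeginUpdate;
  try
    Clear;
    Capacity := Length(Buf) div RecLen; // pre-allocate, no re-growing
    Offset := 0;
    while Offset + RecLen <= Length(Buf) do
    begin
      Add(TEncoding.ANSI.GetString(Buf, Offset, RecLen - 2));
      Inc(Offset, RecLen);
    end;
  finally
    EndUpdate;
  end;
end;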
It is not entirely clear from your question why you need to load the entire file into memory before going on to create an in-memory data set. Are you conflating the two issues? That is, do you think that because you need to create an in-memory data set, you must first load the source data entirely into memory? Or is there some initial pre-processing of the source file that is only possible with the whole file loaded? (The latter is unlikely, and even if it is the case, it isn't necessary with a navigable stream object such as a TFileStream.)
But I think the answer you are looking for is right there in your question...
If you are loading this file in order to parse it and populate/initialise a further data structure (the data set) for further processing, then using an existing high-level data structure in between is an unnecessary and potentially costly (in terms of time) step.
Use the lowest-level means of access that provides the capabilities you need.
In this case, a TFileStream will likely provide the best balance of efficiency and ease of use.
I would like to know if there are better ways to initialize a large collection of same-type instances. This problem is not limited to Swift, but Swift is what I am using in this case.
Take, for example, a large list of API endpoints. Suppose I have 100 endpoints in this API, and each of them shares some common functionality, such as headers, parameter lists, parsing formats, etc., albeit with different values for each of these "options".
I could think of a few different ways to express 100 endpoints:
Create a resource file with all of the values and read them in from the file at app launch. The problem with this is that it becomes stringly typed, with potential for typos and/or lots of copy/pasted key values. This would include plist files, JSON files, SQLite tables, CSV files, etc. It centralizes and condenses the data, but it doesn't seem maintenance-friendly or particularly Swift-like. Furthermore, resource files seem harder to obfuscate should the details be somewhat private.
Create a giant enum-ish function with all of the API endpoint initialization code blobbed together in the same area/function/file. This would be the equivalent of a giant switch statement, or of a collection literal with all the instantiation happening in one spot. The advantage is that it can be strongly typed, and it is contained in one area, much as a resource file would be. However, it will be a BIG file with lots of scrolling. Maybe too big?
Create a separate file/module/instance/subtype for each endpoint and, more or less, hardcode computed properties inside each instance. This might mean creating an extension and/or subclass per endpoint and putting each in its own Swift file. It limits the visual scope of each endpoint, but it also just turns your project's file list into the blob of data instead.
I'm wondering if there are philosophical arguments for any of these options, or whether there are other options I have not thought of. Is it preference? Are there best practices for initializing a large collection of what amounts to a bunch of complex literals?
If you have lots of static data or machine-generated classes like this, consider the advice in WWDC 2016's Optimizing App Startup Time session. It's a great talk. The loader has to initialize and fix up all your static object instances and classes; if you have a lot of them, your app's load time will suffer.
For static data, one piece of advice is to use Swift, which you've already done, as Swift knows to defer the instantiations until runtime.
Swift doesn't help with mass-produced classes, though you can switch to structs instead.
Even ignoring the startup-time issue, I'd err on the side of being data-driven: option 1. Less code to maintain. IMHO there's nothing wrong with stringly typed data here; this code is unlikely to change much, and adding endpoints will be trivial. It's cool to see new functionality appear when you didn't even write new code!
Can anyone (maybe an XSL fan?) help me find any advantages of handling the presentation of data on a web page with XSL rather than ASP.NET MVC?
The two alternatives are:
ASP.NET (MVC/WebForms) with XSL
Getting the data from the database and transforming it to XML, which is then displayed on the different pages with XSL templates.
ASP.NET MVC
Getting the data from the database as C# objects (or LINQ to SQL/EF objects) and displaying it with inline code on MVC pages.
The main benefit of XSL, for me, has been consistent display of data across many different pages, like WebControls. But correct me if I'm wrong: ASP.NET MVC can be used the same way, only with strongly typed objects. Please help me see whether any benefits to XSL remain.
The main benefits I can see in employing XSLT to transform your data and display it to the user are the following:
The data is already in an XML format
The data follows a well-defined schema (this makes using tools like XMLSpy much easier).
The data needs to be transformed into a number of different output formats, e.g. PDF, WML, and HTML
If this is to be the only output for your data, and it is not in XML format, then XSLT might not be the best solution.
Likewise, if user interaction is required (such as editing the data), you will end up employing back-end code anyway to handle updates, so XSLT might prove one technology too far...
I've always found two main issues when working with XML transformations:
Firstly, they tend to be quite slow: the whole XML file must be parsed and validated before you can do anything with it. Being XML, it's also excessively verbose, and therefore larger than it needs to be.
Secondly, the way transformations are coded is a bit of a pain: custom tools like XMLSpy help, but it's still a different model from what most developers are used to.
At the moment, MVC is very quick and looks very promising, but it does suffer from the traditional web-development blight of <% and %> bee-stings all over your code. Using XML transformations avoids that, but they are much harder to read and maintain.
I've used that technique in the past, and there are applications where we use it at my current place of employment. (I will admit I am not totally a fan of it, but I'll play devil's advocate.) Dynamic styling really is one of the main advantages, and pushing the idea can be kind of neat: you can create the XSL on the fly and change the look and feel of a page on a whim. Is it possible to do this with the other methods? Yes, but it's really easy to build a program that modifies an XML/XSL document on the fly.
If you think of it as using XSL to transform one XML document into another and displaying it as HTML (which is really what you're doing), you're opening up your system to allow other programs to access the data on the page via XML. You can do this with the other methods too, but an XSL transformation forces you to output XML every time.
I would tread lightly when building a system this way. You'll find a lot of pitfalls you aren't expecting, and if you don't know XSL really, really well, there is going to be a learning curve as well.
Check this out if you want to use XSLT and ASP.NET MVC:
http://www.bleevo.com/2009/06/aspnet-mvc-xslt-iviewengine/
Jafar Husain offers a few advantages in his proposal for Pretty XSL, primarily caching of the stylesheet to improve page-load time and reduce the size of your data. Steve Sanderson proposed a slightly different approach using JavaScript as the controller here.
Another, similar approach would be to use XForms, though the best support for it is through a JavaScript library.
If you are only going to display data from a DB, XSL templates may be a convenient solution; but if you are going to handle user interaction... hm, I don't think it will be maintainable at all.