Does a transformed Dart app need to be one file?

Let's say I'm making an app where I need to push updates to the client side regularly. Most of these updates would only ever affect my own code, never the libraries that make up my dependencies. As far as I'm aware, the transformer invoked by pub build takes all my libraries, dependencies, and whatever else, and compiles them together into a single web/main.dart.js file. This reminds me somewhat of static linking, as in languages like C++.
For obvious reasons, compiling like this is something you'd only want to do when finally deploying an app. For most people it suffices to use Dartium and work off the .dart hierarchy directly. However, what if I were testing in a JavaScript browser, trying out dart:js code, for instance? I wouldn't want to have to recompile all of Angular and friends when they go entirely untouched. My specific case is the desire to use CDN services for static files within my app.
AngularDart, to take my example, contributes a massive 22,000 lines to my compiled JavaScript, and if I change one little thing within my own app, I can kiss 304 NOT MODIFIED goodbye, let alone CDN savings, for that bulk of untouched Angular code.
All that being said, is there a way to decouple dependencies in a transformed Dart application? Can I "dynamically link" my Dart libraries? And furthermore, could I hypothetically distribute my Dart libraries as compiled .dart.js files, for use in other transformed code?

No, it doesn't work this way. This could only work if the deployable contained each dependency in its entirety, and that's not the case.
pub build includes only the code that's actually used in your app (tree shaking), and this is usually only a small fraction of a dependency. Changing a single character in your code can require including different code from your dependencies. I think tree shaking has a much bigger effect than caching could have.
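Tree shaking in dart2js is the same idea that modern JS bundlers apply to ES modules, so a minimal TypeScript sketch can illustrate why the output can't be split into a stable "dependency chunk" (the file and function names here are invented for illustration):

```typescript
// utils.ts -- a stand-in for a "dependency" exposing two functions
export function formatDate(d: Date): string {
  return d.toISOString().slice(0, 10);
}

export function formatCurrency(n: number): string {
  return `$${n.toFixed(2)}`; // never imported below, so a tree-shaking
                             // compiler drops it from the output entirely
}

// main.ts -- the application entry point
import { formatDate } from "./utils";

console.log(formatDate(new Date()));
// The compiled bundle contains formatDate but not formatCurrency.
// Change main.ts to call formatCurrency and the emitted "dependency
// code" changes too -- which is why the dependency can't be cached
// as a stable, separately hosted file.
```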

Does compiled AngularDart pollute the global scope or override standard objects of the browser?

I'm looking for a framework that will allow me to write a SPA and an embeddable library, and I would love to have a way to share components between both. So I'm looking for a solution that has relatively few potential conflicts with other frameworks and with AngularDart itself, including the case where the library is included using a script tag (yes, two versions of AngularDart on the same page): a framework with few global objects, no overrides of standard objects, no global event handling, and limited polyfill conflicts.
Dart and AngularDart seem to be what I need, but I also need more details and docs to validate my assumptions. Anything you are able to point out would be very helpful and greatly appreciated (issues, PRs, blogs, roadmap, commits, specs, docs).
It's possible to run multiple AngularDart apps on the same page; I've tested the AngularDart todo example app embedded in itself. But I need more details on what dart2js is doing and how the compiler avoids polluting the global scope.
Yes, AngularDart should be well suited for your requirements.
Dart itself shouldn't pollute your scope at all. You can try running dart2js on something trivial (like a single print inside main) and inspect the output: it creates a closure and executes it, so nothing inside is accessible from outside. There is also no patching of any global JS objects, so you can run it alongside anything without interference. If that's not the case, file a bug.
You can run as many AngularDart applications on a single page as you wish. To get them fully isolated, compile each one separately with dart2js; then they won't be able to access any of each other's internals whatsoever.
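As a sketch of the isolation pattern described above (this is the general shape of such compiled output, not dart2js's literal output), everything lives inside an immediately-invoked function, so nothing leaks into the global scope:

```typescript
// App A: all names are trapped inside the closure.
(function () {
  const internalState = { counter: 0 }; // invisible outside the closure

  function main(): void {
    internalState.counter++;
    console.log("app A started", internalState.counter);
  }

  main();
})();

// App B, compiled independently, can reuse the very same internal
// names without any collision:
(function () {
  const internalState = { counter: 100 };
  console.log("app B started", internalState.counter);
})();

// console.log(internalState); // would fail: nothing was defined globally
```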

What is the difference between a Unity plugin and a DLL file?

I am new to Unity and I am trying to understand plugins. I've got the difference between a managed plugin and a native plugin, but what is not very clear to me is:
What is the difference between a plugin and a DLL? What should I expect to find in an SDK to make it usable in my Unity project?
Thanks a lot
To expand on @Everts' comment instead of just copying it into an answer, I'll go into a little more detail here.
What is a plugin?
It's a somewhat vague word for a third-party library that is somehow integrated with the rest of your game. It means that it is neither officially supported by Unity nor a part of your core code. It can be "plugged" in or out without altering its internals, so it must provide some kind of API that can be used by the game code.
For example, you'll find many plugins that handle external services like ads, notifications, analytics etc. You'll also find a couple of developer-tools that can also be called plugins, like tile-based map editors and such.
Plugins come in many forms - DLL files are one example, but some plugins actually provide full source code for easier use. And of course, other plugins will provide native code for different platforms, like Objective-C for iOS or .jars for Android.
So to answer your first question:
A DLL is simply a pre-compiled binary that can be part of a plugin.
A plugin is a whole library that can consist of multiple files in different formats (.cs, .dll, .jar, .m, etc.).
What do you need to use an SDK?
First of all - documentation. Like I said before, and like you noticed yourself, not all plugins give you access to the source code. And unfortunately, not many SDKs have extensive, developer-friendly documentation, so it can be a tough task to actually understand how to use a given SDK.
Secondly - the code. Many SDKs give you some kind of "drag & drop" library, a single folder with all the necessary files inside that you simply add to your Unity project. I've also seen SDKs shipped as Unity packages that you have to import via Assets > Import Package > Custom Package.
Once you have the code and documentation, it's time to integrate it with your game. I strongly recommend using an abstraction layer in your game because, in my experience, you often have to change SDKs for various reasons, and you don't want to rewrite your game logic every time. So I suggest encapsulating SDK-related code in a single class, so that you only have to change one class in your code when switching from, say, one ad provider to another (and keep the old class in case you need to switch back); see the sketch after the list below.
So you basically need three things:
Documentation (either a readme file or an online documentation)
The code (precompiled or source)
A flexible integration layer in your own code
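The integration layer mentioned above is language-agnostic; here is a minimal TypeScript sketch of the idea (the SDK and class names are hypothetical, and real Unity code would be C#, but the shape is the same):

```typescript
// The only type the game code ever sees:
interface AdProvider {
  showBanner(): void;
  showInterstitial(onClosed: () => void): void;
}

// One thin wrapper per third-party SDK; "AcmeAds" is hypothetical.
class AcmeAdsProvider implements AdProvider {
  showBanner(): void {
    // SDK-specific calls live only inside this class.
    console.log("acme banner shown");
  }
  showInterstitial(onClosed: () => void): void {
    console.log("acme interstitial shown");
    onClosed();
  }
}

// Switching ad providers now means writing one new wrapper class and
// changing this single line -- the rest of the game is untouched.
const ads: AdProvider = new AcmeAdsProvider();
ads.showInterstitial(() => console.log("back to the game"));
```

The same shape works for analytics, notifications, or any other service you may need to swap out later.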

How to manage a 3rd-party library in an iOS project if its source code needs to be modified in order to be used?

We need to use some 3rd-party libraries in our iOS app project (an Xcode project). The 3rd-party libraries are either managed by CocoaPods or added directly to the project by importing source code files.
Sometimes we need to modify a library's source code in order to actually use it appropriately in our project. It's easy to make the modification since we have all the source code at hand, but how can we maintain the modification when, some time in the future, we upgrade the version of the 3rd-party library? Are there any tools or best practices out there that can help with this? And after upgrading the 3rd-party library, it would be good if we could tell from the code history which changes came from the library upgrade and which were our own modifications.
That depends entirely on what your change is...
The best option is to not change the external code at all. Instead, subclass the part you need to change and use only the public API, so that when updating you find out early whether the subclass needs to change. Unit tests also help.
If you can't subclass then you should fork the external code, then you have to manually merge updates on top of your change when you want them. This is obviously significantly more effort both in terms of importing and managing the code.
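A minimal, language-agnostic sketch of the subclassing option (the class and method names are invented; on iOS this would be Objective-C or Swift rather than the TypeScript used here):

```typescript
// Imagine this class ships with the third-party library:
class ThirdPartyBanner {
  title = "default";
  render(): string {
    return `[banner] ${this.title}`;
  }
}

// Our change lives in a subclass instead of edits to the vendor file.
// Only the public render() method is touched, so when the library is
// upgraded, either this still compiles and works, or it breaks loudly
// right here -- not silently somewhere inside a patched vendor file.
class BrandedBanner extends ThirdPartyBanner {
  override render(): string {
    return `** ${super.render()} **`;
  }
}

console.log(new BrandedBanner().render()); // ** [banner] default **
```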

How to identify what projects have been affected by a code change

I have a large application to manage, consisting of three or four executables and as many as fifty DLLs. Many of the source code files are shared across many of the projects.
The problem is a familiar one to many of us - if I change some source code I want to be able to identify which of the binaries will change and, therefore, what it is appropriate to retest.
A simple approach would be to compare file sizes. That is an 80% acceptable solution, but there is at least a theoretical possibility of missing something. Secondly, it gives me very little indication as to WHAT has changed; it would be ideal to get some form of report on this so I can then filter out irrelevant changes (e.g. dates, version numbers, copyright notices, etc.).
On the plus side:
all my .dcus are in a row - I mean they are all built into a single folder
the build is controlled by a script (.bat) (easy, for example, to emit .obj files if that helps)
svn makes it easy to collect together any (two) revisions for comparison
On the minus side:
There is no policy to include all used units in all projects; some units get included because they are on a search path.
Just knowing that a changed unit is used/compiled by a project is not sufficient proof that the binary is affected.
Before I begin writing some code to solve the problem I would like to ask the panel what suggestions they might have as to how to approach this.
The rules of Stack Overflow forbid me from asking for recommended software, but if anyone has had positive experiences with continuous integration tools that would help - great.
I am open to any suggestion or observation that is relevant in this context.
It seems to me that your question boils down to knowing which units are contained in your various executables. Since you are using search paths, it will be hard for you to work this out ahead of time. The most robust way to find out is to consult the .map file that the compiler emits. This contains a list of all units contained in your executable.
Once you know which units are contained in each executable, you need to know whether or not anything has changed in those units. That information is contained in your revision control system. Put this all together and you have the information that you need.
Of course, just because the source code for a unit has changed, you might argue that re-testing is not needed. Perhaps the only change made was the version, or the date in a copyright label, or some such. But it is asking too much to expect a computer to make such a judgement. At some point a human needs to step up and take responsibility.
What is odd about this though is that you are asking the question at all. It seems to me to be enormously risky to attempt partial testing. I cannot understand why you don't simply retest the entire product.
After using it for more than 10 years in commercial in-house and freelance work on large projects, I can recommend trying Apache Ant. It is a build tool which supports dependencies and has many very helpful features.
Apache Ant also integrates nicely with CI tools such as Hudson/Jenkins, Bamboo etc.
Another suggestion - based on experience with Maven - is to make the overall software architecture as modular as possible. If modules (single or multiple source or DCU files in one directory) carry a version number in the directory name, it is possible to control exactly how applications are composed from these modules.
If you want to program such a tool yourself, the approach would be something like this:
First you need to detect whether any changes were made to individual source files. As you already figured out, comparing file sizes is a bad idea, since the file size can stay the same despite many changes (as long as a .pas file contains the same amount of text, its size won't change). Instead you could check each file's last modification time, or compute a hash value such as MD5 for comparison (which can be quite slow).
Then you need to build a dependency tree which tells you which files are used by which project/subproject.
Finally, based on the changes detected in individual files, you check the dependency tree to see which projects need to be recompiled.
The problem with this approach is that you would probably have to update the dependency tree manually each time a new unit is added to a project or an existing one is removed.
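A small sketch of such a tool in TypeScript on Node (the project-to-unit table is maintained by hand, exactly as the caveat above notes, and the file names and stored hashes are placeholders):

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Hand-maintained map: which source files each project compiles in.
const projects: Record<string, string[]> = {
  "App1.exe":  ["main1.pas", "shared.pas"],
  "Tools.dll": ["tools.pas", "shared.pas"],
};

// A content hash is robust where file size is not: edits that keep
// the size identical still change the hash.
const hashOf = (file: string): string =>
  createHash("md5").update(readFileSync(file)).digest("hex");

// Previous hashes would normally be loaded from a manifest written
// by the last run; hard-coded here to keep the sketch self-contained.
const previous: Record<string, string> = {
  "main1.pas":  "d41d8cd98f00b204e9800998ecf8427e",
  "shared.pas": "d41d8cd98f00b204e9800998ecf8427e",
  "tools.pas":  "d41d8cd98f00b204e9800998ecf8427e",
};

const changed = new Set(
  Object.keys(previous).filter((f) => hashOf(f) !== previous[f]),
);

// Any project that compiles in a changed file is a retest candidate.
for (const [project, units] of Object.entries(projects)) {
  if (units.some((u) => changed.has(u))) {
    console.log(`${project}: affected, retest`);
  }
}
```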
But the best way would be to use some version control software instead of reinventing the wheel. I myself like the way Git works, and I believe that proper Git integration in the project manager itself could be quite powerful, due to Git's support for branching/sub-branching (each project is its own branch; each version of your software can be its own sub-branch).
The latest version of Delphi does have Git integration, done through SVN, but this unfortunately limits some of Git's best functionality. So if you decide to integrate Git support directly into Delphi, I'm first in line to use it.

Grouping DLLs for use in an executable

Is there a way to group a bunch of DLLs and still use them at run time (not zipped up)? Sorry this question sounds terse and stupid, but I'm not sure what more to ask.
I'll explain the situation though:
We've had two standalone Windows applications, and now one of them has swelled to such ungainly proportions that the other cannot run outside the scope of the first. We want to maintain some of the encapsulation we had, while letting the smaller program in on some of the bigger program's features.
There is no problem in running the application, other than that we don't want to ship all 20-30 DLLs that the smaller project uses.
It is possible to do this by adding startup code which checks if the DLLs are present on the target system and if not then extracts them from the resources section (or simply tagged onto the end of the exe). A good example of this being done is Process Explorer - it's distributed as a single binary, but when run it extracts and installs a driver.
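The shape of that startup logic is simple in any language; here is a hedged sketch in TypeScript on Node (a real Delphi implementation would read the EXE's resources section via the Win32 API; the base64 payloads below are placeholders, not real DLLs):

```typescript
import { existsSync, writeFileSync } from "node:fs";
import { dirname, join } from "node:path";

// Stand-ins for DLL bytes that would really live in the executable's
// resources section (these strings are placeholders, not real DLLs).
const embedded: Record<string, string> = {
  "helper1.dll": "TVqQAAMAAAAEAAAA",
  "helper2.dll": "TVqQAAMAAAAEAAAA",
};

// On startup: for each required DLL, extract it next to the
// executable only if it is not already present on the target system.
const appDir = dirname(process.argv[1] ?? ".");
for (const [name, b64] of Object.entries(embedded)) {
  const target = join(appDir, name);
  if (!existsSync(target)) {
    writeFileSync(target, Buffer.from(b64, "base64"));
    console.log(`extracted ${name}`);
  }
}
// After this point the loader can resolve the DLLs as usual.
```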
If you have a situation where most, or all, of those assemblies have to be kept together, then I would highly recommend just merging the code files into the same project and recompiling. This would leave you with one assembly.
Of course there are other considerations like compile time, overall size of the final dll, how often various pieces change, and whether each component is deployed without the others.
One example of a company that did this is Telerik. Their dev components are all compiled into the same assembly. This makes deployment an absolute breeze. Contrast that with DevExpress, which put just about each control into its own assembly. Because of this, just maintaining, much less deploying, a DevExpress project is not something for the faint of heart.
(I don't work for either of those companies. However, I have a lot of experience with both toolkits.)
You could store the DLLs as Resources, and use BTMemoryModule, which essentially allows you to LoadLibrary on a Stream.
That way you could compile the multiple DLLs straight into the EXE, or into a single resource DLL.
see http://www.jasontpenny.com/blog/2009/05/01/using-dlls-stored-as-resources-in-delphi-programs/
