Running Scala code on iOS [duplicate]

Possible Duplicate:
Any way to use some Scala for iOS coding?
Would it be possible to use the Scala.NET implementation, and then MonoTouch to run Scala code on an iOS device?

I have not been able to find a page with binaries of Scala.NET that I can test, so the following are just general guidelines as to what you can do with MonoTouch and .NET languages.
MonoTouch can run any ECMA CIL that you feed to it. When you consider using a new language with MonoTouch, there are two components that come into play:
Tooling for the IDE
Runtime for the language
The tooling for the IDE is the part responsible for starting builds, providing IntelliSense and, if you use Interface Builder, creating a set of helper methods and properties to access the various outlets in your UI. As of today, we have only done the full implementation for C#. What this means for an arbitrary language is that you won't get the full integrated experience until someone does the work to integrate other languages.
This is not as bad as it sounds; it just means that you need to give up on using XIB files from your language, and you probably won't get syntax highlighting and IntelliSense. But if you are porting code from another language, you probably don't need them. It also means that you would probably have to build your assembly independently and just reference it from your C# project.
So you compile your code into a .dll with FoobarCompiler and then reference it in your main C# project.
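As a rough sketch of what that looks like from the C# side (the FoobarCompiler output, the Ported.dll assembly name and the Calculator class are all hypothetical placeholders), the prebuilt assembly is simply referenced and called like any other managed code:
// Hypothetical: FoobarCompiler has produced a plain CIL assembly, Ported.dll,
// exposing a static class Ported.Calculator. After adding a reference to
// Ported.dll in the MonoTouch C# project, its types are used directly;
// the UI and XIB glue stays on the C# side.
using System;
using Ported;   // hypothetical namespace from the prebuilt assembly

public static class PortedBridge
{
    // Thin C# wrapper around the externally compiled code.
    public static string Describe(int a, int b)
    {
        return String.Format("{0} + {1} = {2}", a, b, Calculator.Add(a, b));
    }
}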
The language runtime component only matters for languages that generate calls into a set of supporting routines at runtime and those routines are not part of the base class libraries (BCL). C# makes a few of those calls, but they are part of the BCL.
If your compiler generates calls to a supporting runtime that is not part of the BCL, you need to rebuild your compiler runtime using the Mono Mobile Profile. This is required since most runtimes target a desktop edition of the BCL. There are many other API profiles available, like Silverlight, Mono Mobile, Compact Framework and Micro Framework.
Once you have your runtime compiled against our core assemblies, you are done.

If you had read the MonoTouch FAQ, you would have noticed that it currently supports only C# and no other CLR languages.

Binaries for the Scala.NET library and the compiler can be obtained via SVN, in the bin folder of the preview:
svn co http://lampsvn.epfl.ch/svn-repos/scala/scala-experimental/trunk/bootstrap
Bootstrapping has been an important step, and ongoing work will add support for the missing features (CLR generics, etc.); all of that will be done.
For now we're testing Scala.NET on Microsoft implementations only, but we would like our compiler to be useful for as many profiles and runtime implementations as possible.
There is a survivor's report on using Scala.NET with XNA at http://www.srtsolutions.com/tag/scala
Miguel Garcia
http://lamp.epfl.ch/~magarcia/ScalaNET/

Related

Implement F# libraries for consumption by TypeScript/JavaScript?

I know there are a number of projects which can compile F# to JavaScript.
Do any of these projects support this use case:
developing an application in TypeScript
but writing part of the application in F#, as a library
consuming this F# library from the main TypeScript application, optimally in a type-safe way?
WebSharper produces d.ts files for the compiled JS files. You can read about this in the relevant section of the documentation. However, this feature is still experimental and uses an older version of TypeScript.
There is FunScript (https://github.com/ZachBray/FunScript), but it does not seem to be widespread, so it may cost you more time than it saves.

iOS: AOT (ahead of time), what is it?

I have read an article from Xamarin and came across a particular computer science term: ahead of time (AOT).
According to some Google search results, AOT does not allow for code generation at run time.
Does that mean it does not support dynamic code?
I know this question may sound stupid and I have zero knowledge of iOS; hopefully I can get some answers here. Thanks.
First, what is the definition of dynamic? For the general public, dynamic code means the application can change functionality at run time. For the iOS platform, the binaries are signed to prevent malware, and Apple doesn't like apps that can load functionality at run time.
An ahead-of-time (AOT) compiler has nothing to do with dynamic code per se. It has to do with intermediate languages that are normally just-in-time (JIT) compiled. The best-known example of an intermediate language is Java bytecode: compile once, run anywhere. When a Java application is executing, the bytecode is JIT-compiled to native machine code. An AOT compiler simply does that compilation ahead of time, to save time at run time.
For the iOS platform, Xcode compiles Objective-C to a native binary for the device.
Another way of looking at this is with an example...
In .NET, you can use the System.Reflection.Emit namespace to generate and compile code at runtime.
E.g. you could create an "IDE" with a textbox that accepts C#. When you click a button, that C# could be compiled by the .NET Framework into a custom library that is loaded dynamically, or into a fully fledged executable that is launched as a new process.
This is insanely powerful when combined with the rest of the System.Reflection namespace. You can examine objects at runtime and compile custom code based on any criteria you like.
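As a minimal sketch of that idea (the method name and the toy addition are just illustration), here is a DynamicMethod whose IL is generated and then JIT-compiled at run time, which is exactly the step an AOT-only platform such as iOS forbids:
using System;
using System.Reflection.Emit;

class Demo
{
    static void Main()
    {
        // Build a tiny method, int Add(int a, int b) => a + b, at runtime.
        var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });
        var il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push the first argument
        il.Emit(OpCodes.Ldarg_1);   // push the second argument
        il.Emit(OpCodes.Add);       // add them
        il.Emit(OpCodes.Ret);       // return the result

        // CreateDelegate triggers JIT compilation of the emitted IL.
        var adder = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
        Console.WriteLine(adder(2, 3));  // prints 5
    }
}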
That said... the problems usually outweigh the benefits. Mainly, it's a massive security issue, especially when running on a consumer device.
It would be possible to create an app that doesn't contain anything close to malicious code, get it audited by Apple, then have the app download code from your web server, compile it, and execute it. This new code wouldn't be audited...
There really is no good reason to be doing this in a consumer app.

MonoTouch Migration Analyser

Are there any migration analysers available for MonoTouch?
I have seen one for Mono, but not for MonoTouch.
Short answer: No, there is none at the moment.
Long answer
The situation is a bit different from Mono. In general you test a complete, compiled .NET application (built against a specific version of the framework) with MoMA, to get a report of what pieces are missing (or incomplete) in Mono that could affect the execution of your application on other platforms (e.g. OS X and Linux).
Testing a complete application for MonoTouch would report tons of issues, since the UI toolkit is totally different. E.g. anything referring to System.Windows.Forms, WPF... would always be missing.
However, if your assemblies are already split into (something like) an MVC design, it would be possible to test some of them (the non-UI parts) against definitions based on the MonoTouch base class library.
Finally, if someone has an immediate need (or is looking for a nice project), MoMA is available as open source, and the evaluation versions of MonoTouch contain all the assemblies needed to build the definition files. A bit of extra filtering could make this into a very nice tool.
Alternative
You can see a list of the assemblies that are part of MonoTouch and some platform restrictions (compared to .NET) you should be aware of.

What tools are built using themselves? [closed]

I am curious about what tools are used to build the next version of themselves.
For example, Delphi has long claimed that "Delphi is written in Delphi".
I assume Visual Studio is written using Visual Studio.
What are some other examples of tools that are written in themselves?
Interestingly, the VB.NET & C# compilers themselves are written in unmanaged C++ (leading to the C++ team's T-Shirt: "My compiler compiled yours"). The C# team hopes to have a fully managed-C# hosted C# compiler for VS2010.
Bjarne Stroustrup mentioned in The Design and Evolution of C++ that the first C++ compiler was written in C++.
I've just noticed this is also a question in his FAQ:
The first C++ compiler (Cfront) was written in C++. To build that, I first used C to write a "C with Classes"-to-C preprocessor. "C with Classes" was a C dialect that became the immediate ancestor to C++. That preprocessor translated "C with Classes" constructs (such as classes and constructors) into C. It was a traditional preprocessor that didn't understand all of the language, left most of the type checking for the C compiler to do, and translated individual constructs without complete knowledge. I then wrote the first version of Cfront in "C with Classes".
This is off-topic, but strictly speaking, it is an example of a tool which builds itself.
The RepRap - an open-source 3D prototyping machine, which recently gave 'birth' to "its first complete working replicated copy".
I love this kind of stuff.
Generically speaking, C compilers are usually written in C... *nix kernels are compiled on *nix, etc.
Also, there's the PyPy project, which provides a Python interpreter written in Python.
When gcc (the GNU C compiler, http://gcc.gnu.org/) was not widely available, you had to compile it from source: compile the stage1 compiler, then compile stage2 with stage1, until you had your final compiler. I assume it must be the same today.
Here is another example: Mono's C# compiler is self hosting - i.e. it's written in C# and used to compile itself.
Dog fooding refers to the more general practice of a company using its own product internally, especially during its development.
Lots of folks like to look at how Lisp can be implemented in Lisp.
Squeak is a Smalltalk-80 implementation written in itself.
even its virtual machine is written entirely in Smalltalk, making it easy to debug, analyze, and change.
Sun's Java compiler has long been written in Java. However, recent work is writing a JIT compiler in Java as well. This is the JVM component that converts Java byte code to native processor instructions.
We used to develop using RealBasic. The IDE is written in itself, or so I've been led to believe.
ghc, the Haskell compiler, is mostly written in Haskell.
tcc is another self-hosting C compiler for x86 and ARM. Its claim to fame is being, well, tiny (100k or so for preprocessor, compiler, assembler, and linker).
I would assume that pretty much any tool that's part of the typical development process would be involved in its own development, to whatever extent possible. This includes:
certain programming languages, especially compiled ones
IDEs
text editors
version control systems
bug trackers
build systems
If you're on a team building one of these tools, and you're not developing it for a specific niche that doesn't apply to your team, I don't know why you wouldn't use it to build itself. Having developers be users of the product is one of the best ways to find possible improvements.
For the AmigaOS there was a third party Basic interpreter (don't remember the name) for which you could later buy a compiler. The compiler was delivered as source, so you had to use the interpreter to run the compiler to compile itself...
To cite Kent Beck:
...it may seem a bit like performing brain surgery on yourself.
Visual Studio and Team Foundation Server build themselves. It's called dogfooding, a term which, if it did not originate at Microsoft, is certainly one Microsoft likes.
Oracle Application Express is a web application development tool that is built in itself.
Eclipse IDE is generally built and developed using Eclipse IDE.
It is fairly typical to have a language's compiler written in its own language. This is called self-hosting or bootstrapping.
Maven2 is built using Maven2.
Ok, it's not built (i.e. written) using itself, as it is a tool to build (i.e. compile) projects, but it does use its own code to compile itself...
I was amazed by JSLint
In short, it has been described as a JavaScript "compiler" written in JavaScript.
I am building an IDE-based code generator, and I am using it to build itself. In fact, as Stroustrup did, I am first building a valid generator model and using a pre-processor to build the final C++ code to compile. Once I have a good working version of the IDE, I'll start using it to build further versions of itself.
It's like giving a new dimension to the meaning of "recursive programming"!
AFAIK the OpenJDK builds itself first with the installed Java and afterwards with itself.
Naturally the JetBrains team uses its own IDE, IntelliJ IDEA, to develop that IDE.
I assume this is true for most IDE vendors.
As far as I know, when building EMACS from source, all of the ELISP code is bootstrapped. I found that quite noteworthy.
Not quite what you're asking for, but the entire development environment for Revolution http://www.runrev.com is built using Revolution itself, and the source (except for small parts that enforce the license) is completely exposed in source form. So if you don't like the way the dev environment is implemented, you can change it. Find a bug, fix it. You can also easily build additional development tools and integrate them.
Ada and Forth
I gave the Smalltalk-80 answer an uptick. Best, most elegant example I can think of. The question also reminds me of a slightly related problem that used to be popular: write a program that outputs itself. Not the same level of bootstrapping, but a fun little programming puzzle for your amusement. Maybe not possible in all languages?
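For amusement, here is one classic way to write such a self-reproducing program (a "quine") in C#; the class name and the composite-formatting trick are just one of many possible approaches, and comments are omitted because the program must reproduce its own source exactly:
class P{static void Main(){string s="class P{{static void Main(){{string s={0}{1}{0};System.Console.Write(s,(char)34,s);}}}}";System.Console.Write(s,(char)34,s);}}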
The old Watcom C/C++ compiler was built using itself.
Kragen Sitaker's Ur-Scheme is a fine example of a small nontrivial compiler written in itself. That page links to several more good sources in that vein.

Cross-platform development - Delphi 2011: How to make a Windows-tied library cross-platform?

As perhaps you know already, most probably the next version of Delphi will be cross-platform. Also, here are some polls on the matter.
While writing a cross-compiler isn't a thing which interests us very much now, porting a library which was/is Windows-tied to multiple platforms certainly does.
You can think, for example, of the VCL (Delphi's standard library). While it was designed for Windows only, it has value in it, and, of course, there are huge codebases which depend on it.
The question is:
Which would be the best approach to make an application / library cross-platform aware, ensuring a smooth conversion / upgrade path (as much as possible, of course)?
I stress it again: we are not interested only in the best way to do cross-platform development (there were questions on this theme). We are also interested in yet another requirement: managing the old code base / installations.
PS: Experiences and/or methodologies from similar situations with other languages (e.g. C/C++) which are regarded as standard practices are welcome.
Thanks in advance.
Visual component developer's perspective:
Add levels of functionality to your code, so as to be able to add another platform without changing the "Core" of the component.
The compiler hopefully will have a platform switch. (Preferably more than one, working in conjunction with each other, e.g. Windows/ARM, Windows/386, OSX/Cocoa/386, Linux/Gnome/386.)
The Layout structure might look something like this.
ComponentJ.pas
Linux\ComponentJ.pas
Linux\Gnome\ComponentJ.pas
Linux\KDE\ComponentJ.pas
OSX\ComponentJ.pas
386\ComponentJ.pas
ARM\ComponentJ.pas
As an Application Developer:
I'll start by moving all Win API calls in my code into a group of libraries in a Windows directory, so as to be able to IFDEF them at library level and translate them to another platform I'd like to support as soon as the compiler becomes available (but only as I come across them).
This will also make it easier to add adapters for the new platforms.
It is in any case good practice to move possible dependencies into a central place.
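A minimal sketch of that "IFDEF at library level" idea, shown here with C#'s conditional compilation rather than Delphi's $IFDEF (the WINDOWS symbol and the NativeBeep class are hypothetical); the rest of the code base calls NativeBeep.Beep() and never touches a platform API directly:
using System;

public static class NativeBeep
{
    public static void Beep()
    {
#if WINDOWS
        // The Windows-only API call would live here (simulated for the sketch).
        Console.WriteLine("MessageBeep()");
#else
        // Another platform's equivalent goes here once a compiler targets it.
        Console.Write("\a");
#endif
    }
}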
IMHO you can't build a cross-platform Delphi and ensure a smooth transition for current VCL applications. It won't work. The VCL was (luckily, because it allowed for great applications) designed with Windows in mind, and trying to design a compatible library working on a different platform would just mean longer development cycles and lots of compromises. The outcome would be a library no one would wish to use. Look at what happened to VCL.NET: it was the wrong choice. And it was working on the same OS!
We know that targeting non-Windows platform with native applications needs a native GUI library. We don't care about creating a GUI from scratch, for our application it's the way to go, we don't need Windows GUIs with all their standards under a different OS using different standards - we need to be able to code a fully native GUI for the target OS.
Other applications may survive a GUI porting, but in the long run you don't get a real cross-platform tool - you get a tool that may compile for other platforms but brings one platform's paradigms to others - and it will also not be welcomed by "native" developers on other platforms. If you're a Linux or Mac developer, why should you learn how to work with a library that carries its Windows inheritance to your platform? You'd find it a pain in the ass. If Embarcadero wants to sell XDelphi outside the actual developer base, it has to offer much more than a new CLX.
I will pull from some ancient experience in making a code base cross-compilable between Windows and DOS (Delphi 1/Turbo Pascal 7). The rule of thumb was to separate code into multiple units. Try to code WITHOUT using windows, messages or any visual components. If you find you need to make a call to one of these, then place that call in another unit and write a proxy (an abstract class that you descend from works well) to dispatch the calls through. When a cross-compatible version is released, all that you should have to do is code the other side of the proxy for the new target.
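The question is about Delphi, but the dispatch idea is language-agnostic; here is a minimal sketch of the same proxy pattern in C# (all names are hypothetical), where the rest of the code base only ever talks to the abstract class:
using System;

// Abstract proxy: application code depends only on this type.
public abstract class ClipboardProxy
{
    public abstract void SetText(string text);
}

// Windows side of the proxy, kept in its own unit/assembly.
public class WindowsClipboard : ClipboardProxy
{
    public override void SetText(string text)
    {
        // The only place that would touch the Win32 clipboard API (simulated here).
        Console.WriteLine("Win32 SetClipboardData: " + text);
    }
}

// When a new target appears, only this other side of the proxy needs writing.
public class MacClipboard : ClipboardProxy
{
    public override void SetText(string text)
    {
        Console.WriteLine("NSPasteboard setString: " + text);
    }
}

public static class Demo
{
    public static void Main()
    {
        ClipboardProxy clipboard = new WindowsClipboard();  // selected at startup or via conditional compilation
        clipboard.SetText("hello");
    }
}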
If you're designing a form-based system, then try to stick with as many of the standard components as possible. NEVER implement any "business" rules directly in an event; instead, place them in another unit and call into that unit to perform the logic.
Now, there will definitely be changes required to get your final project cross compatible, but by following these simple patterns you should be able to greatly reduce the amount of work it will take.
Experience so far has shown that the best way to get a Delphi app compatible with future versions is to stick to pure Delphi components and use nothing third party. Such an app will probably suck, but that's how it seems to me. I use lots of third-party components, and the apps are great and successful. The chances of those components moving to this future too are not certain, and that may cause problems with such changes, but I'd rather have a great app now and have the problem than have a poor app now and not need to worry about it.
Compromises should not be made too much to make the VCL compatible with Linux and Mac. Windows is the VCL's root. I'd prefer a new and very clean GUI framework, even without any backward compatibility. Making the VCL fatter and fatter isn't a good idea!
make a cross-platform Pascal compiler
make a cross-platform RTL
put the QT on top
Well, look at Free Pascal and Lazarus.
I don't get it. All .NET looks the same to me, provided we don't use any third-party components.
Delphi using standard controls is already fully functional, but your app would look like thousands of others.
I think Embarcadero should go for PDA, iPhone and Android, as the Windows desktop already eats about 98% of the market.
Mac is expensive and Linux is no cost at all. No use going for Mac and Linux; not worth the investment.
Well, aside from the things said - thanks all - I do think that we need some additional things:
we need tooling to do the necessary conversions
we need tooling to help us in programming against a (some form of) MVC pattern
Simply pick the latest Qt 4.6 and add good integration between Pascal and the Qt library.
They have done it before (in the Kylix times). Qt is so powerful these days.
I believe that Qt is even better than the VCL and at least 10 times more frequently updated and fixed.
So the plan is simple:
make a cross-platform Pascal compiler
make a cross-platform RTL
put the QT on top
and you will have first-class, native-looking applications on all platforms.
My opinion:
Make a cross-platform compiler (OS X / Linux / embedded solutions? / Symbian?). Maybe add the ability to compile/convert Pascal code into portable C/C++ code, to then build on embedded platforms.
The RTL has to be separated into a cross-platform layer and a native layer (as with the JCL).
Add new core components for cross-platform compatibility and native components for each supported platform (Qt, for example).
Add translation utilities to create/convert between platforms' components, for example to convert a pure Windows form into a Mac OS X Cocoa form.
The whole Windows hierarchy of components only has to be upgraded to support x64 with maximum backward compatibility. All cross-platform components have to be in a parallel hierarchy.
The next version of the cross-platform solution can be refactored and can include a migration/conversion utility. Due to the minimal codebase of cross-platform solutions, the cross-platform hierarchy and classes can be heavily changed from version to version to achieve the best architecture.
Sorry for my English - it is not my native language (Russian is)!
Make C/C++/Delphi compilers that target OS X/Linux
Make a C/C++ compiler that can be Boosted
Write a new VCL - Presentation Foundation (VGScene/WPF alike); it should not be backward compatible! The Delphi IDE should be written with such a VCL-PF
The Component Library should stay as it is (but with improved data binding)
Only provide the VCL 64-bit for Win64
Is this a problem?

Resources