Swift - Conceptual, type-check time concerns - iOS

I'm building what I would consider a fairly light application, at least from a web developer's point of view. I have messages, posts, and so on, with a simple design that includes images. I'm writing this application on my old 2012 MacBook Pro, so I figure that might have something to do with it, but every build takes an obscene amount of time, a minute or two, if it completes at all. I try to add something simple, like an image clipped to the shape of a circle, and apparently that pushes the build past the point where the type-check takes a reasonable amount of time.
It just seems incredible that such a relatively simple application could take so long to build, and even crazier that this type-check is forced on you: if it fails, you can't even test your application. Sometimes the type-check error just gets reported on a ZStack that combines several views, after I've made many changes, so it's impossible to tell what actually triggered it, and I have to work backwards, removing things until it builds again. It feels like such an archaic way to write code.
I don't know if I'm alone in this; perhaps if you start in mobile development you just get used to it being the way things are, or, as I mentioned, it might be down to the lackluster power of my computer, but there is no reason an IDE should be this processing-intensive, at least as I see it. I sort of understand Apple's reasoning for forcing this type-checking on all apps, presumably to prevent bugs, but honestly it feels half-polished at times. It has never really helped me find errors that I wouldn't have found by just running the build, and it has been a real detriment to the experience of writing Swift when the project won't build at all because the compiler is struggling to type-check it.
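For concreteness, this is roughly the kind of view I mean. The names and strings below are placeholders rather than my actual code, but the structure (an avatar clipped to a circle, a couple of labels, and a badge composed in one ZStack expression) is the same:

    import SwiftUI

    // Illustrative only: an avatar image clipped to a circle, two text labels,
    // and an overlay badge, all composed as one view expression.
    struct MessageRow: View {
        var body: some View {
            ZStack(alignment: .bottomTrailing) {
                HStack(spacing: 12) {
                    Image("avatar")                      // asset name is a placeholder
                        .resizable()
                        .scaledToFill()
                        .frame(width: 44, height: 44)
                        .clipShape(Circle())
                    VStack(alignment: .leading) {
                        Text("Jane Doe")
                            .font(.headline)
                        Text("Hey, are we still on for tonight?")
                            .font(.subheadline)
                            .foregroundColor(.secondary)
                    }
                }
                Image(systemName: "checkmark.circle.fill")
                    .foregroundColor(.blue)
            }
            .padding()
        }
    }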
Is this issue with how long each build takes just a reality of Xcode and Swift, or is it down to the processing power of the machine I'm writing it on? Both seem equally depressing, in all honesty.

Related

Erlang Hot Code Loading not widely used?

I just saw this 2012 video from LinuxConf.au about Erlang in production.
There's a part in the video where Bernard says no big Erlang projects use Hot Code Loading apart from Ericsson, because it's really hard to guarantee things will work. It's around minute 29.
Is that still true? Are there tools to help test a hot code load or make it easier nowadays?
This is not true. Every Erlang user takes advantage of hot code loading in one way or another -- whether for development, testing, troubleshooting, one-off fixes, or full-scale deployments. It is one of Erlang's major advantages, and a rather unique one too.
For example, WhatsApp, one of the biggest Erlang users, relies on hot code loading for almost all code pushes.
I have personally worked with hot code loading in scenarios where each change was well understood and often performed by the same person who made it. It works extremely well, and good engineers have no problem doing this. Speaking of tools, loading modules one by one from the Erlang shell using l(...). or all at once using l(). (see here) works just fine. Some prefer release-based tools like relx.
Others, like Ericsson, use enterprise-style deployments with hot code loading after rigorous testing of clear-cut releases and patches. The goal here is to upgrade without using spare capacity and special procedures for draining and shifting load. Operationally this may be simpler and more efficient than restarts, but testing can be more expensive.
It is difficult to know whether the feature is widely or scarcely used. Nowadays there are plenty of Erlang systems out there. I can, however, think of reasons both for and against using it, since I have been working with both options for quite some time.
In favour of using it:
It is obviously quite useful during development to ensure a fast feedback cycle. I always develop with an open shell, and with functions to load code automatically as soon as it compiles.
In the rare case that you need to implement a monolithic application with high-availability requirements, it is basically the only option.
The main reason not to use it is, as the presentation states, that it is hard, even once you manage to understand exactly how it works (and that is not the hardest part).
It is not, in my opinion, just a problem of tooling; rather, you get a lot of intrinsic complications simply from the fact that your code is now part of the mutable running state of the system. You basically end up with a long-running system that changes behaviour, so you add these to the problems you already had:
You are no longer sure that restarting the system will not change its behaviour in some fundamental way. So you will probably need to take extra care to make sure that whatever code you load is also written to disk.
Changing the way your modules work (i.e. loading new code) is very tricky unless a) you never break compatibility, b) you somehow figure out the order in which the modules should change, or c) you assume the worst that can happen is a few crashes due to undefined functions, function or case clauses, etc., and hope for the best (the actual worst is when the new and old modules interact in unexpected ways while you haven't finished loading all of the new ones, and they actually run some impossible logic).
You will almost certainly end up killing some process that is running old code when you load new code at some point. Maybe your supervisors will help you, maybe not. In any case it can be very confusing and difficult to debug.
As the presentation also states, it is very hard to test (if not impossible).
Etc.
On top of all that, you are running a long-lived server with long-lived state, which is far from ideal.
So my advice is always: if you can get away with a distributed application and rolling upgrades, you should do it. That option is much easier to handle and, in my experience, performs better overall.

LLVM-based code mutation for genetic programming?

For a study on genetic programming, I would like to implement an evolutionary system on the basis of LLVM and apply code mutations (possibly at the IR level).
I found llvm-mutate, which is quite useful for executing point mutations.
As far as I understand, the instructions get counted/numbered, and one can then, for example, delete a numbered instruction.
However, the introduction of new instructions only seems to be possible by reusing one of the statements already available in the code.
Real mutation, however, would allow inserting any of the allowed IR instructions, irrespective of whether it is already used in the code being mutated.
In addition, it should be possible to insert calls to functions from linked libraries (not used in the current code, but possibly available because the library has been linked in by clang).
Did I overlook this in llvm-mutate, or is it really not possible so far?
Are there any projects trying to implement (or that have already implemented) such mutations for LLVM?
LLVM has lots of code analysis tools that should allow the implementation of the aforementioned approach. LLVM is huge, though, so I'm a bit disoriented. Any hints on which tools could be helpful (e.g. for getting a list of available library functions)?
Thanks
Alex
Very interesting question. I have been intrigued by the possibility of doing binary-level genetic programming for a while. With respect to what you ask:
It is apparent from their documentation that LLVM-mutate can't do what you are asking. However, I think it is wise for it not to. My reasoning is that any machine-language genetic program would inevitably face the "Halting Problem": it would be impossible to know whether a randomly generated instruction would completely crash the whole computer (for example, by assigning a value to an OS-reserved pointer), or whether it might run forever and take all of your CPU cycles. Turing's theorem tells us that it is impossible to know in advance whether a given program will do that. Mind you, LLVM-mutate can still cause a perfectly harmless program to crash or run forever, but I think its approach of only reusing existing instructions makes that less likely.
However, such a thing as "impossibility" only deters scientists, not engineers :-)...
What I have been thinking is this: in nature, real mutations work a lot more like LLVM-mutate than like what we do in normal Genetic Programming. In other words, they simply swap letters out of a very limited set (A, T, C, G), and every possible variation comes out of that. We could have a program or set of programs with an initial set of instructions, plus a set of "possible functions" either linked or defined in the program. Most of these functions would not actually be used, but they would be there to provide "raw DNA" for mutations, just like in our own DNA. This set of functions would contain the complete (or semi-complete) set of possible functions for a problem space. Then, we simply use basic operations like the ones in LLVM-mutate.
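To make that concrete, here is a deliberately tiny sketch of the idea, written in Swift rather than LLVM IR and with made-up primitive names: the genome is just a list of indices into a fixed pool of functions, most of which any one candidate never uses, and mutation only swaps in members of that pool.

    import Foundation

    // Toy sketch (not LLVM IR): the genome is a list of indices into a fixed
    // pool of primitive functions. Most of the pool is unused by any single
    // candidate; it is just "raw DNA" that point mutations can swap in.
    typealias Genome = [Int]

    // Hypothetical primitive pool: unary operations on Double.
    let primitivePool: [(name: String, op: (Double) -> Double)] = [
        ("inc",    { $0 + 1 }),
        ("dec",    { $0 - 1 }),
        ("double", { $0 * 2 }),
        ("halve",  { $0 / 2 }),
        ("square", { $0 * $0 }),
        ("negate", { -$0 })
    ]

    // Run a genome as a pipeline of primitives applied to an input value.
    func execute(_ genome: Genome, on input: Double) -> Double {
        genome.reduce(input) { value, gene in primitivePool[gene].op(value) }
    }

    // Point mutation: replace one gene with a random member of the pool,
    // in the spirit of llvm-mutate's point mutations.
    func mutate(_ genome: Genome) -> Genome {
        guard !genome.isEmpty else { return genome }
        var child = genome
        child[Int.random(in: 0..<child.count)] = Int.random(in: 0..<primitivePool.count)
        return child
    }

    // Trivial hill climb toward a target output, just to show the loop's shape.
    let target = 10.0
    var best: Genome = (0..<5).map { _ in Int.random(in: 0..<primitivePool.count) }
    for _ in 0..<1_000 {
        let candidate = mutate(best)
        if abs(execute(candidate, on: 1.0) - target) < abs(execute(best, on: 1.0) - target) {
            best = candidate
        }
    }
    print(best.map { primitivePool[$0].name }, execute(best, on: 1.0))

The pool here stands in for the linked-but-unused functions; a real system would operate on IR instructions and library calls rather than toy closures.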
Some possible problems, though:
Given the amount of possible variability, the only way to get acceptable execution times would be to have massive amounts of computing power, possibly achievable in the cloud or with GPUs.
You would still have to contend with Mr. Turing's Halting Problem. However, I think this could be resolved by running the solutions in a "sandbox" that doesn't take you down if a solution blows up: something like a single-use virtual machine or a Docker-like container, with a time limit (to get out of infinite loops). A solution that crashes or times out would get the worst possible fitness, so that the programs would tend to diverge away from those paths.
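A rough sketch of that time-limited evaluation, assuming lower fitness is better and that each candidate has already been compiled to a standalone executable (in a real setup the child process would itself live inside a throwaway VM or container; the path and the scoring are placeholders):

    import Foundation
    import Dispatch

    // Sketch only: run one candidate executable with a hard time limit.
    // A failed launch, a non-zero exit, or a timeout gets the worst possible
    // fitness, so the population drifts away from those paths.
    func evaluate(candidateAt path: String, timeLimit: TimeInterval) -> Double {
        let worstFitness = Double.infinity          // assuming lower fitness is better

        let process = Process()
        process.executableURL = URL(fileURLWithPath: path)   // placeholder path

        let finished = DispatchSemaphore(value: 0)
        process.terminationHandler = { _ in finished.signal() }

        do {
            try process.run()
        } catch {
            return worstFitness                     // could not even be started
        }

        // Kill candidates stuck in infinite loops.
        if finished.wait(timeout: .now() + timeLimit) == .timedOut {
            process.terminate()
            return worstFitness
        }

        // Treat a non-zero exit status as a crash.
        guard process.terminationStatus == 0 else { return worstFitness }

        // Placeholder: a real system would score the candidate's output here.
        return 0.0
    }

VM- or container-level isolation would additionally contain candidates that do something worse than looping, which a bare child process cannot.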
As to why do this at all, I can see a number of interesting applications: self-healing programs, programs that self-optimize for a specific environment, program "vaccination" against vulnerabilities, mutating viruses, quality assurance, etc.
I think there's a potential open source project here. It would be insane, dangerous and a time-sucking vortex: just my kind of project. Count me in if someone starts doing it.

Should a proof-of-concept application have automated tests?

If I'm developing a proof-of-concept application, does it make sense to invest time in writing automated tests? This is for a personal project where I am the sole developer.
I see the only benefit of automated testing at this point as:
If the concept catches on, the tests already exist.
Some of the cons related to writing automated tests for this type of project could be:
It takes valuable time to write tests for an idea that might not be worthwhile to people.
At this level, time is better spent building a demonstration of your idea.
Can anyone provide pros and cons of investing time in writing automated tests for an application in its early stages?
This whole talk from the Google Testing Automation Conference is about your question:
http://www.youtube.com/watch?v=X1jWe5rOu3g
Basically, the conclusion is that it is more important to know you are building the right thing than to build something right (build the right "it", rather than build "it" right). The most important thing is to get a proof-of-concept through and make sure that it works and is liked. If people like your thing, then they will tolerate bugs; but if they don't like your thing, it can have no bugs and they still won't like it.
TDD is not really about testing, it's about designing. Doing TDD for your application will (probably) give it a better design than just going on feel.
Your problem is: do you need a good design? Design is helpful for maintenance, and most devs doing TDD consider themselves to be in maintenance mode right after adding their first feature.
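As a minimal illustration of what "test first as design" can look like, even for a throwaway PoC (the type and behaviour here are hypothetical, not from any particular project):

    import Foundation
    import XCTest

    // Hypothetical example: writing the test first forces a decision about the
    // shape of the API (a value type with one pure function) before any UI or
    // persistence work happens. `SlugMaker` is a made-up name.
    struct SlugMaker {
        func slug(for title: String) -> String {
            title.lowercased()
                .components(separatedBy: CharacterSet.alphanumerics.inverted)
                .filter { !$0.isEmpty }
                .joined(separator: "-")
        }
    }

    final class SlugMakerTests: XCTestCase {
        func testSlugReplacesSpacesAndPunctuation() {
            XCTAssertEqual(SlugMaker().slug(for: "Hello, World!"), "hello-world")
        }
    }

The point here is the design pressure, not test coverage.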
From a more pragmatic perspective: if you're the only dev, have very accurate specs, and will work on this code once and never return to it (nor send someone else back to it), I would say that making it work is enough.
But then, if your POC works, don't try to get anything back out of it; just redo it.
You can save time by doing an ugly POC and coming to the conclusion that your idea is not doable.
You can save time by doing an ugly POC and understanding much better the domain you're trying to model.
You cannot save time by trying to get some lines of code out of a horrible codebase.
My best advice for estimating how much effort you should put into design (because overdesigning can be a big problem, too) is: try to estimate how long that code will live.
Reference: I would suggest you do some research on the motto "Make it work, make it right, make it fast". The question you ask is about the first two points, but sooner or later you will ask yourself the same question about optimization (the third point).
There's no "right" answer. TDD can make your concept stronger, more resilient, easier to bang on, and help drive API development. It also takes time, and radical changes mean test changes.
It's rare you get to completely throw away "prototype" code in real life, though.
The answer depends entirely on what happens if you prove your concept. True proof-of-concept applications are thrown away regardless of the outcome, and the real application is written afterward if the PoC proved out. Those PoCs obviously don't need tests. But there are way too many "productized PoCs" out there. Those applications probably should have tests written right up front. The other answers you've received give you solid support for both positions; you just need to decide which type of PoC you're building.

Migrate an app from Delphi to Silverlight C#

I have a legacy desktop accounting application developed using Delphi 5 & Paradox, which I intend to migrate to a web-based Silverlight application (for the sake of UX) with SQL Server.
Can anybody suggest a way to implement this quickly?
I know this is a very open-ended question and I am not looking for concrete answers, but rather for opinions/experiences from SO users.
My main concerns are the migration approach, the possible architecture, and the design patterns to implement (for SL I know of MVVM).
Quickly? That's what every manager wants, but I doubt it.
You have fundamentally different models of UIs, and different programming languages.
Unless these applications are small, it is unlikely that you will be able to convert them by hand in any short period of time (or even by yourself, as the OP's "I intend" seems to imply).
Gartner Group has analyzed manual migrations and suggests that if everything is "similar", the actual conversion rate is ~150 lines/day, which is possible because you are translating more or less directly from a working, debugged application. (Just how big is the application in SLOC?) So, if you have 75,000 lines of code, you're looking at 500 man-days minimum. You might make the case that Delphi and C# are similar as programming languages. You cannot reasonably make that case for the Delphi UI and Silverlight, so this estimate is a lower bound.
There are those who say, "just throw it away and recode it from scratch". Unless your productivity exceeds 150 debugged lines of code per day [classic software engineering texts will tell you it is much smaller than this], this will take you even longer. It usually fails because you end up forgetting what features exist in the current program and rediscover them late in development, or worse, after an attempted deployment. Usually what happens is that the old application continues to evolve while the new one is being built (remember, you're 500 man-days away from the new one, minimum!) and the new one has to play catch-up with those changes. If the application has any serious scale (e.g., a million lines) this often prevents the new one from ever being serviceable. Another way to think about it: "how long did it take to build the original application?", and "why should building a replacement be enormously easier?". YMMV, if you can work miracles.
My very biased opinion (I build language translation tools) is that one of the most practical ways to do this is automated translation. This has its costs, too; these aren't off-the-shelf items, no matter what somebody tells you. You have to set up the translator, and that also takes a lot of energy, but that energy is proportional to the size of the language and (UI) library features used, rather than to the application size, so it is far more effective as the program gets large. This is still on the order of hundreds of man-days to code and test just for the language translation part. The difference is that once it is set up, you can apply it to the existing application of whatever size, in whatever state it happens to be in. There are more complications than this, but this approach overcomes the "can't catch up" problem of manual conversions, and the "can't get enough coders to manually translate it" problem.
For more details, see my answer on how to translate between languages.
If your application is relatively small, there are IMHO no good answers. Hand translation or recoding are likely your only (ugly) choices.
My suggestion would be to create "value add" extras and updates to your application using Silverlight as and when the need for extra functionality comes up until you've got something resembling a full product.
To me, developing in Silverlight seems to take a very long time, and the UX for a business application isn't massively improved over, say, ASP.NET Ajax (if the Ajax is done properly). I imagine that if you were to sit down today and completely rewrite a decent-sized application in Silverlight, then Silverlight would be end-of-life before your development was completed (unless you threw a massive team at it, of course).
If your business logic is well separated from the UI, you can start by "porting" your code to Delphi Prism rather than C#. This offers a shorter migration path. If your business logic is tightly coupled with the UI (as frequently happened 10-15 years ago), then rewriting everything from scratch could be a better idea.
And once you have all the code in Pascal up and running, rewriting it in C# (if you still need to at the end) is almost trivial with the help of a decompiler.

Scale now or later?

I am looking to start developing a relatively simple web application that will pull data from various sources and normalize it. A user can also enter data directly into the site. I anticipate hitting scale, if successful. Is it worth putting in the time now to use scalable or distributed technologies, or should I just start with a LAMP stack? Framework or not? Any thoughts, suggestions, or comments would help.
Disregard my vague description of the idea, I'd love to share once I get further along.
Later. I can't remember who said it (might have been SO's Jeff Atwood) but it rings true: your first problem is getting other people to care about your work. Worry about scale when they do.
Definitely go with a well structured framework for your own sanity though. Even if it doesn't end up with thousands of users, you'll want to add features as time goes on. Maintaining an expanding codebase without good structure quickly becomes fairly horrible (been there, done that, lost the client).
btw, if you're tempted to write your own framework, be aware that it is a lot of work. My company has an in-house one we're quite proud of, but it's taken 3-4 years to mature.
Is it worth putting in the time now to use scalable or distributed technologies or just start with a LAMP stack?
A LAMP stack is scalable. Apache provides many, many alternatives.
Framework or not?
Always use the highest-powered framework you can find. Write as little code as possible. Get something in front of people as soon as you can.
Focus on what's important: Get something to work.
If you don't have something that works, scalability doesn't matter, does it?
Then read up on optimization. http://c2.com/cgi/wiki?RulesOfOptimization is very helpful.
Rule 1. Don't.
Rule 2. Don't yet.
Rule 3. Profile before Optimizing.
Until you have a working application, you don't know what -- specific -- thing limits your scalability.
Don't assume. Measure.
That means build something that people actually use. Scale comes later.
Absolutely do it later. Scaling pains are a good problem to have; they mean people like your project enough to stress the hardware it's running on.
The last company I worked at started fairly small with PHP and the very very first versions of CakePHP that came out (when it was still in beta). Some of the code was dirty, the admin tool was a mess (code-wise), and sure it could have been done better from the start. But do you know what? They got it out the door before their competitors did, and became extremely successful.
When I came on board they were starting to hit the limits of their current potential scalability, and that is when they decided to start looking at CDNs, lighttpd caching techniques, and other ways to clean up the code and make things run more smoothly under heavy load. I don't work for them anymore, but it was a good experience in growing an architecture beyond what it was originally scoped for.
I can tell you right now if they had tried to do the scalability and optimizations before selling content and getting a website live - they would never have grown to the size they are now. The company is www.beatport.com if you're interested in who I'm talking about (To re-iterate, I'm not trying to advertise them as I am no longer affiliated with them, but it stands as a good case study and it's easier for people to understand what I'm talking about when they see their website).
Personally, after working with Ruby and Rails (and understanding the separation!) for a couple of years, and having experience with PHP at Beatport - I can confidently say that I never want to work with PHP code again =p
Funny to ask "scale now or later?" and label it "ruby on rails".
Actually, Ruby on Rails was created by David Heinemeier Hansson, who has a whole chapter in his book labeled "Scale later" :))
http://gettingreal.37signals.com/ch04_Scale_Later.php
I agree with the earlier respondents -- make it useful, make it work and get people motivated to use it first. I also agree that you should pick off-the-shelf components (of which there are many) rather than roll your own, as much as possible. At the same time, make sure that you choose components for your infrastructure that you know to be scalable, so that you can go there when you need to without having to rewrite major chunks of your application.
As the Product Manager for Berkeley DB, I've seen countless cases of developers who decided "Oh, we'll just write that to a flat file" or "I can write my own simple B-tree function" or "Database XYZ is 'good enough', I don't have to worry about concurrency or scalability until later". The problem with that approach is that a) you're re-inventing the wheel (and forgoing what others have already learned the hard way) and b) you're ignoring the fact that you'll have to deal with scalability at some point, yet going with a 'good enough' solution anyway.
Good luck in your implementation.

Resources