C-style functions in Objective-C iOS application [closed] - ios

As many of us know, we can use C-style functions in our Objective-C iOS applications, like this one for instance:
NSString* returnTen() {
    return @"Ten";
}
But when is it reasonable to use a C-style function instead of a regular Objective-C method? Or, in general, when would one use C-style functions in an iOS application?

But when is it reasonable to use a C-style function instead of a regular Objective-C method?
It's entirely reasonable to use a function instead of a class and methods whenever you want to define an operation that isn't tied to an object and its data. Doing that is sort of antithetical to the object oriented programming style, so most people would normally prefer to do things with classes when possible, but it's certainly not unreasonable to write a function here and there if it suits your purpose.
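For example, a small sketch (the helper name is just illustrative): a pure computation that involves no object state reads naturally as a free function.
#include <math.h>
// A pure computation with no object state involved; `static` keeps it private
// to the file it is defined in.
static double DegreesToRadians(double degrees) {
    return degrees * M_PI / 180.0;
}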
Or, in general, when would one use C-style functions in an iOS application?
The obvious case is when you're using a library or framework that provides functions. There are a great many of these. Some examples (a short sketch follows the list):
C-based system frameworks such as Core Foundation and Core Graphics
POSIX/C standard library
any 3rd party C library you happen to need
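A rough sketch of the first two items, mixing plain C framework calls with Objective-C code (the statements are assumed to live inside some method or function):
#import <CoreFoundation/CoreFoundation.h>
#import <CoreGraphics/CoreGraphics.h>
#include <math.h>
CGRect frame = CGRectMake(0, 0, 320, 44);                                           // Core Graphics
CFStringRef label = CFStringCreateWithCString(NULL, "Ten", kCFStringEncodingUTF8);  // Core Foundation
double diagonal = sqrt(frame.size.width * frame.size.width +
                       frame.size.height * frame.size.height);                      // C standard library
CFRelease(label); // Core Foundation objects follow C-style ownership rules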
Also, consider that blocks are very similar to functions, differing mainly in that they can capture state from the surrounding scope. Blocks are used extensively in modern Objective-C, and in many cases they simplify APIs by replacing or avoiding delegate methods and the like.
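A minimal sketch of that difference: unlike a plain C function, a block can capture values from the scope it was created in.
NSString *prefix = @"Value:";
// The block captures `prefix` from the enclosing scope; a plain C function could not.
NSString *(^describe)(NSInteger) = ^NSString *(NSInteger value) {
    return [NSString stringWithFormat:@"%@ %ld", prefix, (long)value];
};
NSLog(@"%@", describe(10)); // prints "Value: 10"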

There is nothing wrong with using what you call "C-style functions" in an iOS project. Objective-C is a superset of C, so in a way you're writing C all the time anyway.
Usually, plain C functions are a good fit when the purpose of the function is very atomic and/or when you'd like the compiler to "inline" the work the function does into the rest of your code. For example, you might need a function that evaluates the bigger of two given integers. Because this is such a "small" thing to do, it isn't worth the (admittedly tiny) overhead of a full-blown Objective-C message send (which behind the curtains goes through objc_msgSend()), so you could implement it as a small C function. So, instead of
- (int)biggerIntegerOf:(int)a and:(int)b {
    return a > b ? a : b;
}
you could use
static inline int max(int a, int b) {
    return a > b ? a : b;
}
By specifying "inline" you give the compiler a hint that says "at the places in my code where I call max(), you may substitute the function's body right there, as if I had copied and pasted it".
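At the call site the difference looks roughly like this (a sketch; the second line assumes we are inside the class that defines the method above):
int a = max(3, 7);                      // plain C call, a candidate for inlining
int b = [self biggerIntegerOf:3 and:7]; // full message send via objc_msgSend()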

Related

Why can't we use a for loop with increment operator in Swift [closed]

Instead of using:
for (n = 1; n <= 5; n++) {
    print(n)
}
why do we use the following construct in Swift?
for n in 1...5 {
    print(n)
}
// Output: 1 2 3 4 5
"I am certainly open to considering dropping the C-style for loop.
IMO, it is a rarely used feature of Swift that doesn’t carry its
weight. Many of the reasons to remove them align with the rationale
for removing -- and ++. "
-- Chris Lattner,
There is a proposal (SE-0004) about removing the increment and decrement operators: https://github.com/apple/swift-evolution/blob/master/proposals/0004-remove-pre-post-inc-decrement.md
These operators increase the burden to learn Swift as a first programming language - or any other case where you don't already know these operators from a different language.
Their expressive advantage is minimal - x++ is not much shorter than x += 1.
Swift already deviates from C in that the =, += and other assignment-like operations returns Void (for a number of reasons). These operators are inconsistent with that model.
Swift has powerful features that eliminate many of the common reasons you'd use ++i in a C-style for loop in other languages, so these are relatively infrequently used in well-written Swift code. These features include the for-in loop, ranges, enumerate, map, etc.
Code that actually uses the result value of these operators is often confusing and subtle to a reader/maintainer of code. They encourage "overly tricky" code which may be cute, but difficult to understand.
While Swift has well defined order of evaluation, any code that depended on it (like foo(++a, a++)) would be undesirable even if it was well-defined.
These operators are applicable to relatively few types: integer and floating point scalars, and iterator-like concepts. They do not apply to complex numbers, matrices, etc.
Finally, these fail the metric of "if we didn't already have these, would we add them to Swift 3?"
And about the C-style for loop itself (SE-0007): https://github.com/apple/swift-evolution/blob/master/proposals/0007-remove-c-style-for-loops.md
Both for-in and stride provide equivalent behavior using Swift-coherent approaches without being tied to legacy terminology.
There is a distinct expressive disadvantage in using for-loops compared to for-in in succinctness.
for-loop implementations do not lend themselves to use with collections and other core Swift types.
The for-loop encourages use of unary incrementors and decrementors, which will be soon removed from the language.
The semi-colon delimited declaration offers a steep learning curve from users arriving from non C-like languages.
If the for-loop did not exist, I doubt it would be considered for inclusion in Swift 3.

Best practice to define the helper functions in swift [closed]

I have some helper functions, like an image compression function, in my iOS project. What is the best place to define them (considering memory)?
as a static method inside a class
as a function inside a struct
as a free function (outside of any class)
None of the approaches has any effect on memory. Code is linked into your executable regardless, and does not affect the size of your objects. You should make your decision based on what makes for the cleanest design.
Functions inside a struct sound like a bad idea unless the functions relate to the hosting struct.
Static class methods can be good for related functions that you want to group together, assuming they are related to the class that "hosts" them. That approach also has the advantage of "namespacing" your function names so you avoid naming collisions. (As others have pointed out, though, making methods class methods purely for name-spacing is probably not a good idea.)
Global functions are good for functions that really are global in scope and don't naturally fall into groups with other functions, but you need to be careful about naming them to avoid collisions.
The differences should be negligible. Also, you should not use classes as namespaces.
I'm posting this answer even though there are perfectly valid answers available already. The point I would like to highlight is that you shouldn't really bother much with any kind of optimization before you actually hit a problem. Here is a relevant quotation from Donald Knuth:
"Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%."
Although in your case there are unlikely to be any serious problems, whichever way you do it :)
The difference between a static function inside a class/struct/enum and a global function is solely the difference in the hoops you have to jump through at the calling site. For example:
class Foo {
    static func bar() { }
}
func foo_bar() { }
// call the first
Foo.bar()
// call the second
foo_bar()
IMHO, classes are not namespaces and should not be used as such; that's what namespaces are for.

categories vs utility classes in iOS [closed]

Why are utility classes considered bad practice in iOS? And why are categories used as a replacement for helper/utility classes? Is there any particular benefit that we get from categories that we don't get from utility classes?
Categories have a specific purpose. They extend functionality of a class in code that's external to the class for some reason (you don't have source for the original, you want different visibility for the category, ...).
When you say "helper" class, that sounds like delegates rather than categories...or just simple composition.
Actual utility classes -- ones that have no instances or state -- do exist where needed.
Utilities are not a bad practice in iOS. Sometimes it makes sense to have them around if you need a central hub for useful functions with a specific common goal (e.g. a MathUtils class for parsing doubles or ints in Objective-C).
Having said that, by convention categories/extensions are considered nicer, as they let you operate directly on the objects themselves without allocating or instantiating extra helper objects. For example, you can create a category on NSNumber to divide by a number easily, giving you syntax that reads naturally (a minimal sketch of such a category follows the snippets below):
In Swift:
number.divideBy(2)
or in Objective-C:
[number divideBy:2]
As opposed to:
let utility = UtilityClass()
utility.divideNumber(number, by:2)
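For reference, a minimal sketch of such a category in Objective-C; the category name and the exact divideBy: signature are made up for illustration, not taken from any framework:
@interface NSNumber (Arithmetic)
- (NSNumber *)divideBy:(double)divisor;
@end
@implementation NSNumber (Arithmetic)
- (NSNumber *)divideBy:(double)divisor {
    // Operates directly on the receiver; no utility object needs to be created.
    return @(self.doubleValue / divisor);
}
@end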
Hopefully this helps convince you to start working with Categories, they are your best friends here!
One should not say that utility classes are bad in general; it depends on the task.
Maybe the reason for the statement is that developers coming from other programming languages don't know categories and simply use utility classes even where a category would do the job better. This especially applies to utility classes whose single purpose is to split an existing class into more lightweight parts. That is somewhat bad, because it does not reflect the meaning of a class: if code belongs to a class semantically, you should not break that semantic apart for administrative reasons. It is part of the class; let it be part of the class.
There is a simple test for it: if you find yourself typing self.master (where master is the original class) very often (especially if that is the only use of self at all), it is obvious that the utility class has no purpose of its own and works entirely on the original class.
But of course, if you have separate functionality, backed by a separate set of ivars, it might be correct to have extra classes for it. (Are those still utility classes? Maybe you should make your question more specific.)

need of pointer objects in objective c

A very basic question, but really important for understanding the concepts.
In C++ or C, we usually don't use pointer variables to store plain values, i.e. values are stored directly, as in:
int a = 10;
But here in the iOS SDK, in Objective-C, most of the objects we use are declared through a pointer, as in:
NSArray *myArray = [NSArray array];
So the question arises: what are the benefit and need of using pointer objects (that's what we call them here; if that is not correct, please do tell)?
Also, I sometimes get confused with memory allocation fundamentals when using pointer objects. Where can I look for good explanations?
In C++ or C, we usually don't use pointer variables to store plain values
I would take that "or C" part out. C++ programmers do frown upon the use of raw pointers, but C programmers don't. C programmers love pointers and regard them as an inevitable silver bullet solution to all problems. (No, not really, but pointers are still very frequently used in C.)
But here in the iOS SDK, in Objective-C, most of the objects we use are declared through a pointer
Oh, look closer:
most of the objects
Even closer:
objects
So you are talking about Objective-C objects, amirite? (Disregard the subtlety that the C standard essentially describes all values and variables as an "object".)
It's really just Objective-C objects that are always referred to through pointers in Objective-C. Since Objective-C is a strict superset of C, all of the C idioms and programming techniques still apply when writing iOS apps (or OS X apps, or any other Objective-C based program, for that matter). It's pointless, superfluous, wasteful, and as such even considered an error, to write something like
int *i = malloc(sizeof(int)); // needless heap allocation for a simple counter
for (*i = 0; *i < 10; ++*i) { /* ... */ }
just because we are in Objective-C land. Primitives (or more correctly, "plain old data types" in C++ terminology) still follow the "don't use a pointer if not needed" rule.
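So the idiomatic version of the loop above is simply:
for (int i = 0; i < 10; ++i) {
    // a plain automatic variable: no heap allocation, no pointer
}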
what are the benefit and need of using pointer-objects
So, why are they necessary?
Objective-C is an object-oriented and dynamic language. These two strongly related properties of the language make it possible for programmers to take advantage of technologies such as polymorphism, duck typing and dynamic binding.
The way these features are implemented make it necessary that all objects be represented by a pointer to them. Let's see an example.
A common task when writing a mobile application is retrieving some data from a server. Modern web-based APIs use the JSON data exchange format for serializing data. This is a simple textual format which can be parsed (for example, using the NSJSONSerialization class) into various types of data structures and their corresponding collection classes, such as an NSArray or an NSDictionary. This means that the JSON parser class/method/function has to return something generic, something that can represent both an array and a dictionary.
So now what? We can't return a non-pointer NSArray or NSDictionary struct (Objective-C objects are really just plain old C structs under the hood on all the platforms I know Objective-C runs on), because they are of different sizes, have different memory layouts, etc. The compiler couldn't make sense of the code. That's why we return a pointer to a generic Objective-C object, of type id.
The C standard mandates that pointers to structs (and as such, to objects) have the same representation and alignment requirements (C99 6.2.5.27), i. e. that a pointer to any struct can be cast to a pointer to any other struct safely. Thus, this approach is correct, and we can now return any object. Using runtime introspection, it is also possible to determine the exact type (class) of the object dynamically and then use it accordingly.
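A short sketch of that situation (jsonData here is assumed to hold the server's response):
NSError *error = nil;
// The return type is the generic object pointer `id`: the top-level JSON value
// may turn out to be an array or a dictionary, and we only find out at runtime.
id parsed = [NSJSONSerialization JSONObjectWithData:jsonData options:0 error:&error];
if ([parsed isKindOfClass:[NSArray class]]) {
    NSLog(@"Got an array with %lu elements", (unsigned long)[parsed count]);
} else if ([parsed isKindOfClass:[NSDictionary class]]) {
    NSLog(@"Got a dictionary with keys: %@", [parsed allKeys]);
}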
And why they are convenient or better (in some aspects) than non-pointers:
Using pointers, there is no need to pass around multiple copies of the same object. Creating a lot of copies (for example, each time an object is assigned to or passed to a function) can be slow and lead to performance problems - a moderately complex object, for example, a view or a view controller, can have dozens of instance variables, thus a single instance may measure literally hundreds of bytes. If a function call that takes an object type is called thousands or millions of times in a tight loop, then re-assigning and copying it is quite painful (for the CPU anyway), and it's much easier and more straightforward to just pass in a pointer to the object (which is smaller in size and hence faster to copy over). Furthermore, Objective-C, being a reference counted language, even kind of "discourages" excessive copying anyway. Retaining and releasing is preferred over explicit copying and deallocation.
Also, I sometimes get confused with memory allocation fundamentals when using pointer objects
Then you are most probably confused even without pointers. Don't blame it on the pointers; it's rather a programmer error ;-)
So here's...
...the official documentation and memory management guide by Apple;
...the earliest related Stack Overflow question I could find;
...something you should read before trying to continue Objective-C programming #1; (i. e. learn C first)
...something you should read before trying to continue Objective-C programming #2;
...something you should read before trying to continue Objective-C programming #3;
...and an old Stack Overflow question regarding C memory management rules, techniques and idioms;
Have fun! :-)
Anything more complex than an int or a char or similar is usually passed as a pointer, even in C. In C you could of course pass a struct of data from function to function by value, but this is rarely seen.
Consider the following code:
struct some_struct {
    int an_int;
    char a_char[1234];
};

void function2(struct some_struct s); // forward declaration

void function1(void)
{
    struct some_struct s;
    function2(s); // the whole struct is copied onto function2's stack frame
}

void function2(struct some_struct s)
{
    // do something with some_struct s
}
The some_struct data s will be put on the stack for function1. When function2 is called, the data will be copied and put on the stack for use in function2. It requires the data to be on the stack twice, as well as the data to be copied, which is not very efficient. Also, note that changing the values of the struct in function2 will not affect the struct in function1; they are different data in memory.
Instead consider the following code:
#include <stdlib.h> // for malloc/free

struct some_struct {
    int an_int;
    char a_char[1234];
};

void function2(struct some_struct *s); // forward declaration

void function1(void)
{
    struct some_struct *s = malloc(sizeof(struct some_struct));
    function2(s); // only the pointer is copied; the struct stays on the heap
    free(s);
}

void function2(struct some_struct *s)
{
    // do something with some_struct s
}
The some_struct data will be put on the heap instead of the stack. Only a pointer to this data will be put on the stack for function1, and when function2 is called, another copy of the pointer is put on the stack for function2. This is a lot more efficient than the previous example. Also, note that any changes to the data in the struct made by function2 will now affect the struct in function1; they are the same data in memory.
These are basically the fundamentals on which higher-level programming languages such as Objective-C are built, and the benefits of building these languages this way.
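Carrying the same idea over to Objective-C objects, assigning an object variable copies only the pointer, not the object:
NSMutableArray *a = [NSMutableArray arrayWithObject:@"Ten"];
NSMutableArray *b = a;                   // copies only the pointer, no new array is created
[b addObject:@"Eleven"];
NSLog(@"%lu", (unsigned long)a.count);   // 2 -- a and b refer to the same heap object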
The benefit of a pointer is that it behaves like a mirror: it reflects whatever it points to. One main place where pointers are very useful is sharing data between functions or methods. Local variables are not guaranteed to keep their values after a function returns, and they are visible only inside their own function, yet you still may want to share data between functions or methods. You can use return, but that works only for a single value. You can also use global variables, but if you store all your data that way you soon have a mess. So we need some way to share data between functions or methods, and that is where pointers come to the rescue: you create the data once and just pass around the memory address (its unique ID) pointing to it. Through that pointer the data can be accessed and altered in any function or method. In terms of writing modular code, that is the most important purpose of a pointer: to share data in many different places in a program.
The main difference between C and Objective-C in this regard is that arrays in Objective-C are commonly implemented as objects. (Arrays are always implemented as objects in Java, BTW, and C++ has several common classes resembling NSArray.)
Anyone who has considered the issue carefully understands that "bare" C-like arrays are problematic -- awkward to deal with and very frequently the source of errors and confusion. ("An array 'decays' to a pointer" -- what is that supposed to mean, anyway, other than to admit in a backhanded way "Yes, it's confusing"??)
Allocation in Objective-C is a bit confusing in large part because it's in transition. The old manual reference count scheme could be easily understood (if not so easily dealt with in implementations), but ARC, while simpler to deal with, is far harder to truly understand, and understanding both simultaneously is even harder. But both are easier to deal with than C, where "zombie pointers" are almost a given, due to the lack of reference counting. (C may seem simpler, but only because you don't do things as complex as those you'd do with Objective-C, due to the difficulty controlling it all.)
You always use a pointer when referring to something on the heap, and sometimes, but usually not, when referring to something on the stack.
Since Objective-C objects are always allocated on the heap (with the exception of Blocks, but that is orthogonal to this discussion), you always use pointers to Objective-C objects. Both the id and Class types are really pointers.
Where you don't use pointers are for certain primitive types and simple structures. NSPoint, NSRange, int, NSUInteger, etc... are all typically accessed via the stack and typically you do not use pointers.

Parsing Source Code - Unique Identifiers for Different Languages? [closed]

I'm building an application that receives source code as input and analyzes several aspects of the code. It can accept code from many common languages, e.g. C/C++, C#, Java, Python, PHP, Pascal, SQL, and more (however many languages are unsupported, e.g. Ada, Cobol, Fortran). Once the language is known, my application knows what to do (I have different handlers for different languages).
Currently I'm asking the user to input the programming language the code is written in, and this is error-prone: although users know the programming languages, a small percentage of them (on rare occasions) pick the wrong option simply out of carelessness, and that breaks the system (i.e. my analysis fails).
It seems to me like there should be a way to figure out (in most cases) what the language is, from the input text itself. Several notes:
I'm receiving pure text and not file names, so I can't use the extension as a hint.
The user is not required to input complete source files, and can also input code snippets (i.e. the include/import part may not be included).
It's clear to me that any algorithm I choose will not be 100% foolproof, certainly for very short input (e.g. code that could be accepted by both Python and Ruby), in which case I will still need the user's assistance; however, I would like to minimize user involvement in the process to minimize mistakes.
Examples:
If the text contains "x->y()", I may know for sure it's C++ (?)
If the text contains "public static void main", I may know for sure it's Java (?)
If the text contains "for x := y to z do begin", I may know for sure it's Pascal (?)
My question:
Are you familiar with any standard library/method for figuring out automatically what the language of an input source code is?
What are the unique code "tokens" with which I could certainly differentiate one language from another?
I'm writing my code in Python but I believe the question to be language agnostic.
Thanks
Vim has an autodetect-filetype feature. If you download the Vim source code you will find a /vim/runtime/filetype.vim file.
For each language it checks the extension of the file, and for some of them (the most common ones) it also has a function that can detect the filetype from the source code. You can check that out; the code is pretty easy to understand and there are some very useful comments there.
Build a generic tokenizer and then run a Bayesian filter over the tokens. Use the existing "user checks a box" system to train it.
Here is a simple way to do it. Just run the parser on every language. Whatever language gets the farthest without encountering any errors (or has the fewest errors) wins.
This technique has the following advantages:
You already have most of the code necessary to do this.
The analysis can be done in parallel on multi-core machines.
Most languages can be eliminated very quickly.
This technique is very robust. Languages that might appear very similar when using a fuzzy analysis (Bayesian, for example) would likely produce many errors when the actual parser is run.
If a program is parsed correctly in two different languages, then there was never any hope of distinguishing them in the first place.
I think the problem is impossible. The best you can do is to come up with some probability that a program is in a particular language, and even then I would guess producing a solid probability is very hard. Problems that come to mind at once:
use of features like the C pre-processor can effectively mask the underlying language altogether
looking for keywords is not sufficient as the keywords can be used in other languages as identifiers
looking for actual language constructs requires you to parse the code, but to do that you need to know the language
what do you do about malformed code?
Those seem enough problems to solve to be going on with.
One program I know which even can distinguish several different languages within the same file is ohcount. You might get some ideas there, although I don't really know how they do it.
In general you can look for distinctive patterns:
Operators might be an indicator, such as := for Pascal/Modula/Oberon, => or the whole of LINQ in C#
Keywords would be another one as probably no two languages have the same set of keywords
Casing rules for identifiers, assuming the piece of code was written conforming to best practices. Probably a very weak rule.
Standard library functions or types. Especially for languages that usually rely heavily on them, such as PHP you might just use a long list of standard library functions.
You may create a set of rules, each of which indicates a possible set of languages if it matches. Intersecting the resulting lists will hopefully get you only one language.
The problem with this approach however, is that you need to do tokenizing and compare tokens (otherwise you can't really know what operators are or whether something you found was inside a comment or string). Tokenizing rules are different for each language as well, though; just splitting everything at whitespace and punctuation will probably not yield a very useful sequence of tokens. You can try several different tokenizing rules (each of which would indicate a certain set of languages as well) and have your rules match to a specified tokenization. For example, trying to find a single-quoted string (for trying out Pascal) in a VB snippet with one comment will probably fail, but another tokenizer might have more luck.
But since you want to perform analysis anyway you probably have parsers for the languages you support, so you can just try running the snippet through each parser and take that as indicator which language it would be (as suggested by OregonGhost as well).
Some thoughts:
$x->y() would be valid in PHP, so ensure that there's no $ symbol if you think C++ (though I think you can store function pointers in a C struct, so this could also be C).
public static void main is Java if it is cased properly - write Main and it's C#. This gets complicated if you take case-insensitive languages like many scripting languages or Pascal into account. The [] attribute syntax in C# on the other hand seems to be rather unique.
You can also try to use the keywords of a language - for example, Option Strict or End Sub are typical for VB and the like, while yield is likely C# and initialization/implementation are Object Pascal / Delphi.
If your application is analyzing the source code anyway, you could try to throw your analysis code at it for every language, and if it fails really badly, it was the wrong language :)
My approach would be:
Create a list of strings or regexes (with and without case sensitivity), where each element has an associated list of languages that it indicates:
class => C++, C#, Java
interface => C#, Java
implements => Java
[attribute] => C#
procedure => Pascal, Modula
create table / insert / ... => SQL
etc. Then parse the file line-by-line, match each element of the list, and count the hits.
The language with the most hits wins ;)
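The asker mentioned Python, but to stay with the Objective-C used elsewhere on this page, here is a rough sketch of that hit-counting idea; the indicator lists are illustrative only and the scoring is deliberately naive.
#import <Foundation/Foundation.h>
// Count indicator hits per language and return the best guess (nil = undecided).
static NSString *GuessLanguage(NSString *source) {
    NSDictionary<NSString *, NSArray<NSString *> *> *indicators = @{
        @"C++":    @[@"::", @"->", @"#include", @"template"],
        @"C#":     @[@"namespace", @"using System", @"public static void Main"],
        @"Java":   @[@"public static void main", @"implements", @"import java"],
        @"Pascal": @[@":=", @"begin", @"end;", @"procedure"],
        @"SQL":    @[@"SELECT", @"INSERT INTO", @"CREATE TABLE"],
    };
    NSString *best = nil;
    NSUInteger bestScore = 0;
    for (NSString *language in indicators) {
        NSUInteger score = 0;
        for (NSString *token in indicators[language]) {
            // Number of non-overlapping occurrences of the token in the source.
            score += [[source componentsSeparatedByString:token] count] - 1;
        }
        if (score > bestScore) {
            bestScore = score;
            best = language;
        }
    }
    return best;
}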
How about word frequency analysis (with a twist)? Parse the source code and categorise it much like a spam filter does. This way when a code snippet is entered into your app which cannot be 100% identified you can have it show the closest matches which the user can pick from - this can then be fed into your database.
Here's an idea for you. For each of your N languages, find some files in the language, something like 10-20 per language would be enough, each one not too short. Concatenate all files in one language together. Call this lang1.txt. GZip it to lang1.txt.gz. You will have a set of N langX.txt and langX.txt.gz files.
Now, take the file in question and append it to each of the langX.txt files, producing langXapp.txt and the corresponding gzipped langXapp.txt.gz. For each X, find the difference between the size of langXapp.txt.gz and langX.txt.gz. The smallest difference will correspond to the language of your file.
Disclaimer: this will work reasonably well only for longer files. Also, it's not very efficient. But on the plus side you don't need to know anything about the language, it's completely automatic. And it can detect natural languages and tell between French or Chinese as well. Just in case you need it :) But the main reason, I just think it's interesting thing to try :)
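Again keeping with Objective-C for this page, here is a rough in-memory sketch of the same trick using zlib (link with -lz) rather than writing .gz files to disk; loading the per-language corpora is left out.
#import <Foundation/Foundation.h>
#include <limits.h>
#include <zlib.h>
// Compressed size of a blob at zlib's default settings (error handling omitted).
static uLong CompressedSize(NSData *data) {
    uLongf destLen = compressBound((uLong)data.length);
    Bytef *buffer = malloc(destLen);
    compress(buffer, &destLen, data.bytes, (uLong)data.length);
    free(buffer);
    return destLen;
}
// The language whose corpus grows the least (after compression) when the snippet
// is appended is the best guess.
static NSString *GuessByCompression(NSDictionary<NSString *, NSData *> *corpora,
                                    NSData *snippet) {
    NSString *best = nil;
    uLong bestDelta = ULONG_MAX;
    for (NSString *language in corpora) {
        NSMutableData *combined = [corpora[language] mutableCopy];
        [combined appendData:snippet];
        uLong delta = CompressedSize(combined) - CompressedSize(corpora[language]);
        if (delta < bestDelta) {
            bestDelta = delta;
            best = language;
        }
    }
    return best;
}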
The most bulletproof but also most work-intensive way is to write a parser for each language and just run them in sequence to see which one accepts the code. This won't work well if the code has syntax errors, though, and you most probably will have to deal with such code; people do make mistakes. One of the fast ways to implement this is to get common compilers for every language you support, run them, and check how many errors they produce.
Heuristics work up to a certain point, and the more languages you support the less help you get from them. But for the first few versions it's a good start, mostly because it's fast to implement and works well enough in most cases. You could check for specific keywords, function/class names from commonly used APIs, certain language constructs, etc. The best way is to count how many of these language-specific markers a file has for each candidate language; that helps with some syntax errors, user-defined functions whose names happen to match another language's keywords, and matches that only occur inside comments and string literals.
Anyhow, you will most likely fail sometimes, so some mechanism for the user to override the language choice is still necessary.
I think you should never rely on one single feature, since the absence of a feature in a fragment (e.g. somebody systematically using WHILE instead of for) might confuse you.
Also try to stay away from global identifiers like "IMPORT" or "MODULE" or "UNIT" or INITIALIZATION/FINALIZATION, since they might not always exist, be optional in complete sources, and totally absent in fragments.
Dialects and similar languages (e.g. Modula2 and Pascal) are dangerous too.
I would create simple lexers for a bunch of languages that keep track of key tokens, and then simply calculate a key tokens to "other" identifiers ratio. Give each token a weight, since some might be a key indicator to disambiguate between dialects or versions.
Note that this is also a convenient way to allow users to plugin "known" keywords to increase the detection ratio, by e.g. providing identifiers of runtime library routines or types.
Very interesting question. I don't know if it is possible to reliably distinguish languages from code snippets, but here are some ideas:
One simple way is to watch out for single quotes: in some languages they are used as a character wrapper, whereas in others they can delimit a whole string.
A unary asterisk or a unary ampersand operator is a strong indication that it's one of C/C++/C#.
Pascal is the only language (of the ones given) to use two characters for assignment, :=. Pascal has many unique keywords, too (begin, end, procedure, ...).
The class initialization with a function could be a nice hint for Java.
Functions that do not belong to a class eliminate Java (there is no free-standing max(), for example).
Naming of basic types (bool vs boolean).
Which reminds me: C++ can look very different across projects (#define boolean int), so you can never guarantee that you have found the correct language.
If you run the source code through a hashing algorithm and it looks the same, you're most likely analyzing Perl
Indentation is a good hint for Python
You could use functions provided by the languages themselves - like token_get_all() for PHP - or third-party tools - like pychecker for python - to check the syntax
Summing it up: This project would make an interesting research paper (IMHO) and if you want it to work well, be prepared to put a lot of effort into it.
There is no way of making this foolproof, but I would personally start with operators, since they are in most cases "set in stone" (I can't say this holds true to every language since I know only a limited set). This would narrow it down quite considerably, but not nearly enough. For instance "->" is used in many languages (at least C, C++ and Perl).
I would go for something like this:
Create a list of features for each language, these could be operators, commenting style (since most use some sort of easily detectable character or character combination).
For instance:
Some languages have lines that start with the character "#", these include C, C++ and Perl. Do others than the first two use #include and #define in their vocabulary? If you detect this character at the beginning of line, the language is probably one of those. If the character is in the middle of the line, the language is most likely Perl.
Also, if you find the pattern := this would narrow it down to some likely languages.
Etc.
I would have a two-dimensional table with languages and patterns found and after analysis I would simply count which language had most "hits". If I wanted it to be really clever I would give each feature a weight which would signify how likely or unlikely it is that this feature is included in a snippet of this language. For instance if you can find a snippet that starts with /* and ends with */ it is more than likely that this is either C or C++.
The problem with keywords is someone might use it as a normal variable or even inside comments. They can be used as a decider (e.g. the word "class" is much more likely in C++ than C if everything else is equal), but you can't rely on them.
After the analysis I would offer the most likely language as the choice for the user with the rest ordered which would also be selectable. So the user would accept your guess by simply clicking a button, or he can switch it easily.
In answer to 2: if there's a "#!" and the name of an interpreter at the very beginning, then you definitely know which language it is. (Can't believe this wasn't mentioned by anyone else.)
