What happens when null is assigned to an NSString and then saved in NSUserDefaults? Will it crash, since the string would be treated like a deallocated instance?
Now, if what you MEANT to say is "What happens when you set an NSString pointer to Null", that depends a bit on what you mean by "Null" (which is not a standard Objective-C term).
By "Null" you could mean either "nil" (which has C aliases of "null" and "NULL") or you could mean an NSNull object.
"nil" is a "null pointer" -- all zeros. This means that the pointer does not reference any object. "NSNull" is an object class whose only ("singleton") instance is used to represent "nothing", even though there is a real object representing this.
You can readily set an NSString pointer (i.e., something declared NSString *) to nil. (Note, no quotes around nil!) This makes the pointer "empty".
You can also, with a cast, set an NSString pointer to point to the NSNull object, though this isn't commonly done.
(And, of course, you can set an NSString pointer to an "empty string" -- @"". This is a real string of zero length, and is not really related to nil/null.)
Since "local" variables (variables declared inside a method) are not implicitly zeroed by the compiler when the method is entered, it's often good practice to explicitly set declared pointers to nil on method entry, so that, if they don't eventually get set (but do get referenced), they will not contain "garbage" that can produce mysterious errors.
You are mixing metaphors. First, sprinkle around some NSLog messages reporting on the object type, before saving and then on retrieval (NSStringFromClass([obj class])). You did not make it clear whether you are using a C-based SQLite interface or Core Data with a SQLite backing store, so update your question.
Lastly, if you message an object of class A with messages that only class B responds to, you will get exceptions pointing out that error. A deallocated-instance error is totally different, and indicates your memory management is faulty. To track down the latter, enable Zombies and break on any message sent to one (there are many Q&As here on how to do that).
Can I create a normal (non-optional) variable in Swift and assign a nil value to it, or let it become nil later during the app lifecycle?
It confuses me, since it's a little strange compared to traditional languages like Java and C#.
No, that's not possible by design. This excerpt from the documentation explains why:
The concept of optionals doesn’t exist in C or Objective-C. The nearest thing in Objective-C is the ability to return nil from a method that would otherwise return an object, with nil meaning “the absence of a valid object.” However, this only works for objects—it doesn’t work for structures, basic C types, or enumeration values. For these types, Objective-C methods typically return a special value (such as NSNotFound) to indicate the absence of a value. This approach assumes that the method’s caller knows there is a special value to test against and remembers to check for it. Swift’s optionals let you indicate the absence of a value for any type at all, without the need for special constants.
You are describing optionals as a bad thing, whereas they are one of the features I appreciate most in the language, because they prevent most null pointer exception bugs.
Another advantage is that when a function can return a non-value (nil for reference types in Objective-C, -1 for integers, etc.), you don't have to choose a sentinel from the spectrum of possible values that a variable of a certain type can have. Not to mention that such a sentinel is just a convention that both the caller and the function/method must follow.
Last, if you are using too many question and exclamation marks in your code, then you should think about whether optionals are really appropriate for the problem (thanks @David for the hint), or take advantage of optional binding more frequently in all cases where optionals really are needed.
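For instance, a minimal sketch of optional binding, which keeps the question and exclamation marks out of the rest of the code:

let input = "42"
if let number = Int(input) {      // Int(input) returns Int?, nil if parsing fails
    print("Parsed \(number)")     // number is a plain Int inside this branch
} else {
    print("Not a number")
}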
Suggested reading: Optionals
Addendum
Hint: I've frequently seen optionals used in cases where a variable is declared but cannot be initialized contextually. Non-optional mutable variables are not required to be declared and initialized on the same line - deferred initialization is allowed, provided that the variable is not used before its initialization. For example:
var x: Int // Variable declared here
for counter in 0..<10 {
    print(counter)
}
var array = [1, 2, 3]
// ... more lines of code NOT using the x variable
x = 5 // Variable initialized here
print(x)
Hopefully this feature will let you remove several optionals from your code...
Can I create a normal variable in Swift (I mean a non-optional) and assign a nil value to it, or let it be nil later during the app lifecycle?
No.
This is easily testable in the playground:
var str = "Hello, playground"
str = nil
The second line will get this error:
Type 'String' does not conform to protocol 'NilLiteralConvertible'
You might want to read more about Swift literal convertibles and see an example of how to use them.
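For the curious, a minimal sketch of conforming a custom type to that protocol (named ExpressibleByNilLiteral in current Swift; it was NilLiteralConvertible when this answer was written):

struct MaybeEmpty: ExpressibleByNilLiteral {
    var text: String
    init(nilLiteral: ()) { text = "" }   // a nil literal produces an empty value
    init(_ text: String) { self.text = text }
}

var m: MaybeEmpty = nil   // legal, because MaybeEmpty conforms
print(m.text.isEmpty)     // true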
You are right: you cannot set a non-optional to nil. Although this seems like a burden at first, you gain a lot of safety and readability by giving away a tiny bit of flexibility. Once you get used to it, you will appreciate it more and more.
This question is about understanding something we don't know. I'm researching why people don't use this. Is there a technical reason behind it? Please read it carefully before downvoting or answering.
We can write
NSMutableString *string = NSMutableString.string;
instead of
NSMutableString *string = [NSMutableString string];
Similarly, how can we write this method with dot notation?
NSMutableString *string = [NSMutableString stringWithString:@"test"];
Update:
This question is not a duplicate; it is a little bit different. I accept the answers below saying this is not recommended practice for good programmers. But they didn't explain why, and for what reason, programmers should avoid this. Could anyone give a clear explanation, with a proper link or document?
NSMutableString.string is a hack. It "works" for the same reason that myString.length and [myString length] produce the same result. However, since dot notation is used here without an actual property, it is an abuse of the language feature, because properties have different semantics. For example, when you access a property multiple times, you naturally expect to get the same result, unless the state of the object has changed between the two invocations. Since NSMutableString.string produces a new string object on each invocation, it breaks the semantics expected of "proper" properties, bringing down the readability of your program.
Objective-C does not have a general way of calling a method with arguments using dot notation. The feature is specific to properties. Theoretically you could use MyClass.xyz = abc in place of [MyClass setXyz:abc], but that would be a hack as well.
To answer your question, Objective-C does not offer a way to call [NSMutableString stringWithString:@"test"] with dot notation.
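A short sketch of why the property-like form is misleading: each access manufactures a new object, which property syntax does not suggest.

NSMutableString *a = NSMutableString.string;   // compiles: any zero-argument class method works
NSMutableString *b = NSMutableString.string;   // a second, different object
NSLog(@"%d", a == b);                          // 0 -- two distinct instances

// A method that takes an argument has no dot-notation form:
NSMutableString *c = [NSMutableString stringWithString:@"test"];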
It's just syntactic sugar. The string method has no arguments, so it can be treated like a getter, even though it isn't one. stringWithString: is a method with a parameter, so you can't call it like that.
In general, I would not recommend using dot syntax with methods; it's confusing.
Objective-C dot notation with class methods?
Update
I don't think there is any technical reason you should avoid it.
It's rather a matter of coding style: keeping code clean and consistent.
I'm working on a Ruby app in which symbols are used in various places where one would usually use strings or enums in other languages (mostly to specify configurations).
So my question is, why should I not add a to_str method to symbol?
It seems sensible, as it allows implicit conversion between symbol and string. So I can do stuff like this without having to worry about calling :symbol.to_s:
File.join(:something, "something_else") # => "something/something_else"
The negative is the same as the positive: it implicitly converts symbols to strings, which can be REALLY confusing if it causes an obscure bug. But given how symbols are generally used, I'm not sure this is a valid concern.
Any thoughts?
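For reference, the monkey patch being discussed would look like this (a sketch for illustration, not a recommendation):

class Symbol
  def to_str   # implicit-conversion hook used by core methods like File.join
    to_s
  end
end

File.join(:something, "something_else")  # => "something/something_else"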
When an object responds to to_str, you expect it to really act like a String. This means it should implement all of String's methods, so you could potentially break code that relies on this.
to_s means that you get a string representation of your object; that's why so many objects implement it. But the string you get is far from being 'semantically' equivalent to your object (a_hash.to_s is far from being a Hash). The absence of :symbol.to_str reflects this: a symbol is NOT and MUST NOT be confused with a string in Ruby, because they serve totally different purposes.
You wouldn't think about adding to_str to an Integer, right? Yet an Integer has a lot in common with a symbol: each one of them is unique. When you have a symbol, you expect it to be unique and immutable as well.
You don't have to implicitly convert it, right? Doing something like this will automatically coerce it to a string:
"#{:something}/something_else" # "something/something_else"
The negative is what you say--at one point, anyway, some core Ruby had different behavior based on symbol/string. I don't know if that's still the case. The threat alone makes me a little twitchy, but I don't have a solid technical reason at this point. I guess the thought of making a symbol more string-like just makes me nervous.
This is something I am not at all familiar with.
I want to try to make a simple form with 4 edit boxes, 2 at the top, 2 at the bottom, and a button. Basically what I want to do is type a couple of things in the top two boxes that are related to each other.
When I have them both filled in, I click the button and it saves this information in a database, preferably an external file (it doesn't have to be text; I think it would be better if not). So I can do that a couple of times, saving from the edit fields into a database.
Then, when I type one of the saved words into one of the edit fields at the bottom, it automatically fills in the other word in the last edit field. The form should connect to the database every time it's opened, so that when I open it another time I can still use the edit fields.
Can anyone advise me on how to do this?
What you are looking for is known as a dictionary, if I understand you correctly. In other languages it is known as an associative array or sometimes a hash.
You are going to want a modern version of Delphi, I'd guess 2010 or XE. If you can't access those, then you'd need a third-party library, or a home-grown one based on a TStringList. In fact, TStringList can operate in a dictionary-like mode, but it's a bit clunky.
You declare the dictionary as follows:
dict: TDictionary<string,string>;
You can add to it as follows:
dict.Add(box1.Text, box2.Text);
The first parameter is the key. The second is the value. Think of this as an array but indexed with a string rather than an integer.
If you want to recover a value then you use:
dict[key];
In your case you would write:
box4.Text := dict[box3.Text];
If you want to save to a file then you would iterate over the dict:
var
item: TPair<string,string>;
...
for item in dict do
AddToTextFile(item.Key, item.Value);
I've ignored all error handling issues, dealing with adding keys that already exist, asking for keys that are not in the dict, and so on. But this should give you a flavour.
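Putting the pieces together, a minimal sketch with the error handling mentioned above (control names such as Box1..Box4 and Button1 are hypothetical; assumes Delphi 2010+ with Generics.Collections):

uses
  Generics.Collections;

var
  Dict: TDictionary<string, string>;

procedure TForm1.FormCreate(Sender: TObject);
begin
  Dict := TDictionary<string, string>.Create;
end;

procedure TForm1.Button1Click(Sender: TObject);
begin
  Dict.AddOrSetValue(Box1.Text, Box2.Text);  // overwrites if the key already exists
end;

procedure TForm1.Box3Change(Sender: TObject);
var
  Value: string;
begin
  if Dict.TryGetValue(Box3.Text, Value) then  // no exception on a missing key
    Box4.Text := Value;
end;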
I'd recommend reading up on associative arrays if you aren't already familiar with them. I'm sure there is a page on Wikipedia, and you could do worse than read a tutorial on Python, which is sure to cover them - the issues are really the same no matter what language you consider.
Assume you have a variety of Number- or int-based variables that you want to be initialized to some default value. But using 0 could be problematic, because 0 is meaningful and could have side effects.
Are there any conventions around this?
I have been working in ActionScript lately and have a variety of value objects with optional parameters, so for most variables I set null, but for Numbers or ints I can't use null. An example:
package com.website.app.model.vo
{
    public class MyValueObject
    {
        public function MyValueObject (
            _id:String = null,
            _amount:Number = 0,
            _isPurchased:Boolean = false
        )
        { // Constructor
            if ( _id != null ) this.id = _id;
            if ( _amount != 0 ) this.amount = _amount;
            if ( _isPurchased != false ) this.isPurchased = _isPurchased;
        }
        public var id:String;
        public var amount:Number;
        public var isPurchased:Boolean;
    }
}
The difficulty is that using 0 in the above code might be problematic if the value is never changed from its initial value. It is easy to detect whether a variable has a null value, but detecting 0 may not be so easy, because 0 might be a legitimate value. I want to set a default value to make the parameter optional, but I also want to later detect in my code whether the value was changed from its default, without hard-to-debug side effects.
I suppose I could use something like -1 as the default. I was wondering if there are any well-known coding conventions for this kind of thing? I suppose it depends on the nature of the variable and the data.
This is my first Stack Overflow question. Hopefully the gist of it makes sense.
A lot of debuggers will use 0xdeadbeef for initializing registers. I always get a chuckle when I see that.
But, in all honesty, your question contains its own answer - use a value that your variable is not ever expected to become. It doesn't matter what the value is.
Since you asked in a comment, I'll talk a little bit about C and C++. For efficiency reasons, local variables and allocated memory are not initialized by default, but debug builds often do this to help catch errors. A common value used is 0xcdcdcdcd, which is reasonably unlikely: it has the high bit set and is either a rather large unsigned or a rather large negative signed number. As a pointer address it is odd, which will cause an alignment exception if used on anything but a char (though not on x86). It has no special meaning as a 32-bit floating point number, so it isn't a perfect choice.
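As a sketch of the idea, a debug allocator might fill fresh memory with such a pattern (debug_malloc is a hypothetical helper, not a real library call):

#include <stdlib.h>
#include <string.h>

void *debug_malloc(size_t n)
{
    void *p = malloc(n);
    if (p)
        memset(p, 0xCD, n);  /* recognizable "uninitialized" fill, as in 0xcdcdcdcd */
    return p;
}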
Occasionally you'll see a partially overwritten value in a variable, such as 0xcdcd0000 or 0x0000cdcd. These can be treated as suspicious at the very least.
Sometimes different values will be used depending on the allocation area or library. That gives you a clue where a bad value may have originated (i.e., it itself wasn't initialized but was copied from an uninitialized value).
The ideal value would be invalid no matter what alignment you read from memory, and invalid over all primitive types. It should also look suspicious to a human, so even someone who doesn't know the convention can suspect something is afoot. That's why 0xdeadbeef can be a good choice: the (hex-viewing) programmer will recognize it as the work of a human and not random chance. Note also that it is odd and has the high bit set, so it has that going for it.
The value -1 is often traditionally used as an "out of range" or "invalid" value to indicate failure or non-initialised data. Then again, that goes right down the pan if -1 is a semantically valid value for the variable...or you're using an unsigned type.
You seem to like null (and for a good reason), so why not just use it throughout?
In ActionScript you can only assign Number.NaN to variables that are typed Number, not int or uint.
That being said, because AS3 does not support named arguments you can always look at the arguments array (it's a built-in array that all functions have, unless you use the ...rest construct). If that array's length is less than the position of your numeric argument you know it wasn't passed in.
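Applied to the constructor from the question, a sketch using NaN as the "not set" marker might look like this (Number only; int and uint cannot hold NaN):

public function MyValueObject (
    _id:String = null,
    _amount:Number = NaN,
    _isPurchased:Boolean = false
)
{
    if ( _id != null ) this.id = _id;
    if ( !isNaN(_amount) ) this.amount = _amount;  // NaN != NaN, so test with isNaN()
    this.isPurchased = _isPurchased;
}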
I often use a maximum value for this. As you say, zero often is a valid value. Generally max-int, while theoretically valid, is safe to exclude. But not always; be careful.
I like 0xD15EA5ED; it's similar to 0xDEADBEEF, but is usually more accurate when debugging.