I have a simple enum with a doc comment, and I want to display [] inside the comment:
/// Define the brackets used when displaying a `List` in a cell.
///
/// Supported bracket types are:
/// * parentheses: ()
/// * curly: {}
/// * square: []
enum ListBrackets {
/// Use parentheses
parentheses,
/// Use curly brackets
curly,
/// Use square brackets
square;
}
However, all I get is:
Define the brackets used when displaying a List in a cell. Supported bracket types are: parentheses: () curly: {} square:
Any help is appreciated
Escape the square brackets with a \:
/// Define the brackets used when displaying a `List` in a cell.
///
/// Supported bracket types are:
/// * parentheses: ()
/// * curly: {}
/// * square: \[\]
enum ListBrackets {
/// Use parentheses
parentheses,
/// Use curly brackets
curly,
/// Use square brackets
square;
}
(You could also escape only the first one: \[], but I prefer to escape both!)
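Alternatively, if showing the brackets in code style is acceptable, wrapping them in backticks should also stop dartdoc from treating them as reference links (my understanding is that code spans are never resolved as links):
/// Supported bracket types are:
/// * parentheses: `()`
/// * curly: `{}`
/// * square: `[]`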
I want to use a typed list in Dart that can be one of the following types:
/// The array is a typed array; it can be:
/// - Float64List
/// - Float32List
/// - Int32List
/// - Uint32List
/// - Int16List
/// - Uint16List
/// - Uint8ClampedList
/// - Uint8List
/// - Int8List
///
/// and can hold the object's positions, colors, normals, uvs, or indices
dynamic array;
I would like to have something like this:
TypedArray array;
For now I use dynamic, but I want to use an abstract class that has all the properties of the typed lists.
How can I do that? Thank you all.
An example of what I am trying to do:
class BufferAttribute {
/// The array is a typed array; it can be:
/// - Float64List
/// - Float32List
/// - Int32List
/// - Uint32List
/// - Int16List
/// - Uint16List
/// - Uint8ClampedList
/// - Uint8List
/// - Int8List
///
/// and can hold the object's positions, colors, normals, uvs, or indices
TypedData array;
/// 1, 2, or 3 components per iteration
int itemSize;
bool normalized;
/// the number of elements in the array.
/// how it is computed: array.length / number of components.
int count = 0;
/// gl.STATIC_DRAW
int usage = 35044;
BufferAttribute(this.array, this.itemSize, [this.normalized = false]) {
count = array.length ~/ itemSize; // <---- I get an error
usage = 35044; // gl.STATIC_DRAW
}
}
The error says:
The getter 'length' isn't defined for the type 'TypedData'.
Try importing the library that defines 'length', correcting the name to the name of an existing getter, or defining a getter or field named 'length'. dart(undefined_getter)
That should be TypedData:
TypedData array = Uint8List(...);
Know, however, that the base type TypedData doesn't expose many members: only the properties buffer, elementSizeInBytes, lengthInBytes, and offsetInBytes. I don't know how useful that will be; depending on your purposes, it might be an excessive amount of abstraction.
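If you do keep the field typed as TypedData, the element count can still be recovered from those byte-level properties. A minimal sketch of the constructor from the question (omitting the usage/gl.STATIC_DRAW detail):
import 'dart:typed_data';

class BufferAttribute {
  TypedData array;
  int itemSize;
  int count = 0;

  BufferAttribute(this.array, this.itemSize) {
    // TypedData has no `length`, but lengthInBytes ~/ elementSizeInBytes
    // yields the element count for any of the concrete typed lists.
    count = (array.lengthInBytes ~/ array.elementSizeInBytes) ~/ itemSize;
  }
}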
I was doing some work that involved getting a string representation of a dynamic type, and it got me curious about how it works and why different values are printed depending on how they're called. Why does this happen, and where do the values come from?
class TempClass {}
print(TempClass()) // [Module].TempClass
print(type(of: TempClass())) // TempClass
print(TempClass.self) // TempClass
print(TempClass().self) // [Module].TempClass
There are pretty much zero auto-completes on either the class or the instance of the class (just self)...
I wanted the name as a String variable, and it seems weird that:
// works
let name: String = "\(type(of: TempClass())"
// error: initializer 'init(_:)' requires that 'TempClass.Type' conform to 'LosslessStringConvertible'
let name: String = String(type(of: TempClass()))
// error: type 'TempClass' has no member 'description'
let name: String = type(of: TempClass()).description
// error: type 'TempClass' has no member 'debugDescription'
let name: String = type(of: TempClass()).debugDescription
Anyone know what's going on here?
I think it's calling String(describing:), since that gives the same value. The documentation for that says:
/// Use this initializer to convert an instance of any type to its preferred
/// representation as a `String` instance. The initializer creates the
/// string representation of `instance` in one of the following ways,
/// depending on its protocol conformance:
///
/// - If `instance` conforms to the `TextOutputStreamable` protocol, the
/// result is obtained by calling `instance.write(to: s)` on an empty
/// string `s`.
/// - If `instance` conforms to the `CustomStringConvertible` protocol, the
/// result is `instance.description`.
/// - If `instance` conforms to the `CustomDebugStringConvertible` protocol,
/// the result is `instance.debugDescription`.
/// - An unspecified result is supplied automatically by the Swift standard
/// library.
I guess we're just hitting that "unspecified result" mentioned in the last line.
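For completeness: if the goal is just the type name as a String, you can call String(describing:) explicitly, and String(reflecting:) gives the module-qualified form:
class TempClass {}

let name = String(describing: type(of: TempClass()))      // "TempClass"
let qualified = String(reflecting: type(of: TempClass())) // "[Module].TempClass"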
In the swift programming language book, it states
You can use the startIndex and endIndex properties and the
index(before:), index(after:), and index(_:offsetBy:) methods on any
type that conforms to the Collection protocol. This includes String,
as shown here, as well as collection types such as Array, Dictionary,
and Set.
However, I have checked the Apple documentation on Swift's String API, which does not indicate that the String type conforms to the Collection protocol.
I must be missing something here, but can't seem to figure it out.
As of Swift 2, String does not conform to Collection; only its various "views" do, such as characters, utf8, utf16, or unicodeScalars.
(This might change again in the future; compare "String should be a Collection of Characters Again" in "String Processing For Swift 4".)
It has startIndex and endIndex properties and index methods, though; these are forwarded to the characters view, as can be seen in the source file StringRangeReplaceableCollection.swift.gyb:
extension String {
/// The index type for subscripting a string.
public typealias Index = CharacterView.Index
// ...
/// The position of the first character in a nonempty string.
///
/// In an empty string, `startIndex` is equal to `endIndex`.
public var startIndex: Index { return characters.startIndex }
/// A string's "past the end" position---that is, the position one greater
/// than the last valid subscript argument.
///
/// In an empty string, `endIndex` is equal to `startIndex`.
public var endIndex: Index { return characters.endIndex }
/// Returns the position immediately after the given index.
///
/// - Parameter i: A valid index of the collection. `i` must be less than
/// `endIndex`.
/// - Returns: The index value immediately after `i`.
public func index(after i: Index) -> Index {
return characters.index(after: i)
}
// ...
}
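So in Swift 2/3, anything beyond these forwarded members goes through one of the views explicitly, for example:
let s = "Hello"
for c in s.characters {   // iterate Characters via the view
    print(c)
}
let n = s.characters.count // character count, also via the view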
In Swift 4, strings are collections again. This means you can reverse them, loop over them character-by-character, map() and flatMap() them, and more. For example:
let quote = "It is a truth universally acknowledged that new Swift versions bring new features."
let reversed = quote.reversed()
for letter in quote {
print(letter)
}
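map() and filter() also work directly on the string now; a quick sketch (filter returns a String here because String is a RangeReplaceableCollection):
let word = "Swift"
let doubled = word.map { String(repeating: String($0), count: 2) }.joined() // "SSwwiifftt"
let consonants = word.filter { !"aeiou".contains($0) }                      // "Swft"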
This change was introduced as part of a broad set of amendments called the String Manifesto.
Why are implicitly unwrapped optionals not unwrapped when using string interpolation in Swift 3?
Example:
Running the following code in the playground
var str: String!
str = "Hello"
print("The following should not be printed as an optional: \(str)")
produces this output:
The following should not be printed as an optional: Optional("Hello")
Of course I can concatenate strings with the + operator, but I'm using string interpolation pretty much everywhere in my app, which no longer works due to this (bug?).
Is this even a bug or did they intentionally change this behaviour with Swift 3?
As per SE-0054, ImplicitlyUnwrappedOptional<T> is no longer a distinct type; there is only Optional<T> now.
Declarations are still allowed to be annotated as implicitly unwrapped optionals T!, but doing so just adds a hidden attribute to inform the compiler that their value may be force unwrapped in contexts that demand their unwrapped type T; their actual type is now T?.
So you can think of this declaration:
var str: String!
as actually looking like this:
#_implicitlyUnwrapped // this attribute name is fictitious
var str: String?
Only the compiler sees this #_implicitlyUnwrapped attribute, but what it allows for is the implicit unwrapping of str's value in contexts that demand a String (its unwrapped type):
// `str` cannot be type-checked as a strong optional, so the compiler will
// implicitly force unwrap it (causing a crash in this case)
let x: String = str
// We're accessing a member on the unwrapped type of `str`, so it'll also be
// implicitly force unwrapped here
print(str.count)
But in all other cases where str can be type-checked as a strong optional, it will be:
// `x` is inferred to be a `String?` (because we really are assigning a `String?`)
let x = str
let y: Any = str // `str` is implicitly coerced from `String?` to `Any`
print(str) // Same as the previous example, as `print` takes an `Any` parameter.
And the compiler will always prefer treating it as such over force unwrapping.
As the proposal says (emphasis mine):
If the expression can be explicitly type checked with a strong optional type, it will be. However, the type checker will fall back to forcing the optional if necessary. The effect of this behavior is that the result of any expression that refers to a value declared as T! will either have type T or type T?.
When it comes to string interpolation, under the hood the compiler uses this initialiser from the _ExpressibleByStringInterpolation protocol in order to evaluate a string interpolation segment:
/// Creates an instance containing the appropriate representation for the
/// given value.
///
/// Do not call this initializer directly. It is used by the compiler for
/// each string interpolation segment when you use string interpolation. For
/// example:
///
/// let s = "\(5) x \(2) = \(5 * 2)"
/// print(s)
/// // Prints "5 x 2 = 10"
///
/// This initializer is called five times when processing the string literal
/// in the example above; once each for the following: the integer `5`, the
/// string `" x "`, the integer `2`, the string `" = "`, and the result of
/// the expression `5 * 2`.
///
/// - Parameter expr: The expression to represent.
init<T>(stringInterpolationSegment expr: T)
Therefore when implicitly called by your code:
var str: String!
str = "Hello"
print("The following should not be printed as an optional: \(str)")
As str's actual type is String?, by default that's what the compiler will infer the generic placeholder T to be. Therefore the value of str won't be force unwrapped, and you'll end up seeing the description for an optional.
If you wish for an IUO to be force unwrapped when used in string interpolation, you can simply use the force unwrap operator !:
var str: String!
str = "Hello"
print("The following should not be printed as an optional: \(str!)")
or you can coerce to its non-optional type (in this case String) in order to force the compiler to implicitly force unwrap it for you:
print("The following should not be printed as an optional: \(str as String)")
both of which, of course, will crash if str is nil.
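If a crash on nil is unacceptable, a non-crashing alternative is to supply a fallback with the nil-coalescing operator, which yields a plain String and therefore interpolates without the Optional(...) wrapper:
print("The following should not be printed as an optional: \(str ?? "<no value>")")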
Can someone explain to me what is wrong with this statement?
var someString = "Welcome"
someString.append("!")
However, this works when I replace the code with:
var someString = "Welcome"
let exclamationMark : Character = "!"
someString.append(exclamationMark)
Thanks in advance
In Swift, there is no character literal (such as 'c' in C-derived languages); there are only String literals.
You then have two methods defined on String: append, to append a single Character, and extend, to append a whole String. So this works:
var someString = "Welcome"
someString.extend("!")
If you really want to use append, you can force a one-char String literal to be turned into a Character either by calling Character's constructor:
someString.append(Character("!"))
or by using a type conversion:
someString.append("!" as Character)
or by using a type annotation as you did with an extra variable:
let exclamationMark: Character = "!"
someString.append(exclamationMark)
String has two overloads of append(_:):
mutating func append(x: UnicodeScalar)
mutating func append(c: Character)
and both Character and UnicodeScalar conform to UnicodeScalarLiteralConvertible:
enum Character : ExtendedGraphemeClusterLiteralConvertible, Equatable, Hashable, Comparable {
/// Create an instance initialized to `value`.
init(unicodeScalarLiteral value: Character)
}
struct UnicodeScalar : UnicodeScalarLiteralConvertible {
/// Create an instance initialized to `value`.
init(unicodeScalarLiteral value: UnicodeScalar)
}
"!" in this case is a UnicodeScalarLiteral. So, the compiler can not determine "!" is Character or UnicodeScalar, and which append(_:) method should be invoked. That's why you must specify it explicitly.
You can see: "!" can be a UnicodeScalar literal by this code:
struct MyScalar: UnicodeScalarLiteralConvertible {
init(unicodeScalarLiteral value: UnicodeScalar) {
println("unicode \(value)")
}
}
"!" as MyScalar // -> prints "unicode !"