I'm programming in the Vala language and I have a problem: I don't know how to read a char (although I know how to read a string and an integer).
string name = stdin.read_line();
int maximum = int.parse(stdin.read_line());
char option = ????;
One option would be scanf(), but when I use it I run into problems at run time.
If you just want to read a single character from a GLib.FileStream, you can use GLib.FileStream.getc, which "reads the next character from stream and returns it as an unsigned char cast to an int, or EOF on end of file or error." In fact, GLib.FileStream.read_line is implemented using getc. So the "????" in your question would be something like (char) stdin.getc().
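Putting that together with the lines from your question, a minimal sketch:

// Read a string, an int, and then a single char from stdin.
void main () {
    string name = stdin.read_line ();
    int maximum = int.parse (stdin.read_line ());
    char option = (char) stdin.getc ();
    stdout.printf ("%s %d %c\n", name, maximum, option);
}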
If you are trying to interact with a user, note that input is typically buffered by the terminal until the end of a line. If you want to get a character immediately after the user presses a key, you'll need to use something like ncurses (for which Vala bindings are distributed with valac).
While experimenting with Zig syntax, I noticed that the type expression of string literals is omitted in all examples. Which is totally fine; I'm not saying it shouldn't be.
const zig_string = "I am a string"; // it looks nice enough for sure and compiles fine, of course
However, because this type omission is a bit inconsistent* with other type declarations in Zig, it can lead beginners (like me) to misinterpret the actual type of string literals (which is in fact quite rightfully complicated and 'different'). Anyway, I read that the type of a string literal is a 'pointer to a (UTF-8 encoded) immutable (const), sentinel-terminated array of u8 bytes' (yes?), which carries, next to the hard-coded length, a terminator field, like so: [<length>:0]. To check my own understanding, I thought it reasonable to try adding this type expression to the declaration, similar to how other arrays are conveniently declared, with an underscore to infer the length, because who likes counting characters?
const string: *const [_:0]u8 = "jolly good"; //doesn't compile: unable to infer array size
But it didn't compile :(.
After dutifully counting characters and specifying the length of my string, however, it proudly compiled :)!
const string: *const [10:0]u8 = "jolly good"; //happily compiles
Which led me to my question:
Why is this length specification needed for string literals and not for other literals/arrays? (And should this be so?)
Please correct my type description of string literals if I missed an important nuance.
I'd like to know to further deepen my understanding of the way strings are handled in zig.
*although there are more cases where the Zig compiler can infer the type without it
Types never have _ in them.
"jolly good" is a string literal. *const [10:0]u8 is the type.
For "other literals/arrays":
const a = [_]u8{ 1, 2, 3 };
[_]u8{ 1, 2, 3 } is an array literal. The type is [3]u8 and it cannot be specified as [_]u8.
Look into slices. They offer a very convenient way to use strings and arrays.
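For instance, here's a minimal sketch: the literal's pointer type coerces to a slice, which carries its own length, so no one has to count characters:

const std = @import("std");

pub fn main() void {
    // *const [10:0]u8 coerces to []const u8; the length rides along.
    const s: []const u8 = "jolly good";
    std.debug.print("{s} has length {d}\n", .{ s, s.len });
}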
I have a table with TEXT, INTEGER, REAL and other data types.
I wrote a generic SQL function to read the results of all queries using sqlite3_column_text(), like so:
char *dataAsChar = (char *) sqlite3_column_text(compiledStatement, ii);
Although I should be using sqlite3_column_int etc. to read the numeric values, the above code seems to work for me. I get the number values as strings, which I later convert to int using [numberAsString intValue].
Since I am using a generic function to read all my db values, this is very convenient for me. But is there something that can go wrong with my code?
I could use sqlite3_column_type for each column to determine the type and use the appropriate function. Am I correct in assuming that sqlite3_column_text basically returns the column value in TEXT format and does not necessarily need the value itself to be TEXT?
The only situation where I can see this implementation failing is with BLOB data type.
The documentation says:
These routines attempt to convert the value where appropriate. […]
The following table details the conversions that are applied:
Internal Type   Requested Type   Conversion
NULL            TEXT             Result is a NULL pointer
INTEGER         TEXT             ASCII rendering of the integer
FLOAT           TEXT             ASCII rendering of the float
BLOB            TEXT             Add a zero terminator if needed
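If you do want the type-aware version you mention (via sqlite3_column_type), here is a minimal sketch; print_column is a hypothetical helper name:

#include <sqlite3.h>
#include <stdio.h>

/* Hypothetical helper: print column ii of the current result row,
   branching on the storage class instead of forcing everything to TEXT. */
static void print_column(sqlite3_stmt *stmt, int ii) {
    switch (sqlite3_column_type(stmt, ii)) {
    case SQLITE_INTEGER:
        printf("%d", sqlite3_column_int(stmt, ii));
        break;
    case SQLITE_FLOAT:
        printf("%f", sqlite3_column_double(stmt, ii));
        break;
    case SQLITE_NULL:
        printf("NULL");  /* column_text would hand you a NULL pointer here */
        break;
    case SQLITE_BLOB:
        printf("<blob of %d bytes>", sqlite3_column_bytes(stmt, ii));
        break;
    default:  /* SQLITE_TEXT */
        printf("%s", (const char *) sqlite3_column_text(stmt, ii));
        break;
    }
}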
I have been pondering a multiple choice question on coercion. One of the four examples (a, b, c, or d) is an example of coercion. I narrowed it down to A or B, but I am having a problem choosing between the two. Can someone please explain why one is coercion and one isn't?
A)
string s="tomat";
char c='o';
s=s+c;
I thought A could be correct because we have two different types, character and string, being added, meaning that c is promoted to string; hence coercion.
B)
double x=1.0;
double y=2.0;
int i=(int)(x+y);
I also thought B was the correct answer because the double (x+y) is being turned into an int to be placed in i. But I thought this could be wrong because it's being done actively through the use of (int) rather than passively, as in "int i = x + y".
I'll list the other two options, even though I believe that neither one is the correct answer:
C)
char A=0x20;
A = A << 1 | 0x01;
cout << A << endl;
D)
double x=1.0;
double y=x+1;
return 0;
I'm not just looking for an answer, but an explanation. I have read tons of things on coercion, and A and B both look like the right answer. So why is one correct and the other not?
I actually think it's B. Even though there's the explicit (int), it's still type coercion (just not automatic type coercion). You're converting a floating point value (probably stored as an IEEE floating point value) to an integer value (probably stored in two's complement).
Whereas A is simply concatenating a character to a string, where a string is just a null terminated array of characters. There's no data type conversion going on there, just a bit of memory manipulation.
I could be wrong though.
EDIT: I would have to agree with Parris. Given that this is a C++ string and not a C array of characters (my mistake), the character in A is probably being coerced to a string.
I don't think type casting is equivalent to type coercion, which is why A would probably be the right answer.
B takes a double and casts it to an int, which is more like a conversion than a coercion. In A you aren't converting anything yourself; the conversion is implicit. You are telling the runtime/compiler/whatever, "These two things are similar; can you figure out how to concatenate them?"
C isn't a conversion or coercion, it's just bit shifting. Although the cout might be coercion... I am not sure if there is coercion to a string there to write to the console.
D might contain a coercion, since 1 is an int and you are adding it to a double. However, you can do floating-point math with integers; having a decimal point is just more explicit.
I think A is the most straightforward example of coercion, although C's cout statement seems suspicious as well.
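To see the two candidates side by side, a minimal sketch; the comments restate the reasoning above:

#include <iostream>
#include <string>
using namespace std;

int main() {
    // A: the char becomes part of a new string with no cast anywhere
    //    in the source; the promotion happens for you (coercion).
    string s = "tomat";
    char c = 'o';
    s = s + c;

    // B: the double is converted too, but only because the code
    //    explicitly requests it with (int); a cast, not a coercion.
    double x = 1.0, y = 2.0;
    int i = (int)(x + y);

    cout << s << " " << i << endl;  // prints: tomato 3
}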
I have a Lua script in which I'm trying to use hex numbers (0x..). If I run this script in the console with the official Windows binaries, it works fine. But if I run it in my application (a simple dofile), I get
malformed number near '0x1F'
It doesn't matter what the hex number is; I always get that error, as if hex weren't supported at all. The library I'm using is Lua 5.1.4, and I've tried two different builds (the first being one I compiled myself), so that shouldn't be the problem.
Does anyone have a clue what might be wrong here?
Edit:
It's not the script. A simple "foo = 0xf" already triggers the error, even if there's nothing else in the file.
Update:
tonumber("0xf")
This returns nil, while
tonumber("15")
works fine. There's definitely something wrong with hex in my libs...
If hex literals aren't working for you (though they should), you can always use hex from Lua by doing tonumber("fe", 16).
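For example, this parses hex at run time regardless of whether the lexer accepts hex literals:

-- the two-argument form of tonumber takes an explicit base
local n = tonumber("1F", 16)
print(n)  --> 31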
Why do functions have to be different in different compilers, ...why?
Alright, the problem was that Lua tries to convert numbers into doubles by default. For this it uses the function strtod, which takes two arguments: the string, and a char pointer that gets set to the first position after the parsed number, which for a hex number would mean the 'x' after the '0'. If this isn't the case, Lua assumes an error and gives us this nice little error message.
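To see what's at stake, here's a small standalone check; the behavior genuinely varies by C library, which is the whole bug:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const char *s = "0x1F";
    char *end;
    double d = strtod(s, &end);
    /* A pre-C99 strtod parses only the "0" and leaves end at the 'x',
       which is what Lua 5.1's hex detection expects.  A C99 strtod
       parses the whole hex constant (d == 31).  DMC's apparently
       leaves end at the start of the string instead (see below). */
    printf("d = %g, consumed = %ld chars\n", d, (long)(end - s));
    return 0;
}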
I've compiled Lua using DMC because I need the lib to be in OMF, and I assume others used DMC as well. But apparently DMC's strtod works differently, since the pointer always points to the start of the string if it's a hex... or rather, any invalid number.
I've now added a little hack that checks for the 'x' if conversion to double failed. Not pretty, but it works fine for now.
int luaO_str2d (const char *s, lua_Number *result) {
  char *endptr;
  *result = lua_str2number(s, &endptr);
  /* Hack for DMC: if strtod consumed nothing but the string looks like
     hex, step past the '0' so the stock hex check that follows fires. */
  if (endptr == s) {
    if (*(s+1) == 'x' || *(s+1) == 'X')
      endptr++;
    else
      return 0;  /* conversion failed */
  }
  /* ... rest of the stock luaO_str2d unchanged ... */
}
I faced this bug with Lua 5.2; Lua 5.1 works fine.
I'm using CFStringTokenizer to break a load of text into words, but I'm having difficulty bridging whatever encoding CFString uses and UTF-8. Consider this:
NSString *theString = @"Lorem ipsum dolor sit amet!";
const char *theCString = [theString cStringUsingEncoding:NSUTF8StringEncoding];
tokenizer = CFStringTokenizerCreate(kCFAllocatorDefault,
(__bridge CFStringRef)theString,
CFRangeMake(0, [theString length]),
kCFStringTokenizerUnitWordBoundary,
locale);
while ((tokenType = CFStringTokenizerAdvanceToNextToken(tokenizer)) != kCFStringTokenizerTokenNone) {
tokenRange = CFStringTokenizerGetCurrentTokenRange(tokenizer);
memcpy(resultPtr, theCString+tokenRange.location, tokenRange.length);
}
Unfortunately the range reported by the tokenizer is incorrect when I try to read from the C string if any non-ASCII characters have been encountered. How can I get the correct range from the tokenizer so that I can pull the correct chars from my C string?
To clarify, the memcpy stuff is a tad more complex than shown above, and is necessary for performance on my target device, the iPhone. So I can't even do something like create a CFString substring and convert that; I need the range in the C string. Is there any way to get it without reimplementing various word boundary libraries for all the different locales I need to support? (Which is as many as possible, so I can't just iterate through looking for ' ', unfortunately.)
Alec
NSStrings and CFStrings deal in UTF-16, not UTF-8, but that isn't the real problem.
Your code has two problems:
You're assuming that the C string's indexes correspond to the source string's indexes.
You're copying and converting the entire string to a UTF-8 C string at once.
#1 is the cause of the range mismatches, and #2 causes potentially high memory usage, depending on the length and content of the string. (UTF-8 can take as many as four bytes per character in some alphabets—and then add one for the C string terminator.)
You can solve both of these problems in a single change.
Create an NSMutableData to hold the output. For each token, set the data's length to the range's length; then, tell the string to get bytes within the desired range in the desired encoding and store them in the data's mutableBytes buffer. NSString has a method with a very long selector (briefly, getBytes:::::::) that you will want to use for this.
Since you use the range that is relative to the string exclusively with the string, there is no index/range mismatch, and each token will be output correctly.
If you really need a C string, you can set the data's length to the range's length + 1, then set the last byte to '\0' with a separate assignment after getting the token bytes. (Without the separate assignment, the byte may hold a previous value.)
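A minimal sketch of that loop, reusing theString, tokenizer, tokenType, and tokenRange from the question; the buffer sizing is an assumption (3 UTF-8 bytes per UTF-16 unit is a safe upper bound):

NSMutableData *tokenData = [NSMutableData data];
while ((tokenType = CFStringTokenizerAdvanceToNextToken(tokenizer)) != kCFStringTokenizerTokenNone) {
    tokenRange = CFStringTokenizerGetCurrentTokenRange(tokenizer);
    NSRange range = NSMakeRange(tokenRange.location, tokenRange.length);

    // Room for the worst case plus a C string terminator.
    [tokenData setLength:range.length * 3 + 1];
    NSUInteger usedLength = 0;
    [theString getBytes:tokenData.mutableBytes
              maxLength:tokenData.length - 1
             usedLength:&usedLength
               encoding:NSUTF8StringEncoding
                options:0
                  range:range
         remainingRange:NULL];
    ((char *)tokenData.mutableBytes)[usedLength] = '\0';
    // tokenData.mutableBytes now holds this token as a UTF-8 C string.
}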