I'm trying to create a variable format specifier for use in $display/$write. I've tried a large number of things, but here is what I have at the moment.
What I want to end up with is: $display(format_var,data_1,data_2), where the format string is pre-computed using $sformatf or similar.
Code:
module test;
  function void pprint(input int data_1, input int field_1,
                       input int data_2, input int field_2);
    string format;
    begin
      format = $sformatf("%0d'h%%%0dx,%0d'h%%%0dx",
                         field_1, field_1/4, field_2, field_2/4);
      $display("format = %s", format);
      $display(format, data_1, data_2);
    end
  endfunction

  initial
    begin
      pprint(5, 8, 73737229, 128);
      $stop;
    end
endmodule
The output I expect is:
format = 8'h%2x,128'h%32x
8'h05,128'h0000000000000000000000000465240D
The output I get is:
format = 8'h%2x,128'h%32x
8'h%2x,128'h%32x 5 73737229
What do I need to do? The simulator is Vivado 2020.3
Later:
After trying more things, the following function does what I want. My conclusion is that $display/$write can't take a variable as the format string, but $sformatf can.
function void pprint(input int data_1, input int field_1,
                     input int data_2, input int field_2);
  string format;
  begin
    format = $sformatf("%0d'h%%%0dx,%0d'h%%%0dx",
                       field_1, field_1/4, field_2, field_2/4);
    $display("format = %s", format);
    $display("%s", $sformatf(format, data_1, data_2));
  end
endfunction
Try:
function void pprint(
    input logic [4095:0] data_1,
    input int field_1,
    input logic [4095:0] data_2,
    input int field_2 );
  string format;
  format = $sformatf("%0d'h%%%0dh,%0d'h%%%0dh",
                     field_1, (field_1+3)/4,
                     field_2, (field_2+3)/4);
  $display("format = %s", format);
  $display($sformatf(format, data_1, data_2));
endfunction
This should give you the output:
format = 8'h%02h,128'h%032h
8'h05,128'h0000000000000000000000000465240D
Adding a zero between the % and the width tells the simulator to pad the upper digits with zeros.
For some reason $display(format,data_1,data_2) did not use the format on the simulators on EDA Playground, but it did work with $sformatf, so I simply nested it.
I needed to increase the bit width of the input data; with int inputs, anything above 8 hex digits would only ever show as leading zeros, since wider values get truncated to 32 bits. Adjust as necessary.
Adding 3 to the field width handles widths that are not multiples of 4: integer division always rounds down, so (field+3)/4 gives the ceiling of field/4. For example, a 10-bit field needs (10+3)/4 = 3 hex digits.
According to section 21.3.3 "Formatting data to a string" of the SystemVerilog LRM (IEEE 1800), only $sformat and $sformatf have a dedicated formatting argument that can be either a string literal or a string variable. All other output tasks like $display treat string literal arguments as format strings but do not interpret the contents of string variables for formatting.
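This is easy to demonstrate. Here is a minimal sketch (the module and variable names are made up for illustration) contrasting the two behaviors:
module fmt_demo;
  string fmt = "value = %0d";
  initial begin
    // $display treats the string variable as a plain value to print,
    // so the raw format string appears in the output, followed by 42
    $display(fmt, 42);
    // $sformatf interprets the variable's contents as a format string
    $display("%s", $sformatf(fmt, 42)); // prints: value = 42
  end
endmodule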
How do you write a Binary Literal in Dart?
I can write a Hex Literal like so:
Int Number = 0xc
If I try the conventional way to write a Binary Literal:
Int Number = 0b1100
I get an error. I've tried to look it up, but I've not been able to find any information other than for hex.
There are currently no built-in binary number literals in Dart (or any base other than 10 and 16).
The closest you can get is: var number = int.parse("1100", radix: 2);.
Maybe you can use this:
// 0b1100 -> 1 at the 3rd bit and 1 at the 2nd bit
final number = 1 << 3 | 1 << 2;
// Print as a binary string
print(number.toRadixString(2)); // 1100
Try binary package:
import 'package:binary/binary.dart';
void main() {
// New API.
print(0x0C.toBinaryPadded(8)); // 00001100
}
see: https://pub.dev/documentation/binary/latest/
I've been searching for a way to convert decimal numbers to hexadecimal format in the Dart programming language.
The hex.encode method in the HexCodec class, for example, cannot convert the decimal 1111 (which has a hex value of 457) and instead gives an exception:
FormatException: Invalid byte 0x457. (at offset 0)
How do I convert a decimal number to hex?
int.toRadixString(16)
does that.
See also https://groups.google.com/a/dartlang.org/forum/m/#!topic/misc/ljkYEzveYWk
Here is a little fuller example:
final myInteger = 2020;
final hexString = myInteger.toRadixString(16); // 7e4
The radix just means the base, so 16 means base-16. You can use the same method to make a binary string:
final binaryString = myInteger.toRadixString(2); // 11111100100
If you want the hex string to always be four characters long then you can pad the left side with zeros:
final paddedString = hexString.padLeft(4, '0'); // 07e4
And if you prefer it in uppercase hex:
final uppercaseString = paddedString.toUpperCase(); // 07E4
Here are a couple other interesting things:
print(0x7e4); // 2020
int myInt = int.parse('07e4', radix: 16);
print(myInt); // 2020
weight is a field (a Number in Firestore), set to 100.
int weight = json['weight'];
double weight = json['weight'];
int weight works fine, returns 100 as expected, but double weight crashes (Object.noSuchMethod exception) rather than returning 100.0, which is what I expected.
However, the following works:
num weight = json['weight'];
weight.toDouble();
When you read 100 back from Firestore (which stores all numeric values in a single Number type), it is parsed to a Dart int by default.
Dart does not automatically "smartly" cast those types. In fact, you cannot cast an int to a double, which is the problem you are facing. If that were possible, your code would just work fine.
Parsing
Instead, you can parse it yourself:
double weight = json['weight'].toDouble();
Casting
What also works is reading the JSON value as a num and then assigning it to a double, which casts the num to double.
double weight = json['weight'] as num;
This seems a bit odd at first, and in fact the Dart analyzer (built into the Dart plugins for VS Code and IntelliJ, for example) will mark it as an "unnecessary cast", which it is not.
double a = 100; // this will not compile
double b = 100 as num; // this will compile, but is still marked as an "unnecessary cast"
double b = 100 as num compiles because num is the super class of double and Dart casts super to sub types even without explicit casts.
An explicit cast would be the following:
double a = 100 as double; // does not compile because int is not the super class of double
double b = (100 as num) as double; // compiles, you can also omit the double cast
Here is a nice read about "Types and casting in Dart".
Explanation
What happened to you is the following:
double weight;
weight = 100; // cannot compile because 100 is considered an int
// is the same as
weight = 100 as double; // which cannot work as I explained above
// Dart adds those casts automatically
You can do it in one line:
double weight = (json['weight'] as num).toDouble();
You can also parse the data as shown below, where document is a Map<String, dynamic>:
double opening = double.tryParse(document['opening'].toString());
In Dart, int and double are separate types, both subtypes of num.
There is no automatic conversion between number types. If you write:
num n = 100;
double d = n;
you will get a run-time error. Dart's static type system allows unsafe down-casts, so the unsafe assignment of n to d (unsafe because not all num values are double values) is treated implicitly as:
num n = 100;
double d = n as double;
The as double checks that the value is actually a double (or null), and throws if it isn't. If that check succeeds, then it can safely assign the value to d since it is known to match the variable's type.
That's what's happening here. The actual value of json['weight'] (likely with static type Object or dynamic) is the int object with value 100. Assigning that to int works. Assigning it to num works. Assigning it to double throws.
The Dart JSON parser parses numbers as integers if they have no decimal or exponent parts (0.0 is a double, 0e0 is a double, 0 is an integer). That's very convenient in most cases, but occasionally annoying in cases like yours where you want a double, but the code creating the JSON didn't write it as a double.
In cases like that, you just have to write .toDouble() on the values when you extract them. That's a no-op on actual doubles.
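A minimal sketch of that behavior, using jsonDecode from dart:convert (the key names are made up for illustration):
import 'dart:convert';

void main() {
  final json = jsonDecode('{"weight": 100, "height": 1.8}');
  print(json['weight'] is int);    // true: no decimal part, parsed as int
  print(json['height'] is double); // true: decimal part, parsed as double

  // .toDouble() works either way and is a no-op on an actual double.
  double weight = json['weight'].toDouble();
  print(weight); // 100.0
}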
As a side note, Dart compiled to JavaScript represents all numbers as the JavaScript Number type, which means that all numbers are doubles. In JS compiled code, all integers can be assigned to double without conversion. That will not work when the code is run on a non-JS implementation, like Flutter, Dart VM/server or ahead-of-time compilation for iOS, so don't depend on it, or your code will not be portable.
Simply convert an int to a double like this:
int a = 10;
double b = a + 0.0;
I have an integer which I want to convert to a string with leading zeros.
So I have 1 and want to turn it into 01. 14 should turn into 14 not 014.
I tried:
let str = (string 1).PadLeft(2, '0') // visual studio suggested this one
let str = (string 1).PadLeft 2 '0'
let str = String.PadLeft 2 '0' (string 1)
But none of them work :(
When I search for something like this with F# I get stuff with printfn but I don't want to print to stdout :/
Disclaimer: This is my first F#
You can use sprintf, which returns a string rather than printing to stdout. All of the print functions that start with an s return a string.
Use the 0 flag to pad with zeros, with the intended width between the 0 and the i. For example, to pad to a width of four:
sprintf "%04i" 42
// returns "0042"
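Applied to the numbers from the question, a width of two gives exactly the desired behavior:
let str1 = sprintf "%02i" 1   // "01"
let str14 = sprintf "%02i" 14 // "14" (no extra padding)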
I wanted to know how writeInt treats a 32-bit unsigned or signed integer passed to it.
It is easy to understand how it works with a hexadecimal number. Util.Print will print the corresponding ASCII characters.
0x41424344 will be broken down into 4 1 byte characters, A, B, C and D.
It seems like it's different when an integer is passed to writeInt.
for instance,
var test: ByteArray = new ByteArray();
test.writeInt(0x41424344); // prints ABCD
test.writeInt(2590463591); // prints gVg
test.writeInt(1119885898); // prints BÀJ
I am unclear how the Util.Print function treats the integers written into the ByteArray by writeInt.
The characters gVg do not correspond to the integer 2590463591.
According to the definition of writeInt here:
http://livedocs.adobe.com/livecycle/es/sdkHelp/common/langref/flash/utils/ByteArray.html#writeInt%28%29
It states that it works with a 32 Bit Signed Integer.
If someone can elaborate on how it translates the integers to characters, it would be helpful.
EDIT: And how does it handle negative integers?
For instance,
test.writeInt(-11338743); // prints ÿRü
So,
-11338743 = 0xFF52FC09
is that correct?
Thanks.
If you interpret the encoded bytes as ASCII:
dec          hex          ascii
1094861636 = 0x41424344 = ABCD
2590463591 = 0x9A675667 = gVg
1119885898 = 0x42C01A4A = BÀJ
Note that 0x9A and 0x1A are not printable ASCII characters, which is why 0x9A675667 shows up as just gVg and 0x42C01A4A as BÀJ.
Also note that signed and unsigned integers are written with different methods:
var test:ByteArray = new ByteArray();
test.writeInt(0x41424344);         // signed 32-bit
test.writeUnsignedInt(0x41424344); // unsigned 32-bit
And yes, negative integers are written as 32-bit two's complement, so -11338743 is indeed stored as 0xFF52FC09.
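Util.Print is not a built-in, so as a sketch of what it is likely doing, you can reproduce the effect with the standard ByteArray API (readUTFBytes decodes the raw bytes as UTF-8, which coincides with ASCII for these values):
var bytes:ByteArray = new ByteArray();
bytes.writeInt(0x41424344);
bytes.position = 0;                      // rewind before reading back
trace(bytes.readUTFBytes(bytes.length)); // ABCD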