Can anyone show me how to secure the value of a variable in an ImageJ macro script?
In most programming languages there is a keyword, such as const, to make a variable unchangeable, but the ImageJ macro language does not recognize such a keyword.
The ImageJ macro language does not support constants. Use one of the other supported scripting languages instead.
Normally, when you want to reuse a regular expression in flex, you can name it in the definitions section. When you use such a definition, flex encloses the substitution in parentheses by default. E.g.:
num_seq [0-9]+
%%
{num_seq} return INT; // will become ([0-9]+)
{num_seq}\.{num_seq} return FLOAT; // will become ([0-9]+)\.([0-9]+)
But I want to reuse some character classes. Can I define custom classes like [:alpha:], [:alnum:], etc.? A toy example:
chars [a-zA-Z]
%%
// will become (([a-zA-Z]){-}[aeiouAEIOU])+ // ill-formed
// desired ([a-zA-Z]{-}[aeiouAEIOU])+ // correct
({chars}{-}[aeiouAEIOU])+ return ONLY_CONS;
({chars}{-}[a-z])+ return ONLY_UPPER;
({chars}{-}[A-Z])+ return ONLY_LOWER;
But currently this fails to compile because of the parentheses added around them. Is there a proper way, or at least a workaround, to achieve this?
This might be useful from time to time, but unfortunately it has never been implemented in flex. You could suppress the automatic parentheses around macro substitution by running flex in lex compatibility mode, but that has other probably undesirable effects.
POSIX requires that regular expression bracket syntax include, in addition to the predefined character classes,
…character class expressions of the form: [:name:] … in those locales where the name keyword has been given a charclass definition in the LC_CTYPE category.
Unfortunately, flex does not implement this requirement. It is not too difficult to patch flex to do this, but since there is no portable mechanism to allow the user to add charclasses to their locale --and, indeed, many standard C library implementations lack proper locale support-- there is little incentive to make this change.
Having looked at all these options, I eventually convinced myself that the simplest portable solution is to preprocess the flex input file to replace [:name:] with a set of characters based on name. Since that sequence of characters is unlikely to be present in a flex input file, a simple-minded search and replace using sed or python is adequate; correctly parsing the flex input file seems to me to be more trouble than it was worth.
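As a concrete illustration of that search-and-replace idea (sketched here in Rust; a sed one-liner or a few lines of Python would do the same job, and the [:consonant:] class name, its expansion, and the file names are all made up for this example):

use std::fs;

// Expand a made-up [:consonant:] class in a flex input file before
// handing the result to flex. Because the replacement is the bare
// character list, [[:consonant:]] in the input becomes a normal
// bracket expression like [bcdfghjklmnpqrstvwxyzBCDF...].
fn main() -> std::io::Result<()> {
    let src = fs::read_to_string("scanner.l.in")?;
    let expanded = src.replace(
        "[:consonant:]",
        "bcdfghjklmnpqrstvwxyzBCDFGHJKLMNPQRSTVWXYZ",
    );
    fs::write("scanner.l", expanded)?;
    Ok(())
}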
I've seen some Rust codebases use the #[repr(C)] macro (is that what it's called?); however, I couldn't find much information about it other than that it sets the type's layout in memory to the same layout as C's.
Here's what I would like to know: is this a preprocessor directive restricted to the compiler rather than part of the language itself (even though there aren't any other compiler front-ends for Rust), and why does Rust even have a memory layout different from that of C? (It's just that I've never had to do this in another language.)
Here's a situation that demonstrates what I mean: if someone creates another compiler for Rust, are they required to implement this macro, or is it a compiler-specific thing?
#[repr(C)] is not a preprocessor directive, since Rust doesn't use a preprocessor¹. It is an attribute. Rust doesn't have a complete specification, but the repr attribute is mentioned in the Rust reference, so it is absolutely a part of the language. Implementation-wise, attributes are parsed the same way all other Rust code is, and are stored in the same AST. Rust has no "attribute pass": attributes are an actual part of the language. If someone else were to implement a Rust compiler, they would need to implement #[repr(C)].
Furthermore, #[repr(C)] can't be implemented without some compiler magic. In the absence of a #[repr(...)], Rust compilers are free to arrange the fields of a struct/enum however they want to (and they do take advantage of this for optimization purposes!).
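A quick way to see that freedom in action (a minimal sketch; the sizes in the comments are what current rustc produces on a typical 64-bit target, since the default layout is deliberately unspecified):

use std::mem::size_of;

// Same field list, two layouts. With the default repr, rustc may
// reorder fields to avoid padding; with #[repr(C)] it must keep
// declaration order and pad the way a C compiler would.
#[allow(dead_code)]
struct Reordered { a: u8, b: u32, c: u16 }

#[allow(dead_code)]
#[repr(C)]
struct COrder { a: u8, b: u32, c: u16 }

fn main() {
    println!("default: {}", size_of::<Reordered>()); // 8: fields packed as b, c, a
    println!("repr(C): {}", size_of::<COrder>());    // 12: a + 3 pad, b, c + 2 pad
}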
Rust does have a good reason for using its own memory layout. If compilers aren't tied to how a struct is written in the source code, they can do optimisations like not storing struct fields that are never read from, reordering fields for better performance, enum tag pooling², and using the spare bits of NonZero* fields in a struct to store data (the last one isn't happening yet, but might in the future). But the main reason is that Rust has things that just don't make sense in C. For instance, Rust has zero-sized types (like () and [i8; 0]) which can't exist in C, as well as trait vtables, enums with fields, and generic types, all of which cause problems when trying to translate them to C.
¹ Okay, you could use the C preprocessor with Rust if you really wanted to. Please don't.
² For example, enum Food { Apple, Pizza(Topping) } with enum Topping { Pineapple, Mushroom, Garlic } can be stored in just 1 byte, since there are only 4 possible Food values that can be created.
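Footnote 2 and the zero-sized types mentioned above can be checked directly (the sizes shown hold with current rustc; none of them are guaranteed by a specification):

use std::mem::size_of;

#[allow(dead_code)]
enum Topping { Pineapple, Mushroom, Garlic }

#[allow(dead_code)]
enum Food { Apple, Pizza(Topping) }

fn main() {
    println!("{}", size_of::<Food>());    // 1: all 4 possible values fit in one byte
    println!("{}", size_of::<()>());      // 0: zero-sized type
    println!("{}", size_of::<[i8; 0]>()); // 0: also zero-sized
}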
What is this?
It is not a macro; it is an attribute.
The book has a good chapter on what macros are, and it mentions that there are "attribute-like macros":
The term macro refers to a family of features in Rust: declarative macros with macro_rules! and three kinds of procedural macros:
Custom #[derive] macros that specify code added with the derive attribute used on structs and enums
Attribute-like macros that define custom attributes usable on any item
Function-like macros that look like function calls but operate on the tokens specified as their argument
Attribute-like macros are the ones you can use like attributes. For example:
#[route(GET, "/")]
fn index() {}
It does look like the repr attribute, doesn't it? 😃
So what is an attribute then?
Luckily, Rust has great resources like Rust by Example, which includes:
An attribute is metadata applied to some module, crate or item. This metadata can be used to/for:
conditional compilation of code
set crate name, version and type (binary or library)
disable lints (warnings)
enable compiler features (macros, glob imports, etc.)
link to a foreign library
mark functions as unit tests
mark functions that will be part of a benchmark
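A few of the uses from that list, as they commonly appear in code (a minimal sketch, compiled as a library with the built-in test harness):

// conditional compilation: this item is only built on Linux
#[cfg(target_os = "linux")]
#[allow(dead_code)]
fn linux_only() {}

#[allow(dead_code)] // disable a lint (warning) for this item
fn unused_helper() {}

#[test] // mark a function as a unit test
fn it_works() {
    assert_eq!(2 + 2, 4);
}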
The Rust reference is also something you usually look at when you need to know something in more depth (see its chapter on attributes).
To the compiler authors out there:
If you were to write a Rust compiler and wanted to support things like the standard library or other crates, then you would absolutely need to implement these attributes, because the libraries use them and depend on them.
Otherwise, I guess you could come up with a subset of Rust that your compiler supports, but then most people wouldn't use it.
Why does Rust not just use the C layout?
The Nomicon explains why Rust needs to be able to reorder the fields of structs, for example: to save space and be more efficient. This is related to, among other things, generics and monomorphization. With repr(C), the fields of a struct must be in the same order as in the definition.
The C representation is designed for dual purposes. One purpose is for creating types that are interoperable with the C Language. The second purpose is to create types that you can soundly perform operations on that rely on data layout such as reinterpreting values as a different type.
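As a hedged illustration of the first purpose, interoperability: the C function below is hypothetical, and linking would require compiling that C side into the build; the point is that #[repr(C)] is what makes the two struct layouts agree.

// Hypothetical C side:
//   struct Point { int32_t x; int32_t y; };
//   int32_t point_sum(struct Point p) { return p.x + p.y; }

#[repr(C)]
struct Point {
    x: i32,
    y: i32,
}

extern "C" {
    // With #[repr(C)] on Point, both sides agree on field order,
    // padding, and overall size, so passing by value is sound.
    fn point_sum(p: Point) -> i32;
}

fn main() {
    let total = unsafe { point_sum(Point { x: 2, y: 3 }) };
    println!("{total}"); // 5, assuming the hypothetical C definition above
}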
I'm writing a set of DLLs that allows other developers to write their own DLL as an extension. In the Delphi code I make wide use of enums and enum sets, and I pass enums through DLLs. I know I can safely use an enum through a DLL across different projects compiled with Delphi; however, I'm not sure how portable it is across various languages.
Is it safe to use enums through a DLL while supporting various other languages, or should I cast them to integers instead?
Enums need to be passed as integers (Word or DWORD), and you should use the compiler directive {$MINENUMSIZE} (AKA {$Z}) to ensure that they're the proper size. The Delphi compiler will use different sizes based on the number of enum values unless you do so.
If you're planning to interop with C/C++ code on the Windows OS, use {$MINENUMSIZE 4}.
The documentation I linked above addresses interop with C/C++ - see the third paragraph:
The $Z directive controls the minimum storage size of Delphi enumerated types.
An enumerated type is stored as an unsigned byte if the enumeration has no more than 256 values, and if the type was declared in the {$Z1} state (the default). If an enumerated type has more than 256 values, or if the type was declared in the {$Z2} state, it is stored as an unsigned word. Finally, if an enumerated type is declared in the {$Z4} state, it is stored as an unsigned double word.
The {$Z2} and {$Z4} states are useful for interfacing with C and C++ libraries, which usually represent enumerated types as words or double words.
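For comparison only (this is not part of the Delphi answer): other languages pin enum sizes for C/C++ interop in the same way. In Rust, for instance, an explicit integer repr plays the role of {$MINENUMSIZE 4}:

#[allow(dead_code)]
#[repr(u32)] // force a 4-byte representation, analogous to {$Z4}
enum Color { Red, Green, Blue }

fn main() {
    // The enum now has a stable 4-byte size suitable for a C/C++ ABI.
    assert_eq!(std::mem::size_of::<Color>(), 4);
}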
I have this question: if I have a dynamic language divided into units (subprograms), is it possible for this language to have static scope?
If yes, how is it expressed in the symbol table? Does each row have a field representing the static chain, like the A.R. (activation record) in Algol-style languages?
Most languages have static scope. That includes many dynamic languages (Python, Ruby, JavaScript, and even Perl if you use my to declare your variables).
If yes, how is it expressed in the symbol table?
The same way it is in any other language. If you encounter a variable declaration (where in some languages "declaration" means "the first time that the variable is assigned to"), the variable is added to the table. Once you reach the end of the scope, it is removed from the table. In some languages the rules may be a bit more complicated (for example in Javascript the variable will be in scope even before its declaration), but that's basically it.
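A minimal sketch of what that looks like in code (the structure and names are illustrative, not a prescribed design): a stack of per-scope tables, pushed on scope entry, popped on scope exit, with lookups walking outward through enclosing scopes.

use std::collections::HashMap;

// One table per lexical scope; the stack models the static nesting.
struct SymbolTable {
    scopes: Vec<HashMap<String, String>>,
}

impl SymbolTable {
    fn new() -> Self { SymbolTable { scopes: vec![HashMap::new()] } }
    fn enter_scope(&mut self) { self.scopes.push(HashMap::new()); }
    fn exit_scope(&mut self) { self.scopes.pop(); } // drops that scope's names
    fn declare(&mut self, name: &str, info: &str) {
        self.scopes.last_mut().unwrap().insert(name.to_string(), info.to_string());
    }
    // Search the innermost scope first, then outward: static scoping.
    fn lookup(&self, name: &str) -> Option<&String> {
        self.scopes.iter().rev().find_map(|scope| scope.get(name))
    }
}

fn main() {
    let mut table = SymbolTable::new();
    table.declare("x", "global int");
    table.enter_scope();
    table.declare("x", "local float"); // shadows the outer x
    assert_eq!(table.lookup("x").unwrap(), "local float");
    table.exit_scope();
    assert_eq!(table.lookup("x").unwrap(), "global int");
}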
I wonder if there is a relationship between untyped/typed code quotations in F# and the hygiene of macro systems. Do they solve the same issues in their respective languages or are they separate concerns?
The meta-programming aspect is the only similarity, and even in that regard there is a big difference. You can think of a macro's transformer as a function from syntax to syntax, much as you manipulate quotations, but the transformers are globally coordinated so that names used as binders follow a specific protocol:
1) Binders may not be the same as any free name in the input to the macro (unless you use an unhygienic escape hatch).
2) Names bound in a macro definition's context that are free in the macro's expansion must point to the same thing at macro use time. (This needs global coordination.)
Choices of names are made so that expansion does not fail if you used the wrong name (unless it turns out that name is unbound).
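To make point 1 concrete with a hygienic macro system (Rust's macro_rules! here, standing in for the Scheme-style macros under discussion; this is an illustration, not the F# mechanism):

// The macro introduces a binder `x`; hygiene keeps it from capturing
// the caller's `x`, which arrives as the free name in the input.
macro_rules! with_ten {
    ($e:expr) => {{
        let x = 10;
        x + $e
    }};
}

fn main() {
    let x = 1;
    // The caller's `x` still means 1 inside the expansion.
    println!("{}", with_ten!(x)); // 11, not 20
}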
Transformers of typed quotations do not have this definition time context idea. You manipulate quotations to form a program that does not refer to any names in your program. They are not meant to provide a syntactic abstraction mechanism. Arbitrary shapes of syntax? Nope. It all has to be core AST shapes.
Open code in typed quotation systems can be closed with anything that fits the type structure of the expected context - there is no coordinated composition of several open components into a coherent structure.
Quotations are a form of meta-programming. They allow you to manipulate abstract syntax trees programmatically; the results can in turn be spliced into code and evaluated.
Typed quotations embed the reified type of the AST in the host language's type system, so they ensure you cannot generate ill-typed fragments of code. Untyped quotations do not offer that guarantee (they may fail with a runtime error).
As an aside, typed quotations are strongly similar to Template Haskell quasiquotations.
Hygienic macros in Lisp-like languages are related, in that they exist to support meta-programming. The hygiene, however, addresses simple name-capture confusion, something that typed quasiquotations already avoid (and more).
So yes, they are similar, in that they are mechanisms for meta-programming in typed and untyped languages, respectively. Both typed quasiquotes and hygienic macros add safety to fully untyped, unsound meta-programming. The level of guarantee they offer the programmer, though, is different: typed quotes are strictly stronger.