Is there any way in GeoGebra to extract a full construction protocol and feed it back modified?

GeoGebra seems to be the most intuitive graphic editor. As a bonus, it exposes the construction protocol. Unfortunately, the construction protocol exports only the creation of the objects with their dependencies, without the additional style information. That information is stored in the XML file, so it is in principle accessible. But even then, I cannot figure out how it could be used as input for GeoGebra. Is there any way to feed the modified construction protocol back, so that GeoGebra runs in a loop until we finish the construction by editing the construction protocol in an outside editor? Would a "load construction protocol" command be enough? And what would be the best channel through which such a command could be fed into GeoGebra? (AutoHotkey could be a last-resort solution, but I would rather avoid it.)

Related

Is it possible to modify command line parameters programmatically in Delphi?

I'm working with some legacy code that uses ParamCount() and ParamStr() in various places, and now I need to provide different values than those that were actually passed on the command line.
The simplest solution would be to programmatically add/modify the existing command line parameters, since the alternative is to change A LOT of the legacy code to accept function parameters rather than directly accessing ParamCount() and ParamStr().
Is this possible? Can I somehow add/modify parameters from within the program itself so that ParamCount() and ParamStr() will pick up my new/modified parameters?
Edit, clarification of the legacy code:
The legacy code makes some database requests, using WHERE arguments taken from the command line (and sanitizing them). This is fine in 99.9% of all cases, as these arguments are fundamental to the purpose of the legacy units. However, I'm working on a new feature that "breaks the mold", where one of these fundamental arguments is unknown and needs to be fetched from the database and provided internally.
Yes, I could search and replace, but my objective here is to not touch the legacy code, as it's in a unit that is shared among many different programs.
Restarting the program and/or executing a new copy of it from itself is one solution, but seems a bit risky and cumbersome. This is a production program that executes on a server and needs to be as simple and robust as possible.
Is this possible? Can I somehow add/modify parameters from within the program itself so that ParamCount() and ParamStr() will pick up my new/modified parameters?
Technically yes, but it is not something that the RTL itself exposes functionality for, so you will have to implement it manually.
Since you are working with legacy code, I'm assuming you are working on Windows only. In that case, ParamStr() and ParamCount() parse the string returned by the Win32 GetCommandLine() function from kernel32.dll.
So, one option is to simply hook the GetCommandLine() function itself at runtime, for example with Microsoft's Detours or a similar library. Then your hook can return whatever string you want [1].
[1]: For that matter, you could just hook ParamCount() and ParamStr() instead, and make them return whatever you want.
Another option - which requires messing around with lower-level memory that you don't own, and I don't advise doing this - is to obtain a pointer to the PEB structure of the calling process. You can obtain that pointer using NtQueryInformationProcess(ProcessBasicInformation). The PEB contains a ProcessParameters field, which is a pointer to an RTL_USER_PROCESS_PARAMETERS struct, which contains the actual command line as a UNICODE_STRING struct. If your altered string is less than or equal to the length of the original command line string, you can just alter the content of ProcessParameters.CommandLine in place. Otherwise, you would have to allocate new memory to hold your altered string and then update ProcessParameters.CommandLine to point at that new memory.

Binding imperatively

Is there a way to set up bindings imperatively? An example use case:
var el2 = new MyElement();
el2.myProp = this.$.anotherElement.anotherProp;
That won't set up a binding; it just assigns the value or object. I'd like to find a way to do something like:
el2.myProp.bindTo(this.$.anotherElement.anotherProp)
Possible?
Polymer 1.0 does not support this at the moment, as explained by @kevinpschaaf in GitHub issue https://github.com/Polymer/polymer/issues/1778.
(comment by @kevinpschaaf)
No, we don't currently support this, outside of dom-bind, which is the only template implementation that late-binds instance children. You can document.createElement('template', 'dom-bind'), then you can dynamically append children with binding annotations to its content, and the bindings will only be evaluated once the dom-bind is attached to the document. See tests here that show this usage of it: https://github.com/Polymer/polymer/blob/master/test/unit/dom-bind.html#L95
Note that dom-bind does not currently allow binding to outer scope, so it has limited use in custom element templates (its main use case is for binding between elements in the main document), and that's not likely to change short-term.
We are achieving a lot of performance optimization by baking the binding connections into the prototype at registration time for an element (rather than at instance time), and we haven't built up enough of the machinery to easily allow runtime addition/removal of bindings.

Using Dart as a DSL

I am trying to use Dart to tersely define entities in an application, following the idiom of code = configuration. Since I will be defining many entities, I'd like to keep the code as trim, concise, and readable as possible.
In an effort to keep boilerplate as close to 0 lines as possible, I recently wrote some code like this:
// man.dart
part of entity_component_framework;
var _man = entity('man', (entityBuilder) {
  entityBuilder.add([TopHat, CrookedTeeth]);
});
// test.dart
part of entity_component_framework;
var man = EntityBuilder.entities['man']; // null, since _man wasn't ever accessed.
The entity method associates the entityBuilder passed into the function with a name ('man' in this case). var _man exists only because a bare expression cannot stand on its own at the top level in Dart; it has to be part of a declaration such as a variable assignment. This seems to be the most concise way possible to use Dart as a DSL.
One thing I wasn't counting on, though, is lazy initialization. If I never access _man -- and I had no intention to, since the entity function neatly stored all the relevant information I required in another data structure -- then the entity function is never run. This is a feature, not a bug.
So, what's the cleanest way of using Dart as a DSL given the lazy initialization restriction?
So, as you point out, it's a feature that Dart doesn't run any code until it's told to. If you want something to happen, you need to do it in code that actually runs. Some possibilities:
Put your calls to entity() inside the main() function. I assume you don't want to do that, and probably that you want people to be able to add more of these in additional files without modifying the originals.
If you're willing to incur the overhead of mirrors, which is probably not that much if they're confined to this library, use them to find all the top-level variables in that library and access them. Or define them as functions or getters. But I assume that you like the property that variables are automatically one-shot. You'd want to use a MirrorsUsed annotation.
A variation on that would be to use annotations to mark the things you want to be initialized. Though this is similar in that you'd have to iterate over the annotated things, which I think would also require mirrors.

Stream which implements a Seek method

I'm trying to find an interface which allows me to create a stream that supports seeking (just a Reader is fine, too) from either a file or a []byte, but I can't seem to find anything in the godoc. Some of the types in the bufio package would work quite well, but they don't appear to support seeking.
Is there something I overlooked which would fit what I'm looking for?
Both *os.File (for files) and *bytes.Reader (for having an io.Reader from a []byte) implement the io.Seeker interface and thus have a Seek method.
io.Seeker is implemented by...
*bytes.Reader
*io.SectionReader
io.ReadSeeker
io.WriteSeeker
io.ReadWriteSeeker
mime/multipart.File
net/http.File
*os.File
*strings.Reader
So if you're working with a file, and thus very likely with *os.File, you don't need to do anything additional to be able to seek in it. Just make sure that, if you're using interfaces instead of concrete types, you ask for an io.ReadSeeker rather than just an io.Reader.
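For illustration, here is a minimal Go sketch covering both cases (the file name example.txt and the offsets are arbitrary):
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// readAt seeks to offset and reads n bytes from anything that satisfies io.ReadSeeker.
func readAt(rs io.ReadSeeker, offset int64, n int) ([]byte, error) {
	if _, err := rs.Seek(offset, io.SeekStart); err != nil {
		return nil, err
	}
	buf := make([]byte, n)
	_, err := io.ReadFull(rs, buf)
	return buf, err
}

func main() {
	// From a []byte: *bytes.Reader implements io.ReadSeeker.
	br := bytes.NewReader([]byte("hello, world"))
	b, _ := readAt(br, 7, 5)
	fmt.Printf("%s\n", b) // "world"

	// From a file: *os.File implements io.ReadSeeker as well.
	// "example.txt" is a hypothetical file name.
	if f, err := os.Open("example.txt"); err == nil {
		defer f.Close()
		b, _ = readAt(f, 0, 4)
		fmt.Printf("%s\n", b)
	}
}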

Parsing variable length descriptors from a byte stream and acting on their type

I'm reading from a byte stream that contains a series of variable-length descriptors, which I'm representing as various structs/classes in my code. Each descriptor has a fixed-length header in common with all the other descriptors, which is used to identify its type.
Is there an appropriate model or pattern I can use to best parse and represent each descriptor, and then perform an appropriate action depending on its type?
I've written lots of these types of parser.
I recommend that you read the fixed-length header and then dispatch to the correct constructor for your structures using a simple switch-case, passing the fixed header and the stream to that constructor so that it can consume the variable part of the stream.
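The question doesn't name a language, so purely as an illustration here is a minimal Go sketch of that "read the fixed header, switch on the type byte" dispatch; the two-byte header layout and the descriptor types are invented:
package main

import (
	"bytes"
	"fmt"
	"io"
)

// Hypothetical descriptor types; real code would have one per descriptor kind.
type FooDescriptor struct{ Payload []byte }
type BarDescriptor struct{ Payload []byte }

// parseOne reads one descriptor: a fixed two-byte header (type, payload length)
// followed by a variable-length payload, and dispatches on the type byte.
func parseOne(r io.Reader) (interface{}, error) {
	var hdr [2]byte
	if _, err := io.ReadFull(r, hdr[:]); err != nil {
		return nil, err // io.EOF once the stream is exhausted
	}
	payload := make([]byte, hdr[1])
	if _, err := io.ReadFull(r, payload); err != nil {
		return nil, err
	}
	switch hdr[0] {
	case 0x01:
		return FooDescriptor{payload}, nil
	case 0x02:
		return BarDescriptor{payload}, nil
	default:
		return nil, fmt.Errorf("unknown descriptor type 0x%02x", hdr[0])
	}
}

func main() {
	stream := bytes.NewReader([]byte{0x01, 0x03, 'a', 'b', 'c', 0x02, 0x01, 'x'})
	for {
		d, err := parseOne(stream)
		if err != nil {
			break
		}
		fmt.Printf("%T %v\n", d, d)
	}
}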
This is a common problem in file parsing. Commonly, you read the known part of the descriptor (which luckily is fixed-length in this case, but isn't always) and branch on it there. Generally I use a strategy pattern here, since I expect the system to be broadly flexible - but a straight switch or factory may work as well.
The other question is: do you control and trust the downstream code? Meaning: the factory / strategy implementation? If you do, then you can just give them the stream and the number of bytes you expect them to consume (perhaps putting some debug assertions in place, to verify that they do read exactly the right amount).
If you can't trust the factory/strategy implementation (perhaps you allow the user-code to use custom deserializers), then I would construct a wrapper on top of the stream (example: SubStream from protobuf-net), that only allows the expected number of bytes to be consumed (reporting EOF afterwards), and doesn't allow seek/etc operations outside of this block. I would also have runtime checks (even in release builds) that enough data has been consumed - but in this case I would probably just read past any unread data - i.e. if we expected the downstream code to consume 20 bytes, but it only read 12, then skip the next 8 and read our next descriptor.
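Purely as an illustration in Go, the length-limited wrapper described above can be approximated with io.LimitReader plus an explicit skip of whatever the handler left unread; the handler signature here is invented:
package main

import (
	"bytes"
	"fmt"
	"io"
)

// readDescriptor hands the handler a reader capped at exactly length bytes,
// then discards whatever the handler did not consume, so the outer stream
// stays aligned on the next descriptor.
func readDescriptor(r io.Reader, length int64, handler func(io.Reader) error) error {
	limited := io.LimitReader(r, length)
	if err := handler(limited); err != nil {
		return err
	}
	// e.g. the handler was expected to read 20 bytes but only read 12:
	// skip the remainder so the next descriptor starts in the right place.
	_, err := io.Copy(io.Discard, limited)
	return err
}

func main() {
	stream := bytes.NewReader([]byte("0123456789ABCDEF"))
	// This handler deliberately under-reads its 8-byte slot.
	_ = readDescriptor(stream, 8, func(r io.Reader) error {
		buf := make([]byte, 4)
		_, err := io.ReadFull(r, buf)
		fmt.Printf("handler read %q\n", buf)
		return err
	})
	rest, _ := io.ReadAll(stream)
	fmt.Printf("next descriptor starts at %q\n", rest) // "89ABCDEF"
}
In this sketch the handler only consumes 4 of its 8 bytes, and the outer stream still ends up positioned at the start of the next descriptor.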
To expand on the strategy idea, one design here might have something like:
interface ISerializer {
    object Deserialize(Stream source, int bytes);
    void Serialize(Stream destination, object value);
}
You might build a dictionary (or just a list, if the number is small) of such serializers keyed by the expected markers, resolve your serializer, then invoke the Deserialize method. If you don't recognise the marker, then (one of):
skip the given number of bytes
throw an error
store the extra bytes in a buffer somewhere (allowing for round-trip of unexpected data)
As a side-note to the above - this approach (strategy) is useful if the system is determined at runtime, either via reflection or via a runtime DSL (etc). If the system is entirely predictable at compile-time (because it doesn't change, or because you are using code-generation), then a straight switch approach may be more appropriate - and you probably don't need any extra interfaces, since you can inject the appropriate code directly.
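A hedged Go sketch of that marker-to-serializer registry; the marker values and the deserializer signature are invented for illustration:
package main

import (
	"errors"
	"fmt"
	"io"
)

// A deserializer is expected to consume exactly n bytes from source.
type deserializer func(source io.Reader, n int) (interface{}, error)

// serializers maps header markers to deserializers, mirroring the dictionary
// of ISerializer implementations suggested above. Marker values are invented.
var serializers = map[byte]deserializer{
	0x01: func(r io.Reader, n int) (interface{}, error) { return "descriptor A", nil },
	0x02: func(r io.Reader, n int) (interface{}, error) { return "descriptor B", nil },
}

// dispatch resolves the serializer for a marker and invokes it. Unknown
// markers could instead be skipped (io.CopyN(io.Discard, r, int64(n))) or
// buffered for round-tripping, as listed above.
func dispatch(marker byte, r io.Reader, n int) (interface{}, error) {
	d, ok := serializers[marker]
	if !ok {
		return nil, errors.New("unrecognised marker")
	}
	return d(r, n)
}

func main() {
	v, err := dispatch(0x01, nil, 0) // nil reader: the stub deserializers ignore it
	fmt.Println(v, err)
	_, err = dispatch(0x7F, nil, 0)
	fmt.Println(err)
}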
One key thing to remember: if you're reading from the stream and do not detect a valid header/message, throw away only the first byte before trying again. Many times I've seen a whole packet or message get thrown away instead, which can result in valid data being lost.
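A tiny Go illustration of that resynchronisation rule, assuming a hypothetical magic byte 0xAA as the header check:
package main

import (
	"bytes"
	"fmt"
)

// isValidHeader is a stand-in for whatever header check the real format needs;
// here a header simply starts with the magic byte 0xAA.
func isValidHeader(b []byte) bool {
	return len(b) > 0 && b[0] == 0xAA
}

// resync discards bytes one at a time (never a whole packet's worth) until the
// front of the buffer looks like a valid header again.
func resync(buf *bytes.Buffer) {
	for buf.Len() > 0 && !isValidHeader(buf.Bytes()) {
		buf.Next(1) // throw away only the first byte, then re-check
	}
}

func main() {
	buf := bytes.NewBuffer([]byte{0x00, 0x13, 0xAA, 0x01, 0x02})
	resync(buf)
	fmt.Printf("% x\n", buf.Bytes()) // aa 01 02
}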
This sounds like it might be a job for the Factory Method or perhaps Abstract Factory. Based on the header you choose which factory method to call, and that returns an object of the relevant type.
Whether this is better than simply adding constructors to a switch statement depends on the complexity and the uniformity of the objects you're creating.
I would suggest:
fifo = Fifo.new
while (fd is readable) {
    read everything off the fd and stick it into the fifo
    if (the front of the fifo has a valid header and
        the fifo is big enough for the payload) {
        dispatch constructor, remove bytes from fifo
    }
}
With this method:
you can do some error checking for bad payloads, and potentially throw bad data away
data is not waiting on the fd's read buffer (can be an issue for large payloads)
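A rough Go rendering of that loop, using bytes.Buffer as the FIFO; the header layout (a magic byte plus a one-byte payload length) is invented for the sketch:
package main

import (
	"bytes"
	"fmt"
	"io"
)

const headerLen = 2 // magic byte + payload length, made up for this sketch

// pump reads whatever is available from r into a FIFO (bytes.Buffer) and
// dispatches a complete message whenever header + payload are fully buffered.
func pump(r io.Reader, dispatch func(payload []byte)) error {
	var fifo bytes.Buffer
	chunk := make([]byte, 4096)
	for {
		n, err := r.Read(chunk)
		fifo.Write(chunk[:n])
		// Drain every complete message currently sitting in the FIFO.
		for fifo.Len() >= headerLen {
			head := fifo.Bytes()
			if head[0] != 0xAA { // bad header: drop one byte and retry
				fifo.Next(1)
				continue
			}
			need := headerLen + int(head[1])
			if fifo.Len() < need {
				break // payload not fully buffered yet
			}
			msg := make([]byte, need)
			fifo.Read(msg)
			dispatch(msg[headerLen:])
		}
		if err != nil {
			return err // io.EOF when the source is exhausted
		}
	}
}

func main() {
	src := bytes.NewReader([]byte{0xAA, 0x03, 'a', 'b', 'c', 0xAA, 0x01, 'z'})
	_ = pump(src, func(p []byte) { fmt.Printf("payload %q\n", p) })
}
The "drop one byte on a bad header" branch is the kind of error checking the first bullet above refers to.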
If you'd like it to be nice and OO, you can use the visitor pattern in an object hierarchy. The way I've done it (for identifying packets captured off the network, pretty much the same thing you might need) was like this:
huge object hierarchy, with one parent class
each class has a static constructor that registers with its parent, so the parent knows about its direct children (this was C++; I think this step is not needed in languages with good reflection support)
each class also has a static constructor method that gets the remaining part of the byte stream and, based on that, decides whether it is its responsibility to handle that data or not
When a packet came in, I simply passed it to the static constructor method of the main parent class (called Packet), which in turn asked each of its children whether the packet was their responsibility, and this went on recursively until one class at the bottom of the hierarchy returned the instantiated object.
Each of the static "constructor" methods cut its own header off the byte stream and passed only the payload down to its children.
The upside of this approach is that you can add new types anywhere in the object hierarchy WITHOUT needing to see/change ANY other class. It worked remarkably well for packets; the hierarchy went like this:
Packet
  EthernetPacket
    IPPacket
      UDPPacket, TCPPacket, ICMPPacket
      ...
I hope you can see the idea.
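Go has no class hierarchies, but the "children register themselves with their parent, and each child decides whether the data is its responsibility" idea can be sketched like this; the one-byte headers and type names are purely illustrative:
package main

import "fmt"

// A parser looks at the remaining bytes and, if it claims them, returns its
// name plus the payload to hand further down the hierarchy.
type parser func(data []byte) (name string, payload []byte, ok bool)

// children registered under their parent's name, mirroring the "static
// constructor registers with its parent" trick described above.
var children = map[string][]parser{}

func register(parent string, p parser) {
	children[parent] = append(children[parent], p)
}

// decode walks the hierarchy: each level strips its own (here: one-byte,
// hypothetical) header and passes the payload to the children of whichever
// node claimed it.
func decode(node string, data []byte) []string {
	for _, p := range children[node] {
		if name, payload, ok := p(data); ok {
			return append([]string{name}, decode(name, payload)...)
		}
	}
	return nil // leaf reached, or nothing claimed the data
}

func init() {
	register("Packet", func(d []byte) (string, []byte, bool) {
		if len(d) == 0 || d[0] != 0xE0 {
			return "", nil, false
		}
		return "EthernetPacket", d[1:], true
	})
	register("EthernetPacket", func(d []byte) (string, []byte, bool) {
		if len(d) == 0 || d[0] != 0x45 {
			return "", nil, false
		}
		return "IPPacket", d[1:], true
	})
}

func main() {
	fmt.Println(decode("Packet", []byte{0xE0, 0x45, 0x11})) // [EthernetPacket IPPacket]
}
New packet types can be added anywhere in the tree just by registering another parser for their parent, without touching the existing ones, which is the upside the answer above highlights.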
