Saxon transformer version 11.3 - memory leak

I upgraded from Saxon transformer version 9 to 11.3, and my application is now facing a memory leak issue.
It is a multi-threaded Java Spring Boot application.
The application has 10 different transformers and processes different XSL stylesheets.
The memory occupied grows over time and eventually takes up all available memory.
I have tried both reusing transformers and creating a new transformer for every transformation, with no luck.
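For reference, here is a minimal sketch of the usual thread-safe arrangement in Saxon's s9api, where the Processor and the compiled XsltExecutable are shared and a fresh transformer is loaded per transformation. The class name, stylesheet path and file names are illustrative, not taken from the actual application:

    import net.sf.saxon.s9api.*;

    import javax.xml.transform.stream.StreamSource;
    import java.io.File;

    public class TransformSketch {
        // Shared, thread-safe objects: one Processor and one compiled stylesheet per XSL.
        private static final Processor PROCESSOR = new Processor(false); // Saxon-HE
        private static final XsltExecutable STYLESHEET = compile("stylesheet.xsl"); // illustrative path

        private static XsltExecutable compile(String path) {
            try {
                return PROCESSOR.newXsltCompiler().compile(new StreamSource(new File(path)));
            } catch (SaxonApiException e) {
                throw new IllegalStateException("Failed to compile " + path, e);
            }
        }

        // Per transformation: load a fresh transformer from the shared executable.
        public static void transform(File input, File output) throws SaxonApiException {
            Xslt30Transformer transformer = STYLESHEET.load30();
            transformer.transform(new StreamSource(input), PROCESSOR.newSerializer(output));
        }
    }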

As Martin says, this kind of question is more suited to the Saxonica support forums than here. The reason is that it probably involves an iterative process of investigation, which doesn't fit well with the StackOverflow question/answer format (where, if I don't immediately have the answer, someone is likely to flag my response as "not an answer").
The term "memory leak" is rather inaccurate when applied to Java. A memory leak is when objects that are no longer referenced are not releasing their memory, but the problem here is (almost certainly) that objects are still referenced when you don't want them to be.
To investigate the problem you need to get a heap dump and find out where the problem lies. It may be obvious that there are millions of objects of a particular class that shouldn't be there. A heap dump analyser will enable you to find out where these objects are referenced from, which will explain why they are not being garbage-collected.
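As a side note, one straightforward way to capture a heap dump from inside a running JVM (alongside command-line tools such as jmap, or the -XX:+HeapDumpOnOutOfMemoryError startup option) is the HotSpot diagnostic MXBean. The file name below is only an example:

    import com.sun.management.HotSpotDiagnosticMXBean;
    import java.lang.management.ManagementFactory;

    public class HeapDump {
        public static void dump(String path) throws Exception {
            HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                    ManagementFactory.getPlatformMBeanServer(),
                    "com.sun.management:type=HotSpotDiagnostic",
                    HotSpotDiagnosticMXBean.class);
            // live = true restricts the dump to reachable objects, which is what matters
            // when the suspicion is that objects are still referenced.
            bean.dumpHeap(path, true);
        }

        public static void main(String[] args) throws Exception {
            dump("saxon-leak.hprof"); // illustrative file name
        }
    }

The resulting .hprof file can then be opened in an analyser such as Eclipse MAT or VisualVM to see what is holding on to the memory.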
You could proceed either by getting a heap dump and sending it to Saxonica (if you can't analyse it yourself), or by creating a repro of the failing application and sending that. My guess, given that it's a fairly complex application, is that the first approach is going to be easier for you.
We try to draw some lines around the level of support we're prepared to provide. One of those boundaries is that we won't attempt to run a repro that has complex dependencies, e.g. that requires third-party products to be installed. That would include Spring.
Note: different people adopt different strategies for problem solving, but personally, when I see something like this that worked in release X and doesn't work in release Y, I will usually investigate under release Y from first principles, rather than exploring what has changed. But there are exceptions; for example, we know that Saxon 11 uses a completely new approach to catalog-based URI resolution, and that will sometimes give clues as to where it's worth looking. So: it probably doesn't matter what release X was, but you should be aware that "Saxon 9" describes a series of ten major releases from 9.0 in 2008 to 9.9 in 2020, and it therefore isn't a very useful definition of your baseline.

Related

Swift - Conceptual, type-check time concerns

I'm building what I would consider a fairly light application, at least from a web developer's point of view. I have messages, posts, what have you, with a simple design with images and such. I'm using my old 2012 MacBook Pro to write this application, so I figure that might have something to do with it, but every build takes an obscene amount of time, like a minute or two, if it completes at all. I try to add something simple like an image clipped to the shape of a circle, and apparently that pushes the build past the point where the type-check takes a reasonable amount of time.
It just seems incredible that such a relatively simple application could take so long to build, and even crazier that this type-check is forced upon you; if it fails you can't even test your application. Sometimes the type-check error will just get put on a ZStack with multiple different views combined, after making many changes, so it's impossible to tell what made it throw the error, and I just have to work backwards, removing things to see what works. It seems like such an archaic way to write code. I don't know if I'm alone - perhaps if you start in mobile development you get used to it just being the way it is - or, as I mentioned, it might be due to the lackluster power of my computer, but there is no reason an IDE should be this processing-intensive, at least as I see it. I sort of understand Apple's rationale for forcing this type-checking on all apps, presumably to prevent bugs, but honestly it seems half-polished sometimes, it has never really helped me find errors that I wouldn't have found just by running the build, and it has been a detriment to the experience of writing Swift when it just won't build at all because it's having problems type-checking.
Is this issue with build times just a reality of Xcode and Swift, or is it due to the processing power of the machine I'm writing on? Both seem equally depressing, in all honesty.

Is the "4GB patch" of any use in real life?

And if so, how? I'm talking about this 4GB Patch.
On the face of it, it seems like a pretty nifty idea: on Windows, each 32-bit application normally only has access to 2GB of address space, but if you have 64-bit Windows, you can enable a little flag to allow a 32-bit application to access the full 4GB. The page gives some examples of applications that might benefit from it.
HOWEVER, most applications seem to assume that memory allocation always succeeds. Some applications do check whether allocations succeed, but even then they can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers. With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
So, have I made some incorrect assumptions? If not, how does this tool help in practice? I don't see how it could, yet I see quite a few people around the internet claiming it works (for some definition of works).
Your troublesome assumptions are these ones:
Some applications do check whether allocations succeed, but even then they can at best quit gracefully on failure. I've never in my (short) life come across an application that could fail a memory allocation and still keep going with no loss of functionality or impact on correctness, and I have a feeling that such applications range from extremely rare to essentially non-existent in the realm of desktop computers.
There do exist applications that do better than "quit gracefully" on failure. Yes, functionality will be impacted (after all, there wasn't enough memory to continue with the requested operation), but many apps will at least be able to stay running - so, for example, you may not be able to add any more text to your enormous document, but you can at least save the document in its current state (or make it smaller, etc.)
With this in mind, it would seem reasonable to assume that any such application would be programmed not to exceed 2GB of memory usage under normal conditions, and that the few that do would have been built with this magic flag already enabled for the benefit of 64-bit users.
The trouble with this assumption is that, in general, an application's memory usage is determined by what you do with it. So, as over the past years storage sizes have grown, and memory sizes have grown, the sizes of files that people want to operate on have also grown - so an application that worked fine when 1GB files were unheard of may struggle now that (for example) high definition video can be taken by many consumer cameras.
Putting that another way: applications that used to fit comfortably within 2GB of memory no longer do, because people want to do more with them now.
I think the following extract from the 4GB Patch page you linked pretty much explains how and why it works.
Why things are this way on x64 is easy to explain. On x86 applications have 2GB of virtual memory out of 4GB (the other 2GB are reserved for the system). On x64 these two other GB can now be accessed by 32bit applications. In order to achieve this, a flag has to be set in the file's internal format. This is, of course, very easy for insiders who do it every day with the CFF Explorer. This tool was written because not everybody is an insider, and most probably a lot of people don't even know that this can be achieved. Even I wouldn't have written this tool if someone didn't explicitly ask me to.
And to expand on CFF,
The CFF Explorer was designed to make PE editing as easy as possible, but without losing sight of the portable executable's internal structure. This application includes a series of tools which might help not only reverse engineers but also programmers. It offers a multi-file environment and a switchable interface.
And to quote a Microsoft insider, Larry Miller of Microsoft MCSA on a blog post about patching games using the tool,
Under 32 bit windows an application has access to 2GB of VIRTUAL memory space. 64 bit Windows makes 4GB available to applications. Without the change mentioned an application will only be able to access 2GB.
This was not an arbitrary restriction. Most 32 bit applications simply can not cope with a larger than 2GB address space. The switch mentioned indicates to the system that it is able to cope. If this switch is manually set most 32 bit applications will crash in a 64 bit environment.
In some cases the switch may be useful. But don't be surprised if it crashes.
And finally to add from MSDN - Migrating 32-bit Managed Code to 64-bit,
There is also information in the PE that tells the Windows loader if the assembly is targeted for a specific architecture. This additional information ensures that assemblies targeted for a particular architecture are not loaded in a different one. The C#, Visual Basic .NET, and C++ Whidbey compilers let you set the appropriate flags in the PE header. For example, C# and Visual Basic .NET have a /platform:{anycpu, x86, Itanium, x64} compiler option.
Note: While it is technically possible to modify the flags in the PE header of an assembly after it has been compiled, Microsoft does not recommend doing this.
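For the curious, the flag being discussed is the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020) in the Characteristics field of the PE/COFF header. The following minimal sketch only reads that field (it does not modify anything) to check whether an executable has the flag set; the file path is purely illustrative:

    import java.io.IOException;
    import java.io.RandomAccessFile;

    public class LargeAddressAwareCheck {
        // IMAGE_FILE_LARGE_ADDRESS_AWARE bit in the COFF Characteristics field.
        private static final int LARGE_ADDRESS_AWARE = 0x0020;

        public static boolean isSet(String exePath) throws IOException {
            try (RandomAccessFile f = new RandomAccessFile(exePath, "r")) {
                f.seek(0x3C);                                      // e_lfanew: offset of the "PE\0\0" signature
                int peOffset = Integer.reverseBytes(f.readInt());  // PE files are little-endian
                f.seek(peOffset + 4 + 18);                         // signature (4 bytes) + COFF fields before Characteristics (18 bytes)
                int characteristics = Short.reverseBytes(f.readShort()) & 0xFFFF;
                return (characteristics & LARGE_ADDRESS_AWARE) != 0;
            }
        }

        public static void main(String[] args) throws IOException {
            System.out.println(isSet("C:\\path\\to\\app.exe"));   // illustrative path
        }
    }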
Finally to answer your question - how does this tool help in practice?
Since you have malloc in your tags, I believe you are working with unmanaged memory. The patch itself does not change the size of pointers (they remain 32-bit); the danger is that code which treats pointers as signed integers, or which otherwise assumes addresses never cross the 2GB boundary, will start producing invalid pointers once allocations above 2GB are handed out.
But for managed code, since all of this is handled by the CLR in .NET, the patch should be genuinely helpful and should not cause many problems unless you are dealing with any of the following:
Invoking platform APIs via p/invoke
Invoking COM objects
Making use of unsafe code
Using marshaling as a mechanism for sharing information
Using serialization as a way of persisting state
To summarize: as a programmer, I would not use the tool to convert my own application; I would rather migrate it myself by changing the build targets. That being said, if I have an existing exe that could do well with more RAM, such as a game, then this is worth a try.

llvm based code mutation for genetic programming?

For a study on genetic programming, I would like to implement an evolutionary system on the basis of LLVM and apply code mutations (possibly at the IR level).
I found llvm-mutate, which is quite useful for executing point mutations.
As far as I have understood, the instructions get counted/numbered; one can then, e.g., delete a numbered instruction.
However, introducing new instructions seems to be possible only by reusing one of the statements already available in the code.
Real mutation, however, would allow inserting any of the allowed IR instructions, irrespective of whether it is already used in the code to be mutated.
In addition, it should be possible to insert calls to functions of linked libraries (not used in the current code, but potentially available because the library has been linked in by clang).
Did I overlook this in llvm-mutate, or is it really not possible so far?
Are there any projects trying to implement (or that have already implemented) such mutations for LLVM?
LLVM has lots of code-analysis tools which should allow the implementation of the aforementioned approach, but LLVM is huge, so I'm a bit disoriented. Any hints on which tools could be helpful (e.g. for getting a list of available library functions)?
Thanks
Alex
Very interesting question. I have been intrigued by the possibility of doing binary-level genetic programming for a while. With respect to what you ask:
It is apparent from their documentation that LLVM-mutate can't do what you are asking. However, I think it is wise for it not to. My reasoning is that any machine-language genetic program would inevitably face the "Halting Problem": it would be impossible to know whether a randomly generated instruction would completely crash the whole computer (for example, by assigning a value to an OS-reserved pointer), or whether it might run forever and take all of your CPU cycles. Turing's theorem tells us that it is impossible to know in advance whether a given program would do that. Mind you, LLVM-mutate can still cause a perfectly harmless program to crash or run forever, but I think their approach makes it less likely by only reusing existing instructions.
However, such a thing as "impossibility" only deters scientists, not engineers :-)...
What I have been thinking is this: in nature, real mutations work a lot more like LLVM-mutate than like what we do in normal genetic programming. In other words, they simply swap letters out of a very limited set (A, T, C, G) and every possible variation comes out of this. We could have a program or set of programs with an initial set of instructions, plus a set of "possible functions" either linked or defined in the program. Most of these functions would not actually be used, but they would be there to provide "raw DNA" for mutations, just like in our DNA. This set of functions would have the complete (or semi-complete) set of possible functions for a problem space. Then, we simply use basic operations like the ones in LLVM-mutate.
Some possible problems, though:
Given the amount of possible variability, the only way to have acceptable execution times would be to have massive amounts of computing power, possibly achievable in the Cloud or with GPUs.
You would still have to contend with Mr. Turing's Halting Problem. However, I think this could be resolved by running the solutions in a "sandbox" that doesn't take you down if a solution blows up: something like a single-use virtual machine or a Docker-like container, with a time limitation (to get out of infinite loops). A solution that crashes or times out would get the worst possible fitness, so that the programs would tend to diverge away from those paths (see the sketch below).
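To make the timeout-and-worst-fitness bookkeeping concrete, here is a minimal sketch in Java; the Candidate interface, the fitness value and the numbers are all invented for the example, and a thread with a timeout is of course not real isolation (in practice the candidate would run in the separate VM or container described above):

    import java.util.concurrent.*;

    public class SandboxEvaluator {
        private static final double WORST_FITNESS = Double.NEGATIVE_INFINITY; // illustrative choice

        // Stand-in for "a mutated program we can run and score".
        public interface Candidate {
            double run() throws Exception; // returns a fitness score
        }

        private final ExecutorService pool = Executors.newSingleThreadExecutor();

        public double evaluate(Candidate candidate, long timeoutMillis) {
            Future<Double> result = pool.submit(candidate::run);
            try {
                return result.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException | InterruptedException e) {
                // Infinite loop, crash or interruption: assign the worst fitness so that
                // selection pushes the population away from such variants.
                result.cancel(true);
                return WORST_FITNESS;
            }
        }
    }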
As to why do this at all, I can see a number of interesting applications: self-healing programs, programs that self-optimize for a specific environment, program "vaccination" against vulnerabilities, mutating viruses, quality assurance, etc.
I think there's a potential open source project here. It would be insane, dangerous and a time-sucking vortex: just my kind of project. Count me in if someone starts doing it.

Datamining models in FORTRAN or C (or managed code)?

We are planning to develop a datamining package for windows. The program core / calculation engine will be developed in F# with GUI stuff / DB bindings etc done in C# and F#.
However, we have not yet decided on the model implementations. Since we need high performance, we probably can't use managed code here (any objections?). The question is: is it reasonable to develop the models in FORTRAN, or should we stick to C (or maybe C++)? We are looking into using OpenCL at some point for suitable models - it feels funny having to go from managed code -> FORTRAN -> C -> OpenCL invocation for these situations.
Any recommendations?
F# compiles to the CLR, which has a just-in-time compiler. It's a dialect of ML, which is strongly typed, allowing all of the nice optimisations that go with that type of architecture; this means you will probably get reasonable performance from F#. For comparison, you could also try porting your code to OCaml (IIRC this compiles to native code) and see if that makes a material difference.
If it really is too slow, then see how far scaling the hardware will get you. With the performance available through a modern PC or server it seems unlikely that you would need to go to anything exotic unless you are working with truly brobdingnagian data sets. Users with smaller data sets may well be OK on an ordinary PC.
Workstations give you perhaps an order of magnitude more capacity than a standard desktop PC. A high-end workstation like an HP Z800 or XW9400 (similar kit is available from several other manufacturers) can take two 4- or 6-core CPU chips, tens of gigabytes of RAM (up to 192GB in some cases), and has various options for high-speed I/O like SAS disks, external disk arrays or SSDs. This type of hardware is expensive but may be cheaper than a large body of programmer time. Your existing desktop support infrastructure should be able to deal with this sort of kit. The most likely problem is compatibility issues running 32-bit software on a 64-bit O/S, in which case you have various options like VMs or KVM switches to work around them.
The next step up is a 4- or 8-socket server. Fairly ordinary Wintel servers go up to 8 sockets (32-48 cores) and perhaps 512GB of RAM, without having to move off the Wintel platform. This gives you a fairly wide range of options within your platform of choice before you have to go to anything exotic.
Finally, if you can't make it run quickly in F#, validate the F# prototype and build a C implementation using the F# prototype as a control. If that's still not fast enough you've got problems.
If your application can be structured in a way that suits the platform, then you could look at a more exotic platform. Depending on what will work with your application, you might be able to host it on a cluster or cloud provider, or build the core engine on a GPU, Cell processor or FPGA. However, in doing this you're taking on (quite substantial) additional costs and exotic dependencies that might cause support issues. You will probably also have to bring in a third-party consultant who knows how to program the platform.
After all that, the best advice is: suck it and see. If you're comfortable with F# you should be able to prototype your application fairly quickly. See how fast it runs and don't worry too much about performance until you have some clear indication that it really will be an issue. Remember, Knuth said that premature optimisation is the root of all evil about 97% of the time. Keep a weather eye out for issues and re-evaluate your strategy if you think performance really will cause trouble.
Edit: If you want to make a packaged application then you will probably be more performance-sensitive than otherwise. In this case performance will probably become an issue sooner than it would with a bespoke system. However, this doesn't affect the basic 'suck it and see' principle.
For example, at the risk of starting a game of buzzword bingo, if your application can be parallelized and made to work on a shared-nothing architecture you might see if one of the cloud server providers [ducks] could be induced to host it. An appropriate front-end could be built to run locally or through a browser. However, on this type of architecture the internet connection to the data source becomes a bottleneck. If you have large data sets then uploading these to the service provider becomes a problem. It may be quicker to process a large dataset locally than to upload it through an internet connection.
I would advise not to bother with optimizations yet. First try to get a working prototype, then find out where computation time is spent. You can probably move the biggest bottlenecks out into C or Fortran when and if needed -- then see how much difference it makes.
As they say, often 90% of the computation is spent in 10% of the code.

What is bootstrapping?

I keep seeing "bootstrapping" mentioned in discussions of application development. It seems both widespread and important, but I've yet to come across even a poor explanation of what bootstrapping actually is; rather, it seems as though everyone is just supposed to know what it means. I don't, though. Near as I can figure, it has something to do with initialization tasks required of an application upon launch, but I could be completely wrong about that. Can anyone help me to understand this idea?
"Bootstrapping" comes from the term "pulling yourself up by your own bootstraps." That much you can get from Wikipedia.
In computing, a bootstrap loader is the first piece of code that runs when a machine starts, and is responsible for loading the rest of the operating system. In modern computers it's stored in ROM, but I recall the bootstrap process on the PDP-11, where you would poke bits via the front-panel switches to load a particular disk segment into memory, and then run it. Needless to say, the bootstrap loader is normally pretty small.
"Bootstrapping" is also used as a term for building a system using itself -- or more correctly, a predecessor version. For example, ANTLR version 3 is written using a parser developed in ANTLR version 2.
An example of bootstrapping is in some web frameworks. You call index.php (the bootstrapper), which loads the framework's helpers, models and configuration, and then loads the controller and passes control off to it.
As you can see, it's a simple file that starts a large process.
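The same shape, as a tiny self-contained sketch in Java (every name in it is invented purely for illustration):

    import java.util.Map;

    // A minimal "bootstrapper": an entry point whose only job is to load
    // configuration, wire the parts together, and hand off control.
    public class Bootstrap {

        record Config(Map<String, String> settings) {
            static Config load() {
                // A real framework would read a file, the environment, etc.
                return new Config(Map.of("greeting", "hello"));
            }
        }

        interface Controller {
            void handle(String[] args);
        }

        static Controller wireUp(Config config) {
            // A real framework would register helpers, models, DB bindings, routes...
            return args -> System.out.println(config.settings().get("greeting"));
        }

        public static void main(String[] args) {
            Config config = Config.load();           // 1. load configuration
            Controller controller = wireUp(config);  // 2. wire the application together
            controller.handle(args);                 // 3. pass off control
        }
    }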
The term "bootstrapping" usually applies to a situation where a system depends on itself to start, sort of a chicken and egg problem.
For instance:
How do you compile a C compiler written in C?
How do you start an OS initialization process if you don't have the OS running yet?
How do you start a distributed (peer-to-peer) system where the clients depend on their currently known peers to find out about new peers in the system?
In that case, bootstrapping refers to a way of breaking the circular dependency, usually with the help of an external entity, e.g.
You can use another C compiler to compile (bootstrap) your own compiler, and then you can use it to recompile itself
You use a separate piece of code that sets up the initial process without depending on any functions provided by the OS
You use a hard-coded list of initial peers or a hard-coded tracker URL that supplies the peer list
etc.
See the Wikipedia article on bootstrapping.
There is a section and links explaining what it means in Computing. It has four different uses in the field.
Here are some quotes, but for a more in depth explanation, and alternative meanings, consult the links above.
"...is a technique by which a simple computer program activates a more complicated system of programs."
"A different use of the term bootstrapping is to use a compiler to compile itself, by first writing a small part of a compiler of a new programming language in an existing language to compile more programs of the new compiler written in the new language."
In the context of application development, "bootstrapping" usually comes up when talking about modular and/or auto-updatable software.
Rather than the user downloading the entire app, including features he does not need, and re-downloading and manually updating it whenever there is an update, the user only downloads and starts a small "bootstrap" executable, which in turn downloads and installs those parts of the application that the user needs. Additionally, the bootstrap component is able to look for updates and install them each time it is started.
Alex, it's pretty much what your computer does when it boots up. ('Booting' a computer actually comes from the word bootstrapping)
Initially, the small program in your BIOS runs. That contains enough machine code to load and run a larger, more complex program.
That second program is probably something like NTLDR (in Windows) or LILO (in Linux), which then executes and is able to load, then run, the rest of the operating system.
For completeness, it is also a rather important (and relatively new) method in statistics that uses resampling / simulation to infer population properties from a sample. It has its own lengthy Wikipedia article on bootstrapping (statistics).
Bootstrapping, in the dictionary sense, means starting up with minimal resources. In the context of an OS, the OS should be able to load swiftly once the Power-On Self-Test (POST) determines that it is safe to wake up the CPU. The bootstrap code is run from the BIOS, which is a small ROM; generally it is a jump instruction to the set of instructions which load the operating system into RAM. The destination of the jump is the boot sector on the hard disk. The BIOS program checks whether that boot sector, which contains the starting address of the stored OS, is valid, i.e. whether it is a valid MBR (Master Boot Record). If it is a valid MBR, the OS is copied into memory (RAM), and from there on the OS takes care of memory and process management.
The question has already been answered, but for web development: I found a good explanation of bootstrapping in the Laravel docs. Here is the link.
In general, we mean registering things, including registering service container bindings, event listeners, middleware, and even routes.
I hope it helps someone who is learning web application development.
Bootstrapping has yet another meaning in the context of reinforcement learning that may be useful to know for developers, in addition to its use in software development (most answers here, e.g. by kdgregory) and its use in statistics as discussed by Dirk Eddelbuettel.
From Sutton and Barto:
Widrow, Gupta, and Maitra (1973) modified the Least-Mean-Square (LMS) algorithm of Widrow and Hoff (1960) to produce a reinforcement learning rule that could learn from success and failure signals instead of from training examples. They called this form of learning “selective bootstrap adaptation” and described it as “learning with a critic” instead of “learning with a teacher.” They analyzed this rule and showed how it could learn to play blackjack. This was an isolated foray into reinforcement learning by Widrow, whose contributions to supervised learning were much more influential.
The book describes as bootstrap methods the various reinforcement learning algorithms in which the target value is based on a previous approximation:
Finally, we note one last special property of DP [Dynamic Programming] methods. All of them update estimates of the values of states based on estimates of the values of successor states. That is, they update estimates on the basis of other estimates. We call this general idea bootstrapping. Many reinforcement learning methods perform bootstrapping, even those that do not require, as DP requires, a complete and accurate model of the environment.
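For a concrete (if simplified) illustration of updating an estimate from another estimate, here is a sketch of a one-step temporal-difference (TD(0)) value update; the state names, reward and constants are invented for the example:

    import java.util.HashMap;
    import java.util.Map;

    public class TdZeroSketch {
        // Value estimates per state; states are just strings for illustration.
        private final Map<String, Double> value = new HashMap<>();
        private final double alpha = 0.1;  // learning rate (illustrative)
        private final double gamma = 0.9;  // discount factor (illustrative)

        // One TD(0) update: the target (reward + gamma * V(nextState)) itself contains
        // an estimate, V(nextState); that is the "bootstrapping" the book refers to.
        public void update(String state, double reward, String nextState) {
            double current = value.getOrDefault(state, 0.0);
            double target = reward + gamma * value.getOrDefault(nextState, 0.0);
            value.put(state, current + alpha * (target - current));
        }

        public static void main(String[] args) {
            TdZeroSketch v = new TdZeroSketch();
            v.update("s1", 1.0, "s2"); // invented transition with reward 1.0
            System.out.println(v.value);
        }
    }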
Note that this differs from bootstrap aggregating and intelligence explosion that is mentioned on the wikipedia page on bootstrapping.
I belong to the generation who flipped switches to enter a boot program. In the early 1980s, I worked on a microcomputer called Micro-78, developed by Electronics Corporation of India Ltd (ECIL). It was a sort of clone of the Altair 8800. I distinctly remember what happened when a small boot program was entered using the toggle switches and executed by pressing a button. The program read a second boot program contained in the first track of the floppy disk and overwrote itself with it, in such a way that the second boot program started executing to load a disk operating system. I think the term "bootstrap" refers to this process of the first boot program reading the second boot program and overwriting itself with it, in a way "pulling itself up" with the additional functionality of the second boot program. That may be the origin of the original meaning of "the bootstrap program".
IMHO there is no better explanation than the story of how the first compiler was written.
These days, loading the operating system is the process most commonly referred to as bootstrapping.
As for its use in regard to the popular Twitter Bootstrap, I feel this type of bootstrapping is the act of integrating a modular component into a Web application without the Web application having to acknowledge that the component even exists until it needs or references it.
The developer can seamlessly integrate a default copy of the Twitter Bootstrap CSS theme by simply loading (referencing) it in the Web application. Voila! You may then need to override some of these styles, but you can do so in such a way that the resource/component itself is untouched and completely reusable.
This is the same concept by which web devs integrate jQuery APIs and so on, but it's not really described by devs as bootstrapping per se. What it does is improve flexibility and reusability while allowing different components/resources of an app to reside freely either on the same server(s) or on a CDN.
NOTE: In computing, bootstrapping deals with the MBR, and in UNIX it requires a special boot loader or manager, which is a small program in ROM that loads the OS into RAM. If you think about it, the same concept takes place when the bootstrap loader checks the MBR and loads the OS based on that table, which occurs without the OS having any idea that it is taking place.
A bootstrap file is responsible for loading the contents of the main file. It is a wrapper around the main file, so that we can catch errors if loading the file is unsuccessful for some reason.
As a humble beginner in the world of programming, flicking through all the answers here after seeing this word used a lot in apparently slightly different ways in different places, I found that reading the Wikipedia page on Bootstrapping (duh! I didn't think of it at first either) is very informative for understanding the differences in how this word is used. Could it be... on extremely rare occasions... that Wikipedia might even have better explanations of certain terms than... (redacted)? Will they bring in rep points on Wikipedia though?
To me, all the meanings seem to have something to do with this: start with something as simple as possible, Thing1; use it to make something slightly more complex, Thing2; and now you can use Thing2 to do some kinds of tasks more efficiently and quickly than you could originally with Thing1. Then repeat from Thing2 to Thing3, ad infinitum...
I see it as closely connected to both biological evolution and 'layers of abstraction' (newbies like me: see, ahem, Wikipedia, cough) - the evolution from 1940s computers with switches, to machine code, Assembly, C, Python, and AIs you can give all kinds of complex instructions to, like "make the %4^% dinner to my default &^$% requirements and clean the floor you %$£"#:~", in drunken slang English or an Amazon tribal dialect, without them 'raising an exception' (for newbies again... you guessed it) - I've missed out a lot of links there due to simple ignorance.
Then in certain specific software meanings:
Meaning1: Thing1 is used to load the latest version of Thing2 (because of course Thing2 will be bigger than Thing1, just as Thing3 will be bigger than Thing2).
Meaning2: Thing1 is a lower-level language (closer to 1001011100....011001 than to print("Hello, ", user.name)) used to write a little bit of the higher-level language Thing2. Then this little bit of Thing2 is used to expand Thing2 itself from baby vocabulary level towards adult vocabulary level (Thing2 starts to be processed, or to use the correct technical term 'compiled', by the baby version of itself - it's a clever baby! - whereas the baby version of Thing2 could of course only be compiled by Thing1, because it can't exist before it exists, right, duh!). Then the child version of Thing2 compiles the Surly Teenager version of Thing2, at which point the programming community decides whether the Surly Teenager's 'issues' (software term and metaphor term!) are worth spending enough time resolving for it to be accepted long term, or whether to abandon them to (not sure where to take the analogy here).
If yes, then Thing2 has 'bootstrapped' itself (possibly a few times) from babyhood to adulthood: "the child is the father of the man" (Wordsworth; I suggest you don't try looking up the quote or the author on Stack Overflow).
