Gents,
If it's true that Turbo limits the size of arrays to the program's data space, that seems very inefficient. A program would then always claim the maximum memory, fixed at compile time, whether it's needed or not*. This can't be correct, surely?
QLib, I believe, allocates space for arrays on the common heap, so large arrays don't require extra data space. I can't confirm at the moment that this dynamic allocation works in reverse, i.e. that memory first allocated to a large array is released again if the array is re-dimensioned to a smaller size, which would seem the natural requirement.
*It may be that LINK_LOAD resolves, or works around, the memory problem, with the extra mess and effort that involves.
This has always been the great divide between Turbo and QLib: Speed versus compatibility.
Since I already own QLib and am deeply committed to it, thanks to the tools I find useful and the toolkits I have developed, I'll stick with it for now. Speed is not critical for most GUIs and the like; ease of use and compatibility matter more. Where there's a requirement to shuffle around a lot of data, such as moving, sorting and searching, it is usually possible to drop down to assembler.
If one doesn't have QLib, or is just starting out with QL compilers, or one's requirements are for speed-critical code that doesn't lend itself to assembler, Turbo is the obvious choice. However, it was ultimately Turbo's handling of arrays (as it was some 25 or more years ago) that forced me to abandon it in favour of what I thought at the time was the inferior competition. Sadly, not enough has changed in that respect to tempt me back for anything I'm working on, or even dreaming of, at the moment.
Apologies for the voluminous verbiage. Even so, I feel I haven't done justice to all the comments and suggestions offered, for which I also apologise.
Per