QL Heap

Anything QL Software or Programming Related.
mk79
QL Wafer Drive
Posts: 1349
Joined: Sun Feb 02, 2014 10:54 am
Location: Esslingen/Germany

Re: QL Heap

Post by mk79 »

bwinkel67 wrote:I do still need to do two seeks (or two trap3 calls) since I need to seek back to 0... for a while I couldn't figure out why I was getting the right size but not reading in the file :-/
Aaaah, right, sorry, I probably should have mentioned that... for seeking back to 0 you can just use fseek, of course. It would have been trivial to include an ftell function in the LIBC to make this portable, but unfortunately they didn't.
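For reference, this is the standard C idiom mk79 is alluding to: the fseek/ftell pair gives you the file size and then a second fseek returns to the start. This is a sketch of what the portable version would look like on a LIBC that does provide ftell (which, as noted, the QL's does not):

```c
#include <stdio.h>

/* Portable file-size query using the standard fseek/ftell pair.
   Returns -1 on error. Note: on the QL you would instead need two
   direct trap #3 calls, as described above, since ftell is missing. */
long file_size(const char *path)
{
    FILE *f = fopen(path, "rb");
    long size;

    if (!f)
        return -1L;
    if (fseek(f, 0L, SEEK_END) != 0) {  /* jump to end of file */
        fclose(f);
        return -1L;
    }
    size = ftell(f);                    /* offset at end == size */
    fseek(f, 0L, SEEK_SET);             /* seek back to 0 to read */
    fclose(f);
    return size;
}
```

(Strictly speaking the C standard only guarantees this for binary streams, but that is exactly the case here: sizing a file before loading it.)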


bwinkel67
QL Wafer Drive
Posts: 1187
Joined: Thu Oct 03, 2019 2:09 am

Re: QL Heap

Post by bwinkel67 »

I figured out my heap issue. It wasn't fragmentation after all, just an odd circumstance. In my test case I pre-allocated 6000 bytes (i.e. DIM A(2000) and DIM B(1000)) and then tried to load a program of almost 13000 bytes, and it wasn't failing due to fragmentation.

When I allocated space to load in the 12925 bytes, which falls just under the next 64-byte block limit of 12928, I didn't account for the "" issue in ZX81 BASIC. I store it internally as \"" but the standard ZX81 text file format uses just \", so when reading in a file I have to add an extra character for each instance of \", and the particular program I was testing on had a bunch of them. When I go past the 12928 bytes I allocated as a single block, my re-allocation process adds an extra 64 bytes and copies everything over (I should probably make that step bigger than 64 bytes). So I ended up trying to allocate 12992 bytes while already holding the 12928-byte and 6000-byte blocks, which totals just over the 30000 bytes of free space I had in the common heap... so no bad fragmentation after all, and QDOS was doing just fine.
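The grow-on-overflow step described above can be sketched roughly as follows. This is only an illustration of the scheme, not bwinkel67's actual code; the names grow_buffer and GROW_STEP are hypothetical:

```c
#include <stdlib.h>
#include <string.h>

/* Extra bytes added on each overflow. As noted above, 64 is probably
   too small a step for a ~13000-byte program load. */
#define GROW_STEP 64

/* Hypothetical re-allocation helper: allocate a block GROW_STEP bytes
   larger, copy the old contents across, and free the old block.
   Returns NULL (leaving *buf and *cap untouched) if the new
   allocation fails -- which is exactly what happened here when the
   old block, the new block, and the 6000 pre-allocated bytes together
   exceeded the ~30000 bytes of free common heap. */
char *grow_buffer(char **buf, size_t *cap)
{
    char *bigger = malloc(*cap + GROW_STEP);
    if (!bigger)
        return NULL;
    memcpy(bigger, *buf, *cap);
    free(*buf);
    *buf = bigger;
    *cap += GROW_STEP;
    return bigger;
}
```

Growing by a fixed 64 bytes means a load that overflows repeatedly pays a full copy per step; growing by a larger fixed chunk (or by a fraction of the current size) would cut both the copies and the transient peak where old and new blocks coexist.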

I did find some messy stuff in my code that has now been cleaned up. For any of the big allocation tasks I can abort pretty cleanly (and my internal memory allocation routine now always returns NULL on failure, which it wasn't always doing before when it set my global error). For little things, though, I don't check whether what I get back is NULL, and that may get me into trouble :-/ I don't want to add all that checking code since it would slow things down, but I may eventually add an internal spare buffer of 128 bytes, which is sufficient for all the small stuff. When an allocation fails with NULL I would return that alternate buffer instead, and define my own NULL to be that address so I can keep checking for it (zxNULL, maybe). That way, when the parser cascades out from deep in the tree after running out of memory, it won't be writing to places it shouldn't.

I always post a "BAD HEAP" message to the user and set a global error. For some operations (like LOAD program) you could continue without issue once everything has cascaded all the way back and the global error is reset (though it's probably always a good idea to quit at that point). It's when the smaller allocation requests start to fail that the program is likely totally screwed: right now it just uses the NULL address (since I'm not checking), treats it like a real address, and that is bad.
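The spare-buffer idea above can be sketched like this. Again, this is a hypothetical illustration of the scheme being proposed, not the actual interpreter code; zx_alloc, zx_free, spare and heap_error are made-up names:

```c
#include <stdlib.h>

#define SPARE_SIZE 128

/* Hypothetical fallback: a static 128-byte spare buffer returned when
   a real allocation fails, so code that hasn't checked yet scribbles
   on known scratch space instead of writing through a genuine NULL.
   zxNULL is the sentinel value callers compare against. */
static char spare[SPARE_SIZE];
#define zxNULL ((void *)spare)

static int heap_error = 0;  /* global "BAD HEAP" flag */

void *zx_alloc(size_t n)
{
    void *p = malloc(n);
    if (!p) {
        heap_error = 1;  /* caller reports "BAD HEAP" to the user */
        return zxNULL;   /* safe scratch space, not a real block */
    }
    return p;
}

void zx_free(void *p)
{
    if (p != zxNULL)     /* never free the static spare buffer */
        free(p);
}
```

One caveat with this design: if two small allocations fail in the same cascade, both callers hold zxNULL and will overwrite each other's scratch data, so it only papers over the unwind path; the global error still has to force an abort before the results are trusted.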

