Really big temptable in PRE-NAV2013 and NAV2013+

I have to admit I had some doubts about the title. I first thought of “An attempt was made to move the file pointer before the beginning of the file.”, but that title would have left out the part about NAV2013+ (which I explain in the second part of this blogpost). And I was more interested in what would happen in NAV2013+, with its 64-bit service tier built from scratch.

So this blogpost has two subtopics: one about PRE-NAV2013 and another about NAV2013+.

  • PRE-NAV2013 (tested with NAV2009R2)

If you have a really big temptable, at a certain point you might hit this error:

—————————

Microsoft Dynamics NAV Classic

—————————

The operating system returned the error (131):

An attempt was made to move the file pointer before the beginning of the file.

 

Why? What does this mean?

First of all: a temptable starts in memory, and when it gets a bit bigger, it is written to a file. How big before that happens? I don’t know exactly, but it is not important here.

PRE-NAV2013 uses a 4-byte signed integer to keep track of the size of that file. Why not a biginteger? That is an inheritance from the old FAT (https://en.wikipedia.org/wiki/File_Allocation_Table) and DOS (https://en.wikipedia.org/wiki/DOS) days. Basically, a file was limited to a maximum size of 2 GB. And a 4-byte signed integer goes from −2 147 483 648 to 2 147 483 647, so it was a perfect match.

With the later FAT implementations and NTFS (https://en.wikipedia.org/wiki/NTFS), we don’t have that 2 GB file limit anymore. But it is still there in PRE-NAV2013.

So while NAV is writing to the file, at a certain point it grows past 2 GB, the integer used to track the size overflows and becomes negative, and you get the above error (and the native client crashes, but that is not important here [well… kind of…]).
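To get a feel for when this bites (a rough estimate on my part, not a measurement): with an integer primary key and four Text250 fields all filled up, each record in the test table below carries about 4 + 4 × 250 ≈ 1 000 bytes of data, before any key or index overhead. 2 147 483 647 / ~1 000 ≈ 2.1 million records, so the error should hit somewhere around the two-million-record mark, well before the 5 000 000 iterations the test code asks for.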

To reproduce the problem, I created a table with an integer as primary key and 4 Text250 fields. The following code fills up the temptable until it hits the error. I used the same code for NAV2013+.

I put the code in a codeunit:

// Variables: dia : Dialog; t99915 : Record (temporary); i : Integer;
dia.OPEN('#1########');
t99915.t1 := PADSTR('',250,'x');
t99915.t2 := t99915.t1;
t99915.t3 := t99915.t1;
t99915.t4 := t99915.t1;
FOR i := 1 TO 5000000 DO BEGIN
  IF i MOD 1000 = 0 THEN
    dia.UPDATE(1,i);
  t99915.int := i;
  t99915.INSERT(FALSE);
END;
dia.CLOSE;

 

How to avoid this error? There are a few things you can try:

-If you are using a standard table, you can create your own table tailored to your needs. It does not need to be in the license if you only use the table as a temptable. Put only the fields you need in it, with only the sizes you need, and limit SIFT fields and extra keys as much as possible.

-Divide the records over multiple temptables (an array of a temptable record variable does not help, because all elements point to the same temptable and not to different ones). Each temptable gets its own file in the operating system, so you have multiple times 2 GB of data. See the sketch after this list.
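A minimal sketch of that second workaround (hypothetical variable names; it assumes two separately declared temporary record variables of the same table): the records are spread over the two variables with a simple modulo on the key, so each backing file stays well under 2 GB.

// Variables: t99915a, t99915b : Record (temporary, same table); i : Integer;
// The text fields are omitted here for brevity; fill them the same way as above.
FOR i := 1 TO 5000000 DO BEGIN
  // Even keys go to the first temptable, odd keys to the second,
  // so each temptable (and its file) ends up with roughly half of the data.
  IF i MOD 2 = 0 THEN BEGIN
    t99915a.int := i;
    t99915a.INSERT(FALSE);
  END ELSE BEGIN
    t99915b.int := i;
    t99915b.INSERT(FALSE);
  END;
END;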

I tested this on the Classic client and not on the 2009R2 RTC, but I expect the RTC has the same problem.

  • NAV2013+

Does NAV2013+ still have this problem? The short answer is no. I could stop the blogpost here, but I want to point out some details of what is going on.

-The first good surprise is that it is a lot faster than the older versions. This is because it does not write the data to disk, but keeps it in memory.

-The next good thing is that, as expected, I don’t get the error anymore, because it is all in memory now.

-But because everything is in memory, the service tier is gobbling up your precious memory, so you might run into out-of-memory issues. On the positive side, I did notice that the in-memory structure of a NAV2013+ temptable is more efficient than the file structure of a PRE-NAV2013 temptable: a 2 GB file structure in PRE-NAV2013 does NOT take 2 GB of memory in NAV2013+. I didn’t measure it scientifically, but my guess is that it takes around 600 MB to 1000 MB of memory instead of 2 GB.

One other interesting observation is this:

When I run the codeunit multiple times, memory usage climbs to a few GB, but then suddenly drops back under 1 GB. It looks like garbage collection kicking in.

