What impact does my C/AL have on SQL?

This was a challenging blog post to write, in my opinion. I see many stories going around about the best way to use the new FIND instructions and what C/AL to write in which case. I have my own thoughts on this, which I would like to share. I won't claim the following is the best way to go; let's just say it's a good way to go. :-)

First question (and, I think, the most important question on people's minds): how to use the FIND statements. This is not easy, because the right FIND statement really depends on the type of loop you're using.

First of all, the basics. I think the table from Stryk explains it very well:

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FIND('-');
SQL (combined):
  SELECT *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))
  ORDER BY "No_"

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FINDFIRST;
SQL (combined):
  SELECT TOP 1 *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))
  ORDER BY "No_"

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FIND('+');
SQL (combined):
  SELECT *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))
  ORDER BY "No_" DESC

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FINDLAST;
SQL (combined):
  SELECT TOP 1 *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))
  ORDER BY "No_" DESC

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FINDSET;
SQL (combined):
  SELECT TOP 500 *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))
  ORDER BY "No_"

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  Customer.FINDSET(TRUE);
SQL (combined):
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
  SELECT *,DATALENGTH("Picture")
  FROM "dbo"."Cronus$Customer"
  WITH (UPDLOCK)
  WHERE (("Country Code"='US'))
  ORDER BY "No_"

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  IF Customer.ISEMPTY THEN;
SQL (combined):
  SELECT TOP 1 NULL
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  IF Customer.COUNT <> 0 THEN;
SQL (combined):
  SELECT COUNT(*)
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))

C/AL:
  Customer.SETRANGE("Country Code", 'US');
  IF Customer.COUNTAPPROX <> 0 THEN;
SQL (combined):
  SET SHOWPLAN_ALL ON
  SELECT *
  FROM "dbo"."Cronus$Customer"
  WITH (READUNCOMMITTED)
  WHERE (("Country Code"='US'))

C/AL:
  GLEntry.LOCKTABLE;
  GLEntry.FIND('+');
SQL (combined):
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
  SELECT *
  FROM "dbo"."Cronus$G_L Entry"
  WITH (UPDLOCK)
  ORDER BY "Entry No_" DESC

C/AL:
  GLEntry.LOCKTABLE;
  GLEntry.FINDLAST;
SQL (combined):
  SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
  SELECT TOP 1 *
  FROM "dbo"."Cronus$G_L Entry"
  WITH (UPDLOCK)
  ORDER BY "Entry No_" DESC

A ground rule might be: fetch as little data as possible (but all the data you need) in one server call. E.g., don't fetch all records if you're not going to loop through those records, and don't fetch any data if you just want to know whether a record is present in that range or not.
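As a minimal sketch of that ground rule (reusing the Customer range from the table above): if you only need to know whether a record exists, ask SQL Server for existence and nothing more.

```
// Heavier: COUNT makes SQL Server count the whole filtered range.
Customer.SETRANGE("Country Code", 'US');
IF Customer.COUNT <> 0 THEN
  MESSAGE('US customers exist');

// Lighter: ISEMPTY sends only a SELECT TOP 1 NULL for the same answer.
Customer.SETRANGE("Country Code", 'US');
IF NOT Customer.ISEMPTY THEN
  MESSAGE('US customers exist');
```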

The table above is nice, but it still doesn't make me think about the things I have to take into account when writing code. Let me try to help you with this by asking a few basic questions:

WHEN NOT LOOPING DATA

  • Am I going to change data?

    Suppose you write this code:

    recCust.GET('10000');
    recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting');
    recCust.MODIFY(TRUE);

    Not really exciting. In fact, I see code like this going around quite often. The problem is that it does an extra server call, because you didn't indicate that you wanted to change data. You'll see this in the Client Monitor:

C/AL:
  recCust.GET('10000');
SQL:
  SELECT *,DATALENGTH("Picture")
  FROM "CRONUS BELGIË NV$Customer"
  WITH (READUNCOMMITTED)
  WHERE "No_"='10000'
Explanation: Read the data without setting a lock (you didn't specify that you wanted one).

C/AL:
  recCust.VALIDATE(...);
  recCust.MODIFY(TRUE);
SQL:
  SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000'
  UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...
Explanation: Read the data again, this time placing a lock, then update the fields.

Selecting twice is overkill and should be avoided, and avoiding it is simple: specify that you want to change the data by calling LOCKTABLE. This makes our small piece of code look like:

recCust.LOCKTABLE;
recCust.GET('10000');
recCust.VALIDATE(Name, 'Van Terp Kantoorinrichting');
recCust.MODIFY(TRUE);

To make the comparison complete, here is what happens behind the scenes:

C/AL:
  recCust.LOCKTABLE;
  recCust.GET('10000');
SQL:
  SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (UPDLOCK, REPEATABLEREAD) WHERE "No_"='10000'
Explanation: Read the data and place a lock.

C/AL:
  recCust.VALIDATE(...);
  recCust.MODIFY(TRUE);
SQL:
  UPDATE "CRONUS BELGIË NV$Customer" WITH (REPEATABLEREAD) SET ...
Explanation: Update the fields.

This being said, take a look at following example:

IF ICHandledInboxTransaction.FIND('-') THEN BEGIN
  ICHandledInboxTransaction.LOCKTABLE;
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted;
  ICHandledInboxTransaction.MODIFY;
END;

This is code from default NAV (earlier version) which could be replaced by (to not change the behaviour on Native):

IF RECORDLEVELLOCKING THEN
  ICHandledInboxTransaction.LOCKTABLE;
IF ICHandledInboxTransaction.FINDFIRST THEN BEGIN
  ICHandledInboxTransaction.LOCKTABLE;
  ICHandledInboxTransaction.Status := ICHandledInboxTransaction.Status::Posted;
  ICHandledInboxTransaction.MODIFY;
END;

The RECORDLEVELLOCKING statement is the way to check in code whether you're running on SQL Server (TRUE) or Native (FALSE). I added these first two lines of code because it's necessary to lock before we read (so that the UPDLOCK hint is sent to SQL Server).

  • Does a record exist?

    There are two (good) ways to check whether a record is present for a certain range:

    • Using FINDFIRST: use this statement when you need the data of the first record, and only the first record.
    • Using ISEMPTY: a somewhat forgotten statement, but it's lighter than FINDFIRST. If you only want to check whether a record exists in the filtered range, and you don't need any field values from the records, use this statement.

    See the table above for the SQL statement each produces (TOP 1 * versus TOP 1 NULL).

    For example:

    IF SalesLine.FIND('-') THEN
      Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);

    Could be replaced by

    IF NOT SalesLine.ISEMPTY THEN
      Cust.CheckBlockedCustOnDocs(Cust,"Document Type",TRUE,TRUE);

WHEN LOOPING DATA

That wasn't that exciting, was it? But what if we want to loop through data? That's a whole different story.

The first thing I have to state is that NAV uses cursors to simulate Native behaviour. The new FIND statements (introduced in 4.0) are the first step away from this cursor-based approach.

Looping without changing data

In Stryk's table above, you can see the difference on SQL level between FIND('-'), FINDFIRST and FINDSET. I'll try to compare what happens on SQL Server:

C/AL:
  IF recCust.FINDFIRST THEN
  REPEAT
  UNTIL recCust.NEXT = 0;
SQL:
  SELECT TOP 1 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"
  SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) WHERE "No_">'01121212' ORDER BY "No_"
  FETCH 5
  FETCH 20
  FETCH 40
  FETCH 40

C/AL:
  IF recCust.FINDSET THEN
  REPEAT
  UNTIL recCust.NEXT = 0;
SQL:
  SELECT TOP 501 *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"

C/AL:
  IF recCust.FIND('-') THEN
  REPEAT
  UNTIL recCust.NEXT = 0;
SQL:
  SELECT *,DATALENGTH("Picture") FROM "CRONUS BELGIË NV$Customer" WITH (READUNCOMMITTED) ORDER BY "No_"
  FETCH 5
  FETCH 20
  FETCH 40
  FETCH 40

What do we learn from this:

  • The best way to go is using FINDSET. Why? It fetches a SET of records and works with this set all the way. You can see there is no cursor activity (no FETCH statements). If you read further, you'll see that this is not always the case!
  • FINDFIRST is worse than FIND('-') when looping records, because of the extra call (TOP 1 *). NAV does not expect a loop after a FINDFIRST. If it sees that you are looping, it issues a new SELECT * (the same as the FIND('-') statement) to be ready for the loop, and works with cursors to loop through the data (hence the FETCH statements).
  • So never blindly replace FIND('-') by FINDFIRST. Think about what you're doing.

Now, what about the number of records? Does that count?

Well, NAV behaves like this: if the set is bigger than 500 records (or the size you specified as the "Record Set" value in the "Alter Database" window), it will not use the record set, but is going to "cursor" its way through the rest of the loop, "old school". I'll try to show it using two examples. In both of them, the code runs through the loop 2840 times (in other words, the message shows "2840").

C/AL:
  i := 0;
  IF recGL.FIND('-') THEN
  REPEAT
    i := i + 1;
  UNTIL recGL.NEXT = 0;
  MESSAGE('%1',i);
SQL:
  SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"
  FETCH 5
  FETCH 20
  FETCH 60
  FETCH 60
  FETCH 60
  ...
  A total of 2845 "FETCHes"

C/AL:
  i := 0;
  IF recGL.FINDSET THEN
  REPEAT
    i := i + 1;
  UNTIL recGL.NEXT = 0;
  MESSAGE('%1',i);
SQL:
  SELECT TOP 501 * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) ORDER BY "Entry No_"
  SELECT * FROM "CRONUS BELGIË NV$G_L Entry" WITH (READUNCOMMITTED) WHERE "Entry No_">501 ORDER BY "Entry No_"
  FETCH 5
  FETCH 20
  FETCH 60
  FETCH 60
  FETCH 60
  ...
  A total of 2365 "FETCHes"

In the case of FINDSET: a first impression suggests that when the result set is bigger than 500 records, NAV just throws in an extra database call without the "TOP 500" and cursors its way through the records. When you examine it more deeply, you conclude that it actually loops through the record set of 500 records, and after that, NAV does not fetch the next 500 records, but just throws a "SELECT *" and cursors its way through the rest of the records.

That's why, with FINDSET, we have about 500 fewer FETCHes than with FIND('-'). BUT we have an extra database call, which is quite "heavy" (a "SELECT *" is heavier than a "FETCH 60").

It's hard to say which is best; further benchmarking should point that out. I would bet my money on using FINDSET in all cases. Why?

  • You avoid roughly ten "FETCH x" calls.
  • A "SELECT TOP 500" as an extra database call doesn't seem that heavy to me.

If anyone is interested in doing a thorough investigation on this... please count me in, or let me know the results. ;-)

What do we learn from this:

  • Using FINDSET in front of every loop is a pretty sure bet.
  • FINDSET only gets the first 500 records as a set; after that, it cursors its way through the rest of the records, just like FIND('-').

Looping with changing data

The same rules as above apply; just make sure you lock the data before you read it. You can do this either by using LOCKTABLE, like:

recGL.LOCKTABLE;
IF recGL.FINDSET THEN
REPEAT
...
UNTIL recGL.NEXT = 0;

Or, if using the FINDSET statement, you can use its parameter like:

IF recGL.FINDSET(TRUE) THEN
REPEAT
...
UNTIL recGL.NEXT = 0;

There is also an option FINDSET(TRUE, TRUE). Only use this if you are going to modify any field value within the current key. Frankly, I really don't recommend using it. You have to beware of the "NEXT FROM HELL" (as Hynek likes to call it).

The "Next from hell" can happen when you filter on a value and, in a loop, change that value: the loop jumps to places that are hard to predict. Also, when you change a key value in a loop (even without MODIFY), a NEXT can give unexpected results. Try to avoid these situations by declaring a new variable and modifying through that variable. For example, this is looking for trouble:

recCust.SETRANGE("Country/Region Code", 'BE');
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust.VALIDATE("Country/Region Code", 'US');
    recCust.MODIFY;
  UNTIL recCust.NEXT = 0;

You're modifying the country code while you're filtered on that field. Try to use a separate record variable, and leave your looping variable intact, like:

recCust.SETRANGE("Country/Region Code", 'BE');
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust2 := recCust;
    recCust2.VALIDATE("Country/Region Code", 'US');
    recCust2.MODIFY;
  UNTIL recCust.NEXT = 0;

Conclusion

When I started writing this article, I didn't expect it would be so hard to put this subject into words. I hope the above is clear to you, but just to be sure, here is an overview of the points mentioned above.

  • Not looping data
    • Modifying data
      • Use LOCKTABLE with GET or FINDFIRST
    • Not modifying data (usually to check if a record exists or not)
      • Use FINDFIRST if you need the data of the record further down the road
      • Use ISEMPTY if you do not need any field values
  • Looping data
    • Not modifying data
      • Use loop like: IF FINDSET THEN REPEAT UNTIL NEXT=0;
    • Modifying data: see that you lock the records that you read.
      • Use loop like: IF FINDSET(TRUE) THEN REPEAT UNTIL NEXT=0;
    • Mind the "NEXT FROM HELL" !
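To wrap up, the overview above as one C/AL sketch (a sketch only; the customer number and field values are placeholders):

```
// Not looping, modifying: lock before you read.
recCust.LOCKTABLE;
recCust.GET('10000');
recCust.VALIDATE(Name, 'New Name');
recCust.MODIFY(TRUE);

// Not looping, only checking existence: ISEMPTY is the lightest call.
recCust.SETRANGE("Country/Region Code", 'US');
IF NOT recCust.ISEMPTY THEN
  ; // a record exists in the range

// Looping without modifying:
IF recCust.FINDSET THEN
  REPEAT
    // read-only work here
  UNTIL recCust.NEXT = 0;

// Looping and modifying: lock the records you read,
// and never modify a field you filter or key on in the looping variable.
IF recCust.FINDSET(TRUE) THEN
  REPEAT
    recCust.MODIFY;
  UNTIL recCust.NEXT = 0;
```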

So, always think about what the code is doing, then decide what you need for optimal performance on SQL Server.

While writing this article, I used SQL Profiler and the Client Monitor to watch what my code was doing. I noticed very few people know what the Client Monitor can do, so who knows, there might be an upcoming blog post about monitoring your code. ;-)

Any comments are appreciated!

   

Comment List
  • Hi Alterant.

    I have done some tests which conclusively confirm that, with secondary keys, FINDSET performs no better than FIND when the table contains more than 500 records (50 in NAV 2009 R2 without customization).

  • Hi Lars & Waldo : Some comments at your code.

    recCust.SETRANGE("Country/Region Code", 'US');

    IF recCust.FINDSET(TRUE) THEN

    REPEAT

     recCustTemp := recCust;

      // CAUTION!... Some variables inside OnValidate could use and GET other Customer records.

     // In this case, GET retrieves the original records stored in DB

     recCustTemp.VALIDATE("Country/Region Code", 'BE');

     recCustTemp.INSERT;

    UNTIL recCust.NEXT = 0;

    IF recCustTemp.FINDSET(TRUE) THEN

    BEGIN

     REPEAT

       recCust := recCustTemp;

        // CAUTION!... Some variables inside OnModify could use and GET other records modified using same technique.

        // In this case, GET retrieves the original records stored in DB and works with obsolete information without consistency control

       recCust.MODIFY(TRUE);

     UNTIL recCustTemp.NEXT = 0;

    END;

  • Yes. That's a better approach if You do not modify all the records.

    The important thing is to store Your modified data temporary and not just use a second variable.

    /Lars

  • By the way .. this (last) kind of solution can be used when having locking issues.  

    Suppose you're looping 1000 records, and you only have to modify two of them.  Let's say the fourth record and the last record.  In a default loop and modify, NAV would start locking from the 4th record until the end of the loop.

    If you want to avoid this kind of behaviour .. you could work with temp tables .. store the records that have to be updated, and update them after the loop in a separate piece of code.

  • I have been thinking about this ... and besides performance (I didn't check this), this might be a better way to write your code:

    recCust.SETRANGE("Country/Region Code", 'US');

    IF recCust.FINDSET(TRUE) THEN

    REPEAT

    recCustTemp := recCust;

    recCustTemp.VALIDATE("Country/Region Code", 'BE');

    recCustTemp.INSERT;

    UNTIL recCust.NEXT = 0;

    IF recCustTemp.FINDSET(TRUE) THEN

    BEGIN

      REPEAT

        recCust := recCustTemp;

        recCust.MODIFY(TRUE);

      UNTIL recCustTemp.NEXT = 0;

    END;

    In this case, I am executing the OnModify trigger, and everything that has to be executed .. should be executed.

  • Lars,

    Thanks for you comments and suggestions.

    But I'm afraid it's a little bit "short through the corner" (as they say in Flemish :-) ).

    Sure, your code is much faster:

    - No OnDelete-trigger

    - No OnInsert-trigger

    This is good ... you don't want that in your purpose ... but where is my OnModify?

    In this case, my linked Contacts wouldn't be updated.

    Less code results in better performance ... but I still want to execute EVERY necessary code, no?

    I wouldn't recommend working like that... .

    And I tested the Next-From-Hell .. and indeed, working with a second variable produces the same statements as working with one ... BUT the record sets in memory are not updated, which means that the record set you're looping through is not updated ... this is a proven method.

  • Hi Waldo.

    Thanks for a great article!

    I would like to comment on Your strategy to avoid the "Next from hell". I'm sorry to say that You will get a "Next from hell" even if You update a second variable.

    The NAV database driver doesn't distinguish between the two record sets as we would like it to when communicating with SQL. There's actually no difference in performance between changing the filtered field in the filtered record variable or in a second variable. It's exactly the same traffic going on towards SQL.

    This is how it should look:

    (recCustTemp is declared as a temporary record variable)

    recCust.SETRANGE("Country/Region Code", 'BE');

    IF recCust.FINDSET(TRUE) THEN

    REPEAT

     recCustTemp := recCust;

     recCustTemp.VALIDATE("Country/Region Code", 'US');

     recCustTemp.INSERT;

    UNTIL recCust.NEXT = 0;

    IF recCustTemp.FINDSET(TRUE) THEN

     BEGIN

       recCust.DELETEALL;

       REPEAT

         recCust := recCustTemp;

         recCust.INSERT;

       UNTIL recCustTemp.NEXT = 0;

     END;

    This will give You a great performance gain.

    I did a little test on the G/L Entry table in a CRONUS database. I filtered on Global Dimension 1 = "SALES" and renamed the department code to "SALES2". Using a second variable for the update, it took approx 30 seconds (!) with Client Monitor activated to modify those few records. It's a LOT of data to write to the Client Monitor with that code. You will see NAV using an update cursor for every row that's changed.

    Using my suggested method, it takes 1.5 seconds to update the same data with Client Monitor activated. If I don't activate Client Monitor it's not that big a difference, since there is very little data in the CRONUS database, but I have seen huge differences in real life between the two approaches.

    //Lars

  • No problem, Asha.

    In my opinion, it's important that any developer understands what's going on on the server when he writes certain pieces of code... .

  • Very interesting, and very useful for me in understanding FIND. From your blog I can see what my C/AL turns into on SQL.

    Thank you for the nice blog...

  • My two cents concerning using of FINDSET with secondary key and using FINDSET(TRUE,TRUE).

    First, about FINDSET with secondary key. As Waldo said using FINDSET will result in two queries:

    SELECT TOP 500 ...

    and cursor SELECT * ... WHERE PK > 500 ...

    But it is true only for loops based on PK.

    If you have a loop based on a secondary key you'll have extra queries.

    For the table with two fields we will have the following picture:

    First we will get first 500 recs:

    SELECT TOP 500 * FROM "TestFindSet" ORDER BY "SecondKey","EntryNo"

    SecondKey  EntryNo

    A          1

    ...

    A          500

    Then we will have an extra query (cursor) to get the rest recs with SecondKey = 'A':

    SELECT  * FROM "TestFindSet" WHERE  "SecondKey"='A' AND "EntryNo">500 ORDER BY "SecondKey","EntryNo"

    SecondKey  EntryNo

    A          501

    A          502

    A          503

    A          504

    A          505

    And then we will have the query (cursor) to get the rest of the table:

    SELECT  * FROM "TestFindSet" WHERE  "SecondKey">'A' ORDER BY "SecondKey","EntryNo"

    B 506

    ...

    The more fields you have in the key, the more extra queries you will have. For example, for a table with 3 fields where two of them are in the same secondary key, we will have 2 extra queries:

    1. SELECT TOP 500 ... ORDER BY "SecKey1", "SecKey2", "PK"

    2. SELECT * ... WHERE "SecKey1" = zzz and "SecKey2" = xxx and PK > 500 ...

    3. SELECT * ... WHERE "SecKey1" = zzz and "SecKey2" > xxx ...

    4. SELECT * ... WHERE "SecKey1" > zzz ...

    Now the most interesting part about using FINDSET(TRUE, TRUE).

    When MS said "If you are going to modify any field value within the current key, set this parameter to TRUE." they didn't mean that you should use the same variable for looping and for modification (which can lead to the "NEXT from HELL" situation). You can look at example 3 of the help topic concerning FINDSET.

    So what is the difference between FINDSET(TRUE) and FINDSET(TRUE, TRUE)?

    FINDSET(TRUE) will open the cursor which will fetch a number of records at a time.

    FINDSET(TRUE, TRUE) will result in the cursor which will fetch records one by one.

    And you should know that during modification of the current key Navision will use the cursor which will fetch records one by one. This is the point.

    The following example illustrates what will happen if you use FINDSET(TRUE) for key modification loop:

    start reading

    1. open cursor

    2. fetch x

    3. ...

    4. fetch x

    first modification occurs

    5. close cursor

    6. open cursor

    7. fetch 1

    8. modify

    continue to modify

    9. ...

    10. fetch 1

    11. modify

    no more modifications at the moment - continue reading

    12. close cursor

    13. open cursor to read the rest of the records with the same values in the key (as described in the first part of my post)

    14. close cursor

    15. open cursor

    16. fetch x

    17. fetch x

    another modification

    18. close cursor

    19. open cursor

    20. fetch 1

    21. modify

    22. close cursor

    23. open cursor to read the rest of the records with the same values in the key

    24. close cursor

    25. open cursor

    26. fetch x

    27. ...

    28. close cursor

    As you can see using FINDSET(TRUE) will result in a number of cursor creation and closing procedures.

    As for FINDSET(TRUE,TRUE) the system will create the cursor only once, BUT it will fetch records one by one:

    1. open cursor

    2. fetch 1

    3. fetch 1

    4. ...

    first modification occurs

    5. fetch 1

    6. modify

    7. ...

    no more modifications

    8. fetch 1

    9. fetch 1

    10. ...

    11. close cursor

    So which method is preferable for loops with modifications of the key? In my opinion, unfortunately, there is no universal solution. If you are going to modify every second record in the table, then FINDSET(TRUE,TRUE) will be a good choice.

    But if you need to modify only a couple of records in a table with millions of records, then I think using FINDSET(TRUE) is more convenient.

    Hope this info will be useful.

  • That's what I say, Eric: soon you'll have enough for a small book! ;-)

  • Thank you for the kind messages. Who knows, there might be a hardcopy bundle some time. ;-)

    Let me create a "part two" blog post where I will handle MODIFYALL, DELETEALL and MARKing records, OK?

  • Nice article! It's indeed a smart idea to use SQL Profiler in combination with the Client Monitor to see how record-set manipulation in C/AL gets translated into SQL statements :-)

    Best Regards from France,

    Tarek Demiati

    Freelance Dynamics NAV (Navision) Developer

  • I think example 3 is best:

    RESET;

    SETCURRENTKEY(...);

    SETRANGE(MyField,'FILTERVALUE');

    FINDSET(TRUE[,TRUE]);

    REPEAT

     recOtherRecord := rec;

     recOtherRecord.MyOtherField := 'NEW VALUE';

     recOtherRecord.MyThirdField := 'ANOTHER NEW VALUE';

     recOtherRecord.VALIDATE(MyFourthField,'YET ANOTHER NEW VALUE');

     recOtherRecord.MODIFY(FALSE); // or (TRUE);

    UNTIL NEXT=0;

    And if you have a VALIDATE-trigger, it is impossible to use MODIFYALL.

  • Actually I also got a question on this I would like to hear your experiences on.

    A long time a go I was told that using the MODIFYALL was always a very good option. But how is SQL Server actually treating this command?

    I mean if I need to update two fields?

    Example 1:

    SETRANGE(MyField,'FILTERVALUE');

    MODIFYALL(MyOtherField,'NEW VALUE');

    MODIFYALL(MyThirdField,'ANOTHER NEW VALUE');

    Example 2:

    SETRANGE(MyField,'FILTERVALUE');

    FINDSET(TRUE);

    REPEAT

     MyOtherField := 'NEW VALUE';

     MyThirdField := 'ANOTHER NEW VALUE';

     MODIFY;

    UNTIL NEXT=0;

    Which example would be better?

    And what if ex. we had to call the VALIDATE trigger?
