Here are my thoughts about performance for the following task:
select many records from the database and put the results, keyed by some fields, into an internal hashed table.
We can either insert the records into the hashed table inside a loop, or sort the internal table and delete adjacent duplicates.
Which is quicker?
Here is a small program to test:
REPORT ztest_performance.

TYPES: BEGIN OF ekpo_typ,
         ebeln TYPE ebeln,
         ebelp TYPE ebelp,
         matnr TYPE matnr,
         bukrs TYPE bukrs,
         werks TYPE ewerk,
       END OF ekpo_typ.

DATA: lt_ekpo   TYPE TABLE OF ekpo_typ.
DATA: lt_ekpo_h TYPE HASHED TABLE OF ekpo_typ WITH UNIQUE KEY bukrs.
DATA: lt_ekpo_2 TYPE HASHED TABLE OF ekpo_typ WITH UNIQUE KEY bukrs.

DATA: i1        TYPE i,
      i2        TYPE i,
      i3        TYPE i,
      i4        TYPE i,
      lv_lines1 TYPE i,
      lv_lines2 TYPE i,
      diff21    TYPE i,
      diff43    TYPE i.

FIELD-SYMBOLS: <fs_ekpo> LIKE LINE OF lt_ekpo.

SELECT ebeln
       ebelp
       matnr
       bukrs
       werks
  FROM ekpo
  INTO CORRESPONDING FIELDS OF TABLE lt_ekpo
  UP TO 1000000 ROWS.

* Variant 1: insert each line into the hashed table inside a loop.
* A line with a duplicate key is silently rejected (sy-subrc = 4).
GET RUN TIME FIELD i1.
LOOP AT lt_ekpo ASSIGNING <fs_ekpo>.
  INSERT <fs_ekpo> INTO TABLE lt_ekpo_h.
ENDLOOP.
GET RUN TIME FIELD i2.

lv_lines1 = lines( lt_ekpo_h ).
REFRESH lt_ekpo_h.

* Variant 2: sort, delete adjacent duplicates, copy the result.
GET RUN TIME FIELD i3.
SORT lt_ekpo BY bukrs.
DELETE ADJACENT DUPLICATES FROM lt_ekpo COMPARING bukrs.
lt_ekpo_2[] = lt_ekpo.
GET RUN TIME FIELD i4.

lv_lines2 = lines( lt_ekpo_2 ).
REFRESH lt_ekpo_2.

diff21 = i2 - i1.
diff43 = i4 - i3.

WRITE: / 'i2-i1  = ', diff21.
WRITE: / 'i4-i3  = ', diff43.
WRITE: / 'lines1 = ', lv_lines1.
WRITE: / 'lines2 = ', lv_lines2.
In my test system the results are (run times in microseconds, as returned by GET RUN TIME FIELD):
i2-i1  = 814.957
i4-i3  = 480.459
lines1 = 29
lines2 = 29
So, "sort and delete adjacent duplicates" works quicker here than "insert records into a hashed table inside a loop" (roughly 480 ms versus 815 ms for one million rows). This makes sense for this data: with only 29 unique company codes among 1,000,000 rows, almost every hashed INSERT fails on the duplicate key after computing the hash, while SORT and DELETE ADJACENT DUPLICATES run as single kernel-optimized statements.
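If only the distinct key values are needed (rather than full rows), a third option is to push the deduplication to the database itself, so that only the unique values are transferred to the application server at all. A minimal sketch, assuming we need just the company codes from EKPO:

```abap
* Sketch: let the database deduplicate with SELECT DISTINCT.
* Only the distinct company codes travel over the network,
* instead of one million full rows.
DATA: lt_bukrs TYPE TABLE OF bukrs.

SELECT DISTINCT bukrs
  FROM ekpo
  INTO TABLE lt_bukrs.
```

This changes the problem slightly (we no longer keep one representative row per key), so it only applies when the other columns are not needed; otherwise the two in-memory variants measured above are the relevant comparison.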