
SAP OSS Notes Series - Applying SAP Notes


breaking bad applying oss notes.png

 

After some time, we're back with "SAP OSS Notes Series" - a series of blog posts which aims to cover everything you need to know about SAP OSS Notes.

 

If you haven't read the previous episodes, check out:

 

SAP OSS Notes Series - Part 1 - SAP Versioning and Five Ws about OSS Notes

 

SAP OSS Notes Series - Part 2 - Discovering and reading notes

 

In this episode we are not going to cook blue-sky meth or escape from a prison filled with zombies, but I hope you get pretty excited about learning how to apply SAP Notes. To be honest, cooking meth, escaping from "walkers" and applying SAP OSS Notes are activities with a similar degree of risk. I'd say the last is the riskiest one.

 

There are mainly two ways of implementing or applying SAP OSS Notes in your company's system: via support package or manually.

 

Support Package

 

You might remember the concept of support packages described in the first episode of this blog series. They are just a bundle of corrections or very small improvements (technically speaking), delivered all together and related to the same SAP component.

 

fragile box.jpg

In other words, a support package (or SP) is a collection of SAP OSS Notes for a single SAP component. As SAP software is composed of many components, SPs are delivered in "blocks". These blocks are referred to as Support Package Stacks if you log into SAP Marketplace. To make things easier, let's just stick with the term "Support Package" to represent a set of SAP OSS Notes.

 

As each system version (product + EHP) is different from the others, the support packages also differ. This means that the SAP OSS Notes contained in Support Package 1 for ECC 6.00 are completely different from the OSS Notes contained in Support Package 1 for ECC 6.07.

 

Usually, the older the system version, the higher the number of Support Packages for it. At the moment this blog post is written, the latest SP (stack) for SAP ECC 6.00 is SP24, which was released in October 2013. On the other hand, the most recent SP (stack) for SAP ECC 6.07 is SP3, which was released at the end of January 2014. It's also possible to check the planned schedule for the next support packages on SAP Marketplace.

 

support package schedule.JPG

 

 

 

So if SAP finds a bug present in all EHP versions of SAP ECC 6, it will (or at least should) fix it in every system version. Then, SAP delivers this fix in the next SP for each system version (for example SP25 for ECC 6.00 and SP4 for ECC 6.07). By the way, this fix is described by a single SAP OSS Note (which you learned to read in part 2).

 

SAP recommends that its customers upgrade their systems to a newer SP at least once a year. As fixes are delivered all at once, there is no need to test every single fix individually, and the upgrade process is almost automatic; this recommendation should be followed, as it represents a lower risk for IT departments.

 

BASIS professionals are the ones mainly involved during SP upgrades. Of course you might want to test whether your business processes still work before upgrading the production SP, and you may or may not have automated integration tests, but that is a different topic.

 

 

Manual Implementation

 

There are many situations in which you cannot wait until your company upgrades the SP level or even the EHP of the system. These situations often involve a present or imminent issue in the production environment.

 

Having already discovered a SAP OSS Note which fixes an issue, it's time to apply it. As you already know, an OSS Note changes repository objects and/or customizing data, meaning it will generate change requests to be transported.

 

Please read carefully how notes are applied. You will realize that this activity should not be performed by just anyone. The risk of causing system damage is high if no caution is taken.

 

manual oss notes.jpg

 

 

Pre-implementation steps

The implementation of a SAP OSS Note is mostly done automatically by transaction SNOTE. However, sometimes there are a few steps you must follow before you implement the note itself. These steps are called pre-implementation steps.

 

How do I know if there is any pre-implementation step inside a SAP OSS Note?

The answer is simple: you have to read it. When there is a pre-implementation step, it will be stated in the note body. The steps you have to take vary a lot from note to note. Common examples include:

 

  • Creation/change of DDIC objects (domains, data elements, tables, structures, table types, views etc.)
  • Creation/change of other workbench objects, such as screens, table maintenance dialogs, PF-STATUS, title bars, etc.
  • Validation of specific data in the database

 

Normally, pre-implementation steps involve things that SAP cannot do automatically, so they are always done manually.

 

indianajones2.jpg

Keep calm and apply the note successfully.

 

Who is responsible for performing pre-implementation steps?

 

In most cases, an ABAP developer. Occasionally (when customizing has to be checked or changed), a functional professional might be involved as well. There is also the possibility of a pre-implementation step which involves importing a change request directly into the system from a kind of .zip file. These files are always attached to the SAP Note in SAP Marketplace. Ask your BASIS friend to help you in such cases, as you probably have no access to import a change request directly into the environment.

 

Things to remember when doing pre-implementation steps

 

As SAP tells you to create/change objects within its own name range (non-Z objects), you must register each object to be changed in SAP Marketplace one by one. If you as an ABAP developer have no access to SAP Marketplace, ask your BASIS friend to register the objects for you so that you will be able to apply the SAP OSS Note accordingly. DO NOT change any standard object without registering it, and don't register any object in SAP Marketplace if you don't have a SAP Note telling you to change it.

 

SAP OSS Notes are usually created by SAP with English as the original language of the workbench objects. To avoid any problems, be logged on in English when applying a SAP Note.

 

If you have to translate any object into your language, write that down. SAP Notes almost never tell you to do so, but you really should (especially when the changes have an impact on the UI, such as data element texts).

 

If there are any pre-implementation steps, once they are all done you will have a change request. Don't release it right away. You will have other changes to make, and you can (and should) use the same change request to avoid missing objects in the quality and production systems.

 

Note implementation

 

To implement the changes contained in a SAP OSS Note, you just have to open it using the SNOTE transaction and push the "Implement note" button. If all pre-implementation steps are done, all prerequisites (topic of the next blog post) are met, and the sun is shining on you, this process will change some objects and ask you to activate them and include them in a change request. Remember: use the same change request if you have done pre-implementation steps.

 

Open the SNOTE transaction:

 

oss main screen.JPG

If you have downloaded a SAP Note, click on the note browser and insert the note number(s):

 

oss note filter.JPG

oss note result.JPG

Double-click on a note to open it, or click on the execute button to implement it directly (always open and read the note before applying it).

 

 

If the note implementation is done successfully, its status will turn to "completely implemented". If not, something is missing, and there are hundreds of reasons for that to happen. This will be covered in the next episode.

 

oss note opened with pdf.JPG

 

 

Post-implementation steps

 

In the same way you might have pre-implementation steps, you might have post-implementation steps as well. The latter are not as frequent, but you should know they are always done manually too. Below are some common post-implementation steps:

 

  • Run a specific report to do something
  • Translate something into another language
  • Activate objects or configurations

 

Code Search and Replacement

 

It's very important to understand how the SNOTE transaction replaces broken source code with fixed code. If a specific report or include must be updated, SNOTE does NOT replace the whole source code with a new version. Instead, it replaces JUST the part of the source code which needs to be replaced.

 

What if there is more than one occurrence of this piece of code and not all of them must be replaced?

Great question and here is the answer.

 

Context, Insert and Delete sections

 

Once you have an OSS Note opened in the SNOTE transaction, you will see a tree on the left side of the screen. Basically it organizes each fix by product version. You can expand the tree node matching your system version and see which workbench objects are adjusted by this SAP Note. In the example below, 5 reports (including LGUSLDTT) are adjusted. Clicking on "Change to Source Code" in the left tree displays all changes in the main area, separated into three sections: Context, Insert and Delete.

 

You might correctly guess that what is under the "Insert" section is the source code which will be inserted once you click on the "Implement note" button. The same applies to the "Delete" section. So what is the purpose of the "Context" section? It identifies a unique piece of code inside the workbench object (report, include, function, method). The source code inside the Context section is untouched; however, what comes next will be changed according to the other two sections. This is what makes the OSS Note implementation automated (when there are no manual steps).

 

snote context insert delete.JPG
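
To make this concrete, here is a purely hypothetical correction, sketched the way SNOTE would see it (the report name, variables and the bug itself are all invented for illustration):

REPORT z_note_example.

DATA: gt_orders TYPE STANDARD TABLE OF vbak,
      gs_order  TYPE vbak,
      gv_total  TYPE vbak-netwr.

START-OF-SELECTION.
* Context section: SNOTE first locates this unique statement...
  LOOP AT gt_orders INTO gs_order.
* Delete section: ...then removes the faulty statement that followed it:
*   gv_total = gs_order-netwr.   "bug: overwrites instead of accumulating
* Insert section: ...and puts the corrected statement in its place:
    gv_total = gv_total + gs_order-netwr.
  ENDLOOP.

Because the context code is unique within the include, only this one occurrence is touched, even if similar statements exist elsewhere in the program.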

 

 

Conclusion

 

Well, now you should be quite confident about implementing many SAP Notes. However, be aware that most OSS Notes depend on each other. This will be the topic of the next and final episode of the SAP OSS Notes Series on AMC SCN.

 

autopilot team.jpg

SAP team responsible for applying SAP OSS Notes: the one on the left is usually not in charge of applying notes with pre-implementation steps.


Dumps are your friend. A different way of thinking.


When I talk with people about ABAP quality, the way they think of dumps always amazes me.

 

Most people believe dumps are their worst enemy. Actually they are not. In fact, you're lucky if your software dumps.

 

How is that?

 

In order to understand this, we need to talk about why software dumps in the first place. Software dumps because it runs into a corrupt state it can no longer process; something virtually breaks and further processing is impossible. This results in a dump. The operation ends. Users swear.

 

While this is not good, it provides one important benefit: you now know you have a problem.

 

Ask yourself the following question: "What if my business application runs into a corrupt state and doesn't dump (right away)?"

 

In this case your program may continue to operate for an unknown time span, potentially corrupting your persistent business data. If this happens, you won't find out for some time. And when you do find out, you may have a very hard time recovering from the data corruption and tracing it back to the actual programming defect that caused it.

 

Finding a problem that shows no (visible) symptoms can be extremely difficult. And its effects can be devastating once you discover them.

 

How would you - for example -  cure a disease that shows no (visible) symptoms? You couldn't. Because you don't know it's there. Until it may be too late. But if you see the symptoms, you can treat that disease and even take action to improve your health in general.

 

Seen from that angle, you're lucky if your program dumps. It's like a problem that waves at you with a white flag: "Hey, here I am. Fix me!".

 

Now if you're a Padawan, you'll find the bug and you'll fix it.

 

If you're a Jedi, you'll think about a process to avoid robustness issues in the future.

 

And if you're a Jedi Master, you'll learn from every future mistake and adapt/improve your processes as new bugs come along.

 

I encourage you to see dumps as a chance, not an enemy. A chance to improve the development process. A chance to avoid similar mistakes in the future. And a reminder that robust programming matters for your business.

 

If you would like to know more about avoiding robustness issues, I'd love to point you to another of my blog posts. Unfortunately, SCN would see this as "grounds for rejection." That's why I removed the link.

 

Dumps are not your only friend. Google is, too.

Data not displayed in Excel in ALV report


In an ALV report, when we click on the Microsoft Excel button (CTRL+SHIFT+F7) as shown below, the Excel sheet does not display data.

 

Image1.JPG
 
 
  image 2.JPG
   
For data to appear, do the following:

 

 

1. Open Excel

 

 

2. Click on the Office Button at the top and click Excel Options.

 

    Click on Trust Center as shown below:

 

image 3.png

  
  

3. Click on Trust Center Settings

 

 

image 4.jpg

4. Click on Macro Settings and Enable Trust Access

 

image 5.jpg

 

5. The data will then appear in Excel:

 

image 6.jpg

First real use of secondary indexes on an internal table


Introduction

 

Given the reluctance of the general ABAP community to use new-fangled (that is, over fifteen years old) concepts like SORTED and HASHED tables, I was hesitant to write about something a bit newer, but then I thought - what the heck, perhaps some people will find it an encouragement to use new stuff!

 

And I know this isn't that new!

 

So, we have HASHED tables, where the key is unique and the lookup time is constant for each record, and SORTED tables, which mean we don't need BINARY SEARCH any more (except if we need to sort descending...). For these tables, there's an index already defined to speed things up - but it's like a database table with just a primary index. Secondary keys are like additional indexes on database tables - but for internal tables.

 

I've heard it said that you should only use these if you've got tables with loads of information in them. Well, so long as the data isn't being handled in a loop, I think it doesn't matter. If the data volume being processed is small, a few extra nanoseconds won't matter, and data volumes grow - so there's some future-proofing in using the structures which are most efficient with large tables, right from the start.

 

Secondary keys

Here's that syntax, to refresh your memory.

 

TYPES dtype { {TYPE tabkind OF [REF TO] type}
            | {LIKE tabkind OF dobj} }
            [tabkeys]
            [INITIAL SIZE n].

 

And then tabkeys looks like this:

 

... [ WITH key ]
    [ WITH secondary_key1 ] [ WITH secondary_key2 ] ...
    [ {WITH|WITHOUT} FURTHER SECONDARY KEYS ] ... .

 

 

Additions

1. ... WITH FURTHER SECONDARY KEYS

 

2. ... WITHOUT FURTHER SECONDARY KEYS

 

Those additions, we'll forget about. They're for use when you're defining generic table types.

 

Now, for my purposes, I've got a questionnaire, with pages on it, categories of questions and questions. And I need to access it in many ways. So here's how I defined it:

 

TYPES:
  questionnaire_ty TYPE SORTED TABLE OF q_entry_ty WITH NON-UNIQUE KEY page_number cat_seq
                   WITH NON-UNIQUE SORTED KEY by_question COMPONENTS q_id
                   WITH NON-UNIQUE SORTED KEY by_cat_guid COMPONENTS cat_guid q_seq
                   WITH NON-UNIQUE SORTED KEY by_cat_text COMPONENTS cat_text
                   WITH NON-UNIQUE SORTED KEY by_cat_seq  COMPONENTS cat_seq.

 

The idea is that I can access an internal table of this type rapidly by page number, question id, category unique id (guid), category text and category sequence. Seems like quite a lot, but the alternatives were to have a standard table, sort it and use binary search for each read, or not bother at all and just put up with sequential reads.

 

Some problems

I've got the categories in my questionnaire in sequence order. So, naturally, I want to renumber them. The obvious way of doing this is:

 

LOOP AT me->questionnaire ASSIGNING <entry> USING KEY by_cat_guid WHERE cat_guid EQ i_guid.
  ADD 1 TO index.
  <entry>-cat_seq = index.
ENDLOOP.

 

But there's a problem there. It dumps. And it dumps because cat_seq is part of the key by_cat_guid!

 

So, I thought, I'll delete the records, collect them and then insert them afterwards:

LOOP AT me->questionnaire INTO entry USING KEY by_cat_guid WHERE cat_guid EQ i_guid.
  DELETE TABLE me->questionnaire FROM entry.
  ADD 1 TO index.
  entry-cat_seq = index.
  INSERT entry INTO TABLE renumbered.
ENDLOOP.
INSERT LINES OF renumbered INTO TABLE me->questionnaire.

 

But data was still going amiss. The problem was that the DELETE command deletes the entry that matches the primary key. So it was reading one entry in the LOOP AT, and deleting an entirely different entry (the one matching the primary key) with the DELETE.

 

I tried the old DELETE... INDEX, but that got me nowhere. But a quick check of the syntax for DELETE gave me the hint.

 

LOOP AT me->questionnaire INTO entry USING KEY by_cat_guid WHERE cat_guid EQ i_guid.
  DELETE TABLE me->questionnaire FROM entry USING KEY by_cat_guid.
  ADD 1 TO index.
  entry-cat_seq = index.
  INSERT entry INTO TABLE renumbered.
ENDLOOP.
INSERT LINES OF renumbered USING KEY by_cat_guid INTO TABLE me->questionnaire.

 

What to be aware of

With an internal table with additional keys, there are a few things you really need to take care about.

 

1. You can't change a field of an entry you've ASSIGNED to, if that field is part of one of the keys.

2. If you access data using one key, you really need to change it using the same key.

3. All of the usual internal table handling statements have the addition USING KEY. Sometimes it's vital - like with the DELETE example. Other times it's a matter of performance. For the INSERT LINES I could have omitted the USING KEY and it would still work - however it is not as efficient, since I know that all my renumbered entries have the same cat_guid. (See the sketch after this list.)
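
As a quick illustration of the keyed access above, here is a minimal sketch of a read via one of the secondary keys, against the questionnaire type defined earlier (the literal question id is invented for the example):

DATA: questionnaire TYPE questionnaire_ty,
      entry         TYPE q_entry_ty.

" Keyed read via the sorted secondary key BY_QUESTION - no sequential scan
READ TABLE questionnaire INTO entry
     WITH KEY by_question COMPONENTS q_id = '42'.
IF sy-subrc = 0.
  " the row was located through the secondary key's own index
ENDIF.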

 

Final words

When new ABAP commands become available, try to use them. In my application, it probably won't make any difference. But what you don't use, you forget. Surely there will come a time when you do need additional accesses to internal tables - if you've already practiced, the next time it won't take as long.

Tooltip function in New ABAP Editor


Hello SCN,

I just found an interesting function in the New ABAP Editor. It's not actually hidden, but I think not many people use it.

UPDATE: from the comments it seems it is available in SAP from "EHP6 731" onwards (the screenshots are from that release)

 

You can find it in the context menu (right-click) on code:

tooltip-1.JPG

It has different outputs in different places. For example, here is what it shows when used on a FM call:

tooltip-2.JPG

 

It also works on method calls, variables (showing their type) and other code...

 

What might be very helpful is that you can copy text from the tooltip and use it (e.g. to declare variables of the needed type for a FM/method call).

 

Example:

Instance method: GET_DDIC_FIELD
  Returns type description for Dictionary type
IMPORTING
     Value(P_LANGU) TYPE SYLANGU Optional SY-LANGU
        Current Language
RETURNING
     Value(P_FLDDESCR) TYPE DFIES Optional
        Field Description
EXCEPTIONS
      NOT_FOUND
      NO_DDIC_TYPE

Imagine having this tooltip in a switchable side panel (similar to the "Repository Browser" in SE80), reacting interactively to the cursor position.

No more double-clicking on methods, functions, variables etc. to see their type or description...

Wouldn't that be amazing?

Production order with Multiple Batches


This blog introduces a solution process for creating multiple batches for a production order and using the same batches for a goods receipt on the same production order.

Background:

The company uses production orders to convert industrial materials into commercial materials, both of which are handled in batches. Each batch represents a physical unit which usually contains (say) 100 PCE of a material.

A production order should be able to create multiple batches (of commercial material), one batch per component used in the production order. It was also required to inherit the batch characteristics from the industrial material batches (component material batches) into the commercial material batches.

Currently, SAP is not able to create multiple batches within a production order; by default, the production order creates one batch for all the components used in the production order. SAP is also not able to inherit the batch characteristics into the newly created batches.

Solution:

The solution process proposed in this blog is a two-step process. Although there are many ways of doing it, the solution that worked for me is described below.

 

The first step in the solution process is to create multiple batches for a production order. It is described in detail in the document Creating multiple batches for a production order.

 

The second step is detailed in the document Automate distribution of quantity in MIGO, which specifically deals with relating the multiple batches (created in the step above) to the production order by automating the quantity distribution of a goods receipt with respect to a production order in MIGO.

Hope it was helpful.

Change fieldcatalog and layout of ALV after its initial display.


We can make any number of modifications to the field catalog and layout of an ALV grid even after it has been displayed on the screen: we can hide certain columns, change the column text, change the column position etc. We can achieve all this simply by using the following methods of the class CL_GUI_ALV_GRID:

 

For fieldcatalog modification:


get_frontend_fieldcatalog

set_frontend_fieldcatalog

 

For layout modification:

get_frontend_layout

set_frontend_layout

 

Steps to change the field catalog after the first display:

  1. Trigger the PAI using a pushbutton or in some other manner.
  2. For this triggered function code, get the existing field catalog using the method get_frontend_fieldcatalog.
  3. Make the required modifications to the field catalog.
  4. To reflect these changes in the ALV grid, use the method set_frontend_fieldcatalog.
  5. Call the method refresh_table_display of class CL_GUI_ALV_GRID to refresh the ALV display so that the modifications to the grid become visible.

The ALV layout can be changed in a similar manner using the get and set methods meant for the layout, as the sketch below shows.
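
Here is a minimal sketch of steps 2 to 5, assuming go_grid is a reference to an already-displayed CL_GUI_ALV_GRID instance and that we are inside the PAI module handling the function code:

DATA: lt_fcat   TYPE lvc_t_fcat,
      ls_layout TYPE lvc_s_layo.

FIELD-SYMBOLS: <ls_fcat> TYPE lvc_s_fcat.

* 2. Read the current field catalog from the grid
go_grid->get_frontend_fieldcatalog( IMPORTING et_fieldcatalog = lt_fcat ).

* 3. Modify it - here: show the technical names as column headers
LOOP AT lt_fcat ASSIGNING <ls_fcat>.
  <ls_fcat>-coltext = <ls_fcat>-fieldname.
ENDLOOP.

* 4. Hand the changed field catalog back to the grid
go_grid->set_frontend_fieldcatalog( it_fieldcatalog = lt_fcat ).

* The layout works the same way - e.g. switch on zebra stripes
go_grid->get_frontend_layout( IMPORTING es_layout = ls_layout ).
ls_layout-zebra = abap_true.
go_grid->set_frontend_layout( is_layout = ls_layout ).

* 5. Refresh the display so the changes become visible
go_grid->refresh_table_display( ).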


Example:


This is the initial display of the ALV grid.

5.png

On clicking the Technical Name button, the existing column headings are replaced by their equivalent technical names.


6.png


On clicking the Layout 1 button, the existing layout is changed to a zebra layout.


7.png


Similarly, you can present multiple layouts and display options to the user using this technique.



Dynamic access to internal table (or range)


Hello SCN,

 

So the other day I had the following requirement (I work on a SAP CRM 7.0 system): I wrote a new program in which I needed some data processing which was already coded in the subroutine of another – already existing – program. Since it concerned a pretty large piece of code, I decided not to simply copy-paste the logic but to call the subroutine from within my program like this:

 

PERFORM subroutine IN PROGRAM main_program CHANGING t_result.

 

Since the program whose subroutine I was calling has a selection screen, and some of these parameters are used in the subroutine, I had to add an importing (USING) parameter to the subroutine containing the values for these parameters. Some of these values are supplied by the user on the selection screen of my program, others are calculated in my program flow. So the above statement was corrected as follows:

 

PERFORM subroutine IN PROGRAM main_program
  USING    t_selscr_parameters
  CHANGING t_result.

 

Now comes the tricky part. The table T_SELSCR_PARAMETERS has the structure RSPARAMS (so basically the standard type for any selection screen, with components SELNAME, KIND, SIGN, OPTION, LOW and HIGH). It contains records with the exact names (SELNAME) of the corresponding selection screen parameters and - of course - the values to be transferred to them (e.g. SIGN = 'I', OPTION = 'EQ', LOW = 'xxx').

 

So I added some logic to the subroutine which we are calling: a loop over SELSCR_PARAMETERS to transfer the value of each table line into the corresponding parameter from our main program’s selection screen.

For a regular parameter, I knew I could work with a field symbol of type 'any' and simply assign the name of the parameter (LS_RSPARAM-SELNAME) to this field symbol - let's name him <FS_ANY>. If the assignment works (which it should, because I named the parameter records in the SELSCR_PARAMETERS table exactly the same as the parameters on the selection screen), you can transfer the value into the selection screen parameter by using the following statement:

<FS_ANY> = LS_RSPARAM-LOW.

 

But... next to the 'regular' parameters, there were also some ranges (SELECT-OPTIONS) which needed to be transferred into the selection screen. Ranges are in fact separate internal tables with a header line:

scr-1.jpg

So you could use the same statement as for a regular parameter

ASSIGN (ls_rsparam-selname) TO <fs_any>.

But it would not be useful, since you need to append a structure of type RSPARAMS to your range (assigned to <FS_ANY>) and you can't do that - because <FS_ANY> is not an internal table.

 

So, you might think, I'll simply create a new field symbol <fs_anytab> TYPE ANY TABLE. That way I can assign ls_rsparam-selname to <fs_anytab>, and append to that field symbol.

 

True, syntactically this logic would not cause any problems, and your program would activate without errors. But once you step over the statement, you will get the following shortdump:

scr-2.jpg

So below you can find how I solved this issue. I searched for answers in the forum discussions here on SCN, but couldn't find it immediately. Perhaps it is out there somewhere (especially since this concept is widely used in R/3, though not so much in CRM), but I blogged about it nonetheless, hoping to save a fellow colleague some valuable time.

 

DATA: ref(50)    TYPE c,
      dref       TYPE REF TO data,
      ls_rsparam TYPE rsparams.   "work area for the parameter table

FIELD-SYMBOLS: <fs_any>    TYPE any,
               <fs_any_1>  TYPE any,
               <fs_anytab> TYPE ANY TABLE.

LOOP AT i_selscr_parameters INTO ls_rsparam.
  CASE ls_rsparam-kind.
    WHEN 'P'.
*     This is a regular parameter
      ASSIGN (ls_rsparam-selname) TO <fs_any>.
      IF <fs_any> IS ASSIGNED.
        <fs_any> = ls_rsparam-low.
        UNASSIGN <fs_any>.
      ENDIF.
    WHEN 'S'.
*     This is a range. Now ranges are in fact tables with header line,
*     and a row structure SIGN OPTION LOW HIGH.
      CONCATENATE '(' sy-repid ')' ls_rsparam-selname '[]' INTO ref.
      CONDENSE ref NO-GAPS.

      ASSIGN (ref) TO <fs_anytab>.
*     So now we have the table (MAINPROGRAM)S_RANGE[] assigned to a
*     field-symbol of type ANY TABLE without dumping ;-)
      IF <fs_anytab> IS ASSIGNED.
*       We still need a structure which has the same line type as <fs_anytab>
        CREATE DATA dref LIKE LINE OF <fs_anytab>.

*       And now <fs_any> has our line type, we can start transferring the
*       values to the different components of the structure
        ASSIGN dref->* TO <fs_any>.
        IF <fs_any> IS ASSIGNED.
          ASSIGN COMPONENT 'SIGN' OF STRUCTURE <fs_any> TO <fs_any_1>.
          IF <fs_any_1> IS ASSIGNED.
            <fs_any_1> = ls_rsparam-sign.
            UNASSIGN <fs_any_1>.
          ENDIF.
          ASSIGN COMPONENT 'OPTION' OF STRUCTURE <fs_any> TO <fs_any_1>.
          IF <fs_any_1> IS ASSIGNED.
            <fs_any_1> = ls_rsparam-option.
            UNASSIGN <fs_any_1>.
          ENDIF.
          ASSIGN COMPONENT 'LOW' OF STRUCTURE <fs_any> TO <fs_any_1>.
          IF <fs_any_1> IS ASSIGNED.
            <fs_any_1> = ls_rsparam-low.
            UNASSIGN <fs_any_1>.
          ENDIF.
          ASSIGN COMPONENT 'HIGH' OF STRUCTURE <fs_any> TO <fs_any_1>.
          IF <fs_any_1> IS ASSIGNED.
            <fs_any_1> = ls_rsparam-high.
            UNASSIGN <fs_any_1>.
          ENDIF.
*         Finally append the filled line to the range table itself
          INSERT <fs_any> INTO TABLE <fs_anytab>.
        ENDIF.
      ENDIF.
  ENDCASE.
ENDLOOP.

 

NOTE: The point of this blog is to elaborate on accessing internal table variables dynamically across programs; I certainly do not claim this was the best or most performant solution to my original requirement. Any comments on this blog are highly appreciated!

 

Cheers,

Tom.


Seek the most efficient way to detect whether there are table rows with duplicate keys


The requirement is: there is an internal table with a large number of rows.

 

If all rows have an identical recipient_id, that id (30273) must be returned.

 

UUID                 Phone_number    Recipient_id
0412ASFDSFDSFXCVS    138XXXXX1       30273
0412ASFDSFDSFXCVD    138XXXXX2       30273
0412ASFDSFDSFXCVF    138XXXXX3       30273
...                  ...             30273

 

If not, it must return empty.

UUID                 Phone_number    Recipient_id
0412ASFDSFDSFXCVS    138XXXXX1       30273
0412ASFDSFDSFXCVD    138XXXXX2       30273
0412ASFDSFDSFXCVF    138XXXXX3       30273
...                  ...             30272

 

The table line type structure in the project looks like below:

clipboard1.png

Three different solutions were tried.

 

Approach1

The idea: a temporary table lt_sms_status holds a copy of the internal table to be checked; we then SORT the temporary table and delete adjacent duplicate entries. If all the table rows have the same recipient id, there must be only one entry left after the operation.

DATA: lt_sms_status LIKE it_tab.

lt_sms_status = it_tab.
SORT lt_sms_status BY recipient_id.
DELETE ADJACENT DUPLICATES FROM lt_sms_status COMPARING recipient_id.
IF lines( lt_sms_status ) = 1.
  READ TABLE it_tab ASSIGNING FIELD-SYMBOL(<line>) INDEX 1.
  ev_rec_id = <line>-recipient_id.
ENDIF.

The drawback of approach 1 is that it can lead to unnecessarily high memory consumption. When lt_sms_status = it_tab is executed, no new memory allocation occurs until the first write operation on the copied content. This behavior is documented as "delayed copy".

We also have concerns regarding the performance of the SORT and DELETE keywords when they are executed on a big internal table.

clipboard2.png

Approach2

Now we fetch the recipient id of the first row and compare it with the remaining rows in the table. If most of the table rows have different recipient ids, the execution has a chance to quit early. However, if unfortunately all the table rows have exactly the same recipient id, this approach has to loop until the last table row.

  

DATA: lv_diff_found TYPE abap_bool VALUE abap_false.

READ TABLE it_tab ASSIGNING FIELD-SYMBOL(<line>) INDEX 1.
DATA(lv_account_id) = <line>-recipient_id.
LOOP AT it_tab ASSIGNING FIELD-SYMBOL(<ls_line>).
  IF lv_account_id <> <ls_line>-recipient_id.
    lv_diff_found = abap_true.
    EXIT.
  ENDIF.
ENDLOOP.
IF lv_diff_found = abap_false.
  ev_rec_id = lv_account_id.
ENDIF.

Approach3

The idea is similar to approach 2; now, instead of a manual comparison inside the LOOP, we leverage "LOOP AT ... WHERE condition".

  

READ TABLE it_tab ASSIGNING FIELD-SYMBOL(<line>) INDEX 1.
LOOP AT it_tab ASSIGNING FIELD-SYMBOL(<ls_line>)
     WHERE recipient_id <> <line>-recipient_id.
ENDLOOP.
IF sy-subrc <> 0.
  ev_rec_id = <line>-recipient_id.
ENDIF.

In order to measure the performance, we construct two kinds of test cases. In the first one, we generate an internal table with N rows which all have exactly the same recipient id. In the second, each row has a different one. Both are extreme scenarios. We might also measure cases between these two, for example an N-row table in which 50% of the rows share the same id and the other 50% have different ones.

 

Performance test result

The time spent is measured in microsecond.

N = 1000

For the first test case, approach 3 is the most efficient. For the second test case, approach 2 is the winner, as we expected.

clipboard4.png


N = 10000

clipboard5.png

N = 100000

clipboard6.png

N = 1000000

clipboard7.png

N = 5000000

clipboard8.png

Based on the performance results, we no longer consider approach 1. For the choice between approaches 2 and 3, we need to investigate the distribution of recipient ids in the real world.

 

Maybe you can also share if you have better solutions?

Project Objectify - continued


Hi SCN community!

 

If you're not familiar with Matthew Billingham's Project Objectify, please read it before you continue.

 

The idea is simple... let's build a set of highly reusable, flexible and helpful ABAP classes that we share and collaborate on.

 

Who hasn't had the feeling of writing the same code over and over again? To get some document flow, pricing conditions, etc. Wouldn't it make more sense to have a set of powerful ABAP classes, properly designed and coded, that you can easily export/import for reuse?

 

The idea was coined in 2009 by Matthew, and I was surprised to see no one had actually picked it up, so I've created a GitHub repository for this, and I've started by sharing a few very simple classes that I hope will set the template for future development.

 

Here is the link for it: https://github.com/EsperancaB/sap_project_object

 

Hope to see you there.

 

All the best,

Bruno

Calling BAPI_GOODSMVT_CREATE several times in a user program


Here I will not write about the details of using BAPI_GOODSMVT_CREATE; that has already been written about many times, including on SCN.

I propose to focus on one small detail without which multiple calls to BAPI_GOODSMVT_CREATE will not work correctly.


Why would BAPI_GOODSMVT_CREATE be called repeatedly in a Z program? For example, you specify parameters for a material movement, but the BAPI returns an error.
You change something and press the button again, calling the BAPI once more.

 


So, if the call simply looks like CALL FUNCTION 'BAPI_GOODSMVT_CREATE', you'll get an error again, despite the now-correct parameters.

But if you use the addition CALL FUNCTION 'BAPI_GOODSMVT_CREATE' DESTINATION 'NONE', the document will be created!

Thus, using DESTINATION 'NONE', you can be sure that the data buffers of previous calls have no impact!
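
As a minimal sketch, such a call could look like this (the data declarations, and the idea that they have been filled elsewhere, are assumptions for illustration; the point is only the DESTINATION 'NONE' addition):

DATA: gs_header TYPE bapi2017_gm_head_01,
      gv_code   TYPE bapi2017_gm_code,
      gt_items  TYPE STANDARD TABLE OF bapi2017_gm_item_create,
      gt_return TYPE STANDARD TABLE OF bapiret2,
      p_matdoc  TYPE bapi2017_gm_head_ret-mat_doc.

" ... fill gs_header, gv_code and gt_items here ...

" DESTINATION 'NONE' runs the BAPI in a fresh RFC context, so buffered
" data from a previous (failed) call cannot interfere with this one
CALL FUNCTION 'BAPI_GOODSMVT_CREATE' DESTINATION 'NONE'
  EXPORTING
    goodsmvt_header  = gs_header
    goodsmvt_code    = gv_code
  IMPORTING
    materialdocument = p_matdoc
  TABLES
    goodsmvt_item    = gt_items
    return           = gt_return.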

 

P.S. It is also necessary to specify DESTINATION 'NONE' when calling COMMIT or ROLLBACK, like below:

IF p_matdoc IS INITIAL.
  CALL FUNCTION 'BAPI_TRANSACTION_ROLLBACK' DESTINATION 'NONE'.
ELSE.
  CALL FUNCTION 'BAPI_TRANSACTION_COMMIT' DESTINATION 'NONE'.
ENDIF.

CALL FUNCTION 'RFC_CONNECTION_CLOSE'
  EXPORTING
    destination = 'NONE'.

Addressing down


Triggered by this forum post Issue of blank lines removal in address in master page in Adobe form, I’ve decided to tackle a topic that has been eating at me for some years.

 

Why do developers seem so reluctant to use address nodes in forms?

 

This is not a criticism of Anindita, the author of the post, who has inherited a form and understandably wants to minimise the amount of change. It's more a result of my spending years having to convince developers that address nodes, and not individual text fields, are the best way to deliver this functionality.

 

"I was only doing what I was taught"

 

My own theory as to why this is not adopted puts the blame squarely on the SAP training material. We're all familiar with the flight model used in ABAP training (I found my original ABAP training certificate from 1991 recently, and as I recall that course used the same model). But the problem is that this model pre-dates the introduction of Central Address Management (or Business Address Services, as it seems to be called now). So while it's fine for the programming courses, the form development courses tend not to give CAM or BAS the focus it deserves. While the courses for SAPscript, Smartforms and Adobe forms all cover the topic of the address node, none of them include the topic in the exercises.

 

When I taught the SAPscript and Smartform courses myself, I always checked table ADRC in the training system and found some valid address numbers, both to demonstrate their use and to include the topic in the exercises; but any trainer focusing solely on the material will inevitably skim over this topic.

 

"I was only doing what I was told"

 

My other theory is that developers are following functional specs too closely. A form FS will often include a mock-up something like this:

form_2.png

Then, rather than challenging the specific example or just using an address node because it's best practice, the developer will slavishly follow what has been specified. And in the relatively clean data of a project test system all will be well; only when the vagaries of production data are introduced do blank lines appear in the address, and by then there's a reluctance to make fundamental changes to forms.


The advantages of address nodes are many: compression of blank lines, prioritisation of lines when space is limited, international formatting, and fewer fields passed from the print program or initialisation. I could cover these in detail, but they're all covered in the SAP help and there's not a great deal I could add to that.


Now, like any technique, I'm sure there are disadvantages to address nodes, and please use the comments section to point out their shortcomings. Otherwise, go out there and champion the often forgotten address node.

A small tip for viewing RAWSTRING fields in SE16


Sometimes you would like to view the content of a field of type RAWSTRING in a table:

clipboard1.png

The raw string represents the configuration in XML format; however, it cannot be viewed in the correct format in SE16 directly.

clipboard2.png

In fact, the dynpro in the screenshot above is implemented by a program which is automatically generated by the framework. You can find its name via System -> Status:

clipboard3.png

clipboard4.png

clipboard5.png

Execute report RS_ABAP_SOURCE_SCAN with search key "select * from BSPC_DL_PERSSTOR" and search program /1BCDWB/DBBSPC_DL_PERSSTOR.

clipboard6.png

Set breakpoints on the three search results:

clipboard7.png

Relaunch SE16 and access the table; one of the breakpoints is triggered:

clipboard8.png

Switch to the XML browser:

clipboard9.png

Then you can see the XML detail in the debugger. With this tip it is not necessary to write a report to select the XML data out of the database table.

clipboard10.png
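
For comparison, the throw-away report this tip saves you from writing might look like the following sketch (the column name CONTENT is a made-up placeholder; check the actual field name of the table in SE11 before trying this):

REPORT z_show_rawstring.

DATA: lv_xml TYPE xstring,
      lv_txt TYPE string.

" CONTENT is a hypothetical name for the RAWSTRING column
SELECT content FROM bspc_dl_persstor INTO lv_xml UP TO 1 ROWS.
ENDSELECT.

" Interpret the raw bytes as (UTF-8) text
lv_txt = cl_abap_codepage=>convert_from( lv_xml ).

WRITE: / lv_txt.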


Shoot Me Up ABAP


Dependency Injection

 

image001.png

 

There is many a true word, Spoken Inject

 

One line summary:-

 

One way to write OO programs with many small classes in fewer lines of code.

 

Back Story

 

The other day there was a blog on SCN about Dependency injection.

 

http://scn.sap.com/community/abap/blog/2014/01/06/successful-abap-dependency-injection

 

I thought - I know what that is - if an object (say a car object) needs an engine object to work, you don't have the car object create the engine object; you pass the engine object in through the car object's constructor.

 

I had thought that was solely concerned with unit testing, but if you look at the comments at the bottom, when I started talking about this in the blog comments, people soon put me right; it turns out it has a much wider scope.

 

As soon as I realised I was barking up the wrong tree, I read all I could on the subject, for example …

 

http://en.wikipedia.org/wiki/Dependency_injection

 

http://martinfowler.com/articles/injection.html

 

http://www.jamesshore.com/Blog/Dependency-Injection-Demystified.html

 

… ending with this blog by Jack Stewart

 

http://scn.sap.com/community/abap/blog/2013/08/28/dependency-injection-for-abap

 

I always thought the idea was great – often you have to create a bunch of objects and then “wire them together” by passing them into each other’s constructors so they know about each other.

 

This gives you the flexibility to pass in subclasses to alter the behaviour of the application. As I said, I first heard about this in the context of unit testing, but when I thought about it again, naturally you can pass in any sort of subclass to change the way the program runs, e.g. a different subclass based on whatever criteria make sense, just like the BADI filter mechanism.

 

That is a wonderful thing to be able to do, and subclassing is one of the few benefits of OO programming that one of my colleagues can get his head around, but it does tend to involve a lot of “boiler plate” programming i.e. lots of CREATE OBJECT statements, passing in assorted parameters.

 

Many Small Classes, make Light Work

 

http://scn.sap.com/community/abap/blog/2013/08/22/the-issue-with-having-many-small-classes

 

The idea is that the smaller and more focused your classes are, the easier they are to re-use and maintain. An OO principle is that a class should only have one reason to change i.e. it should do one thing only. If you follow that principle you get loads of benefits, but you have to create loads of classes in your program.

 

When I first started playing around with OO programming I was too lazy to keep writing CREATE OBJECT so I made everything static. That is not actually a sensible thing to do just to avoid work, as then you can’t subclass things. SAP itself found that out when they initially made ABAP proxy classes static.

 

The NEW Objects on the Block

 

In the Java programming language you create objects by saying GOAT = NEW GOAT as opposed to CREATE OBJECT GOAT.

 

The "Head First Design Patterns" book gives a bunch of about five rules of programming which every Java programmer should aspire to, but which are in fact impossible to follow in real life.

 

One of those revolved around the rule never to use the NEW statement, because it hard-codes the exact type of class you are creating; but how can you create objects if the only way to create them is to use the NEW statement?

 

In both Java and ABAP interfaces come into play here, you declare the ANIMAL object as an interface of type FARM ANIMAL (which GOAT implements) and say CREATE OBJECT ANIMAL TYPE GOAT. Perhaps a better example is in ABAP2XLS when you declare the object that writes out the file as an interface and then create it using the TYPE of the EXCEL version you want e.g. 2007.

 

Now you are always going to have to say the specific type (subclass) you want somewhere, but is it possible to decouple this from the exact instant you call the CREATE OBJECT statement?

 

Since you can have a dynamic CREATE OBJECT statement, you would think so, but how does this apparent diversion link back to what I was talking about earlier?

 

Jack Black and his blog Silver

 

Going back to Dependency Injection the blog by Jack Stewart contained a link to download some sample code. I downloaded it, had a look, thought it was great, and then totally re-wrote it. That is no reflection on the quality of the original; I am just physically incapable of not rewriting every single thing I come across.

 

I am going to include a SAPLINK file in text format at the end of this blog, but first I shall go through the code, top down. Firstly, this test program shows exactly what I am trying to achieve i.e. the same thing in less lines of code.

 

I have created some dummy Y classes which just have constructors to pass in a mixture of object instances and elementary data object parameters, my dear Watson. They only have one method each, just to write out whether they are a base class or a subclass. The important thing is the effort involved in creating them.

 

The Da Vinci Code Samples

 

First of all, a basic structure to get some elementary parameters and say whether we want to use a test double or not. I am sticking with the unit test concept for now but, as I mentioned, you can pass in any old subclass you want, according to the good old, ever-popular Liskov Substitution Principle.

 

*&---------------------------------------------------------------------*
*& Report  Y_INJECTION_TEST
*&
*&---------------------------------------------------------------------*
* Show two ways to create linked objects, one using dependency injection
*--------------------------------------------------------------------*
REPORT y_injection_test.

PARAMETERS: p_valid TYPE sy-datum,
            p_werks TYPE werks_d,
            p_test  AS CHECKBOX.

INITIALIZATION.
  p_valid = sy-datum.
  p_werks = '3116'.

START-OF-SELECTION.
  PERFORM do_it_the_long_way.
  PERFORM do_it_the_short_way.

 

It’s a Long Long Way, from there to here

 

Firstly, the traditional way….

 

*&---------------------------------------------------------------------*
*&      Form  DO_IT_THE_LONG_WAY
*&---------------------------------------------------------------------*
* Normal way of doing things
*----------------------------------------------------------------------*
FORM do_it_the_long_way.

  DATA: lo_logger        TYPE REF TO ycl_test_logger,
        lo_db_layer      TYPE REF TO ycl_test_db_layer,
        lo_mock_db_layer TYPE REF TO ycl_test_mock_db_layer,
        lo_simulator     TYPE REF TO ycl_test_simulator.

  CREATE OBJECT lo_logger.

  IF p_test = abap_true.

    CREATE OBJECT lo_mock_db_layer
      EXPORTING
        io_logger   = lo_logger
        id_valid_on = p_valid.

    CREATE OBJECT lo_simulator
      EXPORTING
        id_plant_id = p_werks
        io_db_layer = lo_mock_db_layer
        io_logger   = lo_logger.

  ELSE.

    CREATE OBJECT lo_db_layer
      EXPORTING
        io_logger   = lo_logger
        id_valid_on = p_valid.

    CREATE OBJECT lo_simulator
      EXPORTING
        id_plant_id = p_werks
        io_db_layer = lo_db_layer
        io_logger   = lo_logger.

  ENDIF.

  lo_simulator->say_who_you_are( ).

  SKIP.

ENDFORM.                    " DO_IT_THE_LONG_WAY

 

Get Shorty

 

Now we do the same thing, using a Z class I created to use dependency injection.

 

*&---------------------------------------------------------------------*
*&      Form  DO_IT_THE_SHORT_WAY
*&---------------------------------------------------------------------*
*  Using Constructor Injection
*----------------------------------------------------------------------*
FORM do_it_the_short_way.
* Local Variables
  DATA: lo_simulator TYPE REF TO ycl_test_simulator.

  zcl_bc_injector=>during_construction( :
    for_parameter = 'ID_PLANT_ID' use_value = p_werks ),
    for_parameter = 'ID_VALID_ON' use_value = p_valid ).

  IF p_test = abap_true.
    "We want to use a test double for the database object
    zcl_bc_injector=>instead_of( using_main_class = 'YCL_TEST_DB_LAYER'
                                 use_sub_class    = 'YCL_TEST_MOCK_DB_LAYER' ).
  ENDIF.

  zcl_bc_injector=>create_via_injection( CHANGING co_object = lo_simulator ).

  lo_simulator->say_who_you_are( ).

ENDFORM.                    " DO_IT_THE_SHORT_WAY

 

I think the advantage is self-evident – the second way is much shorter, and it’s got Big Feet.

 

If the importing parameter of the object constructor was an interface it would not matter at all. You just pass the interface name in to the INSTEAD_OF method as opposed to the main class name.

 

I have done virtually no error handling in the code below, except throwing fatal exceptions when unexpected things occur. This could be a lot more elegant; I am just demonstrating the basic principle.

 

Firstly, the DURING_CONSTRUCTION method analyses the elementary parameters and then does nothing fancier than adding entries to an internal table.

 

* Local Variables
  DATA: lo_description       TYPE REF TO cl_abap_typedescr,
        ld_dummy             TYPE string ##needed,
        ld_data_element_name TYPE string,
        ls_parameter_values  LIKE LINE OF mt_parameter_values.

  ls_parameter_values-identifier = for_parameter.

  CREATE DATA ls_parameter_values-do_value LIKE use_value.
  GET REFERENCE OF use_value INTO ls_parameter_values-do_value.

  CHECK sy-subrc = 0.

  CALL METHOD cl_abap_structdescr=>describe_by_data_ref
    EXPORTING
      p_data_ref           = ls_parameter_values-do_value
    RECEIVING
      p_descr_ref          = lo_description
    EXCEPTIONS
      reference_is_initial = 1
      OTHERS               = 2.

  IF sy-subrc <> 0.
    RETURN.
  ENDIF.

  SPLIT lo_description->absolute_name AT '=' INTO ld_dummy ld_data_element_name.

  ls_parameter_values-rollname = ld_data_element_name.

  INSERT ls_parameter_values INTO TABLE mt_parameter_values.

 

It’s the same deal with the INSTEAD_OF method for saying what exact subclass you want to create, except it’s even simpler this time.

 

METHOD instead_of.
* Local Variables
  DATA: ls_sub_classes_to_use LIKE LINE OF mt_sub_classes_to_use.

  ls_sub_classes_to_use-main_class = using_main_class.
  ls_sub_classes_to_use-sub_class  = use_sub_class.

  "Add entry at the start, so it takes priority over previous
  "similar entries
  INSERT ls_sub_classes_to_use INTO mt_sub_classes_to_use INDEX 1.

ENDMETHOD.

 

Now we come to the main CREATE_VIA_INJECTION method. I like to think I have written this as close to plain English as I can, so that it is more or less self-explanatory.

 

METHOD create_via_injection.
* Local Variables
  DATA: lo_class_in_reference_details  TYPE REF TO cl_abap_refdescr,
        lo_class_in_type_details       TYPE REF TO cl_abap_typedescr,
        lo_class_to_create_type_detail TYPE REF TO cl_abap_typedescr,
        ld_class_passed_in             TYPE seoclass-clsname,
        ld_class_type_to_create        TYPE seoclass-clsname,
        ls_created_objects             LIKE LINE OF mt_created_objects,
        lt_signature_values            TYPE abap_parmbind_tab.

* Determine the class type of the reference object that was passed in
  lo_class_in_reference_details ?= cl_abap_refdescr=>describe_by_data( co_object ).
  lo_class_in_type_details       = lo_class_in_reference_details->get_referenced_type( ).
  ld_class_passed_in             = lo_class_in_type_details->get_relative_name( ).

  "See if we need to create the real class, or a subclass
  determine_class_to_create(
    EXPORTING
      id_class_passed_in             = ld_class_passed_in
      io_class_in_type_details       = lo_class_in_type_details
    IMPORTING
      ed_class_type_to_create        = ld_class_type_to_create
      eo_class_to_create_type_detail = lo_class_to_create_type_detail ).

  READ TABLE mt_created_objects INTO ls_created_objects
       WITH TABLE KEY clsname = ld_class_type_to_create.

  IF sy-subrc = 0.
    "We already have an instance of this class we can use
    co_object ?= ls_created_objects-object.
    RETURN.
  ENDIF.

  "See if the object we want to create has parameters, and if so, fill them up
  fill_constructor_parameters( EXPORTING io_class_to_create_type_detail = lo_class_to_create_type_detail
                               IMPORTING et_signature_values            = lt_signature_values ).

  create_parameter_object( EXPORTING id_class_type_to_create = ld_class_type_to_create
                                     it_signature_values     = lt_signature_values " Parameter Values
                           CHANGING  co_object               = co_object ).        " Created Object

ENDMETHOD.

 

There is not a lot of point in drilling into this any further – I would encourage you to download the SAPLINK file, and then run this in debug mode to see what is happening.

 

In summary, I am always on the lookout for ways to reduce the so-called "boiler plate" code, so the remaining code can concentrate on what the application is supposed to be doing as opposed to how it is doing it. This dependency injection business seems ideally suited to this purpose.

 

Now, while I am here.

 

image002.png

 

Did I mention I am giving a speech at the "Mastering SAP Technology 2014" conference in Melbourne on 31/03/2014? It's about unit testing of ABAP programs.

 

What’s that? I’ve already mentioned this? Many times?

 

Oh dear, that must have slipped my mind. In that case I won’t go on about it, and I’ll sign off.

 

Cheersy Cheers

 

Paul

 

#SAPTechEd 2013 Interview of the Week: ABAP Code Pushdown through SAP HANA


In 2013's SAP TechEd Las Vegas I had the opportunity and pleasure of chatting with Sudipto Shankar Dasgupta and Pradeep S from the Custom Development and Strategic Projects team about their work on pushdown of ABAP programs to HANA.


The discussion went around the following broad topics:

  • Relevance of code push down and its benefits
  • Reasons for choosing code push down as an option for optimization
  • Understanding the topic from a developer's perspective
  • Customer stories





In 2014, the SAP TechEd name will be retired and the conference will evolve into an exciting new program called SAP d-code, which will address education, collaboration, and networking for the entire SAP ecosystem of developers and technology professionals, incorporating the best elements of SAP TechEd. Hope to see you there. Learn More


Number Ranges – Internal or External ranges Best Practice Scenario


Speaking about number ranges, here is a small write-up in which I present the best practice scenario for number ranges.

 

Old Numbers Vs New Numbers

Moving from a legacy system to a new environment presents an opportunity to clean up the data. It is very common for the legacy system to contain duplicate entries and some master data that's outdated.

With the new system, we have an opportunity to get rid of the data that's of no use to the business. This also reduces data maintenance costs.

 

If we go for old numbers (using legacy numbers in the new system as well) and there were duplicates, it's possible that there would be gaps in the numbers.

 

Because of the above reasons, businesses go for new number ranges. Unless there is a specific 'business reason', it is strongly advised to go for new numbers.

Who decides Number Ranges (Internal or External)?

 

Master Data: There is some master data for which businesses normally go for external numbers - for example, finished products - where the number of records is very small (a few thousand).

Businesses may want to 'construct' their numbers based on particular criteria - for example:

Finished products start with '1', followed by plant '0002', then product group '01', and then a 5-digit number 10001 - this gives us the number 100020110001. Users would be able to tell very easily what this product is and where it is produced.
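
Just to make the arithmetic of that example concrete, here is a tiny hypothetical ABAP sketch of how such a number could be assembled (all names and literals are invented):

* Hypothetical construction of an external material number
DATA: lv_prefix TYPE c LENGTH 1  VALUE '1',
      lv_plant  TYPE c LENGTH 4  VALUE '0002',
      lv_group  TYPE c LENGTH 2  VALUE '01',
      lv_serial TYPE c LENGTH 5  VALUE '10001',
      lv_matnr  TYPE c LENGTH 12.

CONCATENATE lv_prefix lv_plant lv_group lv_serial INTO lv_matnr.
" lv_matnr now holds '100020110001'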

 

Note: Even here, every number has to be 'constructed' meticulously, and it takes a lot of effort.

               
There is some master data for which businesses normally go for internal numbers - for example, customers or raw materials - where the number of records is very high (a few hundred thousand).

So, sold-to customers could be from 1000000000 to 1999999999. It does not make sense to have external number ranges here, because it would break the backs of the business team members!

 

Transaction Data: In general, transaction data is always a candidate for 'internal numbers'. I have never seen any engagement go for external numbers here.


Points:

There will be high resistance from business users if existing numbers are going to be changed - they have already memorized everything, and they (even we) hate to lose the association with old pals.

         We should present the following things to them:

  • How duplicates cause gaps in the numbers
  • How the SAP system STILL allows them to use the old numbers to search for the new numbers
  • Involve them in 'constructing' the number ranges


     2. Let the system 'manage' the numbers - it's better to let the system do the work for us rather than involving hundreds of users.

 

While doing data migration, we follow a different method. We ask the functional people to make every number range 'external', then let the ETL tool generate the numbers and load the data.

Once the data is loaded, all the number ranges are turned back to their original status.
Raw data serialization to be used in RFC


Hi there.

As data volumes are constantly increasing, handling such amounts of data requires more and more time. The most logical way of solving this issue is to process the data in parallel. Currently, for parallel processing, ABAP offers only one way: RFC-enabled function modules. This approach is quite old and well known. But... some limitations exist in RFC FMs. One of them: they don't allow you to pass references. Usually this is not a problem, but in my case the tool works with multiple data structures, usually stored as REF TO DATA.

It's impossible to pass such data directly to an RFC FM, so here is the trick:

  1. Assign the variable typed REF TO DATA to a field symbol
  2. Export this field symbol to a data buffer as an XSTRING
  3. Pass the XSTRING to the RFC FM (see the sketch below)
  4. Perform the backward transformation on the receiving side
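For step 3, the actual call could look like the sketch below. Z_PARALLEL_WORKER is a hypothetical RFC-enabled function module with an importing parameter IV_BUFFER of type XSTRING; the example program further down uses a plain FORM instead, to stay self-contained:

* Z_PARALLEL_WORKER is an assumed, custom RFC-enabled FM with an
* IMPORTING parameter IV_BUFFER TYPE XSTRING
CALL FUNCTION 'Z_PARALLEL_WORKER'
  STARTING NEW TASK 'TASK_1'
  EXPORTING
    iv_buffer = x_buffer.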

 

There are two small tricks with IMPORT/EXPORT. The first is that you have to explicitly name the objects you export and use the same names during the import – even if the data buffer contains data for exactly one object. The other is compression: in this small test the data buffer is 190 bytes without compression and just 95 bytes with it.

The documentation says this export routine can fail when it runs out of memory, but the actual limits are not described.

 

Here is a simple example of how it works:

REPORT z_binary_transform.

DATA: lr_data   TYPE REF TO data,
      lt_test   TYPE TABLE OF string,
      wa_string TYPE string,
      x_buffer  TYPE xstring.

FIELD-SYMBOLS: <fs1> TYPE any,
               <fs2> TYPE any.

INITIALIZATION.

* Fill source table with test data
  DO 5 TIMES.
    wa_string = sy-index.
    CONDENSE wa_string.
    CONCATENATE 'test' wa_string INTO wa_string SEPARATED BY '_'.
    APPEND wa_string TO lt_test.
  ENDDO.

* Wrap the table in a data reference, as the real tool stores its data
  CREATE DATA lr_data LIKE lt_test.
  ASSIGN lt_test TO <fs1>.
  ASSIGN lr_data->* TO <fs2>.
  <fs2> = <fs1>.

* Export under the explicit ID rep_tab; the import must use the same ID
  EXPORT rep_tab = <fs2> TO DATA BUFFER x_buffer COMPRESSION ON.

  PERFORM abc USING x_buffer.

FORM abc USING in_buffer TYPE xstring.
  DATA: lr_data2 TYPE REF TO data,
        lt_test2 TYPE TABLE OF string.
  FIELD-SYMBOLS: <fs3> TYPE STANDARD TABLE,
                 <fs4> TYPE any.

  CREATE DATA lr_data2 TYPE TABLE OF string.
  ASSIGN lr_data2->* TO <fs3>.

* Import must use the same variable name (ID) that was used for export
  IMPORT rep_tab = <fs3> FROM DATA BUFFER in_buffer.

* Output the supplied table
  LOOP AT <fs3> ASSIGNING <fs4>.
    WRITE: <fs4>.
    NEW-LINE.
  ENDLOOP.
ENDFORM.

Simplification of import of serialized data for RFC usage


Hi there.

As shown in the previous example, it is possible to supply any data to an RFC-enabled form. At the same time, direct usage of the IMPORT clause requires explicit specification of the IDs in the data buffer and involves some manual work to define the types. Fortunately, there is a special class, CL_ABAP_EXPIMP_UTILITIES, that can handle all the dirty work.

The only limitation I have found: it works well with DDIC types, but for custom-defined types it can fail without any detailed explanation.

This greatly simplifies extending the methods and makes the code much more compact and readable.

I have tested this example with 50,000 lines, and ABAP seems to handle that amount of data. For parallel processing this should generally be enough.

 

So, here is the sample code:

REPORT z_binary_transform2.

DATA: x_b2    TYPE xstring,
      lt_abc  TYPE STANDARD TABLE OF t000,
      wa_t000 TYPE t000.

INITIALIZATION.

* Fill table with sample data
  wa_t000-mandt = sy-mandt.
  wa_t000-mtext = 'SCN Demo #1'.
  wa_t000-ort01 = 'Moscow'.
  APPEND wa_t000 TO lt_abc.

  wa_t000-mandt = sy-mandt.
  wa_t000-mtext = 'SCN Demo #2'.
  wa_t000-ort01 = 'Tokio'.
  APPEND wa_t000 TO lt_abc.

* Export data into the buffer under the ID tadir
  EXPORT tadir = lt_abc TO DATA BUFFER x_b2 COMPRESSION ON.

* Check the result
  PERFORM describe_buffer USING x_b2.

FORM describe_buffer USING in_buffer TYPE xstring.
  DATA: lt_datatab TYPE tab_cpar.
  FIELD-SYMBOLS: <fs>  LIKE LINE OF lt_datatab,
                 <fs2> TYPE ANY TABLE,
                 <fs3> TYPE any.

* The utility class parses the buffer and creates matching data objects
  lt_datatab = cl_abap_expimp_utilities=>dbuf_import_create_data( dbuf = in_buffer ).

  LOOP AT lt_datatab ASSIGNING <fs>.
    WRITE: <fs>-name.    " Name of the currently processed table in the buffer
    NEW-LINE.
    ASSIGN <fs>-dref->* TO <fs2>.
    LOOP AT <fs2> ASSIGNING <fs3>.
      WRITE: <fs3>.
      NEW-LINE.
    ENDLOOP.
  ENDLOOP.
ENDFORM.

Objects serialization for RFC forms


Hi there.

As shown previously, there are some limitations on the usage of RFC-enabled forms. Most of them are easy to work around, but the main one is that it is impossible to pass references into such forms. The IMPORT/EXPORT routines do not allow passing references either, so there is a problem.

One option to bypass this limitation is to serialize objects into a string container. Luckily for us, SAP ships a transformation called "id" that can handle everything. At the same time, this transformation produces XML, so there is a big overhead – but that can be handled too, as SAP provides kernel-based methods for string compression. In this test, the original XML is 586 bytes long, and the compressed version is 302 bytes. Pretty impressive, huh?

The only thing to highlight is that the class used for serialization must implement the interface IF_SERIALIZABLE_OBJECT. This interface does not declare any methods, so all you have to do is list it in the INTERFACES section. That's it!

 

One more note: this method also works well when the object contains nested objects – after deserialization the objects are recreated and the references are set properly. So, for example, it would work well for linked lists. There is no full example of that here, to keep things simple, but a minimal sketch of the idea follows.
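Purely as an illustration of the nested case (the class Z_NODE is mine, not part of any standard delivery), a linked-list node only needs to carry the tag interface; serializing the head node with CALL TRANSFORMATION id then serializes the whole chain:

CLASS z_node DEFINITION.
  PUBLIC SECTION.
    INTERFACES: if_serializable_object.   " Tag interface, no methods to implement
    DATA: value TYPE string,
          next  TYPE REF TO z_node.       " Restored correctly on deserialization
ENDCLASS.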

 

So, here is the code (exception handling is omitted for simplicity, but in productive software you must always handle these exceptions):

REPORT z_binary_transform3.

CLASS z_ser_data DEFINITION.
  PUBLIC SECTION.
    INTERFACES: if_serializable_object.   " Tag interface, required for serialization
    DATA: member TYPE string READ-ONLY.
    METHODS:
      constructor
        IMPORTING in_val TYPE string,
      get_val
        RETURNING VALUE(out_val) TYPE string,
      change_val
        IMPORTING new_val TYPE string.
ENDCLASS.

CLASS z_ser_data IMPLEMENTATION.
  METHOD constructor.
    member = in_val.
  ENDMETHOD.
  METHOD get_val.
    out_val = me->member.
  ENDMETHOD.
  METHOD change_val.
    me->member = new_val.
  ENDMETHOD.
ENDCLASS.

DATA: lr_class   TYPE REF TO z_ser_data,
      lv_ser_xml TYPE string,
      lv_x_gzip  TYPE xstring.

INITIALIZATION.

  CREATE OBJECT lr_class
    EXPORTING
      in_val = 'String from calling program'.

* Serialize the object into an XML string
  CALL TRANSFORMATION id
    SOURCE model = lr_class
    RESULT XML lv_ser_xml.

* Compress the XML; possible exceptions (cx_parameter_invalid_range,
* cx_sy_buffer_overflow, cx_sy_conversion_codepage, cx_sy_compression_error)
* are deliberately not handled here
  cl_abap_gzip=>compress_text(
    EXPORTING
      text_in        = lv_ser_xml   " Input text
      compress_level = 9            " Level of compression
    IMPORTING
      gzip_out       = lv_x_gzip ). " Compressed output

  PERFORM describe_buffer USING lv_x_gzip.

FORM describe_buffer USING in_buffer TYPE xstring.
  DATA: lr_another_obj TYPE REF TO z_ser_data,
        lv_str         TYPE string.

* Decompress back into the XML string (same exceptions apply)
  cl_abap_gzip=>decompress_text(
    EXPORTING
      gzip_in  = in_buffer    " Input of zipped data
    IMPORTING
      text_out = lv_str ).    " Decompressed output

* Deserialize: the object is recreated on the receiving side
  CALL TRANSFORMATION id
    SOURCE XML lv_str
    RESULT model = lr_another_obj.

  WRITE: 'Received val:', lr_another_obj->member.
  NEW-LINE.
  lr_another_obj->change_val( new_val = 'New val' ).
  WRITE: 'Changed val:', lr_another_obj->member.
  NEW-LINE.
ENDFORM.

The last runtime buffer you'll ever need?


Hi SCN community!

 

It's me again, with another contribution to Project Object.

 

Have you ever been in a situation where you were requesting the same thing from the database over and over again?

 

And if you're a good developer, you avoided those repetitive database calls by implementing a buffer, correct?

 

Well, what I've got for you today is a class that will serve as a buffer for everything you want! Everything? Everything!

 

 

I can't take full credit for this though... I got it from a guy who got it from another guy... so I have no idea who the actual developer of this thing was. I can take credit for "perfecting" it though, and for implementing some exception classes in it. So at least that much is mine.

 

You'll be able to find it, in nugget and text versions, in my GitHub, in the utilities section:

GitHub

 

 

Use example

 

Below is just an example of how to use this class. I am fully aware that the first loop is not how one would properly perform this particular database select; it is meant simply as an example of how to use this class and what for.

 

 

 

DATA:
      db_counter TYPE i,
      lt_sbook  TYPE TABLE OF sbook,
      ls_sbook  LIKE LINE OF lt_sbook,
      ls_sbuspart TYPE sbuspart.

SELECT * FROM sbook
  INTO TABLE lt_sbook.

BREAK-POINT.

CLEAR db_counter.

LOOP AT lt_sbook INTO ls_sbook.

  SELECT SINGLE * FROM sbuspart
    INTO ls_sbuspart
    WHERE buspartnum = ls_sbook-customid.
  ADD 1 TO db_counter.

ENDLOOP.

"check db_counter
BREAK-POINT.

CLEAR db_counter.

LOOP AT lt_sbook INTO ls_sbook.

  TRY.

      CALL METHOD zcl_buffer=>get_value
        EXPORTING
          i_name = 'CUSTOMER_DETAILS'
          i_key  = ls_sbook-customid.
    CATCH zcx_buffer_value_not_found.

      "If we haven't saved it yet, get it and save it

      SELECT SINGLE * FROM sbuspart
        INTO ls_sbuspart
        WHERE buspartnum = ls_sbook-customid.
      ADD 1 TO db_counter.

      CALL METHOD zcl_buffer=>save_value
        EXPORTING
          i_name  = 'CUSTOMER_DETAILS'
          i_key   = ls_sbook-customid
          i_value = ls_sbuspart.

  ENDTRY.

ENDLOOP.

"check db_counter
BREAK-POINT.

 

Performance remark

 

One last remark: due to the high flexibility of this buffer, I don't think a sorted (in other words, fast) read of the buffered values is possible. Therefore, if you are using the buffer with a high volume of entries and performance is critical, you should create a subclass, redefine the "key" with the specific type you are interested in, and redefine the get method to replace the LOOP statement with a READ statement. A sketch of what the generic class might look like follows.
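For illustration only, here is a minimal sketch of what the inside of such a generic buffer might look like. The method names and the exception class are taken from the usage example above; the internal table layout and the string-normalized key are assumptions of mine, not necessarily what the actual class in the repository does:

CLASS zcl_buffer DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS:
      get_value
        IMPORTING i_name  TYPE string
                  i_key   TYPE any
        EXPORTING e_value TYPE any
        RAISING   zcx_buffer_value_not_found,
      save_value
        IMPORTING i_name  TYPE string
                  i_key   TYPE any
                  i_value TYPE any.
  PRIVATE SECTION.
    TYPES: BEGIN OF ty_entry,
             name  TYPE string,
             key   TYPE string,
             value TYPE REF TO data,
           END OF ty_entry.
    CLASS-DATA: gt_buffer TYPE STANDARD TABLE OF ty_entry WITH DEFAULT KEY.
ENDCLASS.

CLASS zcl_buffer IMPLEMENTATION.
  METHOD get_value.
    DATA: lv_key TYPE string.
    FIELD-SYMBOLS: <entry> TYPE ty_entry,
                   <value> TYPE any.
    lv_key = i_key.   " Normalize the generic key to a string
*   The generic key rules out a sorted lookup, hence the sequential LOOP
*   mentioned in the performance remark above
    LOOP AT gt_buffer ASSIGNING <entry>
         WHERE name = i_name AND key = lv_key.
      ASSIGN <entry>-value->* TO <value>.
      e_value = <value>.
      RETURN.
    ENDLOOP.
    RAISE EXCEPTION TYPE zcx_buffer_value_not_found.
  ENDMETHOD.

  METHOD save_value.
    DATA: ls_entry TYPE ty_entry.
    FIELD-SYMBOLS: <value> TYPE any.
    ls_entry-name = i_name.
    ls_entry-key  = i_key.
*   Keep a typed copy of the value behind a data reference
    CREATE DATA ls_entry-value LIKE i_value.
    ASSIGN ls_entry-value->* TO <value>.
    <value> = i_value.
    APPEND ls_entry TO gt_buffer.
  ENDMETHOD.
ENDCLASS.

A performance-critical subclass would replace the LOOP in GET_VALUE with a READ TABLE on a concretely typed, sorted key, exactly as suggested above.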

 

All the best!

Bruno
