
ZTOAD - Open SQL editor


Hi everybody,

 

I want to share with you one of the best programs from my toolbox. I called it ZTOAD, in reference to a famous query builder in the SQL world.

As you know, SAP doesn't give developers any tools to execute queries (there is a crude tool for admins, and some function modules for devs, but they are not very usable...).

So I took my keyboard and made my own...

 

With ZTOAD, you can write and execute queries in Open SQL format (the format used in ABAP programs). The result is displayed in an ALV in the bottom part of the screen. You can access the DDIC to help you while writing the query, and use the standard SAP help.

You can save your best queries, and also share them with friends.

 

http://quelquepart.biz/data/images/ztoad/ztoad.png

With this tool it becomes very easy to debug a complex query in production (with lots of joins or subqueries, for example). Just copy and paste the query into the editor and run it, and you will see the result live.
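For example, a query along the following lines (an illustrative join between the standard sales order tables VBAK and VBAP; the exact Open SQL flavour ZTOAD accepts is the tool's own, so treat this purely as a sketch) could be pasted in and executed directly:

SELECT vbak~vbeln vbak~erdat vbap~posnr vbap~matnr
  FROM vbak INNER JOIN vbap
    ON vbap~vbeln = vbak~vbeln
  WHERE vbak~erdat >= '20150101'.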

 

A French presentation can be found here: http://quelquepart.biz/article7/ztoad-requeteur-open-sql

 

And here is a direct download link (remember that you will need SAPLINK with the table extension to install it): http://quelquepart.biz/telechargements&file=L2RhdGEvZG9jdW1lbnRzL3p0b2FkLnppcCozZThiNjE,&source=scn-ZTOAD

 

Final bonus: if ZSPRO is also installed, ZTOAD will pick up the tables listed in the ZSPRO repository and display them in a tree under the DDIC tree (to help you create queries).

 

Feel free to comment here


SALV and PEPPER : Editing individual columns in the SALV


SALV and Pepper – Edit it Real Good


image001.jpg

Editing Specific Columns in CL_SALV_TABLE


One Sentence Summary


How to write a report using CL_SALV_TABLE and have specific columns open for editing.


Back Story


SCN Blogger Naimesh Patel has written several articles about custom workarounds to make CL_SALV_TABLE editable, for example the following:-


http://scn.sap.com/community/abap/blog/2015/06/25/salv-editable-with-single-custom-method

 

The development department at SAP are horrified that developers around the world are trying to make the SALV editable, because it is naughty and against the rules. Nonetheless it is a requirement every developer in every company has and lots of such developers just don’t care that it’s naughty.


I have declared February the 8th "International Editable SALV Day" on the grounds that was when people first started begging SAP to add this functionality – in 2008.


http://scn.sap.com/community/abap/blog/2015/02/08/international-editable-salv-day-2015

 

I imagine I will be writing another blog on the same day in 2016 to celebrate the 8th anniversary of nothing changing.

The irony is that at the moment there is a push for companies to adopt the non-on-premise (cloud) version of S/4HANA (though none have to date), on the grounds that although there will be far fewer user exits, SAP will quickly add missing functionality when you ask for it.

Can anyone see the gap between that promise and the SALV situation?


Column the Barbarian


Anyway, time to stop moaning and move on to the problem at hand. The original solution by Naimesh made the CL_SALV_TABLE editable, but you had to open it in read only mode and then press a button to change to editable mode.


So I asked him if there was a way that you could open up the SALV directly in editable mode, like you can with CL_GUI_ALV_GRID. Every button press saved for the users is a good thing. He came back with a solution almost at once – it is described in the blog linked at the top of this blog.


So far so good, but the whole grid was editable – and 99 times out of a hundred you only want one or two columns editable. I thought it was my turn to contribute something in this area, so I worked out a way to open the SALV in editable mode with only one or two columns changeable.


This may not be the best solution in the world, but it works. In fact what I am going to write about here is about the fourth iteration already, but it suddenly occurred to me I could spend the rest of my life fiddling around trying to get this “perfect” and then this blog would never get written.


MVC Potter and the Philosophers Corner


One area I spent ages agonising over was: given an MVC design, what class should do the work of altering the data table so that some columns are editable? It is the model's job to know what data a user is allowed to change, and yet the actual mechanism by which you change the data table is very much specific to the UI technology, and thus the province of the view.


Many people have told me I should not be mixing up philosophical talk – matters of OO design and the like – with the actual instructions on how to do something, but to me they are two sides of the same coin. Right at the moment I am reading one of the “Head First” books about software design and they are presenting how a bad design can really stuff you up down the track even if the program works technically.


One area I still have not finished is to isolate all the code that would be exactly the same in every report into dedicated classes. I am fanatical about this matter – the idea is that if you find yourself with the same task you had six months ago, say creating a pop-up box showing a data table, and you find yourself cutting and pasting large chunks of code from the last program and then only changing 5% of it, then something is rotten in the state of Denmark. This is especially true of the SALV class, where you might find yourself making twenty lines of data declarations for all the helper objects each time.


My other aim is to have a report framework which is not dependent on the actual UI technology, i.e. CL_SALV_TABLE. As an experiment I want to be able to switch between CL_SALV_TABLE and CL_GUI_ALV_GRID by changing just one line in the calling program – or use a configuration table and thus have to change no lines. That way, if a successor to the SALV comes along and I think it is much better, I do not have to change half a million report programs.
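Purely as an illustration (this is not the framework code from this blog), the switch could boil down to something like this, assuming both view classes implement the interface ZIF_BC_ALV_REPORT_VIEW described later, and assuming a hypothetical CL_GUI_ALV_GRID based sibling class and parameter:

"Sketch: pick the UI technology in exactly one place
DATA: lo_view TYPE REF TO zif_bc_alv_report_view.

CASE p_uitech. "hypothetical parameter / configuration table value
  WHEN 'SALV'.
    CREATE OBJECT lo_view TYPE zcl_bc_view_salv_table.
  WHEN 'GRID'.
    CREATE OBJECT lo_view TYPE zcl_bc_view_alv_grid. "hypothetical sibling class
ENDCASE.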


To summarise – a “perfect” design would be one where the vast bulk of the code is 100% specific to the report at hand and the small proportion remaining consisting of calls to various generic helper classes – thus separating the “things that change from the things that stay the same”. I have not quite got there yet, but as they say, an aim in life is the only treasure worth finding.


Nelson’s Editable Column


In the code that follows you will see that the model is shouting out what columns are editable, or need to be renamed, or allow a drill-down, etc., and then the view can obey those instructions using its specific UI technology, which in this case is going to be the SALV.


How I am going to do this is by stepping through the program flow so you can see what happens in the same way you would if you filled the custom methods with break points and executed the report.


At the end of the blog will be a good old SAPLINK file so you can install the objects in your own system and play with them.


Bogus Syntax Error


First off I want to mention an annoying problem I have and then people can tell me how to get around it – it is either something obvious, or I am heading down the right path and just need to make things more generic.


The “monster monitor” program is a type one executable report with a selection screen. As we know the selection parameters are global variables (Oh no!). If I have my local class definitions and implementations in the same program as the selection screen then such classes have access to those selection criteria and can use them for database access for example.


However if the class definitions are in one INCLUDE and the class implementations are in another INCLUDE then although the classes still have access to the selection screen parameters and the program will still run, if you do a syntax check whilst in the INCLUDE with the implementations then you get a false syntax error saying something like “variable P_WHATEVER is unknown”.


This makes the syntax check useless and since I do one every ten seconds on average I had to get around this somehow. One way would be not to use INCLUDES though I thought the consensus was that INCLUDES for use with a single program were OK.


What I did was to create a local class for the sole purpose of storing the selection screen parameters, and the local classes that needed any of those values would never access the global parameter variables directly, thus stopping the bogus error message.


START-OF-SELECTION.

  "This nonsense is the only way I can avoid getting bogus syntax
  "errors when doing a syntax check on the local class implementations
  CREATE OBJECT go_selections
    EXPORTING
      is_numbr = s_numbr[]
      is_name  = s_name[]
      ip_vari  = p_vari
      ip_edit  = p_edit
      ip_macro = p_macro
      ip_send  = p_send
      ip_email = p_email.


All the selections class does is take in the values the user entered on the screen and place them in identically named public attributes so they can be used by other local classes. This is of course a right royal pain. If there is no obvious solution to the bogus error message, I imagine the next step would be some sort of generic class where you could add each select-option to a sort of COSEL table and each parameter to a list of value pairs. There might even be such a class available in the SAP standard.
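For completeness, here is a minimal sketch of what such a selections class could look like; the attribute names mirror the selection screen fields, while the concrete types (RSELOPTION for the ranges, ABAP_BOOL for the checkboxes) are assumptions for illustration only:

CLASS lcl_selections DEFINITION.
  PUBLIC SECTION.
    "Identically named public attributes, so the other local classes
    "never touch the global selection screen variables directly
    DATA: s_numbr TYPE rseloption,          "assumed generic range type
          s_name  TYPE rseloption,          "assumed generic range type
          p_vari  TYPE disvariant-variant,
          p_edit  TYPE abap_bool,
          p_macro TYPE abap_bool,
          p_send  TYPE abap_bool,
          p_email TYPE abap_bool.

    METHODS constructor
      IMPORTING is_numbr TYPE rseloption
                is_name  TYPE rseloption
                ip_vari  TYPE disvariant-variant
                ip_edit  TYPE abap_bool
                ip_macro TYPE abap_bool
                ip_send  TYPE abap_bool
                ip_email TYPE abap_bool.
ENDCLASS.

CLASS lcl_selections IMPLEMENTATION.
  METHOD constructor.
    "Just copy everything across
    s_numbr = is_numbr.
    s_name  = is_name.
    p_vari  = ip_vari.
    p_edit  = ip_edit.
    p_macro = ip_macro.
    p_send  = ip_send.
    p_email = ip_email.
  ENDMETHOD.
ENDCLASS.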


Anyway, there is one local class called LCL_APPLICATION with a single MAIN static method which is called directly after the selection options have been stored. So there are only two commands after START-OF-SELECTION: creating the selections object and then LCL_APPLICATION=>MAIN( ).


Lion’s MAIN


This is a method that is crying out to be made generic, as the vast bulk of the code would be exactly the same in every report.


CLASS lcl_application IMPLEMENTATION.

  METHOD main.
* Local Variables
    DATA: ld_report_name TYPE string,
          ld_repid       TYPE sy-repid.

    CONCATENATE sy-tcode sy-title INTO ld_report_name
    SEPARATED BY ' : '.

    CREATE OBJECT mo_model.
    CREATE OBJECT mo_view TYPE zcl_bc_view_salv_table.
    CREATE OBJECT mo_controller
      EXPORTING
        io_model = mo_model
        io_view  = mo_view.

    mo_model->data_retrieval( ).
    mo_model->prepare_data_for_ouput( ).

    "It is bad news to pass system variables as parameters
    ld_repid = sy-repid.

In the above code the only thing I might want to vary is the TYPE of the view object being created, so I could pass this in as a parameter. Next, if we are online we want to automatically create a screen with a container so we can do funky tricks like adding our own commands programmatically or enabling editing mode, neither of which you can do in full screen mode.

Naturally if we are running a batch job there is no user to invoke the custom commands or edit the data.

In the below code the model never has direct contact with the view but the things that the model wants to say e.g. which fields are editable get passed on regardless. In a simple program like this, the controller and the application class are pretty much the same thing, I only split them out in case you want to go bananas and have the program call lots of different views for some reason.

    IF sy-batch IS INITIAL.
*--------------------------------------------------------------------*
* Calling a SALV report whilst creating a container automatically
*--------------------------------------------------------------------*
* Program flow is as follows:-
* ZCL_BC_VIEW_SALV_TABLE->CREATE_CONTAINER_PREPARE_DATA
* Function ZSALV_CSQT_CREATE_CONTAINER
* ZSALV_CSQT_CREATE_CONTAINER->FILL_CONTAINER_CONTENT
* ZCL_BC_VIEW_SALV_TABLE->PREPARE_DISPLAY_DATA
* --> INITIALISE (Generic)
* --> Application Specific Changes (in this program)
* --> Display (Generic)
      mo_view->create_container_prep_display(
        EXPORTING
          id_title              = ld_report_name
          id_report_name        = ld_repid      " Calling program
          id_variant            = go_selections->p_vari
          if_start_in_edit_mode = go_selections->p_edit
          id_edit_control_field = mo_model->md_edit_control_field
          it_editable_fields    = mo_model->mt_editable_fields
          it_technicals         = mo_model->mt_technicals
          it_hidden             = mo_model->mt_hidden
          it_hotspots           = mo_model->mt_hotspots
          it_subtotal_fields    = mo_model->mt_subtotal_fields
          it_field_texts        = mo_model->mt_field_texts
          it_user_commands      = mo_model->mt_user_commands
        CHANGING
          ct_data_table         = mo_model->mt_output_data ).

    ELSE.
* If this is running in the background there is no way
* in the world we want/need a container, as there is no
* chance for the user to press any user command buttons or
* edit the data, as there is no user, and no screen for the
* container to live on for that matter
      mo_view->prepare_display_data(
        EXPORTING
          id_report_name     = ld_repid
          id_variant         = go_selections->p_vari
          it_technicals      = mo_model->mt_technicals
          it_hidden          = mo_model->mt_hidden
          it_subtotal_fields = mo_model->mt_subtotal_fields
          it_field_texts     = mo_model->mt_field_texts
          it_user_commands   = mo_model->mt_user_commands
        CHANGING
          ct_data_table      = mo_model->mt_output_data ).
    ENDIF."Are we running in the background?

    IF go_selections->p_email IS NOT INITIAL.
      mo_controller->send_email( ).
    ENDIF.

  ENDMETHOD.                                               "main

ENDCLASS.                    "lcl_application IMPLEMENTATION

In the above code the program flow was detailed very precisely. I am using the same technique SAP itself uses to avoid having to create a screen explicitly and I have talked about this in blogs as well as the book.

In this case, of course we are going to be editing the data so we need a container. A full screen SALV grid has limited functionality e.g. you cannot add commands programmatically to the toolbar, so we virtually always want a container, but having to create the screen and paint a container on it for each report is mundane work which we want to automate.

Going back to the method calls to the “create container” method all those IMPORTING parameter tables like editable fields and hotspots and the like are all optional as not every report will want to take advantage of all of these features, though most will want one or two and of course as time goes by the users will ask for extra things.

The next bunch of code moves all the provided parameters to the equivalent instance variables in the view class, and then creates a screen by the only means available to a method in a class i.e. it calls a function module. There is a standard function module that SAP uses in its own programs for this purpose, but for reasons best known to themselves the created screen has a big hole in it, so I created a Z copy minus the hole.

METHOD zif_bc_alv_report_view~create_container_prep_display.
*--------------------------------------------------------------------*
* Creating a Container Automatically
*--------------------------------------------------------------------*
* The below function creates a screen and a container, and then does
* a callback to method FILL_CONTAINER_CONTENT of interface
* IF_SALV_CSQT_CONTENT_MANAGER so that the calling class must
* implement method FILL_CONTAINER_CONTENT
* This way for CL_SALV_TABLE we can add our own functions without having
* to create a PF-STATUS
*--------------------------------------------------------------------*
  md_report_name        = id_report_name.
  md_edit_control_field = id_edit_control_field.
  mf_start_in_edit_mode = if_start_in_edit_mode.
  ms_variant-report     = id_report_name.
  ms_variant-variant    = id_variant.
  mt_editable_fields[]  = it_editable_fields[].
  mt_technicals[]       = it_technicals[].
  mt_hidden[]           = it_hidden[].
  mt_hotspots[]         = it_hotspots[].
  mt_subtotal_fields[]  = it_subtotal_fields[].
  mt_field_texts[]      = it_field_texts[].
  mt_user_commands[]    = it_user_commands[].

  CREATE DATA mt_data_table LIKE ct_data_table.
  GET REFERENCE OF ct_data_table INTO mt_data_table.

  CALL FUNCTION 'ZSALV_CSQT_CREATE_CONTAINER'
    EXPORTING
      r_content_manager = me
      title             = id_title.

ENDMETHOD.

I just can’t Container myself

For 15 years SAP has been pushing people to use OO programming, but in all that time they have not found a replacement for the CALL SCREEN statement for bringing up a user interface, something you cannot do from within the OO framework.

So the recommendation is to use function modules for UI processing as you can do a CALL SCREEN from within a function module. The function module in the code above contains a screen definition, just a screen filled with a big container. The function module creates the container and calls up the screen. In the PBO processing of that screen control is passed back to the calling program.

The calling class (the view class) implements an interface which has the method "fill container content", a method which expects a container object to be supplied. The code below is a copy of the standard SAP code in the PBO section of the function module's screen. When the call is made to "fill container content", control is returned to our view class.

As a PBO is called every time the user interacts with the screen the code has to make sure the container is created only once.

FORM pbo.

  SET PF-STATUS 'D0100'.

  IF gr_container IS INITIAL.
    IF cl_salv_table=>is_offline( ) EQ if_salv_c_bool_sap=>false.
      CREATE OBJECT gr_container
        EXPORTING
          container_name = 'CONTAINER'.
    ENDIF.

    SET TITLEBAR 'STANDARD' WITH g_title.

    gr_content_manager->fill_container_content(
        r_container = gr_container ).
  ENDIF.

ENDFORM.                    "pbo

Once the screen is running the CL_SALV_TABLE is going to handle all the user interaction, the screen is just looping through PBO/PAI until such time as the user presses a CANCEL or BACK or EXIT button to shut down the screen.

Next we are back in the generic section of ZCL_BC_VIEW_SALV_TABLE, i.e. in a method which will be exactly the same for every single report and thus does not need to be redefined.

All the code below does is make a call to PREPARE_DISPLAY_DATA. You will have seen in the code above that when we are in the background the PREPARE_DISPLAY_DATA method gets called directly, without having to fluff about creating a screen and container, as there is no user to see the screen.

METHOD if_salv_csqt_content_manager~fill_container_content.
*--------------------------------------------------------------------*
* This gets called from function SALV_CSQT_CREATE_CONTAINER PBO module
* which creates a screen and a container, and passes us that container
* in the form of importing parameter R_CONTAINER
*----------------------------------------------------------------------*
* Local Variables
  FIELD-SYMBOLS: <lt_data_table> TYPE ANY TABLE.

  ASSIGN mt_data_table->* TO <lt_data_table>.

  prepare_display_data(
    EXPORTING
      id_report_name        = md_report_name              " Calling program
      id_variant            = ms_variant-variant          " Layout
      if_start_in_edit_mode = mf_start_in_edit_mode
      id_edit_control_field = md_edit_control_field
      it_editable_fields    = mt_editable_fields
      it_technicals         = mt_technicals
      it_hidden             = mt_hidden
      it_hotspots           = mt_hotspots
      it_subtotal_fields    = mt_subtotal_fields
      it_field_texts        = mt_field_texts
      io_container          = r_container
      it_user_commands      = mt_user_commands " Toolbar Buttons
    CHANGING
      ct_data_table         = <lt_data_table> )." Data Table

ENDMETHOD."if_salv_csqt_content_manager~fill_container_content

This might all seem ludicrously complicated, but bear in mind that the code is only written once, and then you never have to bother with it ever again; it just gets re-used in every subsequent report. Anyway, the next method that gets executed is in three parts – a generic part which sets up the basic settings for CL_SALV_TABLE, a call to an application specific method which alters the columns in the SALV grid based on the instructions the model class issued, and lastly a simple call to a method to call up the SALV grid.

METHOD zif_bc_alv_report_view~prepare_display_data.
* Step One - Set up the Basic Report
  initialise(
    EXPORTING
      id_report_name        = id_report_name
      id_variant            = id_variant                       " Layout
      if_start_in_edit_mode = if_start_in_edit_mode
      id_edit_control_field = id_edit_control_field
      it_editable_fields    = it_editable_fields
      io_container          = io_container
      it_user_commands      = it_user_commands
    CHANGING
      ct_data_table         = ct_data_table ).

* Step Two - make changes based on tables sent in by the model
  application_specific_changes(
      it_technicals  = it_technicals
      it_hidden      = it_hidden
      it_hotspots    = it_hotspots
      it_subtotals   = it_subtotal_fields
      it_field_texts = it_field_texts ).

* Step Three - Actually Display the Report
  display( ).

ENDMETHOD."zif_bc_alv_report_view~prepare_display_data

In the next method called we first of all use the factory method to get an instance of CL_SALV_TABLE linked to the container on the screen our lovely function module called up.

We then add the basic toolbar, followed by any custom report specific commands our model class has said it can respond to. This is a good time to set up the MO_COLUMNS object, as later on we will need this to change attributes of various report columns, like making them hotspots or whatever.

We set some handlers for when a user double-clicks on a cell in the grid, or presses an icon in the toolbar. Both actions will cause the CL_SALV_TABLE to raise an event which our custom class needs to handle (in fact all our class does is raise a corresponding event of its own for the controller to respond to).
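The SET_HANDLERS method itself is not reproduced in this blog; a minimal sketch of what it might contain, using the standard SALV events object (the handler method names are assumptions, and they would have to be declared for the DOUBLE_CLICK and ADDED_FUNCTION events of CL_SALV_EVENTS_TABLE):

METHOD set_handlers.
  DATA: lo_events TYPE REF TO cl_salv_events_table.

  "CL_SALV_TABLE exposes its events via a separate events object
  lo_events = mo_alv_grid->get_event( ).

  "User double-clicks a cell -> DOUBLE_CLICK event
  SET HANDLER me->on_double_click FOR lo_events.   "assumed method name
  "User presses one of our added toolbar functions -> ADDED_FUNCTION event
  SET HANDLER me->on_user_command FOR lo_events.   "assumed method name
ENDMETHOD.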

  METHOD zif_bc_alv_report_view~initialise.

    TRY.
*--------------------------------------------------------------------*
* If we have a container, then we can add our own user defined
* commands programmatically
*--------------------------------------------------------------------*
        cl_salv_table=>factory(
          EXPORTING
            r_container  = io_container
          IMPORTING
            r_salv_table = mo_alv_grid
          CHANGING
            t_table      = ct_data_table[] ).

        display_basic_toolbar( ).
        IF it_user_commands[] IS NOT INITIAL.
          add_commands_to_toolbar( it_user_commands ).
        ENDIF.

        mo_columns = mo_alv_grid->get_columns( ).
        set_layout( id_variant ).
        set_handlers( ).

*--------------------------------------------------------------------*
At long last, this is where things start getting interesting: we are going to set things up so that certain columns are editable, i.e. the purported purpose of this entire blog.

In the blogs I linked to at the start the subject of creating a custom class ZCL_SALV_MODEL was discussed, this class has one purpose in life, and that is to get hold of the underlying CL_GUI_ALV_GRID instance which lives hidden like Rapunzel in the tower that is CL_SALV_TABLE. This is a subclass of CL_SALV_MODEL_LIST. As we shall see later it has a method for climbing up the tower and rescuing the princess from the evil Rumpelstiltskin who is head of ABAP development at SAP. Or getting the underlying grid object – it’s one or the other; I can’t remember which offhand.

There is another method to the class called “set editable”. This was the miracle solution discovered by Naimesh whereby our custom code can respond to the event raised when the SALV object is ready to burst onto our screen, so we can hold it up for a second and make it editable.

We will get into that code in a minute, and discuss the slight additions I have made to it, and afterwards discuss the other method call in the code below, which changes the data table such that certain columns can be edited. To make a column editable in the SALV (or indeed in CL_GUI_ALV_GRID) both the data in the table and the field catalogue have to be fiddled with.

*--------------------------------------------------------------------*
        DATA: lo_salv_model TYPE REF TO cl_salv_model.

        "Narrow casting
        "CL_SALV_MODEL is a superclass of CL_SALV_TABLE
        "Target = LO_SALV_MODEL = CL_SALV_MODEL
        "Source = MO_ALV_GRID   = CL_SALV_TABLE
        lo_salv_model ?= mo_alv_grid.

        "Object to access underlying CL_GUI_ALV_GRID
        CREATE OBJECT mo_salv_model
          EXPORTING
            io_model = lo_salv_model.

        IF if_start_in_edit_mode = abap_true.
          "Prepare the Field Catalogue to be Editable
          mo_salv_model->set_editable(
            io_salv               = mo_alv_grid
            id_edit_control_field = id_edit_control_field
            it_editable_fields    = it_editable_fields ).
          "Prepare the Data Table to be Editable
          make_column_editable(
            EXPORTING id_edit_control_field = id_edit_control_field
                      it_editable_fields    = it_editable_fields
            CHANGING  ct_data_table         = ct_data_table ).
        ENDIF.

      CATCH cx_salv_msg.
        MESSAGE 'Report in Trouble' TYPE 'E'.
    ENDTRY.

  ENDMETHOD.                    "zif_bc_alv_report_view~initialise

It is time to introduce yet another custom class, ZCL_BC_SALV_EVENT_HANDLER. This class has the job of responding to the event raised when the SALV has finished building itself and is ready to be displayed (the REFRESH event). I copied the code below from the prior blog; all I added was some lines to store a table of which fields (columns) we want to make editable, and the name of the control field in the data table. A control field is one that has the type LVC_T_STYL. I usually call such a field CELLTAB, but I could call it FRUIT_LOOPS if I wanted, so I don't want to hard-code the field name; instead the model declares what such a field is named. The table of editable fields and the control field will be used later when the "refresh" event of the SALV is called.
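To make that concrete, the output structure built by the model would include a column of the table type LVC_T_STYL; a minimal sketch, with invented field names for illustration:

TYPES: BEGIN OF g_typ_alv_output,
         monster_number TYPE string,     "illustrative fields only
         monster_name   TYPE string,
         sanity         TYPE i,
         celltab        TYPE lvc_t_styl, "the edit control field
       END OF g_typ_alv_output.

"The model then announces the name of that control field, e.g.
"md_edit_control_field = 'CELLTAB'.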

METHOD set_editable.
* Local Variables
  DATA: lo_event_handler TYPE REF TO zcl_bc_salv_event_handler.

  "Ensure one, and only one, static event handler exists
  IF zcl_salv_model=>mo_event_handler IS NOT BOUND.
    CREATE OBJECT zcl_salv_model=>mo_event_handler
      TYPE zcl_bc_salv_event_handler.
  ENDIF.

  lo_event_handler ?= zcl_salv_model=>mo_event_handler.

  lo_event_handler->md_edit_control_field = id_edit_control_field.
  lo_event_handler->mt_editable_fields    = it_editable_fields.

  APPEND io_salv TO lo_event_handler->mt_salv.

  "At such time as any SALV object is displayed, call the
  "after refresh event to make the grid editable
  SET HANDLER lo_event_handler->on_after_refresh
    FOR ALL INSTANCES
    ACTIVATION 'X'.

  "Sometimes the icons needed for an editable grid do not
  "display, so we have to force the issue
  IF io_salv->get_display_object( ) = 3.
    SET HANDLER lo_event_handler->on_toolbar
      FOR ALL INSTANCES
      ACTIVATION 'X'.
  ENDIF.

ENDMETHOD.

Now we come to a method in ZCL_BC_VIEW_SALV_TABLE called "make column editable". This takes in the data table from the model and changes the control field in each row of the data table so that our desired columns are made ready to be editable. You will see I have not quite finished it yet, as there are comments saying what I still need to do. As mentioned at the start of the blog, I was going to wait till I was 100% ready, but that would mean this blog would never be written, as I am never 100% satisfied with the code I write.

METHOD make_column_editable.
* Local Variables
  DATA: ls_celltab        TYPE lvc_s_styl,
        lt_celltab        TYPE lvc_t_styl,
        ld_index          TYPE sy-tabix,
        ldo_table_line    TYPE REF TO data,
        ld_editable_field LIKE LINE OF it_editable_fields.

  FIELD-SYMBOLS: <ls_data_table> TYPE any,
                 <ls_celltab>    TYPE lvc_s_styl,
                 <lt_celltab>    TYPE lvc_t_styl.

* Dynamically create work area for looping through the table
* that was passed in
  CREATE DATA ldo_table_line LIKE LINE OF ct_data_table.

  ASSIGN ldo_table_line->* TO <ls_data_table>.

  LOOP AT ct_data_table ASSIGNING <ls_data_table>.

    "Need a TRY/CATCH block here - if the control field is not of type LVC_T_STYL
    "then a system generated exception will be thrown
    ASSIGN COMPONENT id_edit_control_field OF STRUCTURE <ls_data_table>
      TO <lt_celltab>.

    IF sy-subrc <> 0.
      "We cannot go on, the control field is not in the structure
      "Need a fatal error here, violated pre-condition
      RETURN.
    ENDIF.

    LOOP AT it_editable_fields INTO ld_editable_field.

      READ TABLE <lt_celltab> ASSIGNING <ls_celltab>
        WITH KEY fieldname = ld_editable_field.

      IF sy-subrc <> 0.
        ld_index             = sy-tabix.
        ls_celltab-fieldname = ld_editable_field.
        INSERT ls_celltab INTO <lt_celltab> INDEX ld_index.
        READ TABLE <lt_celltab> ASSIGNING <ls_celltab>
          WITH KEY fieldname = ld_editable_field.
      ENDIF.

      IF <ls_celltab>-style EQ cl_gui_alv_grid=>mc_style_enabled.
        <ls_celltab>-style = cl_gui_alv_grid=>mc_style_disabled."Read Only
      ELSE.
        <ls_celltab>-style = cl_gui_alv_grid=>mc_style_enabled."Editable
      ENDIF.

    ENDLOOP."List of Editable Fields
  ENDLOOP."Lines of the Data Table

ENDMETHOD."Make Column Editable

Originally I had the “application specific changes” method redefined in every calling report, but I have changed this design to instead have the model class saying what fields it wants changed e.g. length changed, turned into a hotspot, description changed etc.

A certain German someone will say my model class is the devil incarnate and that I certainly shouldn’t be using it to say what the column headings should be. Well you know what? As Chas and Dave would say “I don’t care. I don’t care, I don’t care, I don’t care if he comes round here, I’ve got my model class on the sideboard here, let your mother sort it out if he comes round here.”

METHOD zif_bc_alv_report_view~application_specific_changes.
**********************************************************************
* The job of the model is to say what fields can be drilled into, and
* what alternative names they have etc...
* The job of the view is to realise this technically
* Since this is CL_SALV_TABLE we cannot make fields editable here, but
* we can do all the other adjustments needed
**********************************************************************
* Local Variables
  DATA: lo_error          TYPE REF TO cx_salv_msg,
        lo_data_error     TYPE REF TO cx_salv_data_error,
        lo_not_found      TYPE REF TO cx_salv_not_found,
        ls_error          TYPE bal_s_msg,
        lf_error_occurred TYPE abap_bool,
        ld_field_name     TYPE lvc_fname,
        ls_alv_texts      TYPE zsbc_alv_texts.

  TRY.
      IF if_optimise_column_widths = abap_true.
        optimise_column_width( ).
      ENDIF.

* Technical Fields
      LOOP AT it_technicals INTO ld_field_name.
        set_column_attributes( id_field_name   = ld_field_name
                               if_is_technical = abap_true ).
      ENDLOOP.
* Hidden Fields
      LOOP AT it_hidden INTO ld_field_name.
        set_column_attributes( id_field_name = ld_field_name
                               if_is_visible = abap_false ).
      ENDLOOP.
* Hotspots
      LOOP AT it_hotspots INTO ld_field_name.
        set_column_attributes( id_field_name = ld_field_name
                               if_is_hotspot = abap_true ).
      ENDLOOP.
* Renamed Fields / Tooltips
      LOOP AT it_field_texts INTO ls_alv_texts.
        IF ls_alv_texts-tooltip IS NOT INITIAL.
          set_column_attributes( id_field_name = ls_alv_texts-field_name
                                 id_tooltip    = ls_alv_texts-tooltip ).
        ENDIF.
        IF ls_alv_texts-long_text IS NOT INITIAL.
          set_column_attributes( id_field_name = ls_alv_texts-field_name
                                 id_long_text  = ls_alv_texts-long_text ).
        ENDIF.
        IF ls_alv_texts-medium_text IS NOT INITIAL.
          set_column_attributes( id_field_name  = ls_alv_texts-field_name
                                 id_medium_text = ls_alv_texts-medium_text ).
        ENDIF.
        IF ls_alv_texts-short_text IS NOT INITIAL.
          set_column_attributes( id_field_name = ls_alv_texts-field_name
                                 id_short_text = ls_alv_texts-short_text ).
        ENDIF.
      ENDLOOP.
* Subtotals
      LOOP AT it_subtotals INTO ld_field_name.
        set_column_attributes( id_field_name  = ld_field_name
                               if_is_subtotal = abap_true ).
      ENDLOOP.

    CATCH cx_salv_not_found INTO lo_not_found.
      lf_error_occurred = abap_true.
      "Object = Column
      "Key    = Field Name e.g. VBELN
      zcl_dbc=>require( that             = |{ lo_not_found->object } { lo_not_found->key } must exist|
                        which_is_true_if = boolc( lf_error_occurred = abap_false ) ).
    CATCH cx_salv_data_error INTO lo_data_error.
      ls_error = lo_data_error->get_message( ).
      MESSAGE ID ls_error-msgid TYPE 'E' NUMBER ls_error-msgno
              WITH ls_error-msgv1 ls_error-msgv2
                   ls_error-msgv3 ls_error-msgv4.
    CATCH cx_salv_msg INTO lo_error.
      ls_error = lo_error->get_message( ).
      MESSAGE ID ls_error-msgid TYPE 'E' NUMBER ls_error-msgno
              WITH ls_error-msgv1 ls_error-msgv2
                   ls_error-msgv3 ls_error-msgv4.
  ENDTRY.

ENDMETHOD."Application Specific Changes

You will notice a nice lot of error handling at the end; errors raised here indicate a serious bug in the calling program, i.e. trying to manipulate a field which does not exist, and so should stop processing dead until the bug is corrected.

I am using the “design by contract” class here…

http://scn.sap.com/community/abap/blog/2012/09/08/design-by-contract-in-abap

… to make it crystal clear that the calling program should not be trying to change a field that is not in the output structure. If it does then the program will not run until such time as the bug is fixed.

The “display” method in ZCL_BC_VIEW_SALV_TABLE just calls the “display” method of CL_SALV_TABLE. If we were using a different UI technology maybe the “display” method would have to be more complicated. In any event the time has come, the Walrus said, to actually display the SALV grid on the screen.
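The generic DISPLAY method itself is trivial; a sketch of what it might look like:

METHOD display.
  "The generic view just delegates to the SALV object created earlier
  mo_alv_grid->display( ).
ENDMETHOD.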

At some point in the bowels of the CL_SALV_TABLE “display” method the event “on after refresh” is raised (this is an event of the CL_GUI_ALV_GRID that lives inside the SALV) and our custom event handler class has been set up to intercept that event.

The On After Refresh Prince of Bel Air

Let us have a sticky beak at the two methods in the ZCL_BC_SALV_EVENT_HANDLER class. First off we shall look at the ON_AFTER_REFRESH method. During creation the event handler class was (optionally) passed a list of editable fields and the name of the edit control field. If that information was not passed in at the time of creation, the code below will just make the whole grid editable.

If a list of editable fields was passed in the field catalogue is modified to make the desired fields editable. We could not do this earlier as the CL_GUI_ALV_GRID hidden inside CL_SALV_TABLE is not instantiated till the DISPLAY method is called.

The instance of the SALV lives inside an internal table of SALV instances called MT_SALV. The import parameter “sender” in the code below refers to the CL_GUI_ALV_GRID instance that sent the “on after refresh” method. Thus you could have a screen with more than one SALV on it, and the correct grid would be processed.

METHOD on_after_refresh.
*--------------------------------------------------------------------*
* What we are doing here is enabling the SALV GRID to open in editable
* mode
*--------------------------------------------------------------------*
* Local Variables
  DATA: lo_grid TYPE REF TO cl_gui_alv_grid.
  DATA: ls_layout          TYPE lvc_s_layo,
        lt_fcat            TYPE lvc_t_fcat,
        ls_editable_fields LIKE LINE OF mt_editable_fields.
  DATA: lo_salv TYPE REF TO cl_salv_table.
  DATA: lo_salv_model   TYPE REF TO cl_salv_model,
        lo_sneaky_model TYPE REF TO zcl_salv_model.

  FIELD-SYMBOLS: <ls_fcat> LIKE LINE OF lt_fcat.

  TRY .
      LOOP AT mt_salv INTO lo_salv.
        "Narrow casting
        "CL_SALV_MODEL is a superclass of CL_SALV_TABLE
        "Target = LO_SALV_MODEL = CL_SALV_MODEL
        "Source = MO_ALV_GRID   = CL_SALV_TABLE
        lo_salv_model ?= lo_salv.

        "Object to access underlying CL_GUI_ALV_GRID
        CREATE OBJECT lo_sneaky_model
          EXPORTING
            io_model = lo_salv_model.

        lo_grid = lo_sneaky_model->get_alv_grid( ).
        CHECK lo_grid EQ sender.

        "Deregister the event handler
        "i.e. we do not want to keep calling this every time
        "the user refreshes the display.
        "Once the report is running the user can control whether
        "the grid is editable by using the icons at the top of the screen
        SET HANDLER me->on_after_refresh
          FOR ALL INSTANCES
          ACTIVATION space.

        "Set editable
        IF md_edit_control_field IS NOT INITIAL.
          "Make certain fields editable based on FIELDCAT
          ls_layout-stylefname = md_edit_control_field.
          lo_grid->get_frontend_fieldcatalog( IMPORTING et_fieldcatalog = lt_fcat ).
          LOOP AT mt_editable_fields INTO ls_editable_fields.
            READ TABLE lt_fcat ASSIGNING <ls_fcat>
              WITH KEY fieldname = ls_editable_fields.
            IF sy-subrc = 0.
              <ls_fcat>-edit = abap_true.
            ENDIF.
          ENDLOOP.
          lo_grid->set_frontend_fieldcatalog( lt_fcat ).
        ELSE.
          "Make everything editable
          ls_layout-edit = 'X'.
        ENDIF.
        lo_grid->set_frontend_layout( ls_layout ).
        lo_grid->set_ready_for_input( 1 ).
      ENDLOOP.
    CATCH cx_salv_error.

  ENDTRY.

ENDMETHOD."On after refresh method of ZCL_BC_SALV_EVENT_HANDLER

Toolbar Bar Bar, Bar Barbara Ann

The next problem – as solved in the blog by Naimesh - is that the SALV does not expect the grid to be editable, so some ICONS in the toolbar you would normally expect for editable grids are missing. This is really only applicable when all the fields are editable, as that is the only situation where copying a row makes sense, as then you would change a key field. If only one field is editable you would not want the user to be able to delete a row either.

The most important thing for me is to get the separators working! There is no point in my regurgitating the code here – you can see it in the other blog, and I have not changed anything, though I am going to add something to only add the extra buttons if we do not have a specific list of editable fields.

Lord Editable Column-Ostomy Bag

Now we see the final result! The grid opens with several columns editable.


image002.jpg

List of Ingredients

Here are the custom objects I had to create to get this working. In lots of ways it doesn’t matter how many were needed as apart from the calling report they are all totally generic and can get re-used again and again.

  • Calling Report

  • Interface ZIF_BC_ALV_REPORT_VIEW (UI Technology Agnostic)

  • Class ZCL_BC_VIEW_SALV_TABLE (SALV Specific)

  • Class ZCL_SALV_MODEL (to access the CL_GUI_ALV_GRID)

  • Class ZCL_BC_SALV_EVENT_HANDLER (for opening in edit mode)

  • Function Module ZSALV_CSQT_CREATE_CONTAINER (to create the screen)

I will whip up a SAPLINK file and attach it to this blog at some point in the near future.

Going Forward

As I keep stressing, this is a work in progress. I am going to have to concentrate on my presentation for the SAP Australian Users Group next week (SAUG, pronounced "Sausage"), and then on my presentation for SAP TechEd in Las Vegas in October (boast boast), and then I will be able to work on the next iteration of this.

In the interim I am happy to take questions, or suggestions for improvements.

Cheersy Cheers

Paul

 

 

 

 

 

 

 

The infamous SORT DESCENDING (Do not use it with infotype tables!)


As a support engineer, now and then I get incidents that are caused by a SORT PXXXX BY BEGDA DESCENDING ABAP statement (it could also be SORT by ENDDA). Normally this kind of statement is placed in customer includes.

 

e.g.

 

SORT P00001 BY BEGDA DESCENDING


On the other hand, a very popular way to read the information contained in infotypes in the HCM / Payroll modules is the PROVIDE statement:

 

Read more about the provide statement:

 

http://help.sap.com/abapdocu_70/en/ABAPPROVIDE.htm

 

Well, the PROVIDE statement is powerful, but it may also be a little tricky, and what is more:

 

PROVIDE ONLY WORKS WITH INFOTYPE TABLES THAT ARE SORTED BY DATES

 

Got it?

 

One of the most popular places where the PROVIDE statement is used is function WPBP:

 

PROVIDE massn massg stat1 stat2 stat3         FROM p0000
        persg persk bukrs werks btrtl kostl plans gsber
        vdsk1 ansvh orgeh stell
        fistl geber fkber grant_nbr sgmnt
        budget_pd                             FROM p0001
        dysch wkwdy arbst
        schkz empct zterf                     FROM p0007
        subty
        trfar trfgb trfgr trfst
        bsgrd divgv waers                     FROM p0008
  BETWEEN pn-begda AND pn-endda.

 

Who hasn't ever debugged this beautiful piece of code?

 

So if the following statement is executed before FUWPBP in a payroll run (typically in retrocalculations):

 

SORT P00001 BY BEGDA DESCENDING


No wonder you'd get a rejection like this in the next WPBP execution:

 

wpbp.png

 

Of course, the above example is one of the most typical, but not the only one: basically there are as many forbidden SORT DESCENDINGs in your payroll as there are different PROVIDE statements.

 

So if PROVIDE P0456 is in your code, never put a SORT P0456.


What to do?


Use something like:


AUX_P0456[] = P0456[].

SORT AUX_P0456 BY begda DESCENDING.


where AUX_P0456 is your own local table.


Remember that PXXXX are global variables in your payroll driver and any manipulation may cause side-effects.
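A minimal sketch of the full safe pattern, assuming the classic header-line style used for the PXXXX infotype tables in the payroll driver (P0456 is just the illustrative infotype from above):

DATA: aux_p0456 LIKE p0456 OCCURS 0 WITH HEADER LINE.

aux_p0456[] = p0456[].               "work on a local copy...
SORT aux_p0456 BY begda DESCENDING.  "...so P0456 itself stays sorted by date
READ TABLE aux_p0456 INDEX 1.        "e.g. pick the most recent record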


Take care and enjoy your vacation!

 


Part 1) OO Capitalization: How to dynamically create a method in a class (SAP 7.40 minimum)


This topic is the first of a series of articles, coming soon, concerning a global approach to ABAP object orientation in order to capitalize on your developments in the long term.

 

To introduce the idea: often when we code a custom OO framework, this framework is only used for the current project. What if your framework could be more global, could grow without any direct action by you and, finally, could be usable for several projects and several clients?


That is why it is worth knowing that methods can be created dynamically, in order to give your framework more flexibility, more factorization, better evolution and a more "intelligent", self-extending design. Let's begin...

 

After some investigation, resulting in successful tests and some obsolete code found along the way, I would like to share with you what is, from my point of view, the best way to enhance a class dynamically at runtime. This snippet is based on the framework used by SAP in release 7.40, including security checks and a minimum of lines of code.

 

Well, for this example, I will create a class "ZCL_TEST1" that will dynamically create a method in the class "ZCL_TEST2". The process is as follows:


1) Call a function module to create a method in the target class

2) Call a function module to include the implementation of the new method

3) Call a function module to regenerate the sections of the target class

4) Call the dynamic method of the target class


Here we go...


1) In SE24, create the class "ZCL_TEST2". This class will have its methods created for it by "ZCL_TEST1".


2.jpg

 

 

2) In SE24, create the class "ZCL_TEST1" and give it a CONSTRUCTOR to simplify our example. This class will create a method in the "ZCL_TEST2" class during its instantiation.


2.jpg


3) In the CONSTRUCTOR, you have to call three function modules (complete source code in the attachment), in short:

 

    1. 'SEO_METHOD_CREATE'
    2. 'SEO_METHOD_GENERATE_INCLUDE'
    3. 'SEO_CLASS_GENERATE_SECTIONS'



3.1) Variables initialization


2.jpg


3.2) Call the right function module to create a method in the target class

2.jpg



3.3) Initialise and call the right function module to create an implementation in the target class

2.jpg



3.4) Call the right function module to regenerate the sections of the target class (Private, Protected & Public Section)

2.jpg


(source code in attachment)


At this point, your class ZCL_TEST1 is able to act on the class ZCL_TEST2.


Finally, execute the ZCL_TEST1 class locally several times with F8 and check out the result in ZCL_TEST2.
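Step 4 from the list above, calling the freshly generated method, can then be done with a dynamic method call; a minimal sketch, assuming the generated method is a public instance method whose (hypothetical) name is held in a variable:

DATA: lo_test2  TYPE REF TO zcl_test2,
      lv_method TYPE string VALUE 'MY_GENERATED_METHOD'. "hypothetical method name

CREATE OBJECT lo_test2.

"The method name is only known at runtime, so it is called dynamically
CALL METHOD lo_test2->(lv_method).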


2.jpg

2.jpg



That is done!



In the next topic, I will be happy to share with you the contexts in which this kind of development can be useful, and how exciting it is to enhance your custom framework with several design patterns in order to improve your architectural layers dynamically at runtime.

 

The final goal will be to decrease your lines of code, increase reusability, be able to call generic methods that do not yet exist while you are writing code and, at the same time, think about how you can grow your custom framework by adding these freshly created generic methods throughout your miscellaneous projects.



Persistent classes: revival of a spirit. Query by range.


Dear ABAPers,

 

Probably most of you are aware of persistent classes; meanwhile, I don't know many developers who are actively using them.

 

Moreover, I just got a response from SAP:

 

I would propose to avoid the usage of persistent classes in projects where the underlying structure of the persistent class is not stable. Generating new persistent classes is easy, but maintenance of existing persistent classes could lead to high effort. For this reason I would propose to take the future maintenance effort into account when you decide to use or not to use persistent classes.

 

After spending two years supporting the standard TR-TM solution, which is based on persistent classes, I don't think they are such a bad thing.

 

What I want to achieve with my blog is to present my personal ideas about how we can make the usage of persistent classes more attractive.

 

Let's talk today about the query service:

 

So every persistent class agent implements the interface if_os_ca_persistency. Details are here:

 

Query Service Components - ABAP - Object Services - SAP Library

 

This interface has a very interesting method: get_persistent_by_query.

 

In fact, once we generate a class for a table we automatically have a query service for it - that sounds promising. Let's go to an example:

 

Here is the code from the standard DEMO_QUERY_SERVICE program.

 

agent = ca_spfli_persistent=>agent.
TRY.
    query_manager = cl_os_system=>get_query_manager( ).
    query = query_manager->create_query(
              i_filter  = `AIRPFROM = PAR1 AND AIRPTO = PAR2` ).
    connections =
      agent->if_os_ca_persistency~get_persistent_by_query(
               i_query   = query
               i_par1    = airpfrom
               i_par2    = airpto ).
    LOOP AT connections ASSIGNING FIELD-SYMBOL(<connection>).
      connection = CAST #( <connection> ).
      result-carrid = connection->get_carrid( ).
      result-connid = connection->get_connid( ).
      APPEND result TO results.
    ENDLOOP.
    cl_demo_output=>display( results ).
  CATCH cx_root INTO exc.
    cl_demo_output=>display( exc->get_text( ) ).
ENDTRY.

What we can see here is that the SAP developers suggest we use a generic query built from a string.

 

These are the critical things I see in this example:

 

  • we have only 3 parameters
  • there is no reference to a DDIC structure, which means the where-used list will not work for this statement
  • select-options and ranges are not supported

 

So after that I decided that if we transform the same example into code like the following, we can make things simpler:

 

REPORT zdemo_query_service.

TABLES: spfli.

PARAMETERS:
  p_from TYPE spfli-airpfrom,
  p_to   TYPE spfli-airpto.

SELECT-OPTIONS:
  so_carid FOR spfli-carrid.

START-OF-SELECTION.
  TYPES: BEGIN OF query_ts,
           airpfrom TYPE spfli-airpfrom,
           airpto   TYPE spfli-airpto,
           carrid   TYPE RANGE OF spfli-carrid,
         END OF query_ts.

  DATA(connections) = zcl_os_api=>select_by_query(
    EXPORTING
      io_agent     = ca_spfli_persistent=>agent   " Class-Specific Persistence Interface
      is_selection = VALUE query_ts(
        airpfrom = p_from
        airpto   = p_to
        carrid   = so_carid[] ) ).

As you can see from the example, I represented the query not as a single string, but as a local variable of a structured type whose fields have the same names as in the source table. Moreover, to support multiple selection, you can define a parameter as a range (CARRID).

 

To perform range selection I decided to convert the range into a set of OR conditions (SIGN = 'I') plus a set of negated AND conditions (SIGN = 'E'). For example, a CARRID range containing I EQ 'AA', I EQ 'LH' and E EQ 'DL' would conceptually become the filter ( CARRID = 'AA' OR CARRID = 'LH' ) AND NOT ( CARRID = 'DL' ).

 

This simple class now lets me easily set up selections for generated persistent classes:

 

1) Generate the persistent class

2) Define a local variable for the query

3) Call the query with the agent and the query structure.

 

The provided class is just a prototype. If you wish - you can copy it and try to use it.

 

If I find some supporters then we can create a small open-source project, as I have more interesting ideas about persistent classes, such as:

 

  • a get_structure method instead of multiple calls of get_ methods, through serialization + XSLT transformation.
  • a query hash (keep the result for generic queries) using query structure serialization + a hash sum - currently it performs a select every time.

 

But that will be described in next posts.

 

Enjoy =)

 

class ZCL_OS_API definition
  public
  abstract
  final
  create public .

  public section.
    class-methods SELECT_BY_QUERY
      importing
        !IO_AGENT type ref to IF_OS_CA_PERSISTENCY
        !IS_SELECTION type ANY
      changing
        !CO_TYPE type ref to CL_ABAP_STRUCTDESCR optional
      returning
        value(RT_RESULT) type OSREFTAB .
  protected section.
  private section.
    types:
      begin of range_ts,
        sign   type c length 1,
        option type c length 2,
        low    type string,
        high   type string,
      end of range_ts .
    class-methods GET_QUERY_RANGE_VALUE
      importing
        !IS_RANGE type RANGE_TS
        !IO_EXPR type ref to IF_OS_QUERY_EXPR_FACTORY
        !IV_NAME type STRING
      returning
        value(RO_EXPR) type ref to IF_OS_QUERY_FILTER_EXPR .
    class-methods GET_QUERY_RANGE
      importing
        !IT_RANGE type TABLE
        !IV_NAME type STRING
        !IO_EXPR type ref to IF_OS_QUERY_EXPR_FACTORY
      returning
        value(RO_EXPR) type ref to IF_OS_QUERY_FILTER_EXPR .
    type-pools ABAP .
    class-methods IS_RANGE
      importing
        !IO_TYPE type ref to CL_ABAP_TABLEDESCR
      returning
        value(RV_RANGE) type ABAP_BOOL .
    class-methods GET_QUERY
      importing
        !IS_SELECTION type ANY
      changing
        !CO_TYPE type ref to CL_ABAP_STRUCTDESCR optional
      returning
        value(RO_QUERY) type ref to IF_OS_QUERY .
ENDCLASS.
CLASS ZCL_OS_API IMPLEMENTATION.

  method get_query.
    data(lo_query) = cl_os_query_manager=>get_query_manager( )->if_os_query_manager~create_query( ).
    data(lo_expr) = lo_query->get_expr_factory( ).

    if co_type is not bound.
      try.
          co_type = cast #( cl_abap_typedescr=>describe_by_data( is_selection ) ).
        catch cx_sy_move_cast_error.
          " ToDo: message
          return.
      endtry.
    endif.

    data: lt_and type table of ref to if_os_query_filter_expr.

    " for each selection criteria
    loop at co_type->get_included_view(
*            p_level =
      ) into data(ls_view).
      " parameter or range?
      case ls_view-type->kind.
        " parameter
        when ls_view-type->kind_elem.
          field-symbols: <lv_component> type any.
          unassign <lv_component>.
          assign component ls_view-name of structure is_selection to <lv_component>.
          check <lv_component> is assigned.
          try.
              " goes to and condition
              append lo_expr->create_operator_expr(
                       i_attr1    = ls_view-name
                       i_operator = 'EQ'
                       i_val      = conv #( <lv_component> )
                     ) to lt_and.
            catch cx_os_query_expr_fact_error.    "
          endtry.
        when ls_view-type->kind_table.
          " check: is range?
          check is_range( cast #( ls_view-type ) ) eq abap_true.
          " must be not initial
          field-symbols: <lt_range> type table.
          assign component ls_view-name of structure is_selection to <lt_range>.
          check <lt_range> is assigned.
          check <lt_range> is not initial.
          " goes to and condition
          append get_query_range(
            iv_name  = ls_view-name
            it_range = <lt_range>
            io_expr  = lo_expr ) to lt_and.
      endcase.
    endloop.

    " build and conditions
    loop at lt_and into data(lo_and).
      if sy-tabix eq 1.
        data(lo_filter) = lt_and[ 1 ].
      else.
        lo_filter = lo_expr->create_and_expr(
          exporting
            i_expr1 = lo_filter
            i_expr2 = lo_and ).
      endif.
    endloop.

    lo_query->set_filter_expr( lo_filter ).
    ro_query = lo_query.
  endmethod.

  method get_query_range.
    data: lt_and type table of ref to if_os_query_filter_expr,
          lt_or  type table of ref to if_os_query_filter_expr.
    data: ls_range type range_ts.

    " .. for each range value
    loop at it_range assigning field-symbol(<ls_range>).
      move-corresponding exact <ls_range> to ls_range.
      " E = AND, I = OR
      case ls_range-sign.
        when 'E'.
          append io_expr->create_not_expr(
            get_query_range_value(
              is_range = ls_range
              io_expr  = io_expr
              iv_name  = iv_name ) ) to lt_and.
        when 'I'.
          append get_query_range_value(
            is_range = ls_range
            io_expr  = io_expr
            iv_name  = iv_name ) to lt_or.
      endcase.
    endloop.

    " First of all combine all OR in to a single expression
    loop at lt_or into data(lo_or).
      if sy-tabix eq 1.
        data(lo_filter_or) = lt_or[ 1 ].
      else.
        lo_filter_or = io_expr->create_or_expr(
          exporting
            i_expr1 = lo_filter_or
            i_expr2 = lo_or ).
      endif.
    endloop.

    " make all or statements as one of ANDs
    append lo_filter_or to lt_and.
    loop at lt_and into data(lo_and).
      if sy-tabix eq 1.
        ro_expr = lt_and[ 1 ].
      else.
        ro_expr = io_expr->create_and_expr(
          exporting
            i_expr1 = ro_expr
            i_expr2 = lo_and ).
      endif.
    endloop.
  endmethod.

  method get_query_range_value.
    try.
        case is_range-option.
          " is operator
          when 'EQ' or 'NE' or 'LE' or 'LT' or 'GE' or 'GT'.
            ro_expr = io_expr->create_operator_expr(
              i_attr1    = iv_name
              i_operator = conv #( is_range-option )
              i_val      = is_range-low ).
          " is mask
          when 'CP'.
            data(lv_pattern) = is_range-low.
            replace all occurrences of '*' in lv_pattern with '%'.
            ro_expr = io_expr->create_like_expr(
              i_attr    = iv_name
              i_pattern = lv_pattern
*             i_not     = OSCON_FALSE
            ).
          " is mask with not
          when 'NP'.
            lv_pattern = is_range-low.
            replace all occurrences of '*' in lv_pattern with '%'.
            ro_expr = io_expr->create_like_expr(
              i_attr    = iv_name
              i_pattern = lv_pattern
              i_not     = oscon_true ).
*         when 'BT'.
          when others.
            " not supported
        endcase.
      catch cx_os_query_expr_fact_error.  "
    endtry.
  endmethod.

  method IS_RANGE.
    CHECK io_type->table_kind eq io_type->tablekind_std AND
          io_type->key_defkind eq io_type->KEYDEFKIND_DEFAULT AND
          io_type->key eq value ABAP_KEYDESCR_TAB(
            ( name = 'SIGN' )
            ( name = 'OPTION' )
            ( name = 'LOW' )
            ( name = 'HIGH' ) ).
    rv_range = abap_true.
  endmethod.

  method select_by_query.
    " check agent
    check io_agent is bound.
    try.
        " get result by using generated method
        rt_result = io_agent->get_persistent_by_query(
          exporting
            " create query by selection criteria
            i_query = get_query(
              exporting
                is_selection = is_selection   " Must be structure
              changing
                co_type      = co_type        " Runtime Type Services
            )   " Query
        ).
      catch cx_os_object_not_found.    "
      catch cx_os_query_error.    "
    endtry.
  endmethod.

ENDCLASS.

Importance of T247 Table in Output Types

$
0
0

Often when we work with sales documents like invoices, the output types that were configured play a major role, as they send the necessary information to the customer via one of the communication media. The screenshot below shows a sample invoice with a custom output type and the medium Print.

 

1.png

 

Recently our customer encountered a strange issue where the month was not printed correctly in the output. He also shared a successful case where the month was printed as expected. We compared both documents and finally noticed that the partner maintained here (ship-to party) had the Turkish language (TR), as seen above, while the partner in the successful case had English (EN). This was the only difference we could find, and at that point we were still puzzled as to how a language could make such a difference in printing.

See below sample Jetform prints for the two cases

 

2.png

We tried to debug whether any custom logic was preventing the text from being printed.

It turned out that the month texts actually come from table T247, in which the translated entries have to be maintained.

This works fine for English, as those entries are available by default. For other languages, however, it is important that the necessary translations are maintained in this table.
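As a quick illustration (my own minimal sketch, not part of the original analysis), you can verify directly in ABAP whether month texts exist for a given language; lv_langu is assumed to hold the ship-to party's language key:

" Check whether month texts are maintained in T247 for the partner's language.
DATA: lv_langu  TYPE spras,
      lt_months TYPE STANDARD TABLE OF t247.

SELECT * FROM t247
  INTO TABLE lt_months
  WHERE spras = lv_langu.

IF lt_months IS INITIAL.
  " No translations maintained - the month name comes out blank on the
  " form, exactly as observed above for language TR.
ENDIF.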

 

As you can observe, the entry for language TR is missing here, which is the root cause of this problem.

3.png

Please note that changing the customer's language from 'TR' to 'EN' might actually work, but it is not the suggested approach. Our customer's business can have a global presence, with end users situated at various locations across the world, and they will always expect their local language. Changing the language might therefore make the situation worse, so it is not recommended unless the customer himself suggests it.

 

Cheers,

Lakshman

Persistent classes: single get( ) method instead of multiple get_xxx() methods calls

$
0
0

Dear mates,

 

Here is the next part of my blog related to OS functionality. You can find the beginning here:

 

Persistent classes: revival of a spirit. Query by range.

 

I presented how we can use local variables as a more transparent way to create a query request.

 

As a result of the IF_OS_CA_PERSISTENCY~GET_PERSISTENT_BY_QUERY method we get a table of type OSREFTAB.

 

So let's first consider how SAP proposes to process such references in DEMO_QUERY_SERVICE example:

 

LOOP AT connections ASSIGNING FIELD-SYMBOL(<connection>).
  connection = CAST #( <connection> ).
  result-carrid = connection->get_carrid( ).
  result-connid = connection->get_connid( ).
  APPEND result TO results.
ENDLOOP.

That's probably OK, but let's assume we have a really big number of attributes. The idea of numerous get_xxx( ) method calls each time I want to read the full object structure didn't make me happy. I'm too lazy for that.

 

In my past work there were several tasks where I used asXML serialization and deserialization, and I decided to try it here.

 

The logic should be very simple:

 

  1. We serialize our object instance to asXML.
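For reference, the call behind this step looks roughly like this (a sketch; lo_spfli is a placeholder name for the object reference being serialized):

" Step 1: serialize the object instance to asXML.
" The full attribute list is only produced if the object's class
" implements IF_SERIALIZABLE_OBJECT (see below).
DATA lv_xml TYPE string.

CALL TRANSFORMATION id
  SOURCE struc = lo_spfli
  RESULT XML lv_xml.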

 

When I tried to serialize a CL_SPFLI_PERSISTENT instance, I got nothing interesting:

<asx:abap version="1.0" xmlns:asx="http://www.sap.com/abapxml">
 <asx:values>
  <STRUC href="#o52"/>
 </asx:values>
 <asx:heap xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:abap="http://www.sap.com/abapxml/types/built-in" xmlns:cls="http://www.sap.com/abapxml/classes/global" xmlns:dic="http://www.sap.com/abapxml/types/dictionary">
  <cls:CL_SPFLI_PERSISTENT id="o52"/>
 </asx:heap>
</asx:abap>

The reason for such a short file is that an object must implement the IF_SERIALIZABLE_OBJECT interface to be serialized.

 

As I was going to use it for my own object anyway, I just created a new Z class for the SPFLI table, but with this interface included.

 

This time the result looked rather better:

<asx:abap version="1.0" xmlns:asx="http://www.sap.com/abapxml"><asx:values>  <STRUC href="#o52"/></asx:values><asx:heap xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:abap="http://www.sap.com/abapxml/types/built-in" xmlns:cls="http://www.sap.com/abapxml/classes/global" xmlns:dic="http://www.sap.com/abapxml/types/dictionary">  <cls:ZCL_SPFLI_PERSISTENT id="o52">   <ZCL_SPFLI_PERSISTENT>    <CARRID>UA</CARRID>    <CONNID>3517</CONNID>    <COUNTRYFR>DE</COUNTRYFR>    <CITYFROM>FRANKFURT</CITYFROM>    <AIRPFROM>FRA</AIRPFROM>    <COUNTRYTO>US</COUNTRYTO>    <CITYTO>NEW YORK</CITYTO>    <AIRPTO>JFK</AIRPTO>    <FLTIME>495</FLTIME>    <DEPTIME>10:40:00</DEPTIME>    <ARRTIME>12:55:00</ARRTIME>    <DISTANCE>6162.0</DISTANCE>    <DISTID>KM</DISTID>    <FLTYPE/>    <PERIOD>0</PERIOD>   </ZCL_SPFLI_PERSISTENT>  </cls:ZCL_SPFLI_PERSISTENT></asx:heap></asx:abap>

2. Now we need to think about how we can fill the SPFLI structure from XML. To see what file I should end up with, I first serialized an SPFLI structure:

<asx:abap version="1.0" xmlns:asx="http://www.sap.com/abapxml"><asx:values>  <STRUC>   <MANDT/>   <CARRID>UA</CARRID>   <CONNID>3517</CONNID>   <COUNTRYFR>DE</COUNTRYFR>   <CITYFROM>FRANKFURT</CITYFROM>   <AIRPFROM>FRA</AIRPFROM>   <COUNTRYTO>US</COUNTRYTO>   <CITYTO>NEW YORK</CITYTO>   <AIRPTO>JFK</AIRPTO>   <FLTIME>495</FLTIME>   <DEPTIME>10:40:00</DEPTIME>   <ARRTIME>12:55:00</ARRTIME>   <DISTANCE>6162.0</DISTANCE>   <DISTID>KM</DISTID>   <FLTYPE/>   <PERIOD>0</PERIOD>  </STRUC></asx:values></asx:abap>


3. Now we have the asXML of the object instance on one side, and the asXML that we need to create on the other.

There are several ways to create it:

  • Parse the first one and render the new one manually
  • Do the same via an XSLT transformation

 

As I'm lazy, I chose the latter. I'm not a big expert in XSLT, so this is what I actually managed to create:

<xsl:transform version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:sap="http://www.sap.com/sapxsl">
<xsl:strip-space elements="*"/>
<xsl:template match="/">
  <asx:abap version="1.0" xmlns:asx="http://www.sap.com/abapxml">
    <asx:values>
      <STRUCT>
        <xsl:copy-of select="./*/*/*/*/*"/>
      </STRUCT>
    </asx:values>
  </asx:abap>
</xsl:template>
</xsl:transform>

To my big surprise it worked:

So what I did was provide a new method for this operation:

 

class-methods TO_DATA
  importing
    !IO_OBJECT type ref to IF_SERIALIZABLE_OBJECT
  changing
    !CS_DATA type ANY.

method TO_DATA.
  try.
      " Object to asXml:serializable object
      call transformation id
        source struc = io_object
        result xml data(lv_xml).

      " asXml:serializable object to asXml:data
      call transformation zcw_obj2struc
        source xml lv_xml
        result xml data(lv_xml_struc).

      " asXml:data to data
      call transformation id
        source xml lv_xml_struc
        result struct = cs_data.

    catch cx_transformation_error into data(lo_cx).
      if 1 eq 2.
        " here you can go from the debugger
        data(lo_output) = cl_demo_output=>new( ). " <- place the cursor here and press Shift+F12
        lo_output->begin_section( 'Object to asXml:serializable object' ).
        lo_output->write_xml( xml = lv_xml ).
        lo_output->begin_section( 'asXml:serializable object to asXml:data' ).
        lo_output->write_xml( xml = lv_xml_struc ).
        lo_output->write_text( lo_cx->get_text( ) ).
        lo_output->display( ).
      endif.
  endtry.
endmethod.

and finally this SAP code:

agent = ca_spfli_persistent=>agent.
TRY.
    query_manager = cl_os_system=>get_query_manager( ).
    query = query_manager->create_query(
              i_filter  = `AIRPFROM = PAR1 AND AIRPTO = PAR2` ).

    connections =
      agent->if_os_ca_persistency~get_persistent_by_query(
               i_query   = query
               i_par1    = airpfrom
               i_par2    = airpto ).

    LOOP AT connections ASSIGNING FIELD-SYMBOL(<connection>).
      connection = CAST #( <connection> ).
      result-carrid = connection->get_carrid( ).
      result-connid = connection->get_connid( ).
      APPEND result TO results.
    ENDLOOP.

    cl_demo_output=>display( results ).
  CATCH cx_root INTO exc.
    cl_demo_output=>display( exc->get_text( ) ).
ENDTRY.

became more elegant to me:

 

types: begin of query_ts,
         airpfrom type spfli-airpfrom,
         airpto   type spfli-airpto,
       end of query_ts.

data: lt_results type table of spfli.

zcl_os_api=>select_by_query(
    exporting
      io_agent     = zca_spfli_persistent=>agent   " Class-Specific Persistence Interface
      is_selection = value query_ts(
        airpfrom = p_from
        airpto   = p_to
      )
    importing
      et_data = lt_results
    ).

You probably think that such a technique will decrease overall performance significantly, but it was a big surprise to me that the transformation works really fast.

Also I think the XSLT transformation can be optimized somehow, but here we need experts in this area to give some advice.

 

I will be glad to see your comments here. Criticism is appreciated.

 

Petr.

Persistent classes: hashing generic query requests

$
0
0

Hi to everybody,

 

This is the third part of my blog about using persistent classes as a quick tool for generating query classes and conveniently reusing them.

 

This is the beginning:

 

Part 1:

Persistent classes: revival of a spirit. Query by range.

 

and part 2:

Persistent classes: single get( ) method instead of multiple get_xxx() methods calls

 

 

Using this query class I noticed that when we use the same request twice, the selection is executed every time.

 

To improve the performance we need to create some buffer for retrieving previously selected requests.

 

So what we have as incoming parameters: a request represented by a structure of any type, and an agent class.

 

Снимок.PNG

To solve this abstract task I decided to use serialization technique again.

Снимок.PNG
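The core of the idea can be sketched like this (my own minimal example; the buffer table itself and the agent part of the key are left out, and is_selection stands for the incoming selection structure):

" Serialize the selection structure to asXML and hash the result, so
" that identical requests produce identical buffer keys.
DATA: lv_xml  TYPE string,
      lv_hash TYPE string.

CALL TRANSFORMATION id
  SOURCE selection = is_selection
  RESULT XML lv_xml.

TRY.
    cl_abap_message_digest=>calculate_hash_for_char(
      EXPORTING
        if_algorithm  = 'SHA1'
        if_data       = lv_xml
      IMPORTING
        ef_hashstring = lv_hash ).
  CATCH cx_abap_message_digest.
    " fall back to an unbuffered selection
ENDTRY.

" lv_hash, combined with the agent class name, can then be used as the
" key of an internal buffer table holding the previous query result.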

 

To be honest this is the first time I applied such a logic based on serialization and checksum calculation.

 

Has anyone used similar methods for generic hashing of anything? What is your opinion about the overall performance of frequently called transformations and checksum calculations?

 

Thanks.

 

The full example you can find here: ZCL_OS_API


GUI_DOWNLOAD with Field Names with more than 10 characters.

$
0
0

Hi All,

 

I have seen many posts about downloading from an internal table to the PC and many replies to them. Many people have suggested different ways, but I saw that those posts are still marked as not answered. Some complained that although they are able to download with field names, the field names are cut off at 10 characters.

 

So for all of these, I found a suitable way to download with proper field names. Some might have tried this method already, some may be seeing it for the first time; I thought of sharing it anyway.

 

Here I will be having 2 internal tables.

1. Final Internal table with the data to be downloaded.

2. Field names of the final internal table.

 

 

Fetching data and getting field names.

sap1.PNG

 

 

Downloading the Field names internal table.

 

sap2.PNG

 

 

After calling the GUI_DOWNLOAD function module for the field names, call GUI_DOWNLOAD again and pass the final internal table with the data.

 

Downloading the Final Internal table

 

sap3.PNG

 

 

Check the exporting parameters passed while calling the function module both times.
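In essence, the two calls look like this (a simplified sketch; the file name and the table names lt_fieldnames / lt_final are only examples):

" First call: write the header line built from the field names table.
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = 'C:\temp\report.xls'
    filetype = 'DAT'
  TABLES
    data_tab = lt_fieldnames
  EXCEPTIONS
    OTHERS   = 1.

" Second call: APPEND = 'X' keeps the header line and adds the data below it.
CALL FUNCTION 'GUI_DOWNLOAD'
  EXPORTING
    filename = 'C:\temp\report.xls'
    filetype = 'DAT'
    append   = 'X'
  TABLES
    data_tab = lt_final
  EXCEPTIONS
    OTHERS   = 1.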

 

Result:

sap4.PNG

How to use messages properly in the code. Common rules.

$
0
0

Dear all,

 

Today I'm going to discuss message handling with you.

 

So basically the message code is something important to a support team. When we have this code, we can navigate to SE91 and use the where-used list for the message to find all the places in the code where this message occurs. However, developers sometimes don't care about future support issues and make a quick solution based on just a text:

  • message 'Some message' type 'S'

 

In this case we have a generic message without any long text description at all. Finding the reason for such a message is a much more difficult debugging task than when we have a message code.

 

Rule #1: Use message code as much as you can. 

 

Instead of a direct text, try to use an SE91 message number.
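For example (the message class, number and variable below are invented for illustration):

" Hard to trace: no message class, no number, no long text.
MESSAGE 'Material is locked' TYPE 'S'.

" Easy to trace via SE91 and the where-used list.
MESSAGE s024(zmm_messages) WITH lv_matnr.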

 

Here are the simple steps to find out the reason for a message:

 

1. Double-click on the message at the bottom of the screen, or press the F1 key for a popup message of type 'I'.

2. Go to the technical information.

Безымянный.png

3. In the popup window we double click on the message number

Безымянный.png

4. Put the cursor on the message number and go to the Where-Used-List (Ctrl+Shift+F3)

Безымянный.png

5. Now execute the search and you will see all the possible places where the message can occur.

 

Rule #2. Try to keep the number of places with the same message code very low

 

I guess you know very well the case where you have some standard SAP message, you look for the place where it is called, and you get a list of dozens of different programs with the same code. It is very difficult to find the place where your particular message occurred.

 

In other words, I would cover this rule with a more abstract one:

 

Rule #2.1 Don't copy the same code twice.

Even if it's just a message call, if you're going to use it widely, provide a dedicated program unit for it.

 

Rule #3. Use static dummy message calls alongside dynamic message declarations whenever possible.

 

Sometimes we do not need to output a message immediately but rather store it in a log.

 

So in code like this:

 

CLEAR ls_msg.
ls_msg-msgty     = 'I'.
ls_msg-msgid     = 'FMFEES'.
ls_msg-msgno     = '68'.
ls_msg-probclass = '3'.
CALL FUNCTION 'BAL_LOG_MSG_ADD'
  EXPORTING
    i_log_handle = iv_log_handle
    i_s_msg      = ls_msg.

Just don't forget to add a very simple but so important line:

message 068(fmfees) into data(lv_dummy).

This tiny five-second effort can save hours for the person who will later have to debug your code.

 

Rule #4. When creating your own exceptions that are going to be used as output messages, implement IF_T100_MESSAGE.


As an example you can check the CX_SALV_X_MSG class.

Снимок.PNG

Conversely, if you perform the steps from Rule #1, you'll navigate to this class.
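A minimal local version of such a class could look like this (the class name and message key are invented; for a global class you would normally generate this in SE24 with a message class assigned):

" An exception class that carries an SE91 message key, so the
" where-used list of that message leads back to the raising code.
CLASS lcx_my_error DEFINITION INHERITING FROM cx_static_check.
  PUBLIC SECTION.
    INTERFACES if_t100_message.
    METHODS constructor.
ENDCLASS.

CLASS lcx_my_error IMPLEMENTATION.
  METHOD constructor.
    super->constructor( ).
    " SE91 message ZCW_COMMON 124 is a made-up example
    if_t100_message~t100key = VALUE #( msgid = 'ZCW_COMMON'
                                       msgno = '124' ).
  ENDMETHOD.
ENDCLASS.

When such an exception is caught, the MESSAGE statement shown below re-uses that T100 key, so the navigation steps from Rule #1 keep working.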

 

Please notice that in this case once the exception has been caught:

 

catch cx_salv_x_msg into data(lo_cx).

 

you should output the message not like this:

message lo_cx->get_text( ) type 'S'.

 

but like this:

message lo_cx type 'S'.

 

In case you only have an abstract cx_root instance, you can try the approach described in the following post:

Advanced navigation to a source code from the message long text.

 

Rule #5. Use long text explanation.

Although source code navigation is important, don't forget that the main goal of our message is to explain the reason for the error to the end user. Properly documented software lets users sort the problem out without contacting the support team at all, which automatically moves your software to the next level of quality.

 

Therefore remove the self-explanatory checkbox and provide some key details on how the user can get rid of this message on their own.

 


By following these simple rules you can provide a much better solution.


 

I hope you liked it.

 

Adios.

Advanced navigation to a source code from the message long text.

$
0
0

Hi again.

 

In the previous post I described the basic concepts of programming using SE91 messages

 

How to use messages properly in the code. Common rules.

 

If you are used to OO programming, your logic probably relies on class-based exceptions.

 

In most cases I would choose the IF_T100_MESSAGE variant to explain the reason for the error (Rule #4).

 

Meanwhile, sometimes you have foreign code that you are not supposed to modify, and this code raises an exception.

 

Now we are talking about the case where you want to output the message immediately. To keep it abstract, let's just use a cx_root example.

 

If you go the easiest way:

try.
    do_something( ).
  catch cx_root into data(lo_cx).
    message lo_cx->get_text( ) type 'I'.
endtry.

you will get the popup:

Снимок.PNG

 

but unfortunately the F1 button won't work here. The debugger is on you, my friend.

 

But let's just imagine that we press F1 and get documentation like this:

 

Снимок.PNG

and when we click the "Navigate to source" link we go directly to the source code where the exception was raised:

Снимок.PNG

 

Pretty cool, isn't it?! =)

 

Let's see how many actions we need for this. As I said before, I wanted to reuse the standard SAP UI without creating my own screen.


1. We need 3 SET/GET parameters.


Go to SE80.


Edit object (Shift+F5) -> Enhanced options -> SET/GET parameter ID -> type zcw_nav_prog -> Create (F5).


repeat these steps for zcw_nav_incl and zcw_nav_line parameters.

 

2. Go to SE38 and create a very simple program:

 

program zcw_navigate_to_source.
parameters:
  p_prog type syrepid memory id zcw_nav_prog,
  p_incl type syrepid memory id zcw_nav_incl,
  p_line type num10   memory id zcw_nav_line.

start-of-selection.

  /iwfnd/cl_sutil_moni=>get_instance( )->show_source(
      iv_program    = p_prog              " Source Program
      iv_include    = p_incl              " Source Include
      iv_line       = conv #( p_line )    " Source Line
      iv_new_window = ''                  " New Window
  ).

I really hope you have this component. If not, you can find something similar in the where-used list for the 'RS_ACCESS_TOOL' function module.

 

3. Create ZCW_NAV_SRC transaction in SE93.


Choose report transaction and assign ZCW_NAVIGATE_TO_SOURCE report to it.

 

4. We need a real SE91 message.


Just create some message with the text &1&2&3&4. Remove self-explanatory flag and go to long text.


Put the cursor where you wish to place a link ->Insert menu -> Link


Choose "Link to transaction and skip first screen" as Document class, use the transaction from step 3.


"Name in Document" is the real text that you see on the screen like "Navigate to source".


5. Now we're ready to code.

try.
    do_something( ).

  catch cx_root into data(lo_cx).

    " get source code position
    lo_cx->get_source_position(
      importing
        program_name = data(lv_prog)   " ABAP Program: Current Main Program
        include_name = data(lv_incl)
        source_line  = data(lv_line)
    ).

    " it's not possible to store an integer as a parameter value
    data(lv_line_c) = conv num10( lv_line ).

    " export parameter values
    set parameter id 'ZCW_NAV_PROG' field lv_prog.
    set parameter id 'ZCW_NAV_INCL' field lv_incl.
    set parameter id 'ZCW_NAV_LINE' field lv_line_c.

    types:
      begin of message_ts,
        msgv1 type bal_s_msg-msgv1,
        msgv2 type bal_s_msg-msgv2,
        msgv3 type bal_s_msg-msgv3,
        msgv4 type bal_s_msg-msgv4,
      end of message_ts.

    " parse our string into message format
    data(ls_message) = conv message_ts( lo_cx->get_text( ) ).

    " Output - don't forget we always use a static message definition.
    " Use the message created in step 4 here.
    message id 'ZCW_COMMON' type 'I' number 124
      with ls_message-msgv1
           ls_message-msgv2
           ls_message-msgv3
           ls_message-msgv4.
endtry.

That's it! What I actually did - I put this handling logic into a minimalistic method ZCL_MSG=>CX( lo_cx ) and actively use it in my code.

 

I hope you enjoyed it.

 

Petr.

 



The Journey of an E-Bite author

$
0
0

Later this month SAP Press will be introducing its new E-Bite publication format to the SAP community. These small electronic books concentrate on a specific topic that, due to practical constraints, is covered only generally in books where the topic is just one of a larger collection, at best having its own chapter and at worst being reduced to only a few pages. E-Bites overcome this limitation, taking a deep dive into the details of a specific subject and thoroughly exploring the nuances of its associated concepts, and I am honored to be included as one of the authors of this inaugural release of the E-Bites series.


I first learned about E-Bites during a conversation in late March 2015 with SAP Press editor Kelly Weaver, who was aware I was an ABAP programmer and, through an exchange of emails, had become familiar with some of my articles on Agile Software Development. She explained to me that she felt I had a good, clear writing style and that perhaps I would be interested in becoming an E-Bite author on one of the ABAP topics being considered for publication. I was flattered at what I considered such high praise coming from a representative of a well-regarded publishing company, but initially declined the invitation to write an E-Bite. 


A week later I called Kelly saying I had reconsidered and thought I could do an acceptable job on a book about using regular expressions with ABAP, one of the topics she had mentioned in our previous conversation. Thus began my journey as an E-Bite author.


The journey begins


After my initial chat with Kelly I had begun searching the internet for articles dealing with regular expressions, finding little of anything aimed at beginners. This, I thought, could explain why so many programmers avoided the use of regular expressions – there was no easy way to learn about the concept, and the dearth of such information is what prompted me to contact Kelly and accept the challenge of filling this void via E-Bite. Integrating regular expressions into ABAP programs would require the developer to be familiar not only with ABAP syntax but also with the syntax associated with the regular expression language, a syntax so cryptic it is suspected of causing headaches, stomach cramps and cases of glazed eyes, so no wonder it is shunned by programmers unfamiliar with it.


I spent my spare time over the next few weeks thoroughly researching the subject and writing sample ABAP programs illustrating the use of regular expressions, composing my initial E-Bite draft as I went along. At first I was not convinced I could manage to fill the 50 to 100 pages of text recommended as the size of an E-Bite, but soon found it necessary to eliminate content that would have caused the book to exceed this high limit.  Paul Hardy, in his superb account of his experience writing the book ABAP To The Future (http://scn.sap.com/community/abap/blog/2015/03/27/my-monster-its-alive-its-alive), also expresses his initial panic with not being able to identify enough topics to fill 15 chapters of a book, but then over time identifying more than 15 topics and having to decide what to leave out.


I found that writing about regular expressions caused me to learn much more about the subject, and eventually I found a way to introduce programmers to its language syntax in small, manageable bites, hoping to avoid the anxiety many might experience while trying to learn this on their own. At long last I had a complete draft I felt could convey the necessary concepts to seasoned programmers who were new to regular expressions.


Overcoming technical difficulties


Over the past few years I had been writing articles using the LibreOffice Writer application running on the Ubuntu operating system. These files are saved using the "open document text" format, the file extension for which is ".odt". Naturally, I intended to use the same application for the E-Bite draft. The folks at Rheinwerk Publishing, Inc., however, required a book draft to be formatted using a Microsoft Word template and the file saved in the ".doc" format.


I did not own a copy of the Microsoft Word application, and we agreed at the start to find a way to exploit the features both applications had in common, and to persevere and resolve any problems as we encountered them. This was unexplored territory for all of us, and we were learning as we moved through the process: the E-Bite book format was new and had yet to be tested in the marketplace, and this seemed to be the first time Rheinwerk Publishing, Inc. dealt with an author using an open source document editor. To their credit, the technicians at Rheinwerk Publishing, Inc. created for me a LibreOffice Writer template equivalent to the one used with Microsoft Word, with detailed instructions on how to make it available during editing sessions.


My editor, Hareem Shafi, and I soon discovered many of the incompatibilities between Microsoft Word and LibreOffice, but eventually we found a way to overcome the challenges presented by these different applications. Hareem was very patient with the difficulties we were experiencing, and I commend her for the magnificent job she did wrestling my draft into submission. In some ways I felt we were trailblazers helping to establish a process by which open office documents could be used as the basis for future book drafts.


The dawn of a new day


Now that the work of writing the book is completed, I feel privileged that this E-Bite will accompany other E-Bites in the first release using this new book format. With its potential for providing a book on a narrowly focused topic without having to acquire a book also dealing with a host of other concepts, perhaps this new E-Bite format will appeal to the SAP community.


Jim


https://www.sap-press.com/


SAP Fiori & SAP Business Workflow - Generic Decision App

$
0
0

Hi everybody,

 

I'm working a lot with SAP Fiori applications (SAPUI5) and created a simple application that integrates with SAP Business Workflow.

 

I wanted to create an application that allows a simple user decision, basically a question with yes/no answers. I ended up creating a customizable application which dynamically builds the selection options and the question(s) to be answered.

 

Now I would like to quickly share what I did for this application and the experience I gained.

 

  1. Created a simple one-step workflow, without any start events. I wrote a simple report of about 20 lines of code to start a workflow instance from a SAP GUI transaction (see the sketch after this list).
  2. Created a customizing table with scenario information to determine the question to be asked and the possible answers.
  3. Created a NW Gateway service to read the customizing tables and to process workflow items.
  4. Checked everything with SWIA to look at the work item container and other elements.
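The starter report mentioned in step 1 can be as small as this sketch (the workflow template ID and the container element are placeholders):

" Start one instance of the decision workflow via the standard WAPI call.
REPORT zstart_decision_wf.

DATA: lt_container TYPE STANDARD TABLE OF swr_cont,
      lv_rc        TYPE i,
      lv_workitem  TYPE sww_wiid.

lt_container = VALUE #( ( element = 'SCENARIO' value = '0001' ) ).

CALL FUNCTION 'SAP_WAPI_START_WORKFLOW'
  EXPORTING
    task            = 'WS99900001'      " the one-step decision workflow
  IMPORTING
    return_code     = lv_rc
    workitem_id     = lv_workitem
  TABLES
    input_container = lt_container.

WRITE: / 'Return code:', lv_rc, 'Work item:', lv_workitem.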

 

  • I wanted to keep everything as simple as possible, and with SAP Fiori and SAP Business Workflow it was really easy to achieve a simple solution.
  • The SAPUI5 framework is quite stable and, after getting into it, nice to work with.
  • SAP Business Workflow will be my choice if I have to create business processes which are supported by a workflow engine.
  • Programming SAPUI5 and ABAP in Eclipse works quite nicely.

 

How this app could be extended (what I did not implement):

  • User assignment - I did not provide a way to select the next user/org. unit. I have already implemented many rule FMs and other logic to determine a user or org. unit, so this was not a key learning I wanted to have, and therefore I skipped it.
  • Attachments - I did not implement an attachment list or any other link to a document management system. I have already implemented several ICF services and multiple SAPUI5 applications including document/attachment handling, so I skipped it in this application.
  • Business data - I did not implement a generic business data visualization area to display business data in a generic way, although with SAPUI5 and NW GW this can easily be achieved.

 

The end result is a simple, customizable application for generic decisions that does not require huge development efforts and that will be used in a future SAP project at our enterprise.


I wish you all a nice day.


Regards,

Michael

Tables Parameter is not a bad option always..!!

$
0
0

Hi SDN mates,

While writing an RFC today I came across some interesting stuff (an example of how much SAP concentrates on performance) worth sharing!

 

Scenario :

So here it goes .. -- ) :

 

Untitled.png

Well, you will get this warning popup (labeled as information) when declaring a parameter to hold internal table data as an Import, Export, or Changing parameter of an RFC-enabled FM.

 

So, what does SAP suggest?

You should declare it as a Tables parameter (but Tables has already been marked obsolete).

 

Sounds strange, right?

Probably yes, if you do not know the reason!

 

Reason :

So, I searched for the root cause :

This Information / Check was included with OSS 736660 - RFC: Implementing performance checks in transaction SE37


For releases lower than 7.2 or 7.0 EHP2, SAP uses an internal binary format for flat types and Tables parameters, and xRFC is used for deep parameters, as per the protocol defined for communication between systems in the case of RFC.


For releases 7.2 and higher, or higher than 7.0 EHP2, SAP uses basXML (Binary ABAP Serialized XML), which is again expected to change in coming releases.


In terms of performance, the internal binary format is the fastest, followed by basXML, and then xRFC.



Prevention :

So, if you do not want this popup and you are on a higher release supporting basXML,

 

Do the following changes :

1. Specify the Transfer protocol as basXML in SM59,

   

Untitled.png
2. Tick the "basXML supported" checkbox in SE37.

 

Untitled.png

 

Please note: if your RFC FM has only flat parameters, then basXML will result in a loss of performance. It helps you achieve better performance only if your RFC FM has many complex (deep) parameters.

 

Suggestion :

Kindly come up with additions, comments, or suggestions that can add further value to this article.

 

Thanking You All..!!

ABAP Trapdoors: Size Does Matter

$
0
0

Welcome to another ABAP Trapdoors article. If you are interested in the older articles, you can find a link list at the bottom of this post.

 

There are various ways to handle XML data in ABAP, all of them more or less well-documented. If you need a downwards-compatible event-based parsing approach, for example, you might want to use the iXML library with its built-in SAX-style parser. (Note that iXML still constructs the entire document, so it's more like a DOM parser with a SAX event output attached to it. If you're looking for a strictly serial processing facility, check out the relatively new sXML library instead.)

 

The iXML documentation has a, let's say, distinctive writing style, and the library proudly distinguishes itself from the remaining ABAP ecosystem (for example, by using zero-based indexes instead of one-based lists in various places), but all things considered, it's a viable and stable solution. That is, if you observe the first rule of SAX: Size Does Matter. Consider the following example:

 

REPORT ztest_ixml_sax_parser.
CLASS lcl_test_ixml_sax_parser DEFINITION CREATE PRIVATE.  PUBLIC SECTION.    CLASS-METHODS run.
ENDCLASS.
CLASS lcl_test_ixml_sax_parser IMPLEMENTATION.  METHOD run.    CONSTANTS: co_line_length TYPE i VALUE 100.    TYPES: t_line   TYPE c LENGTH co_line_length,           tt_lines TYPE TABLE OF t_line.    DATA: lt_xml_data       TYPE tt_lines,          l_xml_size        TYPE i,          lr_ixml           TYPE REF TO if_ixml,          lr_stream_factory TYPE REF TO if_ixml_stream_factory,          lr_istream        TYPE REF TO if_ixml_istream,          lr_document       TYPE REF TO if_ixml_document,          lr_parser         TYPE REF TO if_ixml_parser,          lr_event          TYPE REF TO if_ixml_event,          l_num_errors      TYPE i,          lr_error          TYPE REF TO if_ixml_parse_error.    DATA: lr_ostream TYPE REF TO cl_demo_output_stream.    " prepare the output stream and display    lr_ostream = cl_demo_output_stream=>open( ).    SET HANDLER cl_demo_output_html=>handle_output FOR lr_ostream.    " prepare the data to be parsed    lt_xml_data = VALUE #( ( '<?xml version="1.0"?>' )                           ( '<foo name="bar">' )                           ( '  <baz number="1"/>' )                           ( '  <baz number="2"/>' )                           ( '  <baz number="4"/>' )                           ( '</foo>' ) ).    " determine the size of the table - since the lines have a fixed length, that should be easy    l_xml_size = co_line_length * lines( lt_xml_data ).    " initialize the iXML objects    lr_ixml = cl_ixml=>create( ).    lr_stream_factory = lr_ixml->create_stream_factory( ).    lr_istream = lr_stream_factory->create_istream_itable( table = lt_xml_data                                                           size  = l_xml_size ).    lr_document = lr_ixml->create_document( ).    lr_parser = lr_ixml->create_parser( stream_factory = lr_stream_factory                                        istream        = lr_istream                                        document       = lr_document ).    lr_parser->set_event_subscription( if_ixml_event=>co_event_attribute_post +                                       if_ixml_event=>co_event_element_pre +                                       if_ixml_event=>co_event_element_post ).    " the actual event handling loop.    lr_ostream->write_text(        iv_text   = 'iXML Parser Events'        iv_format = if_demo_output_formats=>heading        iv_level  = 1    ).    DO.      lr_event = lr_parser->parse_event( ).      IF lr_event IS INITIAL. " if either the end of the document is reached or an error occurred        EXIT.      ENDIF.      CASE lr_event->get_type( ).        WHEN if_ixml_event=>co_event_element_pre.          lr_ostream->write_text( |new element '{ lr_event->get_name( ) }'| ).        WHEN if_ixml_event=>co_event_attribute_post.          lr_ostream->write_text( |attribute '{ lr_event->get_name( ) }' = '{ lr_event->get_value( ) }'| ).        WHEN if_ixml_event=>co_event_element_post.          lr_ostream->write_text( |end of element '{ lr_event->get_name( ) }'| ).      ENDCASE.    ENDDO.    " error handling    l_num_errors = lr_parser->num_errors( ).    IF l_num_errors > 0.      lr_ostream->write_text(          iv_text   = 'iXML Parser Errors'          iv_format = if_demo_output_formats=>heading          iv_level  = 1      ).      DO l_num_errors TIMES.        lr_error = lr_parser->get_error( sy-index - 1 ). " because iXML is 0-based        lr_ostream->write_text( |{ lr_error->get_severity_text( ) } at offset { lr_error->get_offset( ) }: { lr_error->get_reason( ) }| ).      ENDDO.    ENDIF.    lr_ostream->close( ).  ENDMETHOD.
ENDCLASS.
START-OF-SELECTION.  lcl_test_ixml_sax_parser=>run( ).

You can copy this program into your system and execute it, it doesn't do anything harmful: It simply assembles a simple XML document (in a real application, you would get this from a file, a database, a network source - whatever), constructs an input stream around it, passes it to a parser and executes a parse-evaluate-print-loop until either the end of the output is encountered or something bad happens.

 

If your system is a non-unicode (NUC) system (you can easily check if this is the case using System --> Status), the program will run just fine, producing an output similar to the following image:

 

OutputNormal.png

 

If your system happens to be a unicode (UC) system, the program won't behave quite the same way - you will get a rather nondescriptive error message (error at offset 0: unexpected symbol; expected '<', '</', entity reference, character data, CDATA section, processing instruction or comment).

 

OutputError.png

 

It certainly does not help that the parser does not return an offset (or a line and column number) when assembling the error message. However, the events logged prior to the error messages provide a hint: The error always occurs after half of the lines of the table have been processed. You can easily verify this by changing the number of baz elements in the sample above. Since I've already mentioned that this issue occurs on UC systems only, it's now easy to deduce what went wrong here:

 

iXMLInterface.png

 

The iXML stream factory expects the size to be the number of bytes, not the number of characters. The code works as long as a character is represented by a single byte, but in UC systems, that's not the case. The solution - or maybe one of the solutions - is relatively simple:

 

    " determine the size of the table for both UC and NUC systems    l_xml_size = co_line_length * lines( lt_xml_data ) * cl_abap_char_utilities=>charsize.

This trapdoor is a rather devious contraption because it will not be detected by the standard unicode checks and the error message is about as misleading as it can get. Also, whether you get to see the message at all depends on the actual implementation of the parsing program. If the original developer thought that error handling might be left to be implemented by those who follow - well, it's a long way down...

 

Older ABAP Trapdoors articles


Let's chat...

$
0
0

                                                                scn.jpg

Hi,

 

Every day we use lots of applications to share our information (personal, professional, etc.). But what about business information, our work information, process information? It could be information of any type. Let's take an example: sometimes an end user has to inform another business user about a created document, and then he makes a call or sends a mail to deliver that information.

Here I have made a simple program, a kind of chat program, where a user can see who is logged in to the system, send them a message instantly and have a conversation without making any call or writing any mail.

And for privacy, the program stores every chat message in a custom table for later use.

In the steps below, I explain the logic and the steps involved.

 

    Initial Screen:

I made initial screen size same as other chat application.

                                                  chat.JPG

In the above screen, two users are logged in. On this screen I use the CL_GUI_TIMER class, which checks the online user information every second (the interval value is set in the class attributes). The login chat screen is only refreshed if a new user appears or a user disappears.
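The polling part is roughly this (a sketch; the handler class and method are named here only for illustration):

" CL_GUI_TIMER raises the event FINISHED after INTERVAL seconds; the
" handler refreshes the user list and then restarts the timer.
DATA lo_timer TYPE REF TO cl_gui_timer.

CREATE OBJECT lo_timer.
lo_timer->interval = 1.                               " seconds
SET HANDLER lcl_chat=>on_timer_finished FOR lo_timer.
lo_timer->run( ).

" Inside on_timer_finished: rebuild the online-user list, refresh the
" HTML view only if it changed, then call lo_timer->run( ) again.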

 

If any new user logs in or any user leaves the system, the respective user information is added to or removed from the above screen.

This means the above screen only shows the logged-in/online users or bookmarked users.

 

In the logon exit, logic is maintained to check whether the user is already using the chat program, or whether the user's SAP GUI crashed for some reason; in that case the old login information is deleted from the custom table.

 

Exit Information: SUSR0001 [User exit after logon to SAP System].

The attached code is in ZXUSRU01.txt.

 

Make the first chat:

To chat with a user, select any user from the above chat screen.

After selecting a user, a chat screen will open.

 

              chat screen.JPG

The selected user's name becomes the title of the screen, as displayed above (I have selected Rohan Sen).

To send a message to the selected user, enter the respective text in the open text window.

              enter.JPG

 

After pressing the 'ENTER' button, the system checks whether the selected user is logged in and whether their chat screen is open.

If the chat screen is not open for the selected user, a popup message is shown on the selected user's screen, as below.

 

              popmsg.JPG

 

Used Function Module: TH_POPUP
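For reference, the popup call is essentially this (user name and text are examples):

" TH_POPUP shows a short popup on the receiver's GUI session if that
" user is currently logged on.
CALL FUNCTION 'TH_POPUP'
  EXPORTING
    client         = sy-mandt
    user           = 'ROHANSEN'
    message        = 'New chat message received'
  EXCEPTIONS
    user_not_found = 1
    OTHERS         = 2.

IF sy-subrc <> 0.
  " Receiver is not reachable - only store the message in the chat table.
ENDIF.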


Otherwise the message is stored in the chat table.

After executing the chat transaction, the user gets the initial screen (as above); if the user has received a message from someone, a blue chat icon is displayed in place of the yellow chat icon, as in the screen below.

Suppose the user is not online: then all received messages are stored in the chat table, and when the user logs in to the system, a popup message box appears (see the screen below) with the message sender and date information.

    

                    message receive.JPG

A user can only send messages to offline users if they have been bookmarked.

 

 

Message Received

 

When both users are using the application and sharing information, the view is as below.

 

    chat start.jpg

 

It is the same as other chat screens: the user's entries appear on the right side of the chat screen and the receiver's entries on the left side (as above), with other necessary information like time and date.

If the date is the current date, it is displayed as "today".

 

One more tool is provided to see earlier conversations.

 

          erlier conversation.JPG

By clicking the above button, the next available earlier day's conversation is displayed in the chat screen, as below.

 

            earlier load.JPG

As you can see, when the conversation grows, a scroll bar appears automatically (the red area in the screen above); it is very thin and is only displayed when the user scrolls the screen.

Still, only the current conversation is shown on the screen, meaning the scroll bar always starts at the bottom of the chat screen.

 

 

Technical Information:

Although the whole solution is based on HTML, CSS and JavaScript, some additional settings are required on the SAP GUI / local system side.

 

1. SAP GUI installation - please see the screen below for the required SAP GUI components.

              SAPGUI.JPG

    For more information, please go through the discussion below.

    https://scn.sap.com/thread/3782213

 

2. JavaScript settings - internally SAP uses IE in the CL_GUI_HTML_VIEWER class; to make the JavaScript functionality available, enable the corresponding settings in IE.

 

    IE setting.JPG

    IE-Setting->Advance Tab


3. MIME setting

          A new folder was created inside SAP -> PUBLIC -> SAP_CHAT and the jpg and png files were imported from the attached zip file.



All respective programs, includes and classes are in the attached zip file in the Google Drive link below for further analysis.

 

SAP Communicator

 

Program Name : zcommunication

Class Name    : zcl_communication

Tables Name  : zcom_books (bookmark information)

                        zcom_run    (Communicator execution information)

                        zcommun    (Current Communication Information)

                        zcommun_his (Communicator history)


Thanks & Regards,

Praveer


Spool List with blank line

$
0
0

We found that the list spool contained one blank line in every 10 lines. This is because of the behaviour of the SAP standard FM 'RSPO_GET_LINES_AND_COLUMNS' in NW 740 (SAPKB74010). Since the default layout X_PAPER sets lines to 10 and the following condition is not met, lines stays at 10 and the spool is listed with a blank line every 10 lines (i.e. a 10-line page):

 

    if lines < 5.

     lines = 0.

    endif.

 

 

In NW 731 the parameter show_realheight is always false, so lines becomes 0 and the spool does not show the blank line.

 

    if lines < 5 OR show_realheight = abap_false.

     lines = 0.

    endif.

 

 

 

In order not to show the blank line in the spool, one possible solution was to modify the CFI content builder and not use the default layout X_PAPER but X_SPOOLERR instead. However, this solution would probably require very heavy regression testing, and I am not sure whether it is worth it.

 

 

Applying SAP Note 2169148 resolved the issue for us.

 

 

Cheers,

 

 

Dan Mead

How to add Generic Object Services to your context menus

$
0
0

Quick intro: This post is part of a series in which I show you some interesting ABAP tips and tricks. I'll present this in the context of our own developments at STA Consulting to have a real-life example and to make it easier to understand.

 

Requirement: we have a Business Object displayed in a field of an ALV grid and we want to add the Generic Object Services to our custom context menu.

 

Background information:

 

Business Objects

 

In practically all ALV grids you will find fields that contain Business Objects (BO to keep it short). For example, a Plant, a Vendor, a Material is a BO defined by SAP. You can display BOs using transaction SWO1. Our example will be BUS1001 (Material).

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_01_resized.jpg

 

In order to uniquely identify a BO, there is a link to at least one field of a database table. BO BUS1001 is linked to MARA-MATNR, which is the unique identifier of a material.

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_02_resized.jpg

Click on the image above to see the full screenshot

 

This makes it easy to identify whether an ALV field contains a BO or not: simply check the field catalog of the ALV. If there is a reference to a table field that is also referenced by a BO, we can add the GOS menu to it.
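As a simplified sketch (lo_grid and the hard-coded reference field are placeholders; the real solution determines the table/field-to-BO mapping dynamically):

" Read the ALV field catalog and look for a column whose reference
" field is MARA-MATNR - such a column carries BUS1001 keys.
DATA lt_fcat TYPE lvc_t_fcat.

lo_grid->get_frontend_fieldcatalog( IMPORTING et_fieldcatalog = lt_fcat ).

READ TABLE lt_fcat ASSIGNING FIELD-SYMBOL(<ls_fcat>)
     WITH KEY ref_table = 'MARA' ref_field = 'MATNR'.
IF sy-subrc = 0.
  " <ls_fcat>-fieldname contains material numbers, so the GOS menu applies.
ENDIF.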

 

Generic Object Services (GOS)

 

GOS is a very useful standard tool that allows us to do certain things with BOs. You can add notes and attachments, start and display workflows, link BOs together, send BOs as attachments in messages etc. I'm sure you've seen the classic toolbar menu of GOS in many transactions like MM03:

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_04_resized.jpg

 

Why is it needed?: The basic reason we made this development is that the GOS menu is only available in certain transactions. For example, if you want to attach a file to a material, you have to launch MM03. In order to do this, you have to open a new window, copy-paste the material number, hit enter etc. It would be great to attach the file in the transaction you are in.

 

Solution: let's assume that we have already identified which ALV field contains the material number. After this, we will use a standard class to add the GOS menu to our context menu.

 

First declare and create the object:

 

DATA: lo_gos TYPE REF TO cl_gos_manager.
CREATE OBJECT lo_gos
  EXPORTING
    ip_no_commit = 'R'
  EXCEPTIONS
    others       = 1.

It is important to add parameter ip_no_commit to control database commits made by GOS, which may interfere with the current program. Space and 'X' are pretty trivial, 'R' means that updates will be performed using an RFC call. Naturally you have to add your own error handling in case there was any error.

 

The next step is to get the GOS menu as a context menu object. We have to supply the BO type and the BO key (BUS1001 and the material number the user right-clicked on):

 

DATA: lo_gos_menu TYPE REF TO cl_ctmenu,
      ls_object   TYPE borident.

ls_object-objtype = 'BUS1001'.
ls_object-objkey  = lv_matnr.

CALL METHOD lo_gos->get_context_menu
  EXPORTING
    is_object = ls_object
  IMPORTING
    eo_menu   = lo_gos_menu.

The object reference received in parameter eo_menu will be exactly the same as in the toolbar of MM03.

 

The last step is to add this context menu to the context menu of the ALV grid. There are hundreds of forum posts about creating your custom context menus so I won't elaborate it here. There is a standard demo program where you can check it out: BCALV_GRID_06. The bottom line is that you will have a context menu object that you can manipulate:

 

CALL METHOD lo_alv_context_menu->add_submenu
  EXPORTING
    menu = lo_gos_menu
    text = text-027.     " Generic Object Services

The end result will look like this (we have actually added the GOS menu under our own nested submenus "STA ALV Enhancer - Material"):

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_05.jpg

Click on the image above to see the full screenshot


Conclusion: This is pretty useful because now you can access the GOS in any ALV you want. Naturally if you attach a file using this context menu, it will be visible in MM03 and vice versa.

 

I hope you liked this first post, there are lots more things to come. Have a nice day!

 

p.s.: Actually it is possible to dynamically add this menu to all BOs in ALVs of all standard and custom reports, so 'BUS1001' it is not hardcoded...

 

Easy and efficient way of uploading pricing conditions in SAP system using a single exclusively designed program

$
0
0

Introduction

In order to upload pricing conditions in SAP system, we need to create a conversion program which caters to all condition tables and uploads the respective data. Now since all condition tables have different structures/key fields/fields, the BDC approach can’t solve the problem unless we do recordings for all Condition Types. Also in case a new condition type is added, then we need to add a new recording to the code, which increases the development/maintenance hours.

Each time pricing conditions need to be uploaded into the SAP system, a technical resource is required to create a conversion program which uploads the data. This tool helps in uploading the pricing conditions for the SD and MM modules, thereby eliminating repeated manual intervention.


Solution Details

As the requirement is to upload any condition table, we have to design a solution which caters to all condition type uploads.

So in order to make it generic for all condition types, we expect the condition table name to be part of the upload fields. Using this table name, we fetch the schema of the corresponding condition table and map the fields dynamically. We also apply the conversion exits to the various field values, using the field information returned for each table.

 

For example, say the condition table name coming in the upload file is A652. We use FM "CATSXT_GET_DDIC_FIELDINFO" to get the details of the table fields. As a result from this FM, we get all the fields with attributes such as the following (see the sketch after this list):

  • Key Flag (denotes whether the field is a part of Primary Key or not)
  • Domain Name
  • Data Element Name
  • Check Table Name
  • Length
  • Field Labels
  • Conversion Exit, etc..
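The same attributes can also be read with the standard dictionary FM DDIF_FIELDINFO_GET, which returns one DFIES row per field (a sketch, independent of the tool's actual implementation; the table name is hard-coded only for illustration):

DATA lt_fields TYPE STANDARD TABLE OF dfies.

CALL FUNCTION 'DDIF_FIELDINFO_GET'
  EXPORTING
    tabname   = 'A652'
  TABLES
    dfies_tab = lt_fields
  EXCEPTIONS
    not_found = 1
    OTHERS    = 2.

" Each DFIES row contains KEYFLAG, DOMNAME, ROLLNAME, CHECKTABLE,
" LENG, FIELDTEXT and CONVEXIT - the attributes listed above.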

 

The basic common fields in upload structure can be:-

  • KAPPL (Application)
  • KSCHL (Condition Type)
  • TABLE (Condition Table Name)

 

Now the point is how we map the data from the file to the different condition tables, as each table has a different structure and a varying number of fields.

Any database table can have at most 16 fields as part of its primary key, and there are 5 fields which are common to all A* tables:-

  • MANDT (Client)
  • KAPPL (Application)
  • KSCHL (Condition type)
  • KFRST (Release status)
  • DATBI (Validity end date of the condition record)

 

So the remaining number of key fields is 11 (16 - 5), and the upload structure of the file has 11 generic fields. We therefore keep FLD1-FLD11 of type FIELDNAME (Char 30).

 

The other fields in the upload file structure (common to all condition types) are:-

  • DATAB (Start Date of Condition Record)
  • DATBI (End Date of Condition Record)
  • KBETR (Condition Value)
  • KPEIN (Condition Price Unit)
  • MEINS (Unit of Measurement)
  • KRECH (Calculation Type for Condition)

 

So the final upload structure is:-

Field Name   Data Type    Description
KAPPL        KAPPL        Application
KSCHL        KSCHL        Condition Type
TABLE        TABNAME      Table Name
FLD1         FIELDNAME    Field Name
FLD2         FIELDNAME    Field Name
FLD3         FIELDNAME    Field Name
FLD4         FIELDNAME    Field Name
FLD5         FIELDNAME    Field Name
FLD6         FIELDNAME    Field Name
FLD7         FIELDNAME    Field Name
FLD8         FIELDNAME    Field Name
FLD9         FIELDNAME    Field Name
FLD10        FIELDNAME    Field Name
FLD11        FIELDNAME    Field Name
DATAB        KODATAB      Validity start date of the condition record
DATBI        KODATBI      Validity end date of the condition record
KBETR        KBETR_KOND   Rate (condition amount or percentage) where no scale exists
KPEIN        KPEIN        Condition pricing unit
MEINS        MEINS        Base Unit of Measure
KRECH        KRECH        Calculation type for condition


Now since in every A* table first 3 key fields are:-

  • MANDT
  • KAPPL
  • KSCHL

(NOTE: other 2 fields KFRST and DATBI are the last 2 key fields)

And first 3 fields in upload file are:-

  • KAPPL
  • KSCHL
  • TABLE

 

So the rest of the key fields of the A* table are mapped to the upload file fields FLD1-FLD11, based on the number of primary key fields. Thus we start mapping from field 4 of the condition table to FLD1, FLD2 and so on up to FLD11.

In case we have 3 more key fields (excluding the 5 common key fields), the upload file will have values in fields FLD1, FLD2 and FLD3. If any other field has a value, the record is erroneous; nor can these 3 fields be blank (as they are part of the primary key).


For instance, let us consider the table A652 (refer to the snapshot in the attachments).

The mapping of upload file to Condition Table would be like:-

1.      FLD1 --> VBELN

2.      FLD2 --> MATNR

3.      FLD3 --> VRKME

4.      FLD4 -to- FLD11 would remain as blank.

 

Also, the data coming in these fields should form a continuous chain.

For instance, if FLD1, FLD2 and FLD4 have values and FLD3 is initial, this record is also erroneous.

 

1.      In case of the above erroneous situation, an error message “Discontinuity in Variable Key Fields” is appended.

 

2.      Validate the Processing Status from table T686E. In case no valid record found then append an error message “Invalid Processing status for conditions”.

 

3.      Now, using the field information returned from FM "CATSXT_GET_DDIC_FIELDINFO", check whether a conversion exit is applicable; if so, apply it to the value coming from the upload file field before passing it to the IDoc structures (a sketch follows below). If any error occurs, append the message coming from the conversion exit.
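Applying the conversion exit dynamically can be sketched like this (ls_dfies and lv_value are illustrative names; in real use the output should be typed like the target IDoc field):

" If a conversion exit is assigned, call CONVERSION_EXIT_xxxxx_INPUT
" to convert the external value from the file.
DATA: lv_funcname TYPE funcname,
      lv_value    TYPE char40.

IF ls_dfies-convexit IS NOT INITIAL.
  lv_funcname = |CONVERSION_EXIT_{ ls_dfies-convexit }_INPUT|.
  CALL FUNCTION lv_funcname
    EXPORTING
      input         = lv_value
    IMPORTING
      output        = lv_value
    EXCEPTIONS
      error_message = 1
      OTHERS        = 2.
  IF sy-subrc <> 0.
    " Append the message raised by the conversion exit to the error log.
  ENDIF.
ENDIF.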

 

4.      Check if the field is present in Segment

  • E1KOMG,
  • E1KONH, and
  • E1KONP

If field is found in the segment, then pass the value in the segment(s).

 

5.      Also concatenate the key field values in a string called Variable Key.

 

6.      After all key fields have been covered with the steps specified above, check the length of the variable key. If its length is greater than 100, append the message "Variable Key too big".

 

7.      Get the Logical System Name from table T000 where Client = SY-MANDT. In case no record found then append an error message “No Partner Function Found”.

 

8.      Concatenate ‘SAP’ SY-SYSID to form the Port Number.

 

9.      In case no error is found till now and test run is not requested, then populate the IDoc Segments.

    a. Pass Control Records
       • Pass the sender and receiver information.
       • IDoc type: COND_A04
       • Message type: COND_A
       • Basic type: COND_A04
       • Direction: inbound

    b. Pass Data Records
       • Pass the prepared data into the segments:
         • E1KOMG: Application, Condition Type, Variable Key, Region
         • E1KONH: Start Date, End Date
         • E1KONP: Condition Type, Condition Value, Condition Unit, Condition Price Unit, Calculation Type for Condition

    c. DIRECT POST - post the data using FM "IDOC_INPUT_COND_A"
       • Pass all the prepared data into the FM.
       • If an error is returned, append it to the log.
       • If no error is found and the data is posted successfully, check for status 53. If found, append the success message "Changes done successfully".

    d. IDOC POST - post the data using FM "IDOC_INBOUND_WRITE_TO_DB"
       • Pass the data records into the FM.
       • If an error is returned from the FM, append it to the log to be displayed to the user. If no error is found, commit work and append the message "Idoc successfully posted:" with the IDoc number.


Business benefits

The approach explained above uploads all the relevant condition records into the SAP system for the SD and MM modules using the IDoc approach (which is faster compared to using BDCs or LSMWs).

The only thing crucial for using this tool is to understand the mapping of the condition table to the upload file format. Once the mapping is done and a tab-delimited text file is provided to this program, it uploads the data into the desired tables, thereby saving around 80% of the estimated time. For instance, the typical effort spent in developing a conversion program is 40 hours, versus 8 hours spent using this tool.


In addition, no maintenance is required when a further change request needs to be catered for.

 

Thus this solution minimizes:

  • The functional effort of manually entering condition records one by one.
  • The technical effort of developing conversion programs using BDCs for different condition tables. The number of these conversion programs can vary depending on the conditions to be uploaded.
  • The maintenance effort required as and when new condition types are added.

The best debugging tool - your brain


Introduction

Usually when I blog on SCN I write about some specific development problem and the solution I found for it. In contrast, this blog is about a more abstract topic, namely how to debug code efficiently. While it is quite easy to debug SAP code (the source code of the Business Suite is delivered and readable after all, at least for the applications written in ABAP), debugging a certain problem efficiently is sometimes quite complex. As a result I've seen even seasoned developers getting lost in the debugger, pondering over an issue for hours or days without being close to a solution. In my opinion there are different reasons for this. One, however, is that some special approaches or practices are necessary in order to find the root cause of complex bugs using debugging.

In this blog I try to describe the approaches that have, in my experience, proven successful. However, I'd also be interested in which approaches you use and what your experiences are. Therefore I'm looking forward to some interesting comments.

 

Setting the scene

First I'd like to define what I would classify as complex bugs. In my opinion there are basically two categories of bugs: the simple ones and the complex ones. Simple bugs are all the bugs that you would be able to find and fix with a single debugger run or even by simply looking at the code snippet. For example, copy-and-paste errors or missing checks of boundary conditions fall into this category. By simply executing the code once in the debugger every developer is usually able to immediately spot and correct these bugs.

The complex ones are the ones that occur in the interaction of complex frameworks or APIs. In the SAP context these frameworks or APIs are usually very sparsely documented (if documentation is available at all). Furthermore, in most cases the actual behaviour of the system is influenced not only by the program code but also by several customizing tables. In this context identifying the root cause of a bug can become quite complex. Everyone who has ever tried to, for example, debug the transaction BP and the underlying function modules (which I believe were the inspiration for the Geek & Poke comic below), or even better a contract replication from ERP to CRM, knows what I'm talking about. The approaches I will be discussing in the remainder of this blog are the ones I use to debug in those complex scenarios.

http://geekandpoke.typepad.com/.a/6a00d8341d3df553ef016767875265970b-800wi

Know your tools

As said in the introduction I want to focus on the general approach to debugging in this blog. Nevertheless, an important prerequisite for successful debugging is knowing the available tools. In order to get to know the tools you need to do two things. First, it's important to keep up to date with new features. In the context of ABAP development SCN is a great resource to do so. For example, Olga Dolinskaja wrote several excellent blogs regarding new features in the ABAP debugger (cf. New ABAP Debugger – Tips and Tricks, News in ABAP Debugger Breakpoints and Watchpoints, Statement Debugging or News in ABAP External Debugging – Request-based Debugging of HTTP and RFC requests). Also Stephen Pfeiffer's blog on ABAP Debugger Scripting: Basics or Jerry Wang's blog Six kinds of debugging tips to find the source code where the message is raised are great resources to learn more about the different features of the tools. Besides the debugger, tools like checkpoint groups (Checkgroups - ABAP Development - SCN Wiki) or the ABAP Test Cockpit (Getting Started with the ABAP Test Cockpit for Developers by Christopher Kaestner) can be very useful for identifying the root cause of problems. However, reading about new features and tools is not enough. In my opinion it is important to take some time once in a while to play with the new features you discovered. Only if you have tried a feature in a toy scenario and understood what it is able to do and what not will you be able to use it to track down a complex bug in a productive scenario.
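As a small illustration of the checkpoint group feature mentioned above, here is a minimal sketch; ZDEBUG_DEMO is a hypothetical checkpoint group that would first have to be created and activated in transaction SAAB:

REPORT zcheckpoint_demo.

DATA(lv_total) = 0.

DO 3 TIMES.
  lv_total = lv_total + sy-index.
  " Written to the checkpoint group log only while logging is
  " activated in SAAB; otherwise the statement does nothing.
  LOG-POINT ID zdebug_demo FIELDS sy-index lv_total.
ENDDO.

" Stops in the debugger only while the group is activated as a breakpoint.
BREAK-POINT ID zdebug_demo.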

Besides the development tools there are other important tools you should be able to use. Recently I adopted the habit of answering colleagues who ask me whether I know the cause of a certain bug by first asking whether they have already searched SCN and the SAP support portal. In a lot of cases the answer is no. However, in my opinion searching for hints on SCN and in the SAP support portal should be the first step whenever you encounter a complex bug. Although SAP software is highly customizable and probably no two installations are the same, those searches usually return valuable information. Even if you won't find the complete solution you will at least get hints in which areas the cause of the bug might lie. And last, but not least, an internet search also usually turns up some interesting links.

 

Thinking about the problem...

The starting point for each debugging session is usually an error ticket. Most likely this ticket was created by a tester or a user who encountered an unexpected behaviour. Alternatively the unexpected behaviour could also be encountered by the developer during developer testing (be it automated or manual). In the first case the next step is normally to reproduce the error in the QA system. Once a developer is able to reproduce the error it is usually quite easy to identify the code that causes an error message or an exception (using the tools described in the previous chapter). If no error message or exception but rather an unexpected result is produced, identifying the starting point for debugging can already become quite challenging.

In both cases I recently adopted the habit of not starting up the debugger immediately. Instead I start by reasoning about the problem. In general I start this process off by asking myself the following questions:

  • What business process triggers the error?
    The first question for me is always which business process triggers a certain error. Without a detailed understanding of which business process, in which context, causes an error, identifying the root cause might be impossible.
  • What does the error message tell me?

In the case of a dump this is pretty easy. The details of the dump clearly show what happened and where it happened. However, in the case of an error message the first step should always be to check if a long text with detailed explanations is available. Most error messages don't have a detailed description available. But if a detailed description is available it is usually quite helpful.

Even error messages without detailed descriptions can be very helpful. For example, error messages following the pattern "...<some key value> not available." or "...<some key value> is not valid." usually point to missing customizing. In contrast, a message like "The standard address of a business partner can not be deleted" points to some problem in the process flow. Once one gets used to reading error messages according to these kinds of patterns, they are quite useful for narrowing down the root cause of an error.

  • Which system causes the error?

Even if it seems to be a trivial question, it is in my opinion quite an important one. Basically all software systems in use today are connected to other software systems. So in order to identify the root cause of an error it is important to understand which system (or which process in which system) is responsible for triggering the error. While this might be easy to answer in most cases, there are a lot of cases where answering this question is far from trivial. For example, consider an SAP Fiori application that is built using OData services from different back-end systems.

  • In which layer does the error occur?

Once the system causing an error is identified, it is important to understand in which layer of the software the error occurs. Usually each layer has different responsibilities (e.g. providing the UI, performing validation checks or accessing the database). For example, in an SAP CRM application the error could occur in the BSP component building the UI, the BOL layer, the GenIL layer or the underlying APIs. Understanding on which layer an error occurs helps to take shortcuts while debugging. If the error occurs in the database access layer, it's probably a good idea not to perform detailed debugging on the UI layer.

 

Usually I try to get a good initial answer to these questions. In my opinion it is important to come up with sensible assumptions as answers. If the first answers obtained by reasoning about the error turn out not to be correct, the iterative process described below will help to identify and correct them.

 

...and the code

The next step I take is looking at the code without using the debugger. After answering the questions mentioned in the previous section I usually have a first idea in which part of the software the error occurs. By navigating through the source code I try to come up with a first assumption of what the program code is supposed to do and which execution path leads to the error. This way I get a first idea of what I would expect to see in the debugger and also test the assumptions I have come up with so far.

Note that trying to understand the code might not be a sensible approach in all cases. Especially when dealing with very generic code it is usually far easier to understand what happens using the debugger. Nevertheless, my experience is that first trying to understand the code without the debugger allows me to debug much more efficiently afterwards.

 

Debugging as an experiment

After all the thinking it is time to get to work and start up the debugger. I try to think about debugging as performing an experiment. After understanding the scenario and context in which the error occurs (by thinking about the problem) and getting a first assumption of what the cause of the error might be (by thinking about the code), I use the debugger to test my assumptions. So basically I use the cycle depicted below to structure my debugging sessions.

debugging_as_experiment.png

First I try to think of an "experiment" to test my assumptions about the problem. Usually this is simply performing the business process that causes the error. Especially if an error occurs in a complex business process, however, it might be better to find a way to test the assumptions without performing the whole complex process. The next step is to execute the "experiment" in order to test the assumptions. This basically is the normal debugging everyone is used to. If the root cause of the problem is identified during debugging, the cycle ends here. If not, the final step of the cycle is to refine the assumptions based on the insights gained during debugging. On the basis of the new assumptions we can redesign the experiment and start the cycle over again. In this step it is important to move forward in small increments. If you change too many parameters between two debugging sessions it might be very difficult to identify the cause of a different system behaviour. For example, consider a situation where an error occurs during the address formatting for a business partner. In order to identify the root cause of the problem it might be sensible to first test the code for the address formatting with a BP of type person and after that with a BP of type organization with the same address. This makes it possible to check whether the BP type is part of the formatting problem or not.

 

<F5> vs. <F6> vs. <F7>

During the debug step of the cycle presented above the important question at each statement is whether to hit <F5>, <F6> or <F7> (step into, step over or step out respectively). Using <F5> it is easy to end up deep down in some code totally unrelated to the problem at hand. On the other hand, using <F6> at the wrong position might result in not seeing the part of the source code causing the problem.

In order to decide whether to step into a particular function or method or to step over it, I use a simple heuristic that has proven very useful for me:

  • The more individual a function or method is, the more likely I am to use <F5>.
  • The more widely used a function or method is, the more likely I am to use <F6>.

Using this heuristic basically leads to the following results:

  1. I will almost always inspect custom code using <F5>. The only exception is when I'm sure the function or method is not the cause of the problem.
  2. I will only debug SAP standard code if I wasn't able to identify the root cause of a problem in the custom code.
  3. I will basically never debug widely used standard function modules and methods and instead focus on newer ones (e.g. those delivered recently with a new EhP).

As an example consider an error in some SEPA (https://en.wikipedia.org/wiki/Single_Euro_Payments_Area) related functionality. When debugging this error I would first focus on the custom code around SEPA. If this doesn't lead to the root cause of the error I would start debugging the SEPA-related standard functions and methods as well. The reason is that this code has only recently been developed (compared to the general BP function modules). If I encountered function modules like BAPI_BUPA_ADDRESS_GETDETAIL or GUID_CREATE in the process I would always step over them using <F6>. These function modules are so common that it is highly unlikely they are the root cause of the problem.

Nevertheless it might turn out in rare cases that everything points to a function module or method like BAPI_BUPA_ADDRESS_GETDETAIL as the root cause of an error. In this case I would always check the SAP support portal first before debugging these function modules or methods. As they have been widely used for quite some time it is highly unlikely that I'm the first one encountering the given problem. Only if everything else fails would I start debugging those function modules or methods as a last resort.

 

The right mind set

For all the techniques described before it is important to be in the right mind set. I don't know how often I have heard sentences like "How stupid are these guys at SAP?" or "Have you seen this crappy piece of code in XYZ?". I must admit I might have used sentences like these one or two times myself. However, I think this is the wrong mind set. The developers at SAP are neither stupid nor mean. Therefore, whenever I see something strange I try to think about what might have been the reason to build a particular piece of code a certain way. What was the business requirement they tried to solve with the code? This usually has the nice effect that with each debugging session I learn something new about some particular area of the system. This will help me to identify the root cause of new issues more quickly in the future.

 

And probably the most important technique of all is the ability to take a step back. It has happened to me numerous times already that I was working on a problem (be it a bug or trying to implement a new feature) for a while without any progress. For whatever reason I had to stop what I was doing (e.g. because the night guard walked in and asked me to finally leave the building). After coming back to the problem the next day I quickly found the solution. It then always seemed like I had been blind to the solution the day before. So whenever I get stuck working on a problem I force myself to step back, do something else, and revisit the problem afresh a few hours later.

 

What do you think?

Finally I'd like to hear from you what your approaches to debugging are. Do you use similar practices? Which ones do you find useful in identifying the root cause of complex errors?

 

Christian
