
Why Object Oriented? - Encapsulation


I've been away from coding for some time (on project management tasks), so I haven't been able to write as much as I would like. Recently, though, I had the time to do some coding and was again reminded of the benefits of going object oriented. This time it's encapsulation, something I indirectly wrote about in my previous blog (Why object oriented? - Class Attributes), but I feel the topic is more generic and deserves its own entry.

 

To provide some context, I was doing some work related to material valuation, where I wanted to determine the moving average price for a given plant over the last few years, considering only goods receipts and stock migrations.

 

I had an object for the material movement line item, and I was summing all the values and quantities and then calculating an average. In terms of code I had something like this:

 

DATA(lt_movements) = zcl_movement_item=>get_items_with_filter( filter ).

LOOP AT lt_movements INTO DATA(lo_movement).
  lv_total_quantity = lv_total_quantity + lo_movement->get_quantity( ).
  lv_total_value    = lv_total_value    + lo_movement->get_value( ).
ENDLOOP.

lv_map = lv_total_value / lv_total_quantity.

 


While I was testing the program I discovered a bug related to STOs, where the value of the transfer is posted on the 641 movement rather than the 101. I had to change the GET_VALUE( ) method to take this logic into consideration. If you extrapolate to a situation where GET_VALUE( ) is used in multiple places, you can easily see the benefit of encapsulation.
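A minimal sketch of how such a change stays contained inside the class (the helper methods and the private attributes doc_number/doc_year/doc_item are made up for illustration; the original first version of GET_VALUE( ) was simply a SELECT on MSEG):

METHOD get_value.
  " All callers keep calling get_value( ); only this one place had to learn
  " about the STO special case (value posted on the 641, not on the 101 GR).
  IF me->is_sto_receipt( ) = abap_true.          " assumed helper method
    rv_value = me->get_value_from_641( ).        " assumed helper method
  ELSE.
    SELECT SINGLE dmbtr
      FROM mseg
      INTO rv_value
      WHERE mblnr = me->doc_number
        AND mjahr = me->doc_year
        AND zeile = me->doc_item.
  ENDIF.
ENDMETHOD.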

 

But why is this specific to object oriented programming? I can encapsulate in procedural programming too, right? Yes you can, but with some drawbacks:



     1.   Too Verbose

 

The equivalent procedural code, just for one attribute, would be something like:

 

PERFORM get_value_of_movement_item USING    lv_docnumber
                                            lv_docyear
                                            lv_docitem
                                   CHANGING lv_value.

lv_total_value = lv_total_value + lv_value.


Not only does it take more work to write (and laziness takes over), it's also harder to read.

 

     2.    Lack of general rules

 

If you consider that GET_VALUE( ) only contains (in the first version) a SELECT statement on MSEG, you can easily conclude that many programmers won't bother creating a FORM just for the SELECT; they will write the SELECT directly in the LOOP (forget the FOR ALL ENTRIES discussion please, that's not the point).

 

You can then say "I create a FORM for every DB access", but this is just one example. GET_VALUE( ) could return a calculation based on attributes of the object: lv_value = me->quantity * me->unit_price. Don't try to convince me that a procedural programmer would create a form just for a multiplication.

 

In object oriented programming there are rules that enforce this, so I don't have to think:

  • Every characteristic of the object is accessed through a getter: this prevents me from putting the quantity * net_price outside my class. I use characteristic and not attribute to distinguish between what is a formal attribute of the class and what is a characteristic of the movement line item. For example, in my case the value of the line item was not an attribute of the class;

  • Every DB access must be made inside the respective class: this prevents me from having a rogue SELECT * FROM MSEG somewhere in my code instead of retrieving the value from the class via a getter.

 

I don't have to think about whether GET_VALUE( ) is only a SELECT or 100 lines of code; it has to exist according to OO rules, and the only way to get the value of the movement is through GET_VALUE( ), so there is only one point of access to update.

 

Encapsulation is extremely important because things change: as in my example, what in the beginning was simply a SELECT FROM MSEG later changed into something that had to behave differently for STOs.

 

PS: I know I take a performance hit due to encapsulation, but having scalable and bug-free code is a priority for most of the projects I handle.


Using Google URL Shortener service via ABAP.


Sometimes you need to use a short URL instead of the full URL. For example, a URL printed in a table inside a PDF form built with Adobe LiveCycle may be very long, and sometimes it is not so pretty (or does not work at all).

 

How to do that in ABAP?

 

There are different ways to do it, as you can see in this great blog post from Ivan Femia.

 

As you may know, Google offers this service for free if you stay under the following limit:

 

1,000,000 requests/day


So... we all know Google as one of the best service providers in the world, so why not leverage this opportunity?


Maybe you know this blog post: Integrating Google Glasses with SAP


As I explained there, the GOOGLE API ABAP CLIENT already supports Google Glasses and has the same structure as the standard Google PHP APIs, so I could add support for the Google URL Shortener service just by adding a few lines of code and two ABAP classes!


And in just one hour, we got our functionality:



1.1.PNG


1.2.PNG


2.PNG


If there are any other Google services that may be useful for your business processes, just drop me an email and I will extend the framework, or join the project directly on GitHub: Gh14Cc10/google-api-ABAP-client · GitHub


----


To make the new functionality work, clear the table ZGOOGLE_ACCESS if necessary (this is required because the requested access token has a different scope from the one requested by the Google Glasses demo report), and run the demo report ZGOOGLE_TEST_URLSHORTENER.
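If you do not want to pull in the whole framework, a minimal sketch of the underlying call could look like this (error handling and the OAuth/API key handling are omitted, the JSON is built by hand, and the goo.gl endpoint shown is the one the service offered at the time; treat the details as assumptions rather than the framework's actual implementation):

DATA: lo_client TYPE REF TO if_http_client,
      lv_json   TYPE string.

" Create an HTTP client against the URL Shortener REST endpoint
cl_http_client=>create_by_url(
  EXPORTING url    = 'https://www.googleapis.com/urlshortener/v1/url?key=YOUR_API_KEY'
  IMPORTING client = lo_client ).

lo_client->request->set_method( 'POST' ).
lo_client->request->set_content_type( 'application/json' ).
lo_client->request->set_cdata( '{ "longUrl": "http://scn.sap.com/community/abap" }' ).

lo_client->send( ).
lo_client->receive( ).

" The response body contains the short URL in the "id" field
lv_json = lo_client->response->get_cdata( ).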



International Editable SALV Day 2015


no change.png

 

 

http://scn.sap.com/thread/733872

 

Dear Programmers,

On the 8th of February 2008 James Hawthorne started the discussion (link above) asking why SAP does not give CL_SALV_TABLE the option to be editable, as CL_GUI_ALV_GRID is.

Today is the 7th anniversary of that discussion, and hence it is “International Editable SALV Day”.

 

I started a little discussion on this here on SCN last June:-

 

http://scn.sap.com/thread/3567633

 

Back in the year 2003 I went to SAPPHIRE in Brisbane and the speakers from SAP made much of the convergence of reports and transactions.

 

To put that another way – people did not just want a list of blocked sales documents where the system could not determine the price. They wanted a list of such documents on a screen where you could actually change the price yourself to fix the problem, and then release the document there and then.

 

The obvious answer is an editable grid, rather like the table controls in classical DYNPRO.

 

In CL_GUI_ALV_GRID this was a piece of cake. You could set whatever cells you liked to be editable, and programmatically add extra custom commands at the top of the screen, e.g. a custom “release” button.
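For anyone who has not done this before, a minimal sketch of the CL_GUI_ALV_GRID way (the grid instance, output table and field catalogue are assumed to exist already, and the field names are illustrative):

DATA: lt_fieldcat TYPE lvc_t_fcat,
      ls_layout   TYPE lvc_s_layo.
FIELD-SYMBOLS <ls_fcat> TYPE lvc_s_fcat.

" Mark the columns that should be editable in the field catalogue
LOOP AT lt_fieldcat ASSIGNING <ls_fcat>.
  IF <ls_fcat>-fieldname = 'NETPR'.     "e.g. let the user change the price
    <ls_fcat>-edit = abap_true.
  ENDIF.
ENDLOOP.

lo_grid->set_table_for_first_display(
  EXPORTING is_layout       = ls_layout
  CHANGING  it_outtab       = lt_blocked_docs
            it_fieldcatalog = lt_fieldcat ).

" Switch the grid into input mode
lo_grid->set_ready_for_input( i_ready_for_input = 1 ).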

 

CL_SALV_TABLE was supposed to be the successor of CL_GUI_ALV_GRID and SAP pushed us to use that instead. Whilst far better in many ways, CL_SALV_TABLE has some surprising drawbacks, meaning it actually has less functionality than its predecessor.

 

  • You cannot edit the data
  • You cannot programmatically add commands in the full screen mode
  • You cannot add separator lines between icons

 

All of this was available in CL_GUI_ALV_GRID. The really strange thing is that at the heart of a CL_SALV_TABLE instance is a CL_GUI_ALV_GRID instance, so obviously all of the above functionality could be added if it was so desired.

 

You will see from the prior blogs that the history of this is as follows:-

  • Every single ABAP developer desires this extra functionality in CL_SALV_TABLE
  • Many workarounds have been proposed in assorted SCN blogs
  • SAP development looks for those workarounds and changes CL_SALV_TABLE to block them
  • People tried to do the right thing and put a proposal on “idea place” to make the SALV editable, a proposal which got a lot of votes
  • A senior SAP person from the development department got very upset indeed, said the SALV was never meant to be editable, and closed off the idea.

 

To quote from “V for Vendetta”, however, you cannot kill an idea...

 

Ideas are Bullet Proof.png

 

So, seven years on, workarounds are still springing up. SAP development can close off certain avenues, but with the enhancement framework it is possible for developers to get around virtually any barrier placed in their way.

 

Wouldn’t it be easier if this was not a fight between the central team at SAP who develop the ABAP language and the users of that language?

 

Someone once said that if there was a law, and virtually everybody broke that law on a day to day basis, would it not be worth looking at the law to see if it actually made sense?

 

I don’t mind admitting I have created several SALV reports in my company where the users can edit data, and I imagine I am not alone. I am also sure SAP development would be horrified by this fact. They hate people doing workarounds and cannot fathom why anyone would do such a rule-breaking, dangerous, risky thing just to keep the business users happy.

 

As I said at the end of my last posting on this subject, might I humbly suggest to the red-nosed SAP ALV development team that the easiest way to stop people doing workarounds is to remove the need for a workaround in the first place, by providing the functionality as standard.

 

So the question is – in 12 months’ time will I be posting another blog to celebrate the 9th anniversary of International Editable SALV Day?

 

Cheersy Cheers

 

Paul

 

 

Procedure to upload Excel data into an internal table while debugging in SAP ABAP.


Hello SCN Members,

 

Good Morning,

I faced a situation where the SAP development system did not have data but the test system did. At first I tried to add data to the internal table manually, which took a lot of time: I copied only the changed data of a row, from the 3rd column to the last column, from the Excel sheet, and repeated this many times. Then I got an idea and did the following:

 

While debugging, wherever I needed to fill data into an internal table I set a breakpoint and filled the data by uploading the Excel file that had been downloaded from the test server. The process is as follows:

 

First set a breakpoint where the internal table needs to be filled and run until the debugger stops there. In the debugger (Standard tab), click on the internal table and you will see its data as below:

 

 

sample1.png

You will see the data as above.

 

Go to the services of the tool via the icon highlighted below. The internal table here is LT_FINAL with 6 records.

sample2.png

You will get the pop-up shown below with the standard services of the table display tool.

 

sample 3.png

 

 

Double-click on the upload service to upload the test server data. You will get the pop-up message below; click on the Yes button.

 

sample4.png

After that you will get another pop-up message; click on 'Allow'.

 

sample 5.png

 

After that the data is filled into that particular internal table (in this case LT_FINAL, now with 40 records).

 

With that, I completed my program changes successfully. I hope this blog will benefit our members.

 

Regards,

Siva kumar.

Delivery Picking, Packing, Unpacking, Goods Issue.


As I was working on a program that would do the picking, creation of handling units, packing of HUs, unpacking of HUs and delivery/picking quantity update, I found it rather difficult to execute the BAPIs that were provided or to use the function modules that I found while debugging the standard.

 

The function modules that I found while debugging the standard would not work when executed stand-alone; they required another FM that buffers the data into global values.

 

Also, OSS note 581282 clearly says that BAPI_HU_CREATE, BAPI_HU_DELETE, BAPI_HU_PACK, BAPI_HU_REPACK and BAPI_HU_UNPACK do not update the delivery and hence cannot be used for packing, and that in the same way it is not possible to pack deliveries with the function modules of function group V51E (HU_CREATE_ITEM, HU_CREATE_ONE_HU, HU_DELETE_HU, HU_REPACK, HU_UNPACK).

 

Hence here is the way I used one FM to cover various scenarios related to the delivery, and I hope it might help you all in future.

 

  1. Picking and Packing Materials to Handling Units

 

 

We can use FM WS_DELIVERY_UPDATE to update the picking quantity in order to complete the delivery, and also to pack the materials into the unique handling units created.

 

Various details need to be added to table LT_HVBPOK, especially keeping in mind the following fields:

 

 

        lt_hvbpok-vbeln_vl = delivery no.
        lt_hvbpok-posnr_vl = delivery item.
        lt_hvbpok-posnn    = delivery item.
        lt_hvbpok-vbeln    = delivery no.
        lt_hvbpok-vbtyp_n  = 'Q'.
        lt_hvbpok-pikmg    = qty to be picked.
        lt_hvbpok-lfimg    = qty to be picked.
        lt_hvbpok-lgmng    = qty to be picked.
        lt_hvbpok-meins    = unit of the picked qty.
        lt_hvbpok-ndifm    = 0.
        lt_hvbpok-taqui    = 'X'.
        lt_hvbpok-werks    = plant.
        lt_hvbpok-lgort    = storage location.
        lt_hvbpok-matnr    = material.

    

If you need to pack the materials into specific handling units, these can be created using FM BAPI_HU_CREATE. Once the external IDs are created, you can use the same FM WS_DELIVERY_UPDATE to attach materials to these handling units. The following fields need to be taken care of:

 

        lst_verko-exidv = external handling unit ID.

(There can be multiple of these, as one delivery can have n handling units.)

        lst_verpo-exidv_ob = HU external ID.
        lst_verpo-exidv    = HU external ID.
        lst_verpo-velin    = '1'.
        lst_verpo-vbeln    = delivery no.
        lst_verpo-tmeng    = material quantity to be packed.
        lst_verpo-matnr    = material no.
        lst_verpo-werks    = plant.
        lst_verpo-lgort    = storage location.

 

Example:

CALL FUNCTION 'WS_DELIVERY_UPDATE'
  EXPORTING
    vbkok_wa                    = lst_vbkok
    synchron                    = 'X'
    commit                      = 'X'
    delivery                    = delivery no
    update_picking              = 'X'
    if_database_update          = '1'
    nicht_sperren               = 'X'
    if_error_messages_send_0    = 'X'
  IMPORTING
    ef_error_any_0              = lw_ef_error_any
    ef_error_in_item_deletion_0 = lw_ef_error_in_item_deletion
    ef_error_in_pod_update_0    = lw_ef_error_in_pod_update
    ef_error_in_interface_0     = lw_ef_error_in_interface
    ef_error_in_goods_issue_0   = lw_ef_error_in_goods_issue
    ef_error_in_final_check_0   = lw_ef_error_in_final_check
    ef_error_partner_update     = lw_ef_error_partner_update
    ef_error_sernr_update       = lw_ef_error_sernr_update
  TABLES
    vbpok_tab                   = lt_hvbpok
    prot                        = lt_prot
    verko_tab                   = lt_verko
    verpo_tab                   = lt_verpo.

 

2. Unpacking

 

Now, once the materials are packed into the handling units, what if you need to unpack them and repack them into different handling units?

 

There are mainly 2 approaches.

  1. You could use the same FM WS_DELIVERY_UPDATE and pass the table IT_REPACK, where the source and destination HU are set according to the requirement. Source and destination HU are nothing but external handling units.

 

So consider a case where material M1 is packed in handling unit H1 and for some reason you need to repack M1 into handling unit H2. In that case your source HU will be H1 and your destination HU will be H2, together with the material details and quantities.

  2. The second approach, which I took because my program required it, was to completely unpack the materials from the handling units. In this case we can again use the same function module WS_DELIVERY_UPDATE, but here you need to send the quantity as -QTY. The minus sign plays a crucial role, as otherwise you will keep wondering why this FM is not working.

 

So remember: pass the same values in tables LT_VERKO and LT_VERPO as above, but send the quantities with a '-' sign. :D

 

3. Update tables VEPO/VEKP for HUs

 

The best way to do this is to use the FM V51S_HU_UPDATE_DB.

 

With this FM you can create, delete or update entries in tables VEPO and VEKP. Even a change in quantity can be handled by calling this FM. Do not forget to issue a COMMIT WORK at the end ;). You might get an error stating that one of the tables is not defined; to avoid this you need to pass all the table parameters as below, even if you are supplying values in only one of them.

           CALL FUNCTION 'V51S_HU_UPDATE_DB'
             EXPORTING
               it_hdr_insert = lt_ins_vekp
               it_hdr_update = lt_upd_vekp
               it_hdr_delete = lt_del_vekp
               it_itm_insert = lt_ins_vepo
               it_itm_update = lt_upd_vepo
               it_itm_delete = lt_del_vepo
               it_his_insert = lt_his_ins
               it_his_update = lt_his_upd
               it_his_delete = lt_his_del.

 

4. Change in Delivery Qty of material for a delivery.

 

If you need to update the delivery quantity and picking quantity of a delivery, you can do so via table VBPOK_TAB, in which the new delivery quantity and picking quantity need to be passed:

    
        lst_vbpok_tab-pikmg = picking qty.
        lst_vbpok_tab-lfimg = delivery qty.

 

5. Goods Issue

We can also do the goods issue using the same FM. All we need to do is pass the VBKOK_WA structure with lst_vbkok-wabuc = 'X' set. The table VBPOK_TAB also needs to be populated, for example as sketched below.
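A minimal sketch of the header structure for the goods issue case, in the same placeholder style as above (I am assuming here that the delivery number also has to be supplied in VBELN_VL alongside the WABUC flag):

        lst_vbkok-vbeln_vl = delivery no.   "delivery number
        lst_vbkok-wabuc    = 'X'.           "post the goods issue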

 

CALL FUNCTION 'WS_DELIVERY_UPDATE'
  EXPORTING
    vbkok_wa                 = lst_vbkok
    synchron                 = 'X'
*   no_messages_update       = ' '
    commit                   = 'X'
    delivery                 = delivery no
    update_picking           = 'X'
    nicht_sperren            = 'X'
    if_error_messages_send_0 = 'X'
  TABLES
    vbpok_tab                = lt_hvbpok
    prot                     = lt_prot.

 

So as you can see, one function module can cover so many functionalities; all we need to do is know the right tables that need to be passed!

 

Good Luck!!

Three simple uses of object oriented concepts in your daily work


Changing habits is hard, especially if we are trying to change them for something which looks more complex, but change is also inevitable for developers who want to adapt to an evolving environment. ABAP Objects was introduced more than a decade ago for developers who want to do things in the right way, but there are still some practical difficulties (or not) in using it in all our daily work, which can be found in several blogs like this or this. In my opinion, though, there are more positives than negatives, which I will try to explain.

 

Using the three practices explained below, we will not only replace our way of working with its object oriented equivalent, which has some additional advantages that are also explained below, but also get more and more familiar with ABAP Objects, so we can move on to other fundamental object orientation approaches like inheritance and encapsulation.

 

  • Use local classes for internal modularization

When you check the ABAP F1 help for a subroutine, you will face the ugly truth that the PERFORM statement is obsolete, just like some other ugly features such as the TABLES statement. Even if you have no intention of improving your knowledge on the subject, using local classes and methods in executable programs for modularization has been the only valid way since 2011.

 

Luckily, modularization is not the only advantage of using methods. There is a stricter syntax check inside classes, where we cannot use obsolete features (like header lines for internal tables, the RANGES statement, etc.), so just by using methods we automatically avoid them. Anything obsolete is obsolete for a reason and we should never use it, but if you keep developing the procedural way and do not read the ABAP news or documentation and do not use quality check tools, you may not even be aware that the statements you use are obsolete. You avoid all this just by using methods instead of subroutines.

 

You can find a simple example at the end of this blog (or just search SCN for using methods for modularization), and a small sketch follows right below.
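A minimal sketch of this kind of local-class modularization in an executable program (the report and method names are made up for illustration):

REPORT zdemo_local_class.

CLASS lcl_report DEFINITION FINAL.
  PUBLIC SECTION.
    METHODS run.
  PRIVATE SECTION.
    METHODS select_data.
    METHODS display.
ENDCLASS.

CLASS lcl_report IMPLEMENTATION.
  METHOD run.
    select_data( ).
    display( ).
  ENDMETHOD.

  METHOD select_data.
    " selection logic goes here instead of a PERFORM routine
  ENDMETHOD.

  METHOD display.
    " output logic goes here
  ENDMETHOD.
ENDCLASS.

START-OF-SELECTION.
  NEW lcl_report( )->run( ).   "7.40 syntax; use CREATE OBJECT on lower releases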

 

 

  • Using global classes instead of function modules

It is almost the same approach as with local classes: to replace function modules, you can create a global class and, depending on your case, create static or instance methods for modularization. As far as I know there is no class-based alternative for RFC and update modules, so for remote function calls it is still necessary to create function modules.

 

 

If you start using the above approach for modularization, first of all you will get familiar with the OO approach and its syntax standards, and you will start to have well-structured, reusable developments if you also follow good modularization standards as suggested in the ABAP help documents. Having well-structured programs with reusable components will help you a lot in your object oriented programming journey if you wish to proceed further that way.

 

  • SALV

SALV (the global class CL_SALV_TABLE, for example, is a different approach to getting the same ALV output), which came with release 6.40, is better than the previous ALV tools and is really designed with an OO approach. By analyzing how it is built and by using it, we can get an idea of what our developments should look like if we are really developing something object oriented.

 

The different aspects are handled in separate classes: for example, if you need to change a column or use events, you reference the relevant class CL_SALV_COLUMNS_TABLE or CL_SALV_EVENTS_TABLE. We shouldn't just use it, we should also analyze and learn from its model. After you get familiar with it, you will see that it is the simplest of all the ALV tools; a small example follows below. To find all kinds of SALV examples just search for the "SALV_DEMO*" programs via SE38.
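A minimal sketch of a SALV display (the table used here, SFLIGHT, is just an illustration):

DATA: lt_flights TYPE STANDARD TABLE OF sflight,
      lo_salv    TYPE REF TO cl_salv_table.

SELECT * FROM sflight INTO TABLE lt_flights UP TO 100 ROWS.

TRY.
    cl_salv_table=>factory(
      IMPORTING r_salv_table = lo_salv
      CHANGING  t_table      = lt_flights ).

    " Column handling is delegated to the columns object
    lo_salv->get_columns( )->set_optimize( abap_true ).

    lo_salv->display( ).
  CATCH cx_salv_msg INTO DATA(lx_salv).
    MESSAGE lx_salv->get_text( ) TYPE 'E'.
ENDTRY.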

 

 

The above three practices can be a good starting point for people who want to handle daily programming tasks in an object oriented way, and the next step should be improving the modularization approach. Once you have experience with all of the above and focus on having well-structured, reusable, single-purpose methods in your developments, it will be easier to go one step further towards a complete OO design.

 

Example (using a local class for modularization).

Definition

Capture.PNG

Global reference

Capture.PNG

 

Instance creation


Capture.PNG

 

Use of methods

Capture.PNG

 

Capture.PNG

Using Categories in the ABAP development space


The ABAP Development space contains more than 370,000 documents, blogs and other contributions! Some time ago we introduced subspaces to organize the content. Despite the obvious advantage of establishing some order in the space, there are some disadvantages: quite often a piece of content does not belong to exactly one topic, and you would like to assign it to two or more spaces equally. To overcome these disadvantages we decided to use the Category feature that SCN offers.

The first two categories we introduced are:

  • ABAP Development
  • ABAP Trials and Developer Editions

 

Searching for Categories

 

Open the Content tab in the ABAP Development space. The categories appear on the left-hand side.

Categories.PNG

 

Just click on a category link and the content is filtered accordingly.

 

Categorizing Content

 

As a contributor you may want to categorize your content. Just create it as usual. In the lower part you will find the Categories section with the list of all available categories; just mark the relevant ones.

 

Flagging.PNG

 

When to use the Category ABAP Development?

 

Use this category for ABAP language related contributions to distinguish it from the complete space.

 

When to use the Category ABAP Trials and Developer Editions?

This category is reserved for all content related to the ABAP trials and developer editions in the SAP Cloud Appliance Library and the ABAP download systems.

 

What about the Existing Content?

The existing content will not be categorized by us: with more than 370,000 documents, nobody would take on that drudgery. But you can categorize your own existing content if you feel the need.

 

Will there be New Categories?

 

We are not analyzing existing content to come up with a complete classification system. The categories will grow on demand. So if you would like a new category to be introduced, notify one of the space editors.

 

What is the Difference to Tags?

 

Tags are not bound to a space; categories belong to a space. You can create your own tags, but not your own categories.

 

 

For more information, see this thread .

Delete entries from a table with a variable table name


Hello guys,

 

Maybe this is a repost, but I couldn't find something like it, or maybe it's too easy for most of you.

My problem: while developing a report I had to fill several tables, so I needed a report to reset my Z-tables (delete all entries). It's an easy task if you use just one table in a report or clear all your tables with hard coding, but I wanted to create a program which I can use for every project and every Z-table.

 

Please be aware that you use this at your own risk. Deleting data from a table is always a sensitive issue, so don't use this if you don't know exactly what you are doing.

 

There is a parameter where you can enter the name of the table you want to empty, and two radio buttons. The radio buttons are for showing the data which would be deleted and for the deletion itself.

 

I tried to add a lot of comments, but if you have questions feel free to ask.

 

 

DATA: ref_tab TYPE REF TO data.                         "Generic data reference
DATA: lv_char TYPE c.                                   "First letter, to test whether you want to delete a Z table
DATA: l_tabname TYPE dd02l-tabname VALUE 'z_testtable'. "Name of the Z table
DATA: lv_lines TYPE i.                                  "Line count
DATA: gr_table TYPE REF TO cl_salv_table.               "For the preview, so you can see what you are about to delete

FIELD-SYMBOLS: <fs> TYPE table.                         "Field symbol, assigned later with the type of the entered table

PARAMETERS: p_dbname TYPE string DEFAULT 'z_testtable'. "Name of the table you want to empty
PARAMETERS: rb_test TYPE flag RADIOBUTTON GROUP grp1 DEFAULT 'X'. "Test run
PARAMETERS: rb_del TYPE flag RADIOBUTTON GROUP grp1.              "Every entry in this table will be !!!DELETED!!!


TRY.
    CREATE DATA ref_tab TYPE TABLE OF (p_dbname).  "Create the table with the type of the entered table
  CATCH cx_sy_create_data_error.                   "Table name was not found
    MESSAGE 'Not able to find the structure' TYPE 'E'.
ENDTRY.

ASSIGN ref_tab->* TO <fs>.                    "Assign the structure

SELECT * FROM (p_dbname) INTO TABLE <fs>.     "Select the data

WRITE p_dbname TO lv_char.                    "Write the first letter to the character field
TRANSLATE lv_char TO UPPER CASE.              "Translate to upper case, so you can test whether it is a Z table

"Now do some checks before you are allowed to delete. You could add an authorization object, but that would go too far here...
IF rb_del = 'X'.
  IF sy-uname <> 'HANI'.                      "<-- This is my user name, replace it with your own
    MESSAGE 'Sorry, this seems to be a program you are not allowed to use!' TYPE 'E'.
    LEAVE PROGRAM.
  ENDIF.

  IF lv_char <> 'Z'.                          "<-- Check that you are deleting a Z table. You do NOT want to delete SAP tables with this report.
    MESSAGE 'Naaahhh, we will not delete SAP tables, only Z tables.' TYPE 'E'.
    LEAVE PROGRAM.
  ENDIF.


  DESCRIBE TABLE <fs> LINES lv_lines.         "Count how many lines will be deleted
  DELETE (p_dbname) FROM TABLE <fs>.          "Delete
  WRITE: 'Table: ', p_dbname, ' - ', lv_lines, ' entries have been deleted'. "Screen output

ELSEIF rb_test = 'X'.                         "The data which would be deleted is displayed, copied from an SAP BC example report
  CALL METHOD cl_salv_table=>factory
    IMPORTING
      r_salv_table = gr_table
    CHANGING
      t_table      = <fs>.
  gr_table->display( ).
ENDIF.


Mass download from solution manager


Hi,

 

Recently I have been involved in a project that uses Solution Manager.

 

We had a need for a "mass download".

 

In the forum there was a mention of this screen:

screenshot_01.png

 

So I debugged the code and found class cl_sa_doc_factory.

 

The method cl_sa_doc_factory=>get_read_url returns a URL string; this string can be used with cl_http_client.

 

Program R_EITAN_TEST_60_02 (attached) demonstrates the process.

 

The program receives, as a SELECT-OPTIONS, a list of "Logical documents".

screenshot_06.png

 

for each "Logical document"

 

- Verify the value.

- Use cl_sa_doc_factory to get the read URL.

- Use cl_http_client to fetch the document content.

- Use OPEN DATASET dataset_name FOR OUTPUT IN BINARY MODE to write it to a file.
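A rough sketch of the last three steps for a single document (error handling is omitted; lv_url is assumed to come from cl_sa_doc_factory=>get_read_url, and lv_filename is an illustrative target path):

DATA: lo_client TYPE REF TO if_http_client,
      lv_data   TYPE xstring.

" Create an HTTP client for the read URL returned by cl_sa_doc_factory
cl_http_client=>create_by_url(
  EXPORTING url    = lv_url
  IMPORTING client = lo_client ).

lo_client->send( ).
lo_client->receive( ).

" Get the document content as a binary string
lv_data = lo_client->response->get_data( ).

" Write the content to the application server
OPEN DATASET lv_filename FOR OUTPUT IN BINARY MODE.
TRANSFER lv_data TO lv_filename.
CLOSE DATASET lv_filename.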

 

The result is shown using function module BAL_DSP_PROFILE_POPUP_GET:

screenshot_07.png

 

And the files:

screenshot_08.png

 

Regards .

UML Class Diagram Export to XMI format with ABAP standard classes


Introduction

Did you ever have the situation that you were working in a customer system landscape where you wanted to export a UML class diagram for a package, but the necessary JNet was not installed? We have had that situation several times now, and it was always a big effort to install JNet, or it was not done at all because it was too much effort to add the configuration to the automated software installation process in the customer landscape.

 

Therefore we decided to write a little program which allows us to export UML class diagrams to an XMI format, which can then be imported into a UML tool. For us that is a big advantage for documenting the system functionality, because we do not have to create/adjust the class diagrams manually in the UML tool.

 

The problem ...

In the ABAP Workbench the functionality is available to display a UML diagram (context menu on the package name -> Display -> UML Class Diagram).

uml01.png

This functionality starts the report UML_CLASS_DIAGRAM, which analyzes the classes/interfaces (depending on the settings made on the selection screen) and tries to display them with the JNet component integrated in the SAP GUI.

In case JNet is not installed (it is not installed by default with the SAP GUI) you just get a list of the analyzed objects, but not the diagram. Our main problem is that without JNet the report cannot export the diagram for further usage.

uml02.png

 

The current solution for us ...

We created a little report based on program UML_CLASS_DIAGRAM which uses the ABAP standard classes CL_UML_CLASS_SCANNER, CL_UML_CLASS_DECOR_XMI and CL_UML_UTILITIES to export a UML class diagram to XMI format without JNet.

You will find the sources attached to this post. Just create the report and add the text symbols and selection screen texts. But please be aware that we did not invest much time and effort in making it stable for every situation. Please also consider that the program was implemented on NW 7.40 SP08, so if you want to use the program on a system with a lower release, some easy changes have to be made.

 

So I can do the following now.

 

  1. Start the program and enter a class (e.g. CL_UML_CLASS_SCANNER) which should be exported.
    uml03.png
  2. Choose the required XMI version.
    uml04.png
  3. Import XMI file to a supported UML tool.
    uml06.png

  4. Finished
    uml07.png

 

If you have any comments, please let me know.

Lines of Code Check with Code Inspector


 

Introduction

SAP delivers several metrics checks for the Code Inspector. These are, for example:

  • Number of Executable Statements
  • Procedural Metrics with Statements check
  • OO Size Metrics (number of methods, attributes ...)
  • ...

From my point of view, checks which count statements have the problem that they do not say very much about the compliance of the coding with principles like the Single Responsibility Principle or the Separation of Concerns principle described by Clean Code Development. For example, think about a function in which you call ten functions. A statement count would only recognize the ten CALL FUNCTION statements, which would be OK in case you have defined a limit of ten statements in the statement count check. But what about the purpose of that? Does it make sense to really call those ten functions from a maintenance point of view, or would it make more sense to structure the coding so that the functionality is more reusable? Another gap of the statement count is that you can define big interfaces in functions or methods, which would also not be recognized. So if you define a function with 100 parameters and call it in another function, the statement check would be OK, but from a clean code development perspective it is terrible.

 

So from my point of view a check for lines of code is more useful in most cases. Violations of defined lines-of-code metrics give some hints about violations of Clean Code Development principles (e.g. Separation of Concerns). In some projects I have third-party check tools which are able to check lines of code at processing block level, but in some projects I do not have these tools or am not allowed to use them. Therefore I decided to make a little Code Inspector check implementation which does the check for us. Our main goal was to check the length of methods and functions; therefore, in the first version, the check supports only these two kinds of objects.

 

Lines of Code check explained from a user point of view

The implemented Lines of Code check is available under the standard Code Inspector check category "Metrics and Statistics" if you define a Code Inspector check variant.

ci01.png

The check has the following attributes which allow you to configure it.

ci02.png

The attributes have the following meaning and can be configured separately for methods and functions:

  • Message if more than ...: Here the number of lines can be defined which is OK for the object. A message is only reported if the object has more lines than defined. In case no number is defined, the check for the specific object type is not executed.
  • Message Level: This attribute defines whether the message should be reported as Error, Warning or Information/Note.
    • E = Error
    • W = Warning
    • N = Information/Note
  • Count start/end statement: Defines whether the start and end statements are also counted (e.g. for a function the lines with FUNCTION and ENDFUNCTION would also be considered).
  • Count empty lines: Defines whether empty lines are counted.
  • Count comments: Defines whether comment lines are counted (lines beginning with a star, or lines which contain only a comment introduced with a quotation mark).

 

After executing an inspection with the check variant, you get the following messages in case of a violation.

ci03.png

Of course the standard Code Inspector functions like navigation to the reported object are available here too.

 

How to implement the Lines of Code check and make it usable?

I do not want to describe here how to create your own Code Inspector check from scratch, because Peter Inotai already described that a long time ago in the blog Code Inspector - How to create a new check. I just want to describe what steps you have to do to be able to use the Lines of Code check in your environment.


  1. Create the ABAP class
    In the attached TXT files you will find the sources for the Lines of Code check. First create a class ZZ_CL_CI_LINES_OF_CODE (or name it whatever you want) and copy the coding of the TXT file into the newly created class. I would recommend using ABAP in Eclipse or the source-based view in the Class Builder (otherwise you have to copy each method manually). The local helper class coding should be copied into the "local types" area. After that you should be able to activate the class. That is all you have to do with the code.
    Consider that the check was written on a NW 7.40 system, therefore some of the new ABAP syntax (e.g. table expressions, the NEW constructor expression) was used. If you want to use the code on a system with a lower release, some little changes have to be made.

  2. "Register" ABAP Class for Code Inspector
    If you now start the Code Inspector transaction and define a new check variant you will not see the new check. To be able to see it you have to go to "Goto -> Management of -> Tests" in the Code Inspector transaction.
    ci04.png
    In the upcoming screen you should see the class you have created before. Just tick the flag in front of the class and the new check is then visible when you define a check variant.
    ci05.png

Integration with ABAP Test Cockpit

A great advantage of the Code Inspector checks is that they are used by the ABAP Test Cockpit. So in the ABAP Workbench and in the ABAP Development Tools for Eclipse the new check can be used without much further effort. The only thing to be done is to define the Code Inspector check variant which has to be used by the ABAP Test Cockpit runs. This can be done by:

  • defining your check variant as global default variant in transaction ATC.
  • defining the check variant each time you execute the ABAP Test Cockpit run by "Check -> Check with ABAP Test Cockpit (ATC) with" in the ABAP workbench
  • defining the check variant in the properties of an ABAP project in the ABAP Development Tools for Eclipse
    ci06.png

So you get the results of your check directly after executing the ATC checks. Here is an example result of the new check in the ATC Problems view within Eclipse.

ci07.png

 

Conclusion

The implemented check is a nice helper for me to get immediate feedback about my code size. The next steps are to enhance the check so that it supports more objects (e.g. forms for legacy reasons, whole classes, programs). If you have any comments or questions, please let me know.

Calculation within the program without fixed point arithmetic


First of all, I need to quote the F1 help of the "Fixed point arithmetic" checkbox in the program attributes for the meaning of this technical term:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Fixed point arithmetic
If you mark this checkbox, all calculations in the program will use fixed point arithmetic.

If you do not, packed numbers (ABAP/4 type P, Dictionary types CURR, DEC or QUAN) will be treated as integers when they are used in assignments, comparisons, and calculations, irrespective of the number of decimal places defined. Intermediate results in arithmetic calculations will also be rounded to the next whole number. The number of decimal places defined is only taken into account when you output the answer using the WRITE statement.
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

 

SAP Note 886532, point 5, provides an example explaining the calculation within a program without fixed point arithmetic and shows why an adjustment factor is needed in order to get the correct result.

 

Here I'd like to give another example to show some further details about how the internal calculation works without fixed point arithmetic.

 

Prerequisite: in the program attributes, the "Fixed point arithmetic" checkbox is NOT set.

Test program:

     REPORT  ZZ_NO_FIXED_POINT_ARITH_V1.

     DATA: waers LIKE konv-waers,
           zzkawrt LIKE konv-kawrt,
           zzkbetr LIKE konv-kbetr,
           zzkbetr2 LIKE konv-kbetr,
           zzkwert LIKE konv-kwert,
           zzkwert1 LIKE konv-kwert,
           zzkwert2 LIKE konv-kwert.
     zzkawrt = '1000.00'.
     waers = 'EUR'.
     WRITE : 'Base price', zzkawrt CURRENCY waers, waers, /.
     zzkbetr = '105.00'.
     waers = '%'.
     WRITE : 'Percentage rate', zzkbetr CURRENCY '3', waers, /.
     zzkbetr2 = 100000 - zzkbetr.                                   <<< step 1
     waers = '%'.
     WRITE : 'Percentage rate 2', zzkbetr2 CURRENCY '3', waers, /.
     zzkwert = zzkawrt.
     zzkwert1 = zzkwert * 100000 / zzkbetr2.                        <<< step 2
     waers = 'EUR'.
     WRITE : 'Gross price', zzkwert1 CURRENCY waers, waers, /.
     zzkwert2 = zzkwert1 - zzkwert.
     waers = 'EUR'.
     WRITE : 'Condition value', zzkwert2 CURRENCY waers, waers, /.

 

Execution result:
          Test program for no fixed point arithmetic
          Base price             1.000,00  EUR
          Percentage rate          10,500  %
          Percentage rate 2          89,500  %
          Gross price          1.117,32  EUR
          Condition value            117,32  EUR

 

 

Some detailed explanation on calculation steps 1 & 2:
For step 1:

zzkbetr2 = 100000 - zzkbetr.

Check the parameters value in debugging:

debug screen 1-1.jpg

It may look confusing how 100000 - 105.00 could come to the result 895.00.
Within a program without fixed point arithmetic, the internal calculation is done like this:
zzkbetr is treated as an integer during the internal calculation, irrespective of the number of decimal places defined.
So the calculation is done as 100000 - 10500 = 89500, and afterwards the result is assigned to zzkbetr2.
Since zzkbetr2 is defined with 2 decimal places, you can see zzkbetr2 = 895.00 in debugging.

 

For step 2:

zzkwert1 = zzkwert * 100000 / zzkbetr2.

Check the parameters value in debugging:

debug screen 2.jpg
The same rule applies here. During the internal calculation, both zzkwert and zzkbetr2 are treated as integers:
100000 * 100000 / 89500 = 111731.8435...

Please notice that the calculation is performed with integer accuracy here, therefore this intermediate result is rounded to a whole number (rounded commercially to 111732).

Afterwards it is assigned to zzkwert1, which is defined with 2 decimal places ->
zzkwert1 = 1117.32 in debugging.
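For contrast (an illustrative calculation of my own, not taken from the note): if the "Fixed point arithmetic" checkbox were set, the declared decimals would be respected and the same statements would yield

          zzkbetr2 = 100000 - 105.00             = 99895.00
          zzkwert1 = 1000.00 * 100000 / 99895.00 = 1001.05

which shows why coding written for the non-fixed-point environment (like the pricing formulas in SAPLV61A) depends on the adjustment factors described in the note.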

====================================================================================

 

This is how we can interpret the calculation that happens internally within a program without fixed point arithmetic.

As stated in SAP Note 886532, point 5, the 'fixed point arithmetic' checkbox is not selected in the attributes of program SAPLV61A, that is, pricing does not work with fixed point arithmetic.
Therefore the above explanation can be helpful in understanding the calculations happening inside the pricing module. With this understanding of the internal calculation without fixed point arithmetic, when the system works with customer-defined calculation formulas for pricing, we will be able to see clearly how it arrives at the result in each calculation step and check whether the appropriate adjustment factor has been set in order to get the expected result, just like '100000' in the above test program.

Code Review: Success factors


We often hear that code reviews are frustrating and a waste of time. Really? Or is it the lack of adoption of a suitable, well-defined process that is the root cause of them appearing futile? Think again!

 

There are many articles highlighting the importance of doing code reviews, listing dos and don'ts, and explaining different code review alternatives (automated, peer review, etc.), hence I am not detailing those here.

 

Through this blog, I want to stimulate thought and highlight how the "right" code review process, tailored to your organization's structure and needs, plays a significant role in embedding the code review mind-set within the project team and stakeholders.

 

By "right" code review process, I mean it is accepted and easily integrated within your software development/release life cycle (SDLC). It shouldn't look disparate, additional step or hindrance.

 

To evolve towards the "right" and robust code review process, your first step is to ensure that code reviews are encouraged and actually happen. This is truly possible only if there is awareness of their importance and buy-in from key stakeholders. It is not only the developers that drive it; sincere acceptance and encouragement from project managers, business leads, analysts and end users primarily contribute to its success.

 

I am part of an SAP development team and am proud to say that in my current organization this process is thoughtfully customized, neatly defined and well integrated with the other phases of the SDLC.

 

We have a clearly written code review checklist and a coding standards document that are easily accessible to stakeholders. These documents contain answers to FAQs, security-related coding norms, tips/pointers to improve performance, and also guidance on writing code for easy maintainability, globalization and reuse.

 

To effectively implement the code review process, we have a dedicated code review team which is an independent, unanimously recognized group that governs the process and the coding standards. It functions across modules and projects and owns accountability for code changes moving to the production environment.

 

The code review process is also tightly integrated with the subsequent change control process (the process that focuses on moving code changes to production) in the SDLC. The change control mechanism checks the code review status and warns when code that has not been reviewed is proposed for a move to production. In such a case it alerts the stakeholders and triggers action points so they can take appropriate action.

 

Below I highlight the key points that have worked well for us.

 

Mind-set:

To develop this mind-set, the project team is kept informed and educated via different forums (seminars, blogs, trainings and question hours) to obtain their buy-in and feedback. This has helped in evolving towards the "right" process.

 

The Team:

Forming a dedicated, independent, unbiased group is key to its unanimous acceptance. The role and responsibility of the team are clearly defined and accepted. The team has a good mix of experienced professionals with wide-ranging technical understanding.

 

Documentation:

The content is simple, precise and easy to understand without missing out on the exceptions/specifics. It is easily available to all. Any updates to it and its communication are well governed.

 

Deep integration:

The process is thoughtfully tailored to our needs and is well integrated with the other phases of the software development life cycle. Though an independent process, it is an integral part of the SDLC.

 

With the above thoughts on the code review process, I open up the topic for further discussion and invite you to share your experiences from within your organization.

News about mockA


The current release of mockA is available on GitHub. It contains a bug fix that I would like to outline in today's blog post.

 

The bug

MockA allows you to mock classes as described in one of my previous blog posts. Technically, mockA tries to create a subclass of the class which is subject to mock creation. This means it will only work if the class is not marked as final and has a constructor which is at least protected or public.

MockA overrides the methods that should be mocked with a local implementation that returns the expected values. It follows the specifications set up by the unit test in the method( ), with( ) and exports( ) or returns( ) calls (and so on) during mock creation.

 

There is another feature that reuses generated subroutine pools that have been created by mockA, because the Web Application Server ABAP allows only about 36 subroutine pools per program or, in our case, per unit test. The generated code does not contain any hard-coded method output parameters, as there would be no benefit in buffering the generated coding then. Instead, the instance of type ZIF_MOCKA_MOCKER is passed to the mock object. In the mock object's method implementations, the fake values are read from that instance. If a new mock object is to be created, a new instance of ZIF_MOCKA_MOCKER is passed to the mock object. Hence the method output may change.

 

If mockA generates the local implementation of an interface, each method is implemented during the initial mock class generation, so this feature poses no issues here.

However, in case a class needs to be mocked, mockA also tried to reuse these generated subroutine pools in the past. Do you see the error?

 

What could possibly happen

Imagine mockA should create a mock implementation of the following class, ZCL_I_WILL_CAUSE_TROUBLE, which has two methods:

  • METHOD_A
  • METHOD_B

  In our first unit test, we will tell mockA to simulate the output of METHOD_A, without defining any output for METHOD_B. 

When the mock object is created, mockA will generate a local subclass of ZCL_I_WILL_CAUSE_TROUBLE with a local implementation of METHOD_A that overrides the parent class's method. The parent class's method can no longer be called via the mock object. METHOD_B remains untouched.

After generation of the subroutine pool, the class implementation is buffered for later usage.

 

If another unit test that is executed after the first one wants to control the output of METHOD_B, mockA won't return that output, as the method was not overridden for the first unit test and therefore no output control takes place in the locally created class implementation: the logic that is responsible for returning the specified fake values is simply not called. Instead, the super implementation of ZCL_I_WILL_CAUSE_TROUBLE is called.

 

The solution

Subroutine pool buffering is now generally switched off if a class needs to be simulated. For interfaces, the current logic remains unchanged.

Unfortunately, this is a breaking change, which can lead to failures in unit tests that passed in the past.

This change could cause some new issues that I would like to outline briefly:

  1. Subroutine pool limits might be violated for existing unit tests. As there might be multiple implementations generated per class within the same unit test report, the subroutine pool limit might be violated once you have updated mockA. In this case, please split your test methods across several reports, if possible.
  2. Please see the example above: if your second unit test tells mockA to simulate the output of METHOD_B but actually expects the result returned by the super implementation, that unit test might now fail, as the super implementation is no longer called due to the correction and the specified fake values are returned instead.
    I know that this is just a theoretical consideration, but it is important to mention. Nevertheless, such test cases can be considered incorrectly implemented, as the unit test possibly expects values other than those that have been defined as the output of METHOD_B. Hence, these test cases should be reviewed anyway!

 

Feedback welcome

There is no option to switch off the new behaviour of mockA, as I think it is more important to fix the error than to keep old and incorrect unit test implementations from failing.

 

Please let me know if you run into trouble with the new update, and whether issue 1) or 2) applies, or maybe both. Please also tell me if you figure out another issue that I haven't thought of.

Static ABAP Code Analysis using ConQAT


 

Introduction

 

In the following sections I want to give an overview of the usage of ConQAT as a static code analysis tool from an end-user point of view. I want to explain why I use an additional tool and what information I get from it.

 

Why an additional tool for static code analysis?

With the Code Inspector (SCI) and the ABAP Test Cockpit (ATC), SAP already provides powerful tools for static code analysis. I have been using these tools (especially SCI) for years, but there are some usability and functionality gaps which can be closed using ConQAT. Examples of these gaps are:

  • Usable visualization of results (with text and graphics).
  • Baseline mechanism (define a code baseline for which no remarks are reported, e.g. in case of maintenance projects).
  • Historical view on check results (how the code quality increases/decreases over the time).
  • Check for redundant coding.

 

What is ConQAT?

ConQAT (Continuous Quality Assessment Toolkit) is a software quality analysis engine which can be freely configured because it uses a pipes-and-filters architecture. For a detailed description have a look at ConQAT - Wikipedia. Some key points I want to point out are:

  • Configuration of analysis via Eclipse (using an Eclipse plugin).
  • Support for different languages (e.g. Java, C/C++/C#, ABAP). Due to the flexible architecture of the scan engine it can be extended to any language, for example SQLScript, which is coming more and more into focus for us.
  • Integration of results from other tools (e.g. SCI, FindBug).

 

How is ConQAT used in our ABAP projects?

In our ABAP projects ConQAT is used in the following way:

  • It is configured to analyze the coding twice a day. This means that the coding of the packages to be analyzed is extracted and analyzed by the ConQAT engine. This process also runs a configured SCI variant; the results of the SCI run are extracted and considered by ConQAT as well. I would prefer a higher frequency of analysis runs, but at the moment this is not possible within our landscape. In the future this problem will be solved by the successor of ConQAT (but more on that in the Outlook section).
  • The results of ConQAT (with the integrated SCI results) are provided as an HTML dashboard. On the dashboard an overview section gives a first insight into the results, and the specific sections contain detailed data regarding the analysis. In the dashboard the developer can also navigate down to code level (displayed in the browser), where the remarks are marked at line level. Via an integration of ADT links the developer can jump directly out of the dashboard to the coding in Eclipse to edit it.

 

ConQAT General Information

ConQAT provides the following general information in the result output. In the following chapters I show just the graphical output of a demo analysis, but of course there is also a text output for the objects for which remarks exist.

 

Overview

The overview page gives an overview of the metrics. It displays how many remarks exist in the whole system and how many remarks exist in the delta compared to a defined baseline.

01_general_information__overview.png

 

Architecture Specification

With ConQAT it is possible to define an architecture specification. It describes which objects can be used by which other objects (e.g. it can be defined that the UI layer cannot directly use objects from the data access layer). The relationships can be defined from package level down to single object level. From an ABAP point of view this can be compared to ABAP package interfaces. The following figure displays a specification which defines the relationships at package level.

01b_architecture_specification.png

 

Treemap Outline

The treemap outline displays the analyzed ABAP packages. If the developer hovers with the mouse over a package, he gets more information, e.g. the package size (lines of code).

02_general_information__treemap_outline.png

 

System Size Trend

On the System Size Trend page it can be identified how the system size grows over time. It is also visible how many lines of code are generated and how many are coded manually (generated objects can be marked in the configuration).

 

LoC = All Lines of Code (manual, generated, comments)

SLOC = Manual Lines of Code without comment lines

LoCM = Manual Lines of Code with comment lines

LoCG = Generated Lines of Code

03_general_information__system_size_trend_01.png

03_general_information__system_size_trend_02.png

 

Modified Source

The modified source code is also visualized using treemaps. This makes it easy to see where changes were made in the system (added/changed/removed code).

01_modified_source.png

 

Task Tags

ConQAT can also be configured to report task tags (e.g. TODO, FIXME).

05_general_information__task_tags.png

 

ConQAT Code Metrics

 

Architecture Violations

Violations of the defined architecture (see section above) are displayed in the same graphical way as the architecture specification itself. In addition to the "green" arrows displaying the allowed relations, violations are displayed as "red" arrows.

 

Clone Coverage

ConQAT analyzes clones within the code. This is not just a search for exactly identical code parts; the algorithm also considers similar code structures.

This check helps to detect code which can be encapsulated in reusable functionality, and it also helps to detect "copy & paste" code, which in most cases leads to errors when not all places are adjusted in later versions (e.g. because of a defect or an enhancement). From a Clean Code Development perspective it helps to avoid violations of the "Don't Repeat Yourself" (DRY) principle.


In case the information provided in the dashboard is not enough (even at code level), ConQAT allows clones to be compared in detail with the help of an Eclipse plugin.

 

01a_clone_coverage.png

01b_clone_coverage.png

 

Long Programs

ConQAT allows checking for "long programs": classes, programs, ... which have more lines of code than defined in the configuration. Too long classes, for example, are in most cases evidence that the Single Responsibility Principle is violated.

As can be seen in the following figure, it was configured that e.g. classes with up to 600 lines of code are ok (green). Objects with up to 1500 lines of code have to be checked (yellow). More than 1500 lines of code are not ok.

For "lines of code metrics" (like Long Programs and Long Procedures as described in next section) it can be configured if comment lines are considered or not (by default they are excluded). Empty lines are ignored in general.

02_long_programs.png

 

Long Procedures

The Long Procedures metric checks methods, functions, ... regarding their length in lines of code. Violations of this metric are evidence that too much is done in, e.g., one method and the code should be extracted into more granular, reusable blocks. The following configuration defines that up to 60 lines of code are ok and that objects with up to 150 lines have to be checked. All objects with more than 150 lines of code are not ok.

 

02_long_procedures.png

 

Deep Nesting

Deep nesting is a classical metric which is also checked by ConQAT. It identifies code which is too complex to read and understand because of very deep nesting.

 

Our configuration allows up to 5 nesting levels (which is already a high number). Up to 7 levels, the code has to be checked. More than 7 levels are not allowed. A small illustration of what the metric flags follows the figure below.

 

03_deep_nesting.png
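As a simple illustration (my own example, not taken from ConQAT; the method and parameter names process_item, iv_matnr, iv_werks and iv_quantity are purely hypothetical), a deeply nested check can often be flattened with a guard clause:

" nesting depth 3
IF iv_matnr IS NOT INITIAL.
  IF iv_werks IS NOT INITIAL.
    IF iv_quantity > 0.
      process_item( ).
    ENDIF.
  ENDIF.
ENDIF.

" flattened to nesting depth 1 with a guard clause
IF iv_matnr IS INITIAL OR iv_werks IS INITIAL OR iv_quantity <= 0.
  RETURN.
ENDIF.
process_item( ).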

 

Integrated SCI Results

As mentioned before, ConQAT allows integrating the results of SCI check runs. It can be defined which check results are marked as critical warnings and which as guideline violations. The integration of the SCI results into the ConQAT dashboard has the advantage that not several different places have to be checked for remarks and that the results are also provided in a graphical way, which gives a better overview. And of course I can see directly what has changed over time.

 

Further features

In the previous chapters I gave a general overview of the ConQAT features at a high level. The following features were partly already mentioned in these chapters, but I want to add some further explanations to point out these functionalities.

 

Baseline Mechanism

ConQAT supports code baselines. That means you can define a code baseline for which no remarks should be reported in a delta comparison.

 

Depending on your project, the following quality goals are possible:

  • No remarks: No remarks in general. This can be applied for new projects, where you start from scratch. But in case a maintenance project is taken over, in most cases this quality goal cannot be applied, because the problems are integrated too deeply in the system, which would lead to additional implementation and test effort if the problems were to be solved (and as we all know, no one wants to pay for such things).
  • No new remarks in changed objects: In changed objects no new/additional remarks are introduced.
  • No remarks in changed objects: In changed objects no new/additional remarks are introduced and all existing remarks are solved.

 

Regarding the quality goals "No new remarks in changed objects" and "No remarks in changed objects", the baseline definition helps us to compare what was already there and what is new.

 

ConQAT analyzes the complete code and reports the remarks for the whole system, but it can also return just the delta compared to the baseline (if configured).

 

 

Detailed code analysis in browser

In the HTML dashboard the developer can navigate down to code level. On single object level, all remarks for the object are shown at the top. When scrolling through the code, the remarks are also indicated by a marker on the left side. So the problems can already be analyzed in detail without entering the system.

 

01a_code.png

01b_code.png

 

Integration with ABAP in Eclipse

With the ADT tools, so-called ADT links were introduced: links which can open ABAP objects directly in Eclipse. This ADT link feature is integrated into the dashboard. So a developer does not need to copy & paste the object name if he wants to edit it; he just has to click on the link to open the object directly in Eclipse, ready for editing.

02_adt.png

 

Blacklisting

Not every remark of ConQAT must be a valid remark (for various reasons). Therefore ConQAT supports blacklisting of remarks, so that they are ignored in further analysis runs.

 

Conclusion & Outlook

As you have seen, ConQAT is a powerful tool for static code analysis which gives a better overview of the system's code quality. With the integration of the SCI results you have the option to define one single place where all check results can be found and analyzed. Due to its flexible architecture, ConQAT also allows further languages to be analyzed which are in focus in the SAP development context (e.g. SQLScript or JavaScript).

At the moment the only thing I do not really like is that the code analysis only runs twice a day due to our configuration.

 

Finally I can say that the code quality has made a big step forward since I started using ConQAT.


Releasing Internal Table Memory



It is a well-known fact that you release the memory occupied by an internal table using either CLEAR or FREE, where FREE also releases the initial memory area. You normally use CLEAR if you want to reuse the table, and FREE if you really want to get rid of it and don't want to refill it later on. Assigning an initial internal table to a filled internal table also releases the target table's memory in the same way as CLEAR does.

 

But last week a colleague pointed out to me that it is not such a well-known fact that deleting lines of internal tables with DELETE normally does not release the memory occupied by the deleted lines. Instead, there seem to be people deleting lines of internal tables in order to release memory. Therefore, as a rule:

 

Deleting lines of an internal table using the DELETE statement does not release the table's memory.

 

For an internal table that was filled and where all lines have been deleted using the DELETE statement, the predicate IS INITIAL is in fact true. But the internal table is only initial regarding the number of lines, not regarding the memory occupied. You can check that easily using the memory analysis tools of the ABAP debugger.

 

So far so good. For releasing the memory of an internal table you use CLEAR or FREE; you do not simply DELETE all lines.
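A minimal sketch summarizing the behavior (table name and content are illustrative):

DATA itab TYPE STANDARD TABLE OF i.
itab = VALUE #( FOR n = 1 UNTIL n > 1000000 ( n ) ).

DELETE itab WHERE table_line > 0. " itab IS INITIAL afterwards, but the memory is still occupied
CLEAR itab.                       " releases the memory, keeps the initial memory area
FREE itab.                        " releases the memory including the initial memory area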

 

But what about the use case where you want to delete almost all lines from a big internal table and keep the rest? After deleting, the internal table occupies much more memory than needed for its actual lines. If memory consumption is critical, you might want to get rid of the superfluous memory occupied by such an internal table. How to do that?

 

Spontaneous idea:

 

DELETE almost_all_lines_of_itab.


DATA buffer_tab LIKE itab.
buffer_tab = itab.
CLEAR itab.
itab =  buffer_tab.
CLEAR buffer_tab.

 

Bad idea! Check it in the ABAP Debugger. Due to table sharing, after assigning itab to buffer_tab, buffer_tab points to the same memory area as itab. Assigning buffer_tab back to itab after clearing itab is simply an effectless roundtrip and you gain nothing.

 

Improved idea:

 

DELETE almost_all_lines_of_itab.


DATA buffer_tab LIKE itab.
buffer_tab = VALUE #( ( LINES OF itab ) ).
CLEAR itab.
itab =  buffer_tab.
CLEAR buffer_tab.

 

Now it works! Instead of copying itab to buffer_tab, you transfer the lines of itab sequentially to the initial target table and the memory is not shared. Before 7.40, SP08 you have to use INSERT LINES OF itab INTO TABLE buffer_tab instead of the VALUE expression, of course.

 

What also works for the use case is:

 

DELETE almost_all_lines_of_itab.


DATA buffer_tab LIKE itab.
buffer_tab = itab.
INSERT dummy_line INTO TABLE buffer_tab.
DELETE buffer_tab WHERE table_line = dummy_line.
CLEAR itab.
itab =  buffer_tab.
CLEAR buffer_tab.

 

By inserting a dummy line into buffer_tab and deleting it again, the table sharing is canceled and buffer_tab is built from scratch (but only if it needs considerably less memory than before; otherwise it is copied and nothing is gained again).

 

Ingenious minds might also find the following ways:

 

DELETE almost_all_lines_of_itab.


DATA buffer_string TYPE xstring.
EXPORT itab TO DATA BUFFER buffer_string.
CLEAR itab.
IMPORT itab FROM DATA BUFFER buffer_string.
CLEAR buffer_string.

 

or even

 

DELETE almost_all_lines_of_itab.


CALL TRANSFORMATION id SOURCE itab = itab
                       RESULT XML DATA(buffer_string).
CLEAR itab.
CALL TRANSFORMATION id SOURCE XML buffer_string
                       RESULT itab = itab.

CLEAR buffer_string.

 

Yes, those work too, but put some GET RUN TIME FIELD statements around them to see that they are not the best ideas ...
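For example, a minimal measurement sketch around the EXPORT/IMPORT variant (the difference is in microseconds; the variable names are illustrative):

DATA lv_t1 TYPE i.
DATA lv_t2 TYPE i.
DATA buffer_string TYPE xstring.

GET RUN TIME FIELD lv_t1.

" variant under test: serialize, clear, deserialize
EXPORT itab TO DATA BUFFER buffer_string.
CLEAR itab.
IMPORT itab FROM DATA BUFFER buffer_string.
CLEAR buffer_string.

GET RUN TIME FIELD lv_t2.

DATA(lv_runtime) = lv_t2 - lv_t1.
WRITE: / 'Runtime in microseconds:', lv_runtime.

Comparing this with the VALUE or dummy-line variants above shows why the serialization approaches are not recommended.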

Create a Formatted Excel in a Background Job



NOTE: Before beginning, the XLSX Workbench functionality must be available in your system.

 

Suppose we need to generate an Excel file in background mode.

For ease of example, let's create a form that contains only the classical phrase "Hello world !" nested in a rectangle area. We will send the resulting Excel file via SAP mail (in this case, to ourselves).

 

1 PREPARE A PRINTING PROGRAM.

 

As you can see, most of the code is taken up by the mailing (which does not apply to the form creation):

 

REPORT z_hello_world.

* declare and fill context
DATA gs_context TYPE lvc_s_tabl.
DATA gv_document_rawdata TYPE mime_data.

gs_context-value = 'Hello world!'.

* call the form
CALL FUNCTION 'ZXLWB_CALLFORM'
  EXPORTING
    iv_formname         = 'HELLO_WORLD'
    iv_context_ref      = gs_context
    iv_viewer_suppress  = 'X'
  IMPORTING
    ev_document_rawdata = gv_document_rawdata
  EXCEPTIONS
    OTHERS              = 2.
IF sy-subrc NE 0.
  MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
          WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
ENDIF.

* mailing
PERFORM send_mail USING gv_document_rawdata.

*&---------------------------------------------------------------------*
*&      Form  send_mail
*&---------------------------------------------------------------------*
FORM send_mail USING pv_document_rawdata TYPE mime_data.
  DATA:
    lv_attachment_size TYPE sood-objlen,
    lv_subject         TYPE so_obj_des,
    lv_document_size   TYPE i,
    lt_document_table  TYPE solix_tab.
  DATA:
    lr_send_request    TYPE REF TO cl_bcs,
    lr_mail_message    TYPE REF TO cl_document_bcs,
    lr_recipient       TYPE REF TO if_recipient_bcs,
    lr_error           TYPE REF TO i_oi_error,
    ls_retcode         TYPE soi_ret_string,
    lv_attachment_type TYPE soodk-objtp VALUE 'XLS'.

  CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
    EXPORTING
      buffer        = pv_document_rawdata
    IMPORTING
      output_length = lv_document_size
    TABLES
      binary_tab    = lt_document_table.

  lr_send_request = cl_bcs=>create_persistent( ).

  lv_subject = 'test mail'.
  lr_mail_message = cl_document_bcs=>create_document(
      i_type    = 'RAW'
      i_subject = lv_subject ).

  lv_attachment_size = lv_document_size.
  TRY.
      lr_mail_message->add_attachment(
          i_attachment_type    = lv_attachment_type
          i_attachment_subject = space
          i_attachment_size    = lv_attachment_size
          i_att_content_hex    = lt_document_table ).
    CATCH cx_document_bcs.
  ENDTRY.
  lr_send_request->set_document( lr_mail_message ).

  lr_recipient = cl_sapuser_bcs=>create( sy-uname ).

  lr_send_request->set_send_immediately( abap_on ).

  lr_send_request->add_recipient(
      i_recipient = lr_recipient
      i_express   = abap_on ).

  lr_send_request->send( i_with_error_screen = abap_on ).

  COMMIT WORK.
ENDFORM.                    "send_mail


2 PREPARE A FORM.

 

2.1 Launch XLSX Workbench, and in the popup window specify the form name HELLO_WORLD, then press the button «Process»:

 

 

 

An empty form will be displayed:

 

123.PNG

2.2 Push the button 444_19_2.PNG to save the form.

 

 

2.3 Assign the context LVC_S_TABL to the form:


 

 

Herewith, you will be prompted to create the form's structure automatically (based on the context):

00_6_3.PNG

Let's press the button.

 

As a result, «Pattern» and «Value» components will be added under the «Sheet» node in the form structure tree:

 

124.PNG

 

The added components already have a binding to the context. For these components, only the template binding is still required.
We'll do that later, but first we perform the markup of the template.
We'll do it later, but first we perform markup of template.

 

 

 

2.4 Make markup in the Excel template:


 

 

 


2.5 Template binding:


Assign «Pattern» to a target area in the Excel template. To make the assignment, perform the following steps in sequence:

 

  • Position the cursor on the node in the form's structure tree;
  • Select the cell range [A1 : C3] in the Excel template;
  • Press the button located in the item «Area in the template» of the Properties tab:

 

 

 

 

Similarly, assign «Value» to a target area in the Excel template. To make the assignment, perform the following steps in sequence:
  • Position the cursor on the node in the form's structure tree;
  • Select the cell range [B2] in the Excel template;
  • Press the button located in the item «Area in the template» of the Properties tab:

 

 

 

Scheme of bindings:

 

2.6 Activate the form by pressing the button 444_30.PNG.

 

 

3 EXECUTION.


Launch SE38 and run your report Z_HELLO_WORLD in background mode:


 


125.PNG


 

 

 

126.PNG

 

 

Using the Repository Information System (SE84) to find standard objects to use in Z objects


As an ABAP consultant, one thing that you hear and discover from day 1 is that it is always good to use standard objects. But there are a lot of times when we feel that the standard does not suit our requirements.

 

Consider a scenario for a Smartform for an SO output. We may need to access just, say, 10 fields of the VBAP table, but due to the restriction of Smartforms that structures or tables must be DDIC structures, we either end up using the entire structure of VBAP or creating a Z structure. This may not only impede the performance of the program but may also lead to compatibility issues in the future.

 

Here is where SAP helps us out with the Repository Information System (Transaction SE84). SE84 is a powerful transaction to display data dictionary objects and development properties.

In this blog, I will outline a few uses of SE84 that I have encountered, which have helped me greatly reduce my development times and use standard objects as opposed to Z objects.


1. Structures

 

Structures are one of the most used objects in SAP. Everything from passing data through interfaces to the creation of internal tables can use structures. So I believe this to be a very important area where we can, or I would say must, use standard objects to the extent possible.

 

To find a structure meeting your requirements using SE84, let's consider a scenario where you need the data elements VBELN and EBELN together in one structure.

Just go to SE84 and enter the fields as below on the selection screen.

 

1.png

 

On execution, the transaction returns all the structures that meet our requirements.

 

2.png

 

Similarly, we can also find other DDIC objects like tables, table types, views, domains etc.

This tool is especially useful for finding standard data elements meeting our requirements when we need to create Z tables or structures.

For example, suppose we need to find a data element of type CURR with length 13.



3.png


4.png


2. Message Class

 

As is said of any application, it should be highly interactive. This interaction relies largely on the messages that we display on the screen.

Most of the time, we either just hard-code the message in the program or create a new message class to suit our needs.

 

Using this transaction, we can find existing messages that suit our needs.

 

For Example, if we need to find a message to show something like “Material Not Found”,

 

Just enter something like below in the selection screen.

 

5.png

 

The transaction will then return all the applicable messages to suit our needs.

 

6.png


3. GET / SET Parameters

 

Many a time a requirement arises where we need to use the parameter IDs of variables, or we need to know all the relevant parameter IDs for a particular type of field, say "Customer".

 

At such times, we can use this tool to get a list of all the relevant parameter IDs.

 

To get all the relevant parameter IDs, we can enter the required search term in the selection screen as below.

 

7.png

 

On execution, we will get a list of all the relevant parameter IDs that we can use.

 

8.png
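To illustrate how a parameter ID found this way is then actually used in code, here is a small sketch (KUN is the standard parameter ID for the customer number; the variable name is illustrative):

DATA lv_kunnr TYPE kunnr.

" store the current customer in the user's SAP memory
SET PARAMETER ID 'KUN' FIELD lv_kunnr.

" read it back later, e.g. to pre-fill a selection-screen field
GET PARAMETER ID 'KUN' FIELD lv_kunnr.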

 

An important advantage of transaction SE84 is that, because the screen consolidates all the objects that we can possibly use, it becomes a one-stop shop for all our standard-object needs.

I would not recommend using this transaction for finding enhancement spots. Don't get me wrong, it is a great utility, but as far as enhancements are concerned, in my experience it is always best to find the right enhancement spots in debug mode or through code walkthroughs. This not only helps you understand the flow and decide on the best enhancement spots, but also helps you make sure that the changes you make work as desired and are not overridden again by the SAP standard process flow.

 

Hope this short tip helps you in your future assignments.

SAP Jet forms : Step-by-step Process for designing and generating a PDF


This blog gives you a brief overview of Jet forms used in SAP through third-party software, followed by an example.


I do accept that Jet forms are an old concept. But when you get an objective on this, or when a client you are working with has a requirement for it, I hope this document will be helpful. I went through this kind of situation myself, where the client I am working with gave a business requirement on Jet forms. I searched everywhere on SCN and Google, but the information I gathered was quite insufficient and I struggled to deliver the object.

So I am writing this blog for everyone who is new to Jet forms; if they get a business requirement on this, I am sure this blog will definitely help.

 

Jet Form Definition:

Organizations require e-document presentment solutions that enable them to output documents from a single system, in a variety of formats for delivery via multiple channels including mail, e-mail, fax, and the Web.

With an effective end-to-end delivery solution, organizations can also offer customers flexible alternatives for receiving information. An organization can automatically send information to its customers, and at the same time, its customers can actively seek out up-to-the-minute information.

This could be achieved with Jet forms in the times when smart forms/Adobe forms did not yet exist.

There is a misconception that Jet forms are a replacement for SAP Scripts. Jet forms are completely different from SAP Scripts.

 

How is a Jet form synced with SAP Scripts?

From SAP the data comes out in the form of RDI (Raw Data Interface); the SAP Scripts have to be set to print to RDI. This is then sent to a specific server set on the SAP Script, with details of the end printer. Jet Form then processes this data into a more reasonable format and then processes it against the Jet form.

The result is then sent to the printer.

 

Pros and Cons:

Jet Form Design is a full “What You See Is What You Get” (WYSIWYG) program. As you draw a form design, you can see it take shape on the screen. All the features, such as text, fields, lines, boxes and logos, and their attributes, such as fonts and shading, appear on-screen as they will print on the target printer.

In Jet Forms, you can incorporate graphical objects such as lines, boxes, circles and arcs. You can choose attributes such as line widths, line styles, and shading patterns. The product supports the most popular graphic formats for logos, such as .BMP, .PCX, .TIF and .WMF. You can also include bar codes printed in many of the common formats.

You can establish a tabbing sequence that specifies the order in which Filler prompts for data entry. You can designate fields as mandatory, or as protected, where the user can see the data values but cannot modify them.

Alongside this, there are a few disadvantages; the main drawback is that the forms cannot be tested with our design before sending them to the SAP system.

After the arrival of smart forms/Adobe forms, Jet forms became obsolete, as they could not meet many of the requirements and features that the present forms provide.

Also, as Jet forms are designed with third-party software, they became the least opted choice.

 

Software Used:

Jet forms are developed with third-party software provided by Adobe. The software is named "Adobe Output Designer".

 

 

 

Step-by-step Process of creating a Jet form and generating a pdf:


A Jet form is created in .IFD format and compiled to .MDF format (which cannot be opened; it is a raw format), which has to be uploaded to the UNIX server of the client.

Now we will see how to create a simple Jet form and generate its PDF file through a script in SAP.

 

We shall create a jet form design for the below pdf:

 

2.png

 

In the above PDF we can see a Title field, a Date box field, and three box fields for demo purposes.

Now we shall design a Jet form which meets the above sample requirement through Adobe Output designer software.

 

 

After installing and opening the software (Adobe Output Designer), you will see a blank screen as shown below.

 

3.png

 

Now we can create a design on the blank page with whichever fields we require. Select the field box in the Toolbox and click on the page.

 

4.png

 

5.png

 

Double click on the box and you can use options like field name, width, height, font alignment, etc. according to your requirement.

 

6.png

 

 

7.png

 

As shown above, we can design a page which consists of a few field boxes for the required fields, plus a logo if we need one. To learn more about the tools provided and used in Jet forms, we can place the cursor on a particular tool and see what it is used for.

 

8.png

 

Like this, we can create three field boxes and place them accordingly on the page.

 

9.png

 

We can utilize the outline box tool to highlight the field boxes if needed.

 

10.png

 

Now, we create various field boxes for Invoice number, Date, Total amount, etc. and arrange them in an order.

 

11.png

 

As shown above, if we are using a particular field for the invoice number (in the case of an invoice PDF), we have to name it Invoice Number. The same applies to the Date field, the Amount field and whichever other fields we are using.

 

After arranging the field boxes in an order, we highlight all of them as discussed earlier.

 

12.png

 

We can even create a Title for our pdf by placing few more data field boxes on the top of the page.

 

13.png

 

We can place a logo of the respective company by clicking on Logo tool and placing it wherever we need. In this scenario, I am placing a logo on the top left corner of the page.

 

14.png

 

 

15.png

 

Select OK and place the Logo in the required place on the page.

 

16.png

 

 

And now we have completed our first design in Jet Forms. Press the Save button 17.png and you can test-present your design.

 

Go to menu options -> File -> Presentment Targets. Here, select the default printer as PDF, so that you can view a test PDF of your design.

 

18.png

 

Now go to menu options -> File -> Test Presentment.

 

19.png

 

The design which you have created will be generated as a PDF as shown below:

Output:

20.png

 

You have successfully created your first Jet form.

 

As the SAP system cannot read the .IFD format (the format your design is in), we have to compile the Jet form and convert it to the .MDF format. As I mentioned earlier: "From SAP the data comes out in the form of RDI (Raw Data Interface); the SAP Scripts have to be set to print to RDI. This is then sent to a specific server set on the SAP Script, with details of the end printer. Jet Form then processes this data into a more reasonable format and then processes it against the Jet form.

The result is then sent to the printer.”

 

The .MDF file which you have generated will be uploaded by the BASIS team to the client's UNIX server.

 

The .MDF file is generated as shown:

 

Go to menu options -> File -> Presentment Targets, select the list of printers as per your client's requirements and click OK.

 

23.png

 

Now go to File -> Compile and give the path where you want to place the .MDF file on your system.

 

24.png

 

In large designs, where a Jet form contains 2 or more pages, a "Preamble" is necessary, in which we have to write a few commands and mention the field names. In some cases, the preamble file is generated automatically once we compile the design to the .MDF format.

 

You can see the preamble file under menu options -> Format -> Template Preamble.

 

25.png

 

 

Many of us might wonder whether the preamble file is created automatically by Jet Form Design or whether it has to be created manually.

Well, it’s half and half. As long as you have a newer version of Output Designer, it will create the JFPREAMBLE_1 (and _2, _3 and so on as needed) and also make a stub JFPREAMBLE for you.

If you want to change the way it does things, you need to edit the JFPREAMBLE. You can't edit the others, as they will be overwritten every time you recompile the form. When making edits in the JFPREAMBLE, you should put them *after* the ^FILE lines that pull in the other preambles, so that they override whatever might have been in those. Often you'll find yourself copying lines from JFPREAMBLE_1 to JFPREAMBLE and making only minor changes.

 

This blog will be continued; I will shortly post again on how to associate Jet forms with SAP Scripts.

Automation of Text Symbol Creation


Many times we are required to create text symbols for hardcoded texts, the reason being that with text symbols translation becomes possible and the same program can then be used across different regions/countries with other languages.

 

Depending on the number of hardcoded texts one has in a program, text symbol creation can be either an easy or a difficult job. To create a text symbol one can simply double-click the hardcoded text; the system asks if you want to create a text symbol, then you move to the text symbol screen, where you save and return.

 

As part of a CI initiative at my company, we managed to automate this process. We simply execute a program (source code provided below) and provide it with the program name for which we want to create text symbols, and that's it! The program creates the text symbols wherever necessary.

 

The information on which texts require translation is picked up from the extended program check. Also, if you have given a number to a hardcoded text yourself and forgotten to create a text symbol for it, the program will do it for you. If you already have some text symbols and are adding new ones, the next number is picked automatically by the program.
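The author's actual source code is not reproduced here, but the core technique can be sketched with the TEXTPOOL statements (everything in this snippet — the program name ZMY_REPORT, the text and the symbol number — is purely illustrative; the real program takes the texts from the extended program check results and determines the next free number itself):

DATA lt_textpool TYPE STANDARD TABLE OF textpool.
DATA ls_text     TYPE textpool.

" read the existing text pool of the target program
READ TEXTPOOL 'ZMY_REPORT' INTO lt_textpool LANGUAGE sy-langu.

" add a new text symbol ('I' = text symbol, key = symbol number)
ls_text-id     = 'I'.
ls_text-key    = '001'.
ls_text-entry  = 'Material not found'.
ls_text-length = strlen( ls_text-entry ).
APPEND ls_text TO lt_textpool.

" write the text pool back
INSERT TEXTPOOL 'ZMY_REPORT' FROM lt_textpool LANGUAGE sy-langu.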

 

Using this program we were able to cut down on development time. When I am coding I do not worry about whether I have to create a text symbol for a text, and I surely save some CLICKS.

 

One can modify the code as per one's requirements. Also, if you have any suggestions or improvements, please do let me know.

HAPPY CODING!!!!
