
Do SAT trace on applications which could not be launched by SAT


It is quite convenient to use transaction code SAT to trace a traditional dialog application. When the application to be traced is not a dialog application, however - for example a Web Dynpro or Fiori application - the steps are less straightforward. The blog Next Generation ABAP Runtime Analysis (SAT) – How to analyze program flow introduces how to trace such an application with the help of "In Parallel Session", shown below.


clipboard1.png

This blog introduces a simple alternative for tracing a web application, written based on SAP_BASIS 7.40.

 

In my Fiori application I use an OData service to return the shared calendar of a given colleague. The OData service is called from Postman, a Chrome extension, and I need to run a performance trace on it.

clipboard2.png

Step 1: Since I know the entry point of this OData service implementation is located in line 12, I set a breakpoint there and click the Send button in Postman; the breakpoint is triggered. Click the "New Tool" icon:

clipboard3.png

Step 2: Choose Special Tools and launch Trace (SE30/ST05):

clipboard4.png

Double-click the icon below:

clipboard5.png

Now the status turns green, which means the ABAP trace is switched on.

clipboard6.png

Step 3: Set another breakpoint at the end of the code to be traced and press F7 to reach it. Double-click the icon in the "On/off" column again; the status turns red and the trace is deactivated. You should now see an icon in the "TraceFile" column, which means the trace file has been generated.

clipboard7.png

Step 4: Go back to SAT, where you will find the generated trace file. Double-click it; the remaining steps are the same as when tracing a dialog application.


clipboard8.png


clipboard9.png

As you can see in this blog, you can use this approach to launch an ST05 trace whenever you like during debugging. The only limitation, as the SAT file description "Trace started with kernel default values" indicates, is that you cannot use your own customized trace variant. If you do need features that are only available through a customized trace variant, you have to use the steps described in the blog mentioned above.



Transport query analysis


Subject Matter.

The situation is familiar for any SAP consultant—transport queries are transferred to the target system with errors caused by violations of integrity of the transferred object. For example, a transferred program uses new tables (or new versions of older tables) but the target system does not “know” them, as these created/modified tables were not transferred and are absent from the query transferred with the program.

Such situations are fraught with problems: not only headaches and delays, but also the possibility of strategically important risks and losses.

Such risks are especially noticeable when the authors of these changes (programmers employed by the Contractor) do not have the authority to independently transfer changes into the Customer's target system. Under the existing regulations, they are forced to contact the Customer's Basis consultants about these questions. Thus, a strict system of control over transferring changes is put into effect.

This situation is fraught not only with problems related to the loss of time and money (extended deadlines and, as a result, delayed payment for completed work), but can also put a company's reputation at risk. For example, unfriendly employees of the Customer gain a reason to call into question the qualifications of the Contractor's programmers - they're so careless that they cannot even assemble a simple transfer query! Yet the terms of the contract promised only highly qualified programmers.

Anyone familiar with the situation knows that even the most professional and experienced programmer is defenceless against such errors. There is always a risk, whether as a result of haste or carelessness, or for more objective reasons, e.g. the changes were made a long time ago, or there are complex interdependencies. Standard SAP tools do not offer sufficient protection.

All of this can eventually lead to real financial losses. Thus, the subject matter of this article goes beyond the scope of programmers.

However, by approaching the issue in a slightly more creative manner, it can be feasible to create tools that can help manage these risks.

The general concept of the solution.

In order to analyse the integrity of queries, we must perform the following steps:

  1. Select all objects contained in the transfer query and its subqueries.
  2. Chain-select all objects that are used directly or indirectly within this query and its subqueries. The topic of mutual use of objects is too vast to cover in this article, so we will limit ourselves to questions related to the use of program objects, i.e. ABAP code objects: ABAP reports, ABAP includes, classes. We will not analyse the use of data elements in table fields, domains in data elements, etc. Likewise, we will limit the types of used objects analysed to ABAP includes and global types: Data Dictionary entries (database tables, structures, table types, type pools and their components, data elements) and ABAP classes (with their components). Of course this is a limitation, but when you implement your tool and begin using it, you will see that even this limited variant "closes" 80-90% of problems. And if this tool's capabilities don't cover your needs, with little effort you will be able to extend the circle of analysed dependencies in its "image and likeness".
  3. For those used objects that are not present in the transport query, we should perform a remote version comparison between systems (we will test for differences). Of course, we will compare the source system and the target system.
  4. If the object differs between systems and is absent from the transport query, we should signal the problem.

Implementation.

 

The analyser is implemented as a regular executable ABAP report.

For the output of the analysis we will not use sophisticated UI elements like ALV. For our purposes, a conventional list output is sufficient.

I will deliberately be using old-fashioned ABAP code style (HEADER LINE and so on) in the interest of brevity and clarity (and in order to strip out the auxiliary constructs a modern ABAP style would require). For your own implementation, I recommend following SAP's recommendations regarding obsolete constructs.

The only input parameters we need are the query number and the name of the SAP system against which we want to check compatibility. One could take the name of the target SAP system from the attributes of the query, were it not for the usual three-level SAP system landscape ("Development" -> "Test" -> "Production"). Often it is necessary to check the integrity of the query at every transfer stage.

PARAMETERS: p_trkorr TYPE e070-trkorr    OBLIGATORY MEMORY ID trkorr " What we transfer

          , p_system TYPE e070-tarsystem OBLIGATORY MEMORY ID tarsys " Where we transfer it

          .

Select all objects contained in the transferred query and its subqueries.

The transport query’s header and the hierarchy of its subqueries are kept in table E070. The number of the query/task is in field E070-TRKORR. The number of the parent query for a task (subquery) is in field E070-STRKORR.

Objects contained in the queries/tasks are kept in table E071.

Objects can be directly linked either to the “main” query or to the subqueries (tasks).

* We will build a RANGES table from the query number and then retrieve a list of all objects contained in the query via field E071-TRKORR.

 

DATA: ltr_korr TYPE RANGE OF e070-trkorr WITH HEADER LINE,

      BEGIN OF ltd_objk OCCURS 0,

        object TYPE e071-object,

        obj_name TYPE e071-obj_name,

      END OF ltd_objk.

 

ltr_korr-sign = 'I' .

ltr_korr-option = 'EQ' .

 

SELECT trkorr INTO ltr_korr-low

  FROM e070

WHERE trkorr EQ p_trkorr

    OR strkorr EQ p_trkorr .

  APPEND ltr_korr .

ENDSELECT .

 

SELECT DISTINCT

        object

        obj_name INTO TABLE ltd_objk

  FROM e071

WHERE

       trkorr   IN ltr_korr

   AND pgmid    IN ('LIMU', 'R3TR')

ORDER BY OBJECT .

In table LTD_OBJK we now have a list of all objects included in the query and its subqueries.

In this article we are limiting ourselves to the following object types:

  1. ABAP programs;
  2. ABAP includes;
  3. Global Data Dictionary structures: database tables, structures, table types and data elements, as well as their components;
  4. Global ABAP classes (built in SE24), and their methods.

Value of field OBJECT    Object Type

PROG                     ABAP programs and ABAP includes
TABL                     Definitions of Data Dictionary tables and structures
TYPE                     Type pools (groups of types)
FUGR                     Function groups
CLAS                     ABAP class
METH                     Individual method of an ABAP class

Chain-select all objects directly or indirectly used by the objects within the query or its subqueries.

Insofar as we limit ourselves in this article to usages in ABAP programs (and do not consider other usages, for example of data elements in tables), we need data sources from which we can obtain information about the use of objects in ABAP programs.

SAP offers a good tool for this task: “Usage journal.”

Data on the use of objects in programs are stored in the *CROSS* tables.

Table Name    Purpose

WBCROSSI      Use of ABAP includes in programs.

WBCROSSGT     Use of global types in programs, including use of components of tables and structures, and including the use of ABAP classes, their fields and methods.

CROSS         Use of function modules in programs. In this table we will also find information about the use of type pools (groups of types). In addition, this table contains data about the use of various other object types, e.g. messages, but we will not consider them in this article.

WBCROSSGT is the most interesting and varied of these tables.

In SAP, the concept of a "global type" is quite broad: Data Dictionary objects, tables, structures, data elements and their components. ABAP classes are also global types.

Every line of ABAP code that accesses a global type or one of its components produces a line in WBCROSSGT.

ABAP Code                                           OTYPE   WBCROSSGT-NAME

IF sy-subrc = 0.                                    SY      SY\DA:SUBRC

DATA: BEGIN OF ltd_objk OCCURS 0,                   TY      E071\TY:OBJECT
        object TYPE e071-object,                            E071\TY:OBJ_NAME
        obj_name TYPE e071-obj_name,
      END OF ltd_objk.

CALL METHOD o_grid->set_table_for_first_display     ME      CL_GUI_ALV_GRID\ME:SET_TABLE_FOR_FIRST_DISPLAY

 

The field WBCROSSGT-INCLUDE contains the name of the include or program that refers to the global type or its component. Please note: it is not the name of the main program that is recorded, but the name of the directly referring piece of ABAP code.

 

Thus we can get a list of all global types involved directly or indirectly in the work of the ABAP programs, if we (beforehand) make a list of all includes involved in the program’s operations.

We simultaneously receive information about the usage of functional modules.

 

*We perform a search of all includes involved in program operations.

  DATA: BEGIN OF ltb_crossi OCCURS 0,

          name TYPE wbcrossi-include,

        END OF ltb_crossi,

        ltd_wbcrossi TYPE TABLE OF wbcrossi WITH HEADER LINE .

 

  LOOP AT ltd_objk .

    ltb_crossi-name = ltd_objk-obj_name .

    COLLECT ltb_crossi .

  ENDLOOP .

  SELECT DISTINCT * INTO TABLE ltd_wbcrossi

    FROM wbcrossi

         FOR ALL ENTRIES IN ltb_crossi

   WHERE

         include EQ ltb_crossi-name

      OR master EQ ltb_crossi-name .

 

* For all includes located, we perform a search of all global types used in these includes.

  DATA: ltd_wbcrossgt TYPE TABLE OF wbcrossgt WITH HEADER LINE .

  SELECT * INTO TABLE ltd_wbcrossgt

    FROM wbcrossgt

                   FOR ALL ENTRIES IN ltd_wbcrossi

   WHERE include EQ ltd_wbcrossi-include .

* For all includes located, we perform a search of all function modules involved.

  DATA: ltd_cross TYPE TABLE OF cross WITH HEADER LINE .

  SELECT * INTO TABLE ltd_cross

    FROM cross

                   FOR ALL ENTRIES IN ltd_wbcrossi

   WHERE include EQ ltd_wbcrossi-include .

Intermediate Results

We have received lists of the following:

  1. All objects included in the analysed query and its subqueries (tasks);
  2. Objects used by the objects being developed.

 

Next, we must be guided by the following logic:

  1. If the object is located in that same query, all is well: its current version will be transferred together with the objects using it.
  2. If the object is not found in that query, we must check whether it needs to be there for the transport to go through smoothly. Obviously, if the object's version is identical in both systems, it won't be necessary to transfer it.
  3. If a used object is absent from the query and its versions differ between systems, we must warn about potential problems in transferring such a query. It is necessary either to include such objects in the query or to transfer them into the target system in advance.

Comparing object versions between systems.

 

There is a standard way to accomplish this task in SAP: the version control system with remote inter-system version comparison.

It is possible to check for differences between versions by calling the FM SVRS_CHECK_VERSION_REMOTE_INT.

 

Completion of parameters:

Parameter     Description

E071_ENTRY    A parameter with the structure of table E071, but only the fields PGMID, OBJECT and OBJ_NAME are really significant. We can get the field values by selecting from E071 by object name (we take these fields from any query in which the object is present).

DESTINATION   The name of the RFC connection for communication with the remote system. This RFC connection is created automatically when you configure the transport system. Its name is composed according to the rule 'TMSADM@' + SystemName + '.' + TransportDomainName.

The transport domain name can be found from the name of the target system using the system FM TMS_CI_GET_SYSTEMLIST, or one can use the FM SVRS_GET_RFC_DESTINATION to obtain the complete RFC destination directly.

Result

If no differences are found, the FM finishes without error.

If differences are found, an EXCEPTION is raised:

NOT_EQUAL  - the versions are not identical;

NO_VERSION - the object has not yet been transferred into the target system.

 

However, this FM has a strange feature: in theory, the exception NO_VERSION should be raised if the object is absent from the target system. In practice, however, the FM may finish without error, as if the object were present in the target system and no differences had been found.

 

Therefore, before calling SVRS_CHECK_VERSION_REMOTE_INT, we will first call the FM FIND_OBJECT_40, which checks for the existence of the object in the target system:

 

CALL FUNCTION 'FIND_OBJECT_40'

  EXPORTING

    DESTINATION       = 'TMSADM@' + SystemName + '.' + TransportDomainName

    OBJNAME           = Technical name of the object

    OBJTYPE           = Type of object from field E071-OBJECT (FUNC, METH, etc.)

  IMPORTING

    OBJECT_NOT_FOUND  = Flag: object not found in target system

 

Afterwards, if OBJECT_NOT_FOUND is empty, we call SVRS_CHECK_VERSION_REMOTE_INT.
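Put together, the check for a single used object can be sketched as follows. This is only a sketch: the variable names (ls_e071, lv_dest, lv_domain, lv_notfound) are mine, and the exact parameter lists of these FMs should be verified in your system.

```
* Sketch: check one used object against the target system.
DATA: ls_e071     TYPE e071,       " PGMID/OBJECT/OBJ_NAME filled from E071
      lv_dest     TYPE rfcdest,
      lv_notfound TYPE flag.

* RFC destination built by the rule 'TMSADM@' + system + '.' + domain;
* lv_domain is assumed to hold the transport domain name.
CONCATENATE 'TMSADM@' p_system '.' lv_domain INTO lv_dest.

CALL FUNCTION 'FIND_OBJECT_40'
  EXPORTING
    destination      = lv_dest
    objname          = ls_e071-obj_name
    objtype          = ls_e071-object
  IMPORTING
    object_not_found = lv_notfound.

IF lv_notfound IS INITIAL.
  CALL FUNCTION 'SVRS_CHECK_VERSION_REMOTE_INT'
    EXPORTING
      e071_entry  = ls_e071
      destination = lv_dest
    EXCEPTIONS
      not_equal   = 1
      no_version  = 2
      OTHERS      = 3.
  IF sy-subrc <> 0.
    " Versions differ - signal a potential transport problem
    WRITE: / 'Potential transport problem:', ls_e071-obj_name.
  ENDIF.
ELSE.
  WRITE: / 'Object missing in target system:', ls_e071-obj_name.
ENDIF.
```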


Victor Amosoff, ABAPFactory.com

Create GUI STATUS from ABAP code


I was always wondering why we cannot create a GUI STATUS directly from code, as can be done in other programming languages (VB or C#). I tried to find a solution for that, but every source said it is not possible. That is only partially true: we indeed cannot dynamically create new function codes, but we can change the icons and texts of existing function codes that use dynamic texts. Using this technique we can prepare a GUI STATUS in which all function codes use dynamic texts, and then update it at runtime.

 

Of course, you would not want to set this up from scratch for each program, as that makes no sense. But since you can call the GUI STATUS of another program, you can create it only once and reuse it when needed. Still, updating all the icons and remembering the name of the program that stores the GUI STATUS would not be very handy, which is why in my blog I have posted a way to do it in a nice and truly reusable way.
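The underlying mechanism can be sketched like this. It is a sketch only, not the actual zca_ab_dynamic_gui code: the carrier program ZDYN_GUI_CARRIER, the status name DYNSTAT, the function code FC01 and the class-data field are all assumed names; the status would have to be created once in SE41 with its function codes bound to dynamic texts pointing at such fields.

```
REPORT zdyn_gui_demo.

* Local holder for the dynamic texts; the (assumed) GUI status DYNSTAT
* of carrier program ZDYN_GUI_CARRIER defines its function codes with
* dynamic texts referencing such fields.
CLASS lcl_gui_texts DEFINITION.
  PUBLIC SECTION.
    CLASS-DATA gv_text_01 TYPE c LENGTH 40.
ENDCLASS.

START-OF-SELECTION.
  " Text the reused status will display for function code FC01
  lcl_gui_texts=>gv_text_01 = 'My dynamic button'.
  " Reuse the status maintained once in the carrier program
  SET PF-STATUS 'DYNSTAT' OF PROGRAM 'ZDYN_GUI_CARRIER'.
  WRITE: / 'List displayed with the reused GUI status'.

AT USER-COMMAND.
  CASE sy-ucomm.
    WHEN 'FC01'.
      MESSAGE 'Dynamic button pressed' TYPE 'S'.
  ENDCASE.
```

The original blog passes the texts via class-data of zca_ab_dynamic_gui instead of a local class, which is what makes the carrier status reusable from any program.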

 

In the example below you can see that I use static methods of a class zca_ab_dynamic_gui to add new buttons and show the GUI status or title bar. Thanks to its class-data we can pass dynamic texts to the program which keeps the GUI STATUS.

 

2015-07-13_14h58_23.png

 

The result of this program will be:

dynamic_gui_status4.png

 

Seems nice, right? But you also have to keep in mind the disadvantages of this solution:

  • you cannot change the function code names
  • reading the PAI module will not be as clear as if you had given proper names to the function codes
  • the separator does not look the same as in a standard GUI

 

Advantages of using it:

  • you don't have to create a GUI STATUS in SE41 each time, especially for small programs/reports
  • you can create / update everything using ABAP

 

 

You can find the full code of the class and an explanation here: ABAP Blog - Dynamic GUI STATUS & TITLE with ABAP code.

ABAP Unit Testing: Encapsulate Database Access Using a Local Class


Introduction

 

The Goals of Test Automation at XUnitPatterns.com include keywords like "robust", "repeatable", "fully automated" and so on. In ABAP you can simply access the database using Open SQL statements, which may be scattered all over your code. Unit testing with database access is a problem in general, because you easily miss those goals.

 

For black-box tests you need a constant, given state as a precondition, and your test should run as quickly as possible. For repeatable tests this means you may have to flush tables, insert a consistent state, run your test and finally roll back the LUW - and hope that you have replaced all dependencies that might have executed an explicit COMMIT WORK statement. Apart from messy setup code, you may suffer performance problems executing your tests.

Let's take one step back - what is your goal? We have to test our code in several layers, with different techniques. On the lowest level you start testing your classes in isolation, then test upwards with classes in combination, a complete subsystem, or end-to-end with the GUI and so on.


Testing the logic of a class without relying on the database can be achieved simply by using local classes. I originally found this idea in Rüdiger Plantiko's wiki article ABAP Unit Best Practices. The idea is to encapsulate all SQL statements in a local class. Before you run a unit test you replace the lcl_db instance with an instance that does not access the database - a so-called stub. The stub returns a structure or internal table defined by the test.


Advantages

  • Placing all the SQL statements in a local class gives you fewer points of change for your database logic inside the class. You may also reduce the number of statements, because you get a good overview of your class's SQL statements.
  • You can test different execution paths of your class under test by returning different data.

Disadvantages

  • To inject a local class stub instance you have to have access to the private instance attribute.
  • Navigating to the local class implementations using SE80 may be confusing for colleagues.
  • You cannot execute DB queries in your constructor if the constructor is not private (which is in general not a good idea anyway).


This approach can also be used to encapsulate function module calls, using an lcl_api class.



How to use it

 

Step 1: The Class under Test

 

In your global class you navigate to the local class definitions using the shortcut [CTRL]+[F5]. In this include I define the interface lif_db and the classes lcl_db and lcl_db_stub.

 

 

INTERFACE lif_db.
  METHODS:
    get_ztb_test_1
      IMPORTING
        i_land1 TYPE land1
      RETURNING VALUE(rs_ztb_table_1) TYPE ztb_table_1.
ENDINTERFACE.

CLASS lcl_db DEFINITION FINAL.
  PUBLIC SECTION.
    INTERFACES lif_db.
ENDCLASS.

CLASS lcl_db_stub DEFINITION FINAL.
  PUBLIC SECTION.
    DATA:
      ms_ztb_table_1__to_return TYPE ztb_table_1.
    INTERFACES lif_db.
ENDCLASS.

 

Then you go back to the global class definition and jump to the local class implementations using [CTRL]+[SHIFT]+[F6].

 

CLASS lcl_db IMPLEMENTATION.
  METHOD lif_db~get_ztb_test_1.
    SELECT SINGLE *
      FROM ztb_table_1
      INTO rs_ztb_table_1
      WHERE land1 = i_land1.
  ENDMETHOD.
ENDCLASS.

CLASS lcl_db_stub IMPLEMENTATION.
  METHOD lif_db~get_ztb_test_1.
    rs_ztb_table_1 = me->ms_ztb_table_1__to_return.
  ENDMETHOD.
ENDCLASS.


The next step is to add the database instance to your primary class. Add a member attribute in the private section:


mo_db TYPE REF TO lif_db.


In the Constructor you have to instantiate it.


CREATE OBJECT me->mo_db TYPE lcl_db.


In your Class with the production Code you can access the Database by calling the methods of mo_db.
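A call in the production code might then look like this (a sketch; the variable name ls_entry and the country key are illustrative):

```
DATA ls_entry TYPE ztb_table_1.
* mo_db is typed to the interface lif_db, so the interface method
* can be called directly on the reference.
ls_entry = me->mo_db->get_ztb_test_1( i_land1 = 'DE' ).
```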

 

Step 2: The Test-Class


Now let's have a look at the test class. I don't use setup to generate the test instances; normally I have a get_fcut method that returns the instance under test.

 

CLASS ltcl_test_my_class DEFINITION
    FOR TESTING
    DURATION SHORT
    RISK LEVEL HARMLESS
    FINAL.
  PRIVATE SECTION.
    METHODS:
      get_fcut
        IMPORTING i_for_vkorg TYPE vkorg
        RETURNING VALUE(ro_fcut) TYPE REF TO zcl_encapsulated_db_access_1,
      get_db_stub
        IMPORTING i_for_vkorg TYPE vkorg
        RETURNING VALUE(ro_db_stub) TYPE REF TO lcl_db_stub,
      run_a_test FOR TESTING.
ENDCLASS.

 

Between definition and implementation you have to make the local test class a friend of the class under test. That's necessary to access the private mo_db attribute and replace it with the stub. Whenever possible you should use other techniques for dependency injection.

 

CLASS zcl_encapsulated_db_access_1 DEFINITION LOCAL FRIENDS       ltcl_test_my_class.

 

Resist the temptation to use any other "internal" private attributes or methods - relying on the internals of the class you're testing is not a good idea.

 

CLASS ltcl_test_my_class IMPLEMENTATION.

  METHOD get_fcut.
    CREATE OBJECT ro_fcut.
    ro_fcut->mo_db = me->get_db_stub( i_for_vkorg ).
  ENDMETHOD.

  METHOD get_db_stub.
    CREATE OBJECT ro_db_stub.
    " Setup stub values
    ro_db_stub->ms_ztb_table_1__to_return-vkorg = i_for_vkorg.
  ENDMETHOD.

  METHOD run_a_test.
  ENDMETHOD.

ENDCLASS.
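For completeness, run_a_test could then look roughly like this. This is a sketch: the method some_logic_using_db and the expected value are assumptions about the class under test, which the article does not show.

```
METHOD run_a_test.
  DATA lo_fcut TYPE REF TO zcl_encapsulated_db_access_1.
  " Arrange: instance wired with the stub for sales org '1000'
  lo_fcut = me->get_fcut( i_for_vkorg = '1000' ).
  " Act + Assert: the logic under test now sees only the stubbed data
  cl_abap_unit_assert=>assert_equals(
    act = lo_fcut->some_logic_using_db( )
    exp = '1000' ).
ENDMETHOD.
```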

 

That's it!

 

 

Resume

 

Building your code this way allows you to test the logic inside the class without the database itself. With parameterised setup or get_fcut methods you can test multiple execution paths in your logic.
But be aware: sometimes it's a narrow path between a good test with good coverage and complicated, messy tests that get brittle over time and complicate changes instead of acting as a safety net.

 

Even if I don't unit test a class, I tend to extract the SQL queries into a "db" class. That allows me to hide the actual query behind an expressive method name.

Why it’s not a good practice to use text-symbols as literals in your code



From time to time the issue of text-symbol usage keeps popping up in my daily work, so I decided to write a short blog about why I think it is better to avoid using text-symbols as literals in your code.

You have a report / function module / class where you need to use a text that has to be translatable. One way of doing this is to define a text-symbol.

 

Now the tricky part is that you can define / use a text-symbol in two ways and it will behave differently when you want to change it:

  1. You can create a text-symbol by using Goto --> Text elements and reference it in your code via text-ccc (e.g. text-001), OR
  2. You can create a literal, reference the text-symbol via a 3-character ID and use forward navigation (double-click on it) to create the text-symbol (e.g. l_string = ‘Hello world!’(001))

 

When you choose the second option to create and reference a text-symbol, keep in mind the following:

  • If you modify the literal, you always need to use forward navigation to transfer the new value into the text-symbol. Otherwise, the value in use will be the old one.

    E.g.: You change
    l_string = ‘Hello world!’(001) into
    l_string = ‘Hello ABAP!’(001) and you forget to use forward navigation to replace the text-symbol's old value with the new one.
    If you output l_string’s value you will see it’s actually ‘Hello world!’ instead of what you might have expected, that is ‘Hello ABAP!’.

  • If you modify the text-symbols via Goto --> Text elements, the text-symbol will have a value which differs from the literal used in your code. The value that is actually in use is the one from the text-symbol.

    E.g.: You go to Goto --> Text elements and you change the value of the text-symbol 001 from ‘Hello world!’ to ‘Hello ABAP!’. In your code, you are still using l_string = ‘Hello world!’(001).
    If you output l_string’s value you will see it is ‘Hello ABAP!’ which, at a first glance, might seem awkward because in your code you have ‘Hello world!’.



Therefore, in order to avoid a mismatch between the actual value in use (which is always the text-symbol) and the value of the literal, reference text-symbols as text-ccc in your code.
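A minimal sketch of the recommended form (the report name and the text value are illustrative; text symbol 001 is assumed to be maintained under Goto --> Text elements):

```
REPORT ztext_symbol_demo.

DATA l_string TYPE string.

* text-001 always resolves to the current, translatable value
* maintained in the text elements - no literal can get out of sync.
l_string = text-001.
WRITE: / l_string.
```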

Transport query analysis

$
0
0

Subject Matter.

The situation is familiar for any SAP consultant—transport queries are transferred to the target system with errors caused by violations of integrity of the transferred object. For example, a transferred program uses new tables (or new versions of older tables) but the target system does not “know” them, as these created/modified tables were not transferred and are absent from the query transferred with the program.

Such situations are fraught with problems: not only headaches and delays, but also more present the possibility of strategically important risks and losses.

Such risks are especially noticeable when the authors of these changes (programmers employed by the Contractor) are unable, that is to say, do not have the authority to independently transfer changes into the Customer’s target system. As a result of existing regulations, they are forced to contact the Customer’s basic consultants about these questions. Thus, a strict system of control of transferring changes is put into effect.

This situation is fraught with problems related to the loss of time and money (extension of deadlines and, as a result, terms of payment for completed work), but also can put a company’s reputation at risk. For example, unfriendly employees of the Customer have a reason to call into question the level of qualification of the Contractor’s programmers—they’re so stupid that they cannot even complete a simple transfer query! Yet the terms of the contract promised only highly qualified programmers.

As anyone familiar with the situation knows that even the most professional and experienced programmer is defenceless against such errors. There is always a risk—as a result of haste or carelessness, or because of more objective reasons e.g. the changes were made a long time ago, or due to complex interdependencies. Standard SAP tools do not offer sufficient protection.

All of this can eventually lead to real financial losses. Thus, the subject matter of this article goes beyond the scope of programmers.

However, by approaching the issue in a slightly more creative manner, it can be feasible to create tools that can help manage these risks.

The general concept of realization.

In order to analyse the integrity of queries, we must perform the following steps:

  1. 1.)    Select all objects contained in the transfer query and its subqueries.
  2. 2.)    Chain select all functions that are used directly or indirectly within this query and its subqueries. The topic of mutual use of functions is too vast to cover in this article. So we will limit ourselves to questions related to the use of programming functions, i.e. ABAP-code functions: АВАР-reports, АВАР-includes, classes. We will not analyse the use of data elements in table fields, domains in data elements, etc. Likewise we will also limit the types of functions used: only АВАР-includes and global types: glossary entries (database tables, structures, types of tables, pools and their components, data elements), АВАР-classes (with their components). Of course this is a limitation, but when you implement your tool and begin using it, you will see that even this limited variant “closes” 80-90% of problems. And even if this tool’s possibilities don’t cover your needs, with little effort will be able to easily build up a circle of analysed dependencies in its “image and likeness”.
  3. 3.)   For those functions used that are not present in the transport query, we should a remote version comparison between systems (we will test for differences. Of course, we will compare the source system and the target system.
  4. 4.)    If the function differs between systems and is absent in the transport query, we should signal the problem.

Implementation.

 

The analyser is carried out like a regular executed ABAP-report.

При For the conclusion of the analysis we will not use sophisticated UI elements like ALV. For our purposes, a conventional listing conclusion is sufficient.

I will deliberately be using the old-fashioned ABAP-code style (using HEADER-LINE and so on) in the interest of brevity and clarity (and in order to remove inessential structures necessary to maintain ABAP-code in a modern style). For unassisted implementation, I recommend following SAP’s recommendations related to the outdated constructions.

  1. The only input parameters we need are the query number and the name of SAP system whose compatibility we want to check. One could take the name of the target SAP system from the attributes of the query, were it not for the usual the three-levelled SAP system landscape (“Development”à”Test”à”Product”). Often it is necessary to determine the integrity of the query at all transfer stages.

PARAMETERS: p_trkorr TYPE e070-trkorr    OBLIGATORY MEMORY ID trkorr " What we transfer

          , p_system TYPE e070-tarsystem OBLIGATORY MEMORY ID tarsys " Where we transfer it

          .

Chain select all functions, contained in the transferred query and its subqueries.

The transport query’s header and the hierarchy of its subqueries are kept in table E070. The number of the query/task is in field E070-TRKORR. The number of the higher query for the task (subquery) is in field E071-STRKORR.

Objects contained in the queries/tasks are kept in table E071.

Objects can be directly linked either to the “main” query or to the subqueries (tasks).

*We will form RANGES from the number of the query and will receive a list of all objects contained in the query, in the field E071- TRKORR.

 

DATA: ltr_korr TYPE RANGE OF e070-trkorr WITH HEADER LINE,

      BEGIN OF ltd_objk OCCURS 0,

        object TYPE e071-object,

        obj_name TYPE e071-obj_name,

      END OF ltd_objk.

 

ltr_korr-sign = 'I' .

ltr_korr-option = 'EQ' .

 

SELECT trkorr INTO ltr_korr-low

  FROM e070

WHERE trkorr EQ p_trkorr

    OR strkorr EQ p_trkorr .

  APPEND ltr_korr .

ENDSELECT .

 

SELECT DISTINCT

        object

        obj_name INTO TABLE ltd_objk

  FROM e071

WHERE

       trkorrINltr_korr

   AND pgmid    IN ('LIMU', 'R3TR')

ORDER BY OBJECT .

In Table LTD_OBJKwe now have a list of all objects included in the query and its subqueries.

In this article we limit ourselves to the following object types:

  1. ABAP programs;
  2. ABAP includes;
  3. Global dictionary structures: database tables, structures, table types and data elements, as well as their components;
  4. Global ABAP classes (built in SE24) and their methods.

Value of field OBJECT   Object type

PROG   ABAP programs and ABAP includes
TABL   Definitions of tables and structures in the ABAP Dictionary
TYPE   Type pools (groups of types)
FUGR   Function groups
CLAS   ABAP class
METH   Individual method of an ABAP class

Select all objects directly or indirectly used by the objects in the query and its subqueries.

Since in this article we limit ourselves to objects used by ABAP programs (and do not consider other usages, for example of data elements in tables), we need data sources from which we can obtain information about the use of objects in ABAP programs.

SAP offers a good tool for this task: the “usage journal” (where-used index).

Data on the use of objects by programs is stored in the *CROSS* tables.

Table name   Purpose

WBCROSSI    Usage of ABAP includes in programs.
WBCROSSGT   Usage of global types in programs, including the use of components of tables and structures, as well as ABAP classes, their attributes and methods.
CROSS       Usage of function modules in programs. In this table we also find information about the use of type pools. In addition, it contains data about various other object types, e.g. messages, but we will not consider them in this article.

WBCROSSGT is the most interesting and varied of these tables.

In SAP, the concept of a “global type” is quite broad: dictionary objects, tables, structures, data elements and their components. ABAP classes are also global types.

Every line of ABAP code that accesses a global type or one of its components produces an entry in WBCROSSGT.

ABAP code:
  IF sy-subrc = 0 .
OTYPE: SY
WBCROSSGT-NAME: SY\DA:SUBRC

ABAP code:
  DATA: BEGIN OF ltd_objk OCCURS 0,
          object TYPE e071-object,
          obj_name TYPE e071-obj_name,
        END OF ltd_objk.
OTYPE: TY
WBCROSSGT-NAME: E071\TY:OBJECT, E071\TY:OBJ_NAME

ABAP code:
  CALL METHOD o_grid->set_table_for_first_display
OTYPE: ME
WBCROSSGT-NAME: CL_GUI_ALV_GRID\ME:SET_TABLE_FOR_FIRST_DISPLAY

 

The field WBCROSSGT-INCLUDE contains the names of the includes and programs that refer to the global type or its component. Please note: it is not the name of the main program that is recorded, but the name of the include that directly contains the referring ABAP code.

 

Thus we can get a list of all global types involved directly or indirectly in the work of the ABAP programs, if we (beforehand) make a list of all includes involved in the program’s operations.

At the same time, we obtain information about the usage of function modules.

 

*We perform a search of all includes involved in program operations.

  DATA: BEGIN OF ltb_crossi OCCURS 0,

          name TYPE wbcrossi-include,

        END OF ltb_crossi,

        ltd_wbcrossi TYPE TABLE OF wbcrossi WITH HEADER LINE .

 

  LOOP AT ltd_objk .

    ltb_crossi-name = ltd_objk-obj_name .

    COLLECT ltb_crossi .

  ENDLOOP .

  SELECT DISTINCT * INTO TABLE ltd_wbcrossi

    FROM wbcrossi

         FOR ALL ENTRIES IN ltb_crossi

   WHERE

         include EQ ltb_crossi-name

      OR master EQ ltb_crossi-name .

 

* For all includes located, we perform a search of all global types used in these includes.

  DATA: ltd_wbcrossgt TYPE TABLE OF wbcrossgt WITH HEADER LINE .

  SELECT * INTO TABLE ltd_wbcrossgt

    FROM wbcrossgt

                   FOR ALL ENTRIES IN ltd_wbcrossi

   WHERE include EQ ltd_wbcrossi-include .

* For all includes located, we perform a search of all function modules involved.

  DATA: ltd_cross TYPE TABLE OF cross WITH HEADER LINE .

  SELECT * INTO TABLE ltd_cross

    FROM cross

                   FOR ALL ENTRIES IN ltd_wbcrossi

   WHERE include EQ ltd_wbcrossi-include .

Intermediate Results

We have received lists of the following:

  1. All objects included in the analysed query and its subqueries (tasks);
  2. All objects used by the objects being developed.

 

Next, we must be guided by the following logic:

  1. If the used object is located in that same query, all is well: its current version will be transferred together with the objects using it.
  2. If the used object is not found in that query, we must check whether it should be there so that the transport goes smoothly. Obviously, if the object’s version is identical in both systems, it is not necessary to transfer it.
  3. If the used object is absent from the query and its versions differ between the systems, we must warn about a potential problem with the transport of such a query. It is then necessary either to include such objects in the query or to transfer them into the target system in advance.

Comparing object versions between systems

 

There is a standard way to accomplish this task in SAP: the version control system with remote inter-system version comparison.

It is possible to check for differences between versions by calling the function module SVRS_CHECK_VERSION_REMOTE_INT.

 

Filling the parameters:

Parameter

Description

E071_ENTRY

A parameter with the structure of table E071, in which only the fields PGMID, OBJECT and OBJ_NAME are actually used. We can get the field values by selecting from E071 by object name (these fields are filled in any query in which the object is present).

DESTINATION

The name of the RFC connection for communication with the remote system. This RFC connection is created automatically when you configure the transport system. Its name is composed according to the rule

‘TMSADM@’ + SystemName + ‘.’ + TransportDomainName

The transport domain name can be determined from the name of the target system using the system function module TMS_CI_GET_SYSTEMLIST. Alternatively, one can call the function module SVRS_GET_RFC_DESTINATION to receive the completed RFC destination directly.

Result

If no differences are found, FM will work without error.

If differences are found, an EXCEPTION is raised:

NOT_EQUAL  – the versions are not identical

NO_VERSION – the object has not yet been transferred into the target system.

 

However, this FM has a strange feature: in theory, the exception NO_VERSION should be raised if the object is absent from the target system. In practice, however, it returns without error, as if the object were present in the target system and no differences had been found.

 

Therefore, before calling SVRS_CHECK_VERSION_REMOTE_INT we will first call the function module FIND_OBJECT_40, which checks for the existence of the object in the target system:

 

CALL FUNCTION 'FIND_OBJECT_40'
  EXPORTING
    DESTINATION      = 'TMSADM@' + SystemName + '.' + TransportDomainName
    OBJNAME          = Technical name of the object
    OBJTYPE          = Object type from field E071-OBJECT (FUNC, METH, etc.)
  IMPORTING
    OBJECT_NOT_FOUND = Flag: object not found in the target system

 

Afterward, if OBJECT_NOT_FOUND is in fact empty, we call SVRS_CHECK_VERSION_REMOTE_INT.
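Putting the two calls together, the per-object check could look roughly like this. This is only a sketch: the variable names, the way LV_DEST is filled, and the exact FM signatures beyond the parameters listed above are assumptions, not taken from the original article.

```abap
DATA: ls_e071     TYPE e071,
      lv_dest     TYPE rfcdest,   " assumed to be filled beforehand, e.g.
                                  " via SVRS_GET_RFC_DESTINATION
      lv_notfound TYPE flag.

LOOP AT ltd_objk.
  " Read a representative E071 entry for the object (PGMID, OBJECT, OBJ_NAME)
  SELECT SINGLE * FROM e071 INTO ls_e071
   WHERE object   = ltd_objk-object
     AND obj_name = ltd_objk-obj_name.

  " Step 1: does the object exist in the target system at all?
  CALL FUNCTION 'FIND_OBJECT_40'
    EXPORTING
      destination      = lv_dest
      objname          = ltd_objk-obj_name
      objtype          = ltd_objk-object
    IMPORTING
      object_not_found = lv_notfound.

  IF lv_notfound IS NOT INITIAL.
    WRITE: / ltd_objk-obj_name, 'does not exist in the target system'.
    CONTINUE.
  ENDIF.

  " Step 2: remote version comparison
  CALL FUNCTION 'SVRS_CHECK_VERSION_REMOTE_INT'
    EXPORTING
      e071_entry  = ls_e071
      destination = lv_dest
    EXCEPTIONS
      not_equal   = 1
      no_version  = 2
      OTHERS      = 3.

  CASE sy-subrc.
    WHEN 1.
      WRITE: / ltd_objk-obj_name, 'differs between the systems'.
    WHEN 2.
      WRITE: / ltd_objk-obj_name, 'has no version in the target system'.
  ENDCASE.
ENDLOOP.
```

The WRITE statements merely stand in for whatever warning output the real report would produce.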


Victor Amosoff, ABAPFactory.com

SAP Lockbox enhancement to improve invoice matching.


LOCKBOX ENHANCEMENT

 

The standard SAP lockbox processing program uses the MICR number (bank key and bank account) and the invoice numbers included in the bank lockbox file to apply and clear open AR items from customer accounts. There are cases where the detailed remittance information is not available in the lockbox file. This enhancement builds additional algorithms to search for invoices and apply cash, implemented in the user exits described below.

 

Process Flow:

Standard SAP provides two function exits, EXIT_RFEBLB20_001 and EXIT_RFEBLB20_002.

The parameters of function exit EXIT_RFEBLB20_001 are as follows.

The import parameter I_AVIK contains the Payment Advice Header details from file.

1.png

 

The export parameter (E_AVIK) contains the manipulated values which need to be
updated in the Payment advice header table (AVIK).

2.png

 

 

The function exit has the following table parameters.

3.png

 

The parameters of function exit EXIT_RFEBLB20_002 are as follows.

Import Parameters:

4.png

Export Parameters:

5.png

 

The logic for calculating the total amount is written in the function exit EXIT_RFEBLB20_001.

 

The following are the important scenarios in Lockbox.

 

1) Payment advice header details (I_AVIK) are available and Payment Advice Line Item
details (I_AVIP) are not available.

 

For this scenario, fetch all the open items from table BSID by passing the company code (I_AVIK-BUKRS), the account number (I_AVIK-KONTO) and the last date of the previous month.

 

Calculate the due date and delete the records whose baseline date for due date calculation is before the last date of the current month.

 

Calculate the total amount by accumulating the amounts in document currency (BSID-WRBTR) fetched from table BSID.

 

If the total amount is equal to the check amount (I_AVIK-RWBTR), move all line items with the corresponding fields from BSID to the table parameter T_AVIP.

 

 

2) Payment advice header details (I_AVIK) are available and Payment Advice Line Item
details (I_AVIP) are available.

 

For this scenario, fetch all the open items from table BSID by passing the company code (I_AVIK-BUKRS), the account number (I_AVIK-KONTO) and the reference document number (T_AVIP-XBLNR) for all entries in table T_AVIP.

 

Check whether all the line items in T_AVIP table have a corresponding line item in
BSID.

 

If the net payment amount (NEBTR) in table T_AVIP is filled, calculate the total amount by accumulating the net payment amounts (T_AVIP-NEBTR).

Otherwise, calculate the total amount by accumulating the amounts in document currency (BSID-WRBTR) fetched from table BSID.

 

If the total amount is greater than the check amount (I_AVIK-RWBTR), it is an underpayment: insert a line in T_AVIP with the difference between the total amount and the check amount (I_AVIK-RWBTR).

If the total amount is less than the check amount (I_AVIK-RWBTR), it is an overpayment: insert a line in T_AVIP with the difference between the check amount (I_AVIK-RWBTR) and the total amount.
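The difference posting in both cases can be sketched as follows. The variable and field names are assumptions; LV_TOTAL stands for the accumulated total amount described above.

```abap
" Under-/over-payment adjustment for scenario 2 (sketch only).
DATA lv_diff TYPE avip-wrbtr.

IF lv_total <> i_avik-rwbtr.
  CLEAR t_avip.
  IF lv_total > i_avik-rwbtr.
    " Underpayment: the check covers less than the matched items
    lv_diff = lv_total - i_avik-rwbtr.
  ELSE.
    " Overpayment: the check exceeds the matched items
    lv_diff = i_avik-rwbtr - lv_total.
  ENDIF.
  t_avip-wrbtr = lv_diff.
  APPEND t_avip.
ENDIF.
```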

 

 

 

Cancelation of Goods Receipt from Vendor (101 Movement Type along with 543 Movement Type) via IDOC


 

1.   BUSINESS REQUIREMENT

 

Goods movement cancellation is performed through an IDoc or a custom program for the subcontracting PO scenario (movement type 101 together with 543, which includes the component materials). In this scenario, when the goods movement is cancelled through an IDoc for movement type 101 along with 543, the IDoc fails with status 51 and the status message “PU: Withdrawn quantity exceeded” for the 543 movement type line item, with quantity, material number, plant and batch number.

However, the same material document can be cancelled successfully in the manual process in transaction MIGO, where a new material document with movement types 102 and 544 is created.

 

 

2. DESCRIPTION

 

When the material document of a subcontracting PO goods movement is cancelled in transaction MIGO (manual process), the material document is cancelled and a new material document with movement types 102 and 544 is created.
image 1.png

 

Cancelled Material Document

image 2.png

 

3.   EXPLORATION

We are able to cancel the material document in transaction MIGO. However, if we cancel the goods movement through other transactions or an IDoc, the function module BAPI_GOODSMVT_CANCEL is called to cancel the goods movement, and in the IDoc call it returns the exception “PU: Withdrawn quantity exceeded” for the 543 movement type line item, with quantity, material number, plant and batch number.

 

image 3.png

 

4.   SOLUTION PROPOSED

 

Goods movements are reversed with function module BAPI_GOODSMVT_CANCEL, which calls a further function module, MB_SET_BAPI_FLAG, to set the flag XBAPI for BAPI_GOODSMVT_CANCEL. If transaction MIGO is used, MB_SET_BAPI_FLAG sets the global parameter XMIGO = 'X'; for all other callers XMIGO remains SPACE.

 

Create an enhancement in function module MB_SET_BAPI_FLAG to check the IDoc message type from the memory of the calling program:

 

ASSIGN '(RBDMANI2)T_EDIDC-MESTYP' TO <LFS_MESTYP>.

 

If <LFS_MESTYP> contains one of the expected message types, or SY-REPID matches a Z* program, set the parameter XMIGO = 'X'.

 

The material document is then cancelled just as in MIGO, whether the call comes from a Z program or through an IDoc.
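A sketch of the enhancement in MB_SET_BAPI_FLAG is shown below. The concrete message type checked ('MBGMCR') and the Z-program test are assumptions based on the description above, not the exact productive code.

```abap
" Enhancement sketch for MB_SET_BAPI_FLAG.
FIELD-SYMBOLS <lfs_mestyp> TYPE edidc-mestyp.

" Read the IDoc message type from the memory of the IDoc
" processing report RBDMANI2
ASSIGN ('(RBDMANI2)T_EDIDC-MESTYP') TO <lfs_mestyp>.

IF ( <lfs_mestyp> IS ASSIGNED AND <lfs_mestyp> = 'MBGMCR' )  " assumed type
   OR sy-repid(1) = 'Z'.
  " Behave as if the cancellation was started from MIGO
  xmigo = 'X'.
ENDIF.
```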

 

 

5. Related SAP Modules

 

             SAP-MM and SCM

 

 

6. Assumptions

 

An IDoc calling BAPI_GOODSMVT_CANCEL has been triggered in the middleware to cancel the material document.

 

 

Cancelled Material Document

 

image 4.png

 

image 5.png

 

7.   TEST RESULT

 

Step 1: Before the enhancement, when the IDoc is executed, the flag XMIGO in function module MB_SET_BAPI_FLAG remains SPACE; the IDoc ends with status 51 and the status message “PU: Withdrawn quantity exceeded” for the 543 movement type line item (with quantity, material number, plant and batch number), and the material document is not cancelled.

 

image 6.png

 

Step 2: After the enhancement, the flag XMIGO in function module MB_SET_BAPI_FLAG is set to 'X'; the IDoc posts successfully with status 53.

image 7.png

 

image 8.png

image 9.png

The material document is created successfully with movement types 102 and 544.

image 10.png


Why Integration is Easier and Harder than Ever Before


A few days ago, I had a conversation about the challenges ABAPers are facing, about current trends with respect to these challenges, and about the skills developers need to meet them. The person I was talking to mentioned that integration had specifically been brought up as a key issue: apparently, it is something developers struggle with and that causes them great difficulty. My conversation partner asked why integration was more difficult today than it used to be.

My response was, it is so hard because it has never been easier. In a nutshell, when integrating systems, user expectations set the bar higher than ever before.

“My god, it's full of silos!”

In the past, users understood that each IT system was a silo:

  • Their login would be “I30540003” in one system, “MILLERTHEO” in another, “TMILLER0001” in a third system, etc.
  • Each system had its own client software that you had to launch from your desktop: SAPGUI here, a Borland Delphi Windows client executable there, a Telnet TN3270 emulation connecting to an IBM host system elsewhere; maybe without even a functioning copy-paste between the windows.
  • Each system would have its own database, with long time spans between synchronizations: a nightly file download and upload here, a weekly sync there. A change in one system would show up in other systems days later.

Current expectations

Today, when using two different pieces of software that are at least remotely related, what we expect is roughly this:

  • Real-time backend data integration: All data must be perfectly consistent with other data we consume from other related sources, in real-time.
  • Comfortable single sign-on: It’s not user-friendly to require a user to log on as long as there is a valid session with some other system that can vouch for them.
  • Use identities from other systems: Why should I create usernames and passwords with new systems if they could just accept my Facebook/Google/Windows domain identity and credentials?
  • Cross-system personalization: Why should I have to teach two or three systems the same about what I like and what I don’t? Can’t they inform each other about these things?
  • Seamless navigation: If a screen flow takes me from one system to the next, it should be a seamless and consistent navigation flow with, at the very least, properly functioning back and forth navigation, and ideally a consistent look and feel, screen layout patterns, icon language, etc.
  • Consistent process flow and monitoring: Flows across multiple systems should be as robust and easy to monitor as flows within a single system. If something goes wrong, I want to know what went wrong and how, and how to get it going again.

From the point of view of an architect or software developer, there are even more requirements:

  • Architectures should be designed so that there is the least possible amount of redundant development. Business logic, screen fragments, and data structures that are needed in two different applications (or IT systems) should somehow be built once and reused wherever else required. It doesn’t matter if one environment is a native iPhone application, a second is an offline HTML5 page running in a web browser, and a third is an ABAP background process running a nightly job – people already become nervous when even a trivial chunk of business logic such as “birthdate must not be initial” is implemented redundantly across these platforms.
  • As the functionalities required by a single end-to-end process are spread across multiple systems, we have to find solutions in case one of these systems is temporarily down. So we have to use mechanisms like message queues, make sure that messages are sent exactly once or that the web service that handles them is idempotent, we have to be able to monitor the message flow associated with a business process and track and correct errors, and maybe our process flows need a Plan B in case a required subsystem is down, so we can tell the user: “Your entry has been submitted and is pending approval, you will get a notification as soon as it has been processed,” instead of giving them the usual instantaneous feedback.
  • Applications should be maintenance-friendly: Even though the steps of an end-to-end process are spread across multiple systems and platforms, changes in one platform should not require adaptations in the other systems (or break them).

576px-Pole_vault_Its_all_for_this_moment.jpg

Fig.: The bar is high

 

Who raised the bar?

Putting it all together, we can see that the bar with respect to seamless frontend and backend integration is infinitely higher today than it used to be. But why? The answer lies in another question and answer: Why did the mountaineer climb the mountain? – Because it exists. Why did the architect create a tight coupling between the systems? – Because there’s a way to do it.

From silos to a service web

The first driver is SOA: In enterprise IT, the siloed landscapes were shaken up when SOA came along and it was possible to call functions provided by one system from another. This led to a reduction in redundancy: Functions that used to have many implementations on different platforms in a company’s application landscape were reduced to one implementation, which can be called as a web service by other systems. This possibility has changed people’s thinking and the trend has just been going on and on.

Internet experience as a driver of change

The second driver is the user experience people have with Google and Facebook. Think about how many Google sites you use: Out of Calendar, Gmail, Search, News, Drive, Music, Movies, Maps, there are certainly going to be a few you will use on a regular basis. They all log you on transparently and automatically, without asking for a password or username, and you never have to tell them anything twice. I don’t even remember my Google password, but I use my Google account on a half dozen sites almost daily.

It’s similar with Facebook. I don’t use any sub-sites of Facebook, but whenever a new non-Facebook site I want to use gives me the choice between “Sign up” (create yet another username and password combination to torture me from now on) and “Log in with Facebook,” I certainly know which one I pick. Facebook offers great value to me, and to many other users, as a leading identity provider for third-party web sites.

Users take these expectations to work and demand that the enterprise IT systems they work with live up to the same standards of simplicity and user-friendliness. They want to have as few identities as possible, and be able to use them with as many systems as possible.

Availability of standards

Getting back to the mountaineer who climbed the mountain because it existed: the availability of widespread IT standards for integration puts us in a similar position. Single sign-on between your new custom system and an existing system running anywhere is possible thanks to open standards, and so it is seriously considered and added to your new project’s backlog.

Near real-time data replication between systems is easy to implement with SAP LT Replication, and so it is considered and possibly added to your project’s backlog.

The same goes for other features such as sophisticated message queuing while a system is down, provisioning data and functions to mobile clients, secure login for mobile devices through the internet, navigation between cloud-based and on-premise systems, and seamless user integration across companies (e.g. employees of other companies, whose identities are maintained in that other company’s system, signing on to your partner-facing portal with single sign-on from their Windows domain).

New state of the art

The bottom line is that the state of the art has changed greatly, and the level of integration that was normal five years ago doesn’t cut it anymore today. An architect or developer who is tasked with creating that type of integration may find that they have to read up on internet standards and get their feet wet learning how to actually implement and use them in a real system.

This, I believe, explains why developers are struggling with integration now more than ever before. There may have been a time when IT departments were in the driver’s seat – but now we live in a consumerized enterprise IT world. New tools exist, and users demand that we use them. We have to learn how and do it.

The Specification Pattern: a Tool to write more maintainable Business-Logic?


In Eric Evans' book "Domain-Driven Design" I read about the specification pattern. In this article I describe how, and in which cases, it can be used in ABAP.

 

An Example

 

Let's have a look at some pseudo-code:

 

lo_spec_can_be_used_for_production =
    lo_spec_factory->is_in_warehouse(
    )->and(
        lo_spec_factory->can_be_produced_with_machine_a(
        )->or(
            lo_spec_factory->can_be_produced_with_machine_b( ) )
    )->and(
        lo_spec_factory->requires_fork_lift_to_move( )->not( ) ).

IF lo_spec_can_be_used_for_production->is_satisfied_by( lo_warehouse_unit ) = abap_true.
  " Okay, we can use that unit for production!
ENDIF.

Every chained specification is a question that may itself contain a lot of complex business logic. Reading the code is like reading English text: you immediately know its intent. It is an alternative to multiple (nested) IF statements.

 

Code with many conditions can become hard to understand. Looking at the example above, you can imagine how many conditions we have to put together:

 

IF   (    lo_warehouse_unit->current_area = 'WAREHOUSE_1'
       OR lo_warehouse_unit->current_area = 'WAREHOUSE_2' ... )
 AND ( (    lo_warehouse_unit->material->width > 120
         OR lo_warehouse_unit->material->height > 500
         OR lo_warehouse_unit->material->total_weight < 200 ... )
    OR (    lo_warehouse_unit->material->width <= 120
         OR lo_warehouse_unit->material->height <= 500
         OR lo_warehouse_unit->material->type = 'YZS' ) )
 AND (    lo_warehouse_unit->total_weight < 123
       OR lo_warehouse_unit->type = zif_warehouse_unit_type=>special_pallet
       OR ... ).

To make the code cleaner, you could also extract the conditions into private boolean methods:

 

IF     me->is_in_warehouse( lo_warehouse_unit ) = abap_true
   AND (    can_be_produced_with_machine_a( lo_warehouse_unit ) = abap_true
         OR can_be_produced_with_machine_b( lo_warehouse_unit ) = abap_true )
   AND does_not_require_fork_lift_to_move( lo_warehouse_unit ) = abap_true.

The code is much cleaner this way, and you don't have to read the implementation details of every condition.

 

This may be fine for many situations. But suppose you have complex business-logic conditions that are used in multiple places of your application: it is not a good idea to copy and paste the IF statements, because the day of the change request will come.

 

 

Implementation

 

Let's have a look at an ABAP implementation of the specification pattern. Because you cannot inherit from ZCL_ABSTRACT_SPECIFICATION in its own local class definition, the AND, OR and NOT classes are global classes.

 

uml_specification.PNG

 

To implement a specification class, you just have to inherit from ZCL_ABSTRACT_SPECIFICATION and redefine the method ZIF_SPECIFICATION~IS_SATISFIED_BY.

 

To hide the implementation (the actual specification class), you can create a factory class whose methods return a ZIF_SPECIFICATION instance.
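A minimal sketch of a concrete specification and its factory method could look like this. The class, attribute and parameter names (IO_CANDIDATE, RV_SATISFIED, ZCL_WAREHOUSE_UNIT) are assumptions for illustration; the actual signatures are defined by the interface in the GitHub example.

```abap
CLASS zcl_spec_is_in_warehouse DEFINITION
    INHERITING FROM zcl_abstract_specification.
  PUBLIC SECTION.
    METHODS zif_specification~is_satisfied_by REDEFINITION.
ENDCLASS.

CLASS zcl_spec_is_in_warehouse IMPLEMENTATION.
  METHOD zif_specification~is_satisfied_by.
    " Assumption: the candidate arrives as a generic object reference
    " and is cast to the expected type here.
    DATA lo_unit TYPE REF TO zcl_warehouse_unit.
    lo_unit ?= io_candidate.
    IF lo_unit->current_area = 'WAREHOUSE_1'
        OR lo_unit->current_area = 'WAREHOUSE_2'.
      rv_satisfied = abap_true.
    ENDIF.
  ENDMETHOD.
ENDCLASS.

" In the factory class, each method hides a concrete class behind the
" interface:
METHOD is_in_warehouse.
  CREATE OBJECT ro_spec TYPE zcl_spec_is_in_warehouse.
ENDMETHOD.
```

Callers only ever see ZIF_SPECIFICATION references, so a specification's internals can change without touching the consuming code.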

 

 

In which situations can the specification pattern be useful?

 

As you've seen, the specification pattern in ABAP comes with a bit of overhead. Let's look at the pros and cons:

 

+

  • Conditions expressed in human-readable words
  • Reduces the number of IF statements
  • Encapsulates the conditions in classes (they can be tested in isolation, or may internally use a BRF+ function)
  • Where-used search via the factory methods

-

  • Increases the class and object count
  • Slower performance in frequently called code segments (e.g. inside a LOOP)
  • If you want to use generalised specification classes, you have to cast from the generic "object" reference type.

 

In scenarios with complex logic that is used in multiple places, the specification pattern may help you write code that is easier to reuse, read and maintain.

 

 

Example ABAP Implementation

 

You can view and download the example implementation on GitHub; it contains text files with the code for the interface and the classes.

 

 

Feedback

 

  • What do you think about the specification pattern in general, and about the example implementation?
  • Have you already used this pattern, maybe in a different variation?
  • Do you have cases in mind where you would use the specification pattern?

CDS - One Concept, Two Flavors



If you search the web or SCN for CDS, SAP's Core Data Services, you can easily find statements like "Core data services (CDS) is an infrastructure for defining and consuming semantically rich data models in SAP HANA.". On the other hand, there also seems to be something called ABAP CDS in ABAP Dictionary. How are they connected? Let's have a look from the ABAP (and even ABAP CDS) docu writer's worm's eye view.

 

ABAP Dictionary

 

In order to develop (business) applications, you need something to model your data. In the ABAP world, meaning on the ABAP application server, you have used the ABAP Dictionary or tools like the data modeler for this purpose for years. The ABAP Dictionary was and is the platform-independent metadata repository for database tables and database views that can be accessed in ABAP using Open SQL. The definition of the database entities is done in the form-based ABAP Dictionary tool on the application server, and the corresponding DDL is passed to the database via the DBI (database interface). You can examine this by looking at the "Database Object" in SE11; for a view, for example, you see the corresponding DDL statement CREATE VIEW. For an ABAP program, the entities defined in the ABAP Dictionary act like global types, making it simple to declare ABAP data objects as targets or sources for DML statements in Open SQL.

 

HANA CDS

 

With the dawn of SAP HANA and the possibility of developing applications directly on the database, without an application server, the need arose to create a metamodel repository directly on the database. As with the ABAP Dictionary on the application server, there should be more in the box than native SQL's CREATE TABLE or CREATE VIEW, especially regarding the need to enrich purely technical definitions with semantics. And that's one of the main reasons for SAP's Core Data Services.

 

Core Data Services provide a specification for an SQL-based DDL that is enriched with further possibilities like annotations or associations, and that can generally be implemented on different platforms. On SAP HANA, CDS provides the possibility "to define the artifacts that make up the data-persistence model." The DDL of CDS allows you to define database tables, database views and data types by wrapping the corresponding native HANA SQL statements and enriching them with semantic properties. From an ABAP programmer's point of view, one might say: there is a source-code-based dictionary tool directly on the HANA database.

 

ABAP CDS

 

From the beginning, CDS was not designed for HANA alone. The ABAP application server should also benefit from the enhanced capabilities that the DDL of CDS offers compared to the form-based ABAP Dictionary tool. Since the ABAP Dictionary, with its capability of defining tables, views and data types, was already there, the natural way of introducing CDS on the ABAP application server was to add it to the ABAP Dictionary. An ADT-based source code editor allows you to create DDL sources. On activation, the CDS entities defined in such a DDL source become full-citizen ABAP Dictionary objects: they work as ABAP types that can be named after a TYPE addition, and they can be accessed in Open SQL. As a first step, the advanced view-building capabilities of CDS have been implemented in ABAP CDS. With many, many tables already defined in the ABAP Dictionary, the DEFINE VIEW statement of ABAP CDS makes the full wealth of the CDS universe readily available to existing ABAP data models. You can either define sophisticated new views or simply wrap an existing table in a CDS view in order to enrich it semantically. For example, ABAP CDS offers a new authorization concept based on roles defined with DEFINE ROLE (released with 7.40, SP10) in a DCL source. In order to let an existing table participate in the new authorization concept, you can simply create a CDS view for that table that is connected to a role. Other examples are advanced joining of tables with associations, or using specific annotations in order to connect existing tables to new technologies like OData or UI5. Even in ABAP itself, CDS views are handled in an advanced way compared to classical dictionary objects; automatic client handling is an example. So, with the introduction of CDS views in the ABAP Dictionary, one big step has already been taken. Capabilities to create database tables, database functions, and data types in ABAP CDS might follow.
This would ultimately allow you to create data models in the ABAP Dictionary from scratch using ABAP CDS only.
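As a small sketch, wrapping an existing table in a semantically enriched CDS view and tying it to a DCL role could look roughly like this. The entity, table, role and authorization object names are invented for illustration, and annotation details vary by release:

```abap
@AbapCatalog.sqlViewName: 'ZSALESORDV'
@AccessControl.authorizationCheck: #CHECK
@EndUserText.label: 'Sales orders, semantically enriched'
define view Z_SalesOrder as select from zsalesorder {
  key order_id,
      customer_id,
      @Semantics.amount.currencyCode: 'currency'
      net_amount,
      currency
}
```

The matching DCL role (again a sketch, with an assumed authorization object Z_AUTH_OBJ and field CUSTID) would grant access row by row:

```abap
@MappingRole: true
define role Z_SalesOrder_Role {
  grant select on Z_SalesOrder
    where ( customer_id ) = aspect pfcg_auth( Z_AUTH_OBJ, CUSTID );
}
```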

 

ABAP CDS vs. HANA CDS


With HANA CDS and ABAP CDS we have two implementations based on the same specification. The implementations are very similar, but not 100 % equal. If you know the DDL of CDS, you should be able to understand definitions of CDS entities in both flavors, but as a rule you will not be able to copy DDL sources from HANA to ABAP and vice versa without modifications. The implementation of both HANA CDS and ABAP CDS is ongoing, with different priorities. That's why a blog like New Core Data Services Features in SAP HANA 1.0 SPS 10 is interesting for developers working directly on SAP HANA, while ABAP developers have to look for the news in ABAP CDS.

 

While HANA CDS has to function on SAP HANA only, ABAP CDS is open. Consequently, there are some restrictions for ABAP CDS that do not exist for HANA CDS (in the same way as Open SQL is more restricted than native SQL). A good example is built-in functions. A built-in function like CURRENCY_CONVERSION offered in ABAP CDS must be available on any database platform and - very importantly - must behave in the same way on all platforms. The same holds for expressions like arithmetic expressions, aggregates or the CAST expression. Before such functionality can be released in ABAP CDS, all platforms have to participate. Quite a task! And that's why ABAP CDS cannot offer all SQL functions of SAP HANA yet (but we are working on it ...). On the other hand, it is open! And there are also ABAP specialties like client handling or table buffering that are not available in HANA CDS, but are supported in ABAP CDS.

 

 

Conclusion

 

SAP's Core Data Services provide a single concept and infrastructure for data modelling that is implemented in SAP HANA as well as on the ABAP application server. The design principles are the same for both implementations, but due to differences in the respective environments, there are naturally some differences between these flavors.

 

 

PS

 

To my knowledge, there is no native implementation of the CDS concept on database platforms other than SAP HANA up to now. In order to model in CDS for databases other than HANA, you can of course use ABAP CDS in the ABAP Dictionary and let it push down the model for you.

Loose coupling with commands and dependency injection


Hello everyone,

 

as this is my very first blog post, I would like to give a very short introduction to myself. I live with my family in Ingelheim, Germany, a small town located near Mainz in the beautiful wine region of Rhinehessen (Rhineland-Palatinate).

 

I have been working for 20 years, mainly as a functional consultant in the area of accounting and controlling. But I have a dark side... I love programming.

In recent years I have changed my programming style from a procedural to a more object-oriented style. It was actually quite a hard transition and took me a while to fully understand the concepts behind it.

 

Over the years I realized that my developments were still not well designed, even if they were object-oriented. Every time new requirements came in, I had to change a lot of the existing code, and I wanted to be sure that the new code did not break the existing code. In addition, I was looking for methods that let you develop on your own system rather than in the final customer's environment, while still being able to test the code with proper test data. After researching and studying for a while, I found what is known as the SOLID design principles and test-driven development. Since then I try to follow those principles whenever possible.

 

There is one concept which I discovered recently and thought was worth a try. The original design is described here: https://www.cuttingedge.it/blogs/steven/pivot/entry.php?id=91. I have transformed the examples from there into ABAP and thought others might be interested in the design as well. So here it is.

 

The basic idea behind the whole concept is that methods should either perform an action (commands) or return data (queries), but not both. This concept is also known as Command-Query Separation (CQS). We will look at the command side for now.

 

We first need an Interface which represents our command.

 

interface lif_command.
endinterface.

 

Well, this doesn't look too complicated. The next thing we need is the interface for the command handler.

 

 

interface lif_cmd_handler.
  methods handle importing i_command type ref to lif_command.
endinterface.

 

We are now able to decouple the business logic from the data. The command handler operates on the command that we provide. Using interfaces gives us a lot of flexibility, which we will see later.

 

We can now create our concrete command. As you can see, our command is a pure data object without any logic (setter and getter methods would be possible). We could also use a data reference for our data, but I decided to use objects for the data representation as well.

 

 

class lcl_move_customer_cmd definition.
  public section.
    interfaces lif_command.
    data customer_id type i.
    data new_adress  type string.
endclass.
class lcl_move_customer_cmd implementation.
endclass.

 

Here is the implementation of our concrete command handler.

 

 

class lcl_move_customer_cmd_handler definition.
  public section.
    interfaces lif_cmd_handler.
endclass.
class lcl_move_customer_cmd_handler implementation.
  method lif_cmd_handler~handle.
    data cmd type ref to lcl_move_customer_cmd.
    cmd ?= i_command.
    cl_demo_output=>write( 'I handled the command.' ).
    cl_demo_output=>write( 'Customer:' && ` ` && cmd->customer_id ).
    cl_demo_output=>write( 'Adress:' && ` ` && cmd->new_adress ).
  endmethod.
endclass.

 

The command handler receives our command and processes the data. The only thing that is not nice here is that we have to cast to the concrete command type ( cmd ?= i_command ). The original C# code shows some type checking using generics, which is not available in ABAP - at least I haven't found anything comparable. So, if somebody knows a possibility to avoid the cast here, please let me know.
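If we have to live with the cast, we can at least fail gracefully. The following is just a sketch of a guarded downcast (IS INSTANCE OF needs release 7.50; on older releases you can catch CX_SY_MOVE_CAST_ERROR instead):

```abap
method lif_cmd_handler~handle.
  data cmd type ref to lcl_move_customer_cmd.
  " Guard the downcast instead of letting it dump on a wrong command type
  if i_command is instance of lcl_move_customer_cmd.
    cmd ?= i_command.
  else.
    " A command this handler does not know - ignore or raise an error
    return.
  endif.
  " ... process cmd as shown above ...
endmethod.
```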

 

We now need the controller, which knows how to operate.

 

 

class lcl_controller definition.
  public section.
    methods constructor   importing i_handler type ref to lif_cmd_handler.
    methods move_customer importing i_customer_id type i i_new_adress type string.
  private section.
    data handler type ref to lif_cmd_handler.
endclass.
class lcl_controller implementation.
  method constructor.
*   Constructor injection. Assert ensures that the controller works correctly
    assert i_handler is bound.
    me->handler = i_handler.
  endmethod.
  method move_customer.
    data(cmd) = new lcl_move_customer_cmd( ).
    cmd->customer_id = i_customer_id.
    cmd->new_adress  = i_new_adress.
*   Passing the data object to the handler
    me->handler->handle( cmd ).
    cl_demo_output=>display( ).
  endmethod.
endclass.

 

The handler is injected into the controller via constructor injection.

 

 

class lcl_main definition.
  public section.
    class-methods start.
endclass.
class lcl_main implementation.
  method start.
    data(handler) = new lcl_move_customer_cmd_handler( ).
    data(controller) = new lcl_controller( handler ).
    controller->move_customer( i_customer_id = '12345' i_new_adress = 'The new adress' ).
  endmethod.
endclass.

 

Now we come to the most interesting part. As our command handler is based on an abstraction, the command handler interface, we can simply create a Decorator for the handler. This means we can add additional functionality like validation, or implement cross-cutting concerns like logging, without changing the existing code.


It is just a simple example, but I think you can imagine how powerful this can be.

 

 

class lcl_customer_cmd_decorator definition.
  public section.
    interfaces lif_cmd_handler.
    methods constructor importing i_decorated_handler type ref to lif_cmd_handler.
  private section.
    data decorated_handler type ref to lif_cmd_handler.
endclass.
class lcl_customer_cmd_decorator implementation.
  method constructor.
*   Constructor injection.
    assert i_decorated_handler is bound.
    me->decorated_handler = i_decorated_handler.
  endmethod.
  method lif_cmd_handler~handle.
    data cmd type ref to lcl_move_customer_cmd.
    cmd ?= i_command.
    cl_demo_output=>write( 'I decorated the command before.' ).
    cl_demo_output=>write( 'Customer ID validated.' ).
    me->decorated_handler->handle( cmd ).
    cl_demo_output=>write( 'I decorated the command after.' ).
    cl_demo_output=>write( 'Save Customer ID with new adress.' ).
  endmethod.
endclass.

 

All we have to do is set up the handler correctly with the decorated handler. For details about the Decorator design pattern, you can look here.

 

 

class lcl_main implementation.
  method start.
    data(handler) = new lcl_customer_cmd_decorator( new lcl_move_customer_cmd_handler( ) ).
    data(controller) = new lcl_controller( handler ).
    controller->move_customer( i_customer_id = '12345' i_new_adress = 'The new adress' ).
  endmethod.
endclass.

 

The output will now be:

output.PNG

 

A while after I discovered the original blog post, I noticed that there is already a formal description of the concept: it is described as the Command Processor pattern.

 

Command Processor.gif

 

 

 

I hope you find this blog useful and that it gave you some inspiration. If you find something that can be improved or enhanced, please let me know. Next time I will show an example for the query part.

 

The complete source code for the Example report can be found here.

 

Best regards,

Tapio

All Consultants are Evil doers meant to KILL your system


Do I have your attention?   YES!   The title worked.

 

So here I go...   I have been both the evil consultant and the customer.   I am the customer right now. 

 

Quickly - for me - why am I writing this blog?   I've left one too many comments and would love to generate a discussion here.

 

Let's start with the Evil Consultant (EV for short; the in-house developer is IH for short).

 

IH:  Hi Mr. Consultant, I'm so happy to work with you.   (Meaning: why on Earth would they bring in a consultant?)

EV:   Sure.   I'm happy to work with you too.    (Meaning - you idiot, I'll tell you what to do.)

IH: So have you had time to look over the new project?   Do you have any questions for me?  (Meaning: of course you do, you don't know our system or our way of doing things.)

EV:  <Getting a little mad>  Of course I have.   The problem is that I'll have to rewrite all the old code.   It is inefficient and doesn't use objects.   So I had to double my hours.   (Meaning: your programs are horrible.)

IH:  Yes, that was one of the early programs, but can't you just use the basics?   We don't have anyone on staff who knows objects.   <Kind of worried now.   What is this idiot thinking? We can't support that.>

EV:   Well, you'll have to learn then.   He goes on to program a complete system.   It runs great in development, so he is on his way to his next client.

In-house developer:  <Project has gone to production>   Oh no!   It's crashing!   We don't know objects, we have no idea what he did, and this is critical to our business.   No holiday for me and some very late nights fixing this while learning basic objects.

 

Sound familiar?  Maybe with a different twist?  The consultant stays until it goes to production, but when he leaves, it breaks?   All of a sudden your production system is running too slow?   Oh boy, so not good.  I bet we all have some stories to tell.

 

The good consultant (GC) / the Evil in-house developer: (EH)

 

EH:  Hi Mr. Consultant.   I'm so happy to work with you.   (Meaning I hate you, and wish you were gone.  You idiot.)

GC:  Hi, I'm happy to work with you as well.  (meaning this could be a fun job)

EH:  Well, here are all of our standards; make sure you follow them exactly.   I will be watching you.   I also code-review all your work, and nothing will slide by me.

GC:   Well, I've noticed that you are using FOR ALL ENTRIES and are not using joins.    I think I can show you where some joins are more effective.   I could also help you learn something about objects while I'm here.   I'll make some time for it.

EH:  What did I tell you?!!!  (Idiot)  Just follow our standards and we will get along.

 

Sound familiar?   I bet it does.

 

So what would I suggest?

 

In my infinite wisdom - you know, since I know everything (wha ha ha - that's my evil laugh) - objects are actually an older technology.  New programming techniques are coming quickly with HANA.  I'm an in-house developer.   So if/when we bring in outside consultants, I want to pick their brains.   Will I let them do everything they want?   Not my call.   It's my boss's call, but I'm guessing we will have to compromise a bit.   Also, if they are leaving me any code, I want to sit with them and completely understand it.   That can't happen without some give and take.   We don't have to be friends, but it would be nice if we respect each other a bit.

 

OK - now, there are some truly bad consultants out there.   Their work is below even the lowest standards.  My advice: if you aren't the boss, get with him immediately.   If you are the boss, send them packing.

 

Some horrible customers (in-house developers) - well, it is up to you, Mr. Consultant, if you want to put up with it or not.   You can always look for a different gig.  Or you can try to make those horrible customers better.  (Not always possible.)

 

Here's something to think about.  Your consultant is amazing; however, she is horrible with people skills.   Would you keep her around?

 

Last but not least - some assumptions:

 

  • In-house developers do not keep up with the latest ways of coding.   Not always true, but remember: we have to get things out quickly, and sometimes it is easier to use something we know than something we are still learning.
  • Consultants are inflexible.   Well, they wouldn't get very many jobs if that were true.   They are just like the in-house programmers.
  • In-house developers are inflexible and demand that consultants follow standards.  Not true; they may say that, but give them (us) a reason to do something different that is better, and we will.   We will demand to understand it.
  • Consultants are horrible and their code is inefficient.   Some are - get rid of them.  Some aren't.   Some are exceptional.   Remember, they probably have a higher ratio of people staying up to date with the newest technology.  Of course, they have a higher ratio of new developers as well.   Be careful.

 

And so - now have fun with this one!   What is the worst and the best that has happened to you?   I may start the comments too!  Of course, leave names out.  And never post something that you wouldn't want your employer reading.

 

OH - BTW, I'm great at ranting.   But here is my point: if you are a consultant, don't assume your customer doesn't know anything.   If you are a customer, be flexible.   And really, this was just a rant, and I'm leaving it open for others to rant as well.

How to control a transport request's description - Part I


When we are prompted to create a new transport request, we are able to fill the description field freely.

 

But when you have a lot of transport requests in the development system, it can be hard and time-consuming to find a limited number of specific requests (based on a client reference, a project name, a project manager, a unit of work, ...).

 

One good solution would be to force all developers in the company to respect a certain format for the description when creating any new transport request.

 

So we may want to define some kind of pattern to be respected: basically some fields (username, client reference, SAP module, team leader, project, ...) and a separator.

 

 

  • Control the description entered before creating the request

 

In order to control the request description entered by the user, we have to implement the BAdI CTS_REQUEST_CHECK, specifically the method CHECK_BEFORE_CREATION.

 

1.PNG

 

The parameter TEXT of the method CHECK_BEFORE_CREATION contains the description entered by the user while creating the request.

 

2.PNG

 

 

So, we just have to check the TEXT variable (with some SPLIT, CONCATENATE, ... operations) and ensure it meets our predefined requirement.
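A minimal sketch of such a check could look like the following. The expected pattern, the separator, and the message text are assumptions for illustration only:

```abap
method if_ex_cts_request_check~check_before_creation.
  data lt_parts type standard table of string.
  " Assumed pattern: PROJECT_MODULE_USER_Free text, e.g. PRJ1_SD_JDOE_Pricing
  split text at '_' into table lt_parts.
  if lines( lt_parts ) < 4.
    " Raising the CANCEL exception prevents the request from being created
    message 'Description must follow PROJECT_MODULE_USER_Text' type 'E'
            raising cancel.
  endif.
endmethod.
```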

 

Part II is coming, in which I will present a more interesting way to achieve this requirement by implementing an enhancement and creating a custom screen.

 

I hope this post is useful to all.

SAP Suspend and Release Jobs during SAP Upgrade


When you are doing an upgrade of an SAP system, you need to cancel or suspend jobs that are in released status in transaction SM37. But you may realize that there are too many released jobs to delete manually before the upgrade and then reschedule manually afterwards.


Suspend released jobs before the upgrade


By running report BTCTRNS1 via transaction SE38 before starting the upgrade, jobs that have already been released are set to the status 'Suspended for upgrade'.

• The status of all released jobs is set to a non-standard status, that is, a status other than planned, released, active, finished or cancelled.


Re-releasing the jobs after the upgrade

 

To do this, run report BTCTRNS2 after the upgrade has completed successfully, also via transaction SE38.

• The jobs that BTCTRNS1 set to a special status are restored to their original status (released).
• All jobs suspended before the upgrade with BTCTRNS1 are then displayed as 'Released' in the job overview (transaction SM37). This also affects RDDIMPDP jobs; there should not be more than one RDDIMPDP running in any client.
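Both reports can also be triggered programmatically, for instance from a pre/post-upgrade utility program (a sketch; running them directly via SE38 is the usual way):

```abap
" Before the upgrade: set all released jobs to the suspended status
submit btctrns1 and return.

" After the upgrade: restore the suspended jobs to 'Released'
submit btctrns2 and return.
```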


SALV Tree to Excel (xlsx)


This document explains how to convert a SALV Tree, created using the CL_SALV_TREE class, to an Excel file. There is a standard method to convert a SALV Table, created using CL_SALV_TABLE, into an Excel file, but a similar method isn't available for SALV Tree. There are other solutions available, but this one, I felt, has the lowest amount of custom code (~100 lines).

 

Approach

If you look closely at the SALV Tree output, you may see that the tree is essentially a table itself.

2015-07-24 15_25_46-SAP.jpg

2015-07-24 15_26_26-Tree.jpg

To turn a table into a tree, the only additions required are groupings and indentations. This is the concept I have used here to create the Excel file: create a table and then add groupings and indentations to it.

 

Solution

The input for this development is a CL_SALV_TREE object. The code below creates a simple object with the name LR_TREE, using common tables. This portion of the code isn't part of the solution - it is just used to get 'test data' to test the solution.

 

data:     begin          of   ls_mara,

          matnr          type matnr,

          maktx          type maktx,

          end            of   ls_mara,

          lt_mara        like standard table of ls_mara,

 

 

          begin          of   ls_marc,

          matnr          type matnr,

          werks          type marc-werks,

          name1          type t001w-name1,

          end            of   ls_marc,

          lt_marc        like standard table of ls_marc,

 

 

          begin          of   ls_mard,

          matnr          type matnr,

          werks          type marc-werks,

          lgort          type mard-lgort,

          lgobe          type t001l-lgobe,

          end            of   ls_mard,

          lt_mard        like standard table of ls_mard,

 

 

          begin          of   ls_mchb,

          matnr          type matnr,

          werks          type marc-werks,

          lgort          type mard-lgort,

          charg          type mchb-charg,

          clabs          type labst,

          cumlm          type umlmd,

          cinsm          type insme,

          ceinm          type einme,

          cspem          type speme,

          cretm          type retme,

          end            of   ls_mchb,

          lt_mchb        like standard table of ls_mchb,

 

 

          begin          of   ls_output,

          area           type string,

          clabs          type labst,

          cumlm          type umlmd,

          cinsm          type insme,

          ceinm          type einme,

          cspem          type speme,

          cretm          type retme,

          end            of   ls_output,

          lt_output      like standard table of ls_output,

          ls_marakey     type lvc_nkey,

          ls_marckey     type lvc_nkey,

          ls_mardkey     type lvc_nkey,

 

 

          lr_table       type ref to cl_salv_table,

          lr_columns     type ref to cl_salv_columns_tree,

          lr_tree        type ref to cl_salv_tree,

          lr_node        type ref to cl_salv_node,

          lt_nodes       type salv_t_nodes,

          ls_node        like line of lt_nodes.

 

 

select matnr maktx

       up to 50 rows

       from makt

       into table lt_mara

       where spras = sy-langu.

 

 

select matnr a~werks name1

       from marc as a join t001w as b

         on a~werks = b~werks

       into table lt_marc

       for all entries in lt_mara

       where matnr = lt_mara-matnr.

 

 

select matnr a~werks a~lgort lgobe

       from mard as a join t001l as b

         on a~lgort = b~lgort

        and a~werks = b~werks

       into table lt_mard

       for all entries in lt_marc

       where matnr = lt_marc-matnr

         and a~werks = lt_marc-werks.

 

 

select matnr werks lgort charg clabs cumlm cinsm ceinm cspem cretm

       from mchb

       into table lt_mchb

       for all entries in lt_mard

       where matnr = lt_mard-matnr

         and werks = lt_mard-werks

         and lgort = lt_mard-lgort.

 

 

cl_salv_tree=>factory(

  importing

    r_salv_tree = lr_tree

  changing

    t_table      = lt_output ).

 

 

lr_columns = lr_tree->get_columns( ).

lr_columns->set_optimize( abap_true ).

loop at lt_mara into ls_mara.

  clear ls_output.

  concatenate ls_mara-matnr ls_mara-maktx into ls_output-area separated by space.

  loop at lt_mchb into ls_mchb where matnr = ls_mara-matnr.

    add-corresponding ls_mchb to ls_output.

  endloop.

  lr_node = lr_tree->get_nodes( )->add_node( related_node = space

                                data_row     = ls_output

                                relationship = cl_gui_column_tree=>relat_last_child ).

  ls_marakey = lr_node->get_key( ).

 

 

  loop at lt_marc into ls_marc where matnr = ls_mara-matnr.

    clear ls_output.

    concatenate ls_marc-werks ls_marc-name1 into ls_output-area separated by space.

    loop at lt_mchb into ls_mchb where matnr = ls_mara-matnr and werks = ls_marc-werks.

      add-corresponding ls_mchb to ls_output.

    endloop.

    lr_node = lr_tree->get_nodes( )->add_node( related_node = ls_marakey

                                  data_row     = ls_output

                                  relationship = cl_gui_column_tree=>relat_last_child ).

    ls_marckey = lr_node->get_key( ).

    loop at lt_mard into ls_mard where matnr = ls_mara-matnr and werks = ls_marc-werks.

      clear ls_output.

      concatenate ls_mard-lgort ls_mard-lgobe into ls_output-area separated by space.

      loop at lt_mchb into ls_mchb where matnr = ls_mara-matnr and werks = ls_marc-werks and lgort = ls_mard-lgort.

        add-corresponding ls_mchb to ls_output.

      endloop.

      lr_node = lr_tree->get_nodes( )->add_node( related_node = ls_marckey

                                    data_row     = ls_output

                                    relationship = cl_gui_column_tree=>relat_last_child ).

      ls_mardkey = lr_node->get_key( ).

      loop at lt_mchb into ls_mchb where matnr = ls_mara-matnr and werks = ls_marc-werks and lgort = ls_mard-lgort.

        clear ls_output.

        ls_output-area = ls_mchb-charg.

        move-corresponding ls_mchb to ls_output.

        lr_node = lr_tree->get_nodes( )->add_node( related_node = ls_mardkey

                                      data_row     = ls_output

                                      relationship = cl_gui_column_tree=>relat_last_child ).

      endloop.

    endloop.

  endloop.

endloop.

 

 

lr_tree->display( ).

That builds our 'test data' tree.
2015-07-24 15_25_46-SAP.jpg

 

Ideally, you should create the routine that converts the SALV Tree to an Excel file as a global method or function module, so that many programs can reuse it. That means the common routine has to work with the SALV Tree object and extract the data for the output, titles, headers etc. from it. So I am regenerating the output table here using the SALV Tree object.

 

The code starts here….

 

constants:lc_xlspace     type c value ' '. "Hexa value for this field should be 0030

data:     lv_level       type i,

          lv_xlsx        type xstring,

          lt_table       type ref to data,

          lr_data        type ref to data.

field-symbols: <data>  type any,

               <table> type standard table,

               <str>   type any.

lt_nodes = lr_tree->get_nodes( )->get_all_nodes( ).

loop at lt_nodes into ls_node.

  lr_node = ls_node-node.

  clear lv_level.

  do.

    try.

        lr_node = lr_node->get_parent( ).

        add 1 to lv_level.

      catch cx_salv_msg.

        exit.

    endtry.

  enddo.

  lr_data = ls_node-node->get_data_row( ).

  assign lr_data->* to <data>.

  if <table> is not assigned.

    create data lt_table like standard table of <data>.

    assign lt_table->* to <table>.

  endif.

  assign component 1 of structure <data> to <str>.

  subtract 1 from lv_level.

  do lv_level times.

    concatenate lc_xlspace <str> into <str>.

  enddo.

  append <data> to <table>.

endloop.

cl_salv_table=>factory(

  importing

    r_salv_table = lr_table

  changing

    t_table = <table> ).

lr_table->display( ).

 

2015-07-24 15_24_55-Tree.jpg

2015-07-24 15_28_58-Microsoft Excel - BatchQty.jpg

At this point, the table has indentations, but no groupings. So the next step is to add those to the table. Groupings cannot be added to the internal table directly; it has to be done in the XML file. Excel needs three identifiers to understand row groupings.

  1. The row element in the file should have an attribute outlineLevel with the value of the row's level.
  2. The sheetFormatPr element in the file should have an attribute outlineLevelRow with the number of levels used in the file.
  3. The sheetPr element in the file should have a child node named outlinePr, with an attribute summaryBelow set to false. The Excel file created by SAP doesn't have a sheetPr element, so this routine adds the new element to the file.
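Put together, the three identifiers result in a worksheet XML roughly like this fragment (unrelated attributes omitted; cell values elided):

```xml
<worksheet>
  <sheetPr>
    <outlinePr summaryBelow="false"/>   <!-- identifier 3: group symbol above the group -->
  </sheetPr>
  <dimension ref="A1:G42"/>
  <sheetFormatPr outlineLevelRow="3"/>  <!-- identifier 2: deepest level used -->
  <sheetData>
    <row r="2" outlineLevel="1" hidden="true"><!-- identifier 1: level per row --></row>
  </sheetData>
</worksheet>
```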


lv_xlsx = lr_table->to_xml( if_salv_bs_xml=>c_type_xlsx ).


data:     lr_zip         type ref to cl_abap_zip,

          lr_xlnode      type ref to if_ixml_node,

          lr_xldimension type ref to if_ixml_node,

          lr_xlsheetpr   type ref to if_ixml_element,

          lr_xloutlinepr type ref to if_ixml_element,

          lv_file        type xstring,

          lr_file        type ref to cl_xml_document,

          lr_xlrows      type ref to if_ixml_node_list,

          lr_xlrow       type ref to if_ixml_element,

          lr_xlformat    type ref to if_ixml_element,

          lr_xlworksheet type ref to if_ixml_element,

          lv_tabix       type i,

          lv_maxlevel    type i,

          lv_levels      type string.

 

create object lr_zip.

lr_zip->load( lv_xlsx ).

 

*Get Worksheet XML file

lr_zip->get( exporting name = 'xl/worksheets/sheet1.xml'

             importing content = lv_file ).

 

create object lr_file.

lr_file->parse_xstring( lv_file ).

 

*Row elements are under SheetData

lr_xlnode = lr_file->find_node( 'sheetData' ).

 

lr_xlrows = lr_xlnode->get_children( ).

 

do lr_xlrows->get_length( ) times.

  lv_tabix = sy-index - 1.

  lr_xlrow ?= lr_xlrows->get_item( lv_tabix ).

 

*Find the same node in the SALV Tree object

  read table lt_nodes into ls_node index lv_tabix.

  if sy-subrc eq 0.

    lr_node = ls_node-node.

*Find the level of the node

    clear lv_level.

    do.

      try.

          lr_node = lr_node->get_parent( ).

          add 1 to lv_level.

        catch cx_salv_msg.

          exit.

      endtry.

    enddo.

    subtract 1 from lv_level.

    if lv_level ne 0.

      lv_levels = lv_level.

      if lv_level > lv_maxlevel.

        lv_maxlevel = lv_level.

      endif.

      condense lv_levels.

*Assign the level to row

      lr_xlrow->set_attribute( name = 'outlineLevel' value = lv_levels ).

      lr_xlrow->set_attribute( name = 'hidden' value = 'true' ).

    endif.

  endif.

enddo.

 

*Set maximum levels used in the sheet

lv_levels = lv_maxlevel.

condense lv_levels.

lr_xlformat ?= lr_file->find_node( 'sheetFormatPr' ).

lr_xlformat->set_attribute( name = 'outlineLevelRow' value = lv_levels ).

 

*Create new element in the XML file
lr_xlworksheet ?= lr_file->find_node( 'worksheet' ).
lr_xldimension ?= lr_file->find_node( 'dimension' ).
lr_xlsheetpr = cl_ixml=>create( )->create_document( )->create_element( name = 'sheetPr' ).
lr_xloutlinepr = cl_ixml=>create( )->create_document( )->create_element( name = 'outlinePr' ).
lr_xlsheetpr->if_ixml_node~append_child( lr_xloutlinepr ).
lr_xloutlinepr->set_attribute( name = 'summaryBelow' value = 'false' ).
lr_xlworksheet->if_ixml_node~insert_child( new_child = lr_xlsheetpr ref_child = lr_xldimension ).

*Create Xstring file for the XML, and add it to Excel Zip file
lr_file->render_2_xstring( importing stream = lv_file ).
lr_zip->delete( exporting name = 'xl/worksheets/sheet1.xml' ).
lr_zip->add( exporting name = 'xl/worksheets/sheet1.xml'
content = lv_file ).

lv_xlsx = lr_zip->save( ).

 

LV_XLSX is the output file. You may download it to your desktop and see what it looks like. With these changes, the Excel file now shows groupings and hierarchy.

 

2015-07-24 15_27_22-Microsoft Excel - BatchQty.jpg

Displaying header details in delivery while creating outbound delivery using BAPI_OUTB_DELIVERY_CREATE_STO


Scenario-

If an outbound delivery for an STO is created using BAPI_OUTB_DELIVERY_CREATE_STO, it is not possible to populate values in a few fields of the delivery document header (e.g. to display values in the Administration tab and Shipment tab) by simply passing values in the export parameters and table parameters of the BAPI.

import.PNG

tables.png

Solution-

For the values to be populated in the delivery header, the BAdI LE_SHP_DELIVERY_PROC has to be implemented, and its method FILL_DELIVERY_HEADER is used to populate the delivery details.

The method is called during delivery creation, each time a new delivery header is filled with data. It can be used to populate your own delivery header fields.


Solution implementation-

For displaying the desired values in the delivery header tabs of the delivery document, those values need to be populated in CS_LIKP.

If these values come through an interface (IDoc), they should be exported from the corresponding function module or program and imported inside the method FILL_DELIVERY_HEADER using the IMPORT statement. These values are in turn used to fill CS_LIKP to populate the delivery header.

The structure of CS_LIKP is same as LIKP.

Using the method FILL_DELIVERY_HEADER, one can update header fields that cannot be updated using the parameters of BAPI_OUTB_DELIVERY_CREATE_STO.

For example, for displaying a value in 'Ext. delivery' in the Administration tab and the 'TransID code' in the Shipment tab, populate CS_LIKP-LIFEX and CS_LIKP-TRAID. The data can be viewed using VL03N --> click on the header details icon.

headericon.PNG

adminisstration tab.PNG

shipment.PNG
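A minimal implementation sketch of the BAdI method could look as follows. The import of the interface data via a memory ID is an assumption for illustration; adapt it to how your values actually arrive:

```abap
method if_ex_le_shp_delivery_proc~fill_delivery_header.
  data: lv_lifex type likp-lifex,
        lv_traid type likp-traid.
  " Values exported earlier (e.g. from the IDoc processing function module)
  import lv_lifex lv_traid from memory id 'ZDLV_HDR'.
  if sy-subrc = 0.
    cs_likp-lifex = lv_lifex.   "External delivery -> Administration tab
    cs_likp-traid = lv_traid.   "Means-of-transport ID -> Shipment tab
  endif.
endmethod.
```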

Similarly, for item-level data, the method FILL_DELIVERY_ITEM can be used.

Upload and download Code Inspector variants via XML


Code Inspector variants are technically saved as ABAP programs using data clusters:

variant.png

 

This makes it difficult to move the data between systems, and to tell what exactly a variant contains without installing it. The project https://github.com/larshp/upDOWNci tries to address this issue by allowing upload and download of Code Inspector variants in XML format. It analyzes the ABAP code inside each check to determine the structure of the check settings and exports it to XML. The checks must use class attributes with direct data dictionary types for this to work; most of the checks use this approach, but expect errors and bugs in the program.
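The attribute analysis the project relies on can be pictured with RTTI: given a check class, its instance attributes and their types can be listed like this (a simplified sketch of the idea, not the actual upDOWNci code; the class name is just an example check):

```abap
data lo_class type ref to cl_abap_classdescr.

" Describe a check class and list its instance attributes
lo_class ?= cl_abap_classdescr=>describe_by_name( 'CL_CI_TEST_FREE_SEARCH' ).

field-symbols <attr> like line of lo_class->attributes.
loop at lo_class->attributes assigning <attr>
    where is_class = abap_false.
  " <attr>-name and <attr>-type_kind tell us how to serialize the setting
  write: / <attr>-name, <attr>-type_kind.
endloop.
```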

 

Start the program via report ZUPDOWNCI,

selection.png

and enter the variant. This will upload or download the variant to/from the frontend PC. Leave the username blank for global variants.

 

Downloaded XML files will look like the following,

example.png

 

The upload shows a log for the imported check variant. The check variant is created if it does not exist, and additional settings are added to an existing check variant if the check version in the XML matches the check version in the system.

result.png

 

Project page: https://github.com/larshp/upDOWNci

Issue list: https://github.com/larshp/upDOWNci/issues

Creation of a Folder on Application server via Programming, Is it Possible ?


Background: I had a requirement to create a folder under a specified path on the application server on a monthly basis and place a file in that folder.

 

Initially I thought it would be very simple, as we had used OPEN DATASET statements to access paths on the application server. I believed that if we passed the new folder name concatenated with the path, the folder would be created automatically. I made the necessary changes to the program, but only at testing time did I realize that the OPEN DATASET statement failed with the reason "Invalid path".

 

Searching the internet for my requirement, "creation of a folder on the application server", I found information about external commands, which make it possible to create a folder on the application server. I got very little support from my Basis team on external commands, so I searched again for how to create an external command and how to use it.

 

Then I came across function module SXPG_COMMAND_EXECUTE, which can be called with the name of an external command and the parameters to be passed to it. I also picked up some information about external commands that I thought I would share here.

 

External command – An external command operates at OS level, calling a batch file on Windows or a shell script on UNIX (this information is limited to my knowledge), and can be created in SAP via transaction SM69 or SM49.

 

Eg: External command – ZCREATE_FOLDER;

      OS – Windows NT;

      OS command – C:\Path\batch.bat;

      Parameters for OS command – &1

 

Batch file / shell file – These files contain simple DOS commands or shell commands.

 

Eg: To create a folder with a dynamic parameter, the batch file can contain a command similar to:

      MKDIR C:\PATH\%1

 

Where C:\Path is a known path, and

%1 is parameter 1, whose value is passed from the external command.
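Putting the pieces together, the ABAP side might look like the following sketch. The external command name ZCREATE_FOLDER comes from the example above; the folder name passed as the parameter is an assumption.

```abap
DATA: lv_status   TYPE extcmdexex-status,
      lt_protocol TYPE TABLE OF btcxpm.

" ZCREATE_FOLDER is the external command defined in SM69; the value in
" ADDITIONAL_PARAMETERS replaces &1 in the OS command, so the batch
" file receives it as %1 and creates C:\PATH\NEW_FOLDER_2016_01.
CALL FUNCTION 'SXPG_COMMAND_EXECUTE'
  EXPORTING
    commandname           = 'ZCREATE_FOLDER'
    additional_parameters = 'NEW_FOLDER_2016_01'
  IMPORTING
    status                = lv_status
  TABLES
    exec_protocol         = lt_protocol
  EXCEPTIONS
    no_permission         = 1
    command_not_found     = 2
    parameters_too_long   = 3
    security_risk         = 4
    OTHERS                = 5.

IF sy-subrc <> 0 OR lv_status <> 'O'.
  " Command failed or was rejected; lt_protocol holds the OS output
  " and can be written to an application log for analysis.
ENDIF.
```

After the folder exists, the subsequent OPEN DATASET with the new path should no longer fail with "Invalid path".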

SAP ArchiveLink Invoices check for SD


With the advent of electronic invoices, governments started to require that companies store the printed invoice documents for a determined period of time. So, some companies began to use the SAP ArchiveLink solution to store the invoices.

But over the years, some companies realized that some documents were archived more than once or, even worse, not archived at all. This happened because the companies had no mechanisms to ensure that all invoice documents were archived, and archived only once.

To close that gap, I created a new tool that enables companies to identify which documents are archived and how many times they are stored. You can visit my project page on GitHub for more details.
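The core of such a check can be sketched against the standard ArchiveLink link table. The query below is my own simplified illustration (assuming billing documents linked under business object VBRK and link table TOA01), not the actual code of the tool.

```abap
" Count ArchiveLink entries per billing document. A count greater
" than 1 means the invoice was archived more than once; billing
" documents in VBRK with no TOA01 entry were never archived.
SELECT object_id, COUNT(*) AS cnt
  FROM toa01
  WHERE sap_object = 'VBRK'
  GROUP BY object_id
  INTO TABLE @DATA(lt_archive_counts).
```

A complete check would also read the customizing (link tables TOA01–TOA03 can all be in use) and join against VBRK to find the unarchived documents.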
