Channel: ABAP Development

ABAP keyword syntax diagram


As a Fiori developer, I am currently reading this famous JavaScript book.

clipboard1.png

In this book, the following diagram is used to explain the JavaScript grammar in a very clear way.

clipboard2.png

And today I found that the ABAP help documentation also contains similar syntax diagrams to illustrate the grammar of each keyword.

 

Just open any ABAP report, select a keyword, press F1, and you will find "ABAP syntax diagrams".

clipboard3.png


Double-click on it and choose a keyword such as "APPEND" in the right-hand area:

clipboard4.png

Then the syntax diagram is opened. Click the small "+" icon to drill down.

clipboard5.png

Click the "?" icon to get the meaning of each legend used in the graph.

clipboard6.png

I hope this small tip helps ABAP newbies fall in love with ABAP.


Step by Step to generate ABAP code automatically using Code Composer


Today, while going through the SAP help for BRFplus, I came across an introduction to the ABAP Code Composer.

 

I would like to share with you a very simple example to demonstrate its logic.

clipboard1.png

How to find the above help document quickly? Just google the keywords "ABAP CODE COMPOSER" and click the first hit.

clipboard2.png

Below are the steps to generate ABAP code implementing a singleton pattern using the ABAP Code Composer.

 

1. Create a new program with type "INCLUDE":

clipboard3.png

Then paste the following source code into the include and activate it:

 

*---------------------------------------------------------------------*
*       CLASS $I_PARAM-class$ DEFINITION
*---------------------------------------------------------------------*
*       Instance pattern: SINGLETON
*---------------------------------------------------------------------*
CLASS $I_PARAM-class$ DEFINITION
@if I_PARAM-GLOBAL @notinitial
\ PUBLIC
@end
\ FINAL CREATE PRIVATE.
  PUBLIC SECTION.
    INTERFACES:
      $I_PARAM-interface$.
    CLASS-METHODS:
      s_get_instance
        RETURNING
          value(r_ref_instance) TYPE REF TO $I_PARAM-interface$
@if I_PARAM-exception @notinitial
        RAISING
          $I_PARAM-exception$
@end
\.
  PRIVATE SECTION.
    CLASS-DATA:
      s_ref_singleton TYPE REF TO $I_PARAM-interface$.
    CLASS-METHODS:
      s_create_instance
        RETURNING
          value(r_ref_instance) TYPE REF TO $I_PARAM-class$
@if I_PARAM-exception @notinitial
        RAISING
          $I_PARAM-exception$
@end
\.
ENDCLASS.                    "$I_PARAM-class$ DEFINITION
*---------------------------------------------------------------------*
*       CLASS $I_PARAM-class$ IMPLEMENTATION
*---------------------------------------------------------------------*
*       Instance pattern: SINGLETON
*---------------------------------------------------------------------*
CLASS $I_PARAM-class$ IMPLEMENTATION.
************************************************************************
*       METHOD S_CREATE_INSTANCE
*----------------------------------------------------------------------*
*       Constructs an instance of $I_PARAM-class$
*......................................................................*
  METHOD s_create_instance.
*    RETURNING
*      value(r_ref_instance) TYPE REF TO $I_PARAM-class$
@if I_PARAM-exception @notinitial
*    RAISING
*      $I_PARAM-exception$
@end
************************************************************************
@if I_PARAM-exception @notinitial
    DATA:
      l_ref_instance TYPE REF TO $I_PARAM-class$.
************************************************************************
    CREATE OBJECT l_ref_instance.
@slot object_construction
*   Construction of the object which can lead to $I_PARAM-exception$
@end
    r_ref_instance = l_ref_instance.
@else
    CREATE OBJECT r_ref_instance.
@end
  ENDMETHOD.                    "s_create_instance
************************************************************************
*       METHOD S_GET_INSTANCE
*----------------------------------------------------------------------*
*       Keeps track of instances of own class -> only one
*......................................................................*
  METHOD s_get_instance.
*    RETURNING
*      value(r_ref_instance) TYPE REF TO $I_PARAM-interface$
@if I_PARAM-exception @notinitial
*    RAISING
*      $I_PARAM-exception$
@end
************************************************************************
    IF s_ref_singleton IS NOT BOUND.
      s_ref_singleton = s_create_instance( ).
    ENDIF.
    r_ref_instance = s_ref_singleton.
  ENDMETHOD.                    "s_get_instance
ENDCLASS.                    "$I_PARAM-class$ IMPLEMENTATION

The strings wrapped in a pair of "$" characters, for example "$I_PARAM-class$", act as importing parameters of the code composer. During code generation, you must tell the code composer the actual class name to use in the generated code by passing it to this parameter.

 

This activated include will act as a code generation template. We now have the following importing parameters:

 

  • $I_PARAM-class$
  • $I_PARAM-global$
  • $I_PARAM-interface$
  • $I_PARAM-exception$

 

2. Create another driver program which calls the code composer API to generate the code with the help of the template include created in step 1. The complete source code of this program can be found in the attachment.

clipboard5.png

I just use cl_demo_output=>display_data( lt_tab_code ) to simply print out the generated source code.
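The driver program essentially hands the template include and the parameter values to the composer and receives the generated source as an internal table. Since the API is internal and the attachment contains the real code, the sketch below uses a hypothetical wrapper call (zcl_code_composer_demo, zst_composer_param and the generate method are illustrative placeholders, not the real API); only the general flow is shown:

```abap
" Hypothetical sketch - class, structure and method names are placeholders.
DATA lt_tab_code TYPE STANDARD TABLE OF string.

" Values that replace the $I_PARAM-...$ placeholders of the template
DATA(ls_param) = VALUE zst_composer_param(   " illustrative structure
  class     = 'ZCL_MY_SINGLETON'
  global    = abap_true
  interface = 'ZIF_MY_SINGLETON'
  exception = 'ZCX_MY_ERROR' ).

" Run the composer against the template include created in step 1
lt_tab_code = zcl_code_composer_demo=>generate(
  iv_template = 'ZINCLUDE_SINGLETON_TEMPLATE'
  is_param    = ls_param ).

" Print out the generated source code
cl_demo_output=>display_data( lt_tab_code ).
```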

 

In the output we see that all of the placeholders ( $XXXX$ ) in the template have been replaced with the hard-coded values we specified in the driver program.

clipboard6.png


Although the Google result shows that the code composer API is marked for SAP internal use only and thus must not be used in application code, I think we can still leverage it to build tools that improve our daily work efficiency.

clipboard7.png

Clarification on Secondary Indexes limitations on database tables


Here are a couple of frequently asked questions in the SCN forum:

1. How many secondary indexes can be created on a database table in SAP?

2. How many fields can be included in a secondary index (SAP)?

 

Seeing many threads on the above questions in the SCN forum marked as 'Answered' (correctly) with different answers, such as 9, 10 (1 primary and 9 secondary), 15, 16 (15 secondary, 1 primary), or no limit at all, I decided to test the limitations of secondary indexes myself.

 

So, to check, I created secondary indexes on table SFLIGHT.

 

1. How many Secondary Indexes can be created on a database table in SAP?

Ans. I created 18 secondary indexes, and the system did not object at 9, 10, 15, or even 16.

 

Capture.PNG

 

So I believe there is no limit on the number of secondary indexes that can be created on a database table in SAP. However, it is not at all recommended to create more than 5 secondary indexes on a database table.

 

2. How many fields can a secondary index contain?

 

To test this, I created a secondary index on table EKKO and assigned all of the table's fields (134) to the index. The system then raised an error message saying that a maximum of 16 fields can be assigned.

 

error.PNG

 

So a secondary index on a database table can contain a maximum of 16 fields. However, it is recommended that a secondary index should not exceed 4 fields.

 

 

> These are the points to remember before creating an index.


a. Create secondary indexes only for tables that you mainly read. Every time we update a database table, its indexes are updated as well. If a table receives hundreds of new or changed entries in a single day, avoid creating additional indexes on it.

b. An index should not have more than 4 fields, and the number of indexes should not exceed 5 per database table. Otherwise, the optimizer may choose the wrong index for a particular selection.

c. Place the most selective fields at the beginning of an Index.

d. Avoid creating an index on a field that is not always filled, i.e. if its value is initial (null) for most entries in the table.

 

> These are the points to remember while coding ABAP programs for effective use of indexes, i.e. to avoid a full table scan.

a. In the SELECT statement, always specify the condition fields in the same order as they appear in the index. The sequence is very important here.

b. If possible, use positive conditions such as EQ and LIKE instead of negative conditions such as NOT and NE.

c. The optimizer may stop using the index if you use OR conditions. Try to use the IN operator instead.

d. The IS NULL operator can cause a problem for the index, as some database systems do not store null values in the index structure.
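To make points a. to c. concrete, here is a small sketch assuming a hypothetical secondary index on EKKO consisting of the fields BUKRS and BSART, in that order (the index and the selection values are illustrative):

```abap
DATA lt_ekko TYPE STANDARD TABLE OF ekko.

" Condition fields in the same order as the (hypothetical) index BUKRS, BSART
SELECT * FROM ekko
  INTO TABLE lt_ekko
  WHERE bukrs = '1000'             " first index field, positive EQ condition
    AND bsart IN ('NB', 'UB').     " second index field, IN instead of OR
" Avoid negative conditions such as "bsart <> 'NB'": the optimizer
" may then ignore the index and fall back to a full table scan.
```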

 

 

Thanks and Regards,

Vijay Krishna G

Real life examples & use cases for ABAP 740 and CDS views


It has been a long time since I posted my previous blog, which drew more attention than I expected, and I was wondering what could come next. Luckily, I am working on an S/4HANA project and have the opportunity to try the new syntax options. In this blog I will compile some real-life examples and test samples of the new syntax options and CDS views, and try to explain why and how they are useful and why we should try to use them.

 

First of all, information about all of them can be found in the ABAP keyword documentation under the release-specific changes, as shown below. There are many changes; in this blog I will only briefly mention the ones I had the opportunity to use.

 

Some of the examples were created only to see what can be done, so they may not fully fit a business case.

 

Release specific changes branch in keyword documentation

1.png

 

1. Inline declarations

Field symbol with inline declaration

2.png


Data declaration

3.png

I have been coding in ABAP for a long time, and I can say it is really nice to avoid the necessity of going to the top of a block just to define something; using inline declarations is really practical and time-saving.
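Since the screenshots are not reproduced here, a minimal sketch of both variants (table and field names from the SAP flight data model):

```abap
" Internal table declared inline in the SELECT statement
SELECT * FROM sflight INTO TABLE @DATA(lt_flights) UP TO 10 ROWS.

" Field symbol declared inline at its first use
LOOP AT lt_flights ASSIGNING FIELD-SYMBOL(<ls_flight>).
  " Helper variable declared inline; its type is derived from the right side
  DATA(lv_price) = <ls_flight>-price.
ENDLOOP.
```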

 

2. Constructor expressions (thanks to Ahmet Yasin Aydın)


Constructor expression new

4.png

Value operator

5.png


 

Again, it is time-saving, gives better readability and helps us write shorter source code; needless to say, there are countless different usage options.
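A minimal sketch of both operators (the types and values are illustrative):

```abap
TYPES: BEGIN OF ty_flight,
         carrid TYPE c LENGTH 3,
         connid TYPE n LENGTH 4,
       END OF ty_flight,
       ty_flights TYPE STANDARD TABLE OF ty_flight WITH EMPTY KEY.

" VALUE operator: build a filled internal table in a single expression
DATA(lt_flights) = VALUE ty_flights( ( carrid = 'LH' connid = '0400' )
                                     ( carrid = 'AA' connid = '0017' ) ).

" NEW operator: create a data object and a typed reference in one step
DATA(lr_flight) = NEW ty_flight( carrid = 'LH' connid = '0402' ).
```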


3. Rule changes for joins

6.png

  7.png


The statements above are taken directly from the ABAP keyword documentation. They allow us to build more complex join statements: we can now use a restriction from another left-side table, and we can use fields from a left-side table in the WHERE condition, which is quite revolutionary. It is possible to build one big SELECT statement, which means some reports can now be built using only one SELECT statement, also with the help of other changes (literals, CASE and more) that can be found in the keyword documentation. I verified this by coding some reports in both styles and comparing the results: it is simpler to code and faster on HANA. Here comes the example.

 

Also, the restriction that only equality comparisons may be used in the ON condition has been removed for outer joins.


I excluded the field list from the select below, since it was really a big one.

Joins (some of the tables might better be connected using an inner join; this was created just to test left outer joins):

 

8.png

 

Where condition also contains fields from left side tables:

9.png
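In text form, a minimal sketch of these relaxed rules (using tables of the SAP flight data model; the selection values are illustrative):

```abap
" Left outer join with a non-equality comparison in the ON condition
" and a WHERE condition on a field of the left-side table.
SELECT s~carrid, s~connid, s~fldate, b~bookid
  FROM sflight AS s
  LEFT OUTER JOIN sbook AS b
    ON  b~carrid = s~carrid
    AND b~connid = s~connid
    AND b~fldate >= s~fldate      " non-equality comparison, now allowed
  WHERE s~price > 100             " restriction on the left-side table
  INTO TABLE @DATA(lt_result).
```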

 

4. Source code that can only be edited in Eclipse (ADT)

 

This one is not a syntax option, but it is something we need to know, so I wanted to add it to my list. Eclipse has been around for a long time, but until now we were able to edit every development object either in Eclipse or in SAP GUI (correct me if I am wrong). This has changed: there are now CDS views and AMDPs (ABAP Managed Database Procedures) that can only be edited in Eclipse. So if for any reason you need to develop these objects, you also need to have Eclipse on your PC, and it may be a good idea to start coding in Eclipse if you have not already.

 

Message if we try to edit AMDP in GUI:

10.png

 

Eclipse edit display:

 

11.png

 

5. CDS Views

 

After HANA we had different tools, such as analytical and calculation views, plus external views and database procedure proxies to read these views directly from ABAP. However, there are practical difficulties in using them if most of the development tasks in a project are handled by ABAP programmers: learning SQLScript, granting project team members authorizations at the DB level, and a different level of transport management, which can easily cause problems. CDS views can be a good alternative: they are managed in the application layer and have the same transport procedure as older ABAP development objects.


We keep searching for use cases for CDS views in our project. So far we have created master data views and tried to create some reusable CDS views which can be used by several reports, as can be seen below.


We also tried to convert some old logic (select data from different tables and merge them inside loops) into CDS views. Its performance is better, although we could not test it with really big data yet. I should also mention that it performs at the same level as the big Open SQL select shown in point 3.

 

An example view on material data; it can be used in SELECT statements in ABAP or viewed in SE16:

12.png
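Since the screenshot cannot be reproduced here, a hedged sketch of such a view (the view name, SQL view name and field selection are illustrative):

```abap
@AbapCatalog.sqlViewName: 'ZMATERIALV'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Material master data'
define view Z_Material_Data
  as select from mara
    inner join makt on makt.matnr = mara.matnr
                   and makt.spras = $session.system_language
{
  mara.matnr,  // material number
  mara.mtart,  // material type
  mara.matkl,  // material group
  makt.maktx   // material description
}
```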

 

6. CDS Union all example

This helped us simplify a case: there are different price tables with different structures, and several reports need to read price data. We designed one big structure with the necessary fields from the different tables, so now we can use this one view instead of 5 different tables wherever we need to read price data. I am only showing the first two tables, but 3 more tables are added with UNION ALL, and all of them can be read at once now.

 

13.png
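In DDL form, a hedged sketch of such a union view (the table and field names are illustrative placeholders, not our actual price tables):

```abap
@AbapCatalog.sqlViewName: 'ZPRICEV'
@AccessControl.authorizationCheck: #NOT_REQUIRED
@EndUserText.label: 'Unified price data'
define view Z_Price_Data
  as select from zprice_contract     // illustrative price table 1
{
  matnr as material,
  kschl as condition_type,
  kbetr as price_amount
}
union all
  select from zprice_standard        // illustrative price table 2
{
  matnr as material,
  kschl as condition_type,
  kbetr as price_amount
}
```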

Result for the view

 

14.png

 

There are many other changes to explore; for some it may take a long time to find a proper case to apply them. I would be happy to hear if you have also used some of the new syntax changes and how they made your life easier.

The Add-on Assembly Kit 5.00 is available


"If you develop industry-, company-, or country-specific enhancements to SAP solutions, the SAP Add-On Assembly Kit can help you plan and deliver those enhancements as software add-ons. The SAP Add-On Assembly Kit guarantees quality software development by using a standardized process flow from planning to delivering the add-on. The delivery tools work smoothly with SAP’s maintenance strategy, helping you integrate new developments into your existing environment and providing maintenance throughout the enhancement’s life cycle. The SAP Add-On Assembly Kit and its comprehensive documentation help ensure high-quality product development from the planning phase. The add-on tools also help you efficiently install, update, and maintain the enhancement."

 

see help.sap.com/aak

 

In other words, if you want to stop delivering your ABAP software via transports, you can request the AAK. You will get it via a separate contract.

 

Your ABAP software can even be uninstalled conveniently via the SAP tools SAINT/SPAM.

BSP application which adds external document (URL) to the purchase requisition



This blog explains how to create a URL attachment to a purchase requisition using a BSP application.

Prerequisites

  • Basic knowledge on BSP applications, OOABAP and HTML.

Creating URL's manually from SAP

We can create URL attachments manually by going to transaction ME51N and clicking the "Services for Object" button -> Create -> Create External Document (URL).

sap.png



A popup appears; enter the title and address and click the green tick shown in the screenshot below.

img2.png



Now the URL is saved in SAP.

img 3.png




Step by step procedure to create URL attachments using BSP application.

Step 1: Create a BSP application using transaction SE80: choose "BSP Application" from the drop-down and enter the name of the application.


Step 2: Right-click on the BSP application and create the controller.

img 4.png



Step 3: Create the controller class in the controller, as shown in the screenshot below.

img5.jpg

 



Step 4: Place the cursor on the DO_REQUEST method and click the Redefine button.

img7.png

 



Step 5: We are going to implement our logic within the DO_REQUEST method.


Here I am giving an overview of the procedure. For the complete code, see the attached "ABAP code document" in this blog.

Follow the below steps for URL attachments.


  i. Get all the form field values using the method "get_form_field"

    For example

              CALL METHOD request->get_form_field
                EXPORTING
                  name  = c_url
                RECEIVING
                  value = lv_url.

 

ii. Create a reference to the view (UI):

       r_obj_view = create_view( view_name = 'zcreateexturl_view.htm' ).

 

iii. To specify that the purchasing document is a purchase requisition, use the business object type BUS2105 and pass it as a parameter to the function module below.

 

iv. Use the standard function module "SO_FOLDER_ROOT_ID_GET" to get the folder root ID (all URL attachments are stored under this folder).

 

v. Pass the title and URL as parameters to the function module "SO_OBJECT_INSERT". This function module creates the object ID.


vi. To persist the changes, create the binary relation and commit. Now the title and URL are attached to the purchase requisition.

 

vii. Send the response back to the UI using the method set_attribute, indicating whether the attachment was created successfully or failed.

For Example :

             CALL METHOD r_obj_view->set_attribute
               EXPORTING
                 name  = c_message
                 value = lv_message.
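A hedged sketch of steps iii. to vi. (the variable names and the exact parameter handling are illustrative assumptions; lv_title, lv_url and lv_banfn are assumed to hold the form-field values read earlier; see the attached document for the complete, tested code):

```abap
DATA: ls_folder_id TYPE soodk,
      ls_obj_data  TYPE sood1,
      ls_object_id TYPE soodk,
      lt_objcont   TYPE STANDARD TABLE OF soli,
      ls_object    TYPE borident,
      ls_note      TYPE borident.

" iv. Get the folder root ID under which URL attachments are stored
CALL FUNCTION 'SO_FOLDER_ROOT_ID_GET'
  EXPORTING
    region    = 'B'
  IMPORTING
    folder_id = ls_folder_id.

" v. Insert the URL object: pass the title and the URL itself
ls_obj_data-objsns = 'O'.
ls_obj_data-objla  = sy-langu.
ls_obj_data-objdes = lv_title.                  " title from the form field
APPEND VALUE soli( line = |&KEY&{ lv_url }| ) TO lt_objcont.

CALL FUNCTION 'SO_OBJECT_INSERT'
  EXPORTING
    folder_id        = ls_folder_id
    object_type      = 'URL'
    object_hd_change = ls_obj_data
  IMPORTING
    object_id        = ls_object_id
  TABLES
    objcont          = lt_objcont.

" vi. Link the attachment to the purchase requisition (BUS2105) and commit
ls_object-objkey  = lv_banfn.                   " purchase requisition number
ls_object-objtype = 'BUS2105'.
CONCATENATE ls_folder_id-objtp ls_folder_id-objyr ls_folder_id-objno
            ls_object_id-objtp ls_object_id-objyr ls_object_id-objno
       INTO ls_note-objkey.
ls_note-objtype = 'MESSAGE'.

CALL FUNCTION 'BINARY_RELATION_CREATE_COMMIT'
  EXPORTING
    obj_rolea    = ls_object
    obj_roleb    = ls_note
    relationtype = 'URL'.
```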

 

 

Step 6: Right-click on the BSP application, create the page, and choose the radio button "View", as shown in the screenshot below.

img9.png

 

Click on the Layout tab and design the UI using HTML code.

Please find the HTML code in the attached "BSP Code document".


Step 7: Declare the page attributes as shown in the screenshot below. The values for these variables are sent from the controller.

img10.png



Step 8: Right-click on the BSP application and test it. You will be navigated to the browser.

           

***Note: Before you test it, make sure both the CONTROLLER and the VIEW are activated.***

 

Step 9: Give the purchase requisition number and hit enter.

img11.png




Step 10: The item details of the given purchase requisition number and two input fields will be displayed. Enter the title and URL and click the Submit button.

img12.png

 

 

Step 11: If you get the status message, your URL has been attached to the purchase requisition successfully.

img13.png



Step 12: Check whether the URL has been added in transaction ME51N.

img14.jpg

 

When you double-click on the title, the URL opens in the browser.

Hungarian beginner's course - a polemic scripture against Hungarian Notation


I love the atmosphere of constructive debate: lively and resolutely argued, but never personal. The following blog post was composed in this mood.

 

There is no way around it when auditing, maintaining or understanding other people's code: "lt_" stalks you! Everywhere! In SAP's coding. In customers' ABAP code and in their official developer guidelines. In SAP PRESS books from Rheinwerk Verlag, even though they also published SAP's official development guidelines (which tell us that this is bad coding); what was their editorial office hired for? I also think it's a bad idea, and I'm not alone with this opinion. I'm gonna tell you why, right now, in the following blog post.

 


Hungarian Notation, as it is called in most cases, was invented by Microsoft, which for most developers on the planet is reason enough for it to exist. That's the tale.

 

The truth is: the founder of Hungarian Notation was Charles Simonyi, a Hungarian developer at Microsoft (that is why it is called "Hungarian Notation"), who wrote an article about it; but its epidemic spread, through misunderstanding by masses of developers around the planet, was not his intention!

 

Following my main rule ("I don't like metaphors, I prefer to speak in pictures"), I will illustrate the problems with an example:

 

Using three indicators to identify a data type

 

Let's take a common data object name, lt_ekko. What does its name tell us? It tells us that it is a local table whose line type equals the well-known structure EKKO. To make a long story short: it tells us masses of redundant information.

 

1. The local/global indicator

 

For an ambitious software developer, global variables do not exist. We should not work with them; that is what SAP has told us for years, and they were and still are right. But why do their own employees permanently break this rule?

 

Developers working with other programming languages cannot believe that ABAPers still work with methods from before OO was invented. In a well-encapsulated environment, global data has no reason to exist, because it fundamentally conflicts with the object-oriented software development paradigm.

 

And, if this question may be allowed: what is the definition of "global"? All data types defined in a program or class are local by definition, because they do not exist outside this development object. The only data types which exist globally are defined in the Data Dictionary. The same applies to classes and interfaces (which are just data types with a higher grade of complexity): global classes and interfaces are defined in SE24/SE80, not inside an ABAP program. A class defined in an ABAP program is a local class by definition.

 

In conclusion, all so-called global data types are also local by definition (program-wide local, to be exact). This does not touch the rule that we should not use them; but for this blog post it is important that an ABAP program cannot define global data types, so the prefix "g" is never used correctly. This raises the question: if everything is local by definition, why on earth do we need a prefix for that?

 

And, pals: don't tell me a static class attribute is the same as a so-called global program variable just because it is valid in the whole class and accessible system-wide! An attribute is called an attribute because it has a context (the class's context!); this is very different from what a variable is. And the accessibility of such an attribute depends on its visibility configuration: a private attribute is not accessible system-wide.

 

2. The data type dimension indicator

 

The next question is why I should use an indicator describing the dimension of a data object at all. A table is just a data object, the same as a structure or a field. In most cases I simply don't know which dimension the data object I work with has, for instance while working with references and reference variables (which we should do most of the time). And what is, from a developer's view, the difference between a CLEAR on a field and the same command on an internal table? It simply does the same thing: CLEAR clears whatever stands to the right of the command. It's that simple. What kind of information does the "t" in lt_ekko give me in this context?

 

What about nested tables? In

 

TYPES:
  begin of ls_main,
    materials type standard table of mara,
    ....
  end of ls_main,
  lt_main type standard table of ls_main.

 

the table materials should be named lt_materials. No? Why not? Why does this "so important" piece of information, that this is a table, suddenly become worthless, just because it's a component? That this is a table is only important in relation to the access context. Which means: for a statement like

 

ASSIGN COMPONENT 'materials' OF STRUCTURE ls_main ....

 

materials is a component, nothing more and nothing less.

 

I'm not kidding: I have really read developer guidelines which strictly order that a field symbol has to have the prefix "fs_", which is really dumb, because a field symbol already has its own syntax element, the angle brackets "<...>"! Is this how a professional developer should work?

 

The next example is a guideline which says that I should not use "lv_" for local variables, but "li_" for local integers, "ln_" for local numerics, "lc_" for local characters (which conflicts with local constants), and so on. A developer needs a list of "magic prefixes" on his desk to keep these dozens of prefixes in mind!

 

But this causes a problem: what if you have to change the data type definition during development or maintenance? You then have to rename the variable throughout the complete call hierarchy across the whole system, which means you may have to touch development objects only for the renaming, and you have to test all these objects after changing the code! What a mess! If you need to fill your time, get a hobby, but not this kind of evil work.

 

It's a well-known rule: the more development objects you have to change, the more likely you are to hit objects which are locked by other developers.

 

A public example: the change of data type definitions from 32 to 64 bit in Windows. All the developers who used Hungarian Notation now have data type names referring to a definition which has nothing to do with the actual type!

 

What about casting? I could find more questions like this, but that's it for now, because this is enough to get the key message across.

 

3. The structure's description

 

This is another piece of surplus information, because the structure's or basic data type's definition is just a double-click (in SAP GUI) or a mouse-over (Eclipse) away from the developer's cursor.

 

Now that we know which redundant, surplus information we get, let's have a look at the important information we do not get from lt_ekko:

 

What kind of data will we find in lt_ekko? EKKO contains different kinds of documents: purchase order headers, contract headers, and so on. And on deeper inspection, there are several different kinds of purchase orders. Standard PO? Cross-company? What a cross-company purchase order exactly is depends on the individual definition of the customer's business process, so its identification is not easy!

 

To learn what kinds of documents were selected into table lt_ekko, we have to retrace the data selection and the post-selection processing, which is much more complex than a double-click. For this reason, this is the most important information to place in the table's name!

 

If you select customers, what do you select in detail? Ship-to partners? Payers? Or the companies who will get the bill? Whatever you do, lt_kna1 won't tell me that! ship_to_partners will!

Conclusion:

 

To get rid of all surplus information and replace it with relevant information, we should not name this table lt_ekko but cc_po_hdrs, to demonstrate: these are multiple (hdrs = plural = table, if you really want to encode that) cross-company purchase order headers. A loop could look like this:

 

LOOP AT cc_po_hdrs     "<--- plural = table

INTO DATA(cc_po_hdr).  "<--- singular = record of

   ....

ENDLOOP.

 

No surplus information, all relevant information included. Basta!

 

I am not alone

 

You may ask why this nameless silly German developer is telling you how to do your job. But I am not alone, as the following quotes prove:

 


  • Bjarne Stroustrup, the creator of C++, wrote in his C++ Style and Technique FAQ:

“No I don’t recommend ‘Hungarian’. I regard ‘Hungarian’ (embedding an abbreviated version of a type in a variable name) a technique that can be useful in untyped languages, but is completely unsuitable for a language that supports generic programming and object-oriented programming”


  • Robert Martin, Founder of Agile Software Development, wrote in “Clean Code: A Handbook of Agile Software Craftsmanship”:


"...nowadays HN and other forms of type encoding are simply impediments. They make it harder to change the name or type of a variable, function, member or class. They make it harder to read the code. And they create the possibility that the encoding system will mislead the reader.”



  • Linus Torvalds wrote in the Linux kernel coding style guidelines:

"Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged—the compiler knows the types anyway and can check those, and it only confuses the programmer.”


"Brain damaged", to repeat it. Is this the way we want to talk about our work, the work we should be proud of?


Conclusion


Of course I know that masses of developers will disagree, simply because they have always worked like this (they learned it from others, years or decades ago) and they don't want to change. Hey, we are software developers! We are the ones who permanently have to question the things we do. Yesterday we did procedural software development, today our whole world is object-oriented, tomorrow we are going to work with unbelievable masses of "Big Data", resulting in completely new work paradigms we don't know yet. And those guys are too lazy to question their way of naming data types? Are you kidding?

We are well-paid IT professionals, permanently ahead in the latest technologies, working on the best ERP system ever (sic!), and SAP itself shows all of us that it can throw away the paradigms of 20 years to define new ones (to highlight the changes of S/4HANA, which I never would have thought possible).

Let's learn from our colleagues who also develop applications with a high grade of complexity. Let's learn from the guys who invented the paradigms we work with. Let's forget the rules of yesterday...


Appendix


I have been asked lately whether I dislike prefixes altogether. The answer is: no. There are indeed prefixes that make sense:

  • importing parameters are read-only, so they may have the prefix "i_".
  • exporting parameters have to be initialized, because their value is undefined if they are not filled with a valid value. So we should give them the prefix "e_".
  • changing parameters transport their value bidirectionally, so they should be marked with a "c_", and
  • returning parameters are returned by value, so we should mark them with the prefix "r_".

 

This is a naming rule I would follow and support if requested, because these prefixes transport relevant, non-redundant information (things which are not obvious) that influences the way we handle these parameters.
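A method signature following this rule could look like this (class, method and parameter names are illustrative; note that in ABAP a RETURNING parameter cannot be combined with EXPORTING or CHANGING parameters, hence the two methods):

```abap
CLASS cl_invoice_builder DEFINITION.  " illustrative name
  PUBLIC SECTION.
    METHODS build_invoice
      IMPORTING i_po_hdr     TYPE ekko    " read-only input
      EXPORTING e_message    TYPE string  " initialized by the method
      CHANGING  c_item_count TYPE i.      " transported bidirectionally
    METHODS get_invoice_id
      RETURNING VALUE(r_invoice_id) TYPE string. " returned by value
ENDCLASS.
```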


Request for comments


Your opinion differs? Am I wrong, completely or in some details? Would you like to back me up? Feel free to leave a comment...

 

Disclaimer: English isn't my mother tongue. Although I do my very best, some things may be unclear, mistakable or ambiguous by accident. In that case, I am open to improving my English through your suggestions.

Reasons for so many ABAP Clones


Note: I originally published the following post in my company's blog on software quality. Since it might be interesting for many ABAP developers, I re-publish it here (slightly adapted).

 

From the code audits and quality control of ABAP projects we do in our company, we observe again and again that ABAP code tends to contain a relatively high rate of duplication within the custom code. The data of our benchmark confirms this impression: of the ten projects with the highest rate of duplicated code, six are written in ABAP (but only 16% of all projects in the benchmark are ABAP projects). In this post I will discuss the reasons for this tendency towards clones in ABAP.

 

What is Cloning and Why is it Important?

 

Code clones are duplicated fragments (of a certain minimal length) in your source code. A high amount of duplicated code is considered to clearly increase maintenance effort in the long term. Furthermore, clones bear a high risk of introducing bugs, e.g. if a change should affect all copies but was missed in one instance. For more background information, see e.g. the post of my colleague or »Do Code Clones Matter?«, a scientific study on that topic.

 

The following figure shows a typical example of an ABAP clone:

 

abap_clone.png

 

The code is fully identical except for the name of the variable that is iterated over. As mentioned before, in many ABAP projects we see a lot of such clones (frequently, the cloned part is much longer; several hundred lines are no surprise).
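As a hedged sketch (variable, type and method names are invented for illustration, using the flight-data model), such a clone and its resolution could look like this:

```abap
* Before: the same loop occurs twice, differing only in the work area name
LOOP AT lt_flights INTO DATA(ls_flight).
  lv_total = lv_total + ls_flight-seatsmax.
ENDLOOP.
" ... an identical copy elsewhere iterates via ls_conn instead of ls_flight ...

* After: the duplicated logic is extracted once into a reusable method
METHOD sum_seats.
  " IMPORTING it_flights TYPE ty_flights RETURNING VALUE(rv_total) TYPE i
  LOOP AT it_flights INTO DATA(ls_flight).
    rv_total = rv_total + ls_flight-seatsmax.
  ENDLOOP.
ENDMETHOD.
```

Both call sites then invoke sum_seats( ), so a future change to the aggregation logic is made in exactly one place.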

 

So, What Might be the Reasons for the High Tendency Towards Code Cloning in ABAP?

 

First, it is not a lack of language features for re-using code: The most important mechanism is the ability to structure code in re-usable procedures. There are form routines, function modules and methods, but it seems the barrier to consistently using these concepts is higher than in other languages. Why? I see three main causes:

 

  • Poor IDE support
  • Constraints in the development process
  • Dependency fear

 

Besides these constructive reasons, there is also a lack of analysis tools to detect duplicated code. The SAP standard tools are not able to detect clones within custom code, so a third-party tool is required for clone detection. However, in this post I will focus on the aforementioned constructive reasons and discuss them.

 

Poor IDE support

 

In every language, the fastest way to implement a function which differs only in a tiny detail from an already existing one is to copy the source code and modify it. To avoid the duplication, these are common practices:

 

  • Extract the common code to a separate procedure where it can be used from both the old and the new functionality
  • Add a parameter to a procedure’s signature to make it more generic
  • Rename a procedure (to reflect the adapted common function)
  • Move a procedure (method, function module) to a different development object (class, function group) for common functionality
  • Introduce a base class and move common members there

 

Most IDEs for other languages provide support for these refactorings, e.g. method calls are updated automatically if a method was moved. The ABAP Workbench SE80 (which many developers still use) provides hardly any of the refactoring support required to resolve duplicates. Even with ADT, only refactorings that are local to one development object are supported yet. This makes restructuring the code more difficult and more time-consuming, and the risk of introducing errors is increased. The last issue is especially relevant since not even syntax errors in non-edited objects might be detected; these errors first surface at runtime or during the next transport to another SAP system. All this makes duplicating ABAP code more »productive« during the initial development, but it will hinder maintenance as in any other programming language.

 

Constraints in the Development Process

 

The shortcomings of the ABAP IDEs are obvious reasons for duplicated code. More surprising, but with even more impact, are constraints in the development process. When we discuss duplicated ABAP code with developers, it is often justified by restrictions of the development scope: Assume program Z_OLD was copied to Z_NEW instead of extracting common functionality and re-using it from both programs. Sometimes the development team copied the program because they were not allowed to alter Z_OLD, as the change request was bound to specific development objects or packages. The reason for such restrictions is an organization structure where the business departments »own« the respective programs and every department fears that changes initiated by others could influence their specific functionality.

 

A similar situation arises when changing existing code is avoided to save manual test effort in the business departments. Especially if the change request for Z_NEW was issued by a different department, the owners of Z_OLD may refuse to test it. (Maybe they wouldn’t if tests were automated. Having only manual tests is not the best idea.)

 

Dependency Fear

 

Not specific to ABAP, but more widespread here, is the fear of introducing dependencies between different functionalities, especially if these are only loosely related. Often the benefit of independent code / programs is seen in the fact that a modification of the code is always local to one instance and cannot influence other parts. It is hard to say why this fear is more common in the ABAP world; one reason is the before-mentioned organization of the development process. Another reason may be the lack of continuous integration where the whole code base is automatically built. The lack of automated testing might be the major reason: Whereas substantial test suites for automated unit tests are the rule in Java or C# projects, ABAPUnit tests are not that widespread.

 

No matter what the reason for this fear of dependencies is, there is an assumption that future changes of one copy should not affect the other copies. But in many cases the opposite is true! Cloning makes the code independent, but not the functionality: it will still be a similar thing, so the independence is only apparent. Yes, there might be cases where a future change should affect only one of many copies. But very often a change should be applied at all occurrences of the related functionality. Consider bug fixes, for example: in general, these must be done in all copies. We’ve observed the same change in two copies under two different change requests (where the second change was done some time later). This almost doubles the maintenance effort without any need.

 

Can we Avoid Cloning in ABAP?

 

Yes, I’m sure cloning can be avoided, as in any other programming language. Despite the fact that many ABAP projects show a strong tendency towards cloning, we’ve also seen counter-examples with only few clones. It is possible to have a code base with many hundreds of thousands of lines of ABAP code and keep the clone coverage low. From the reasons for intensive ABAP cloning discussed above we can derive these recommendations to avoid it:

 

 

  • Dismiss copy-and-paste programming and encourage your developers to avoid duplication and restructure existing code instead. Accept that this is a bit more time-consuming in the beginning.
  • Make intensive use of common code and utilities which are intended to be used by several programs. This code should be clustered in separate packages.
  • The development team should be the owner of the code, not the business departments, at least not for common functionalities. The developers should be free to restructure code if it is worthwhile for technical reasons. Keeping the code base maintainable is a software engineering task which can hardly be addressed by the business department.
  • Make use of test automation, e.g. using ABAPUnit, and execute all of these tests at least once a day. Many regression errors can be detected this way.
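To make the last recommendation concrete, a minimal ABAP Unit test class could look like the following sketch (the class zcl_seat_calc, its type ty_flights and the method sum_seats are hypothetical; only cl_abap_unit_assert is SAP standard):

```abap
CLASS ltc_seat_calc DEFINITION FOR TESTING
  DURATION SHORT RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS sum_of_two_flights FOR TESTING.
ENDCLASS.

CLASS ltc_seat_calc IMPLEMENTATION.
  METHOD sum_of_two_flights.
    " Given two flights, the helper under test must return the seat total.
    DATA(lt_flights) = VALUE zcl_seat_calc=>ty_flights(
      ( seatsmax = 100 ) ( seatsmax = 150 ) ).
    cl_abap_unit_assert=>assert_equals(
      act = zcl_seat_calc=>sum_seats( lt_flights )
      exp = 250
      msg = 'Sum of seats is wrong' ).
  ENDMETHOD.
ENDCLASS.
```

Such tests can be executed for a whole package at once, which makes the daily regression run mentioned above cheap.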

 

If this is given, ABAP code, too, can be largely free of redundancies. Of course, you should additionally introduce an appropriate quality assurance to keep your code base clean, either by code reviews or by static analysis. More about how to deal with clones can be found in part 2 of Benjamin’s posts on cloning.


Deadlock Holiday

To whom it may concern ...

 

For any write access to a line of a database table the database sets a physical exclusive write lock on that line. This lock prevents any other write access to the line until it is released by a database commit or database rollback.

 

How can we see that in ABAP?

 

Rather simple, write a program:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

Run it in one internal session. Open another internal session and run another program in parallel:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.

MESSAGE 'Done' TYPE 'I'.

 

The program in session 2 finishes only when the first program has finished.

 

This is as expected. The second program tries to write to the same line as the first program and is therefore locked.

 

You must be aware that such locks occur not only for Open SQL statements but for all write accesses to database tables. Clearly, all writing Native SQL statements are further candidates. But other ABAP statements access database tables, too. Recently, I stumbled over EXPORT TO DATABASE.

 

Program in internal session 1:

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

Program in internal session 2:

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.

MESSAGE 'Done' TYPE 'I'.

 

The program in session 1 locks the parallel execution of the program in session 2 because the same lines of the INDX-type database table are accessed. A deadlock situation where you might not have expected it.

 

To prevent such deadlock situations, the write locks must be released as fast as possible. This means there must be database commits or database rollbacks as soon as possible. In classical ABAP programming a lot of implicit database commits occur. E.g., each call of a dynpro screen leads to a rollout of the work process and a database commit. If there is only a short time between write access and database commit, you don't notice such deadlocks in daily life. But if you have long running programs (as I have simulated above with the DO loop) without a database commit shortly after a write access, you can easily run into unwanted deadlock situations. In my recent case, I experienced deadlock situations during parallelized module tests with ABAP Unit: no screens -> no implicit database commits.

 

Therefore, as a rule: If there is a danger of parallel write accesses to one and the same line of a database table, avoid long running processes after a write access without a database commit in between.

 

In the examples above, you could prevent the deadlock e.g. as follows:

 

DATA(wa) = VALUE scarr( carrid = 'XXX' ).

DELETE scarr FROM wa.
INSERT scarr FROM wa.


CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

or

 

EXPORT dummy = 'Dummy' TO DATABASE demo_indx_table(xx) ID 'XXX'.


CALL FUNCTION 'DB_COMMIT'.

DO 100000000 TIMES.
ENDDO.

MESSAGE 'Done' TYPE 'I'.

 

By calling the function module DB_COMMIT in the programs of session 1, an explicit database commit is triggered. The programs in session 2 are not locked any more during the long running remainders of the programs in session 1.

 

It is not a rule to place such calls after each write access. Of course, a good transaction model should prevent deadlocks in application programs anyway. But if you experience such deadlocks in special situations, e.g. in helper programs that are not governed by a clean transaction model, such explicit database commits can be helpful.

Thoughts on Material Data Migration


Part I: Back Story, a developer's suffering


In most cases, a company has to go through just one Material Data Migration Project at a time; in some other cases, as the company is growing, there might be one or the other project to integrate another company’s material data. I have a customer which is a fast growing company. I can't recall a year without a migration project. In fact, during the last years there were three or more migration projects per year, and there is a queue of migrations waiting to be processed.

 

 

Due to privacy reasons and because SCN is not a pillory, the customer's name won't (and shouldn't) be mentioned here. It's just an example for problems, which can appear likewise in many projects at many customers.


Before I joined their SAP Competence Center (as an external, freelancing developer), they worked with single-use reports to migrate the new companies' data. In the past, they had tried to use LSMW, but since several external developers had failed migrating material master data with LSMW, I was not allowed to use it! In these single-use reports it was hard-coded in which way fields were to be filled, depending on their material type and its display-only/mandatory customizing, as well as the standard values to be used by default if a field was undefined or empty in the source system. Hard-coded inserts of MVERs, additional MARCs/MARDs, MLGNs/MLGTs, etc. Some flags appeared from nowhere, and there was no way to separate the generally usable coding from the project specific code (with the result that the whole program was project specific, so they had to code another one from scratch for each project). This coding was called "pragmatic".


I had to obey, knowing that I would take great risks if I tried other ways. So I did as I was told and used - under protest - hard-coded single-use reports. As we were pressed for time, no discussion arose about it. And, I must admit, my last material data migration project lay 15 years back. For the sake of peace and quiet, I did as I was advised.

 

And guess what: This project was a mess - for my nerves and my health. Instead of being proud of my work, I hated my coding. After I had made all requested changes, it was impossible to tell by whom they had been required. Of course, at go-live, all data had been migrated in time and correctly (hey, that’s what I am paid for!), but you don’t want to know how much money they had to pay. I won't quote what I said to them after finishing the project (it wasn't very friendly, but honest), but I said that I wouldn't do another migration project in a similar way; I wanted to go my own way.

 

Because the next migration project was already announced, I knew I had to find a solution for this, and the most important items were easy to identify:

 

  • Separation

between frontend and backend features; the single-run programs used in the past were designed to be started by the developer and no one else. I wanted to have an application which can be started by everyone after a short briefing. And I don’t want to test the whole migration stuff just because the frontend changes (S/4HANA is just around the corner, even for this customer!)


  • Exception handling

Of course, I want to work with Exception Classes....


  • Documentation

I hate undocumented development objects, and even though most of SAP's are not documented, I prefer to document mine (if the customer does not want to pay for the documentation, I even do it in my spare time). So each class, each interface, each component, each program, table and data element has to be accompanied by a documentation. The expectation was high: For an experienced ABAP OO developer, a single workday of eight hours has to be enough to be able to perform the full maintenance of the program.


  • Testing

mostly it works like this: Try a few different cases (just one in most cases) and if you don’t get a dump, the app is working fine by definition. I love test classes and I want a minimum of test effort. A test class is written once, and a well defined bunch of test cases (growing and growing, because each issue from the productive system has to be simulated as well) can be processed with a single click. The effect is that no working feature can be broken by developer mistakes.


  • Separation of concerns

It would have to have a reusable and a project specific part. In each project, there are some essentials which have to be developed only once to be used in every migration project. On the other hand, there is always project-specific code which cannot be handled in the reusable part. On closer inspection, a third layer appears between these two layers, bundling similar projects. We’ll get deeper into that later. In particular, the „you need multiple MARAs when you want to create multiple MARCs/MARDs/MLGNs/….“ thing (more info about it below) I wanted to code only once!


  • Field status determination

As the FM MATERIAL_MAINTAIN_DARK does, I want to read the customizing to determine input/output/mandatory attributes - not just to send a simple error message and abort (like the FM does), but to have the chance to fix the problem automatically. It turned out that the customer was wrong: Reading the customizing was much faster and easier to implement than collecting the filling rules from all functional consultants! In addition to this, I want to determine the views I have to create from the customizing.


  • Protocol

Each callback "why does field X in material no. Y have value Z?" has to be answered by a protocol which can be inspected by the functional consultants, so there is no need to bother the developer. To get this, all FM messages and all data manipulations have to be accompanied by a protocol entry.

The problem was to sell this solution to my customer. So I needed two things: a good, advertising-effective name and a calculation showing that my solution is cheaper than the single-run programs used in the past. For the name, I had to exaggerate a bit and chose „Material Data Migration Framework“ - you can call a box of cigarettes a 'smoking framework' and every CIO will buy it! - and changed its abbreviation from MDMF to MAMF to make it speakable like a word.

The calculation was simple: I just made a bet, stating that I would cut my bill if the costs were higher than those of the last project. To make a long story short: The costs have been much lower (and the migration much faster, as well!), and since most of the coding is reusable, the costs in the following projects will be FAR lower. They never had such a smooth migration project.

 

Part II - Elementaries


Explanations:

  • In the text below, I use $ as a variable for the customer’s namespace, in most cases Z or Y, in some cases something like '/…./'.
  • The migration tables' dependencies, explained first, will be called the "object hierarchy", which must not be mixed up with the "class hierarchy", which will be explained later.
  • I won't post any coding here, because the customer paid for this coding, so they own it.


At first, we need a package to collect all related development objects: $MAMF.


For material master data migration, we keep using FM MATERIAL_MAINTAIN_DARK, which works in logical transactions, as I mentioned before. More details are explained in its documentation. The most important fact is that the migration tables' records are related to other records of the same material master data set (material number). One example: To post a material master data set with multiple MARC records, each with multiple MARD records, there have to be multiple MARA records (in the single-use programs this problem was solved by inserting the multiple entries directly).


This was the deciding factor for developing object-oriented. I realized that I would have to interpret each record of each migration table of FM MATERIAL_MAINTAIN_DARK as an object, because an object has a constructor and a destructor. This means that a MARD record can check at construction whether or not there is a MARC record related to the same plant. If not, it fires the MARC record's constructor to generate one, and this constructor checks if there is a MARA record using the same transaction number TRANC. This results in an object hierarchy.
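A sketch of this cascading construction inside the MARD class's constructor could look as follows (the class name follows the naming scheme described later; the method signatures are assumptions):

```abap
METHOD constructor.
  " A MARD object ensures at construction that the superior MARC object
  " for the same material/plant exists. get_instance( ) creates it on
  " demand, and the MARC constructor in turn ensures a MARA record with
  " the same transaction counter TRANC exists.
  DATA(lo_marc) = $cl_mamf_mmd_marc=>get_instance(
                    iv_matnr = iv_matnr
                    iv_werks = iv_werks ).
  " ... then fill this object's own MARD_UEB record for the storage location
ENDMETHOD.
```

So a project developer only creates the MARD objects he needs; the superior MARC and MARA records come into existence automatically.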


So I need a class inheritance hierarchy, which differs - as mentioned above - from the object hierarchy: a basic class $CL_MAMF_MMD, the same for all material master data migration projects, and a subclass $CL_MAMF_MMD_nnnn for each migration project, dealing with the project specific steps (nnnn is a migration project ID).


Anticipating later parts: we’re gonna get some other basic classes, i.e. $CL_MAMF_PIR… for Purchasing Info Records, $CL_MAMF_BOM, etc., which results in a „higher level (root) class“ $CL_MAMF for all migration projects. But for now, this is irrelevant.

We need this hierarchy for all migration table types: one for MARA_UEB, one for MARC_UEB, another one for MARD_UEB, etc. For LTX1_UEB, we gonna do something special: a dedicated class for each long text with name = text ID: BEST, GRUN, PRUE and IVER. For the sales text (text ID 0002), we take the text object MVKE for better identification of the class and (because there already is an MVKE_UEB table) change it to MVKT. All these classes inherit (as $CL_MAMF_MMD does) from $CL_MAMF, which means they are on the same level as $CL_MAMF_MMD. To repeat it: The object hierarchy must not be mixed up with the class hierarchy!

The root and the basic classes' instance generation is to be set to "abstract", the project specific classes' will be set to private, and they are always final to avoid project-to-project dependencies.


$CL_MAMF                root class

$CL_MAMF_MMD_xxxx       basic class

$CL_MAMF_MMD_xxxx_nnnn  project specific class


     xxxx = table / long text (MARA, MARC, ..., BEST, ....)    (not applicable for migration process controlling class)

     nnnn = migration ID


Conclusion


For each migration project, we just have to create a new subclass of each of the basic classes (except for the data we don’t want to migrate - we won’t need a BEST_nnnn class in a migration project which is not supposed to migrate purchasing order texts).

The controlling class (...MMD) has to have a class constructor to get some customizing tables (particularly the field status of the MM01 screen fields). This class will also have a method post, which posts the whole material data set.


All classes have a protected constructor, because we adopt a modified Singleton design pattern (a so-called Multiton) to administrate the instances in a static internal table MY_INSTANCES, containing all key columns of the related migration table and a column for the objects related to these key columns.
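A sketch of this Multiton administration, e.g. for the MARC class (key columns and signatures are assumptions; $ stands for the namespace as explained above, and in the real framework instance creation would be delegated to the factory described below):

```abap
CLASS $cl_mamf_mmd_marc DEFINITION CREATE PROTECTED.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      IMPORTING iv_matnr           TYPE matnr
                iv_werks           TYPE werks_d
      RETURNING VALUE(ro_instance) TYPE REF TO $cl_mamf_mmd_marc.
  PRIVATE SECTION.
    TYPES: BEGIN OF ty_instance,
             matnr  TYPE matnr,
             werks  TYPE werks_d,
             object TYPE REF TO $cl_mamf_mmd_marc,
           END OF ty_instance.
    CLASS-DATA my_instances TYPE HASHED TABLE OF ty_instance
      WITH UNIQUE KEY matnr werks.
ENDCLASS.

CLASS $cl_mamf_mmd_marc IMPLEMENTATION.
  METHOD get_instance.
    " One instance per key combination: reuse a registered instance,
    " otherwise create and register a new one.
    READ TABLE my_instances INTO DATA(ls_instance)
      WITH TABLE KEY matnr = iv_matnr werks = iv_werks.
    IF sy-subrc = 0.
      ro_instance = ls_instance-object.
    ELSE.
      ro_instance = NEW #( ).  " allowed here despite CREATE PROTECTED
      INSERT VALUE #( matnr  = iv_matnr
                      werks  = iv_werks
                      object = ro_instance ) INTO TABLE my_instances.
    ENDIF.
  ENDMETHOD.
ENDCLASS.
```

The hashed table makes the lookup cheap even for tens of thousands of materials.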


The next step I did not implement, but it seems to be a good idea for the future: The following methods could be bundled in an interface $IF_MAMF_LT, implemented by all basic classes and inherited by all project specific classes.


Because ABAP does not support overloading, we have to abstract the importing parameters, which has to be explained: We store a data reference in this class, written by a set method and read by a get method. So we can be sure that every data object can be stored. We can't use a typed signature for that, because each migration table has its own structure.


A free method provides a destruction service, including automatic destruction of all subordinate objects.


A factory method builds the class name by concatenating the basic class name and the migration ID to return an instance of a basic class's subtype.
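Such a factory could be sketched like this (the migration ID parameter and the class names are assumptions following the naming scheme above):

```abap
METHOD factory.
  " Concatenate the basic class name and the migration ID to get the
  " project specific class, then instantiate it dynamically. The returning
  " parameter ro_instance is typed with the basic class, so dynamic
  " binding applies and the framework never names a project class directly.
  DATA(lv_classname) = |$CL_MAMF_MMD_MARC_{ iv_migration_id }|.
  CREATE OBJECT ro_instance TYPE (lv_classname).
ENDMETHOD.
```

If the project specific class does not exist, CREATE OBJECT raises CX_SY_CREATE_OBJECT_ERROR, which the framework can catch and report as a configuration error.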


An instance creator method get_instance checks the existence of the superior object - if this check fails, the constructor of its class will be called - and calls the constructor of its own class to return a unique instance.


The result of this concept is that the dependencies between the migration tables have to be coded only once (in the basic classes) but are used in each migration project. No developer of a migration project has to care about this stuff; he just creates the objects he needs, and the objects themselves will take care of the technical dependencies that MATERIAL_MAINTAIN_DARK needs.


And, as explained earlier, we don't want to hard-code field contents in the migration program, so we have to read the field's status from the customizing. MATERIAL_MAINTAIN_DARK does that, too, but only for firing an error message and aborting. This has two consequences: On the one hand, we can copy the coding instead of re-inventing it, and on the other hand, we can avoid the abortion.


The method get_field_status returns an indicator for obligatory and output-only fields, and in combination with the field's content we can find filled output-only fields (which have to be cleared) and empty obligatory fields. For these fields, we need a get_(fieldname) method, which returns a default value - implemented in the basic class for all projects or project specifically in the final class. These methods (there will be hundreds of them) shall be created automatically, and in most cases they will be empty (meaning: take the data from the source). The same goes for set methods to manipulate the saving process for each field. An example of a manipulated field content is MARA-BISMT, which contains the material number of the former system. My customer has multiple old material numbers, because (for example) company A has been migrated to plant 1000 and company B to plant 2000. For this reason, they defined a table to store the BISMT per plant. The easiest way to do that in MAMF is to implement this in the method $CL_MAMF_MMD_MARA_nnnn->set_bismt( ), which stores the relation between the former and the current material number in that table for each migration project (meaning: for each plant).
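A sketch of such a field method pair (the mapping table $mamf_bismt_map, its columns and the instance attributes are assumptions; the target plant is fixed per migration project here):

```abap
* In the basic class: the empty default means "take the value from the source".
METHOD get_bismt.
  CLEAR rv_bismt.
ENDMETHOD.

* In the project specific class $CL_MAMF_MMD_MARA_nnnn: a redefinition that
* additionally records the old-to-new material number relation per plant.
METHOD set_bismt.
  super->set_bismt( iv_bismt ).
  DATA(ls_map) = VALUE $mamf_bismt_map( werks     = mv_target_werks
                                        bismt     = iv_bismt
                                        matnr_new = mv_matnr ).
  MODIFY $mamf_bismt_map FROM ls_map.
ENDMETHOD.
```

Since the generated methods are empty by default, only the handful of fields with project specific rules ever need manual coding.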


Part III  - The Migration Cockpit


I've always been of the opinion that using an app has to be fun for the user, not just performing their work duties. So the user's view on each migration project is very important: the Migration Cockpit application, which will be a report with the transaction code $MAMF and follows the current design rules of SAP: Beside the selection screen, the report itself won't contain any coding, only method calls. The coding will be placed in local classes: lcl_cockpit for the main coding and lcl_cockpit_evthdl for the event handler, because I prefer to handle report PAI by raising events, i.e. when the user presses F4, an event is raised and the value help is implemented in the event handler class.

The selection screen is split into three areas:


  1. The header line with migration ID and it's description
  2. A bunch of tabstrips, one for each migration object. By now, we only need a tab for material master data, but we want to have the chance for getting more to have a single point of migration works for all material data.
  3. A docking container, displaying a short briefing, what to do to migrate the migration object from the foreground / active tab.


To define a migration project, we need a customizing table with the columns migration_id, description (I don't want to maintain this text in multiple languages, because it will be the new company's name, so no language field is needed) and a flag for each migration object, with a generated maintenance screen. The Cockpit will read this table to show this data and to disable the tabs of all migration objects we don't want to migrate. A button in the cockpit's toolbar will open a pop-up for table maintenance.

The cockpit will have three modes:


  1. Post online,
  2. Post in background job, which has to be started immediately (after a single click) and
  3. Post in a job, we plan to run later.

 

In both background modes, we need an additional step to send an SAP express mail informing the user that the migration run has finished. All run modes can be processed as a test run or a productive run. And we have to put some protocol-related buttons onto the screen.

 

Now we come to a special feature: buffering migration data! In the messy migration project I talked about earlier, we had to migrate about 70,000 materials, loaded from an Excel file and enriched with additional data directly from the source system via RFC. This takes hours, and a simple network problem can disconnect the migrating person's SAP GUI from the application server, causing an interrupted migration. To make it possible to upload the migration data from the client to the application server without giving up background processing, and to speed up the migration run, we have to buffer the data on the application server. To avoid creating application server files from the source system's data, we will save all data in a cluster table, INDX in this case. Advantage: We can store ready-to-migrate SAP tables. And the flexible storage in cluster tables allows us to save not only the data from SAP tables but the Excel file as well, and the selection screen may show who buffered the data and when. And maintaining a cluster table is much easier than managing files on the application server.

 

The class hierarchy may look like this:

$MAMF_BUF           Basic class

$MAMF_BUF_RFC       RFC specific class

$MAMF_BUF_RFC_nnnn  project specific class for a migration via RFC

$MAMF_BUF_XLS       XLS-based specific class

$MAMF_BUF_XLS_nnnn  project specific class for a migration, based on XLS-files

 

So the migration process will have two steps: buffering the source system's data and migrating the buffered data. For multiple test runs, you'll buffer once for all test runs, which is a nice time saver. And now we can see the third layer between the migration project and the migration object: the migration process. All RFC-based data collections are similar to each other, just as all Excel-only based migration projects are similar to each other, and so on. This differentiation only works for the buffering process; after that, we have a standard situation for all migration projects: The data can be found in INDX, sorted into SAP structures MARA, MARC, etc., so we don't need this third layer in the migration classes described earlier.

 

Of course, the brief instruction in the docking container has to be fed by a translatable SAPscript text, and it takes only a handful of steps to implement it. Besides, the cockpit will have an extensive documentation explaining each step in detail.

 

Part IV -- Saving Protocols and look ahead

 

A migration protocol should particularly support two ways of analysis: On the one hand, we have to analyze what errors occurred during a migration run to fix these problems. On the other hand, some functional consultant may ask the developer "Why does field x of material no. y have value z?", and the question is allowed why the developer has to figure that out. To avoid overloading the developer with questions like this, we should write all data manipulations into the protocol, so that each difference between the source data and the migration data we send to MATERIAL_MAINTAIN_DARK can be read in the protocol. All undocumented differences between this and the posted material data were made by the SAP system.

 

First of all: The application log is not appropriate for that, because it cannot be filtered properly. I tried it this way and it was a mess. So we'll define a transparent table in the data dictionary to store the protocol in. Each insert has to be committed immediately, because a rollback caused by a program abortion (the worst-case scenario) would send all protocol entries up into Nirvana. This table $MAMF_LOG_MMD needs to have the following columns: migration ID, number of the migration run (we're gonna need a few test runs, I'm afraid), testrun/productive indicator, material number, message text, person in charge. By filtering an SALV-based list, the functional consultant himself can retrace the "migration story" for each material number of each migration run, and he can do that years after the migration if he wants to. And he is able to filter the list for the messages which are relevant just for him. If a field content, e.g. from MBEW, causes any trouble, the name of the FI consultant has to be placed in this column.
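The protocol writer could be sketched like this (the column names of $MAMF_LOG_MMD follow the description above but are assumptions):

```abap
METHOD add_protocol_entry.
  DATA(ls_log) = VALUE $mamf_log_mmd( migration_id = iv_migration_id
                                      run_no       = iv_run_no
                                      testrun      = iv_testrun
                                      matnr        = iv_matnr
                                      message      = iv_message
                                      in_charge    = iv_in_charge ).
  INSERT $mamf_log_mmd FROM ls_log.
  " Commit immediately, so a later rollback cannot wipe this log entry.
  CALL FUNCTION 'DB_COMMIT'.
ENDFUNCTION is not meant here - close the method:
ENDMETHOD.
```

A design alternative would be to write the log via a secondary database connection (INSERT ... CONNECTION), so the immediate commit does not interfere with the migration LUW on the default connection.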

 

The Migration Cockpit needs a button on the material master data tab which reads the table and filters the list for the last run (which is the most relevant in most cases), but as said before, the consultant is able to adjust these filter rules to meet his individual requirements.

 

What's next? There are more material data to be migrated, so - as mentioned before - there will be more basic classes beside $CL_MAMF_MMD, e.g. $CL_MAMF_PIR for purchasing info record migration, $CL_MAMF_BOM for bills of material and $CL_MAMF_STK for the stock migration. Although the migration processes will be quite different, we have the chance to migrate all material data with one migration cockpit. For this reason, we need a root class $CL_MAMF to make the migration framework extensible (the magic word is "dynamic binding") without changing the root class's coding.

 

In conclusion, we have an application that separates the UI layer from the business logic and the reusable from the individual coding, is easy to use even for non-developers, and is extendable. With this application, I have a lot of fun and no frustration in my migration projects, and I learned a lot about OO concepts and design patterns (even if they are not all described here, I did, of course, use them). The customer is thrilled how easily and fast we can migrate material data (which is important, because without material master data there are no orders, no purchases, no stocks, etc.).

 

Discussing the efforts

 

Yes, the question is allowed: why did I put so much effort into such a simple thing as the migration of material data? Well, it isn't as simple as it seems, and the quality of this data is underrated. I often saw duplicates, missing weights, etc. And - we shouldn't forget this important fact - this was really fun software development, kind of a playground, because I had the chance to work alone: I defined the requirements, wrote the concept, developed the data model, wrote the code and tested it, and after all that I could say: this is completely MY application. No one else laid a hand on my application and I never had to hurry, because the raw concept in my head was finished before the migration project started. And in each following migration project, I was a little bit proud, because now we have a standard process for this and we're able to do a migration within 2-3 days without being in a hurry.

 

I hope you had fun reading this and perhaps you learned a bit. If you have any questions, suggestions for improvements, comments or anything else, feel free to leave a comment.


Disclaimer: English ain't my mother tongue - although I do my very best, some things may be unclear, mistakable or ambiguous by accident. In this case, I am open to improving my English by getting suggestions

How to add Generic Object Services to your context menus


Quick intro: This post is part of a series in which I show you some interesting ABAP tips and tricks. I'll present this in the context of our own developments at STA Consulting to have a real-life example and to make it easier to understand.

 

Requirement: we have a Business Object displayed in a field of an ALV grid and we want to add the Generic Object Services to our custom context menu.

 

Background information:

 

Business Objects

 

In practically all ALV grids you will find fields that contain Business Objects (BO to keep it short). For example, a Plant, a Vendor or a Material is a BO defined by SAP. You can display BOs using transaction SWO1. Our example will be BUS1001 (Material).

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_01_resized.jpg

 

In order to uniquely identify a BO, there is a link to at least one field of a database table. BO BUS1001 is linked to MARA-MATNR, which is the unique identifier of a material.

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_02_resized.jpg

Click on the image above to see the full screenshot

 

This makes it easy to identify if an ALV field contains a BO or not: simply check the field catalog of the ALV. If there is a reference to a table field which is also referenced by a BO, then we can add the GOS menu to it.
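A hedged sketch of that check, assuming the grid reference and the BO-to-field mapping are already at hand; the variables lo_alv_grid, lv_bo_table and lv_bo_field are illustrative assumptions, not part of the original post:

```abap
" Sketch only: lv_bo_table/lv_bo_field hold the table field the BO
" is linked to (e.g. 'MARA'/'MATNR' for BUS1001).
DATA lt_fcat TYPE lvc_t_fcat.

lo_alv_grid->get_frontend_fieldcatalog(
  IMPORTING et_fieldcatalog = lt_fcat ).

LOOP AT lt_fcat ASSIGNING FIELD-SYMBOL(<ls_fcat>)
     WHERE ref_table = lv_bo_table AND ref_field = lv_bo_field.
  " <ls_fcat>-fieldname contains a BO key, so the GOS menu
  " can be offered for this column.
ENDLOOP.
```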

 

Generic Object Services (GOS)

 

GOS is a very useful standard tool that allows us to do certain things with BOs. You can add notes and attachments, start and display workflows, link BOs together, send BOs as attachments in messages etc. I'm sure you've seen the classic toolbar menu of GOS in many transactions like MM03:

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_04_resized.jpg

 

Why is it needed?: The basic reason we made this development is that the GOS menu is only available in certain transactions. For example, if you want to attach a file to a material, you have to launch MM03. In order to do this, you have to open a new window, copy-paste the material number, hit enter etc. It would be great to attach the file in the transaction you are in.

 

Solution: let's assume that we have already identified which ALV field contains the material number. After this, we will use a standard class to add the GOS menu to our context menu.

 

First declare and create the object:

 

DATA: lo_gos TYPE REF TO cl_gos_manager.

CREATE OBJECT lo_gos
  EXPORTING
    ip_no_commit = 'R'
  EXCEPTIONS
    others       = 1.

It is important to add parameter ip_no_commit to control database commits made by GOS, which may interfere with the current program. Space and 'X' are pretty trivial, 'R' means that updates will be performed using an RFC call. Naturally you have to add your own error handling in case there was any error.

 

The next step is to get the GOS menu as a context menu object. We have to supply the BO type and the BO key (BUS1001 and the material number the user right-clicked on):

 

DATA: lo_gos_menu TYPE REF TO cl_ctmenu,
      ls_object   TYPE borident.

ls_object-objtype = 'BUS1001'.
ls_object-objkey  = lv_matnr.

CALL METHOD lo_gos->get_context_menu
  EXPORTING
    is_object = ls_object
  IMPORTING
    eo_menu   = lo_gos_menu.

The object reference received in parameter eo_menu will be exactly the same as in the toolbar of MM03.

 

The last step is to add this context menu to the context menu of the ALV grid. There are hundreds of forum posts about creating your custom context menus so I won't elaborate it here. There is a standard demo program where you can check it out: BCALV_GRID_06. The bottom line is that you will have a context menu object that you can manipulate:

 

CALL METHOD lo_alv_context_menu->add_submenu
  EXPORTING
    menu = lo_gos_menu
    text = text-027.    " Generic Object Services

The end result will look like this (we have actually added the GOS menu under our own nested submenus "STA ALV Enhancer - Material"):

 

http://sta-technologies.com/wp-content/uploads/2015/08/blog_GOS_05.jpg

Click on the image above to see the full screenshot


Conclusion: This is pretty useful because now you can access the GOS in any ALV you want. Naturally if you attach a file using this context menu, it will be visible in MM03 and vice versa.

 

I hope you liked this first post, there are lots more things to come. Have a nice day!

 

p.s.: Actually it is possible to dynamically add this menu to all BOs in ALVs of all standard and custom reports, so 'BUS1001' is not hardcoded...

 

Type less - SE80 Editor Code Templates


The "new" editor has a few nice features that make it a bit less painful to use. I use them to avoid unnecessary typing of repeatedly used code blocks and to generate local classes (test, exception and so on).

 

You can find a collection of my templates in a GitHub repository. Over time I'll add more, so if you are interested, you may watch that repository. If you would like to contribute, please feel free to send a pull request!

 

 

Here's the Repository: https://github.com/zs40x/se80_templates


See also: Sharpen your ABAP Editor for Test-Driven-Development (TDD) - Part II

ABAP News for Release 7.50 - What is ABAP 7.50?


Today was RTC of SAP NetWeaver 7.5 with AS ABAP 7.50. While there are already some big pictures around, let me provide you with some small ones.

 

As with ABAP 7.40, let's start with the question "What is ABAP 7.50?" and extend the figure that answered this question for 7.40.


ABAP_750.jpg

The figure shows ABAP Language and ABAP Runtime Environment as seen by sy-saprl, so to say.

 

The good news is, we are back in calmer waters again. While the way to ABAP 7.40 was not too linear and involved development in enhancement packages (EHPs such as 7.02 and 7.31) and backports from NGAP, development from ABAP 7.40 on took place in support packages. The support packages 7.40 SP05 and 7.40 SP08 were delivered with the new kernels 7.41 and 7.42. New kernels meant new functionality. Good for you if you waited for exciting new things. Maybe not so good if you see "support packages" as what they are: with support packages most people expect bug fixes but no new functionality. And that's why 7.40 SP08 was the last one bundled with a new kernel. All further SPs for 7.40 stay on kernel 7.42 and are real support packages again.

 

Of course, the ongoing development of ABAP did not stop with that. You might have heard rumors of 7.60 and Co. already. A new release line was opened for SAP's internal cloud development immediately, starting with ABAP 7.60 based on Kernel 7.43. This line has short release cycles, where each release is connected to an own Kernel and delivers new functionality. These releases are used - and thereby tested - by SAP-internal development teams.

 

For all the other environments than AS ABAP for Cloud Development, the now shipping release ABAP 7.50  was created as a copy of ABAP 7.62 based on Kernel 7.45. For these environments, as e.g. SAP S/4HANA or SAP NetWeaver 7.5 standalone, ABAP 7.50 is simply the direct successor of ABAP 7.40 and provides the ABAP Language and Runtime Environment for AS ABAP for NetWeaver 7.5. See the big pictures, where ABAP 7.50 will be available.

 

In an upcoming series of blogs I will present to you the most important ABAP news for ABAP 7.50. And there are quite some of them ...

Dustbins at SAP TECHED 2015 in Las Vegas (1)



Apart from the keynote last night, today was the first day of TECHED2015 in Las Vegas. I thought I would write down as much as I could remember while it was still fresh in my mind. I am sure I have forgotten a lot already, but here goes:-

 

Workflow in Outlook

 

There is an add-on for Microsoft Outlook called “Microsoft Gateway server” where you can connect to OData services exposed from the SAP back end, and have them appear in Outlook 2010 or 2013 as, for example, tasks, but you can also see contact details and appointments.

 

For workflow items in particular there is a generic service you activate in transaction SICF. Thereafter you have to configure this service to say what particular type of work items you want to be visible from outside the SAP system.

 

Outlook “pulls” the data from SAP either by the user pressing a button or via some sort of scheduled job. This means the data is never really up to date, and if a work item is sitting in four people's Outlook inboxes and one of them approves it, the other three items do not instantly vanish as they would inside the SAP system.

 

SAP GUI

 

SAP GUI version 730 goes out of support in October 2015. The 740 GUI will be supported until January 2018. About a year before that, the next version will come out; its number has not been decided yet.

 

SAP GUI for Java is very strange; I don’t know why anyone would want to use that. Of course I am biased as I could not live without the graphical screen painter for all the DYNPRO screens no-one uses any more.

 

The new version of Screen Personas can work with SAP GUI for Windows as well as with SAP GUI for HTML. However, since the Personas editor is only in the HTML GUI, you have to create your new screens using that tool, and then SAP GUI for Windows can read them (if you have the Personas add-on in your ABAP system).

 

It was stressed that if you create over-complex scripts there is a performance hit, as the DYNPRO screens have to run in the background, this presumably plus the time lag for the round trip from the web browser to the back end system. I don’t know if running Personas using SAP GUI for Windows will be any faster.

 

Netweaver 7.5

 

This came out today - the 20th of October 2015 - though of course it would, to coincide with the conference. The actual EHP which you would need to install on your ERP system in order to get the new version of ABAP is not yet out, however, and no-one knows when it will be available.

 

The so-called “Push Channels” which I am giving a speech on tomorrow are now released for productive use in version 7.5. They worked in 7.4 but you were not supposed to use them, as that would have been naughty. Someone told me the underlying technology has changed somewhat radically as well. This is all needed for the so-called “internet of things”, where a sensor detects something of interest and then “pushes” the information to the SAP system, without the SAP system having to constantly poll for new information.

 

There is a new data type INT8 for storing really big numbers. This must be what they mean by “big data” – I had been wondering.

 

Karl Kessler gave a demonstration where he coded a CDS view in the ABAP system (using the CDS DDL) which linked sales order headers to items and customers. One line of code at the top said something like “OData : Publish”, which means a service was generated which exposed the data to the outside world.
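As a rough illustration of what such a view could look like (my own hedged sketch, not Karl Kessler's actual demo code; all names are made up, while @OData.publish and the association syntax are real ABAP CDS features):

```abap
@AbapCatalog.sqlViewName: 'ZSODEMO'   " illustrative name
@OData.publish: true                  " generates and exposes an OData service
define view Z_Sales_Order
  as select from vbak
  association [1..1] to kna1 as _Customer
    on $projection.kunnr = _Customer.kunnr
{
  key vbak.vbeln,
      vbak.kunnr,
      " exposing the association enables navigation to the customer
      _Customer
}
```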

 

He then tested this and the result was like a SE16 of VBAK where you could click on the customer number and then see the relevant line in KNA1.

 

Moreover, he then opened up the SAP Web IDE (I got a bit of a mixed message: speakers were saying ABAP in Eclipse was the way to go, it's great, and then when they coded anything they used the Web IDE - and it was still called “River” on one slide) and then generated a UI5 application from a template. Whilst configuring the application he chose the CDS view, and then picked some fields to display.

 

The resulting application not only had the data but automatic navigation to the customer data, as defined in the view. We were told SAP is working on transactional applications as well as report type things like this.

 

The BOPF got mentioned, I was really hoping this had not become obsolete already! Mind you its name had already changed to BOPFSADL on the slide, I have been wondering if SADL is a new framework for business objects like monsters and sales orders. Maybe it’s like the Incredible Hulk and BOPF turns into SADL when it gets angry.

 

There are a lot of improved tools for checking your custom code to see if it will work in the S/4 HANA on-premise edition. In the cloud edition you can't have Z code anyway (pretty much), so the problem is not so relevant. Mind you, I don't think any customer has gone onto S/4 HANA in the cloud yet; they all chose the on-premise version.

 

S/4 HANA in General

 

First and foremost the material number increases in length from 18 characters to 40. This will of course be backward compatible so nothing existing will break (they say).

 

In the same way that “simple finance” got rid of all the tables like COEP and BSIK and all their friends, leaving just BKPF and a new table called ACDOCA, the same treatment is being given to tables like MARD and MBEW by “simple logistics”. The slide started off with about 20 such tables, and then they all ran off leaving just two - I think one for master data and one for transactional data (MSEG was one of the two). I can't imagine how that is going to work.

 

It looks like all the functionality in the supply chain management “new dimension” product is being migrated back into the core – things like the APO and Demand Planning and the like. My guess is eventually (could take 20 years) all the “new dimension” products will die in the same way as SEM with everything going back to the core ERP system.

 

I give my speech tomorrow, so I am sure I will be pelted with rotten eggs and tomatoes. At least I am not the last speaker before the heavy drinking, I mean "networking" session.

 

Cheersy Cheers

 

Paul

 

 

 

 

 

 

ABAP News for Release 7.50 - IS INSTANCE OF


This one is a tribute to the community, to SCN, to you. One of you, Volker Wegert, blogged his ABAP Wishlist - IS INSTANCE OF some years ago and reminded us again and again. Others backed him and I myself participated a little bit by forwarding the wish to the kernel developers. And constant dripping wears the stone: ABAP 7.50 comes with a new relational expression IS INSTANCE OF, quite literally.

 

If you wanted to find out whether a reference variable of a given static type can point to an object before ABAP 7.50, you had to TRY a casting operation that might look like something as follows:

 


    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).

    DATA:
      elemdescr   TYPE REF TO cl_abap_elemdescr,
      structdescr TYPE REF TO cl_abap_structdescr,
      tabledescr  TYPE REF TO cl_abap_tabledescr.
    TRY.
        elemdescr ?= typedescr.
        ...
      CATCH cx_sy_move_cast_error.
        TRY.
            structdescr ?= typedescr.
            ...
          CATCH cx_sy_move_cast_error.
            TRY.

                tabledescr ?= typedescr.
                ...
              CATCH cx_sy_move_cast_error.
                ...
            ENDTRY.
        ENDTRY.
    ENDTRY.


In this example we try to find the resulting type of an RTTI-operation.

 

With ABAP 7.50 you can do the same as follows:

 

    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).
   

    IF typedescr IS INSTANCE OF cl_abap_elemdescr.
      DATA(elemdescr) = CAST cl_abap_elemdescr( typedescr ).
      ...
    ELSEIF typedescr IS INSTANCE OF cl_abap_structdescr.
      DATA(structdescr) = CAST cl_abap_structdescr( typedescr ).
      ...
    ELSEIF typedescr IS INSTANCE OF cl_abap_tabledescr.
      DATA(tabledescr) = CAST cl_abap_tabledescr( typedescr ).
      ...
    ELSE.
      ...
    ENDIF.

The new predicate expression IS INSTANCE OF checks whether the dynamic type of the LHS operand is more specific than or equal to the RHS type. In fact it checks whether the operand can be down cast to that type. In the above example, such a cast takes place after IF, but it's your decision if you need it. If you need it, there is even a shorter way to write it: a new variant of the CASE-WHEN construct!

 


    DATA(typedescr) = cl_abap_typedescr=>describe_by_data( param ).

    CASE TYPE OF typedescr.
      WHEN TYPE cl_abap_elemdescr INTO DATA(elemdescr).
        ...
      WHEN TYPE cl_abap_structdescr INTO DATA(structdescr).
        ...
      WHEN TYPE cl_abap_tabledescr INTO DATA(tabledescr).
        ...
      WHEN OTHERS.
        ...
    ENDCASE.

 

The new TYPE OF and TYPE additions to CASE and WHEN allow you to write IS INSTANCE OF as a case control structure. The optional INTO addition does the casting for you; I think that's rather cool.

 

By the way, the new IS INSTANCE OF and CASE TYPE OF even work for initial reference variables. Then they check whether an up cast is possible. This can be helpful for checking the static types of formal parameters or field symbols that are typed generically. Therefore IS INSTANCE OF is not only an instance check but might also be labeled a type inspection operator.
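A small sketch of the initial-reference case mentioned above (my own example, not from the original blog):

```abap
" elem is typed, but initial - there is no instance behind it.
DATA elem TYPE REF TO cl_abap_elemdescr.

" For an initial reference, IS INSTANCE OF checks the static type,
" i.e. whether an up cast to the RHS type would be possible.
IF elem IS INSTANCE OF cl_abap_typedescr.
  " true here: cl_abap_elemdescr is a subclass of cl_abap_typedescr
ENDIF.
```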


ABAP News for Release 7.50 - CDS Table Functions Implemented by AMDP


I just started blogging about important ABAP News for ABAP 7.50 and - whoosh - I am asked for CDS news. OK then, a blog about the new CDS table functions (but hey, I also have real work to do).

 

ABAP CDS is the ABAP-specific implementation of SAP's general Core Data Services (CDS) concept. ABAP CDS is open, meaning that you can use it on all database platforms supported by SAP. And yes, CDS views with parameters, introduced with ABAP 7.40, SP08, are supported by all databases with ABAP 7.50.

 

While openness has its merits, developers working only on the HANA platform might miss some code-push-down capabilities in ABAP CDS. One of these missing capabilities was the usage of database functions in data models built with CDS. Up to now, only CDS views were available. With ABAP 7.50, ABAP CDS also supports CDS table functions as CDS entities. Two problems had to be solved:

 

  • how to make table functions that are implemented natively on the database callable in CDS
  • how to manage the life cycle of native table functions to be constantly available to a data model built on the application server

 

Two questions, one answer: ABAP Managed Database Procedures (AMDP), introduced with ABAP 7.40, SP05. AMDP is a class-based framework for managing and calling stored procedures as AMDP procedures in AS ABAP. For the time being, AMDP is supported by the HANA platform only. Before ABAP 7.50, AMDP knew only database procedures without a return value. With ABAP 7.50, AMDP also supports database functions with a tabular return value. And the main purpose of these AMDP functions is the implementation of CDS table functions. They cannot be called as functional methods in ABAP, while AMDP procedures can be called as ABAP methods.

 

In order to create a CDS table function, you have two things to do:

 

  • define it in a CDS DDL source code,
  • implement it in an AMDP method with a  return value.

 

Both steps are possible in ADT (Eclipse) only.

 

The definition in CDS DDL is straightforward, e.g.:


@ClientDependent: true
define table function DEMO_CDS_GET_SCARR_SPFLI_INPCL
   with parameters @Environment.systemField: #CLIENT
                   clnt:abap.clnt,
                   carrid:s_carr_id
  returns { client:s_mandt;  
            carrname:s_carrname;
            connid:s_conn_id;
            cityfrom:s_from_cit;
            cityto:s_to_city; }
  implemented by method
    CL_DEMO_AMDP_FUNCTIONS_INPCL=>GET_SCARR_SPFLI_FOR_CDS;

 

A CDS table function has input parameters and returns a tabular result set that is structured as defined behind returns. You see that the annotation @ClientDependent can be used to switch on automatic client handling for Open SQL. You also see a new parameter annotation @Environment.systemField, also available for views, that is handled by Open SQL by implicitly passing the value of sy-mandt to that parameter. Such a CDS table function is a fully fledged CDS entity in the ABAP CDS world and can be used like a CDS view: it is a global structured data type in the ABAP Dictionary and it can be used as a data source in Open SQL's SELECT and in CDS views. Behind implemented by method you see the AMDP class and method in which the function has to be implemented.

 

After activating the CDS table function you can go on to implement the functional AMDP method in an AMDP class, that is, a class with the marker interface IF_AMDP_MARKER_HDB. An AMDP method for a CDS table function must be a static functional method of an AMDP class that is declared as follows:


CLASS-METHODS get_scarr_spfli_for_cds

              FOR TABLE FUNCTION demo_cds_get_scarr_spfli_inpcl.

The declaration is linked directly to the CDS table function. The parameter interface is implicitly derived from the table function's definition! Implementation looks like you might expect it:


  METHOD get_scarr_spfli_for_cds
         BY DATABASE FUNCTION FOR HDB
         LANGUAGE SQLSCRIPT
         OPTIONS READ-ONLY
         USING scarr spfli.
    RETURN SELECT sc.mandt as client,
                  sc.carrname, sp.connid, sp.cityfrom, sp.cityto
                  FROM scarr AS sc
                    INNER JOIN spfli AS sp ON sc.mandt = sp.mandt AND
                                              sc.carrid = sp.carrid
                    WHERE sp.mandt = :clnt AND
                          sp.carrid = :carrid
                    ORDER BY sc.mandt, sc.carrname, sp.connid;
   ENDMETHOD.

Nothing really new except BY DATABASE FUNCTION and the fact that READ-ONLY is a must. The implementation is done in native SQLScript for a HANA database function. And native means you have to take care of the client yourself. Automatic client handling is done on the Open SQL side only. Of course, a real CDS table function would do more HANA-specific things (e.g. wrapping a HANA function) than the simple join shown in this simple example! A join you can also code in Open SQL or in a CDS view.

 

Speaking about Open SQL, last but not least, the usage of our CDS table function as data source of SELECT in an ABAP program:


  SELECT *
         FROM demo_cds_get_scarr_spfli_inpcl( carrid = @carrid )
         INTO TABLE @DATA(result)
         ##db_feature_mode[amdp_table_function].

Not different to an access of a CDS view with parameters. But you must switch off a syntax warning with a pragma to show that you are sure what you are doing, namely coding for HANA only.

 

Note that we don't need to pass the client explicitly. This is because the respective parameter was annotated for implicit passing of the corresponding system field. Since the CDS table function was annotated as client dependent, the result set of Open SQL's SELECT does not contain a client column, just as for CDS views. Furthermore, all lines of the result set that do not belong to the current client are implicitly removed. That's why the return list of a client dependent table function must have a client column. For the sake of performance, the native implementation should deliver only lines of the current client. But since it is native, it has to take care of that itself. Confusing? That's what happens when open and native meet. In ABAP, the DBI normally does this handling for you. But this is not possible here.

ABAP News for Release 7.50 - Fundamentals


This one will be short but fundamental.

Only Unicode Systems in Release 7.50

 

ABAP 7.50 is Unicode only! An AS ABAP can only run as a Unicode system with system code page UTF-16. All ABAP programs must be Unicode programs where the Unicode syntax checks are effective. You must always set the respective program attribute.

 

Finally a programming guideline that is gone for good.

 

All the weird non-Unicode stuff is deleted from the  documentation.

 

Nothing more to say from my side.

 

New Built-in Data Type INT8

 

The existing built-in types b, s, i in the ABAP language and the respective built-in types INT1, INT2, INT4 in the ABAP Dictionary got a big brother (or is it a sister?):

 

int8 in ABAP language and INT8 in the ABAP Dictionary.

 

They specify 8-byte integers with a value range of -9223372036854775808 to +9223372036854775807.

 

Not much to say about that either. Simply go and use it if you need big integers. The rules are mainly the same as for 4-byte integers, of course with enhanced values for the alignment requirement (the address must be divisible by 8), the predefined output length on screens (20), and a new calculation type int8 for arithmetic expressions.

 

Just a little example to remind you that it is always good to use an integer calculation type if you calculate integers, especially for big integers:


DATA arg TYPE int8 VALUE 2.

cl_demo_output=>new(
  )->write( |**  : { arg ** 62 }|
  )->write( |ipow: { ipow( base = arg exp = 62 ) }|
  )->display( ).

While ** calculates with floating point type f and produces a wrong result, ipow calculates with int8 and produces the correct result.

 

PS: Cool way of "writing" isn't it? But only for demo purposes ...

ABAP News for Release 7.50 - Test Seams and Test Injections


Writing ABAP Unit tests can be somewhat cumbersome if the code to be tested is not suited for automatic tests. All the hubbub about mock frameworks or test-driven development isn't worth a cent if you have to deal with code that never came in touch with the concept of separation of concerns. Imagine you have code to maintain that depends on database contents or calls UI screens, and your boss wants you to increase the test coverage of the department - a real-life scenario? Yes, at least in my life. If you cannot redesign and rewrite the whole application, as a workaround you make the code test dependent. This is regarded as bad style, but it helps.

 

As a simplistic example, take a method that gets data from a UI screen but should be tested by a module test. Normally there is no UI available during the test. Setup and teardown methods do not help either, as they might for selecting data from a database by providing test data. A workaround before ABAP 7.50 was a free-style test flag, e.g. as follows:

 

CLASS cls DEFINITION.
  PUBLIC SECTION.
    METHODS get_input
      RETURNING
        VALUE(input) TYPE string.
  PRIVATE SECTION.
    DATA test_flag TYPE abap_bool.
ENDCLASS.

CLASS cls IMPLEMENTATION.
  METHOD get_input.
    IF test_flag IS INITIAL.
      cl_demo_input=>request( CHANGING field = input ).
    ELSE.
      input = 'xxx'.
    ENDIF.
  ENDMETHOD.
ENDCLASS.

 

The test method of a test class that is a friend of the class to be tested can influence the method by setting the test flag.

 

CLASS tst DEFINITION FOR TESTING
          RISK LEVEL HARMLESS
          DURATION SHORT
          FINAL.
  PRIVATE SECTION.
    METHODS test_input FOR TESTING.
ENDCLASS.

CLASS tst IMPLEMENTATION.
  METHOD test_input.
    DATA(oref) = NEW cls( ).
    oref->test_flag = abap_true.
    DATA(input) = oref->get_input( ).
    cl_abap_unit_assert=>assert_equals(
    EXPORTING
      exp = 'xxx'
      act = input ).
  ENDMETHOD.
ENDCLASS.

 

Bad style and not governed by any conventions. To overcome this, with ABAP 7.50 the concept of test seams and test injections is introduced:

 

CLASS cls DEFINITION.
  PUBLIC SECTION.
    METHODS get_input
      RETURNING
        VALUE(input) TYPE string.
ENDCLASS.

CLASS cls IMPLEMENTATION.
  METHOD get_input.
    TEST-SEAM fake_input.
      cl_demo_input=>request( CHANGING field = input ).
    END-TEST-SEAM.
  ENDMETHOD.
ENDCLASS.

 

With TEST-SEAM - END-TEST-SEAM a part of the code is defined as a test seam that can be replaced by test-friendly code during testing. No self-defined attribute is necessary and the test class no longer has to be a friend of the class to be tested (as long as only public methods are tested). The test method now might look as follows:

 

CLASS tst DEFINITION FOR TESTING
          RISK LEVEL HARMLESS
          DURATION SHORT
          FINAL.
  PRIVATE SECTION.
    METHODS test_input FOR TESTING.
ENDCLASS.


CLASS tst IMPLEMENTATION.
  METHOD test_input.
    TEST-INJECTION fake_input.
      input = 'xxx'.
    END-TEST-INJECTION.
    DATA(input) = NEW cls( )->get_input( ).
    cl_abap_unit_assert=>assert_equals(
    EXPORTING
      exp = 'xxx'
      act = input ).
  ENDMETHOD.
ENDCLASS.

 

With TEST-INJECTION - END-TEST-INJECTION a test injection is defined that replaces the test seam of the same name during test execution. A test injection can be empty, and then it simply removes the respective test seam during testing. Test injections can be defined in test includes of global classes and function groups.


For more information, more use cases, and more examples see Test Seams.

Dustbins in Las Vegas - Part Two



 

You are probably familiar with crime shows on TV like “CSI: New York”. If I were to try and describe TECHED 2015 in a similar fashion it would be “CDS: Las Vegas”. The so-called “CDS view” was front of stage the entire time, with UI5 standing behind it, waving over its shoulder, and HANA standing in the background trying to get some attention. This blog is just a list of the notes I made from various sessions, hopefully in a logical order, but possibly not.

 

I know everything changes; everything changes; now I know what I like about You

 

We will start with the minor things about CDS views and then work our way up. CDS views arrived on the scene with ABAP 7.4, which the vast majority of SAP customers clearly are not on yet, based on a show of hands at various speeches - and the SAP speakers are always shocked, as they have had this for years and so are amazed when real people in the real world are not using such technology. I think my company won't upgrade till ECC 6.0 goes out of support in 2025.

 

Anyway, a really big thing was made about the fact that in the names you give to CDS Views you can use “Camel Case”. What is that you may ask? A Camel has been murdered and you are a famous detective trying to work out who committed the crime. Or, possibly, it is naming things in the style of SalesOrderItem as opposed to SALES_ORDER_ITEM. Languages like Java have always used names like that and I think SAP were getting jealous. I am all for making code easier to read but I think you are between a rock and a hard place here.

 

In 7.5 the generation (creation) of CDS views in ABAP in Eclipse is a lot better e.g. the lines at the top which you previously had to type in manually are generated for you.

 

Since CDS views are supposed to be the core of everything now, it is possible to call an AMDP from inside a CDS view.

 

When playing in the Web IDE and trying to view the representation of a CDS view, you can get a mockup of the Fiori Launchpad with a tile showing your new application. What use this is, I have not a clue.

 

Now, since a CDS view (which is a model in the MVC sense) is the basis for generating a Fiori application you can put a whole bunch of “annotations” inside it, which seem a lot like view related data to me but I am assured it is not, such as whether the UI should allow the user to search on the data. If you put an annotation like @Search in the CDS view then, yes they can, but you also have to specify for each field whether it can be searched, and the “fuzziness” factor for the search.
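To make the annotation idea concrete, here is a hedged sketch of what a search-enabled CDS view might look like – the view, table aliases and field names are invented, and the annotation spellings are as best I can recall them from the 7.50 documentation:

```abap
@AbapCatalog.sqlViewName: 'ZSOSEARCH'
@EndUserText.label: 'Searchable sales orders'
@Search.searchable: true
define view Z_Sales_Order_Search
  as select from vbak
{
  key vbak.vbeln as SalesOrder,

      @Search.defaultSearchElement: true
      @Search.fuzzinessThreshold: 0.8
      vbak.kunnr as Customer,

      vbak.erdat as CreatedOn
}
```

The view-level @Search.searchable switches searching on, and each field then opts in individually with its own fuzziness threshold, exactly as described above.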

 

In fact what we were shown looked a lot like how you set up the ALV back in 1999 – annotations for which position a field should appear in the UI, can it be totalled and other UI specific things.

 

One other annotation which reflects the new world is you can say how important a field is – that way if the report appears on a phone the less important fields get hidden due to the lack of room.

 

I am a Model, you Know What I Mean

 

James Wood was sitting next to me and asked me – “Should all this information go in the model?”

 

It did seem odd – I would have thought a model just knows about the data and business logic; the purpose of a view is to decide how to display it to a user, which is why you can have various views of the same model. From a technical perspective all the annotation data goes into a separate file, on the grounds that some consumers outside the SAP system – i.e. ones that will not display the data – do not care about it.

 

Some years back, it took me a while to get to grips with the “model view controller” concept where each one of the three components does a separate job. Just to confuse things SAP has decided that the data model is going to be called a view i.e. a CDS view. This is because of the historical SE11 data dictionary definition of a database view, which a CDS view evolved out of. Yesterday someone told me that they went to a talk where someone from SAP was talking about “System Landscape Transformation” and said “This is not to be confused with System Landscape Transformation”. A redundant warning clearly, who would make such a mistake? It is like the joint product from SAP and Microsoft called “Duet” which is a totally different product from the old one, also called “Duet”.

 

Anyway, so the data model is called a view in SAP terms. OK we can live with that. What is puzzling from some perspectives is all the “annotations” you can put in the CDS view giving instructions to the UI i.e. the sort of thing you would normally expect the view to take responsibility for. In addition you have a list of commands (actions) you can add to the CDS view which will correspond to the buttons that will appear on the screen, sort of like the icons at the top of the screen when you run an ALV report e.g. export data to Excel or whatever. In a minute we will see this is sort of a controller-like function, in the definition named “view” which is in fact a model.

 

Now, SAP want to have a common programming framework for both transactional and analytic applications and the CDS view plays a core role in this. Traditionally a database view was what it said on the tin – a view of the data, so you can have a look at that data and analyse it. Now we can make a CDS view into a transactional view by writing an annotation at the top to say it is a “#BUSINESS_OBJECT” and another one saying it is write enabled.
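As best I can reconstruct it (the annotation names are from the 7.50 annotation set as I remember them, and the view and table names are invented), the top of such a transactional CDS view might read:

```abap
@AbapCatalog.sqlViewName: 'ZTRAVELTX'
@ObjectModel.modelCategory: #BUSINESS_OBJECT
@ObjectModel.transactionalProcessingEnabled: true
@ObjectModel.writeActivePersistence: 'ZTRAVEL_DB'
define view Z_Travel_Transactional
  as select from ztravel_db
{
  key travel_id,
      agency_id,
      overall_status
}
```

Activating something along these lines is what triggers the generation of the BOPF definition.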

 

This generates a BOPF definition, the sort of thing you would normally set up using transaction BOB. Then you can in fact use the BOB transaction to add business logic to the generated BOPF object to perform data validations, fill in derived fields, and – as alluded to earlier – code the logic needed for the actions (commands the user can do during the transaction). The actions themselves are listed in the CDS view and then get automatically generated in the BOPF entity.

 

I then got utterly confused, as it turns out you need another CDS view, this time called a consumption view, which looked the same as the first one to me, but had lines in it saying you could create / read / update / delete instances of the object. The two CDS views and the BOPF entity all fit together somehow.

 

The other day a German gentleman was arguing with me in relation to the BOPF chapter in my book, about the fact that when I was defining a model class I had it fill in the texts for things like sales organisation, material name and so forth. His position was that sort of thing had no place in the model; it was purely a UI function. So I was fascinated as to what SAP’s position on this was. In turns out that in the CDS view you add an annotation to say where to get the text name from. That makes sense to me as you are in effect coding a join between say MARA and MAKT and saying you want both MATNR and MAKTX in your data model.

 

However just to be contrary, at the Fiori Café yesterday I was arguing with a German guy from SAP, arguing the opposite position I usually take, saying that the texts had no place in the model and should live in the view. He managed to convince me otherwise, which is not surprising as that was my real opinion in the first place.

 

Next comes the big change in 7.5 – in the transactions we are used to you either back out or save the new or changed data. You cannot usually save the record to the database in a draft state with 10% of the fields filled out. However in the new SAP world where you are most likely on a mobile device where the connection to the back end drops in and out like a yo-yo, we now have the concept of a draft document. I fill in a few fields of the sales order, it gets saved to the database as I go along (I think) as a draft, and then my connection drops out, an hour later I can get back online on another device, I fill in the rest of the fields and press save, and the draft gets converted into an actual bona fide database record. In the BOPF generated classes you have methods like CREATE_DRAFT, COPY_DRAFT_TO_ACTUAL and LOCK. I am not yet sure how much of the logic is generated for you, I imagine most of it and then you can add anything extra you might need.

 

You may have heard of “Project Objectify” on SCN, created due to the failure of SAP to create a set of business objects representing sales orders, deliveries, purchase orders etc. To be more exact SAP has tried, with SWO1 definitions and BAPIs and the like, but you don’t have a class like CL_DELIVERY with life cycle methods and methods like GOODS_ISSUE.

 

The claim from SAP is that in S/4 HANA they will deliver precisely that. For each business object there will be a CDS view linking the header with the items, and having actions like goods issue. That sounds too good to be true, and has been promised before; we shall see.

 

Like a Dream, a life, a reason everything ABAP must change

 

This morning who should be in the lift with me but Karl Kessler from SAP who writes the “under development” column for SAP Insider magazine. I then went to his two hour talk on the future of ABAP.

 

The first point that came up was that although ABAP 7.5 was released on Tuesday in time for TECHED it wasn’t really – you still cannot download it from the service marketplace. It is aimed at BW initially, and then round about the first quarter of 2016 we will have EHP8 which will deliver ABAP 7.5 to ERP systems. You also need Kernel 7.45 for ABAP 7.5, and that Kernel is not released yet either.

 

Karl asked the audience how many were using 7.4 already, and based on the show of hands it was virtually nobody, so maybe it is a trifle premature to talk about 7.5, but it is as interesting to me as it gets.

 

As we know virtually all HANA tables are column based, but when you look at the definition in SE11 you can see there is a radio button to make the table row based. I wondered about this; apparently it is more efficient to have some customising tables as row based.

 

Next came a list of new features in ABAP 7.5, the SAP examples all look like <X> = (A – B) with no semantic meaning whatsoever, which makes it difficult to get your head around what the new feature does, but after a while I think I get it.

 

For example you can programmatically generate an internal table by looping over another internal table and performing calculations on the fields in the source table to generate fields in the target table.
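A hedged example (the structures here are invented) of building one internal table from another with a FOR expression, calculating a field along the way:

```abap
TYPES: BEGIN OF ty_source,
         matnr TYPE string,
         menge TYPE i,
         preis TYPE i,
       END OF ty_source,
       BEGIN OF ty_target,
         matnr TYPE string,
         wert  TYPE i,
       END OF ty_target,
       tt_source TYPE STANDARD TABLE OF ty_source WITH EMPTY KEY,
       tt_target TYPE STANDARD TABLE OF ty_target WITH EMPTY KEY.

DATA(source_lines) = VALUE tt_source( ( matnr = 'A' menge = 2 preis = 10 )
                                      ( matnr = 'B' menge = 3 preis = 5 ) ).

"One target line per source line, with a calculated field
DATA(target_lines) = VALUE tt_target( FOR ls IN source_lines
                                      ( matnr = ls-matnr
                                        wert  = ls-menge * ls-preis ) ).
```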

 

We now have a dynamic “move corresponding” feature where the source and target structures are not known until run time. Also you can use the new CASE TYPE OF statement to branch based on the runtime type of a variable e.g. if it is a string do this, if an integer do that. I presume that is aimed at dynamic data objects such as field symbols and references.
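A minimal sketch of the CASE TYPE OF construct – the class name ZCL_SOMETHING is invented for illustration:

```abap
DATA lo_unknown TYPE REF TO object.

CASE TYPE OF lo_unknown.
  WHEN TYPE zcl_something INTO DATA(lo_something).
    "lo_something is already cast to ZCL_SOMETHING in this branch
    lo_something->do_stuff( ).
  WHEN TYPE cx_root INTO DATA(lx_error).
    "it turned out to be an exception object
  WHEN OTHERS.
    "none of the above
ENDCASE.
```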

 

When it comes to class based exceptions they can now carry a T100 error message before propagating themselves; that message does nothing (I think) – it is like in a function module where you say MESSAGE such-and-such RAISING exception_name, and the message is only output if the exception is not handled, to prevent a short dump. This also enables a “where used” search for the error message, so you can hunt down the source of the problem. I thought the idea was a class based exception could contain a load of information about where it was raised and in what circumstances, but anyway that is the new feature. I like exceptions which inherit from CX_NO_CHECK so in the unlikely event of one of my exceptions not getting caught somewhere higher up the food chain, an error message would be better than a dump.
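As a sketch (the exception class, message class and variable are invented; as I understand it the class needs the IF_T100_DYN_MSG interface to carry an arbitrary message), raising an exception with a T100 message looks something like:

```abap
CLASS zcx_order_error DEFINITION INHERITING FROM cx_no_check.
  PUBLIC SECTION.
    INTERFACES if_t100_dyn_msg.
ENDCLASS.

"Later, at the point of failure:
RAISE EXCEPTION TYPE zcx_order_error
  MESSAGE e001(zorders) WITH lv_order_number.
```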

 

I knew there was a new class for ALV reports, a successor to CL_SALV_TABLE which is optimised for HANA. The class is called CL_SALV_GUI_TABLE_IDA and once again you can have a report in just one line of code. This time there is a method called CREATE_FOR_CDS_VIEW which does what you might expect. The important thing here is that instead of the whole amount of data requested being in an internal table, in memory on the server, all that is in memory on the server is the data that is on the screen. All the grouping and sorting and totalling is done on the HANA database level, and when you press the page down button more data is retrieved.
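The one-liner looks something like this – the CDS view name is invented, and you should double-check the exact method signature in your release:

```abap
cl_salv_gui_table_ida=>create_for_cds_view(
    iv_cds_view_name = 'Z_SALES_ORDERS'
  )->fullscreen( )->display( ).
```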

 

In database tables we have foreign key relations, but sometimes it is difficult to spot relationships between tables – sometimes it takes the developer a while to work out the relationship between the tables to do with production orders – AFKO / AFPO / RESB etc. – or the link between fixed assets and the purchase orders used to buy them. CDS views are designed to make such links more obvious via associations – they are supposed to read as “close to conceptual thinking” as possible. The CDS code reads something like “sales_order.customer.address” which represents (to my mind) the link between VBAK, KNA1 and ADRC.
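A hedged sketch of such an association in CDS (the field selection is simplified and the ON condition is from memory):

```abap
define view Z_Sales_Order
  as select from vbak
  association [0..1] to kna1 as _Customer
    on $projection.Customer = _Customer.kunnr
{
  key vbak.vbeln      as SalesOrder,
      vbak.kunnr      as Customer,
      _Customer.name1 as CustomerName,
      _Customer  // expose the association for path expressions in consumers
}
```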

 

So a CDS view is supposed to be a layer above the database tables, giving meaningful names to field names like AUFNR and RGEKZ. In the S/4 HANA system the SAP developers had created 6,400 CDS views as at 22/07/2015. The idea is that ABAP programs should only do SELECTs on the views as this will make them more readable, more like a domain specific language. As a side effect it is then possible to mess about with the underlying tables, if such tables are never directly read. The example given was getting rid of an ITEM_COUNT field in a header table, redundant in a HANA system. The actual example I can think of is the field in EKKO which says the highest item number in EKPO.

 

I think that in 7.4 a CDS view could have input parameters, but then you could only call such a view from within ABAP if you were on a HANA database. This is no longer the case in 7.5, all databases are fully supported with CDS views with parameters. This is quite important; you want to be able to pass selection criteria into such a database view.
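A sketch of a parameterised CDS view (the names are invented):

```abap
define view Z_Orders_From_Date
  with parameters p_from_date : abap.dats
  as select from vbak
{
  key vbeln,
      erdat
}
where erdat >= :p_from_date
```

In ABAP you would then read from it with something like `SELECT * FROM z_orders_from_date( p_from_date = @lv_date ) INTO TABLE @DATA(lt_orders).`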

 

To be clear about this - note that CDS views work on any database, not just HANA so if you have an Oracle database like my company you will be fine. SAP are clearly hedging their bets here, they want all their customers to move to HANA but realise this will not happen overnight.

 

New York Port Authority Check

 

The next interesting thing inside a CDS view is the authority check. This is called “data control language” (DCL) and reads something like:-

 

DEFINE ROLE MAD_SCIENTIST {

    GRANT SELECT ON EVIL_LABORATORY_VIEW

        WHERE ( LABORATORY ) = ASPECT PFCG_AUTH( Z_EVIL_LAB, LABORATORY, ACTVT = '03' );

}

 

What this means is if you run a report against a CDS view and you don’t have authority in your user master for a certain evil laboratory, then records for laboratories you are not authorised to view will not show up. SAP security people should not be worried, the existing authority objects and roles are unchanged, it is just that ABAP developers no longer need to explicitly code AUTHORITY-CHECKS into reports reading from CDS views. The bad news is that the failure does not show up in SU53, though I imagine that will be sorted out in time.

 

In 7.4 the CDS view had not yet assumed its role as the be all and end all of everything in the SAP universe, and so there was a clear separation between an ABAP program calling a CDS view (database independent) and an ABAP Managed Database Procedure (HANA only). Now a CDS view can itself call an AMDP, though this will of course invalidate its database independence if the programmer decides to do such a thing.

 

There are three parts to doing this. First you need to code a definition in the CDS view itself, which goes something like this (I don’t have the exact syntax, this is from memory):-

 

DEFINE TABLE FUNCTION VILLAGERS_TO_BE_KILLED

WITH PARAMETERS

  VILLAGE_NAME : abap.string

RETURNS

{

  LIST_OF_VILLAGERS : abap.something

}

IMPLEMENTED BY METHOD VILLAGERS=>VILLAGERS_TO_BE_KILLED

 

Then in your ABAP code you do the following:-

 

CLASS VILLAGERS DEFINITION.

  PUBLIC SECTION.

    INTERFACES IF_AMDP_MARKER_HDB.

    CLASS-METHODS VILLAGERS_TO_BE_KILLED FOR TABLE FUNCTION VILLAGERS_TO_BE_KILLED.

 

This has to be a static method. You probably have to say what CDS view you are talking about as well, as I said I cannot remember the exact syntax.

 

The last part is writing the implementation of the method in SQLScript. The code has to start with a list of all ABAP tables you will be reading; the reason given for this is that it helps if “anything changes”, as the ABAP system has a bit of a blind spot where code written in other languages lives.

 

Then you call the CDS view by doing a SELECT statement in your ABAP program on the view. You will get a syntax warning telling you this is a statement that will only work on a HANA database, this warning can be suppressed with a Pragma.
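The call itself is just a SELECT; the pragma name below is from memory, so treat it as an assumption and check the syntax help in your release:

```abap
SELECT * FROM villagers_to_be_killed( village_name = @lv_village )
  INTO TABLE @DATA(lt_victims)
  ##DB_FEATURE_MODE[AMDP_TABLE_FUNCTION].
```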

 

Apparently in 7.4 debugging an AMDP was done by using a separate tool to the ABAP workbench (which is ABAP in Eclipse in this case). In 7.5 this has been unified – you can only create/view the SQLScript of AMDPs in ABAP in Eclipse in any case – now you can put in a soft breakpoint, and you can then run the transaction and the debugger will stop inside the database whilst the AMDP is being executed. As I may have said previously, debugging inside the database as opposed to the application server is quite spooky.

 

I thought it was quite funny that Karl Kessler said that writing SQLScript code was no fun at all, copying an existing sample and changing it was far easier. I found that as well in my experiments, the syntax is demanding to say the least; I thought ABAP was bad enough – sometimes demanding spaces between the bracket and the variable, sometimes forbidding spaces – but I did not know when I was well off. It also keeps changing, so if you copy something off the internet or a blog, then it most likely will not compile. In the last demo of the day – also two hours and all about CDS views – the SAP developer from Walldorf wrote some SQLScript and kept getting syntax errors and he could not figure out for the life of him what was wrong even with his colleague (and the audience) making suggestions. In the end he copied some – seemingly identical – code from another working application, and everything was fine.

 

The next funny thing – which I guessed at in my book but am pleased to find turns out to be true – is that inside SAP there is a contest/race between the team that develop CDS views and the team that improves the open SQL access in ABAP. So if a new feature gets added in a CDS view then the SQL team will go all out to replicate that feature in Open SQL, and vice versa. The guideline from SAP (up till now) had been that you use Open SQL first and then only use a CDS view if Open SQL could not cut it. Now CDS views have been elevated to first class citizens that may no longer apply.

 

Anyway, the idea is that with each release of ABAP the Open SQL gets closer to the worldwide SQL-92 standard. In release 7.5 for example you have new options like UNION and UNION ALL and you can have a dynamic ON condition in your ABAP SELECT statement.
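A hedged sketch of the new UNION in Open SQL (7.50), merging sales and purchasing document numbers into one result set:

```abap
SELECT vbeln AS document, 'SALES' AS doctype
  FROM vbak
UNION
SELECT ebeln AS document, 'PURCH' AS doctype
  FROM ekko
  INTO TABLE @DATA(lt_documents).
```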

 

On the SCN assorted people have for a long time been bitching about ABAP in Eclipse dropping you back into the SAP GUI for certain elements you wanted to view or change or create. It seems SAP have taken note and it was claimed this will now happen a lot less (they did not say never). You also get “ABAP Doc” in Eclipse which is like Javadoc and is for all the ABAP developers who love documenting their code for external people to read. That happens all the time. For example you can write comments next to the parameters in your function modules and you can generate a lovely HTML document. I would note that in the SAP GUI for a long time you have been able to add a long text into parameters of function modules and methods, but very few people either inside or outside of SAP actually did this.

 

Custom Code Management

 

This was a talk about how to use the solution manager tools to identify what custom code is being used, get rid of objects that have not been used for years, and then gauge the quality of the custom code that remains. I think it is common knowledge that 65%+ of custom code never gets executed, as us developers add new things all day long and nothing ever gets deleted.

 

In version 7.1 of the Solution Manager you have the custom code monitoring cockpit with a nice pretty “city model” where types of custom code are shown as skyscrapers of varying height depending on the amount of custom objects in a category. You can use your mouse to twirl the diagram around if you want.

 

The idea is that first of all you use transaction CCLM to get every single custom object in your development system, and then use “usage and procedure logging” (UPL) and the SQL Monitor (SQLM) to see what actually gets executed in the production system over a protracted period. You then set a filter to say (for example) flag anything over three years old that has not been used for two years as a potential candidate for deletion.

 

The speaker noted that UPL informed one customer that a certain Z method was getting executed a billion times an hour (actually a billion, not me exaggerating for once) and obviously nothing needs to get called that often so clearly there was a problem in the code that needed to be sorted. That is the sort of unexpected benefit you get when doing a really detailed analysis of what goes on in your live system.

 

Once you have flagged the vast bulk of your code as never being used then you can start to use the other tools like the ABAP Test Cockpit to do a whole bunch of static checks on the code quality of the portion that remains.

 

Next year (Q2 2016) the next version of the Solution Manager comes out (it can run on a HANA database, but does not have to); it will have the “Quality Control Cockpit” which is supposed to help with the second half of this process i.e. improving the custom code that actually gets used.

 

There is also something called the “simplification database” which I think is going to be some sort of standalone tool to check your custom code for things that will break and/or could be optimised when running in an S/4 HANA system.

 

Fiori in my Inbox

 

This is the workflow inbox appearing in a Fiori app on your mobile phone or tablet. It is quite clearly still being enhanced; it can do most of the things you would expect from the standard SAP transaction, and there are user exits to fill the gaps. You can call up attachments for example, and even jump into the work item in SAP GUI for HTML mode.

 

There is no offline capability at the moment, but that is on the roadmap.

 

This Netweaver Business Client is Guilty

 

This kept jumping between NWBC 5.0 which has been out for a while and NWBC 6.0 which is not out yet. In my humble opinion NWBC 5.0 was a lot better than 4.0 because it shares data with the actual SAP GUI and so knows what my SAP systems are for example. I agree though with the developer on SCN who said it was unusable because you get a third of the icons and menu options you are used to in the SAP GUI at the top of the screen. You can get to the others via a drop down menu but it is painful. That clearly has not changed in version 6.

 

Anyway version 6 is coming out some point this year, which probably means early next year, or late next year. You will need a 7.5 system so the question is academic anyway since it will probably be about three years till any meaningful amount of companies are using 7.5. If you have 7.5 you can also use the Fiori ESS/MSS in NWBC 5.0 but that would not make any sense.

 

Version 6 opens up with the good old Fiori launch pad with those nice white tiles. As an aside most of the “user menu” type transaction screens in the SAP GUI in my company (ECC 6.0 EHP5) like the plant managers dashboard also have a screen full of tiles you press on to go to various transactions, but ours looks more like a windows phone - we call the underlying transaction ZMETRO.

 

The point of NWBC is to have the various different UI technologies be able to open within it, starting with the SAP GUI (apparently controls like in the “Enjoy” transactions in the GUI work a lot better) right through to UI5.

I noticed on one of the slides Web Dynpro was missing from the list of possible UI technologies – may it rest in peace – though the Floorplan manager was there.

 

We were shown the good old MM03 transaction in NWBC 6.0 and the material number field was acting like the Google search field. You also get side panels (so-called CHIPs) for a vast array of SAP transactions, though if you add Screen Personas to the mix they vanish until you write a whole bunch of JavaScript code to get them back.

 

Adding Fiori apps to the list of possible transactions is of course possible but you need to know the “semantic object” when adding the application, and that is like finding a needle in a haystack. You have to go to the SAP Fiori Apps Library web page and then go on a wild goose chase.

 

UI5

 

Guess what! You need ABAP 7.5 for this as well! Actually you don’t, it works fine on my 7.02 system, but of course all the fancy new things I was seeing need the latest 7.5 version.

 

At least it was explained why we should use the Web IDE rather than ABAP in Eclipse. Firstly to get the Web IDE you have to sign up for a HANA Cloud Platform account, something SAP want you to do very much indeed. The next reason stated was that you get code completion for JavaScript and XML in the Web IDE and you “don’t in Eclipse”. That sounds odd to me, Eclipse seemed to be doing code completion when I was coding in ABAP and naturally in Java. Another reason given was the built in testing framework – a unit testing framework and some acronym which I presume lets you look at the finished application. That was not as easy as you might think when I tried the same in Eclipse.

 

You also get all the templates in the Web IDE. The idea being put forward by SAP here is that using templates is like moving from WRITE statements to the ALV i.e. built in buttons for common tasks like exporting to Excel, and these templates are also supposed to be like the Floorplan manager to give a unified look and feel. Then you can mess about with the generated XML view to change anything you want.

 

The use of such “smart controls” is supposed to reduce the amount of code by 90%.

 

Actually in my experience what a lot of developers like about UI5 is the very fact you have to write all the code yourself. But SAP – like IT companies since the nineteen fifties – are still pushing the “not one line of code needs to be written” approach. I don’t know how they can say that with a straight face in front of 10,000 developers all of whom would be out of a job if no-one needed to write code anymore. You need 7.5 for those templates anyway, so this is years away. The generated application code takes its information from the “annotations” in the CDS view as mentioned earlier. We were shown how a “key user” can press a button and then do a GUIxt type of thing where you can move fields around and rename them and hide and add fields, and then let that be the default view for all the company. The company behind GUIxt sponsored the Demo Jam last night (which was wonderful). I am wondering why they are still in business now Personas is free.

 

On one of the demonstrations I noticed that the controller was in JavaScript and the view was in XML (naturally the model is in ABAP inside the SAP system). That is the way I was taught to do UI5 by Graham Robinson, it makes a lot of sense to me, as that is a clear separation of the concerns of the MVC. Then in another live demonstration one of the presenters said “you will notice there is no view or controller”. Really? That was just after auto-generating the application.

 

We were shown the “UI5 Inspector” which runs in a Chrome browser, some sort of free add-on, for looking at the code behind the controls on the screen, debugging them, and changing them on the fly e.g. changing the icon, making the field wider or what have you.

 

There is some sort of new release cycle for UI5 where the first year is all about innovation (what I call a beta release) and then you get two years of stability. I think they are talking about the back end here as the JavaScript part seems to change every two weeks. So we have SAP_UI_740 where support ends Q1 2017, SAP_UI_750 where support ends 2018, SAP_UI_751 starting in 2016, and so forth.

 

The only OSS Notes that count, are the ones that come in large amounts

 

This was my last session, it might seem like a dry topic, but everyone was very emotional about this subject. You may have had to implement a specific note in the past and SNOTE does the code changes for you, but you have to do the DDIC changes yourself, and add the text elements for selection screens, and assorted other tasks. For some notes this is so complicated you are told not to bother, wait till you implement the next support stack – the new ABAP editor in SWO1 fell into this category.

 

One of the banes of my life is that in SAP the accounting entry for goods receipt posting for purchase orders with multiple account assignment happens at invoice time, whereas for a purchase order with only one cost centre it happens at GR time. This breaks one of the four laws of accounting, and as I was an accountant to start off with I have always been horrified about this. So when we found a dormant business function in EHP5 to solve this problem we switched it on as fast as fast can be. Sadly the code was all over the place, must have been written by someone on the first day of the job, mixing up WERKS and BUKRS in the code as in SELECT FROM X WHERE BUKRS = P_WERKS. Thus we got the error message “Company Code 1234 does not exist” where 1234 was in fact the plant.

 

There was an OSS Note to fix this, so I applied it, and at the bottom it said “this causes problems, fixed in note 123”. So I implemented the next note, which in turn pointed me to another note, which in turn pointed me to another note and so forth. At the end of this chain the final note basically said “this is too difficult, upgrade to EHP 6”. Lovely. That would have been wonderful information at the start of the first note, but now I will have to explain every week for the next ten years (which is when we will upgrade) why this task on my list is still “pending”.

 

Anyway, that is by the by – the problem generally is that SNOTE cannot currently do everything; the developer has to do some things manually. In the new world SNOTE has been beefed up so that it can do everything a support package can do, even though you are implementing a single note. This is done by creating a transport request, which of course happened anyway during the current process so I don’t see why this is so different but a great improvement nonetheless.

 

Also we were reminded about the automated note search – transaction ANST, or ANTS_PANTS as I keep calling it – where you can do a trace on the transaction that causes the problem and you get a gigantic list of possible notes related to every module/table called, and then have to guess which one fixes your problem. That is still painful but a gigantic step up from doing the search on the service marketplace. The problem with that search was that if your sales order transaction was broken then searching for “sales order” would not find the fix but searching for “RV45ZZ87” would, as that is the INCLUDE that was broken. So you had to know the technical name of what was broken, which of course no-one does. ANST is a big step up from that.

 

The End is Nigh

 

This blog ended up a bit longer than I thought, but I needed to write everything down before I forgot it – the contents are therefore in somewhat of a random order, but at least it was not a series of photographs with no text (as in “look at this lovely place I am in”) or a series of names of people who I met (I did meet a lot of great people, but why would you care?). I am currently in the Double Helix Wine Bar typing up my notes and trying to explain to the bartender what SAP is. A conference with ten thousand people and this hotel is so big none of the staff noticed. The Industrial Fastener conference was next door to ours, which probably made more impact, everyone knows what a fastener is.

 

I gave a speech myself – about Push Channels – but that will be a subject for a blog of its own. So that is that…..

 

Cheersy Cheers

 

Paul 

 

 

 

 

 

 

 

 

Skeleton/Architecture of a Technical Design plays crucial role


     Architecture of a technical design plays a crucial role in any project development within the SDLC. Organisations have long since matured their in-house software teams rather than relying on external client support, and most companies now have a well-built IT support team – not restricted to network-related issues, but also handling the daily ERP-related business issues.

 

     In order to satisfy

  • emerging business needs,
  • compatibility and adaptability issues with upgrades, and
  • the need to keep up with the benchmark status in technology,

     organisations have to opt for consulting companies for roll-out and phased development activities. The critical phase arrives with coordination at the time of handover from the project team to the support team. The support team has to adopt the new development objects in continuation of the existing support they carry out in their day-to-day work.

 

To overcome ambiguity and ensure a smoother transition, the following are of crucial importance.

 

  • Following a unique and precise architecture for all development.
  • Integrating the support team in all crucial phases of the development.
  • Separating business logic from database fetches and GUI handling.
  • Ensuring proper provisions for ease of modification and enhancement for future needs.
  • Adopting dynamic code where possible.
  • Providing (commented) code for general extensions that usually arrive in future – say, the addition of a button on the GUI/report display.
  • Avoiding obsolete and outdated platforms.