Channel: ABAP Development

Trailing Blanks in Character String Processing


To be honest, I feel a little bit uncomfortable writing this blog, because I think everything about this subject was said long ago. But a recent discussion shows that even experienced ABAPers can stumble over the handling of trailing blanks in character string processing from time to time. So why not summarize it. You can skip reading if you know it all ...

 

For character string processing, ABAP mainly provides two built-in types, c and string. Text fields of type c are flat and have a fixed length; text strings of type string are deep and have a dynamic length. Besides that, there is another major difference between text fields of type c and text strings of type string:

 

While trailing blanks are always relevant in strings, they are ignored for text fields in many operand positions of statements, especially in source fields of assignments. As a rule: when working with text fields of type c, you should always check the ABAP keyword documentation to see whether trailing blanks are skipped or kept in the respective statement.

 

Example:

 

    DATA: text_space   TYPE c LENGTH 1 VALUE ' ',

          string_space TYPE string VALUE ` `,

          result1      TYPE string,

          result2      TYPE string.

    result1 = 'Word' && text_space   && 'Word'.

    result2 = 'Word' && string_space && 'Word'.

 

The result of the concatenation when using text_space is "WordWord" and the result when using string_space is "Word Word". The trailing blank, which is also the only character of text_space, is ignored in the text field. Be aware that the built-in constant space would show the same behavior as text_space here!


And now watch out! The behavior regarding trailing blanks also concerns literals. We have two kinds of character literals in ABAP,


  • text field literals '...' of type c
  • text string literals `...` of type string


It seems trivial, but one might tend to forget: what is said above about trailing blanks in c and string fields holds for the respective literals too. Especially text field literals '...' can be rather nasty. Trailing blanks are not kept in many positions, which means that a text field literal containing one blank ' ' is often treated like an empty string. The problem is, it's not WYSIWYG: you see a blank in the code but you don't get it.


Examples:


DATA text TYPE string.

text = ' '.


text is an empty string of length 0.


DATA text TYPE string VALUE `blah_blah_blah`.

REPLACE ALL OCCURRENCES OF '_' IN text  WITH ' '.


text contains "blahblahblah".


IF ` ` =  ' '.

  BREAK-POINT.

ENDIF.


A running gag! The break-point is never reached: for the comparison, the text field literal ' ' loses its trailing blank and is treated as an empty string, while ` ` keeps its blank and has length 1.


By the way, the concatenation operator && skips trailing blanks while the literal operator & keeps them:


DATA text TYPE string.

text = 'Word ' && 'Word'.

text = 'Word ' & 'Word'.


The results are "WordWord" and "Word Word" respectively, oh my.


But that's still not enough, there's also another way around!


DATA text TYPE string.

CONCATENATE 'Word' 'Word' INTO text SEPARATED BY ''.

 

The result is "Word Word" with a blank! There is no empty text field literal '' in ABAP; it is always replaced by ' '. You don't notice that in statements where trailing blanks are skipped, but behind SEPARATED BY they are kept!

 

Confused?

 

Rules of thumb:

 

  • Don't use trailing blanks in text field literals '...' of type c in operand positions where they are skipped
  • Always use text string literals `...` of type string, if you want to preserve trailing blanks in literals

 

By the way, string templates |...| have type string and trailing blanks are preserved, of course. The logical expression ` ` = | | is true.
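A minimal sketch putting these rules side by side (each statement follows directly from the behavior described above):

```abap
DATA(template_space) = | |.   " string template: type string, length 1

ASSERT template_space = ` `.          " text string literal keeps its blank
ASSERT strlen( template_space ) = 1.  " the trailing blank is preserved

" the text field literal ' ' loses its blank in the comparison:
IF template_space <> ' '.
  " this branch is reached: ' ' is treated like an empty string here
ENDIF.
```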



On programming style: the use of constructor expressions - VALUE


With ABAP 7.40, a new feature was introduced called "constructor expressions". At first I was sceptical about their usefulness, and it seemed to me that they make the code more complicated and more difficult to read. But since I am always very curious about new possibilities in programming, I decided to try them out, and I must admit that meanwhile I have become a fan of them. In this post, I'd like to focus on VALUE, which I use frequently.

 

Coding examples

 

Look at these two examples from a mapping routine, taken out of a productive program of mine:

 

  1. Without using VALUE:



  2. Using VALUE:

    img1.png
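The screenshots from the original post are not reproduced here, so as an illustration only, a comparable mapping might look like this (the structure and field names are invented; determine_req_date is the method mentioned below):

```abap
" 1. Without VALUE: field-by-field assignment; the CLEAR is easy to forget
CLEAR ls_item.
ls_item-matnr    = is_input-matnr.
ls_item-quantity = is_input-quantity.
ls_item-req_date = determine_req_date( is_input ).

" 2. With VALUE: the target is initialized implicitly
"    and filled in one single statement
ls_item = VALUE #( matnr    = is_input-matnr
                   quantity = is_input-quantity
                   req_date = determine_req_date( is_input ) ).
```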

Where's the difference?

 

One could say "Where's the difference? No fewer lines of code, no saving on variable declarations!", which is all true. But there are two functional differences and one aesthetic one.

 

The first functional difference is that the target structure is initialized before the values are assigned to it. If you ever forgot a CLEAR statement before filling a data structure, you will appreciate this effect.

 

The second comes out in debugging. The whole mapping of this structure is done in the debugger in one single step. Of course, pressing F5 will bring you into the method determine_req_date, which is called by the expression, but the rest is done in one step. Do you remember how often you were hammering F5 to step over a large mapping section, having to stop on every single assignment?

 

The aesthetic difference is about the beauty and readability of the code. Using the constructor expression, you get one block for each structured element you populate, which leads to readable code even if the single module is quite long. Look at this example:
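The example screenshot is missing in this version of the post; a sketch of such block-wise population (all names invented for illustration) could be:

```abap
" one VALUE block per structured component keeps even long mappings readable
ls_order-header  = VALUE #( order_id   = iv_order_id
                            created_on = sy-datum ).
ls_order-partner = VALUE #( sold_to = is_customer-sold_to
                            ship_to = is_customer-ship_to ).
ls_order-item    = VALUE #( matnr    = is_input-matnr
                            quantity = is_input-quantity ).
```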

 

Do you get the idea?

 

Conclusion

 

I'd love to hear your thoughts!

 

All the best

Jörg

MVC (model view controller) framework for ABAP part 2


Reports with class

 

In MVC (model view controller) framework for ABAP part 1 you can find the "starter kit" for a framework you can use to create applications based on dynpros and the CFW controls that the framework is able to control. In this second part, I will introduce only one further class, which can be used for report programming. In my daily work, I use this report very frequently as a template for new ones.

 

The demo application

 

Just like the demo application in the first part, this application contains the framework class, which you can extract into a public class to reuse the code.

 

Application screen

You can upload screen 0001 from the attached file. Unlike in the first part, we do not need a subscreen here. All we need is an empty screen that can carry the docking container in which we will place an ALV with the result list.

 

Selection screen controller

The selection screen is controlled by the new class ZCL_MVCFW_CON_SELSCR. The controller derives from the class ZCL_MVCFW_CON_DYNPRO and extends it with only two methods.

 

How it works

 

In order to get everything to work, a few steps are necessary. Let's go through it step by step.

 

Selection screen interface

To simplify passing selection screen parameters between the classes, I always use a local interface that contains a structured type with all input elements from the selection screen.
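The screenshot of the interface definition is not available here; a sketch of such a local interface (type and component names invented, components named like the selection screen elements) might look like:

```abap
INTERFACE lif_report.
  TYPES:
    BEGIN OF ty_selscreen,
      p_bukrs TYPE bukrs,          " PARAMETERS p_bukrs on the selection screen
      s_matnr TYPE RANGE OF matnr, " SELECT-OPTIONS s_matnr, same name as on screen
    END OF ty_selscreen.
ENDINTERFACE.
```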


 

You can see that I use the same names for the components as for the selection screen elements:

 


Global data

Some global variables are indispensable:

 


INITIALIZATION

Here the main controller is instantiated. After that, the get_con_dynpro method is called to create a controller for the actual dynpro, which is the selection screen.

 

PAI

The PAI event of a selection screen is AT SELECTION-SCREEN, so we call the PAI of the screen controller, which is a framework method that does the following:

  • Ask the main controller to fetch the screen data.
    • To do this, the method get_screen_data of the main controller must be redefined:

      As you see, the data is stored in an attribute of the main controller. This can be useful if you want to react to user input during PBO, e.g. switching field attributes on or off.
  • Invoke the PAI of the superclass, which compares all component values from the screen with those stored in the memory of the framework class ZCL_MVCFW_DYNPRO and calls PAI_FIELD_CHANGE when a value changes.

 

START-OF-SELECTION

The method run of the selection screen controller is called.


RUN

Control is passed to the main controller (run_program), where the main logic of the report begins its work.


Batch/Online processing

As you can see in run_program, there are two branches, one for background processing and one for online processing. The online branch is similar to the demo in part 1. For batch processing, a list has to be created instead of calling a screen. Therefore, a list controller has been added to the program, which we derive from the generic CFW controller.


 

Of course, we do not have any CFW control here. But the framework class can be used even without a container. The output is coded in the method refresh. In this case, I use CL_SALV_TABLE to produce an output list.

 

Some notes

 

In some of my use cases, I use more than one model class because of the complexity of the application. In this case, it comes in handy to declare commonly used data types in the interface lif_report.

 

All the best

Jörg

From Open SQL Joins to CDS Associations


In this short blog I will use the most primitive example to show you the way from joins in ABAP Open SQL to associations in ABAP CDS.

 

The aim of the blog is not to show you something you should do but to gain a basic understanding of associations in CDS views.

 

Step 1, Join in Open SQL

 

I will start with the following very simple INNER JOIN between database tables SPFLI and SCARR from the good ol' flight model in Open SQL in the ABAP Editor (either WB or ADT in Eclipse):

 

SELECT FROM spfli

               INNER JOIN scarr ON

                  spfli~carrid = scarr~carrid

       FIELDS scarr~carrname  AS carrier,

              spfli~connid    AS flight,

              spfli~cityfrom  AS departure,

              spfli~cityto    AS arrival

       ORDER BY carrier, flight

       INTO TABLE @DATA(result_open_sql_join).

 

Nothing special about that and the result shown with CL_DEMO_OUTPUT looks as follows:

 

inner_join.jpg

 

Step 2, Join in ABAP CDS

 

Now let's transform the above ABAP code into the DDL of an ABAP CDS view in the DDL source code editor of ADT in Eclipse:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_JN1'

@AccessControl.authorizationCheck: #NOT_REQUIRED

define view demo_cds_join1

  as select from spfli

    inner join   scarr on

      spfli.carrid = scarr.carrid

  {

    scarr.carrname  as carrier,

    spfli.connid    as flight,

    spfli.cityfrom  as departure,

    spfli.cityto    as arrival

  }

 

This can almost be done by copy and paste. Hey, it's (almost standard) SQL for both.

 

After activating this view, you can use the data preview of ADT (F8) or access it with Open SQL:

 

SELECT FROM demo_cds_join1

       FIELDS *

       ORDER BY carrier, flight

       INTO TABLE @DATA(result_cds_join).

 

It is not too surprising that result_cds_join and result_open_sql_join contain exactly the same data.

 

Step 3, Association in ABAP CDS

 

Finally, I will use the advanced modelling capability of ABAP CDS and transform the explicit join into an association in another view:

 

@AbapCatalog.sqlViewName: 'DEMO_CDS_JN2'

@AccessControl.authorizationCheck: #NOT_REQUIRED

define view demo_cds_join2

  as select from spfli

  association to scarr as _scarr on

    spfli.carrid = _scarr.carrid

  {

    _scarr[inner].carrname as carrier,

    spfli.connid           as flight,

    spfli.cityfrom         as departure,

    spfli.cityto           as arrival

  }

 

The association _scarr is declared once behind the keyword association and can be used at several places inside the view in path expressions. You can also publish it for usage in other views or in Open SQL, but I have not done that here.

 

For our simple example, I use the path expression _scarr[inner].carrname as the first element of the select list. When using a path expression, the associations listed there are internally transformed to joins. In the select list those joins are left outer joins by default. Therefore, I override the default with [inner] in order to enforce an inner join. You can check the result by displaying the SQL DDL (shown for HANA here) that is generated from the ABAP CDS DDL in ADT (Context menu Show SQL CREATE statement):

 

CREATE VIEW "DEMO_CDS_JN2" AS SELECT

  "SPFLI"."MANDT" AS "MANDT",

  "=A0"."CARRNAME" AS "CARRIER",

  "SPFLI"."CONNID" AS "FLIGHT",

  "SPFLI"."CITYFROM" AS "DEPARTURE",

  "SPFLI"."CITYTO" AS "ARRIVAL"

FROM "SPFLI" "SPFLI" INNER JOIN "SCARR" "=A0" ON (

  "SPFLI"."MANDT" = "=A0"."MANDT" AND

  "SPFLI"."CARRID" = "=A0"."CARRID"

)

 

You see, we end up with something well known.

 

And of course, the data preview of ADT (F8) or the following Open SQL retrieve again the same data as before:

 

SELECT FROM demo_cds_join2

       FIELDS *

       ORDER BY carrier, flight

       INTO TABLE @DATA(result_cds_assoc).


In other words, no exceptions from


ASSERT result_cds_join  = result_open_sql_join.

ASSERT result_cds_assoc = result_cds_join.

 

Conclusion

 

The aim of this simple example is to show you that CDS associations are nothing but specifications of joins in a central position. These joins are instantiated in native SQL when using associations in path expressions.

 

The benefits of using associations are not shown in the simple example here. The advanced modelling capabilities stem from the reuse of the associations (meaning their joins) in different positions. Of course,  path expressions can contain more than one association, relieving you from the task of coding complex join expressions.  Such path expressions can be used in the same or other CDS views and even in Open SQL (if published by the defining view).
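As a hedged sketch of such reuse (not part of the example above): a view can expose an association by adding it to its element list, after which consumers can navigate it with path expressions. The view name demo_cds_join3 is invented here; demo_cds_join2 above does not publish _scarr.

```abap
@AbapCatalog.sqlViewName: 'DEMO_CDS_JN3'
@AccessControl.authorizationCheck: #NOT_REQUIRED
define view demo_cds_join3
  as select from spfli
  association to scarr as _scarr on
    spfli.carrid = _scarr.carrid
  {
    key spfli.carrid,
    key spfli.connid,
        // expose the association for reuse in other views and in Open SQL
        _scarr
  }
```

A consumer could then write a path expression such as \_scarr-carrname directly in an Open SQL SELECT on demo_cds_join3, without repeating the join condition.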

Using exception classes with application log - an approach


I would like to share my approach of using exception classes and transporting T100 messages inside them. For one single message, there is nothing to talk about: you simply include the T100 interface and pass the message to the exception class. But what if you would like to pass a list of messages to the caller?

 

Using an application log (BAL)

 

In the past, I often proceeded like this: I created an object that manages the message list (via the BAL_* function modules). The SAP class CL_RECA_MESSAGE_LIST is great at doing this job. Then I passed this object to the classes I invoke, so that these classes can collect messages there. In case of an error, I just had to use the message list to show a popup with all the messages.

 

The problem here is that each caller of the class must provide a log object, and all classes involved must have this global object for logging. That's not a really big thing, but I wasn't satisfied with the solution. So I came to a new approach: including the log in the exception class and passing it along that way.

 

The logged exception class

 

To try it yourself, you first need an exception class that contains a log object (based on CL_RECA_MESSAGE_LIST). Let's go:

 

 

Add the attribute for the log. The referenced type is IF_RECA_MESSAGE_LIST

 

 

And create a getter method for the log:

 

 

 

Activate the class.
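The class-builder screenshots are not reproduced here; as a sketch only, the global exception class used by the demo report below (with the mo_log attribute and the get_log getter) might look roughly like this in source form:

```abap
class zcx_tool_logged definition public
  inheriting from cx_static_check
  create public.

  public section.
    " the message log travels inside the exception object
    data mo_log type ref to if_reca_message_list read-only.

    methods constructor
      importing previous like previous optional
                mo_log   type ref to if_reca_message_list optional.

    methods get_log
      returning value(ro_log) type ref to if_reca_message_list.
endclass.

class zcx_tool_logged implementation.
  method constructor.
    super->constructor( previous = previous ).
    me->mo_log = mo_log.
  endmethod.

  method get_log.
    ro_log = mo_log.
  endmethod.
endclass.
```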

 

Demo Application

 

The demo application has a controller and a model class. On running, the controller instantiates the model and calls the read method. There, an error is provoked and the message is captured in the log. Then the log is passed to the exception object.

 

The main controller catches the exception and uses the log object to display the error in a popup.

 

Here is the code:

 

*&---------------------------------------------------------------------*
*& Report  ZP_TOOL_TEST_LOGGED_EXC
*&
*&---------------------------------------------------------------------*
*&
*&
*&---------------------------------------------------------------------*
report zp_tool_test_logged_exc.

class lcl_controller definition.

   public section.
     methods run.

   private section.
     methods error
       importing io_log type ref to if_reca_message_list.

endclass.

class lcl_model definition.

   public section.
     methods read raising zcx_tool_logged.

endclass.

start-of-selection.

   new lcl_controller( )->run( ).

class lcl_controller implementation.

   method run.
     " create model.
     data(lo_model) = new lcl_model( ).

     try.
         lo_model->read( ).
       catch zcx_tool_logged into data(lo_err).
         " call error handling (which shows the log)
         error( lo_err->get_log( ) ).
         " show the log
     endtry.
   endmethod.

   method error.

     call function 'RECA_GUI_MSGLIST_POPUP'
       exporting
         io_msglist                   = io_log
       .


     io_log->clear( ).
   endmethod.
endclass.

class lcl_model implementation.

   method read.
     data lo_messages type ref to if_reca_message_list.

     lo_messages = cf_reca_message_list=>create( ).

     " provoke an error

     message e000(0k) with 'Some application error' into sy-msgli.
     lo_messages->add_symsg( ).
     raise exception type zcx_tool_logged exporting mo_log = lo_messages.

   endmethod.

endclass.


Some notes


The log is kept local in the model here. If your model is more complex, you might collect messages in various methods. In this case, it could make sense to create a class-global log. Make sure to clear the log after the messages have been displayed, so no previous messages show up later (see io_log->clear( ) in method error).


As you see in read, the message is created with a standard MESSAGE statement, which captures the text in a dummy variable. This makes it simple to code the message, and it also updates the where-used list of the message ID in your system. I find this very useful. After the MESSAGE statement, a simple call of lo_messages->add_symsg( ) puts the values of SY-MSGNO, SY-MSGID and so on into the log. Afterwards, the exception is raised, passing the log into the exception object.


If you do not yet know CL_RECA_MESSAGE_LIST, it's worth a look. It's full of useful methods!



The Art of the Possible


Art of the Possible – Sydney, Australia,  August 10th

 

One day event: Practical Examples of HANA Use in Australia

 


image001.png

Introduction

 

I have not written a blog on SCN for over six months, as I have been busy writing the second edition of my good old ABAP Programming book for SAP Press. That’s finished now, in time for TECHED 2015, so I can get back into blog world, and what better way to start than by reviewing an SAP event I went to last Wednesday here in Sydney, Australia.

 

The advertising puff had all the words you might expect like “HANA” and “Big Data” and “IOT” and so on, but the idea was that instead of SAP types talking marketing to you and making you reach for the sick bag, actual real life “customers” (i.e. companies that use SAP) would talk about what they had been doing with these technologies/concepts.

 

Where I work, senior management have started using words like “innovation” and “machine learning”, so I am starting to wonder if all these buzzwords are actually going to transform into some sort of reality in the very near future, and indeed, whether some organisations are already there? You do not know how good it feels to spell “organisation” with an “s” after having had to use the US spelling for six months.

 

So what follows is a recap of what I can remember from the presentations and demonstrations given at the event. I will try to avoid my pet hate in blogs about SAP events, which is giving food reviews, and talking about the dustbins, and instead I will try and concentrate on the content in the presentations.

Breakfast

As we arrived at Sydney Town Hall, to fill our stomachs before the event got started in earnest we were all given “Buddha Jumps over the Wall” soup. Apparently, while this dish once contained shark's fin, an SAP employee tells me that particular (and controversial) ingredient has been replaced with Javan Rhinoceros in order to better go along with the abalone, Japanese flower mushroom, sea cucumber, dried scallops, chicken, huan ham, pork and ginseng.

1

How GHD are Solving Digital Problems with HANA

Many people think that in Australia we all live in caves and use shells for money, but in fact this is actually quite a high technology country, with a willingness to break all the rules, hence the enormous amount of technological innovation to be found in all areas of society, even the government.

GHD is an engineering company, and so deals with the real world, and has managed to utilise the power of HANA to solve real world problems for its clients, like turning a truly massive amount of data into an easy to visualise graph showing how much a big building is likely to wobble about in the wind.

 

In addition, in the 1950s the USA built a scale model of the entire Mississippi Delta and for decades used it to simulate water flows and make predictions about possible flood situations. It has been retired now, and has grass growing all over it, as such predictions can now be made using advanced computer systems such as HANA.

 

2

Australian Digital Adoption Survey

 

SAP has a massive market share in Australia. The entire government, all the mining companies, the oil companies naturally, the main telephone company, the big retailers, the utilities, some of the big banks, the Australian Wine Society (very important) basically everybody. So, if someone is dealing with a company online in Australia it is very likely there is an SAP back end system involved somehow.

 

The idea is that everyone wants to interact digitally these days, so how are all these Australian companies adapting to this model? Not very well, according to the 2014 survey. The results are better in 2015, but there are still a lot of unhappy consumers who think the online applications they use to deal with these organisations suck. This is, of course, an opportunity waiting to be claimed, with massive rewards for the ones who get there first.

 

Hang on, before moving on to the next talk I need to get rid of my rubbish. I wonder where the nearest dustbin is. Oh, look there’s one:-


image002.png

3

Lion Brewery

 

This was a corker of a talk. A gentleman called Tim Reid is a solution architect at a company called Lion in Australia. I know them because they make the beer that I drink, but they also make orange juice and milk.

 

With a mixture of videos and demonstrations he showed how they solve business problems using high technology. No doubt you have heard the buzzword “design thinking” and this was a really good example of how this works in real life. The business comes to IT asking for a circle, and often they get a circle, but what they really wanted was a square.

 

To me the essence of design thinking is to walk a thousand miles in the shoes of your business person until you understand the problem just as well as they do. In the video the business guy admitted that, with 20/20 hindsight, what he asked for in the first place was totally not what was needed to fix the problem. Even worse before going to his internal IT department he had asked third party people for quotes to build him what he thought he wanted. Luckily the quotes were too high.

 

I also liked in the video that the process flow diagram for the milk related application being designed started with a picture of a cow going “moo”. I also notice that in the Lion building the staff kitchen had a bar, which is what I would hope for in a brewery.

 

Next, from the same company comes the “Tap King”. They even had a working prototype of this set up at the back of the room which you could see working between talks.

 

To explain what a “Tap King” is I will describe the advert, but before I begin the general idea is to get the same sort of beer poured out of a tap in the pub in your own home.

 

In the advert a man goes to his fridge to get a beer. When he opens the fridge not only is there the usual shelves with cheese and what have you, there is also Lionel Richie with a full size grand piano with a “Tap King” attached to the top of the piano.

 

Lionel plays the piano and sings his song “hello, is it me you're looking for?” and then pours the beer and hands it to the householder.

 


image003.jpgimage004.jpg

I have checked my fridge many times, but have yet to encounter a pop star within. Anyway, the Tap King had two problems. As it is a sealed unit you cannot see how much is inside it, and you cannot be sure the temperature is low enough to make the beer drinkable.

 

As a word of warning, you have to beware of using technology to solve what I call “zero gravity pen” problems. This is a reference to NASA spending billions of dollars and years of research to create a pen for use by astronauts in zero gravity, whereas the Russians spent sixpence and used a pencil to solve the same problem.

 

Anyway, the Tap King solution involved sensors inside the barrel, and you asked “Alexa” the AWS digital assistant that looks like a black cylinder either what the beer temperature was or how much beer was left. If there was not much beer left she would ask if you wanted to order another barrel, and place such an order for you.

 

All wonderful stuff. The only problem was, nobody bought the product and so poor old Tap King has been consigned to the history books. Still, it shows the sort of problems technology can address.

 

Morning Tea

 

To celebrate the German origins of SAP, for morning tea delegates were served a foot-long bratwurst infused with hundred-year-old Louis XIII cognac and topped with fresh Ivory-Billed Woodpecker, picante sauce and Kobe beef seared in olive and truffle oil.

 

4

Unstructured Data Mining

 

SAP Mentor Clint Vosloo gave the “technical” talk of the day, which of course was just the sort of thing I was looking for. He had enormous technical problems to overcome to get his live demonstration working – as did all the presenters – but he got there in the end.

 

During the talk he sent out a “tweet” about the event and then before our eyes used the HANA platform to build an application to call the Twitter API and store the resulting tabular information inside a HANA database whereby you could then run queries upon it.

 

This made it crystal clear to me how you can turn totally unstructured information into the sort of database tables I am used to querying. Even smiley faces came back as “strong positive emotion”.

 

A lot of companies have fields in their customer master to try and categorise the customers as loyal or whatever, and this has been done in the past by asking them questions. If, instead, you can just run queries on social media and find the customer has written a post about you saying they hate your company and are going to come round next week with an axe and brutally murder every single employee, together with a picture of the axe they just bought and a sad face smiley, then you can automatically read that data and populate the “customer satisfaction percentage” field in your ERP system with a low value.

 

Lunch

 

Then it was lunch time and out came the waiters, wearing Hasso Plattner masks. There was a choice of three dishes served on silver platters, or you could have all three at once if you wanted, put through a blender and served in an inverted traffic cone.

 

  • Philadelphia classic cheesesteak made with real Amur Leopard meat, cut down with foie gras and topped with truffled homemade fontina cheese on a sesame roll, with a glass of Dom Perignon 2000.

  • White truffle and gold pizza topped with organic water buffalo mozzarella, with meat from a “Saola” (Asian unicorn) and the famous “little dodo bird” and 24K gold leaf.

  • Leatherback Sea Turtle burger, which comes topped with seared foie gras and truffles on a brioche truffle bun, with a bottle of 1995 Chateau Petrus wine and two crystal stemware glasses.

After lunch I looked around for a dustbin in which to put my empty plates, and did not have far to look:-


image005.png

5

Cloud Based Start Up on HANA

 

No SAP event would be complete without a start-up saying what a brilliant product they had built using the HANA platform, and how they could not have done it on any other platform as it would not have been fast enough/flexible enough etc..

 

What was unusual here was not that the company did not use any sort of SAP ERP system, just the HANA platform on its own, but rather that the lady who ran the start-up and gave the speech was 70. It is so easy to have pre-conceived ideas and think of start-ups as being created by teenagers who aim to sell the resulting successful company to a larger organisation and become a billionaire by age 25.

 

This was a payroll/HR/rostering system, and it looked really good to me, so maybe she will become a billionaire by age 75. I also liked the fact the example data was all to do with pubs, in particular the Marstons Brewing company in the UK, with which I am very familiar indeed.

 

The non-technology point that she was making was that every new module she added was in effect designed by a customer, so she was not building the solution and looking for companies with that problem, but looking for companies with problems, getting them to design the solution themselves, she builds it, they are happy, and then she can sell it to someone else. I have seen this before.

 

Afternoon Tea

 

Time goes by so fast, it is afternoon tea time already, and we are served a cupcake created from chocolate made from Venezuela's rare Porcelana Criollo bean, topped with Tahitian Gold Vanilla Caviar and edible gold flakes. It also includes Louis XIII de Rémy Martin Cognac and comes in a hand-blown sugar Fleur-de-Lis, wrapped in the hide of a freshly slaughtered Northern Sportive Lemur.

 

6

ESRI Location Data

 

My company uses ESRI for geo-coding customers and working out travel times, so I am always interested in anything they have to say about their road map (if you forgive the pun).

 

ESRI clearly have a strong partnership with SAP and sell many joint products along the lines of plant maintenance applications where you can see the actual location of the machine that has broken down.

 

They also showed a real-time map of the world showing the areas you were likely to be attacked by pirates, and how this changes with the weather.

 

I have seen the ESRI product (ArcGIS) change from living on the user’s PC to becoming a server accessed by multiple clients. Clearly the next stage is a cloudy sort of thing accessed by web services, presumably using a real time HANA database or some such, that constantly updates as new roads get built, or get closed, or if there is a flood, or if a madman takes a bus hostage and thus blocks the main bridge across the river in Melbourne, just at the time I am in a taxi trying to get to the airport.

 

That was just like the time I was in Adelaide trying to get to the airport, and it was so hot the tram tracks had melted, so no public transport, so you could not get a taxi for love nor money. That is also the sort of information you need updated in real time to your geographic information system when planning travel. Who would have thought the melting point of the metal used in tram tracks would be relevant to the calculation of how long it takes to get to the airport? This is what all this “machine learning” is about, trying to go through vast swathes of seemingly unrelated data looking for connections. The human brain does this all the time as in “that cloud looks like a banana”.

 

At that point my pen broke, so I had to look around for a dustbin so I could throw it away. Luckily I found one almost at once.


image006.png

7

SAP Innovation Department

 

As might be imagined, SAP has specialised departments evaluating new technology in several countries. There are such departments in Walldorf in Germany, Palo Alto in the USA, Bangalore in India and Brisbane in Australia.

 

A gentleman from SAP tried to show some live demonstrations of such technology, but in an ironic twist the on-venue technology failed totally so he had to rely on good old PowerPoint.

 

Anyway he talked about various new technologies under investigation – at one point he said the word “blockchain” and then wished he had not, as then the questions started coming thick and fast.

 

Anyway the main application being discussed was for a large bank, and it was looking at all the transactions a customer did in graphical format and trying to make a prediction of at what point the customer was likely to close their account and switch to another bank.

 

The numbers were that this bank was losing 100,000 customers a year, and it cost on average $250 to get a new customer to replace one that left, so they were spending $250 x 100,000 = $25 million each year just to maintain their customer base. So if they could spot the customers before they left and do something about it, there were potentially big savings.

 

This comes back to the point I mentioned earlier about tying together seemingly unrelated data. It turned out a give-away was when the customer started using another bank's ATM all the time. Maybe they had moved house or jobs or something.

 

In the UK (at least when I lived there 20 years ago) that would not matter, as there was no charge for using another bank's ATM, but there certainly is in Australia: you have to pay through the nose. Last week it went up from $2 to $3 for such a transaction, not a minor increase. It's a wonderful business model for the banks; working in IT I know it costs them nothing at all to process such a transaction, so 100% of that money is profit.

 

At the moment the marketing people at the big four banks in Australia are working on a way to justify increasing their mortgage rates at the same time the Reserve Bank has lowered the base rate. Those two rates are not as related as people think, but in the past an increase in the base rate has always been used as an excuse to hike up the mortgage rate by the same amount or more, and now some sort of Orwellian double-speak is needed to explain that the mortgage rate needs to go up when the base rate goes up, go up when the base rate goes down, and indeed go up when it stays the same. Try building a predictive computer algorithm around that logic.

 

So, I bought shares in all the big four banks in Australia. They all use SAP as well, so who knows, I might even end up working for one of them. You go into a branch of the Commonwealth Bank of Australia and chained to the benches are tablets with UI5 applications running on them, tied to an SAP back-end.

 

Dessert

To end the event it is dessert time, choice of two, or if you can’t decide you can have them both mixed together in a bucket.

 

·         Three scoops of Tahitian vanilla ice cream infused with Madagascar vanilla beans, topped in 23K edible gold leaf, sprinkled with a couple of expensive and rare chocolates plus candied fruits, gold dragets, chocolate truffles and bowl of caviar. It comes served in the skull of a recently killed Western Lowland Gorilla with an 18K gold spoon.

 

·         Fortress Stilt Fisherman Indulgence is made with gold leaf Italian cassata, flavored with fruit-infused Irish cream filled with chunks of real tiger flesh and Chinese Giant Salamander chunks. There's a fruit compote, a Dom Perignon champagne sabayon at the base and a handmade chocolate carving in the shape of an HPE Converged System 500 for SAP HANA server. It's adorned with an 80 carat Aquamarine gemstone whose diameter "spans the head of a soup spoon."

 

Conclusion

When you think of hotbeds of new technology then Silicon Valley in the USA springs to mind, or maybe Tel Aviv with its start-up culture.

 

It is a little-known fact that Australia is bulging with innovation, and an amazing number of new inventions come out of the country every year, which is even more surprising given the small population. This point was mentioned by the American Ambassador to Australia when he was appointed, though that could just have been him sucking up.

 

SAP is used in the vast majority of large organisations in Australia, and many are leading the world with the sort of innovative solutions they are building. My company falls bang square in the middle of that category, though ironically I cannot say what it is I have been building these last four years; I have to disguise it, which is why I use Monsters all the time as examples in my book.

 

Many companies in Australia were amongst the first to embrace HANA – utility company AGL with its application to monitor energy use, and NSW Fire and Rescue being just two examples. SAP likes to tell us that the HANA platform is designed to help foster innovation and to “digitise” your business and lots of other buzz words.

 

However, it’s difficult to trust marketing people, as they speak utter nonsense 100% of the time as in “the companies with the best e-applications run SAP” or other meaningless phrases like “run simple” one minute, and then drop the word “simple” from all the product names the next.

 

Events like this one aim to get around this by actually having “real” people speak about how they have used SAP technology – specifically HANA in this case – to solve real problems, and to say why this was a better choice than the alternatives. The good thing about actual customer presentations is that they can say negative things if they so desire. I would quote the CIO of Nestle at SAPPHIRE in May 2016 who had just moved onto S/4 HANA and said “Using SAP is like peeling an Onion. It has many layers, and it makes you cry”.

 

In conclusion this event gets a “thumbs up”. I am also glad I managed to get through describing the event in detail without once mentioning food or dustbins.

 

Cheersy Cheers

 

Paul

The case of multiple ALV horizontal scrolling


Not every day... but from time to time there is a question about synchronized scrolling in ALV.

 

This article will demonstrate my attempt at horizontal scrolling.

 

Program Y_R_EITAN_TEST_08_02 uses CL_GUI_ALV_GRID.

 

CL_GUI_ALV_GRID has a method (SET_SCROLL_INFO_VIA_ID) that allows us to scroll to a given column name.

 

The same method also allows us to scroll to a given line number (this is not covered in this program).

 

For demonstration purposes the program creates an empty internal table of type MARA.

 

The scrolling is done in METHOD user_command_scroll (I put some comments there).
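For readers who only want the core call, a minimal sketch might look like this (go_grid is an assumed, already initialized CL_GUI_ALV_GRID instance; the structures LVC_S_COL and LVC_S_ROID belong to the method's standard interface - check the full program for the real context):

```abap
" Hedged sketch: scroll the grid horizontally so that column MATNR
" becomes the leftmost visible column, keeping the view on row 1.
DATA: ls_col_info TYPE lvc_s_col,
      ls_row_no   TYPE lvc_s_roid.

ls_col_info-fieldname = 'MATNR'.
ls_row_no-row_id      = 1.

go_grid->set_scroll_info_via_id(
  is_col_info = ls_col_info
  is_row_no   = ls_row_no ).
```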

 

Output:

 

 

 

 

Happy scrolling....

Performance Trap in String Concatenations


It is a common programming pattern to fill a character string in iterative steps, either using the concatenation operator && or character string templates |...|:

 

DO|WHILE|LOOP|SELECT ...

 

  str = str && ...

 

  str = |{ str }...|.

 

 

ENDDO|END|ENDWHILE|ENDSELECT ...

 

Bad News

 

Both assignments have a string expression as an RHS (right-hand side). What does that mean? As a rule, when assigning an expression, a temporary or intermediate result is created that must be copied to the LHS (left-hand side). Since the length of the intermediate result increases with each loop pass and there is a copy operation for each loop pass, the runtime dependency on the number of loop passes is quadratic. Not good.

 

Good News

 

Since this is a well known fact, there is an internal optimization for all concatenations that look as follows:

 

str = str && dobj1 && dobj2 && ... .

 

str = |{ str }...{ dobj1 [format_options] }...{ dobj2 [format_options] }...|.

 

As long

  • as the target string str occurs on the RHS only once and as the leftmost operand,
  • and as long as there are no formatting options for str,
  • and as long as there are no other expressions or function calls involved on the RHS,

no intermediate result is created, but the characters are concatenated directly to the target string str. This prevents the quadratic dependency of the runtime on the number of iterations. The same is true for the CONCATENATE statement.


In other words, there is no problem in writing something like this:


DATA(html) = `<html><body><table border=1>`.

DO ... TIMES.

  html = |{ html }<tr><td>{ sy-index }</td></tr>|.

ENDDO.

html = html && `</table></body></html>`.


There is a simple data object sy-index concatenated to a target string html and the optimization takes place.


Bad News


The optimization only takes place for the simple cases above!


And this is the trap.


You lose the optimization if you loop over concatenations that look as follows:


str = str && ... && meth( ... ) && ... .

 

str = str && ... && str && ... .


str = |{ str }...{ expr( ... ) }...|.


str = |{ str format_options }...|.


str = |{ str }...{ str }...|.


As long as you do such a concatenation outside of loops, no problem. But inside of loops and for a large number of iterations you can quickly experience really large runtimes.


There is in fact a problem in writing something that looks as harmless as this:


DATA(html) = `<html><body><table border=1>`.

DO ... TIMES.

  html = |{ html }<tr><td>{

            CONV string( ipow( base = sy-index exp = 2 ) )

            }</td></tr>|.

ENDDO.

html = html && `</table></body></html>`.

 

Concatenating the ipow expression to html breaks the optimization.


Good News


Now that you know the problem, you can easily circumvent it. Normally we use expressions to get rid of helper variables. But in connection with loops, helper variables can be a good thing. You already know that you use them for calculating results that are constant within a loop. Now you learn that you should also use them for concatenating expressions or functions to strings:

 

DATA(html) = `<html><body><table border=1>`.

DATA square type string.

DO ... TIMES.

   square = ipow( base = sy-index exp = 2 ).

   html = |{ html }<tr><td>{ square }</td></tr>|.

ENDDO.

html = html && `</table></body></html>`.

 

By assigning the ipow expression to the helper variable square and concatenating that to html, the optimization takes place again. Try it yourself and see what happens for large numbers of iterations with and without the optimization!
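A small measurement sketch for trying it yourself could look like the following (the absolute numbers are system-dependent; what matters is the relative difference, which grows dramatically with the iteration count):

```abap
" Hedged sketch: comparing the optimized and the unoptimized pattern
" with GET RUN TIME (microseconds). 10,000 iterations keep the
" unoptimized, quadratic variant within reasonable runtime.
DATA: t0  TYPE i,
      t1  TYPE i,
      str TYPE string.

GET RUN TIME FIELD t0.
DO 10000 TIMES.
  str = |{ str }{ sy-index }|.                 " optimized pattern
ENDDO.
GET RUN TIME FIELD t1.
DATA(optimized) = t1 - t0.

CLEAR str.
GET RUN TIME FIELD t0.
DO 10000 TIMES.
  str = |{ str }{ CONV string( sy-index ) }|.  " expression breaks it
ENDDO.
GET RUN TIME FIELD t1.

cl_demo_output=>display(
  |Optimized: { optimized }, unoptimized: { t1 - t0 } microseconds| ).
```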

 

Last but not least, what is said above for loops realized by control statements is also true for FOR loops in expressions:

 

DATA(result) =

  REDUCE string( INIT s = ``

                 FOR i = 1 UNTIL i > ...

                 NEXT s = s && CONV string( i ) ).

 

 

vs.

 

DATA(result) =

  REDUCE string( INIT s = ``

                 FOR i = 1 UNTIL i > ...

                 LET num = CONV string( i ) IN

                 NEXT s = s && num ).

 

Only by using the helper variable num, declared in a LET expression, is the optimization enabled. The example without the helper variable shows a quadratic increase of runtime with the number of iterations.

 

The last example is taken directly from the recent documentation. But were you aware of that?


How To Use VBA Recorded Code in ABAP


Hello community,

 

with a tiny trick it is easily possible to use recorded Microsoft Office VBA (Visual Basic for Applications) code in ABAP.

 

To do that I use only the Microsoft Script Control, with a little preparation for Excel. Then I read the content of an include, which contains the recorded VBA code. I concatenate the VBScript and VBA code and then I execute it - that's all.

 

The only additional thing to do is to put a period in front of each line of the VBA code. This is necessary because the VBA lines run inside the With oExcel block of the VBScript code.

 

 

Here is an example report:

 

"-Begin-----------------------------------------------------------------

Report zExcelViaVBScript.

 

  "-Type pools----------------------------------------------------------

    Type-Pools:

      OLE2.

 

  "-Constants-----------------------------------------------------------

    Constants:

      CrLf(2) Type c Value cl_abap_char_utilities=>cr_lf.

 

  "-Variables-----------------------------------------------------------

    Data:

      oScript Type OLE2_OBJECT,

      VBCode Type String,

      VBACode Type String.

 

  "-Main----------------------------------------------------------------

    Create Object oScript 'MSScriptControl.ScriptControl'.

    Check sy-subrc = 0 And oScript-Handle > 0 And oScript-Type = 'OLE2'.

 

    "-Allow to display UI elements--------------------------------------

      Set Property Of oScript 'AllowUI' = 1.

 

    "-Initialize the VBScript language-----------------------------------

      Set Property Of oScript 'Language' = 'VBScript'.

 

    "-Code preparation for Excel VBA------------------------------------

      VBCode = 'Set oExcel = CreateObject("Excel.Application")'.

      VBCode = VBCode && CrLf.

      VBCode = VBCode && 'oExcel.Visible = True'.

      VBCode = VBCode && CrLf.

      VBCode = VBCode && 'Set oWorkbook = oExcel.Workbooks.Add()'.

      VBCode = VBCode && CrLf.

      VBCode = VBCode && 'Set oSheet = oWorkbook.ActiveSheet'.

      VBCode = VBCode && CrLf.

      VBCode = VBCode && 'With oExcel'.

      VBCode = VBCode && CrLf.

 

      "-Add VBA code----------------------------------------------------

        Call Function 'ZREADINCLASSTRING'

          Exporting I_INCLNAME = 'ZEXCELTEST'

          Importing E_STRINCL = VBACode.

        VBCode = VBCode && VBACode.

 

      VBCode = VBCode && 'End With'.

      VBCode = VBCode && CrLf.

 

    "-Execute VBScript code---------------------------------------------

      Call Method Of oScript 'ExecuteStatement' Exporting #1 = VBCode.

 

    "-Free the object---------------------------------------------------

      Free Object oScript.

 

"-End-------------------------------------------------------------------

 

 

Here is the function module that reads an include as a string:

 

"-Begin-----------------------------------------------------------------

  Function ZREADINCLASSTRING.

*"----------------------------------------------------------------------

*"*"Local Interface:

*"  IMPORTING

*"     VALUE(I_INCLNAME) TYPE  SOBJ_NAME

*"  EXPORTING

*"     VALUE(E_STRINCL) TYPE  STRING

*"----------------------------------------------------------------------

 

    "-Variables---------------------------------------------------------

      Data resTADIR Type TADIR.

      Data tabIncl Type Table Of String.

      Data lineIncl Type String Value ''.

      Data strIncl Type String Value ''.

 

    "-Main--------------------------------------------------------------

      Select Single * From TADIR Into resTADIR

        Where OBJ_NAME = I_InclName.

      If sy-subrc = 0.

 

        Read Report I_InclName Into tabIncl.

        If sy-subrc = 0.

          Loop At tabIncl Into lineIncl.

            Concatenate strIncl lineIncl cl_abap_char_utilities=>cr_lf

              Into strIncl.

            lineIncl = ''.

          EndLoop.

        EndIf.

 

      EndIf.

      E_strIncl = strIncl.

 

  EndFunction.

 

"-End-------------------------------------------------------------------

 

 

Here is my VBA example code, which is stored in the include ZEXCELTEST:

 

.Range("A1").Select

.ActiveCell.FormulaR1C1 = "1"

.Range("A3").Select

.ActiveCell.FormulaR1C1 = "2"

.Range("A5").Select

.ActiveCell.FormulaR1C1 = "3"

.Range("B2").Select

.ActiveCell.FormulaR1C1 = "4"

.Range("B4").Select

.ActiveCell.FormulaR1C1 = "5"

.Range("C3").Select

.ActiveCell.FormulaR1C1 = "6"

.Range("C4").Select

 

This code is directly copied and pasted from the VBA IDE (only with a period in front of each line):

 

[image: 003.JPG]

 

[image: 002.jpg]

 

Here is the result in Excel:

 

[image: 001.JPG]

 

Enjoy it.

 

Cheers

Stefan

MVC (model view controller) framework for ABAP: Part 3


See also:  MVC (model view controller) framework for ABAP part 1

MVC (model view controller) framework for ABAP part 2

 

Controlling multiple screens

 

Welcome back to my MVC series. To follow this blog, it is necessary that you install the classes provided in the first two parts (see links above). In part 1 you see a demo application that controls a dynpro and an ALV control. In the second part, I wrote about a report-type program that controlled a selection screen and an ALV control, but no standard dynpro fields. This time I would like to focus on standard dynpros, especially the case when you have more than one of them.

 

Installing the demo application

 

Download the attached file and paste the content into a new report-type program. Then create the main screen, which contains only one big subdynpro area (see part 1 for further explanations of the dynpro concept of the framework). After that, create the two sub dynpros 0100 and 0200 as described below. Last, I also included a popup screen 0300, which is controlled by the same framework controller type that also controls the sub dynpros.

 

Main screen 0001

 

Attributes:

 

Screen elements:

 

(create the subscreen area using the screen painter and make it as big as the whole screen)

 

Flow logic:

 

process before output.

   module pbo_0001.

   call subscreen subscreen including sy-repid gv_subdyn.

*
process after input.
   module pai_0001 at exit-command.
   call subscreen subscreen.
   module pai_0001.

 

Sub dynpro 0100

 

Attributes:

Elements:

 

Flow logic:

 

PROCESS BEFORE OUTPUT.
   MODULE pbo_0100.
*
PROCESS AFTER INPUT.
   module pai_0100.


Appearance:

 

 

Sub dynpro 0200

 

Copy dynpro 0100 to 0200. Then set the two input fields to "no input". Create a frame for the data fields of the table SPFLI and put input fields into it. In the end, it should look like this:

 

 

Adjust the flow logic:

 

PROCESS BEFORE OUTPUT.
   MODULE pbo_0200.
*
PROCESS AFTER INPUT.
   module pai_0200.

 

Popup screen 0300

 

Create a modal dialog box containing the data fields of table SCARR. It should look like this:

 

 

All fields are display only. Important: use GV_OKCODE_0300 instead of GV_OKCODE:

 

 

Adjust the flow logic:

 

PROCESS BEFORE OUTPUT.
   MODULE pbo_0300.
*
PROCESS AFTER INPUT.
   module pai_0300.


GUI status and title


Create status 0100 with function key assignments:


Enter - ENTER

F3 - BACK

Shift+F3: EXIT (exit command)

ESC: CANC (exit command)


Copy the status to 0200 and add the following functions to the toolbar:


POPUP - Text "Carrier"

DELE - Text "Delete"


Create the status 0300 as a dialogbox status and add the function CANC to key ESC.


Last, create a titlebar MAIN with title "Sample for several dynpros"


How it works


As already discussed in part 1, there is only one main screen 0001. To switch between sub screens on the main carrier screen, the method SET_SUBDYNPRO of the framework class for the main controller is used. In the constructor of the main controller, it is set to 0100:


 

As you see, the method also sets the GUI status and title bar to be used. All parameters are optional, so you can use this also for setting only some of them. For example, you could call it to change the GUI status only.

 

Managing sub dynpro flow

 

Each sub dynpro has its own controller (LCL_CON_DYNPRO_0100, LCL_CON_DYNPRO_0200). In the PBO and PAI modules, the controller is fetched from the main controller and control is passed to the respective methods of the dynpro's own controller.
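The delegation in the modules can be sketched as follows; note that go_main_controller and the method names get_con_dynpro( ), pbo( ) and pai( ) are illustrative placeholders here, not necessarily the framework's real names (for those, see the code from part 1):

```abap
" Illustrative sketch of the PBO/PAI delegation pattern (names are
" placeholders; the framework's actual methods may differ).
MODULE pbo_0100 OUTPUT.
  go_main_controller->get_con_dynpro( '0100' )->pbo( ).
ENDMODULE.

MODULE pai_0100 INPUT.
  go_main_controller->get_con_dynpro( '0100' )->pai( ).
ENDMODULE.
```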

 

 

Therefore, you have to redefine method CREATE_CON_DYNPRO:

 

 

 

On ENTER on the first screen 0100, the second screen is being called:

 

 

Note that before the new screen is called, the model is used to read the database. The values of the input fields have already been passed to the model in method PAI_FIELD_CHANGE:

 

 

Remember that PAI_FIELD_CHANGE is called automatically by the framework as soon as the user types something in. It is called for each field separately, giving the field name (IV_FIELDNAME) and value (IV_SOURCE) as parameters. This method is also intended for input checks on field level.

 

During PBO of the main screen, the newly set MV_SUBDYNPRO is passed to the flow logic and the new sub dynpro is displayed in the SUBSCREEN area.

 

In PAI for 0200, we jump back to the last screen on user command BACK:

 

 

Exit commands must be handled by the main dynpro. This applies to CANC, which also jumps back to sub screen 0100, but without considering the inputs made. So the coding is located in the main PAI method:

 

 

Managing popups

 

The popup screen is managed in a similar way. But instead of setting the variable MV_SUBDYNPRO, we have to use CALL SCREEN. In the 0200 user command method, the main controller is called in order to do this:


 

 

You can place the call screen statement directly in the PAI_USER_COMMAND, if you want. But I prefer coding everything regarding the user interface flow in the main controller.

 

One important thing: use a different OK code for each popup screen you create. If you don't, the OKCODE set during PAI of the popup will be processed once more in the main screen controller.

 

 

As the only reaction to the user interface, we leave the screen when the Cancel button is pressed:

 

 

Conclusion

 

In the productive version of the framework in my company's system, there are some more controllers, like one for dynamic documents (CL_DD_DOCUMENT) and one for the class I described in Using ALV to display/edit fields of a structure. Both of them are sub classes of the CFW controller class, just like the ALV controller described in part 1. I shared this basic version to inspire the reader to use it as a starting point for their own developments.

 

All the best!

Jörg

Custom program to Call Standard IDOC without MASTER_IDOC_DISTRIBUTE


Dear All,

 

There might be a requirement where you need to send standard IDocs via a custom program.

This can be achieved with the MASTER_IDOC_DISTRIBUTE function module, but that function module has some limitations and it creates both an inbound and an outbound IDoc.

 

Here I am not using the above FM; instead, I create a program with the following FMs, called in this sequence:

 

1. IDOC_CREATE_ON_DATABASE

2. BAPI_IDOC_INPUT1 (Depends upon the message type and function module configuration in WE57)

3. EDI_DOCUMENT_OPEN_FOR_PROCESS

4. EDI_DOCUMENT_STATUS_SET

5. EDI_DOCUMENT_CLOSE_PROCESS

 

Let us now understand how to use these function modules.

First you need to fill the IDOC_DATA table, which is of type EDIDD.

 

1.IDOC_CREATE_ON_DATABASE

 

In this function module you have to pass IDOC_DATA and IDOC_CONTROL

 

2.BAPI_IDOC_INPUT1

 

Before using this function module, you have to check whether it is assigned to the required message type in WE57. If yes, go ahead; if not, you have to call the FM that is assigned in WE57 to your message type.

 

Here you have to pass -

 

idoc_contrl        fill the structure with all control parameters

idoc_data          pass as it is fetched from step-1

idoc_status        as blank

return_variables  as blank

serialization_info as blank

 

3. EDI_DOCUMENT_OPEN_FOR_PROCESS

 

Here you have to pass -

The document number, i.e. the IDoc number generated in step 2 (in IDOC_DATA)

The enqueue option as 'S'

The DB read option as 'N'

 

4. EDI_DOCUMENT_STATUS_SET

 

This FM sets the status of your IDoc in the standard way.

 

Here you have to pass -

The document number, i.e. the IDoc number generated in step 2 (in IDOC_DATA)

The IDOC_STATUS fetched from step 2 (in IDOC_STATUS)


5. EDI_DOCUMENT_CLOSE_PROCESS


Here you have to pass the document number, and BACKGROUND as 'N'.


Thanks and please let me know if there is any query.



I Don't Like REDUCE, I Love It


Recently I stumbled over the examples in the documentation of the built-in functions ROUND and RESCALE.

 

There are tables with results of these functions for different values of the arguments, but there was no coding example of how to achieve these results. I guess the functions were called with different arguments one by one and the results were copied into the documentation one by one.

 

But hey, we have other possibilities in ABAP now, and I've tried to recreate the results with CL_DEMO_OUTPUT and REDUCE (as a recreational measure, so to say).

 

I cannot refrain from showing you that (a bit of showing off):

 

TYPES:
   BEGIN OF line,
     arg       TYPE i,
     result    TYPE decfloat34,
     scale     TYPE i,
     precision TYPE i,
   END OF line,
   result TYPE STANDARD TABLE OF line WITH EMPTY KEY.

DATA(val) = CONV decfloat34( '1234.56789 ' ).
DATA(out) = cl_demo_output=>new(
   )->begin_section( 'Value'
   )->write(
     |{ val
      }, scale = { cl_abap_math=>get_scale( val )
      }, precision = { cl_abap_math=>get_number_of_digits( val ) }|
   )->begin_section( 'Round with dec'
   )->write(
    REDUCE result(
      INIT tab TYPE result
      FOR i = -5 UNTIL i > 6
      LET rddec = round( val = val dec = i
          mode  = cl_abap_math=>round_half_up ) IN
      NEXT tab = VALUE #( BASE tab
       ( arg = i
         result = rddec
         scale = cl_abap_math=>get_scale( rddec )
         precision = cl_abap_math=>get_number_of_digits( rddec )
       ) ) )
   )->next_section( 'Round with prec'
   )->write(
    REDUCE result(
      INIT tab TYPE result
      FOR i = 1 UNTIL i > 10
      LET rdprec = round( val = val prec = i
          mode   = cl_abap_math=>round_half_up ) IN
      NEXT tab = VALUE #( BASE tab
       ( arg = i
         result = rdprec
         scale = cl_abap_math=>get_scale( rdprec )
         precision = cl_abap_math=>get_number_of_digits( rdprec )
       ) ) )
   )->next_section( 'Rescale with dec'
   )->write(
    REDUCE result(
      INIT tab TYPE result
      FOR i = -5 UNTIL i > 8
      LET rsdec = rescale( val = val dec = i
          mode  = cl_abap_math=>round_half_up ) IN
      NEXT tab  = VALUE #( BASE tab
       ( arg = i
         result = rsdec
         scale = cl_abap_math=>get_scale( rsdec )
         precision = cl_abap_math=>get_number_of_digits( rsdec )
       ) ) )
   )->next_section( 'Rescale with prec'
   )->write(
    REDUCE result(
      INIT tab TYPE result
      FOR i = 1 UNTIL i > 12
      LET rsprec = rescale( val = val prec = i
          mode   = cl_abap_math=>round_half_up ) IN
      NEXT tab = VALUE #( BASE tab
       ( arg = i
         result = rsprec
         scale = cl_abap_math=>get_scale( rsprec )
         precision = cl_abap_math=>get_number_of_digits( rsprec )
       ) ) )
   )->display( ).

 

Giving

[image: reduce.jpg - output of the program above]

...

 

Isn't REDUCE just awesome? For me, it is one of the most powerful of all the constructor operators. Use it to get used to it. Believe me, after some training you don't even have to look up its documentation any more.
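If the big listing above feels intimidating, a minimal warm-up shows the basic INIT/FOR/NEXT mechanics of REDUCE:

```abap
" A minimal warm-up: summing the numbers 1 to 10 without a DO loop.
DATA(sum) = REDUCE i( INIT s = 0
                      FOR n = 1 UNTIL n > 10
                      NEXT s = s + n ).
" sum is 55
```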

Abap data parser - open source TAB-delimited text parser


Hi Community !

 

I'd like to share a piece of code which might be useful for someone. It is called abap data parser. Its purpose is parsing TAB-delimited text into an arbitrary flat structure or internal table. Why TAB-delimited? This is the format which is used automatically if you copy something from Excel via the clipboard - this creates some opportunities for good program usability.

 

So what does it do? Let's say we have this data in the form of a string (CRLF as the line delimiter, TAB as the field delimiter):

NAME     BIRTHDATE

ALEX     01.01.1990

JOHN     02.02.1995

LARA     03.03.2000

 

... and a corresponding data type and internal table.

types: begin of my_table_type,

         name      type char10,

         birthdate type datum,

       end of my_table_type.

 

data lt_container type standard table of my_table_type.
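The variable lv_some_string_with_data used in the next snippet is not defined anywhere in this post; a sketch of how such a string could be built programmatically, using the TAB and CRLF constants from CL_ABAP_CHAR_UTILITIES, might look like this:

```abap
" Hedged sketch: building the sample TAB/CRLF-delimited string above.
DATA(lv_tab)  = cl_abap_char_utilities=>horizontal_tab.
DATA(lv_crlf) = cl_abap_char_utilities=>cr_lf.

DATA(lv_some_string_with_data) =
  |NAME{ lv_tab }BIRTHDATE{ lv_crlf }|  &&
  |ALEX{ lv_tab }01.01.1990{ lv_crlf }| &&
  |JOHN{ lv_tab }02.02.1995{ lv_crlf }| &&
  |LARA{ lv_tab }03.03.2000|.
```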

 

To parse the string into the container table just add the following code:

lcl_data_parser=>create( lt_container )->parse(

  exporting i_data      = lv_some_string_with_data

  importing e_container = lt_container ).

 

The class supports some additional features, in particular an "unstrict mode", which allows skipping fields of the target structure that are missing in the text - useful when you need to load just a few specific fields of a huge data structure (like standard tables in SAP). Let's say our data type has an additional field, unnecessary in the current context:

types: begin of my_table_type,

         name      type char10,

         city      type char40,   " << New field, but still just 2 in the text

         birthdate type datum,

       end of my_table_type.

...

 

lcl_data_parser=>create(

    i_pattern       = lt_container       

    i_amount_format = ' .'         " specify thousand and decimal delimiters

  )->parse(

    exporting

      i_data      = lv_some_string_with_data

      i_strict    = abap_false     " missing city field will not throw an error

      i_has_head  = abap_true      " headers in the first line of the text

    importing

      e_container = lt_container ).

 

Another feature: the i_has_head parameter above means that the first line contains the technical names of the fields - the parser then uses it to identify the existing fields and their order (which may then be flexible).

 

Cases of usage

- we (our company) use the code in some of our products - like this one

- we use it in the mockup loader - another of our openly published tools, for unit testing (actually, the data parser was initially part of the mockup loader)

- as a tool for mass uploads into z-tables of some of our other products

 

The code is free to use under MIT licence. Project home page is https://github.com/sbcgua/abap_data_parser

Installation can be done manually - just one include to install - or with the abapGit tool (the most convenient way).

 

I hope you find this useful ! =)

About Time Stamps


A timestamp is a sequence of characters or encoded information identifying when a certain event occurred, usually giving date and time of day, sometimes accurate to a small fraction of a second.

 

(Wikipedia, August 29, 2016).

 

In ABAP you get a time stamp accurate to the second with the statement

 

GET TIME STAMP FIELD DATA(ts).


cl_demo_output=>display( ts ).


Here ts has the dictionary type TIMESTAMP and the result might look like 20160829131515.


And a time stamp accurate to a small fraction of a second with:


DATA ts TYPE timestampl.

GET TIME STAMP FIELD ts.


cl_demo_output=>display( ts ).

 

The result might look like 20160829131612.294638.

 

Those are POSIX time stamps that are independent of a time zone.

 

The format of such ABAP timestamps is YYYYMMDDHHMMSS.fffffff, with seven decimal places for fractions of a second.

 

As a rule, you use such time stamps to mark data with - well - time stamps (time of creation, time of update, ...).

 

In order to handle timestamps, you can do the following:

 

  • You can directly compare different timestamps of the same type:

    GET TIME STAMP FIELD DATA(ts2).
    WAIT UP TO 1 SECONDS.
    GET TIME STAMP FIELD DATA(ts1).
    ASSERT ts2 < ts1.

 

  • You can convert timestamps into date and time fields of a time zone:

    GET TIME STAMP FIELD DATA(ts).

 

    CONVERT TIME STAMP ts TIME ZONE sy-zonlo

            INTO DATE DATA(date) TIME DATA(time)

            DAYLIGHT SAVING TIME DATA(dst).

 

     cl_demo_output=>display( |{ date }\n{

                                 time }\n{

                                 dst } | ).

 

         Giving something like 20160829, 172223, X
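The conversion also works in the opposite direction - a minimal sketch of the round trip:

```abap
GET TIME STAMP FIELD DATA(ts).

CONVERT TIME STAMP ts TIME ZONE sy-zonlo
        INTO DATE DATA(date) TIME DATA(time)
        DAYLIGHT SAVING TIME DATA(dst).

" Convert date, time, and DST flag of a time zone back into a POSIX time stamp
CONVERT DATE date TIME time DAYLIGHT SAVING TIME dst
        INTO TIME STAMP DATA(ts_back) TIME ZONE sy-zonlo.

ASSERT ts_back = ts.
```

The DST flag resolves the otherwise ambiguous hour when daylight saving time ends.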


  • You can format timestamps in string processing:

    GET TIME STAMP FIELD DATA(ts).

    cl_demo_output=>display( |{ ts TIMESTAMP = ISO } | ).

    Giving something like 2016-08-29T15:27:29

  • You can serialize/deserialize timestamps, if their datatype refers to a special domain:

    DATA ts TYPE xsddatetime_z.
    GET TIME STAMP FIELD ts.

    CALL TRANSFORMATION id SOURCE ts = ts
                   RESULT XML DATA(xml).
    cl_demo_output=>display_xml( xml ).


    Giving something like: <TS>2016-08-29T15:33:50Z</TS>

  • You can do some simple calculations with the methods of class CL_ABAP_TSTMP:

    DATA: ts1 TYPE timestampl,

          ts2 TYPE timestampl.

 

    GET TIME STAMP FIELD ts2.

    WAIT UP TO 1 SECONDS.

    GET TIME STAMP FIELD ts1.

 

    DATA(seconds) = cl_abap_tstmp=>subtract(

        EXPORTING

          tstmp1 = ts1

          tstmp2 = ts2 ).

 

    cl_demo_output=>display( seconds ).

     Giving something like 1.001369.

 

And that is it. Timestamps are not intended for more and you cannot do more! In particular, you should never do direct calculations with the timestamps themselves:

 

GET TIME STAMP FIELD DATA(ts1).

 

DATA(ts2) = cl_abap_tstmp=>add(

                tstmp = ts1

                secs  = 3600 ).

 

cl_demo_output=>display( ts2 - ts1 ).


The result is 10000. How come?

 

Well, you know it. Timestamps don't have a built-in ABAP type of their own (in another ABAP world, NGAP, that is Release 8.x, in fact they do and we wouldn't have to bother). But in the 7.x release line (7.0 to 7.40/7.50), timestamps are stored in type p numbers: p length 8 without decimal places for dictionary type TIMESTAMP, and p length 11 with seven decimal places for dictionary type TIMESTAMPL.

 

Besides the above mentioned points, ABAP does not recognize the semantic meaning of a timestamp. It simply treats it as a packed number of the given value. In other words, if ts1 above is 20160829160257, adding 3600 seconds using the method ADD gives 20160829170257. You see the difference? One hour later (17 compared to 16) in the timestamp format, but a difference of 10000 in the normal value format of type p. Using type p for timestamps is simply an efficient way of storing timestamps that allows decimal places. But never, never, never believe that you can do something meaningful with the type p number directly!
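To make the contrast explicit: the semantic difference must come from the SUBTRACT method used earlier, never from the arithmetic operator:

```abap
GET TIME STAMP FIELD DATA(ts1).

DATA(ts2) = cl_abap_tstmp=>add( tstmp = ts1
                                secs  = 3600 ).

" Semantically correct difference in seconds
DATA(seconds) = cl_abap_tstmp=>subtract( tstmp1 = ts2
                                         tstmp2 = ts1 ).

cl_demo_output=>display( seconds ).  " 3600
```

Here SUBTRACT yields 3600, while the direct subtraction ts2 - ts1 yields 10000 in the example above.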




A small tip to find message id and number by repository information system


Normally, if we see a message in SAPGUI, we can just double-click the message icon to get its technical details, like message id and number, displayed.

clipboard1.png

However, in some cases the icon is not available for clicking. For example, below a popup window is displayed and you cannot double-click the icon. If you close the popup, the message disappears as well.

clipboard2.png

In this case, use transaction SE84 to launch the Repository Information System and specify the Short Description with the value: *cannot be used as proxy object

clipboard3.png

Then you get the answer: message id DDLS, number 444.

clipboard4.png

Or, if you already know that table T100 stores the message texts, you can directly query table T100 to get the same result.
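The query on T100 can be sketched in ABAP like this (a minimal example; the LIKE pattern is an assumption based on the message text above):

```abap
" T100: SPRSL = language, ARBGB = message class (id), MSGNR = message number
SELECT arbgb, msgnr, text
  FROM t100
  WHERE sprsl = 'E'
    AND text LIKE '%cannot be used as proxy object%'
  INTO TABLE @DATA(lt_messages).

cl_demo_output=>display( lt_messages ).
```

The same selection can of course be done interactively in SE16, as shown in the screenshots below.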

clipboard5.png

clipboard6.png

Do not Test with WRITE


In a recent discussion I saw that someone was testing whether a loop was executed by placing a WRITE statement inside it. Maybe it was a beginner's error, but maybe others tend to do that too. Since the discussion is locked, I'll say it here:

 

WRITE is not appropriate for error analysis.


WRITE writes to the list buffer. A display of the list buffer in the form of a classic list only takes place after calling the list. An automatic call of a classic list happens only in the program flow of a submitted executable program. As a rule, in any other framework there is no automatic list display. Therefore, the fact that you don't see any list output normally does not allow you to conclude that the WRITE statement was not executed.

 

  • For finding bugs during development, you use checkpoints (breakpoints, assertions, logpoints).

 

  • For testing during and after development, you use unit tests with ABAP Unit.
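A minimal sketch of checking loop execution with checkpoints instead of WRITE (lt_data and the checkpoint group zmy_checkpoint_group are hypothetical names; the group must exist and be activated via transaction SAAB):

```abap
DATA lt_data TYPE TABLE OF string.
lt_data = VALUE #( ( `a` ) ( `b` ) ).

LOOP AT lt_data ASSIGNING FIELD-SYMBOL(<ls_data>).
  " Breakpoint bound to a checkpoint group, only active if switched on in SAAB
  BREAK-POINT ID zmy_checkpoint_group.
  " Logpoint records that the loop body was reached; results visible in SAAB
  LOG-POINT ID zmy_checkpoint_group FIELDS sy-tabix <ls_data>.
ENDLOOP.

" Assertion documents and checks the expectation that there were lines to process
ASSERT ID zmy_checkpoint_group CONDITION lines( lt_data ) > 0.
```

Unlike WRITE, all three checkpoints work in any framework and leave no residue in production when the group is deactivated.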

 

For most of you this is crystal clear, but sometimes I get the impression that the knowledge about the fundamentals of classic ABAP programming - which unfortunately can be freely mixed into all the modern stuff - is taking a backseat more and more. It reminds me of the old days when I started with ABAP and also believed that WRITE is simply a kind of printf for creating console output.

How to fetch *nicely* 2 values from ITAB-Row?


I started this already via Twitter: how to fetch or map two values from an internal table if you are not interested in the whole line.

 

Input:     lt_vbpa   (just an internal table with some more columns)


Target:    lv_kunnr  (two fields to be filled ... needed for the next method or something)

           lv_land1

 

Of course there are lots of solutions - we could make a list out of them - but the key is to have fast, short and understandable code.

 

80s style:

READ TABLE lt_vbpa ASSIGNING <fs> WITH KEY ...

lv_kunnr = <fs>-kunnr.

lv_land1 = <fs>-land1.

 

 

New >7.40

lv_kunnr = lt_vbpa[ parvw = 'WE' ]-kunnr.

lv_land1 = lt_vbpa[ parvw = 'WE' ]-land1.

* But worse: two read operations on the itab - a performance issue in mass processing.

 

 

Via Twitter I did receive some Ideas:

 

New >7.40, feat. Uwe Fetzer

ASSIGN lt_vbpa[ parvw = 'WE' ] TO <fs>.

lv_kunnr = <fs>-kunnr.

lv_land1 = <fs>-land1.

* Nicer, one read on the itab only, but still three lines of code

 

 

Solution by Enno Wulff

* Also nicer, but quite some coding overhead... and again with the performance issue of two reads.

 

 

I still do not see a great idea to do this in a nice way in two lines of code (please no macros, or two-statements-in-one-line ideas ...)

Thanks to the two of you for your ideas!

How to identify Non-unicode characters in a Text file


Hello Folks,


Usually we encounter a scenario where a program dumps due to conversion errors while using OPEN/READ DATASET to read .txt files lying on the application server. For example, below is the screenshot of such a dump. If the text file is very large, it is tough to identify the rows or columns containing non-Unicode characters, or even whether there are any non-Unicode characters in the file at all.

 

Dump details.jpg
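As a side note, one way to at least locate the offending row programmatically is to catch the conversion error while reading (a sketch; the file path is hypothetical and the file is assumed to be expected as UTF-8):

```abap
DATA: lv_line TYPE string,
      lv_row  TYPE i,
      lt_bad  TYPE TABLE OF i.

OPEN DATASET '/tmp/test.txt' FOR INPUT IN TEXT MODE ENCODING UTF-8.

DO.
  lv_row = lv_row + 1.
  TRY.
      READ DATASET '/tmp/test.txt' INTO lv_line.
      IF sy-subrc <> 0.
        EXIT.  " end of file
      ENDIF.
    CATCH cx_sy_conversion_codepage.
      APPEND lv_row TO lt_bad.  " remember rows with conversion errors
  ENDTRY.
ENDDO.

CLOSE DATASET '/tmp/test.txt'.
cl_demo_output=>display( lt_bad ).
```

The browser-based approach below is still handy when you cannot run code against the file, and it also pinpoints the column.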

 

Below are the steps to identify non-Unicode characters in a .txt file:

 

  • Open a blank notepad.
  • Type the below given text in the notepad.

 

<?xml version="1.0"?><test></test>

Test.jpg

 

  • Copy the content of the .txt file on the application server between the <test> and </test> tags in the notepad file that we created, and save it with the .xml extension.
  • To identify the non-Unicode characters, we can use either the Google Chrome or the Mozilla Firefox browser by just dragging and dropping the file onto the browser.
  • Chrome will show us only the row and column number of the .txt file where the non-Unicode character lies, but it will not show the content of that particular row or column.

 

chrome screenshot.jpg

 

  • Mozilla Firefox will show us the row and column number along with the content of that row and column. An underscore will extend to the column where the non-Unicode character lies. If there are multiple non-Unicode characters in the .txt file, we should remove the first non-Unicode character that was identified and then repeat all the steps as explained here to identify the next one. Tedious, but this way at least we can identify the presence of non-Unicode characters in the text file.

Mozilla Screenshot1.jpg

 

Mozilla Screenshot2.jpg

 

  • Notepad screenshot, going by the row and column number that we got using Mozilla Firefox. The Status Bar option in Notepad helps us see the row and column number in the notepad file.

Notepad1.jpg

Notepad2.jpg

 

  • When we try to open the file with non-Unicode characters using Internet Explorer, it will just show a blank page. So we need either the Chrome or the Mozilla Firefox browser to identify the row and column with non-Unicode characters.
  • Attached are the text file and the xml file, which can be used for testing by dragging and dropping into Chrome or Mozilla.

A tale of two SAP incidents


Summer is a strange time for SAP teams: lots of people go on holiday, projects are left to tick over, burning issues are put on hold. A time for those who remain to take it easy a while, take a look at those really intractable problems or catch up on technical innovations and pet projects with no-one to bother you. This is all fine in theory except that the business doesn't rest and production incidents still occur. This is the story of two such incidents and how SAP support helped us.

 

The first occurred right at the start of August, when nearly all my development team had already left: the SWN_SELSEN production job had started to abend. I groaned inwardly when I heard the news. Nothing could be more simple on the face of it than SWN_SELSEN: it simply selects workflow notifications and sends them out. There is no variant but the customising is fearsomely complicated and you needed to be a deeply experienced workflow guru (something I am not) to understand the code and the exits. This was a high-profile problem too - though the business impact was small, the program was sending PO approval notifications to all the top guys in the company, so when these stopped, important people noticed.

 

I had a look at the dump. The problem occurred about 20 levels deep with a TSV_TNEW_PAGE_ALLOC_FAILED. An internal table space error. The source throwing the error was SAP for sure, but there was a little cluster of badi code around 15 levels deep in the stack, before the code returned to standard SAP. The badi changes were imported the previous week - I had found the smoking gun. Unfortunately, dear reader, as you may be aware, developers lack a certain credibility, and though I strongly suspected the custom code was causing the issue, I couldn't be 100% sure. My lack of certainty combined with my colleagues' 100% certainty that the changes had been thoroughly tested with no problem in our quality system turned the spotlight back onto standard SAP code. We looked at the customising and tried to understand it. In the meantime, as Max Attention customers we felt emboldened enough to turn to SAP and opened an incident.

 

We changed the customising and imported it to production - no effect. Then we received an almost cheerful answer from SAP support that our badi code WAS causing the problem, and to check out some code which called a certain SAP module. Damn - this was embarrassing! I should have checked out this code more, but it was complex and just looking at it made me want to get a coffee and mindlessly browse facebook.

 

Somehow I figured out that the SAP module (in case you're interested, it is function 'SAP_WAPI_WORKITEMS_TO_OBJECT') was doing lots of processing that we didn't need, and we could replace it with a join on tables SWWWIHEAD and SWW_WI2OBJ. It did the trick, the dumps stopped and the top guys started getting their PO approvals again.
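A sketch of such a replacement join (a hypothetical field selection - the exact WHERE conditions depend on which work items you need; SWW_WI2OBJ links work items to objects, SWWWIHEAD holds the work item headers, and the variable names and 'BUS2012' values here are illustrative assumptions):

```abap
DATA lv_objkey  TYPE sww_wi2obj-instid VALUE '4500000001'.  " e.g. PO number
DATA lv_objtype TYPE sww_wi2obj-typeid VALUE 'BUS2012'.     " e.g. purchase order

" Select work items linked to a given object instance directly,
" instead of calling SAP_WAPI_WORKITEMS_TO_OBJECT
SELECT h~wi_id, h~wi_type, h~wi_stat
  FROM sww_wi2obj AS o
  INNER JOIN swwwihead AS h
    ON h~wi_id = o~wi_id
  WHERE o~instid = @lv_objkey
    AND o~typeid = @lv_objtype
  INTO TABLE @DATA(lt_workitems).
```

The direct join skips all the authority and container processing the generic API does, which was exactly the overhead we didn't need.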

 

The second problem was more of a long-running issue concerning invoice pricing. This was an "intractable" problem for which we already had an open OSS incident. Now was the time, with everyone away, to have a very detailed look at the problem. Each time I debugged, the problem seemed to arise in an SAP standard formula. No custom code was implicated. For sure, we did have a strange situation where we were taking the sales order in the sales unit, doing the picking in the base unit, and then converting back to the sales unit for billing, but that couldn't be causing the problem, could it? Again, dear reader, I took the easy route and informed SAP their pricing algorithm was wrong.

 

After spending sometimes entire days debugging, I was finally convinced: the problem is in the SAP code, I told the business analyst; it doesn't go near any user-exit code. So the OSS incident was updated, and the next day the reply came back: Dear customer, please check these OSS notes and tell us why you have modified standard modules. It advised us that we would have problems if we tried to apply notes to these modules, and provided the notes we should apply. The reply finished with a rather pointed comment, asking us to kindly answer whether there was some Z code in pricing. Well, this was really laying down the gauntlet, and my professional pride was piqued.

 

I was surprised at the modification in the standard include and had no idea why it was there. We set up an IDES system at (nearly) the same EhP level, and compared the code. It turned out that we'd been careless at the last upgrade during the SPAU phase, and simply accepted the modifications for an old note instead of going back to the standard. I backed out the mods. We applied the 2 notes (and 96 pre-requisites) and retested - no joy.

 

Much highly concentrated debugging followed, during which I had to anatomise function RV_INVOICE_CREATE and all its main forms and exits (maybe I'll share this in a future blog if there is sufficient demand ;-D). What I found was simple: it was the base-to-sales-unit conversion, but it was being done at the wrong time and totally screwing up (to use the technical term) the pricing.

 

There are a few simple lessons to draw from these incidents:

 

1. If you have a problem and have modified standard code or have user-exits in the problem area, it's highly probable that your changes are causing the problem

 

2. OSS works very well, and particularly for Max Attention customers, as an 'expert of last resort' that can really help catalyse problems with your SAP system.

 

3. Mistakes during SPAU can have long-lasting impacts

 

4. IDES systems are very useful

 

5. Best to go on holiday when everyone else does.
