Channel: ABAP Development

Join types in SAP queries


Quickviewer / Query – Joining tables.


There are two ways of joining tables in SAP: inner joins and outer joins.


If you have the following two tables that you want to join, you can see that there are two matching records: 11 and 12.

 

Table 1
11
12
13
14

 

 

Table 2
11
12
21
22

 

These tables are used below to explain the difference:

 

 

1: Inner join

 

This is a 1:1 relation. In short, an inner join only shows records where the data is available in both tables.

An inner join also gives better performance, so it is advisable to use one whenever possible.

 

When you join these two tables, the data that is available in both tables is 11 and 12, so this is what you will get as a result:

 

Table 1    Table 2
11         11
12         12

 

2: Outer join

 

This is a 1:n relation. In short, an outer join shows all records of the left table in the join, regardless of whether a matching record exists in the joined table.

 

When you join these two tables, all data of Table 1 is shown. Only where there is a match is the data of Table 2 shown.

 

Table 1    Table 2
11         11
12         12
13
14

 

 

Example:

You want to show all material information using MARA, and see which materials are available in a storage location (MARD).

 

Inner Join:

Create a query using a Table Join.

 

[Screenshot: Afbeelding 1.png]

 

In the join, select MARA and MARD.

 

[Screenshot: Afbeelding 2.png]

 

Note the connecting line: this shows that you have an inner join.

 

Now click back and select the fields you want to use for selection and output.

For this example, I have selected MARA-MATNR and MARD-WERKS as both selection and output fields.

 

[Screenshot: Afbeelding 3.png]

 

I have two material numbers: 51 and 801. Material 51 is not available in the MARD table.

When the report is run, results are only shown for 801 (an inner join only shows data that is available in both tables):

 

[Screenshot: Afbeelding 4.png]
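For readers who prefer code to the Quickviewer, the same inner join can be sketched in ABAP Open SQL. This is only an illustration of the join semantics, reduced to the two fields used above; it is not part of the query tool flow itself:

```abap
" Inner join: only materials that exist in both MARA and MARD are returned.
SELECT mara~matnr, mard~werks
  FROM mara
  INNER JOIN mard ON mard~matnr = mara~matnr
  INTO TABLE @DATA(lt_result).
```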

 

Outer Join:

I go back to the data join and change the join to an outer join.

 

For this, right-click on the connecting line and select Left Outer Join.

 

[Screenshot: Afbeelding 5.png]

 

Note that the connecting line now shows it is an outer join.

 

When the report is now executed with the same selection, this is the result:

 

[Screenshot: Afbeelding 6.png]

 

 

With a left outer join, all data from the left table is shown; data from the right table is only shown where a match exists.

In this example, material 51 is available in the left table (MARA), so the material number is listed. Since there is no entry in MARD, those columns remain empty.
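A corresponding left outer join sketch in Open SQL (again only an illustration of the join semantics, not the query tool itself):

```abap
" Left outer join: every material from MARA is returned;
" WERKS stays initial when no matching MARD record exists.
SELECT mara~matnr, mard~werks
  FROM mara
  LEFT OUTER JOIN mard ON mard~matnr = mara~matnr
  INTO TABLE @DATA(lt_result).
```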

 

Example 2:

 

Inner join preferred:

 

You have created a new availability check and want to see all materials where the old checking group is still used.

For this, the table with the checking groups is TMVF. The master data is stored in MARC.

Since you only want to see the data that is available in both tables, you can use an inner join.

 

For the left table, you pick TMVF, since this is where you want to make the most important selection for your report.

The right table then is the MARC table.

 

[Screenshot: Afbeelding 1.png]

 

You select the MTVFP, MATNR and WERKS fields as both selection and output fields.

 

Now, to show all materials with checking group 02:

 

[Screenshots: Afbeelding 2.png, Afbeelding 3.png]
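As a rough Open SQL equivalent of this query (a sketch only; the checking group 02 is taken from the example above):

```abap
" Inner join TMVF -> MARC: only materials whose checking group
" exists in TMVF are returned; filtered on the old group 02.
SELECT tmvf~mtvfp, marc~matnr, marc~werks
  FROM tmvf
  INNER JOIN marc ON marc~mtvfp = tmvf~mtvfp
  WHERE tmvf~mtvfp = '02'
  INTO TABLE @DATA(lt_materials).
```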

 

Outer join:

You have a list of invoices, some of them might be in an invoice list. You want to see all invoices and when they are in an invoice list, the IL number should be shown.

 

The left table is VBRK; the right table is VBRL.

 

[Screenshot: Afbeelding 4.png]

 

Note that I have changed the join to VBELN_VF and made it a Left Outer Join.

 

Now I select VBELN from VBRK and VBELN from VBRL as input and output.
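The same join expressed as an Open SQL sketch (illustrative only; the column alias invoice_list is my own naming):

```abap
" Left outer join VBRK -> VBRL: every invoice is listed; the
" invoice list number stays empty when the invoice is not in a list.
SELECT vbrk~vbeln, vbrl~vbeln AS invoice_list
  FROM vbrk
  LEFT OUTER JOIN vbrl ON vbrl~vbeln_vf = vbrk~vbeln
  INTO TABLE @DATA(lt_invoices).
```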

 

[Screenshot: Afbeelding 5.png]

 

[Screenshot: Afbeelding 6.png]

 

Extra:

If I want to add more info about the invoice list, I also need the VBRK table. But I can only use a table once in the join.

To be able to add info from VBRK a second time, I have to create an alias:

 

Go to the join and click the Alias button. In the pop-up, click Create.


[Screenshot: Afbeelding 7.png]

 

Fill in the table name and think of a name for your alias. Accept your input.

Now insert the newly created 'alias' table and use an inner join from VBRL to the alias (an outer join can only be used once in a data flow):

 

[Screenshot: Afbeelding 8.png]
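In Open SQL, the same alias technique uses the AS keyword. A sketch (the alias name il_head is my own choice for illustration):

```abap
" VBRK appears twice: once as the invoice header, once (via the
" alias il_head) as the header of the invoice list itself.
" Note: the inner join to il_head filters out invoices without a
" list entry, effectively overriding the earlier left outer join.
SELECT vbrk~vbeln, vbrl~vbeln AS il_number, il_head~fkart AS il_type
  FROM vbrk
  LEFT OUTER JOIN vbrl ON vbrl~vbeln_vf = vbrk~vbeln
  INNER JOIN vbrk AS il_head ON il_head~vbeln = vbrl~vbeln
  INTO TABLE @DATA(lt_result).
```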

 

Now when you select the billing type as output from VBRK and the alias, you will get the following result:

 

[Screenshot: Afbeelding 9.png]

 

As you can see, the outer join is ignored because of the inner join further to the right. When you use an outer join, the outer-joined table should be the last table in the flow.


Functional ABAP - Functional Programming in ABAP ?!


Introduction

The idea to write a blog exploring the possibilities of functional programming in ABAP first came to my mind when I read Horst Keller's blogs on ABAP Language News for Release 7.40, SP08. Especially the REDUCE operator that is part of the new iterator expressions (ABAP News for 7.40, SP08 - FOR Expressions) immediately reminded me of the reduce function in Clojure. As I was learning different functional languages in my spare time anyway, I started to implement some of the examples used in functional programming tutorials in ABAP. After a Twitter conversation with Uwe Fetzer and Fred Verheul (see Uwe's blog Rosetta Code and ABAP), I decided to collect some of the code examples I have created so far and share them in a blog.

 

A (very, very) short introduction to functional programming

Functional programming is one of the oldest programming paradigms, dating back as far as the late 1950s (cf. Functional programming - Wikipedia). Despite this long history, functional programming never gained widespread acceptance (compared to e.g. C or C++). However, in recent years there has been a growing interest in functional programming and functional programming languages again.

For example, the RedMonk Programming Language Ranking for January 2015 lists three functional languages among the top 20 programming languages: Scala (The Scala Programming Language), Haskell (Haskell Language) and Clojure (Clojure - home) (cf. image below). In addition, functional extensions to popular programming languages like Java have been developed (cf. Functional Java and How Functional is Java 8?).

[Image: RedMonk programming language ranking, January 2015]

There are numerous articles and discussions available on the internet regarding the advantages and disadvantages of functional programming (e.g. Functional thinking: Why functional programming is on the rise and Advantages Of Functional Programming). In my opinion there are two key advantages that led to the current interest in functional programming. Firstly, the possibility to develop specific abstractions is an integral part of each functional language. Secondly, the fact that functional languages are free of side effects simplifies the development of parallel programs.

 

Disclaimer

I am not advocating the use of the code examples I will show below in productive code (at least not yet). With all the new language features being added to the ABAP language, Tobias Trapp's advice "Don't try to be smart. Be smart." is more valid than ever. In the context of what I'll show below I would paraphrase Tobias's statement as "Just because something is possible doesn't mean it is a good idea to do it."

Furthermore, it is important to keep in mind that ABAP is not (and most likely never will be) a functional programming language. In ABAP, it is not possible to pass functions as arguments to other functions. The ABAP compiler and runtime currently lack important features (e.g. tail call optimization, cf. Tail call - Wikipedia or What Is Tail Call Optimization? - Stack Overflow) of runtime engines for functional languages. Consequently, functional programming in ABAP is limited to certain cases. The examples shown below might also run very slowly or only work for small input values compared to an imperative or object-oriented implementation.

Nevertheless, it is in my opinion quite interesting to see what is possible with the current version of the ABAP language.

 

Functional ABAP examples

In the following sections I'll show some quasi-functional implementations of different algorithms in ABAP. Note that all code examples in this blog are screenshots. The reason for this is that no syntax highlighting for ABAP is available. In order to simplify the reuse of the code snippets, I also created a pastie (Pastie) for each snippet and added a link to it below each screenshot.

 

Simple start

As a simple start to functional programming in ABAP, let's calculate the sum and the product of the values of an internal table. The code snippet below shows how this would be implemented in ABAP without any of the new language features. To calculate the sum and the product, one simply loops through the internal table and stores the calculation result in a temporary variable. It would of course be possible to calculate the sum and the product using a single loop. However, I used two loops to make the similarity to the functional implementation more obvious.

[Screenshot: report ZCD_SUM_AND_PRODUCT]

#10078959 - Pastie - zcd_sum_and_product
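Since the screenshot is not reproduced here, the classical implementation described above can be sketched roughly as follows (a reconstruction, not the original pastie; the table type and values are my own):

```abap
" Classical ABAP: loop over the table and accumulate in a helper variable.
TYPES tt_int TYPE STANDARD TABLE OF i WITH EMPTY KEY.
DATA(numbers) = VALUE tt_int( ( 1 ) ( 2 ) ( 3 ) ( 4 ) ( 5 ) ).

DATA(sum) = 0.
LOOP AT numbers INTO DATA(value_for_sum).
  sum = sum + value_for_sum.
ENDLOOP.

DATA(product) = 1.
LOOP AT numbers INTO DATA(value_for_product).
  product = product * value_for_product.
ENDLOOP.
```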

 

Using the REDUCE table expression, calculating the sum and the product can also be implemented as shown below.

[Screenshot: report ZCD_REDUCE_TEST]

#10078938 - Pastie - zcd_reduce_test
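The REDUCE version might look roughly like this (again a reconstruction, assuming the numbers table from the previous sketch):

```abap
" Functional style: REDUCE folds the table into a single value.
DATA(sum)     = REDUCE i( INIT s = 0 FOR n IN numbers NEXT s = s + n ).
DATA(product) = REDUCE i( INIT p = 1 FOR n IN numbers NEXT p = p * n ).
```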

 

Note that in this case there is not much difference in the amount of code between the two solutions. In fact, the classical solution using only one loop would be shorter than the functional one. The key advantage of the REDUCE operator is that it can be combined with other operators to create more expressive expressions.

 

Fizz-Buzz

The first slightly more advanced algorithm I implemented using a functional paradigm in ABAP is the Fizz Buzz Test. The Fizz Buzz Test is a simple programming exercise I have started to use in job interviews for developers lately. Many implementations of the Fizz Buzz Test in different languages are documented on Rosetta Code (http://rosettacode.org/wiki/FizzBuzz). The goal of this test is to:

"Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”."


The code snippet below shows the implementation of the Fizz Buzz Test in "normal" ABAP without any of the new ABAP features. In lines 6 to 9 an internal table is initialized with the numbers 1 to 100. After that, lines 11 to 22 show the core implementation of the Fizz Buzz Test.

[Screenshot: report ZCD_FIZZ_BUZZ]

#10076806 - Pastie - zcd_fizz_buzz


An alternative implementation of the Fizz Buzz Test using functional ABAP is shown below. It is immediately obvious that the second implementation is much more concise than the previous one. The implementation consists of a constructor expression for a string table (line 4). Using the FOR iterator expression, the values from 1 to 100 are passed to a COND expression. The COND expression uses a LET expression to create two local variables r3 and r5, which store the result of i MOD 3 and i MOD 5 respectively. Finally, the WHEN and ELSE clauses in lines 8 to 11 implement the core logic of the Fizz Buzz Test.

Besides being more concise, the functional implementation is in my opinion a lot cleaner than the classical one.

[Screenshot: report ZCD_FUNCTIONAL_ABAP]

#10076802 - Pastie - zcd_functional_fizz_buzz
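Based on the description above, the functional implementation can be sketched as follows (a reconstruction of the missing screenshot; line numbers will not match exactly):

```abap
" One constructor expression: FOR iterates over 1..100, COND picks the
" output, and LET precomputes the two remainders r3 and r5.
DATA(fizz_buzz) = VALUE string_table(
  FOR i = 1 UNTIL i > 100
  ( COND string( LET r3 = i MOD 3
                     r5 = i MOD 5 IN
      WHEN r3 = 0 AND r5 = 0 THEN |FizzBuzz|
      WHEN r3 = 0 THEN |Fizz|
      WHEN r5 = 0 THEN |Buzz|
      ELSE |{ i }| ) ) ).
```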

 

n!

Next I implemented the factorial function. The factorial of a positive integer n is defined as the product of the sequence n, n-1, n-2, ..., 1. Furthermore, the factorial of 0 is defined as 1. Using the COND and REDUCE operators, the factorial function can be implemented as shown below. This functional implementation nicely resembles the definition and is therefore, in my opinion, simpler to understand than the classical implementation.

Note that the COND operator is not strictly required: the REDUCE operator alone would also handle the special case of 0 correctly. Nevertheless, I included the COND operator in order to express the function definition more clearly in the code.

[Screenshot: report ZCD_FUNCTIONAL_FACTORIAL]

#10079013 - Pastie - zcd_functional_factorial
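A sketch of the factorial method as described above (a reconstruction; the method name and signature are assumptions):

```abap
" n! = 1 for n = 0, otherwise the product 1 * 2 * ... * n folded with REDUCE.
METHOD factorial. " IMPORTING n TYPE i RETURNING VALUE(result) TYPE i
  result = COND i( WHEN n = 0 THEN 1
                   ELSE REDUCE i( INIT p = 1
                                  FOR k = 1 UNTIL k > n
                                  NEXT p = p * k ) ).
ENDMETHOD.
```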

 

Fibonacci Sequence

The last algorithm I implemented using the functional operators is the Fibonacci sequence. I used a recursive approach for this implementation. The COND operator is used to represent the definition of the Fibonacci numbers (lines 13 - 18). In order to implement the recursive call I used a LET expression to invoke the Fibonacci function recursively (line 18). The two auxiliary variables x and y store the last and second-to-last elements of the Fibonacci sequence for n-1 (lines 19 and 20). Finally, the result of adding x and y is concatenated with the elements of the Fibonacci sequence for n-1 and returned (line 22).

[Screenshot: report ZCD_FUNCTIONAL_ABAP]

#10076876 - Pastie - zcd_functional_fibonacci
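A sketch of the recursive implementation as described (a reconstruction; the method name, the table type tt_int and the exact line layout are assumptions):

```abap
" Recursive Fibonacci: COND expresses the base cases, LET invokes
" the method recursively and extracts the last two elements x and y.
" tt_int is assumed to be defined as: TYPES tt_int TYPE STANDARD TABLE OF i WITH EMPTY KEY.
METHOD fibonacci. " IMPORTING n TYPE i RETURNING VALUE(result) TYPE tt_int
  result = COND tt_int(
    WHEN n = 0 THEN VALUE #( )
    WHEN n = 1 THEN VALUE #( ( 0 ) )
    WHEN n = 2 THEN VALUE #( ( 0 ) ( 1 ) )
    ELSE LET prev = fibonacci( n - 1 )
             x    = prev[ lines( prev ) ]
             y    = prev[ lines( prev ) - 1 ]
         IN VALUE #( BASE prev ( x + y ) ) ).
ENDMETHOD.
```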

 

What's next?

As stated in the introduction it is quite interesting to see what is possible with the new ABAP features. However, there are still some open questions:

 

1. Performance

In the examples above I didn't compare the performance of the classical ABAP solutions to the functional ones. It would be quite interesting to see how the different solutions compare with respect to performance. This might be the topic for a future blog post.

 

2. Productive usage scenarios

Another open question is which productive usage scenarios are suitable for applying the functional ABAP features. The examples above are implementations of mathematical algorithms. However, productive ABAP code usually deals with the handling of business objects (e.g. business partners). The first usage scenario that came to my mind is using the FILTER operator (not shown in the examples above, cf. FILTER expression) to filter an internal table after some select (e.g. select from BUT000 and then filter by business partner type). It would be interesting to find further usage scenarios in which a functional solution offers advantages over the classical one.

 

3. Future ABAP versions

It will be really interesting to see what future ABAP versions will offer. Maybe ABAP will in the future offer some of the functional features that are currently still missing.

 

Finally, I'd be interested to hear your opinion of the code examples I've shown above. Do you think this is something that should be used in productive code?

Christian

Why we must take Functional Programming seriously.


Preamble

I was in the process of responding to Christian Drumm's excellent blog on Functional Programming in ABAP, when I realised that my mental ramblings should really be put into a blog (or perhaps more of a brain dump).

 

Please read Christian's post for the context of the following comments...

 

I must say that over the last few years, I've somewhat lost touch with the latest language developments in ABAP, so it's nice to see that in a 7.40 system, ABAP has these additional features and that they're heading in the functional direction.

 

However, it does beg the obvious question though - why didn't ABAP have these features 20 years ago?  But anyway, they're here now, and that's a good thing.

 

Real Functional Programming in ABAP?

Well, yes but not entirely.


Until ABAP is able to treat a function as a 1st Class Citizen, then only partial progress can be made towards making ABAP a true functional programming language.  For those who are not sure, a "1st Class Citizen" is any object that can be created and deleted dynamically, can be passed as a parameter to a function or subroutine, and can be passed as the return value from a function.  An ABAP function module does not meet all of these criteria.


The underlying principle of functional programming is not simply being able to use the map or reduce functions, but rather the systematic application of referential transparency in the design of the software.

 

Adding various operators like COND, WHEN and LET certainly allows solutions to be programmed in a more concise, functional style; but this only permits an occasional, ad hoc use of referential transparency, rather than its systematic application.  Nonetheless, these additions to ABAP are a good step in the right direction and I'm sure people will find it very useful.

 

I'm sure the ABAP language developers are well aware of the need to create a unit of code that behaves as a 1st class citizen, and have already been thinking long and hard about whether and how such changes could be implemented.

 

Borrowing from Clojure?

Looking at the syntax and implementation concept behind the new ABAP operators like COND, WHEN and LET, they look very much like they've been lifted directly out of Clojure - which is certainly not a bad thing!


We just need to add operators like DEFN and FN and ABAP will have made the switch - but see the comments above about ABAP functions and 1st class citizens...

 

Functional vs Imperative Programming in General

I have been a long time advocate of the functional programming paradigm, and at the risk of sounding like a broken record, have been trying to convince people within not just SAP, but also the business programming world that this paradigm needs to be treated as a viable and beneficial development option.

 

However, the blunt fact of the matter is that there tends to be a large cultural divide between the functional and imperative programming camps.

 

I may use a little bit of exaggeration here, but this is only to highlight the polarities that exist between these two camps.  Each camp tends to view the other with suspicion on the basis that "we have the correct priorities" and that "they are focused on all the wrong things".

 

The business programming world (the imperative camp) is focused on writing software as a means to make money.  Yes, yes, they want to solve customer problems, but in a world governed by a market driven economy, at the end of the day, the share price is king.

 

Over in the functional camp, this programming paradigm is presented as an implementation of Alonzo Church's Lambda Calculus.  Here, the Platonic concept of "pure intellectual thought" reaches one of its pinnacles.  Pure functional programming languages such as Haskell have created rigorous implementations of this mathematical concept, and the result is a problem solving technique that, after you have first accepted the very restrictive modus operandi imposed by all functions needing to be pure (that is, they never have side effects), can produce solutions whose correctness can be determined with a very high degree of certainty.

 

Whilst there is far more to the situation than simply "making money" and "functional purity", when you talk to people from these two camps about their differences, the conversation tends to come back to the importance of one or other of these priorities.

 

Why are the two camps so polarised?

Over in the Functional Camp

A large part of the polarisation here comes from the fact that many universities still teach functional programming as just an implementation of lambda calculus; therefore, if you're not interested in mathematics, functional programming is of little use to you.

 

Having said that, universities are gradually seeing the need to change this approach.  I've spoken with Prof Kevin Hammond of the University of St Andrews in Scotland about this, and he is actively working to join up the worlds of business programming and functional programming.  However, in general, functional programming is not really presented as a viable solution to the practical problems seen in the business world - such as creating warehouse stock management systems or running your accounting system.  Instead, students of functional programming are often told to write their own language interpreter that outputs an abstract syntax tree, or to design an algorithm that decides whether a graph tour can be performed in which each edge is visited only once.

 

That's all well and good, but it doesn't prepare students for work in the business software world.  Consequently many students are implicitly taught to exclude this development paradigm as a viable option in the hard-nosed world of business (where success is measured solely in financial terms (Is this a good thing? Discuss))

 

For those software developers who do enjoy the supposed purity of mathematical thought, there is often the sense that being able to write in the functional programming style makes one intellectually superior to the lowly imperative programmers.

 

Well, there's a small element of truth to this argument, but a larger element of blindness caused by intellectual snobbery.  This blindness then causes otherwise intelligent people to suggest that all business software is written by money-grubbing companies interested only in meeting their greedy revenue targets. (Yes, one functional programmer I spoke to a while back implied that SAP's market dominance meant I worked for "The Great Satan")

 

Meanwhile, back in Gotham City... uh sorry, The Imperative Camp

From the imperative side of things, this programming paradigm is easier to pick up than functional programming in the same way that the mechanical engineering solution to a problem is "visible" (you can see wheels and gears turning, drive shafts spinning and valves operating) while the electrical engineering solution to the same problem is "invisible" (you can see the resistors, capacitors and diodes on a circuit board, but you cannot see the current flowing through those components).

 

There also tends to be the overriding attitude that "time is money"; therefore, software must be pushed out the door as fast as possible to avoid losing (the perception of) market share.  It is certainly true that from time to time, software companies push software out the door way too early (this is the "if it compiles, ship it" approach), but this is generally avoided.

 

Similarly, the business programming world must avoid the smugness of thinking that they deal with the "real" world, and dismiss the functional crowd as just a bunch of intellectual snobs who live an abstract world completely divorced from, and irrelevant to anything down here on terra firma.

 

Again, there's a small element of truth here, but a larger element of blindness caused by thinking that success is measured solely in financial terms.

 

Can the two camps co-exist?

Yes, certainly.  And I believe the first step that must be taken is to remove the sectarian boundaries that have been erected to protect the supposed rightness of each camp's particular point of view.

 

Both sides can benefit from each other by exchanging techniques that they have found to be successful.

 

On the imperative side, the software needs to meet the customer's needs at a price the market is willing to pay.  When a company does this well, that company makes money.  But making money is by no means the sole measure of success.

 

And on the functional side, the use of functional programming techniques often allows for a more concise definition of the solution. Also the correctness of that solution can typically be determined with a higher degree of certainty than is possible with the imperative approach; however, the rigour with which the policy of functional purity is applied should not override the other factors inherent to running a business.

 

The division between the camps is certainly less than it was 10 years ago, but it still tends to be true (at the macroscopic level at least) that when these two camps come into contact, they generally regard each other with suspicion, throw a few shallow, poorly considered insults over the fence, then as they leave, they congratulate themselves for having the "correct" approach to software development and get back to business as usual.

 

However, I'm happy to say that developers on the imperative side of things are starting to look more closely at the functional way of doing things, and that the universities are trying to tailor their functional programming courses to include business software as a target problem domain.

 

Unfortunately, there are some other factors that reinforce this cultural divide.


Functional Programming at SAP

SAP is already a highly successful software company, and none of that success is attributable to code written in a functional programming language, or even in the functional programming paradigm.  This leads to two further problems; and I must stress that these problems are by no means unique to SAP, but apply generally to all fields where expertise has led to success:

 

  1. The "inertia of expertise" problem.  This is a well-known phenomenon in which genuine expertise in a subject has, on the one hand, produced genuine breakthroughs and resulted in the creation of valuable and highly successful solutions; yet on the other hand, that very success creates a large mental inertia that makes it very hard for experts to seriously consider the possibility that other, equally viable solutions may also exist.
  2. The "innovator's dilemma".  This is also a well known problem that can be thought of as the flip side of the "inertia of expertise" problem.   This is where it is very difficult (or even impossible) to innovate in an environment within which complex problems have already been solved, and solved effectively.  Existing ABAP systems contain many highly successful solutions for solving business problems, so the overriding mindset is "if it ain't broke, don't fix it".  The resistance to the use of innovative languages is compounded by the fact that the solution created by the innovative approach will often not be any better than the existing solution.  Thus the person trying to introduce a new way of solving an old problem is often viewed as someone who "doesn't really get it" because "that problem has already been solved - move on!".  This makes indifference to innovation appear entirely plausible.

 

Having said that, the functional programming language R is used within the HANA system, but at the moment there are no tools by which a customer or partner could extend or develop their own R procedures.

 

I strongly believe that if the business software world (SAP) is to survive the next major phase shift in computing technology (Cloud computing), then at least two things need to change.  By taking the perspective of an ABAP system, we could rather loosely describe these as "changes from within" and "changes from without".

 

Changes from within (The Natural Growth Approach)

Adjust the ABAP programming language so that true functional programming can be performed.  This will require ABAP to be able to handle functions as 1st class citizens; however, since I am not an ABAP language developer, I don't know whether this proposal is even viable.

 

Assuming that such a change is viable, then the syntax that has already been used by the new COND, WHEN and LET operators could be extended to create a language that looks and behaves very much like Clojure.  (Though there are other, much deeper issues at stake here, particularly when handling concurrency.)

 

However, this change (assuming its even possible) will be the slow option because it relies on the natural growth of ABAP software from within the developing system.  In realistic terms, natural growth like this takes 3 to 5 years to become an integral part of the system.

 

Changes from without (The Innovative Approach)

Due to the "Innovator's Dilemma", the adoption of functional programming as a way of life cannot take hold in any area where mature solutions already exist.  Therefore, new programming paradigms must occupy new problem domains.  This avoids the plausible accusation of trying to re-invent the wheel.

 

I believe the area where this approach has the most potential is in the Internet of Things.  This is a wide open field in which mature solutions do not yet exist.  Therefore, the possibilities exist to create cloud-based software based on the functional programming paradigm that can handle the huge volume of data and connections demanded by a connected world.

 

I further believe this paradigm shift is absolutely necessary for the following reason:

 

To survive the IoT, we must be highly scalable and be able to support massive levels of concurrency

Sorry to burst anyone's bubble here, but if you think that concurrency and scalability are topics that can be handled after you've got your business app up and running, then you'll be in for a very nasty surprise come go live...

 

Scalability can be solved by throwing more hardware at the problem, but concurrency is a much thornier problem.  Concurrency poses a different type of challenge because it requires us to fundamentally change the way we write software.  But there is a natural human resistance to change - especially if we have existing solutions that have worked well in the past.  Sorry, but yesterday's imperative solutions will not solve tomorrow's concurrency problems.

 

Concurrency is of fundamental importance to any business scenario that expects to perform real-time communication with the number of client devices expected in a realistic IoT solution.  These devices could either be mobile smart phones or static devices such as pieces of machinery on a production line or low-energy Bluetooth beacons.

 

The exponential growth in computing device numbers means that any business involved in the IoT will, in the reasonably near future, have to handle in excess of a million concurrent connections as its normal runtime state, rather than as a peak-load exception.  From a database perspective, a HANA system can handle the number of queries generated by such levels of concurrency, but how about all the intermediate systems that connect the outside world with your backend HANA system?

 

A totally new approach is needed for the communication interface between the SAP world and the gazillions of devices all screaming for a timely response from the backend business system.  Concurrency is an area where functional programming can provide a solution with far less development effort than would be required if an imperative language were used.

 

Therefore, it is of the greatest importance to choose both a programming language and paradigm that will contribute towards the solution and not become part of the problem.  Such a language must not contain any unnecessary complexity, for as Tony Hoare has said:

 

"If our basic tool, the language in which we design
and code our programs, is also complicated,
the language itself becomes part of the problem
rather than part of its solution."

 

Ignore this advice at your peril...

 

Therefore when writing a communication interface for the IoT, it makes perfect sense (to me at least) to choose a language in which concurrency is a natural feature, rather than an additional construct that must be implemented in that language.  Here, the obvious language choices would be either Erlang or Clojure - both of which are functional programming languages and are both very good at concurrency.

 

For the specific task of an IoT communication hub, my first choice would be Erlang, due to the fact that this language is already battle-hardened from over two decades of use in the telecoms industry.  Among other things, Erlang was specifically designed to handle the massive levels of concurrency experienced by a telecoms switch, so this makes it an ideal candidate.

 

If you're still sceptical about the commercial viability of a language like Er...whats-it-called (Erlang), then consider that WhatsApp is written entirely in Erlang and they regularly handle more than 200,000 messages per second.  See here for reviews and technical details.

 

Also Massively Multiplayer Online games such as the different Call of Duty variants all use Erlang servers to handle the client-to-client transfer of game state information.

 

Whilst the idea of using a niche language like Erlang for handling massive levels of communication concurrency might scare you, it really is the best choice for this particular problem domain.

 

However, if you want a more general purpose language that runs on the more familiar environment of a Java Virtual Machine, then Clojure would be a more comfortable choice.  This language is also very good at handling concurrency and since it compiles down to Java byte code, can easily be understood by experienced Java developers.

 

The choice to use either of the languages mentioned here (or any functional programming language for that matter - Haskell, Scala, OCaml etc.) requires a willingness to move outside our comfort zone.  Generally speaking, this willingness is not always forthcoming until we have first experienced the failure that comes from thinking that old solutions will solve new problems (i.e. trying to rationalise not having to move outside our comfort zone).

 

The imperative programming paradigm cannot solve the concurrency issues raised by having to support massive levels of simultaneous connections.  If you don't believe me, go and write a solution in <pick_your_favourite_imperative_programming_language> and see how much scalability you can practically achieve.  After this painful experience of blood, sweat and tears, I venture to say you will be more willing to consider a solution based on the functional programming paradigm. 

 

 

So what? What does this have to do with SAP software?

With the whole computing industry moving towards Cloud-based solutions, providing massively scalable, centralised communication access to your backend system/s is fast becoming a make-or-break topic.

 

The potential exists to use something like a Cloud Foundry Custom Buildpack to implement an IoT communication solution based on any programming language you like.  This in turn could be used to communicate with a HANA system...

 

The field is wide open here.  Let's seize this opportunity to create new solutions for the new environment, and not make the mistake of thinking that our comfortable, old solutions will be suitable - they won't.

 

Chris W

ABAP Obfuscation Riddle



Tobias Trapp wants to start an ABAP obfuscation contest.

 

To warm up, a small riddle:

 

INCLUDE

 

NOT. IF
NOT  NOT  NOT  NOT  NOT  NOT  NOT  NOT  NOT
NOT  NOT !NOT  OR   NOT  NOT  NOT  NOT  NOT
NOT  NOT  NOT  NOT  NOT  NOT  NOT  NOT  NOT
NOT  NOT  NOT=>NOT( NOT ) OR  NOT  NOT  NOT
NOT  NOT  NOT  NOT  NOT  NOT !NOT  ...  NOT.

 

 


This program is syntactically correct (as of 7.40, SP08).

 

 

Now tell me: how can that be?

Using SALV to show a simple popup error log


Sometimes we need to show an error log with multiple lines, and a popup window is a very good option here.

Using the CL_SALV_TABLE class, we can achieve this requirement in a very easy way.

 

Here is some sample code that opens a popup window with a few records, each with an icon and a status message.

 

TYPES: BEGIN OF ys_log,
         icon    TYPE c LENGTH 40,
         message TYPE c LENGTH 200,
       END OF ys_log.

" Local variables
DATA: ls_log     TYPE ys_log,
      lt_log     TYPE TABLE OF ys_log,
      lr_table   TYPE REF TO cl_salv_table,
      lr_display TYPE REF TO cl_salv_display_settings,
      lr_columns TYPE REF TO cl_salv_columns,
      lr_column  TYPE REF TO cl_salv_column.

" Sample data
CLEAR ls_log.
ls_log-icon    = icon_led_green.
ls_log-message = 'Status OK'.
APPEND ls_log TO lt_log.

CLEAR ls_log.
ls_log-icon    = icon_led_red.
ls_log-message = 'Errors'.
APPEND ls_log TO lt_log.

" Create ALV table
TRY.
    cl_salv_table=>factory(
      IMPORTING
        r_salv_table = lr_table
      CHANGING
        t_table      = lt_log ).
  CATCH cx_salv_msg.
ENDTRY.

lr_display = lr_table->get_display_settings( ).
lr_display->set_list_header( 'Error Log' ).

" Set columns
lr_columns = lr_table->get_columns( ).
lr_columns->set_optimize( 'X' ).

" Change texts if needed
TRY.
    lr_column = lr_columns->get_column( 'ICON' ).
    lr_column->set_short_text( 'Status' ).
  CATCH cx_salv_not_found.
ENDTRY.

TRY.
    lr_column = lr_columns->get_column( 'MESSAGE' ).
    lr_column->set_short_text( 'Message' ).
  CATCH cx_salv_not_found.
ENDTRY.

" Show as a popup
lr_table->set_screen_popup(
  start_column = 1
  end_column   = 50
  start_line   = 1
  end_line     = 10 ).

" Display ALV
lr_table->display( ).

Regards,

Frisoni

There is a Monster Under my Bed


There’s a MONSTER under my bed!


Writing a Book for SAP Press – Part Three

Table of Contents

Background

Unit Testing

Static Code Checks

To Be Continued


Who’s that watching Benny Hill – Is it a Monster? Is it a Monster?


The other day I published my second blog about writing a book for SAP Press  - a book which primarily focuses on monsters but may possibly illustrate that subject by talking about all the latest ABAP technology.


http://scn.sap.com/community/abap/blog/2015/04/07/whats-that-coming-over-the-hill-is-it-a-monster

 

My old matey Martin English – whom I shall be having a quick drink with at the Mastering SAP Technology event in Melbourne at the end of May 2015 – asked if I was ever going to shut up about this. Sadly the answer is no, not until I have finished documenting the thinking behind everything in the book, so ha ha ha ha, hard luck.


He also wondered if I was going to put all the best bits from the book online. Now that would be silly. I often do silly things, virtually all the time in fact, but I am not going to do that one. The author of the recent book “SAP Nation” Vinnie Mirchandani wrote that he likes to put 20% of the content of his books online. In my case SAP Press already give out a sample chapter, so that is enough to be going on with. Instead I am going to add in material that was cut out of my book for reasons of space.


Chapter and Verse


In the last blog I mentioned the thinking behind the first two chapters. To summarise…..


The first chapter was all about Eclipse,

Because that is so Cool and Hip.

The second was on new ABAP Features,

To show I practice what I preaches.


Going back to prose for a second I was trying to follow the logical flow of writing a program. The first step is choosing your development environment; the second is to make sure you are aware of all the new ABAP features at your disposal. The third step – and therefore the third chapter - is something more often seen in non-ABAP languages and is something I like to waffle on about like a broken record till you can’t take it any more.


How evil are YOU?


The first thing you are supposed to do when writing a new application is to write the tests before you write any actual code. A few years back I wrote a whacking great blog on the subject:-


http://scn.sap.com/community/abap/blog/2013/04/18/are-you-writing-evil-abap-code

 

I am an enormous fan of test driven development in general, and its manifestation in SAP, namely ABAP Unit. This is not just an academic position: I write unit tests for all my new developments and retro-fit them when I make changes to old programs, and this has saved my bacon many times. What I mean by that is that the automated tests point out the schoolboy errors in my programs well before anyone else can find out about them and laugh at me / sack me / whatever.


Yesterday, for example, I spent all morning writing unit tests for a business critical program. I would say that as a result I made the application much more stable, but a lot of people would say I was wasting my time when I could have been doing something worthwhile. How can this be? Why do some people, the instant they realise what ABAP Unit is, say "oh my god, don't do THAT!"?


Is this the right room for an argument?


The ABAP Unit framework came in with ECC 6.0, which entered ramp-up in October 2005. So we are looking at something that will be ten years old in the near future. If this approach and supporting framework are as good as I (and all the experts I keep quoting) keep saying they are, then you would think that everyone would be using them by now, and they would have no place in a book about the "latest" SAP developments. Yet it has been my experience that in fact very few ABAP developers use ABAP Unit at all, and a surprising number are utterly unaware of its existence.

 

Even worse, some development organisations have an outright ban on this approach. We now look at two common arguments against the test driven development methodology.

 

There are two possible groups who can raise objections:-

·         The developers themselves

·         IT Management

 

Developers Won't Use It / Don't Like It

 

At a demonstration at my company in 2013 none of the other programmers had heard of ABAP Unit and my boss asked me if this was open source software like ABAP2XLS that I had found somewhere on the SAP Community Network web site.

 

So one reason ABAP Unit is not used as much as might be expected is that a lot of people don't know it's there. The other side of the coin is that even when they do know about it, you have the "you can lead a horse to water but you can't make it drink" problem. This is when the developer has a quick look at the tool, decides it doesn't add any value, and so disregards it.

 

When I started looking at test driven development a few years back I thought: "What a waste of time. 95% of my programs involve reading things from the database, updating other database tables, and then showing the result to the user. If you fake all of this using stubs, what in the world is the benefit?"

It turns out that the remaining 5% of code is riddled with bugs, and this shows up instantly during unit testing. If you don't believe me, try it yourself and you will most likely discover the same thing on the very first attempt. Many times I have thought "this routine has only three lines of code - so it cannot possibly be wrong" - yet it was. In the ABAP2XLS project unit tests have started to be introduced, and there was a test on a three line method that failed - a null value was passed in and the test was that a null value was passed out, except that it wasn't.

 

Just to be clear about this, I think that any developer using test driven development and ABAP Unit for the first time on an actual program they are writing for productive use will discover a bug they would not normally have found until weeks later - within the first half hour, quite possibly the very first time they choose the Test • Unit Test menu option.
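As a concrete illustration, here is a minimal ABAP Unit test class. The class under test (zcl_monster_calculator) and its method are invented for this sketch; the FOR TESTING syntax and CL_ABAP_UNIT_ASSERT are the standard framework parts.

" Hypothetical class under test: zcl_monster_calculator, with a method
" double_it( ) that should return twice the number passed in.
CLASS ltc_calculator DEFINITION FOR TESTING
  DURATION SHORT
  RISK LEVEL HARMLESS.
  PRIVATE SECTION.
    METHODS double_of_two_is_four FOR TESTING.
ENDCLASS.

CLASS ltc_calculator IMPLEMENTATION.
  METHOD double_of_two_is_four.
    DATA(lo_cut) = NEW zcl_monster_calculator( ).
    " The assertion fails the test run if actual and expected differ
    cl_abap_unit_assert=>assert_equals(
      act = lo_cut->double_it( 2 )
      exp = 4
      msg = 'Two doubled should be four' ).
  ENDMETHOD.
ENDCLASS.

Running the Unit Test option executes every FOR TESTING method and reports any failed assertion - which is exactly how those "cannot possibly be wrong" three-line methods get caught.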

 

This is literally a product that sells itself (if you can get someone to try it), so developers are easy to convince, but then we come to the next barrier, which is management...

 

Some Things Just Can’t Be Tested


How can you do a unit test on say, a part of a program that has a pop up box requiring user input? It’s impossible surely?

Well yes, you do not have a user. The trick is to alter the program being tested so that all user interface code and screens (the user interface layer) are in their own class. Unit tests cannot have any user output at all, not even messages, so a unit test cannot call any code which might do a CALL SCREEN or a MESSAGE statement.

 

SAP have not yet worked out a way to reconcile the “call screen” statement which all GUI user interfaces revolve around with OO principles. The recommendation is to put your screen in a function module, and then wrap that in a method so it can be subclassed for unit testing. That’s a lot of work when creating programs when just doing a CALL SCREEN or MESSAGE statement is so easy, for what seems like zero return and you may wonder – is it really worth it? I would say yes, the benefit of being able to test something you could never test before (a part of the program which involves user interaction) outweighs everything.
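One common way to achieve this separation (all names here are invented for illustration) is to hide the MESSAGE statement behind an interface, so that production code uses a GUI implementation while a unit test injects a test double that merely records what would have been shown:

" Sketch only: an abstraction over the MESSAGE statement
INTERFACE lif_message_service.
  METHODS show_error IMPORTING iv_text TYPE string.
ENDINTERFACE.

CLASS lcl_gui_messages DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_message_service.
ENDCLASS.

CLASS lcl_gui_messages IMPLEMENTATION.
  METHOD lif_message_service~show_error.
    MESSAGE iv_text TYPE 'E'.  "Real UI output - never called in a test
  ENDMETHOD.
ENDCLASS.

CLASS lcl_mock_messages DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_message_service.
    DATA mv_last_message TYPE string READ-ONLY.
ENDCLASS.

CLASS lcl_mock_messages IMPLEMENTATION.
  METHOD lif_message_service~show_error.
    mv_last_message = iv_text.  "Just record it, so the test can assert on it
  ENDMETHOD.
ENDCLASS.

The production code only ever talks to LIF_MESSAGE_SERVICE, so the unit test can pass in LCL_MOCK_MESSAGES and then assert on MV_LAST_MESSAGE without any GUI being involved.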

 

It Takes Too Long


The only way to make your programs absolutely bullet-proof is to make sure every single line of code gets tested – every variation of every IF statement, every CASE statement etc.

 

This really means the amount of tests you have to write is directly proportional to the amount of production code you are testing, and because of the code needed to set up the tests and evaluate them afterwards that proportion is greater than 100%.

 

There are tools to reduce the amount of test code, but it could still be the case that the amount of test code you need to write is at least equal to the amount of code in the program being tested, probably more. So when you are done there will be more test code than production code.

 

You try telling that to an IT manager and see what happens. It is difficult enough getting a program working before the incredibly tight deadline as it is, and now we are talking about doubling the amount of code. That means doubling the development time doesn’t it? Doubling the cost? That would imply missing the deadline and coming in well over budget.

 

If that was true then test driven development would never get off the ground, management would ban the entire concept the instant they heard about it – and some managers do just that.

 

Here are two arguments you can bring to bear to counter the – inevitable – criticism from people who have never heard of the concept of automated unit tests.

 

Unit Testing’s impact on maintaining existing developments


If someone were to ask "what is it exactly that programmers do?", after the swearing and derogatory comments died down the answer would probably come back as "they write new computer programs". That sounds correct, but anyone who has spent any time at all programming would realize the correct answer is "a programmer spends 5% of their time writing new programs (the interesting bit) and 95% of their time fixing bugs and enhancing existing programs".

 



Development time vs Maintenance time over total life cycle


The development manager at our company was amazed to hear this; he thought I had made this up after being hit on the head or something, and only after every programmer on the team voiced their agreement did he come to the conclusion this might possibly be true.

 

A huge debate raged about this on the internet – people were saying that if you spend 95% of your time fixing existing programs then it must mean the programs were really badly written in the first place.

 

I used the analogy of the SAP HR module – every year in every country the governments change the laws and taxes governing payroll, and SAP has to modify their HR / Payroll programs to reflect the legal changes. It does not matter if the program was written badly or well – the law is the law and SAP has to change these programs, every year, forever. Thus, over a (say) 20 year period the percentage of maintenance effort as opposed to the effort of originally writing the programs climbs to near 100%.

 

Not everything changes that much, but we have found that any custom program in heavy use is subject to a non-stop stream of enhancement requests from the users. No-one is ever content – it is human nature and good job too, otherwise as programmers we would be out of a job – and so it could be argued the only reason you are NOT getting lots of requests to add new features to any given program is because no-one is using it.

 

If you accept that you will spend the vast bulk of your working life maintaining existing programs, then your main day to day problem becomes this – how to change an existing program that works perfectly without breaking it.

 

That’s where the unit test framework comes in – traditionally you change one area and then break other areas, and either people have to manually test the vast array of different things the program does to ensure nothing has broken, or the end users find out in production, and you have to spend hours debugging to find out what in the world went wrong.

 

Nothing is ever a 100% silver bullet, but with automated unit tests you can see if any area of the program unrelated to your change has broken, five seconds after making your new change.

 

The argument is that the amount of time saved on maintenance over the entire lifecycle of the program more than outweighs the extra time needed at the start. The problem is that accepting this involves thinking in the medium to long term instead of just the immediate short term, and we seem to be living in a short term focused world.

 

It might be that you are a start-up and if the program is not finished really quickly then the company will go belly up and there will not be a medium or long term to worry about, in which case you might think the whole argument would fall on its face. That brings me to my second argument against test driven development taking too long.

 

Unit Testing’s impact on new developments


Traditionally if you had a really large development, you could only be sure it was working properly when you had finished a large chunk of it, and were then able to run it in the development environment yourself, doing what your end user was supposed to do and seeing what happened. Usually a short dump was the result, or all sorts of weird results, certainly nothing like what you were expecting.

 

The test driven development methodology turns this on its head – by the time you get to that stage you have a big bunch of tests covering every line of code you have written, so if something is wrong you will have found out and fixed it long before the stage where you try and simulate what the end user would do.

A lot of the time the development system has no meaningful data, or is not hooked up to an external system, so you wait till the program is in QA before you can do a meaningful test. Then you (or the testers) find the problem, and you fix it in development and have to move it to QA to try again. If you could resolve a lot of those problems before the program got moved to QA you would save a tremendous amount of time.

 

So my argument would be: by writing twice as much code you actually get a working product (program) out of the door faster. That may sound difficult to believe, but all you have to do is try this approach once and see whether what is described above is true or not. I would hope you would be pleasantly surprised.

 

Code Quality Street


If the third chapter was going to be on the sort of dynamic testing that is represented by Unit Tests, then the fourth chapter would cover static code checks.

During upgrades SAP are always telling us to get rid of as much custom code as we can – that is primarily because so much custom code never gets used after a very short while, but this can be perceived as an attack on the quality of us “customer” programmers.


It is easy as a non-SAP developer to get all offended by this, especially since standard SAP code is 100% visible and sometimes schoolboy errors are spotted by us “customers” which then get corrected by OSS notes - leading to the observation “people who live in glass houses should not throw stones”.

SAP do have a point, however: many companies have got into serious trouble over custom code quality. The reason is unclear, but possibly it is down to the fact that a lot of programmers started without any formal training in ABAP (I am an example), because they knew other programming languages, or because they were experts in other areas of SAP (or even the business) and migrated to ABAP programming, or, in my case, because I was in the right place at the right time.

 

A Code Inspector Calls


We are all familiar with static code check tools like the extended syntax check and the code inspector, but do you ever find that some of your developers either can’t be bothered and/or think the whole idea is a waste of time?


I once recall being told by a programmer that a gigantic program was maintained by the whole programming team, and the person telling me the story got to make a change about every four months. After they had made their change they did a SLIN/SCI exercise and found dozens of warnings and errors that had popped up over the intervening period when other programmers who did not use the checks had made changes. The icing on the cake was that the programmer who did use the code inspector was not allowed to fix the other problems because they did not relate to the change at hand.

 

SAP had clearly thought about this sort of situation and as a result the ABAP Test Cockpit was born, which became the subject of my fourth chapter. A lot of people find it difficult to get their head around what this actually is and what the difference between the ATC and a code inspector with more checks is.

As mentioned just now, some developers don't bother with the static code check tools. They sometimes justify that by saying that those tools can come up with so many false positive errors that you can't see the wood for the trees.


Pragma – we’re all Crazy Now!


You may have seen the t-shirts where you have the familiar picture of a monkey on all fours behind a monkey walking on his hind legs following a Neanderthal behind a human who then turns round and says “you lot - stop following me!”

 

That was all about evolution, and in regard to false positive errors SAP’s handling of these has evolved from “Pseudo Comments” via “Pragmas” to “exemptions”. All three follow the same premise but become more informative each time.

 

With a “pseudo comment” in the code the warning/error went away but all a later reader of the code knew was that the developer thought this was not a real error. The practice of some developers to use the technique to suppress every single reported error did not help matters much either.

 

With “pragmas” you could have a real comment (on the same line) after the pragma to say why the error is being suppressed. That was a big step forward, and with exemptions (such as you find in the ATC) you can write a war and peace length explanation as to why this is not a real error.
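The syntactic difference is small. As a sketch, using the "variable declared but never used" check as an example ("#EC NEEDED is the pseudo comment and ##NEEDED the corresponding pragma):

" Old style: pseudo comment. The warning goes away, but a later reader
" only knows the developer thought this was not a real error.
DATA lv_flag TYPE abap_bool.                 "#EC NEEDED

" Newer style: pragma, with a real comment on the same line explaining
" why the warning is being suppressed.
DATA lv_status TYPE abap_bool ##NEEDED.      " Filled dynamically via RTTS

The pragma version carries its justification with it, and an ATC exemption can then expand on that justification at any length.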

 

Big Brother is Watching YOU


The ATC is based on the idea of some sort of "Quality Manager" with horns and fangs who has to keep the programmers in line by beating them with a stick every time they make one of their numerous mistakes.


This is the so-called "four eyes" principle, where a person other than the developer reviews the code for correctness. That is all well and good, but a proper "peer review" system, where another programmer has to check the code before the transport gets released, is better in my opinion.

 

That may be more manual, but it keeps the development team informed as to what their colleagues are doing, and less experienced programmers can learn from more experienced ones. A peer review can involve using the code inspector, just to make sure everyone is using it.

 

Naturally there is nothing against doing both peer reviews and having the SAP recommended ATC Quality Manager / Demon with a Pitchfork approach, as long as the Quality Manager actually knows something about programming. Hopefully all the Quality Manager would have to do is review the exemptions, as any real errors would have been picked up by:

 

·         The developer doing their own ATC checks on their own objects in development

·         Peer Reviews

·         The transport system displaying warnings and errors before releasing the transport to test

 

Nonetheless having a good explanation of false positives in the SAP system itself can only be a good thing, as it stops someone coming along later and trying to correct the “error”.

 

To Be Continued


To end with the same paragraph as before, in subsequent blogs I will continue to talk about the thought process behind how I chose what topics to include in the book, enriched with some content that was cut from the final version of the book for reasons of space.


Cheersy Cheers

 

Paul

 

https://www.sap-press.com/abap-to-the-future_3680/

 

 

Debugging Any program via User parameters


This is just a step towards thinking about how we allow programs to be debugged; it is one of the ideas I have come to think over. We keep writing lots of code, but in the end we need to debug it, especially in the production system, so debugging becomes an important issue.

 

DEBUGGING CONFIGURATION VIA USER PARAMETERS

 

Step 1 - We will enable the program to be debugged through the user parameters.

 

This way we can add or remove the parameters and enable the code to be debugged in any system.

Let's say we create parameters like debug level, debug transaction code, and debug program name.

 

Here is a code snippet:

CONSTANTS: c_param_debug_level TYPE memoryid VALUE 'DEBUG_LEVEL',
           c_param_debug_tcode TYPE memoryid VALUE 'DEBUG_TCODE',
           c_param_debug_prog  TYPE memoryid VALUE 'DEBUG_PROG'.

 

We will put a debug point in the program, something like this: _debug_level 1. For more detail, please see the implementation code I will attach.

 

Step 2 - We need to create a function or an object that reads the parameters, checks the debugging level, and loads all the data required for debugging.

 

This should be called before everything else in the code, for example in the INITIALIZATION section of a standard report.

In the code below, this class is called zcl_debug.

 

Step 3 - We need to write a macro in the code we want to debug, so that we can stop execution at that point.

Why a macro? Because a macro lets us place a BREAK-POINT, and since macros cannot themselves be stepped through in the debugger, they are the best candidate for this.

 

Code snippet - writing the macro:

 

DEFINE _debug_level.
  IF zcl_debug=>get_instance( )->is_debug_level_correct(
       iv_debug_level = CONV string( '&1' ) ) = abap_true. "#EC USER_OK
    BREAK-POINT.                                           "#EC NOBREAK
  ENDIF.
END-OF-DEFINITION.
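The implementation of zcl_debug itself is in the attached code. Purely as a sketch of the idea (the method names match the snippets above, but the body below is my guess at one possible implementation), the singleton could read the user parameter with GET PARAMETER and compare levels:

" Sketch: a singleton that checks the DEBUG_LEVEL user parameter (SU3)
CLASS zcl_debug DEFINITION CREATE PRIVATE.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      RETURNING VALUE(ro_instance) TYPE REF TO zcl_debug.
    METHODS is_debug_level_correct
      IMPORTING iv_debug_level   TYPE string
      RETURNING VALUE(rv_result) TYPE abap_bool.
  PRIVATE SECTION.
    CLASS-DATA go_instance TYPE REF TO zcl_debug.
ENDCLASS.

CLASS zcl_debug IMPLEMENTATION.
  METHOD get_instance.
    IF go_instance IS NOT BOUND.
      go_instance = NEW #( ).
    ENDIF.
    ro_instance = go_instance.
  ENDMETHOD.

  METHOD is_debug_level_correct.
    DATA lv_level TYPE string.
    " Read the user parameter maintained in the user master record
    GET PARAMETER ID 'DEBUG_LEVEL' FIELD lv_level.
    " Break only when the configured level covers the requested one
    " (simple string comparison - fine for single-digit levels)
    rv_result = boolc( lv_level >= iv_debug_level ).
  ENDMETHOD.
ENDCLASS.

The real class would also check the DEBUG_TCODE and DEBUG_PROG parameters against sy-tcode and sy-repid before allowing the break-point.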

 

Step 4 - How can we configure the level? We could use a configuration table, but that complicates the debugging and is most often forgotten. So why not use a popup screen where we can change the debugging level dynamically?

 

DEFINE _debug_configure.
  zcl_debug=>get_instance( )->configure_debug_level( ).
END-OF-DEFINITION.

 

Step 5 - The final look of report zxxx:

 

REPORT zxxx.

DEFINE _debug_configure.
  zcl_debug=>get_instance( )->configure_debug_level( ).
END-OF-DEFINITION.

...

INITIALIZATION.
  _debug_configure.

START-OF-SELECTION.
  _debug_level 3.
  ...

 

In this way we can make the code debuggable, which will make our lives much easier.

 

 

If you have any suggestions, please feel free to share them.

Collection of our best table expressions


This post is our little collection of useful table expressions used in the SAP Transportation Management context. The examples are domain specific, but maybe others can use the principles as well.

Special thanks to Horst Keller for helping us getting started!

 

A simple one to get started:

 

Extract the latest timestamp from a table

 

lr_tor_a_checkin->arrival = REDUCE /scmtms/actual_date(
  INIT res = lr_tor_a_checkin->arrival
  FOR <fs_exec> IN lt_d_execinfo
  NEXT res = nmax( val1 = res
                   val2 = <fs_exec>-actual_date ) ).

Now for something a bit more complex...

Extract the earliest and latest timestamp out of a table with multiple timestamp fields in it

 

 

 

TYPES: BEGIN OF ty_min_max_times,
         min_time TYPE /scmtms/timestamp,
         max_time TYPE /scmtms/timestamp,
       END OF ty_min_max_times.

CONSTANTS: lc_max TYPE timestamp VALUE '29993112000000'.

ls_min_max_times = REDUCE ty_min_max_times(
  INIT res = VALUE ty_min_max_times( min_time = lc_max
                                     max_time = 0 )
  FOR <fs_stop> IN it_stop
  USING KEY parent_key
  WHERE ( parent_key = is_root-key )
  NEXT
  res-max_time = nmax( val1 = res-max_time
                       val2 = <fs_stop>-appointment_end
                       val3 = <fs_stop>-aggr_assgn_end_l
                       val4 = <fs_stop>-aggr_assgn_end_c
                       val5 = <fs_stop>-req_end
                       val6 = <fs_stop>-plan_trans_time )
  res-min_time = nmin( val1 = res-min_time
                       val2 = COND timestamp( WHEN <fs_stop>-appointment_start  IS NOT INITIAL THEN <fs_stop>-appointment_start  ELSE lc_max )
                       val3 = COND timestamp( WHEN <fs_stop>-req_start          IS NOT INITIAL THEN <fs_stop>-req_start          ELSE lc_max )
                       val4 = COND timestamp( WHEN <fs_stop>-plan_trans_time    IS NOT INITIAL THEN <fs_stop>-plan_trans_time    ELSE lc_max )
                       val5 = COND timestamp( WHEN <fs_stop>-aggr_assgn_start_l IS NOT INITIAL THEN <fs_stop>-aggr_assgn_start_l ELSE lc_max ) ) ).

 

The underlying discussion can be found here.

Now, something more than just timestamps...

 

Extract the shortest duration together with another value which should come from the line with the shortest duration

 

   
DATA(ls_min_dur) = REDUCE ty_min_dur(
  INIT result = VALUE ty_min_dur( min_dur = 99999 )
  FOR <ls_lddd> IN mt_lddd
  USING KEY loc_fr WHERE ( loc_fr = iv_loc_key )
  NEXT
  result-loc_fr  = <ls_lddd>-loc_fr
  result-min_dur = nmin( val1 = result-min_dur
                         val2 = <ls_lddd>-duration )
  result-loc_to  = COND #( WHEN result-min_dur = <ls_lddd>-duration
                           THEN <ls_lddd>-loc_to
                           ELSE result-loc_to ) ).

Call HANA Orion API from ABAP


Some months ago Thomas Jung introduced the REST API for the SAP HANA Repository. I've implemented a client in ABAP that uses the REST API; it includes a simple GUI that lets you edit files on the HANA server.

 

editor.png

 

Clicking save will call the API and the code is saved on HANA.

 

web.png

 

I would not recommend using the ABAP editor for XSJS development, as it is just a simple text editor that doesn't know about JavaScript.

 

The following example shows how a file can be created from ABAP using the API:

 

DATA(lo_factory) = NEW zcl_orion_factory( iv_url      = p_url
                                          iv_user     = p_user
                                          iv_password = p_passw ).

DATA(lo_file) = lo_factory->file( ).

lo_file->file_create( iv_path = 'foobar/'
                      iv_name = 'asdf.txt' ) ##NO_TEXT.

lo_file->file_update( iv_path = 'foobar/asdf.txt'
                      iv_data = 'Hello' ) ##NO_TEXT.

 

The code doesn't work with HCP; I haven't had any luck getting past the SAML authentication from ABAP.

 

 

larshp/abapOrion · GitHub

The Loch Ness Monster


The Loch Ness MONSTER


Writing a Book for SAP Press – Part Four

Table of Contents

Background

Debugger Scripting

Enhancement Framework

Exception Handling

To Be Continued

Back Back Back Back, Back to the Start

I have been writing a series of blogs about writing a book for SAP Press, a book all about the woes of being a programmer in the employ of Baron Frankenstein. The latest ABAP features also get a look in as well. The last such blog was:-


http://scn.sap.com/community/abap/blog/2015/04/14/there-is-a-monster-under-my-bed

 

I am trying to get through this as fast as I can, but I have been delayed by a disaster. When I first joined SCN (SDN as it was then) back in about 2001, the only way to do so was by using your “S” number. The problem is, what if your S number ever changes e.g. you move jobs? In other words a disaster waiting to happen, especially now we have “gamification” which encourages us to get “points”.


I have not moved jobs, but my company did move from regional licences to a single global licence (3 years ago) which meant I got a new S number. I did not notice, but after 3 years SAP cancels dead S numbers, so that was that for my poor old SCN account.


It might seem childish to miss all those points I had amassed, about 3000 odd, and start again with the one point for setting up a new account, but it was not as if there was any choice. However the good news is that SCN are working on a way to “merge” dead accounts with new ones when this happens.


Anyway that’s my problem, not yours, so let’s talk about some exciting new SAP technology and why I felt it vital to the future of the universe to talk about this in print.


Debugs Bunny


In earlier chapters I had talked about unit testing and static code checks, to try and pre-empt bugs before the program was ever run. However there are always going to be bugs, which is why we programmers (yellow shirts) spend so much time in front of the debugger. As you all know with the advent of ECC 6.0 we have the “new” debugger with all sorts of extra functionality.


In the West Country of England where I grew up it is quite common for a stranger to walk into a pub and the whole place stops talking and looks at them in a threatening manner. Then someone will come up to them and say “we don’t like unfriendly strangers in here”. It is often the same when SAP developers encounter some new technology like the “new” debugger when it walks into the room.


As a result many developers switch the new debugger off on the day after the system is upgraded and they see the debugger has changed.

 

In the same way that there are four billion bacteria in the room you are currently in, the programs we write are always riddled with bugs at the start – hopefully not four billion, though with some programs I have seen I wonder.


If someone tells me a program I have written does everything that was desired with no errors the very first time it is run in DEV or QA I am deeply suspicious; it is rare that anything works perfectly first time.

 

As a result we expect the first few tests – by ourselves or others – to return unexpected results. Hopefully we first see these as a result of failed unit tests before anyone else goes anywhere near our lovely new application, giving us a chance to fix the problems in our code before anyone else can pick them up and sneer at us.

 

Then it is time to sit in front of our program, type in /H to start the debugger and sit in front of our screen for hours stepping through the program a line at a time trying to work out the exact point where everything goes horribly wrong.

 

I once heard our BASIS department get a request from an HR consultant to extend the time before a debugger session times out from the default 45 minutes. If you need to spend more than 45 minutes at a time debugging a program then “something is rotten in the state of Denmark” as someone somewhere once said.

 

Where be that problem? I be after ‘e. It be up that Wurzel Tree and I be after ‘e.


As mentioned just now, in version 7.02 of ABAP the debugger changed dramatically. The look and feel was so different a lot of people looked for the option to change the debugger back to the way it used to be, set that flag, and then forgot there was ever such a thing as the new debugger. Tempting as that is I tend to feel that hiding your head in the sand like an ostrich is not going to let you take advantage of the myriad of added features in the new debugger, of which I think debugger scripting is one of the most useful.

 

Why are things always in the last place you look for them?

Because you stop looking when you find them.

— Children’s Riddle

 

Let us have a look at a pretty picture of what we tend to do whilst debugging.


image003.png

Figure 1: Debugger Timeline


As can be seen, the vast majority of the time is spent debugging.

 

It could be said that spending 80%+ of your time looking for the problem is a horrible waste of time, even if it makes you feel like Sherlock Holmes. It has been suggested that Starship Captains, when menaced by aliens, should think of the last thing that could possibly work, and then try it first, to avoid wasting time, and in the same way if we could knock off some of the squares on the left and discover the problem sooner, then the day would be a lot less boring and we could proceed straight to the code writing activity that we love more than life itself.

 

The Robots of Death to the Rescue


Doctor : What do you do? What do any of you people do when you have a job that is too difficult or too boring for you?

Investigator Poul : I call for a robot….

Doctor Who and The Robots of Death – Terrance Dicks

 

The other day I was asked to check if there were any materials with duplicate legacy numbers (BISMT, as in Admiral Nelson's dying words to me, which were “BISMT Hardy!”) in our development system. I could have just dumped out table MARA into a spreadsheet, sorted the data and written a formula to compare each cell value with the one above it. That would have worked, but of course I didn’t solve the problem that way – I am a programmer, so I wrote a quick program to do the exact same thing.

 

If I can possibly solve any sort of problem by means of writing a program then that is the route I take. Let us say you have got to the situation where you know something is wrong with your program and are merrily debugging it. The fifth time you are in the debugger and find yourself doing the same steps, with no end in sight, you start thinking how boring this is and wishing you could make your life easier.

 

Solving this problem is what debugger scripting is all about, so I felt a small chapter on this subject in the book was the way forward. I wonder if anyone except me and my matey Brian O’Neill is actually using this. If you are, please share what you think. I myself think it is so good that all the programmers in your organisation should jump up out of their cubicles and do a Bollywood-style musical number down the corridor singing the virtues of this “new” technology.
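To give a flavour of what such a script looks like: a debugger script is an ABAP class generated by the script tool (transaction SAS) in which you redefine the SCRIPT method, which is then called on every debugger step. The sketch below is illustrative only – the variable name and trigger value are invented, and the exact service class methods should be checked against the template your system generates.

```abap
CLASS lcl_debugger_script DEFINITION INHERITING FROM cl_tpda_script_class_super.
  PUBLIC SECTION.
    METHODS script REDEFINITION.
ENDCLASS.

CLASS lcl_debugger_script IMPLEMENTATION.
  METHOD script.
    DATA lv_value TYPE string.
    "Read the current content of a simple variable (name is invented)
    cl_tpda_script_data_descr=>get_simple_value(
      EXPORTING p_var_name = 'GV_COUNTER'
      IMPORTING p_value    = lv_value ).
    "Stop in the debugger the moment the value goes wrong,
    "instead of pressing F5 five hundred times yourself
    IF lv_value = '999'.
      me->break( ).
    ENDIF.
  ENDMETHOD.
ENDCLASS.
```

The point is that the tedious "step, look at variable, step again" loop becomes the script's job, and you only get control back when the interesting condition occurs.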

 

I want it all, I want it all, I want it all. And I want it now.


That about does it for testing. Notice the first five chapters of the book – a third of the book – are all about improved development tools and quality control, or to put it another way “writing code fast and writing it right”, an idea which should make management types bounce up and down with joy, i.e. the faster you bring the new thing to the table, the more money the company makes.

 

In regard to the company making more money because of the programs you write, I read a blog by Chris Whealy the other day about Functional Programming


http://scn.sap.com/community/abap/blog/2015/04/09/why-we-must-take-functional-programming-seriously

 

His position (I think) was that Functional Programming languages as pushed by universities were all about “purity” of the language, i.e. “you do all the right things for all the right reasons”, whilst imperative languages like ABAP were all about “making money”. Mind you, some CFOs view IT as a cost centre which can never have any sort of positive benefit to the top line.


I may have utterly misunderstood the above blog but I wonder if it is possible to square the circle. Is it possible to still make the money for the company (or make Government more efficient thus costing taxpayers like me less) by doing things right in the first place? Even more, can doing things right even make your programming faster than the traditional quick and dirty way? I hope so, which is why I focus so much on quality control.


User Exit – Stage Left


The first section of the book was all about programming tools, but I could not let you go without giving you a “game for a laugh” badge and mentioning something that has been around for a while now but was not mentioned in the “Next Generation ABAP Programming” book, namely the Enhancement Framework.

 

I think the Enhancement Framework is the bee’s knees, though some development departments in assorted companies ban the use of it. As an actual use case, here is what I had to deal with:-

 

As an example, when creating a sales order via the standard VA01 transaction, if you press the “create with reference” button then the default tab on the popup is to create the order with reference to a quotation. Once upon a time the business users wanted the contract tab to be the default, but after some debugging I found that which tab popped up first was hard coded (at least in the version of SAP I was using) as opposed to being read from a customizing table, and at that time there was no user exit to change this behaviour. So we had to live with it.

 

Enter the Enhancement Framework and you can insert some code in the standard so this does in fact read some sort of customising table rather than following hard coded rules.

 

SPAU Ballet


As can be imagined, many companies have found that standard SAP programs almost exactly fit their needs, but need one or two tweaks, so the desire was to somehow change the standard SAP program. User exits solve the problem to a large extent, but as you might have noticed they are not everywhere that every single company in the world might need them, so the answer was to do a “repair” – which is in fact anything but a repair; it is changing standard SAP code. I’d just like to stress that in the vast majority of cases you can get round modifying standard SAP, but sometimes you just have to do this as there is no other way of satisfying the business.

 

A long time ago SAP used to recommend you take a copy of the standard program, creating a “clone” that started with a Z, and then make your changes there.

 

As this was official SAP policy at the time many SAP customers did just that, and created an army of cloned programs. Then SAP belatedly realized this was the worst thing you could possibly do, as during an upgrade the original program would change (bug fixes and extra functions) but not the clone. Then came what can best be described as the “clone wars” with customers writing “clone hunter” programs, which eventually became part of standard SAP.

The revised policy was not to have clones, but to use the modification assistant to insert your changes directly into standard SAP code. Then, during an upgrade the SPAU process would recognize such changes and make you decide if you wanted to keep them or not.

 

This was a lot better, but if you have a fair number of such modifications to the standard system (and a lot of companies do, even ones with a “no modification” policy) then at upgrade / support stack time the SPAU process can take quite a while.

 

The enhancement framework can drastically reduce such overheads at upgrade time by replacing such modifications with enhancements. It is more than possible you could virtually eliminate the SPAU overhead during an upgrade or support stack application. As a specific example, the form-based user exits like MV45AFZZ are based on you modifying a standard SAP program, so at support stack time they have an alarming habit of removing all the code you added, and on rare occasions this does not even show up in SPAU. Adding the code using enhancements rather than modifications is a lot safer.

 

The needs of every company vary dramatically so it is impossible to give a specific example that is useful to everybody, but here are a few potential ways to replace “repairs” with enhancements.

 

·         Let us say we have a standard SAP executable program which makes vendor postings, let us say the ERS process, but the document date is hard coded to today’s date via SY-DATUM. You want the flexibility to be able to choose the posting date. You could use the enhancement framework to add some code right at the start of the program to add a new parameter for the user to input the date. Then at the start of the FORM routine which makes use of the posting date you can change the value of the variable that holds that date to the value of the input parameter.

·         You may want to add extra commands to a standard SAP screen or menu for whatever reason. Adding the button or menu entry is always going to be a modification, outside of the Web Dynpro framework. However, responding to that new button or command does not have to be a modification. Usually there is a CASE statement in the SAP program to respond to the various standard commands, and 99 times out of a hundred there is no WHEN OTHERS at the end. This means you can add an enhancement at the end of that routine with a new CASE block to respond to your new command.
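For the second bullet, the enhancement itself might look something like the sketch below – the enhancement name, command and FORM name are all invented for illustration:

```abap
"Implicit enhancement at the end of the standard routine that
"handles user commands (names below are invented examples)
ENHANCEMENT 1 zei_user_command.
  CASE ok_code.
    WHEN 'ZEXTRA'.  "our new button / menu command
      PERFORM zf_handle_extra_command.
  ENDCASE.
ENDENHANCEMENT.
```

Because the standard CASE statement has no WHEN OTHERS, your new command falls through to this block and the standard code is never modified.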


As can be seen the possibilities are almost limitless. I think this is one of the best tools SAP has ever given us, but to use the phrase “not only, but also” there is another side to the enhancement framework.

 

BADI Santa


I have written a whole bunch of blogs on how to try and design an SAP system for use in multiple countries.

 

http://scn.sap.com/community/abap/blog/2012/05/01/back-to-the-future--part-01

 

The answer there was to use BADIs (which go hand in glove with the enhancement framework) so you can have a core system that is the same for all countries and yet each country can still do all the country-specific weird things it wants to do.

 

When a country wants to change something, the new functionality is added, but the core program is not changed. This is a staple of OO programming called the “Open-Closed Principle”.

 

If you were to read any books on object orientated programming this principle would jump out at you almost at once.


In essence the idea is that you have a program that works perfectly and any change to that program might break it, so therefore you don’t want to change it. However you have been given the job of giving that very same program some extra functionality.


So the open-closed principle is all about making the program do something new without changing it. That’s obviously impossible, you would think – so much so that when I explained this to one manager I could see he was thinking about committing me to the nearest madhouse.


When you look a bit deeper you will however see that SAP programs can be made to do different things by changing customizing values or implementing user exits – and all of this was well before SAP even looked at trying to be object orientated.
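With the new (kernel) BADIs this pattern is built into the language itself. Here is a sketch – the BADI definition, its filter and the method are invented names, assuming a BADI defined with a country filter:

```abap
"The core program stays closed for modification, open for extension:
"each country ships its own BADI implementation, selected by filter.
DATA lo_badi TYPE REF TO zbd_country_logic.

GET BADI lo_badi
  FILTERS country = iv_country.

CALL BADI lo_badi->determine_defaults
  CHANGING ct_order_data = lt_order_data.
```

Adding a new country means creating a new implementation with a new filter value; the calling program above never changes.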


Warning! Warning! Danger Will Robinson

image004.png

At long last it is time to turn to the business logic of programs, but once again I cannot stop myself going into broken record mode and turning back to the subject of quality. This time I felt the need to waffle on about exception handling, specifically exception classes, which I am not convinced many people are using, despite them having been around for ages.


There is nothing much I can say here which I did not say in the following blog:-


http://scn.sap.com/community/abap/blog/2012/09/08/design-by-contract-in-abap

 

Earlier chapters talked about how you can prevent errors in the first place, errors in the program itself. Here I talk about the concept of “Design by Contract” which is designed to root out program errors before the program hits production, and also the normal use of exception classes which is (a) to handle situations which are “impossible” but you suspect might happen anyway, and (b) situations where things go wrong which are totally out of your hands but the program must react to anyway.
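As a minimal illustration of those two uses – all the ZCX_* / ZCL_* names below are invented:

```abap
"(b) Something totally out of our hands goes wrong - e.g. a remote
"    service is down - but the program must react gracefully
TRY.
    zcl_price_service=>get_price( lv_matnr ).
  CATCH zcx_service_unavailable INTO DATA(lx_error).
    MESSAGE lx_error->get_text( ) TYPE 'I'.
ENDTRY.

"(a) An "impossible" situation we suspect might happen anyway
IF lv_total < 0.
  RAISE EXCEPTION TYPE zcx_impossible_value.
ENDIF.
```

The exception class carries its own message text and context, so the caller decides how to react instead of the failing routine guessing.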


To Be Continued


To end with the same paragraph as the prior blogs on book writing, in subsequent blogs I will continue to talk about the thought process behind how I chose what topics to include in the book, enriched with some content that was cut from the final version of the book for reasons of space.


Cheersy Cheers

 

Paul

 

https://www.sap-press.com/abap-to-the-future_3680/

 

P.S.

I got interviewed the other day by a chap from the American SAP User Group (ASUG)


http://www.asugnews.com/article/sap-press-abap-to-the-future


Have a look at the comments at the bottom. Some are very strange indeed, clearly written by some sort of automated comment writing robot.

 

 

 

 

WDA: Call WebDynpro application from ABAP code with parameter table type


In the main window of the WD component, define an importing parameter (for example SYSTEM_ROLES with type IHTTPVAL, a predefined string type).

Handle method.jpg

Add the parameter to the WD application

application.jpg

In the ABAP code (the place where the WD application is called) define a table of type TIHTTPNVP (line type IHTTPNVP).

app parameter.jpg

In this example the information which is needed to run the WD application from ABAP is in the table IT_SYSTEM_ROLES – type ZSYSTEM_ROLES_TT, line type ZSYSTEM_ROLES.


roles table.jpg

 

Use the following code to move the content of table ZSYSTEM_ROLES_TT to TIHTTPNVP.

table to string.jpg

Note: Use different separators to divide the values of the fields in one row and the content of the rows in the table. In this example a space is used between fields and a comma between rows.
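In text form, the flattening and the subsequent call might look roughly like the sketch below. The role structure fields (FIELD1, FIELD2) and the application name are invented, and the WDY_EXECUTE_IN_PLACE parameter names should be verified against the function module in your system:

```abap
DATA: lt_parameters TYPE tihttpnvp,
      ls_parameter  TYPE ihttpnvp,
      ls_role       TYPE zsystem_roles,
      lv_row        TYPE string.

"Flatten the role table: space between fields, comma between rows
LOOP AT it_system_roles INTO ls_role.
  CONCATENATE ls_role-field1 ls_role-field2 INTO lv_row SEPARATED BY space.
  IF ls_parameter-value IS INITIAL.
    ls_parameter-value = lv_row.
  ELSE.
    CONCATENATE ls_parameter-value lv_row INTO ls_parameter-value
      SEPARATED BY ','.
  ENDIF.
ENDLOOP.
ls_parameter-name = 'SYSTEM_ROLES'.
APPEND ls_parameter TO lt_parameters.

"Start the Web Dynpro application, passing the flattened parameter
CALL FUNCTION 'WDY_EXECUTE_IN_PLACE'
  EXPORTING
    application = 'Z_MY_WD_APP'
    parameters  = lt_parameters.
```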



Now call the application using FM WDY_EXECUTE_IN_PLACE

call FM.jpg


In the WD component, either in the WDDOINIT method of the view or in the WDDOINIT method of the COMPONENTCONTROLLER, you can get the application parameters using the following code:

get parameters.jpg

And at the end, move the parameters back into the table of type ZSYSTEM_ROLES_TT:

split.jpg
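The split shown above might look roughly like this in text form – the structure fields (FIELD1, FIELD2) are invented placeholders for the actual fields of ZSYSTEM_ROLES:

```abap
DATA: lt_rows  TYPE TABLE OF string,
      lv_row   TYPE string,
      ls_role  TYPE zsystem_roles,
      lt_roles TYPE zsystem_roles_tt.

"Rows were separated by commas, fields within a row by spaces
SPLIT lv_system_roles AT ',' INTO TABLE lt_rows.
LOOP AT lt_rows INTO lv_row.
  CONDENSE lv_row.
  CHECK lv_row IS NOT INITIAL.
  SPLIT lv_row AT space INTO ls_role-field1 ls_role-field2.
  APPEND ls_role TO lt_roles.
ENDLOOP.
```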

ABAP Developer quo vadis?


ABAP Developer Specialist or Generalist or something else?

 

What is an ABAP developer nowadays? That is the question I am trying to answer here, along with how I would describe my own work as a developer at this time.

 

Infographic__ABAP_Developer___Infogram.jpg

( https://infogr.am/abap_developer-96 )


In this blog I try to "better understand" the ABAP developer and mentally sort the skill set an ABAP developer requires.

 

The classic ABAP developers


Does the classic ABAP developer still exist at all? Why are the skill requirements so different? And what do developer jobs really look like?

In other professions there has been a development from generalists to specialists and on to super-specialists. Take, for example, the medical profession: here the general physician developed into the medical specialist and then into the super-specialist. I already see a similar trend in our profession.

Specializations of the ABAP Developer

  • classic ABAP Developer
    • enjoy Controls / Reports / Screen Programming
  • WebDynpro Developer
    • Web Dynpro ABAP / Java
  • BW Developer
  • SAP UI5 Developer
    • Fiori
    • HTML, CSS, Javascript, Jquery
  • SAP HANA Developer

MUST-HAVE-knowledge for all ABAP developers


Yes, it is a criterion: both in freelance and in permanent jobs you should master ABAP OO and also OO thinking, which for me covers content such as OOA, OOD and architecture skills.

 

SAP Module knowledge


Among the technological skills, SAP module skills are added.
In the SAP environment there are an estimated 250 main and 750 sub-modules. SAP module knowledge is very often a must-have criterion for the ABAP developer, and only in combination do they describe the "ideal candidate" for the company. I have my own opinion about the module topic, and my experience so far is that even experienced generalists bring added value to projects where modules new to them are used.

Additional knowledge


Every ABAP developer should bring good basic skills in the following topics:

  • Interface technologies (BAPI, RFC, IDoc, XML)
  • Performance Basics
  • Security Basics
  • Testing Basics

 

 

New frameworks and tools

 

  • BRFplus (see SCN)
  • BOPF (see SCN)
  • Core Data Services (CDS, see SCN)

 

What does this mean for me as a developer? The comparison to medicine can certainly bring many insights, but our profession is without question quite different. My intuition tells me that the skill set has developed so that a "broad-based developer", in combination with a "specialization", can add big value in a company.


Generalist or specialist or super-specialist?

 

How do you see this?
What specializations can you find in your job environment?

 

Info

This blog is also available in German: ABAP Entwickler quo vadis?

Kill delayed work processes automatically


Kill delayed work processes automatically through batch

 

Generally we use transaction SM50 to view running work processes and delete the long-running ones manually. This is an attempt to turn this manual process into an automated one.


SAP has provided a function module 'TH_SERVER_LIST' which gives the details of all the SAP application servers on the network.


SAP has provided a function module 'TH_SYSTEMWIDE_WPINFO' which gives the details of all work processes on a given SAP application server.


We can filter the details to decide which work processes should be stopped, based on how long they have been running.


We have to take those process identifications (PID) and the SAP application server and invoke the kernel call 'ThWpInfo' to stop the process permanently.


This report could be scheduled so that it runs on its own and stops the long-running processes automatically.


It is to be noted that this Program uses unreleased function modules and kernel calls, and so is to be used at your own risk.

 

Complete source is as follows:


I do hope it helps.


I thank Mr. Matthew Billingham for his valuable suggestions and guidance.


REPORT zr_kill_process.

DATA: itab          LIKE STANDARD TABLE OF wpinfo,
      wa            LIKE wpinfo,
      delay_seconds TYPE i VALUE 900.

DATA: BEGIN OF ty.
        INCLUDE STRUCTURE msxxlist_v6.
DATA: END OF ty.

DATA: itab_as LIKE STANDARD TABLE OF ty,
      wa_as   LIKE ty.

CONSTANTS: opcode_wp_stop TYPE x VALUE 2.

CALL FUNCTION 'TH_SERVER_LIST'
  TABLES
    list           = itab_as
  EXCEPTIONS
    no_server_list = 1
    OTHERS         = 2.

LOOP AT itab_as INTO wa_as.

  CALL FUNCTION 'TH_WPINFO'
    EXPORTING
      srvname = wa_as-name
    TABLES
      wplist  = itab
    EXCEPTIONS
      OTHERS  = 1.

  LOOP AT itab INTO wa.

    IF wa-wp_typ = 'DIA' AND wa-wp_status = 'Running' AND wa-wp_eltime GT delay_seconds.

      CALL 'ThWpInfo'
        ID 'OPCODE' FIELD opcode_wp_stop
        ID 'SERVER' FIELD wa_as-name
        ID 'PID'    FIELD wa-wp_pid.

    ENDIF.
  ENDLOOP.

ENDLOOP.

MIGO with respect to the latest standard price available in the material master of a material


Preamble -

Consider a scenario where standard costing (price control S) is used for raw material pricing.

Purchase Orders are generated for raw materials and the standard cost of raw material is captured in condition type ZI01 as per PO pricing procedure.

The freight related to procurement is captured in condition type ZCFR which is also available in the PO pricing procedure.

The freight cost is calculated as a percentage of the standard cost condition ZI01 and captured in the PO pricing procedure.

 

Issue description -

The issue faced here relates to changes in the standard cost and the effects of that on the freight calculation.

Currently, once the PO is created, the freight cost (condition type ZCFR) gets calculated on raw material standard cost (condition type Z101).

 

Material master Accounting View for Material 41.

I1.png

 

In ME23N, standard price is 120 USD.

p1.png

 

After creation of the PO, if the material standard cost is changed through the standard cost estimation cycle (yearly / half-yearly), it does not automatically update the condition type Z101, and hence during GRN the freight cost (condition type ZCFR) gets calculated on the old material standard cost.

 

Eg.

Before GR (MIGO), using transaction MR21, the standard price of material 41 is changed from 120 USD to 150 USD.

 

Last.png

 

After pressing Save, the same price of 150 USD is available in the material master.

 

i3.png

Now, all pricing calculations should occur with respect to this latest price of 150 USD. However, this recently changed price is not updated in the PO. It still shows the old price, i.e. 120 USD, which is incorrect.

p1.png

Solution -

To overcome this, the new standard cost that has been released for a material should be updated in all open POs for that material (condition type ZI01), so that during GRN the new standard cost (condition type ZI01) and the freight (condition type ZCFR) will be posted at the latest cost.

The requirement is also to ensure that the changes for the new standard cost need not be made manually by the business user, and to provide a solution which dynamically updates the standard cost of an existing PO.

 

 

While doing goods receipt using MIGO, in order to update the latest standard price (MBEW-STPRS) in the PO line items, BADI ‘MB_MIGO_BADI’ will be implemented. Method ‘LINE_MODIFY’ of the BADI will use BAPI ‘BAPI_PO_CHANGE’ to update the latest standard price (MBEW-STPRS) in the PO line items.

 

The code to update the standard price should execute only for the goods receipt process, and only for POs other than stock transfer POs.

 

  • To check whether the activity is goods receipt (A01), method ‘MODE_SET’ of ‘MB_MIGO_BADI’ will be used. The parameter I_ACTION of method ‘MODE_SET’ will be used for this check.

 

  • To check whether the PO is a stock transfer PO, the table EKPV (Shipping Data for Stock Transfer) will be used. If an entry for the PO exists in EKPV, it is a stock transfer PO; otherwise it is not.
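The MODE_SET side of this could be as small as the sketch below. It simply remembers, in the same memory ID ('MODE') that LINE_MODIFY later imports, whether the activity is goods receipt – a sketch under the assumptions described above:

```abap
METHOD if_ex_mb_migo_badi~mode_set.
  DATA lv_mode TYPE char1.
  "Flag the session when the MIGO activity is Goods Receipt (A01)
  IF i_action = 'A01'.
    lv_mode = 'X'.
  ENDIF.
  EXPORT lv_mode FROM lv_mode TO MEMORY ID 'MODE'.
ENDMETHOD.
```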

 

Once the PO is updated with the latest standard price, the message ‘Latest Cost updated in PO’ will be displayed.

In order to execute this price-updating logic only once, the internal table GT_PROCESSED is used. All the POs processed in that particular session are marked with 'X' initially, so that when control comes again to method LINE_MODIFY it will not execute the price-updating logic again, which improves performance. Finally, all the memory IDs are flushed when the user posts the document, using method POST_DOCUMENT.



Pre-requisite

 

 

Go to the configuration settings through the path SPRO – Ref IMG – Material Management – Purchasing – Conditions – Define price determination process – Define condition type – select condition “Z101” – click on the Details tab.

In table V_T685A, condition type “Z101” should have the status “No limitations”.

 

Config.png

 

 

Code Snippets :

------------------------------------------------------------------------------------------------------------------------

1. Method LINE_MODIFY :

------------------------------------------------------------------------------------------------------------------------

 

METHOD IF_EX_MB_MIGO_BADI~LINE_MODIFY.

  TYPES: BEGIN OF LTY_KONV,
           KNUMV TYPE KONV-KNUMV,
           KSCHL TYPE KONV-KSCHL,
           KWERT TYPE KONV-KWERT,
         END OF LTY_KONV.

  TYPES: BEGIN OF TY_PROCESSED,
           PO_NUM TYPE EBELN,
           PROC   TYPE CHAR1,
         END OF TY_PROCESSED.

  TYPES: BEGIN OF TY_MBEW,
           BWKEY TYPE MBEW-BWKEY,
           MATNR TYPE MBEW-MATNR,
           STPRS TYPE MBEW-STPRS,
         END OF TY_MBEW.

  DATA: GT_MBEW TYPE STANDARD TABLE OF TY_MBEW,
        GW_MBEW TYPE TY_MBEW.

  DATA: GT_PROCESSED TYPE STANDARD TABLE OF TY_PROCESSED,
        WA_PROCESSED TYPE TY_PROCESSED.

  DATA: GW_HEADER     TYPE BAPIEKKOL,
        GT_RETURN     TYPE TABLE OF BAPIRET2,
        GW_RETURN     TYPE BAPIRET2,
        GT_ITEMS      TYPE TABLE OF BAPIEKPO,
        GW_ITEMS      TYPE BAPIEKPO,
        GT_COND       TYPE TABLE OF BAPIMEPOCOND,
        GW_COND       TYPE BAPIMEPOCOND,
        GT_CONDX      TYPE TABLE OF BAPIMEPOCONDX,
        LT_POCONDX    TYPE STANDARD TABLE OF BAPIMEPOCONDX,
        LS_POCONDX    TYPE BAPIMEPOCONDX,
        GT_POSCHEDULE TYPE TABLE OF BAPIMEPOSCHEDULE,
        GW_POSCHEDULE TYPE BAPIMEPOSCHEDULE,
        GT_POSCHEDULX TYPE TABLE OF BAPIMEPOSCHEDULX,
        GW_POSCHEDULX TYPE BAPIMEPOSCHEDULX.

  DATA: LV_SPRICE TYPE MBEW-STPRS,
        LV_KNUMV  TYPE EKKO-KNUMV,
        LV_MES    TYPE CHAR100.

  DATA: LW_KONV TYPE LTY_KONV.
  DATA: LV_CALC_AMT     TYPE MBEW-STPRS,
        LV_WERKS        TYPE EKPO-WERKS,
        GT_BAPI_POITEM  TYPE STANDARD TABLE OF BAPIMEPOITEM,
        GW_BAPI_POITEM  TYPE BAPIMEPOITEM,
        GT_BAPI_POITEMX TYPE STANDARD TABLE OF BAPIMEPOITEMX,
        GW_BAPI_POITEMX TYPE BAPIMEPOITEMX.

  DATA: LV_DONE TYPE CHAR1,
        LV_MODE TYPE CHAR1.

  DATA: LV_LINE TYPE CHAR4.
  DATA: LV_VSTEL TYPE EKPV-VSTEL,
        LV_VSBED TYPE EKPV-VSBED,
        LV_LADGR TYPE EKPV-LADGR.

  CONSTANTS: LC_X TYPE CHAR1 VALUE 'X'.
  CONSTANTS: LC_Z101 TYPE BAPIMEPOCOND-COND_TYPE VALUE 'Z101',
             LC_U    TYPE BAPIMEPOCOND-CHANGE_ID VALUE 'U',
             LC_3    TYPE BAPIMEPOITEM-PRICEDATE VALUE '3',
             LC_C    TYPE BAPIMEPOITEM-CALCTYPE  VALUE 'C'.

  CONSTANTS: LC_E   TYPE BAPIRET2-TYPE   VALUE 'E',
             LC_S   TYPE BAPIRET2-TYPE   VALUE 'S',
             LC_ME  TYPE BAPIRET2-ID     VALUE 'ME',
             LC_06  TYPE BAPIRET2-ID     VALUE '06',
             LC_006 TYPE BAPIRET2-NUMBER VALUE '006',
             LC_023 TYPE BAPIRET2-NUMBER VALUE '023'.

* Execute only for Goods Receipt
  IMPORT LV_MODE TO LV_MODE FROM MEMORY ID 'MODE'.
  CHECK LV_MODE = LC_X.

* Execute only once - this memory ID is exported from the code below
  IMPORT GT_PROCESSED TO GT_PROCESSED FROM MEMORY ID 'PROCESSED'.

  READ TABLE GT_PROCESSED INTO WA_PROCESSED WITH KEY PO_NUM = CS_GOITEM-EBELN.
  CHECK WA_PROCESSED-PROC NE LC_X.

  WA_PROCESSED-PROC   = LC_X.
  WA_PROCESSED-PO_NUM = CS_GOITEM-EBELN.
  APPEND WA_PROCESSED TO GT_PROCESSED.
  CLEAR WA_PROCESSED.

  EXPORT GT_PROCESSED FROM GT_PROCESSED TO MEMORY ID 'PROCESSED'.

* Restrict the logic from execution for stock transfer POs
  CLEAR: LV_VSTEL, LV_VSBED, LV_LADGR.
  SELECT SINGLE VSTEL
                VSBED
                LADGR
    FROM EKPV
    INTO (LV_VSTEL, LV_VSBED, LV_LADGR)
    WHERE EBELN = CS_GOITEM-EBELN.

  IF LV_VSTEL IS NOT INITIAL OR
     LV_VSBED IS NOT INITIAL OR
     LV_LADGR IS NOT INITIAL.
    EXIT.
  ENDIF.

* Get condition record number
  CLEAR LV_KNUMV.
  SELECT SINGLE KNUMV FROM EKKO
    INTO LV_KNUMV
    WHERE EBELN = CS_GOITEM-EBELN.

* Get PO details
  CALL FUNCTION 'BAPI_PO_GETDETAIL'
    EXPORTING
      PURCHASEORDER = CS_GOITEM-EBELN
      ITEMS         = LC_X
    IMPORTING
      PO_HEADER     = GW_HEADER
    TABLES
      PO_ITEMS      = GT_ITEMS.

  IF GT_ITEMS[] IS NOT INITIAL.
* Get standard price
    SELECT BWKEY MATNR STPRS
      FROM MBEW
      INTO TABLE GT_MBEW
      FOR ALL ENTRIES IN GT_ITEMS
      WHERE BWKEY EQ GT_ITEMS-PLANT AND
            MATNR EQ GT_ITEMS-MATERIAL.
  ENDIF.

* Sort material valuation details
  SORT GT_MBEW ASCENDING BY BWKEY MATNR STPRS.

  LOOP AT GT_ITEMS INTO GW_ITEMS.

    REFRESH: GT_BAPI_POITEMX, GT_BAPI_POITEM.
    REFRESH: GT_RETURN[], GT_POSCHEDULE[], GT_POSCHEDULX, GT_COND[].

    READ TABLE GT_MBEW INTO GW_MBEW WITH KEY BWKEY = GW_ITEMS-PLANT
                                             MATNR = GW_ITEMS-MATERIAL
                                             BINARY SEARCH.

 

 

    IF SY-SUBRC EQ 0 .

* Assign Current Standard Price

      IF GW_MBEW-STPRS IS NOT INITIAL .

        LV_CALC_AMT = GW_MBEW-STPRS / 10 .

      ENDIF.

    ENDIF.

 

 

* Delivery Complition Flag is marked,do not process the Item

    IF GW_ITEMS-DEL_COMPL = LC_X.

      CLEAR:GW_MBEW,GW_ITEMS,LV_CALC_AMT.

      CONTINUE.

    ENDIF.

 

 

* Condition Records

    GW_COND-CONDITION_NO = LV_KNUMV.

    GW_COND-ITM_NUMBER  = GW_ITEMS-PO_ITEM.

    GW_COND-COND_TYPE    = LC_Z101.

    GW_COND-COND_VALUE  = LV_CALC_AMT.

    GW_COND-CHANGE_ID    = LC_U.

    APPEND GW_COND TO GT_COND.

 

 

 

 

    LS_POCONDX-ITM_NUMBER = GW_ITEMS-PO_ITEM.

    LS_POCONDX-COND_TYPE  = LC_X.

    LS_POCONDX-COND_VALUE = LC_X.

    LS_POCONDX-CHANGE_ID  = LC_U.

    APPEND LS_POCONDX TO LT_POCONDX.

 

 

 

 

    GW_BAPI_POITEM-PO_ITEM  = GW_ITEMS-PO_ITEM.

    GW_BAPI_POITEM-MATERIAL  = GW_ITEMS-MATERIAL.

    GW_BAPI_POITEM-NET_PRICE = LV_CALC_AMT .

    GW_BAPI_POITEM-CALCTYPE  =  LC_C.          " Reprice it

    GW_BAPI_POITEM-PRICEDATE =  LC_3.          " Reprice it based on current date

    APPEND GW_BAPI_POITEM TO GT_BAPI_POITEM.

 

 

    GW_BAPI_POITEMX-PO_ITEM  = GW_ITEMS-PO_ITEM.

    GW_BAPI_POITEMX-PO_ITEMX  = LC_X.

    GW_BAPI_POITEMX-CALCTYPE  = LC_X.

    GW_BAPI_POITEMX-PRICEDATE = LC_X.

    APPEND GW_BAPI_POITEMX TO GT_BAPI_POITEMX.

 

 

* Schedule Line

    GW_POSCHEDULE-PO_ITEM = GW_ITEMS-PO_ITEM.

 

 

* Convert PO item to internal format (add leading zeros)

    CALL FUNCTION 'CONVERSION_EXIT_ALPHA_INPUT'

      EXPORTING

        INPUT  = GW_ITEMS-PO_ITEM

      IMPORTING

        OUTPUT = LV_LINE.

 

 

    GW_POSCHEDULE-PO_ITEM  = GW_ITEMS-PO_ITEM.

    GW_POSCHEDULE-SCHED_LINE = LV_LINE.

    APPEND GW_POSCHEDULE TO GT_POSCHEDULE.

 

 

 

 

    GW_POSCHEDULX-PO_ITEM = GW_ITEMS-PO_ITEM.

    GW_POSCHEDULX-PO_ITEMX = LC_X.

    APPEND GW_POSCHEDULX TO GT_POSCHEDULX.

 

 

 

 

* Update PO with latest Pricing

    CALL FUNCTION 'BAPI_PO_CHANGE'

      EXPORTING

        PURCHASEORDER = CS_GOITEM-EBELN

      TABLES

        RETURN        = GT_RETURN

        POITEM        = GT_BAPI_POITEM

        POITEMX      = GT_BAPI_POITEMX

        POSCHEDULE    = GT_POSCHEDULE

        POSCHEDULEX  = GT_POSCHEDULX

        POCOND        = GT_COND

        POCONDX      = LT_POCONDX.

 

 

 

 

* When PO is open in change mode, show error message

    LOOP AT GT_RETURN INTO GW_RETURN WHERE TYPE  = LC_E AND

                                          ID    = LC_ME AND

                                          NUMBER = LC_006 .

 

 

      MESSAGE GW_RETURN-MESSAGE TYPE 'S' DISPLAY LIKE 'E'. "PO is being edited by some user

      LEAVE TO TRANSACTION 'MIGO'.

    ENDLOOP.

 

 

* Display Success message

    LOOP AT GT_RETURN INTO GW_RETURN WHERE TYPE  = LC_S AND

                                          ID    = LC_06 AND

                                          NUMBER = LC_023 .

      CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'

        EXPORTING

          WAIT = LC_X.

      IF SY-SUBRC EQ 0.

        MESSAGE TEXT-002 TYPE 'S'. " Latest Cost updated in PO

      ENDIF.

 

 

    ENDLOOP.

    CLEAR: GW_ITEMS,GW_POSCHEDULE,GW_POSCHEDULX.

    CLEAR: GW_BAPI_POITEMX, GW_BAPI_POITEM,GW_COND,LV_CALC_AMT.

  ENDLOOP.

ENDMETHOD.

 

------------------------------------------------------------------------------------------------------------------------

2. Method MODE_SET

------------------------------------------------------------------------------------------------------------------------

method IF_EX_MB_MIGO_BADI~MODE_SET.

*Local Constants

  DATA : LC_A01 TYPE GOACTION VALUE 'A01', " Goods Receipt

        LV_MODE TYPE CHAR1.

  FREE MEMORY ID 'MODE'.

  IF I_ACTION EQ LC_A01 .

    LV_MODE = 'X'.

    EXPORT LV_MODE FROM LV_MODE TO MEMORY ID 'MODE'.

  ENDIF.

endmethod.

 

------------------------------------------------------------------------------------------------------------------------

3. Method POST_DOCUMENT

------------------------------------------------------------------------------------------------------------------------

 

method IF_EX_MB_MIGO_BADI~POST_DOCUMENT.

  FREE MEMORY ID 'DONE'.

  FREE MEMORY ID 'PROCESSED'.

endmethod.
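The memory IDs tie the three methods together: MODE_SET exports 'MODE', the item-check logic exports 'PROCESSED', and POST_DOCUMENT clears both once the document is posted. Reading a flag back elsewhere in the BAdI is a plain IMPORT. A minimal sketch (variable names follow MODE_SET above):

```abap
DATA lv_mode TYPE char1.

* Read the flag set by MODE_SET (only set for action A01, Goods Receipt)
IMPORT lv_mode TO lv_mode FROM MEMORY ID 'MODE'.
IF sy-subrc = 0 AND lv_mode = 'X'.
* We are in a Goods Receipt - safe to run the repricing logic
ENDIF.
```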

 

Conclusion:


During goods receipt, the user enters the PO number in MIGO.

l1.png

Once the user hits Enter after entering the PO number in MIGO, the implementation of BAdI MB_MIGO_BADI changes the PO pricing conditions to the new material standard price ($150 in this case) in the background.


Finally, a success message is shown.

l2.png

Now, if we check PO 4500264840 in ME23N, we see the latest cost for condition type 'Z101'.

l5.png

Post-GRN financial document:

Freight is calculated on the new material price in the GRN, which results in posting the correct PPV.

 

c2.png

Test Cases:

case1.png

case2.png

case3.png

Useful tips for Developers and Functional Consultants


Hi Guys,

 

I would like to share some technical knowledge that will help functional consultants prepare functional specifications, and that will hopefully also help technical developers design structures or transparent tables and find a field's data type and length in a simple way.

Before going into the detailed explanation, we have to understand why a functional consultant needs technical knowledge:

  1. To prepare functional specifications.
  2. To understand the client's requirements in detail and map them into technical terms.
  3. To help the technical developer with difficult developments.
  4. To discuss with the client what is and is not possible in the logic, and the impact on report performance.
  5. Discussions with the technical developer are much smoother if you have technical knowledge.

I would like to explain how to differentiate the tables, how to find a table name starting from a field name or description, and so on.

I will also discuss logic that is frequently used when developing smart forms and new customized reports.

  • How to find a configuration table name from a master data table field name

Generally, a lot of configuration data is used in master data, but only the codes are entered when creating master data. On saving, only the code is stored in the master data or transaction data tables; the corresponding code description is not stored there.

When developing a report or smart form, the user mostly needs the description in the output. For this we usually search at configuration level for the table name, via F1 help or some other way.

Here I explain how the configuration table for a field can be found from the master data table and field name.

Example: the material group code is entered in the material master, but if you need the description of the material group, you would normally go to OMSF to find the table name or description. The table below helps you find the configuration table name directly.

Table Name: DD03M or DD03L

DD03M – Generated Table for view

Enter table and field name:

Assume MARA table and field name MATKL and language “EN”.

 

1.png

 

Then execute.

Check the “Check table” field (CHECKTABLE) in DD03M or DD03L.

 

2.png

 

Note the table name T023 and enter it in SE16N.

 

3.png

 

This helps not only functional consultants but also technical developers when developing reports and smart forms; these tables are also very helpful in data extraction.

Besides the configuration table, we can also find the data type, ABAP type, data element (in DD03L), field length and position number (in SE11), and the number of characters and table category from the DD03VT, DD03M and DD03L tables.
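For developers, the same lookup can be scripted rather than clicked through SE16N. A minimal sketch in ABAP, using the MARA-MATKL example above (the text table T023T for material group descriptions is an assumption to verify in your system):

```abap
DATA: lv_checktable TYPE dd03l-checktable,
      lv_wgbez      TYPE t023t-wgbez.

* 1. Find the check table behind MARA-MATKL
SELECT SINGLE checktable
  FROM dd03l
  INTO lv_checktable
  WHERE tabname   = 'MARA'
    AND fieldname = 'MATKL'.     " -> T023

* 2. Read the description from the text table of the check table
SELECT SINGLE wgbez
  FROM t023t
  INTO lv_wgbez
  WHERE spras = sy-langu
    AND matkl = 'SOME_GROUP'.    " material group code from MARA
```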

 

  • How to find the where-used list of programs from a table

It is too difficult for developers and functional consultants to remember every program that was developed. Developers sometimes create a maintenance table to pick up user values; in that case it is a bit easier to remember the Z table than the program itself.

In the following way we can find the program name from the table.

Table: D010TAB

D010TAB: Table for Use Report<->Tables

Enter the table name in D010TAB.

Assume ZMM_TABLE, which was developed for STO auto creation.

I need to find the program in which I used this table.

So I pass this table name into D010TAB.


4.png


Execute

 

5.png

 

Check in SE38

 

6.png

 

Execute

 

7.png

 

This is one way to find a program name from a table.
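The same where-used lookup can be done in code. A minimal sketch (the table name is just the example from above):

```abap
DATA lt_progs TYPE TABLE OF d010tab-master.

* Find all programs that use table ZMM_TABLE
SELECT master
  FROM d010tab
  INTO TABLE lt_progs
  WHERE tabname = 'ZMM_TABLE'.
```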

 

So far we have discussed how to find the configuration table, and how to find the where-used list of programs from a table. Now we will discuss how to find the transaction code for a program.

  • How to find the transaction code from a program

Using the table below we can find the transaction code.

Table name: TSTC. Program name: ZBDC_XX_PO

1.PNG

Execute.

2.PNG

Here you get the transaction code of the program. Now we will discuss the list of tables used in one program.
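This lookup, too, can be done programmatically. A minimal sketch (the program name is the example from above):

```abap
DATA lv_tcode TYPE tstc-tcode.

* Find the transaction code assigned to a program
SELECT SINGLE tcode
  FROM tstc
  INTO lv_tcode
  WHERE pgmna = 'ZBDC_XX_PO'.
```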

  • How to find the list of tables used in one program

For some programs the developer needs to create a lot of customized tables, and it is difficult to remember which tables are used in the program. In the simple way below we can find the list of tables and structures used in a program.

Go to SE16N.

 

Enter table D010B

Enter Program name

3.PNG

Execute

 

4.PNG

You will get all the tables and structures used in the program.
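The same relationship can also be read from D010TAB in the other direction, i.e. by program rather than by table. A sketch (the program name here is an invented example):

```abap
DATA lt_tabs TYPE TABLE OF d010tab-tabname.

* Find all tables/structures used by a given program
SELECT tabname
  FROM d010tab
  INTO TABLE lt_tabs
  WHERE master = 'ZMM_STO_AUTO'.   " program name (assumed)
```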


There is a MONSTER in my Pocket!


There is a MONSTER in my pocket!

image001.jpgimage002.png

Writing a Book for SAP Press – Part Five


Table of Contents


Background


Business Object Processing Framework


BRFPlus


Exception Handling


To Be Continued

Background


This is the latest in my series of blogs about writing a book about monsters for SAP Press. They insisted it should be all about monsters and I had to fight tooth and nail to get some mentions of the latest features in ABAP inserted here and there. The last such blog was as follows:-

 

http://scn.sap.com/community/abap/blog/2015/04/25/the-loch-ness-monster

 

As might be imagined it is becoming more and more difficult to find blog headings with “monster” in the title, but hopefully I am going to run out of chapters before I run out of monster headings. I’m halfway through already, so I should be OK.

 

As mentioned in earlier blogs SCN does not much like people doing big adverts for their own book e.g. just saying “look at this” and then listing the table of contents. So I have been focusing more on the actual process of writing a book i.e. deciding what to add and what to leave out, and putting in material that was cut from the actual book due to lack of space. My original manuscript stretched to the moon, and so had to be edited down a bit.

 

In earlier blogs I talked about the reasoning behind the chapters in the first section – developer tools. Now we move to filling up the next section which is all about business logic.

 

MMMBOPF, Ba Duba Dop


When writing a list of exciting new SAP tools from the last few years, the “Business Object Processing Framework” (BOPF) sprang straight to the top of the list. The interesting thing about the BOPF is that it arrives via a support stack as opposed to by an upgrade. Where I work we have a 7.02 system and the BOPF arrived last week, as a side effect of upgrading the support stack level of the “business suite foundation”.

 

This cannot be stressed too much – support stacks now add extra functionality, which was not previously the case.

 

Anyway the problem I had was that the series of blogs by James Wood about the BOPF were so amazingly good.

 

http://scn.sap.com/community/abap/blog/2013/01/04/navigating-the-bopf-part-1--getting-started


How could I write anything on the same topic without looking like the biggest plagiarist in the universe? He seemed to have covered everything.

 

Buddy, Can you Spare a DYNPRO?


Since I had to take a radically different approach I thought I would compare the BOPF to traditional DYNPRO programming as opposed to the OO way of looking at it. This is the sort of approach which could get me tarred and feathered – you will hear the six guns sound, as they drive me out of town.

 

What’s so good about BOPF anyway?


I like to start any discussion of a new tool not with the “how to” but the (to me) more important “why” as in “why is this framework better than the twenty million frameworks that came before it?” The clue is in the fact that there have been so many attempts previously.

 

The History of SAP Business Objects


For the sake of argument let us say I have a new custom object type – a monster – that I want to set up in SAP. There are a wide variety of ways in which I could go about this. Let us examine them briefly one at a time, starting with the oldest.

 

In every case the idea is the same – this is the “model” in our “model – view – controller” framework, a model contains the business logic as to how a particular real world object behaves and what attributes it has.

 

Let us look at the “Famous Five” frameworks that preceded BOPF, try and work out which one of them is Timmy the Dog ( he’s so licky ) and then see that BOPF is proud to be the sixth in the series.

 

SWO1 Business Objects


I am a big fan of SAP Business Workflow (which was laughingly renamed “webflow” briefly during the dot com boom) and that works using business objects defined and maintained using transaction SWO1.

 

This is the earliest example of an object orientated framework in SAP with attributes and methods and events, and even inheritance as you can “delegate” the standard SAP business object like a purchase order to your custom version and programs that call the standard object start using the custom one without knowing.

 

Nobody could argue this approach was perfection itself but it did a really good job especially considering how long ago this was created.

 

BAPIs


If you call transaction BAPI you will see a tree where you have a wide variety of business objects, each one either represented by an SWO1 business object or by a BAPI set of function modules.

 

Business Application Programming Interfaces (BAPIs) were created with an external view in mind i.e. an external system wants to created / read / update / delete sales orders in SAP and needs an interface (signature) so that it can provide the needed instructions in the correct format.

 

The problem here was that the signatures were amazingly complicated (in order to try and give as much functionality as possible) and were (with a few exceptions) very poorly documented, the naming conventions were all over the place, and they still did not deliver all the functionality you could get by doing a BDC on the standard transaction (to be fair, replicating all online functionality was probably impossible for sales orders).

 

Nonetheless this was still usually a step up from BDCs for performance reasons (no need to render the screen in the background) and 99% of the time they did not break during an upgrade (I had some grief with the purchase order BAPI moving from 4.5 to 4.7).

 

Classes


The day dawned when the tasks in workflow had two options for picking which objects you wanted to use – traditional SWO1 objects or custom classes you define yourself using SE24. Your custom class had to implement a few interfaces and the workflow system could then talk to it.

 

Apart from the perceived absurdity of having two frameworks that looked almost identical doing the same thing, there were still some gaps in that the new Z classes could not do everything an SWO1 object could within the workflow system.

 

This led to crazy situations, like the time that, because of one such gap, I created an SWO1 object and then delegated every method to an equivalent SE24 class.

 

In addition a lot of people have wondered why SAP did not create a series of classes to represent the same objects that SWO1 covered – sales orders, purchase orders, leave requests and so on.

 

Instead you get the result that ten thousand different programmers create a ZCL_SALES_ORDER class (I know I have) in their own system, all different to a smaller or greater degree.

 

This has led to “Project Objectify” an open source project on SCN which was first postulated by Matthew Billingham and then taken up by Bruno Esperança to try and unify all these disparate efforts to a uniform set of classes for business objects.

 

http://scn.sap.com/community/abap/blog/2014/03/22/project-objectify--continued

 

This is not actually at odds with the BOPF, as we (I would like to get involved in that project) want to develop a set of business object classes that can be used with any SAP business object framework – past, present or future.

 

Persistent Objects


The SAP database model was not really designed with OO programming in mind. To address this gap SAP came up with the idea of “persistent objects” which was a framework that provided a uniform way to handle transaction processing for classes that represented real world objects – locking, database updates, queries and the like. This was also known as “object services”.

 

You will see in SE24 there is a flag to enable a class to be persistent (instances can be saved in the database) which makes the class implement IF_OS_STATE and generates some “co-classes” to enable your new class to work with this framework.

 

A whole SAP Press book has been written on this subject, but this concept does not seem to have caught on as well as SAP would have liked. Many people perceive this framework to be over-complicated and crawling with limitations e.g. at one point data structures had to have their fields be in alphabetical order for some reason.

 

CRM Business Object Layer (in ERP)


Not everybody has the SAP Customer Relationship Management (CRM) system implemented, but if you have an ECC 6.0 system you may be surprised to know that the CRM “Business Object Layer” (BOL) has found its way into the core ERP system.

 

If you go into transaction SE80 and look for package CRM_BOL you will see the whole framework sitting there quietly, with most programmers blissfully unaware of its existence.

 

In contrast to the persistent object framework I have never heard any programmer say anything bad about the BOL. If anything programmers who have worked on this in a CRM system wax eloquent about how this is the greatest thing since sliced bread.

 

Again it is pointless to go into detail here except to bring to your notice that this exists, and to stress just how many ways there are in SAP of achieving the same thing.

 

BOPF


The Business Object Processing Framework came in with ECC 6.0 EHP5 (“Business Suite Foundation” Support Stack 11), with SS5 in EHP6, and as standard in EHP7. It is also used in the SAP product Business ByDesign.

 

This has the same aim as all the previously mentioned business object frameworks generally and in particular the persistent object framework / object services – to not only provide a representation of a real word object within SAP but to also provide a uniform way to handle things like authorizations, transaction management, locking and so on.

 

In summary the main benefits of BOPF are to automate certain areas which you would usually have to code yourself whilst writing programs, and to give you a clear place to put each piece of functionality, separated in such a way as to make your program more “anti-fragile” i.e. resistant to change.

 

I would also like to briefly touch on two of the benefits not related to coding a transaction – integration with the persistency layer (shared memory) and UI layer (Floorplan Manager).

 

Use of Shared Memory


I would guess that 99% of the time you would want a business object to be persisted in the database, but it is also possible to have BOPF objects that live only in shared memory.

 

I have yet to see many practical uses for this, but as always it is good to be aware of the possibility.

 

Integration with the Floorplan Manager


Do you remember the song “I Spy for the FBI”? In this case FBI stands for “Floorplan Manager Business Object Integration” i.e. it should really be FMBOI but that would take all the fun out the acronym.

 

This lets you create a Web Dynpro application for creating and maintaining instances of your BOPF business objects with dramatically less coding.

SAP also claim that their “target architecture” for the future involves something similar to the FBI, which will integrate BOPF with UI5 in the same way.

BRFPlus


When I worked in Heidelberg, at lunchtimes I used to go to the beer garden round the corner, and whilst eating my chicken wings (or Flammkuchen) I would be reading the book by Carsten Ziegler about BRFPlus, the Business Rules Engine, which promises to replace the IMG, the condition mechanism for pricing and output, and the internal combustion engine, all whilst giving “the business” greater ownership of its own business rules.

image003.png

Probably because Heidelberg is just up the road from Walldorf, none of the middle aged guys who were eating their lunch at that drinking establishment seemed to think it odd I was reading a book with a big SAP logo on the front.

 

Whilst I was gone our Australian subsidiary upgraded to ECC 6.0, so as soon as I got back I was bursting to find a problem worthy of the attention of BRFPlus. None of my colleagues had even heard of the thing, and had no idea it was sitting there in the system, ready to use.

 

I did in fact find lots of valid business uses for this, so it had to go in the book, although BRFPlus has been available since version 7.02 of ABAP so in some sense it is not “new”. Nonetheless it is evolving at a breakneck pace; my understanding is that at any given time there is a huge backlog of features which a team of SAP developers is working on.

 

In case I completely stuffed everything up (i.e. everything I wrote turned out to be wrong) I sent my draft chapter to Carsten Ziegler himself, and he was kind enough to not only review the chapter but also arrange a screen share session so I could look at the development system in Germany and have a sneaky peek at the future of BRFPlus.

 

He also put my mind to rest on the two areas of BRFPlus I was worried about – namely performance of the tool in general, and ease of data entry.

BRFPlus lives inside the ABAP system but the front end is Web Dynpro, which makes it look a lot different to the IMG or similar SM30 type transactions where you are used to entering data for business rules. I had horrible performance problems using BRFPlus at first, staring at a big whirling circle for minutes on end whilst entering data. It turns out that most of that is a one-off hit the first time a screen gets called up, the same way you get the “compiling XXXX in a different session” message when you start doing SAP GUI transactions after a support pack installation.

 

The remaining performance problems are constantly under investigation and get addressed in the huge bunch of OSS notes that come with every support stack. The runtime of BRFPlus is never going to be an issue, the rules get compiled into ABAP code when first accessed and thereafter involve no database access at all.

 

My other concern was the ease of data entry – filling out a decision tree, or a line in a decision table, seemed to take an awful lot of button clicks and drop downs compared to the IMG/SM30 equivalent. If BRFPlus is going to take off it has to be as easy as, or easier to use than, the thing it is replacing – just like OO programming is easier to use than procedural programming (tee hee hee)! Once again, the team at SAP are on the case, investigating an easier way to enter data.

 

A lot of people have squawked like a duck because in its natural form you cannot change business rules directly in production, whereas in the past with Z tables the developer has the choice to enforce changes being made in development and transported through the landscape, or to have the table able to be directly maintained in production.

 

SAP might say to us customers - “why in the world would you want to change data directly in production”. A lot of customers would answer “because we do” which in the end is the only answer that counts. If someone has the ability to do something they want right now, and you are trying to convince them to take on a new tool which can’t do the thing they want, you are not going to get very far.

 

I will give a concrete example – in my company we have a table which controls which plants have certain interfaces active. If the device at the plant breaks down you need to switch the interface off right there and then, to enable manual entry at the plant of the information which was coming from the device. Waiting a day for a transport just will not cut the mustard. So we have a Z table controlled by the helpdesk which can switch an interface off directly in production.

 

It’s the same with a table which says what raw materials come from which quarry. If we have a huge cyclone like we did the other day, the power at the quarry goes out along with the phones, and we may wish to change the table to another source of supply right there and then. If we had to wait for a transport, by the time it went through the rain would be over, and the quarry back on line again, and then we would have to do another transport to reverse the change. In the meantime any plant relying on that material would be stuffed.

 

The other day a blog came out explaining one way to maintain such rules in production:-

 

http://scn.sap.com/community/brm/blog/2015/03/23/brfplus-application-exits-ability-to-maintain-entries-in-decision-tables

 

You can see in the comments at the bottom that another way is to use the separately licenced product “Decision Services Management” which lets you do a remote simulation of the effect the change will have before it hits production, approval etc. Lovely, but it costs extra.

 

At my company we built our own application to do this (simulation / workflow approval) because it was deemed cheaper than buying the DSM licence. I am sure Carsten would be amazed that we did this. Mind you the programming bar at my organisation is very high – much as I like to blow my own trumpet I am not even the best programmer, there is another guy ten times better than me. Not every organisation is in that position.

 

Regardless, I think BRFPlus is the future of IMG type rules, and my chapter on the subject was probably the one I was most satisfied with in the whole book.

 

To Be Continued


To end with the same paragraph as the prior blogs on book writing, in subsequent blogs I will continue to talk about the thought process behind how I chose what topics to include in the book, enriched with some content that was cut from the final version of the book for reasons of space.


Cheersy Cheers


Paul


https://www.sap-press.com/abap-to-the-future_3680/

 

PS I think the “Timmy the Dog” award should go to Persistent Objects.

 

Create GOS attachment for HR object


Hello!

 

If you get the task of attaching a file to an HR object, you may spend a lot of time learning how to do it.

Once I had done it, I wanted to keep the knowledge here so I can return to it when required.

 

Actually, a single method implementation does it:

 

 

class-methods ATTACH_FILE_TO_THE_HROBJID

     importing

       !IV_OBJID type HROBJID

       !IV_OTYPE type OTYPE

       !IV_FILENAME type STRING

       !IV_DESCRIPT type STRING

       !IR_LOGINST type ref to CL_CCMS_APPLOG optional

       !IV_HEXCONT type XSTRING

       !IV_DOCTYPE type SO_OBJ_TP default 'PDF' .


METHOD attach_file_to_the_hrobjid.

   DATA: lv_msgty TYPE msgty,

         lv_msgid TYPE msgid,

         lv_msgno TYPE symsgno,

         lv_msgv1 TYPE msgv1,

         lv_msgv2 TYPE msgv2,

         lv_msgv3 TYPE msgv3.

   DEFINE add_log_msg.

     if ir_loginst is supplied.

       lv_msgty = &1.

       lv_msgid = &2.

       lv_msgno = &3.

       lv_msgv1 = &4.

       lv_msgv2 = &5.

       lv_msgv3 = &6.

       ir_loginst->add_message(

                     ip_msg_type = lv_msgty

                     ip_msg_id   = lv_msgid

                     ip_msg_no   = lv_msgno

                     ip_msg_v1   = lv_msgv1

                     ip_msg_v2   = lv_msgv2

                     ip_msg_v3   = lv_msgv3 ).

     endif.

   END-OF-DEFINITION.

 

*  Check if the object exists

*  The fastest way is to access the DB table

   DATA: lv_plvar TYPE hrp1000-plvar.

   SELECT SINGLE plvar INTO lv_plvar

     FROM hrp1000

     WHERE plvar = cl_hap_pmp_const=>plvar and

           otype = iv_otype AND

           objid = iv_objid.

   IF sy-subrc <> 0.

     "add_log_msg 'E' ' ' 030 iv_otype iv_objid ''. "#EC NOTEXT

     RETURN.

   ENDIF.

 

  

* Here is your logic to check if there's already file attached

   IF sy-subrc <> 4.

     "add_log_msg 'E' ' ' 031 iv_objid '' ''. "#EC NOTEXT

     RETURN.

   ENDIF.

  

 

   DATA: ls_folder TYPE soodk.

 

   CALL FUNCTION 'SO_FOLDER_ROOT_ID_GET'

     EXPORTING

       region                = 'B'                           "#EC NOTEXT

     IMPORTING

       folder_id             = ls_folder

     EXCEPTIONS

       communication_failure = 1

       owner_not_exist       = 2

       system_failure        = 3

       x_error               = 4

       OTHERS                = 5.

   IF sy-subrc <> 0.

* Implement suitable error handling here

 

   ENDIF.

*  Convert hex string to hex table

   DATA: lv_binsize TYPE i,

         lt_hextab  TYPE TABLE OF solix,

         ls_docdat  TYPE sodocchgi1,

         ls_docinfo TYPE sofolenti1.

 

   CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'

     EXPORTING

       buffer        = iv_hexcont

     IMPORTING

       output_length = lv_binsize

     TABLES

       binary_tab    = lt_hextab.

 

*  Prepare some attributes for the file

   ls_docdat-doc_size  = lv_binsize.

   ls_docdat-obj_name  = iv_filename.

   ls_docdat-obj_descr = iv_descript.

 

*  Create the file

   CALL FUNCTION 'SO_DOCUMENT_INSERT_API1'

     EXPORTING

       folder_id                  = ls_folder

       document_data              = ls_docdat

       document_type              = iv_doctype

     IMPORTING

       document_info              = ls_docinfo

     TABLES

       contents_hex               = lt_hextab

     EXCEPTIONS

       folder_not_exist           = 1

       document_type_not_exist    = 2

       operation_no_authorization = 3

       parameter_error            = 4

       x_error                    = 5

       enqueue_error              = 6

       OTHERS                     = 7.

   IF sy-subrc <> 0.

* Implement suitable error handling here

     add_log_msg 'E' sy-msgid sy-msgno sy-msgv1 sy-msgv2 sy-msgv3. "#EC NOTEXT

     RETURN.

   ENDIF.

 

*  Create relation between file and the object

   DATA:   ls_object        TYPE borident,

           ls_attachmnt     TYPE borident.

   ls_object-objkey = lv_plvar && iv_objid.

   ls_object-objtype = c_prefix_hrobj && iv_otype.

 

   ls_attachmnt-objtype = 'MESSAGE'.                         "#EC NOTEXT

   ls_attachmnt-objkey  = ls_docinfo-doc_id(34).

   CALL FUNCTION 'BINARY_RELATION_CREATE'

     EXPORTING

       obj_rolea      = ls_object

       obj_roleb      = ls_attachmnt

       relationtype   = 'ATTA'                               "#EC NOTEXT

       fire_events    = abap_false

     EXCEPTIONS

       no_model       = 1

       internal_error = 2

       unknown        = 3

       OTHERS         = 4.

   IF sy-subrc <> 0.

* Implement suitable error handling here

     add_log_msg 'E' sy-msgid sy-msgno sy-msgv1 sy-msgv2 sy-msgv3. "#EC NOTEXT

     RETURN.

   ELSE.

     COMMIT WORK AND WAIT.

   ENDIF.

 

*Here you may implement your logic to check if file was attached

   IF sy-subrc = 0.

*    Success

     "add_log_msg 'S' ' ' 032 iv_objid '' ''. "#EC NOTEXT

     RETURN.

   ELSE.

*    Failed

     "add_log_msg 'E' ' ' 033 iv_objid '' ''. "#EC NOTEXT

     RETURN.

   ENDIF.

 

ENDMETHOD.
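A hedged usage sketch of the method above. The class name ZCL_HR_GOS_HELPER is an assumption, as are the object ID and file details; use whatever class you placed the method in:

```abap
DATA lv_content TYPE xstring.

* ... fill lv_content with the binary content of the file ...

zcl_hr_gos_helper=>attach_file_to_the_hrobjid(
  iv_objid    = '50001234'          " example HR object ID
  iv_otype    = 'P'                 " e.g. person
  iv_filename = 'contract.pdf'
  iv_descript = 'Signed contract'
  iv_hexcont  = lv_content ).       " IV_DOCTYPE defaults to 'PDF'
```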



I know this will help me in the future. I hope it will be useful for you as well.


Best regards,

Alex Guryanov

Quasimodo vs CL_GUI_ALV_GRID


The Hunchback of Notre Dame vs CL_GUI_ALV_GRID

image001.jpgimage002.jpg

Writing a Book for SAP Press – Part Six


Table of Contents


Background


Interfaces


CL_GUI_ALV_GRID


To Be Continued

Background

This is the latest in my series of blogs about writing a book for SAP Press. A book about building monsters for Baron Frankenstein, a situation virtually all ABAP programmers find themselves in at some point in their careers. The last such blog was as follows:-

 

http://scn.sap.com/community/abap/blog/2015/05/01/there-is-a-monster-in-my-pocket

 

I have avoided talking about the contents of the book, but rather the thinking behind it. In this series of blogs we have now reached Chapter 10, regarding the ALV interface technology, and the notable point here is that about two thirds of the original chapter I had prepared had to be edited out of the final version for reasons of space.


So this would be an appropriate place to talk about some of the subject matter which never made it into the book, looking more at the philosophy behind the use of various UI technologies in general rather than at a specific UI technology such as CL_SALV_TABLE. That said, once the general background has been covered I will use CL_GUI_ALV_GRID as a specific example of a general point.


Interfaces


When I was first trying to get my head around object orientated programming, one of the areas I struggled with the most was the concept of an “interface”. This appeared to be a class definition containing normal things like methods with their signatures and class attributes and what have you, but without any code behind it – rather like declaring a local class definition without the implementation.


What was that all about? I thought to myself. People on the internet were advising me to always use interfaces for the public parts of my classes. Well, why not have a normal class with the public parts public and everything else private and cut out the middleman? I was lost.


I found the answer via lots of experimentation and reading books like “Head First Design Patterns” and articles on the “Dependency Inversion Principle” and so forth, but one of the best definitions I heard was on the SCN by “Fire Fighter”.


You have probably heard phrases like “at work I wear many different hats” or “putting my accountants’ hat on for a second”. Most of us are more than one thing – you could be a computer programmer by trade or an accountant (Or both like me) and a keen sportsperson in your spare time (which I am not) and – in the case of the example – a volunteer fire fighter in your spare time.


You could be looking at a room full of accountants and say “I need a top golfer” and only some of them could step forward and fulfil that need, and most likely very few would be trained in actual firefighting. But they are still all accountants.


So an interface in this case could be a golfing interface with an attribute like “handicap” and methods like “swing club” or “putt”, things of no relevance to accounting. Conversely, accountants might have methods like “do bank reconciliation” or “look forward to month end like it’s Christmas” which have no place on the golf course.


In IT terms you would have a “person” class with methods every person has, like “breathe”, and attributes like “age”. Some people would implement the accounting interface, some the golfing interface, and some both. The requesting program would only ask for the golfing interface, as that is all it is interested in; it does not care if you are an accountant as well, and it has no interest in the methods every person can do.
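The accountant/golfer idea can be sketched in ABAP like this. Note this is purely an illustration – all the names (LIF_GOLFER, LCL_ACCOUNTANT and so on) are invented, and the method implementations are omitted for brevity:

```abap
* Hypothetical sketch - all names invented for illustration
INTERFACE lif_golfer.
  DATA: handicap TYPE i.
  METHODS: swing_club,
           putt.
ENDINTERFACE.

CLASS lcl_accountant DEFINITION.
  PUBLIC SECTION.
    INTERFACES lif_golfer.              "this accountant also plays golf
    METHODS: do_bank_reconciliation.
ENDCLASS.

* The requestor only asks for the golfing interface - it neither knows
* nor cares that the instance passed in happens to be an accountant
CLASS lcl_golf_club DEFINITION.
  PUBLIC SECTION.
    METHODS: tee_off IMPORTING io_golfer TYPE REF TO lif_golfer.
ENDCLASS.
```

An instance of LCL_ACCOUNTANT can be passed to TEE_OFF, but inside that method only the golfing attributes and methods are visible.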


Private Investigations


So a class can have one or more interfaces, which are the public faces it wears. A requesting class types its IMPORTING parameter with reference to an interface rather than to a concrete class. When the requesting class gets an instance passed in, it neither knows nor cares what the actual type of the class is. The requestor knows the class can perform the methods it wants on the grounds that the interface claims that it can. The requestor naturally can only access the interface methods and attributes, as it knows nothing else about what the actual class passed in can do.


This is a clear separation of the “what” from the “how”. The interface says “what” the class can do but the actual mechanics of the “how” are as private as can be.


Hale and Pace Layering


Some technologies evolve much faster than others – a concept known as “pace layering”. If the requesting class is the sort of thing that changes very rarely - e.g. the business logic of making concrete, something that has not changed much since the days of the Roman Empire – it can communicate with a technology that changes very frequently – e.g. each week, like user interface technology – by means of an interface. This way the slower evolving class does not need to know how fast (or slow) the other technology is evolving, as it does not care one jot how the job is done, just that it is done.


Fifty Shades of User Interface Technology


One thing I can think of where the technology has changed a great deal over the years and continues to change to this day, yet still does fundamentally the same thing, is the good old ALV grid, so I wanted to devote a chapter of the book to this subject.


You may wonder why in the world I would want to be talking about a technology that uses the SAP GUI when all you tend to hear about at conferences and on the internet is a relentless focus on zero footprint browser based interfaces. I have always thought of the SAP GUI as being like an internal combustion engine car, with the various browser based technologies being like electric cars. Electric cars have obvious advantages – cheaper running costs, fewer parts, saves the poor starving orphan polar bears from drowning – but will only kill conventional cars dead when they can go further on a single charge than a normal car can on a single tank of petrol (gasoline).

 

In the same way a browser based technology has obvious advantages – it looks nicer and runs on your mobile device – but it will only kill the GUI dead when it can have an equal or faster response time than the SAP GUI for an equivalent business transaction; at that point it will be game over for the GUI. In both cases the new technology has not killed the old one yet – but in both fields a lot of work is going on to try and tilt the balance.

 

You may think work on the ALV would have stopped dead at SAP, but between starting to write my book in February 2014 and finishing in February 2015 I discovered a new ALV class had been created, especially for dealing with the big tables you get when using HANA.

Let us have a look at the history of this technology, or to paraphrase Michael Caine “What’s it all about, ALV?”


Why the original ALV was a good thing

 

The ALV framework – in the form of function modules REUSE_ALV_LIST_DISPLAY and REUSE_ALV_GRID_DISPLAY – was brought out round about the year 2000, and you could find these modules in SAP systems as early as version 4.5.

 

There is no need to go into detail on how to use those modules in your programs for two reasons:-

·         They have been around (in IT terms) forever so there is an abundance of documentation on the internet (there was not so much around in the year 2000 as I recall)

·         Technically this technique is obsolete, even though I still see it being used an enormous amount and think it will be here for a long while yet. As an example, when you upgrade to ECC 6.0 one of the dormant business functions you can switch on transforms a load of standard SAP reports from WRITE statements to ALV lists.

 

Instead I am going to focus on the philosophy behind the introduction of the ALV (ALV the Elder), which as we shall see in due course is also mirrored in its SALV younger brother (ALV the Younger), and its even younger brother “ALV the glint in HANA the Milkman’s eye”.

 

The problem ALV was invented to solve

 

At the time of writing I am 46 so I am too young to ever have used WRITE statements to code a full blown report. I have converted enough WRITE based reports to ALV though to know that it must have been horrible. It reminds me of someone who once told me about when they started programming and they wrote their programs on punch cards and then fed them into the computer one by one and if they dropped the cards and they went out of sequence they were sunk.

 

The business users have come to expect certain things from a list of data displayed on a screen – to be able to sort it how they want, to filter it by any value they desire, to drill down on a particular field to see the underlying SAP document, to be able to sub-total the values, the list goes on forever.

When the only tool you had was WRITE statements then it was fairly easy to output a static list but if you wanted to give the users any of the fancy features I just mentioned you had to program them yourself in each report program you wrote.

 

Needless to say every programmer did this in a different fashion, often differently each time they wrote a new program, and implemented between one and all of the fancy features for a given report, so apart from this being an enormous amount of effort for the programmers, you ended up with dozens of reports in your system which all looked and behaved violently differently.

 

How ALV solved the problem

 

The primary point of the ALV was to bypass all the “boiler plate” programming of sorting data and drilling down and what have you. Instead you called a function module to display the data and passed into it several structures saying what fancy features you wanted e.g. a field catalogue to say what columns you wanted, a sort catalogue to control the sorting and sub-totalling and you could even control the print formatting to an extent.
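A minimal call of the old function module approach looks something like the sketch below – the report table and the single catalogue field shown here are invented for illustration, but the parameter names are those of the standard REUSE_ALV_GRID_DISPLAY signature:

```abap
TYPE-POOLS: slis.

DATA: lt_fieldcat TYPE slis_t_fieldcat_alv,
      ls_fieldcat TYPE slis_fieldcat_alv,
      lt_sort     TYPE slis_t_sortinfo_alv.

* Field catalogue: one entry per column you want in the output
ls_fieldcat-fieldname = 'MATNR'.
ls_fieldcat-seltext_l = 'Material'.
APPEND ls_fieldcat TO lt_fieldcat.

* The framework handles sorting, filtering, totalling, download etc.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
  EXPORTING
    i_callback_program = sy-repid
    it_fieldcat        = lt_fieldcat
    it_sort            = lt_sort
  TABLES
    t_outtab           = gt_report_data.  "your report's internal table
```

Every column you want means another field catalogue entry, which is exactly the repetition the SALV framework later removed.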

 

So when I converted a WRITE based report into an ALV report the two steps consisted of adding routines to fill up the structures, and then for every line I had added I found myself deleting twenty lines of code which were just not needed any more, for sorting and the like.

 

The secondary bonus was that after I had converted twenty reports to ALV they then all looked and responded exactly the same as each other which certainly was not the case before. You even got extra features like being able to download to EXCEL.

 

You could still add extra application specific features like extra buttons at the top of the screen but the point was that having to make the effort of adding those extra features was now the exception rather than the rule.

 

This brings me to the whole theme of this chapter – the idea behind the ALV was to remove the boiler plate code a programmer had to write, thus leaving them free to concentrate on the code that was specific to the business problem at hand they were trying to solve. This is the OO maxim “separate the things that change from the things that stay the same”.

 

Why CL_GUI_ALV_GRID was better (and worse)

 

With the advent of version 4.6 of SAP ERP the ABAP language was renamed “ABAP Objects” and you got a big cave painting telling you just that every time you opened SE80. This was to try and ram home the point that you could now do object orientated programming in ABAP. Even now this still seems to come as a shock to some people.

 

Assorted classes were now available to do the same sort of thing as the ALV, so you were no longer supposed to use the function module REUSE_ALV_GRID_DISPLAY but instead the class CL_GUI_ALV_GRID. It worked on virtually the same principle, passing in input structures.

 

These classes utilized the “control” technology whereby you could embed controls in areas of DYNPRO screens and then at runtime determine if one or more grids of data (or web pages or pretty pictures or a whole raft of other options) should appear in those areas.

 

This was part of the hilariously named “Enjoy SAP” exercise to try and make the standard transactions more user-friendly. I still find it funny when an end user complains about how difficult it is to use a transaction like ME21N and I get to tell them “this is an ENJOY transaction – you are supposed to be enjoying it”.

Whether the users enjoyed this or not, it did give programmers an enormous amount of new options as to how to display things on the screen. In addition you could now add your own user commands programmatically without having to create a so-called STATUS and then adding your own extra buttons.

 

However this flexibility comes at a cost – compared to setting up and calling a function module the process of creating a DYNPRO screen and creating and setting up the needed objects can appear quite laborious (though still not as bad as WRITE statements).

 

For example if all you wanted to do was display a list of data then it would be tempting to just call the function module rather than having to create a screen and then instantiate all the control objects needed for CL_GUI_ALV_GRID.

 

To summarize this was better than the ALV function modules because of the vast range of new things you could do, but the downside was that some of the boiler plate code you had thought was safely dead and buried came jumping back out of the grave shouting “did you miss me?”.

 

Why the SALV reporting framework was the best of both worlds

 

The SALV reporting framework, in the form of CL_SALV_TABLE and its friends, although coming after CL_GUI_ALV_GRID, is in some senses much more of a direct follow-on from the ALV function modules.

 

The most obvious improvement, that made the programming community sing and dance, was that before SALV came on the scene you had your report data in an internal table, and you had to define all the fields of your internal table, and then define those exact same fields again when setting up a field catalogue for an ALV function module or a CL_GUI class.

 

In the SALV reporting framework you just pass the internal table in, and the class dynamically creates the field catalogue for you. In some basic report programs that halves the lines of code in one fell swoop. Moreover you do not need to create a DYNPRO screen for the report output to live in; the SALV class does that for you as well, just like the function module did. You can however, if you so desire, attach your SALV object to a control just like the CL_GUI classes, so you do have the best of both worlds.

 

This time there is no grey area – if you have a simple report then using CL_SALV_TABLE is always going to take fewer lines of code and less programming effort than the function module equivalent, at least for saying what columns you want, and many reports have dozens of columns. This was the nail in the ALV function modules’ coffin.

 

With ABAP 7.40 we also dispense with another task (declaring the internal table) so we have moved from:-

·         Declaring an internal table by listing all the fields

·         Filling the internal table with data by listing all the fields again in a SELECT statement

·         Creating a field catalogue by listing all the fields again

·         Calling the function module

To:-

·         Filling the internal table by listing all the fields in a SELECT statement, which also dynamically creates the internal table structure

·         Calling the CL_SALV_TABLE class and passing it the internal table
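In 7.40 terms the whole flow can shrink to something like the sketch below. The table and fields read here (MARD stock data) are just an example; the FACTORY and DISPLAY calls are the standard CL_SALV_TABLE API:

```abap
* One SELECT both declares and fills the table (7.40 inline declaration)
SELECT matnr, werks, labst
  FROM mard
  INTO TABLE @DATA(lt_stock).

DATA lo_alv TYPE REF TO cl_salv_table.

TRY.
*   The field catalogue is derived from the table structure for you
    cl_salv_table=>factory(
      IMPORTING r_salv_table = lo_alv
      CHANGING  t_table      = lt_stock ).

    lo_alv->display( ).
  CATCH cx_salv_msg.
*   Implement suitable error handling here
ENDTRY.
```

No DYNPRO screen, no field catalogue, no DATA declaration for the result table.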


This is all wonderful, even before 7.40. However what causes consternation amongst programmers the first time they encounter the SALV is that, as opposed to having structures as inputs to a function module, a SALV object takes other objects as input: one for sorting, one for subtotalling, one for controlling the fields, in fact quite a large number of possible input objects.

 

The more features you want in your SALV report the more objects you have to declare, and then create and then link to the main SALV object and then finally feed them the information you want.

 

This is normal in OO programming but could also be described as a vast horde of boiler plate code statements sweeping out of the graveyard and devouring everything in their path.
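For example, just to get the standard toolbar functions, one renamed column and one subtotal, you already need to talk to three helper objects. The method names below are the standard CL_SALV_TABLE API; the column names are invented, and the exception handling a real program would need is omitted:

```abap
* Assuming lo_alv TYPE REF TO cl_salv_table has already been created
lo_alv->get_functions( )->set_all( abap_true ).      "standard toolbar

* Rename one column via the columns helper object
DATA(lo_column) = lo_alv->get_columns( )->get_column( 'LABST' ).
lo_column->set_short_text( 'Stock' ).

* Sort and subtotal via the sorts helper object
lo_alv->get_sorts( )->add_sort(
  columnname = 'WERKS'
  subtotal   = abap_true ).
```

Each feature means another GET_SOMETHING call, another reference variable, and usually another TRY/CATCH block.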

 

So, each of the three technologies discussed here still has its fair share of boiler plate code. How do we go about defeating this enemy?

 

Susan Boilerplate Code


I dreamed a dream in times gone by, a dream in which I wanted to:-


·         For SALV reports I wanted to avoid having to declare all the helper objects each time, and wanted the code to set up the special features for each report to be as simple as (or simpler than) it had been for the original ALV

·         As you still need CL_GUI_ALV_GRID in certain situations, again I wanted this to be as easy as (or easier than) the original ALV

·         As a purely academic exercise to demonstrate the point, I thought why can’t you have the same report with a radio button on the front saying “REUSE_ALV_LIST or CL_GUI_ALV_GRID or CL_SALV_TABLE display” and then have the user choose the report output (rather like the “user settings” options in SE16), but without having the program be three times as long. This is an academic exercise as most users would not know those terms, and you would not actually want that option in real life, this would just be to show you can output the results of the same program using three different technologies and swap between them.

·         To summarise – I wanted to create a template program for an ALV report and then to reduce the code in that report template program to the bare minimum and as a side effect you are then able to switch the technology used to output the report at the drop of a hat.


It’s the same old song, but with a different meaning…..


So the first thing I did was to write three template programs, all reading the same Z table, and all displaying the contents using the ALV. One program used the REUSE_ALV_LIST_DISPLAY function module, one used CL_GUI_ALV_GRID and one used CL_SALV_TABLE. Upon completion they all looked totally different, as might be imagined. This is because each one was stuffed full of the specific structures (and objects in the latter two cases) that are specific to that particular version of the UI technology.

 

However at a very high level all three were doing the exact same thing – taking the contents of the same table and displaying it. The aim of the game is to try and see if you can take that process down to a lower level and see if all three are still the same. The lower down you go, eventually there is bound to be a difference, so you have to define your interface one level up from where things start to change depending on which technology you use.

 

I found by writing the program flow from each of the three programs on a beer mat and drawing circles around blocks of code which did similar things I could break the flow into four main chunks, the same in each case, which would be the level of my interface.

 

·         Report Agnostic / Technology Specific - Sometimes there was a need to create a “container” (not for the old-fashioned function module technology!). The code to do this – though different for each technology - would not vary between different programs that used that same technology.

 

·         Report Agnostic / Technology Specific - There was also a need to “initialise” some variables e.g. the variant name from the selection screen, or an instance of the report class when a class is the technology involved. Again the code would be the same for different programs using the same technology; only input values from the selection screen and the like would change.

 

·         Report Specific / Technology Agnostic - Then there would be what I call “application specific changes” which would always vary between programs e.g. I want to rename this column, enable drill down on this column etc. However this chunk does not care what technology is being used.

 

·         Report Agnostic / Technology Specific – when you are finished you use your selected technology to display the report to the user. At this point the selected technology takes control and communicates with the calling program by sending back “events” to say what the end user has been doing e.g. double clicking on a field.

 

You can see from the above that for three of those chunks all the calling report has to say is “do this” and the selected technology will take care of how to carry out the task. For those tasks the amount of code in the template program can be very small indeed.

 

The third one is the important one – saying what is going to be different about this report in particular.

 

I don’t care, I don’t care, I don’t care, if a new ALV class comes round here


We want to make sure the calling report program is not too fussed (does not care) what ALV technology is in use, and doesn’t need to get too stressed when it changes. How this is going to work is that you have an interface which describes those four things I have just mentioned. The calling program is going to define a “view” object as TYPE REF TO the interface. Then at the start of the program the view object is going to be created as a specific class that implements that interface e.g.

 

ZCL_BC_VIEW_RALD - Base View for REUSE_ALV_GRID_DISPLAY

ZCL_BC_VIEW_ALV_GRID - Base View Model for CL_ALV_GRID

ZCL_BC_VIEW_SALV_TABLE - Base View Model for CL_SALV_TABLE

 

You could hard code the CREATE OBJECT statement to use the TYPE of the ALV technology you are using at the moment, and then when you want to trade up you just change one line i.e. the class TYPE of the view object you are creating. Or you could read the exact type to be created out of a customising table, maybe have a factory class used by lots of reports that returns a view object. That way the calling program would not have to be changed at all when you wanted to start using a new technology.
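A factory along those lines might look like the sketch below. ZCL_BC_VIEW_FACTORY and the customising table ZALV_TECH are invented names for illustration, while ZIF_BC_ALV_REPORT_VIEW is the generic view interface described in this blog:

```abap
CLASS zcl_bc_view_factory DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS get_view
      RETURNING VALUE(ro_view) TYPE REF TO zif_bc_alv_report_view.
ENDCLASS.

CLASS zcl_bc_view_factory IMPLEMENTATION.
  METHOD get_view.
*   Read the desired technology from a (hypothetical) customising table
    SELECT SINGLE view_class FROM zalv_tech INTO @DATA(ld_class).
    IF sy-subrc <> 0.
      ld_class = 'ZCL_BC_VIEW_SALV_TABLE'.  "sensible default
    ENDIF.
*   Dynamic CREATE OBJECT - the concrete class is decided at runtime
    CREATE OBJECT ro_view TYPE (ld_class).
  ENDMETHOD.
ENDCLASS.
```

The calling report then never names a concrete class at all; it just asks the factory for a view, and swapping technology becomes a table entry change rather than a code change.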

 

For example tomorrow SAP could invent a new ALV class called CL_SALVE_FOR_MY_WOUNDS which is ten times better than anything that came before.

You could then just change the factory class – or an entry in a customising table – and all your reports would suddenly start using the new ALV class. Naturally you would have to create a SALVE specific class that implements the generic interface.

 

What’s in the book, chook?


In my book I talked about the mechanics of the generic interface and gave a detailed example of a specific implementation using CL_SALV_TABLE. I wanted to give an example using CL_GUI_ALV_GRID as well, but it turned out there was not room. So I will give the CL_GUI_ALV_GRID example here.

 

This is important because – and I harp on about this like a broken record – the CL_SALV_TABLE is not supposed to be editable, and every single SAP customer needs an ALV grid to be editable, so the “new” technology is missing a vital feature the “old” technology had.

 

In my book I talk about how to work around this, but really you should not have to work around this, it should be standard. So, if you need an editable grid and you don’t want to break the rules, you need CL_GUI_ALV_GRID.

 

Moreover I wonder if any of you have been in the boat where you have been asked to write a read-only report, so you chose CL_SALV_TABLE as it was easier, and then at the last second the requirements change to have “just one column editable”? Then you have to re-write the entire program using CL_GUI_ALV_GRID and that takes ages.

 

With the interface approach if that happens you just change the object creation line in the calling program to use the CL_GUI_ALV_GRID related Z class as opposed to the CL_SALV_TABLE related Z class.

 

Example with CL_GUI_ALV_GRID


In the book I talk about a monster report I implemented using CL_SALV_TABLE, but here I decide I need to use CL_GUI_ALV_GRID instead, leaving the monster report totally unchanged with the exception that instead of

 

CLASS lcl_view DEFINITION INHERITING FROM zcl_bc_view_salv_table.

 

We have




CLASS lcl_view DEFINITION INHERITING FROM zcl_bc_view_alv_grid.

 

Since both classes implement our custom interface the calling program cannot tell them apart, though of course the underlying code will be very different.

To start off with, the list of attributes of the GUI ALV GRID subclass bears no resemblance to the attributes of the SALV subclass, as can be seen below

image003.png

Custom GUI ALV GRID Subclass Attributes

We will now go through the same functional areas as we did in the last example to see how we achieve each task in the way the CL GUI ALV GRID requires.

Initialisation

 

Of the three GUI report technologies mentioned earlier, CL_GUI_ALV_GRID is the only one where, if you just want a full screen report, you have to define a screen which is empty apart from a whacking great “custom control” filling it up. This is because the CL_GUI_ALV_GRID needs a container to sit within.

Defining such a screen for each new report does not take long, but it is another of those boring repetitive tasks I could do without. In such cases I often spend ages coming up with a way to automate such steps so I never have to worry about them again. The solution I describe is naturally not the only way to achieve this, but it works and will give you an idea of the principle.

 

Sometimes the report will be displayed on part of a screen, in which case we do have to manually create the screen and container area, and then pass the container in, but if we want the screen and associated container to magically create themselves then we split the initialization into two steps – creating such a screen and then the rest of the initialization steps you normally have to do manually in each report.

 

METHOD create_container_prep_display.

  md_report_name     = id_report_name.
  ms_variant-report  = id_report_name.
  ms_variant-variant = id_variant.
  mt_user_commands[] = it_user_commands[].

  CREATE DATA mt_data_table LIKE ct_data_table.
  GET REFERENCE OF ct_data_table INTO mt_data_table.

  CALL FUNCTION 'ZSALV_CSQT_CREATE_CONTAINER'
    EXPORTING
      r_content_manager = me
      title             = id_title.

ENDMETHOD.

Creating a Container Automatically


The first half of the above code just puts the input parameters in global (to the class instance) variables, the second relates to creating a screen and container.

 

This process of creating a screen automatically is going to seem quite complicated but as noted before this only has to be set up once and all of this complexity is hidden from the calling program. Anyway, I noticed some standard SAP programs seemed to set up their own screens and containers so I investigated how this worked.

 

I discovered that a class that wants automatic screen creation has to firstly implement interface IF_SALV_CSQT_CONTENT_MANAGER which has the somewhat enigmatic description “manages content”.

 

There is a standard SAP function SALV_CSQT_CREATE_CONTAINER which I copied and made a small change which was to tick the “without application toolbar” checkbox so the resulting screen does not have an ugly “hole” at the top. The code above makes a call to this function module passing in the calling class so the function knows where to return control to. The function creates a screen and a container and then passes back that container to method FILL CONTAINER CONTENT of the interface that our custom view class has implemented.

 

METHOD if_salv_csqt_content_manager~fill_container_content.
*---------------------------------------------------------------*
* This gets called from function SALV_CSQT_CREATE_CONTAINER PBO module
* which creates a screen and a container, and passes us that container
*---------------------------------------------------------------*
*Local Variables
  FIELD-SYMBOLS: <lt_data_table> TYPE ANY TABLE.

  ASSIGN mt_data_table->* TO <lt_data_table>.

  prepare_display_data(
    EXPORTING
      id_report_name   = md_report_name " Calling program
      id_variant       = ms_variant-variant
      io_container     = r_container      

      it_user_commands = mt_user_commands
    CHANGING
      ct_data_table    = <lt_data_table> ).  " Data Table


ENDMETHOD.

 

All the FILL CONTAINER CONTENT does is to pass on the container object to a method which then orchestrates the rest of the initialization steps.

 

METHOD zif_bc_alv_report_view~prepare_display_data.

* Step One - Generic - Set up the Basic Report
  initialise(
    EXPORTING
      id_report_name   = id_report_name   " Calling program
      id_variant       = id_variant       " Layout
      io_container     = io_container
      it_user_commands = it_user_commands " Toolbar Buttons
    CHANGING
      ct_data_table    = ct_data_table ).

* Step Two - Application Specific
  application_specific_changes( ).

* Step Three - Generic - Actually Display the Report
  display( ).

ENDMETHOD.

METHOD zif_bc_alv_report_view~initialise.
* Local Variables
  DATA: lo_container TYPE REF TO cl_gui_custom_container.

  ms_variant-report  = id_report_name.
  ms_variant-variant = id_variant.
  mt_user_commands[] = it_user_commands[].

  CREATE DATA mt_data_table LIKE ct_data_table.
  GET REFERENCE OF ct_data_table INTO mt_data_table.

* Create CL_GUI_ALV_GRID object. Make sure we do not create a container if we are offline
  IF cl_gui_alv_grid=>offline( ) IS INITIAL.
    mo_custom_container ?= io_container.
  ENDIF.

* Now we have a container, we place within it an ALV grid
  CREATE OBJECT mo_alv_grid
    EXPORTING
      i_appl_events = 'X'
      i_parent      = mo_custom_container.

  display_basic_toolbar( ).

  set_layout( ).

* User Command Processing
  set_handlers( ).

* Turn on the processing of events on <ENTER>
  IF cl_gui_alv_grid=>offline( ) IS INITIAL.
    CALL METHOD mo_alv_grid->register_edit_event
      EXPORTING
        i_event_id = cl_gui_alv_grid=>mc_evt_enter.
  ENDIF.

ENDMETHOD.

Custom CL GUI ALV GRID – Initialization Method



Any GUI technology which uses the “controls” framework relies on processing being done on the “client”, i.e. the PC that SAP is talking to; a lot of the work happens on the client side, which is why scrolling is so slow, for example. If the report is running in the background then there is no client and you have to be careful not to cause a short dump. The standard code is usually clever enough to realize that a report is running in the background and switches to a display mode which does not need a client.

 

You will see in the listing above that we are using a static method of CL GUI ALV GRID to determine if the report is running in the background to make sure we do not pass in unexpected things like a container (which makes no sense when there is no screen output) or preparing for the user pressing enter when there is no user.

 

Creating the object is simple enough - then we come to setting up the toolbar at the top.

 

METHOD display_basic_toolbar.
*---------------------------------------------------------------*
* At the top of the standard ALV grid are a big bunch of icons. Some of them
* do nothing, others do harmful things. We want to switch some of them off
*---------------------------------------------------------------*
  DATA ls_exclude TYPE ui_func.

  REFRESH mt_exclude.

  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_undo.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_copy_row.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_append_row.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_delete_row.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_paste_new_row.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_paste.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_loc_cut.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_data_save.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_subtot.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_sum.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_refresh.
  APPEND ls_exclude TO mt_exclude.
  ls_exclude = cl_gui_alv_grid=>mc_fc_graph.
  APPEND ls_exclude TO mt_exclude.

ENDMETHOD.

Custom CL GUI ALV GRID Initialization – Toolbar Method


In the SALV object the default behaviour is no buttons at the top of the screen. In CL GUI ALV GRID the reverse is true – a large number of buttons appear by default, the vast majority of which make no sense at all for a static report – buttons for inserting and deleting rows for example. So the code above is concerned with de-activating a large portion of the default buttons.
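As an aside, on newer ABAP releases (7.40 and up) the same exclusion table can be filled in a single statement with a VALUE constructor expression. This is a sketch of the idea, not code from the book:

```abap
mt_exclude = VALUE #( ( cl_gui_alv_grid=>mc_fc_loc_undo )
                      ( cl_gui_alv_grid=>mc_fc_loc_copy_row )
                      ( cl_gui_alv_grid=>mc_fc_loc_append_row )
                      ( cl_gui_alv_grid=>mc_fc_loc_delete_row )
                      ( cl_gui_alv_grid=>mc_fc_data_save ) ).
```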

 

We set up the display variant right at the start when the object is created and the initialization method is called; now we set up the rest of the report layout options.

 

METHOD set_layout.

  ms_layout-stylefname = 'CELLTAB'.
  ms_layout-zebra      = 'X'.
  ms_layout-sel_mode   = 'D'.
  ms_layout-cwidth_opt = 'A'. "Always optimised
  ms_layout-numc_total = 'X'.

* Set save restriction
* Check authority to change display variants.
  AUTHORITY-CHECK OBJECT 'Z_VARIANT1' ID 'ACTVT' FIELD '*'.

  IF sy-subrc = 0.   " does he ride a white horse?
    md_save = 'A'.   " yes, allow user and global display variants
  ELSE.
    md_save = 'U'.
  ENDIF.

ENDMETHOD."Set Layout

Using this UI class we don’t really need a separate method to optimize the column widths, as this is done together with the other layout options.

 

Now we move on to user command processing. This is very similar to the way we set up the event handlers for our custom SALV class; the main difference is that CL GUI ALV GRID has a lot more standard events that it raises.

 

METHOD set_handlers.

  SET HANDLER handle_toolbar_set          FOR mo_alv_grid.
  SET HANDLER handle_context_menu_request FOR mo_alv_grid.
  SET HANDLER handle_user_command         FOR mo_alv_grid.
  SET HANDLER handle_double_clicks        FOR mo_alv_grid.
  SET HANDLER handle_link_click           FOR mo_alv_grid.
  SET HANDLER handle_on_f4                FOR mo_alv_grid.
  SET HANDLER handle_button_click         FOR mo_alv_grid.

ENDMETHOD.

Custom CL GUI ALV GRID CLASS – Event handlers


The first two events are not related to anything the user does: they are called when the GUI object creates the toolbar for the first time, or when the user right-clicks on a line and the "context menu" appears.

 

When our custom view object was being created we had an optional table of extra commands passed in. If this table has any entries then we need to add our new buttons to the toolbar at the top of the screen.

 

METHOD handle_toolbar_set.
* Local Variables
  DATA: ls_commands_out TYPE stb_button,
        ls_commands_in  TYPE zsbc_stb_button.

* Create own menu buttons and toolbar buttons
* Append a separator to the normal toolbar
  CLEAR ls_commands_out.
  MOVE 3 TO ls_commands_out-butn_type.
  APPEND ls_commands_out TO e_object->mt_toolbar.

* Add the buttons the model would like to add
  CHECK mt_user_commands[] IS NOT INITIAL.

  LOOP AT mt_user_commands INTO ls_commands_in.
    CLEAR ls_commands_out.
    MOVE-CORRESPONDING ls_commands_in TO ls_commands_out.
    APPEND ls_commands_out TO e_object->mt_toolbar.
  ENDLOOP.

ENDMETHOD."Handle Toolbar Set

Adding our own user commands to the toolbar


As can be seen in the listing above, we first add a separator to make a distinction between the standard toolbar icons and our custom ones, and then just loop through our table of extra commands, adding them one by one. This is much better than having to create a user-defined STATUS and change it each time you need a new command, as was the case with the REUSE_ALV function modules.
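For completeness, here is a hedged sketch of how a calling program might fill that table of extra commands. ZSBC_STB_BUTTON is the custom structure from the listing; the assumption is that its fields mirror STB_BUTTON, and the function code and texts are invented:

```abap
DATA: ls_command  TYPE zsbc_stb_button,
      lt_commands TYPE STANDARD TABLE OF zsbc_stb_button.

ls_command-function  = 'ZPOST'.           " our own function code
ls_command-icon      = icon_okay.         " standard icon constant
ls_command-quickinfo = 'Post Documents'.  " tooltip text
APPEND ls_command TO lt_commands.

* LT_COMMANDS is then passed to the view when it is created and ends
* up in MT_USER_COMMANDS, which the handler above loops over.
```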

 

METHOD handle_context_menu_request.
* Local Variables
  DATA: ld_fcode    TYPE ui_func,
        ld_text     TYPE gui_text,
        ld_icon     TYPE icon_d,
        ls_commands TYPE zsbc_stb_button.

  CHECK mt_user_commands[] IS NOT INITIAL.

  LOOP AT mt_user_commands INTO ls_commands.

    MOVE: ls_commands-function  TO ld_fcode,
          ls_commands-icon      TO ld_icon,
          ls_commands-quickinfo TO ld_text.

    CALL METHOD e_object->add_function
      EXPORTING
        fcode = ld_fcode
        text  = ld_text
        icon  = ld_icon.

  ENDLOOP."Custom User Commands

ENDMETHOD."Handle Context Menu Request

 

A context menu is a list of user commands that appears in a single column under your cursor when you right-click on a cell in the report. We add our own commands at the end, in the same way we add them to the toolbar at the top of the screen.

 

Next we have three events which are identical from our point of view – clicking on a hotspot, double clicking on a cell, and clicking on a cell which is a pushbutton.

 

METHOD handle_link_click.
* Local Variables
  DATA: ld_column TYPE salv_de_column,
        ld_row    TYPE salv_de_row.

* Adapt view specific data to generic values
  ld_row    = es_row_no-row_id.
  ld_column = e_column_id-fieldname.

  RAISE EVENT user_command_received
    EXPORTING ed_user_command = '&IC1'
              ed_row          = ld_row
              ed_column       = ld_column.

ENDMETHOD."Handle Link Click


This time we need to adapt the data elements CL GUI ALV GRID uses to store the row and column to our more generic data elements we use in the interface.

 

The events HANDLE_DOUBLE_CLICKS and HANDLE_BUTTON_CLICK are coded identically except for the fact that the names and definitions of the data structure that stores the selected row and column are different each time and need to be similarly adapted.

 

That just leaves us with HANDLE_ON_F4, which is triggered when a user places their cursor on a cell and presses F4 to get a list of possible values. This happens automatically for columns defined with reference to a data element; you only need to program this for other columns, which should be a rarity. I have left the implementation empty in the subclass; if a calling program really needs this then the method can be redefined in a local subclass.
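If a subclass does redefine it, the column also has to be registered for F4, otherwise the grid never raises the event. A sketch using the standard REGISTER_F4_FOR_FIELDS method (the column name here is hypothetical):

```abap
DATA: lt_f4 TYPE lvc_t_f4,
      ls_f4 TYPE lvc_s_f4.

ls_f4-fieldname = 'SOME_FIELD'.   " hypothetical column
ls_f4-register  = abap_true.      " raise ONF4 for this column
INSERT ls_f4 INTO TABLE lt_f4.

mo_alv_grid->register_f4_for_fields( it_f4 = lt_f4 ).
```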

 

We are now finished creating the basic report object; it is time to define the nature of the report columns.

 

Formatting the Columns

 

Sadly CL GUI ALV GRID is not as clever as CL_SALV_TABLE and cannot automatically define the columns based on the internal table that supplies the data. We have to manually add each column to the field catalogue just as we used to do with the REUSE_ALV function modules.

 

To keep things as uniform as possible in the calling program: when we use the SALV object we only need to call SET COLUMN ATTRIBUTES for fields we want to change in some way, whereas for CL GUI ALV GRID we have to call that method for every single field, in the order we want the columns to appear.
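So a calling program ends up with one call per column, in display order. A sketch of what those calls might look like (the view reference and the VBAK columns are purely illustrative; the parameter names are those of the interface method):

```abap
* Column 1: order number as a clickable hotspot
mo_view->set_column_attributes( id_field_name = 'VBELN'
                                id_table_name = 'VBAK'
                                if_is_hotspot = abap_true ).

* Column 2: net value with subtotalling switched on
mo_view->set_column_attributes( id_field_name  = 'NETWR'
                                id_table_name  = 'VBAK'
                                if_is_subtotal = abap_true ).
```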

 

METHOD zif_bc_alv_report_view~set_column_attributes.
* Local Variables
  DATA: ls_fieldcat LIKE LINE OF mt_fieldcat,
        ld_col_pos  TYPE sy-tabix,
        ld_name     TYPE c LENGTH 80.

  FIELD-SYMBOLS: <ls_fieldcat> LIKE LINE OF mt_fieldcat.

  READ TABLE mt_fieldcat ASSIGNING <ls_fieldcat>
    WITH KEY fieldname = id_field_name.

  IF sy-subrc NE 0.

    ld_col_pos = lines( mt_fieldcat[] ) + 1.

    APPEND INITIAL LINE TO mt_fieldcat ASSIGNING <ls_fieldcat>.

    <ls_fieldcat>-col_pos   = ld_col_pos.
    <ls_fieldcat>-fieldname = id_field_name.
    <ls_fieldcat>-tabname   = id_table_name.
    <ls_fieldcat>-ref_table = id_table_name.

  ENDIF.

  IF if_is_hotspot = abap_true.
    <ls_fieldcat>-hotspot = abap_true.
  ENDIF.

  IF if_is_visible IS SUPPLIED.
    set_visible( EXPORTING if_is_visible = if_is_visible
                 CHANGING  cs_fieldcat   = <ls_fieldcat> ).
  ENDIF.

  IF if_is_technical = abap_true.
    <ls_fieldcat>-tech = abap_true.
  ENDIF.

  IF if_is_a_button = abap_true.
    <ls_fieldcat>-style = cl_gui_alv_grid=>mc_style_button.
  ENDIF.

  IF if_is_subtotal = abap_true.
    <ls_fieldcat>-do_sum = abap_true.
  ENDIF.

  IF id_long_text IS NOT INITIAL.
    set_long_text( EXPORTING id_long_text = id_long_text
                   CHANGING  cs_fieldcat  = <ls_fieldcat> ).
  ENDIF.

  IF id_medium_text IS NOT INITIAL.
    set_medium_text( EXPORTING id_medium_text = id_medium_text
                     CHANGING  cs_fieldcat    = <ls_fieldcat> ).
  ENDIF.

  IF id_short_text IS NOT INITIAL.
    <ls_fieldcat>-scrtext_s = id_short_text.
  ENDIF.

  IF id_tooltip IS NOT INITIAL.
    <ls_fieldcat>-tooltip = id_tooltip.
  ENDIF.

ENDMETHOD."Set Column Attributes

Custom CL GUI ALV GRID class – set column attributes


As can be seen, the code for setting the various column attributes is a lot simpler than the SALV equivalent. This time we don't need to create helper methods unless they do something that requires more than one line of code.

 

 

METHOD set_visible.

  IF if_is_visible = abap_true.
    cs_fieldcat-no_out = abap_false.
  ELSE.
    cs_fieldcat-no_out = abap_true.
  ENDIF.

ENDMETHOD.


We are keeping the parameter named "is visible" for consistency with the SALV structure, so we just need to reverse the flag and pass it into the CL GUI ALV GRID "no output" flag.

 

METHOD set_long_text.

  cs_fieldcat-scrtext_l = id_long_text.

  IF strlen( id_long_text ) LE 20.
    cs_fieldcat-scrtext_m = id_long_text.
  ENDIF.

  IF strlen( id_long_text ) LE 10.
    cs_fieldcat-scrtext_s = id_long_text.
  ENDIF.

ENDMETHOD.

 

In regard to the column texts, you can see that the same logic is being applied as in the SALV equivalent, to avoid having to pass the same value in three times when a description applies to all three texts.

 

Sorting


METHOD zif_bc_alv_report_view~add_sort_criteria.
* Local Variables
  DATA: ld_position TYPE sy-tabix.

  FIELD-SYMBOLS: <ls_sort> LIKE LINE OF mt_sort.

  READ TABLE mt_sort ASSIGNING <ls_sort>
    WITH KEY fieldname = id_columnname
             spos      = id_position.

  CHECK sy-subrc <> 0.

  ld_position = lines( mt_sort[] ) + 1.
  APPEND INITIAL LINE TO mt_sort ASSIGNING <ls_sort>.
  <ls_sort>-spos      = ld_position.
  <ls_sort>-fieldname = id_columnname.

  IF if_descending = abap_true.
    <ls_sort>-up   = abap_false.
    <ls_sort>-down = abap_true.
  ELSE.
    <ls_sort>-up   = abap_true.
    <ls_sort>-down = abap_false.
  ENDIF.

  IF if_subtotal = abap_true.
    <ls_sort>-subtot = abap_true.
  ENDIF.

ENDMETHOD."Add Sort Criteria

 

The sort method is just a question of adapting the input parameters from the interface method – which are based on the SALV object – to the CL GUI ALV GRID equivalent.
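A sketch of what the call looks like from the calling report (the column names are illustrative, and ID_POSITION is presumably optional and left defaulted here):

```abap
* Sort by sales organisation with subtotals, then order number descending
mo_view->add_sort_criteria( id_columnname = 'VKORG'
                            if_subtotal   = abap_true ).

mo_view->add_sort_criteria( id_columnname = 'VBELN'
                            if_descending = abap_true ).
```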

 


 

Displaying the Report

 

There is nothing dramatic here, we are just calling the display method of CL_GUI_ALV_GRID and once again hiding the complexity.

 

METHOD zif_bc_alv_report_view~display.
* Local Variables
  FIELD-SYMBOLS: <lt_data> TYPE ANY TABLE.

  ASSIGN mt_data_table->* TO <lt_data>.

  CALL METHOD mo_alv_grid->set_table_for_first_display
    EXPORTING
      it_toolbar_excluding = mt_exclude
      is_layout            = ms_layout
    CHANGING
      it_fieldcatalog      = mt_fieldcat
      it_sort              = mt_sort
      it_filter            = mt_filter
      it_outtab            = <lt_data>.

ENDMETHOD.

User Commands


 

We talked about the various events raised by CL GUI ALV GRID earlier. In the event handler methods we just pass these on to an event raised by our custom class, adapting the CL GUI data structures to the SALV equivalent where needed.

 

METHOD handle_link_click.
* Local Variables
  DATA: ld_column TYPE salv_de_column,
        ld_row    TYPE salv_de_row.

* Adapt view specific data to generic values
  ld_row    = es_row_no-row_id.
  ld_column = e_column_id-fieldname.

  RAISE EVENT user_command_received
    EXPORTING ed_user_command = '&IC1'
              ed_row          = ld_row
              ed_column       = ld_column.

ENDMETHOD.

METHOD handle_user_command.
* No need for type conversion
  RAISE EVENT user_command_received
    EXPORTING ed_user_command = e_ucomm.
ENDMETHOD.

 

The calling report will have to have a class which implements a method like this:-

  METHODS: handle_user_command FOR EVENT user_command_received
                               OF lcl_view
                               IMPORTING ed_user_command
                                         ed_row
                                         ed_column.

In the above code listing LCL_VIEW is the local class which inherits from our abstract custom subclass. The calling report can then decide what to do with the user input; it is no longer the view's responsibility. Typically this is picked up by a controller, which may or may not respond to the event, and then passes the event on to the model, which also may or may not respond.
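The wiring in the calling report might look like this. A sketch using 7.40 syntax; LCL_CONTROLLER is a hypothetical local class containing the handler method declared above:

```abap
DATA(lo_view)       = NEW lcl_view( ).
DATA(lo_controller) = NEW lcl_controller( ).

* From now on every user command raised by the view lands in the
* controller's handler method
SET HANDLER lo_controller->handle_user_command FOR lo_view.
```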

 

Future Proofing

 

As we have seen by using a common interface we can move a report from one UI technology to another by changing the bare minimum of code.

The logical extension of this is that if and when SAP come up with a successor to the SALV reporting framework, all we will need to do is create a new subclass which implements the generic view interface, with an implementation of each interface method to deal with the way the new technology handles that task. Then we could convert existing reports to the new technology at the drop of a hat, rather than having to rewrite them all.

 

What are you ON about?


Some of the above examples may not have made a lot of sense, as they refer to the SALV example of implementing the generic report interface that is in my good old book. Still, hopefully you get the idea, and of course I am open to answering any questions you may have. In addition, if some of the code seems a bit inconsistent, I am always open to suggestions as to how to improve it. That's what the SCN is all about, after all.....

 

To Be Continued


To end with the same paragraph as the prior blogs on book writing: in subsequent blogs I will continue to talk about the thought process behind how I chose which topics to include in the book, enriched with some content that was cut from the final version for reasons of space.


Cheersy Cheers


Paul


https://www.sap-press.com/abap-to-the-future_3680/

 

simple sap ABAP code to find USER-EXITS and BADI


Dear friends,

Here is the code to find user exits and BADIs for a given transaction code or program.

 

*&---------------------------------------------------------------------*
*& Report  ZSAM_USER_BADI
*&---------------------------------------------------------------------*

REPORT zsam_user_badi.

TABLES: tstc,
        tadir,
        modsapt,
        modact,
        trdir,
        tfdir,
        enlfdir,
        sxs_attrt,
        tstct.

DATA: jtab LIKE tadir OCCURS 0 WITH HEADER LINE.
DATA: field1(30).
DATA: v_devclass LIKE tadir-devclass.

PARAMETERS: p_tcode LIKE tstc-tcode,
            p_pgmna LIKE tstc-pgmna.

DATA wa_tadir TYPE tadir.

START-OF-SELECTION.
  IF NOT p_tcode IS INITIAL.
    SELECT SINGLE * FROM tstc WHERE tcode EQ p_tcode.
  ELSEIF NOT p_pgmna IS INITIAL.
    tstc-pgmna = p_pgmna.
  ENDIF.

  IF sy-subrc EQ 0.
    SELECT SINGLE * FROM tadir WHERE pgmid    = 'R3TR'
                                 AND object   = 'PROG'
                                 AND obj_name = tstc-pgmna.
    MOVE: tadir-devclass TO v_devclass.
    IF sy-subrc NE 0.
      SELECT SINGLE * FROM trdir WHERE name = tstc-pgmna.
      IF trdir-subc EQ 'F'.
        SELECT SINGLE * FROM tfdir WHERE pname = tstc-pgmna.
        SELECT SINGLE * FROM enlfdir WHERE funcname = tfdir-funcname.
        SELECT SINGLE * FROM tadir WHERE pgmid    = 'R3TR'
                                     AND object   = 'FUGR'
                                     AND obj_name EQ enlfdir-area.
        MOVE: tadir-devclass TO v_devclass.
      ENDIF.
    ENDIF.

    SELECT * FROM tadir INTO TABLE jtab WHERE pgmid = 'R3TR'
      AND object IN ('SMOD', 'SXSD') AND devclass = v_devclass.

    SELECT SINGLE * FROM tstct WHERE sprsl EQ sy-langu AND tcode EQ p_tcode.

    FORMAT COLOR COL_POSITIVE INTENSIFIED OFF.
    WRITE: /(19) 'Transaction Code - ', 20(20) p_tcode, 45(50) tstct-ttext.
    SKIP.

    IF NOT jtab[] IS INITIAL.
      WRITE: /(105) sy-uline.
      FORMAT COLOR COL_HEADING INTENSIFIED ON.

* Sorting the internal table
      SORT jtab BY object.

      DATA: wf_txt(60)     TYPE c,
            wf_smod        TYPE i,
            wf_badi        TYPE i,
            wf_object2(30) TYPE c.
      CLEAR: wf_smod, wf_badi, wf_object2.

* Get the total SMOD.
      LOOP AT jtab INTO wa_tadir.
        AT FIRST.
          FORMAT COLOR COL_HEADING INTENSIFIED ON.
          WRITE: /1 sy-vline, 2 'Enhancement/ Business Add-in',
                 41 sy-vline, 42 'Description', 105 sy-vline.
          WRITE: /(105) sy-uline.
        ENDAT.
        CLEAR wf_txt.
        AT NEW object.
          IF wa_tadir-object = 'SMOD'.
            wf_object2 = 'Enhancement'.
          ELSEIF wa_tadir-object = 'SXSD'.
            wf_object2 = ' Business Add-in'.
          ENDIF.
          FORMAT COLOR COL_GROUP INTENSIFIED ON.
          WRITE: /1 sy-vline,
                  2 wf_object2, 105 sy-vline.
        ENDAT.
        CASE wa_tadir-object.
          WHEN 'SMOD'.
            wf_smod = wf_smod + 1.
            SELECT SINGLE modtext INTO wf_txt FROM modsapt
              WHERE sprsl = sy-langu AND name = wa_tadir-obj_name.
            FORMAT COLOR COL_NORMAL INTENSIFIED OFF.
          WHEN 'SXSD'.
* For BADIs
            wf_badi = wf_badi + 1.
            SELECT SINGLE text INTO wf_txt FROM sxs_attrt
              WHERE sprsl = sy-langu AND exit_name = wa_tadir-obj_name.
            FORMAT COLOR COL_NORMAL INTENSIFIED ON.
        ENDCASE.

        WRITE: /1 sy-vline, 2 wa_tadir-obj_name HOTSPOT ON,
               41 sy-vline, 42 wf_txt, 105 sy-vline.
        AT END OF object.
          WRITE: /(105) sy-uline.
        ENDAT.

      ENDLOOP.
      WRITE: /(105) sy-uline.
      SKIP.
      FORMAT COLOR COL_TOTAL INTENSIFIED ON.
      WRITE: / 'No.of Exits:', wf_smod.
      WRITE: / 'No.of BADis:', wf_badi.
    ELSE.
      FORMAT COLOR COL_NEGATIVE INTENSIFIED ON.
      WRITE: /(105) 'No userexits or BADis exist'.
    ENDIF.
  ELSE.
    FORMAT COLOR COL_NEGATIVE INTENSIFIED ON.
    WRITE: /(105) 'Transaction does not exist'.
  ENDIF.

AT LINE-SELECTION.
  DATA: wf_object TYPE tadir-object.
  CLEAR wf_object.

  GET CURSOR FIELD field1.
  CHECK field1(8) EQ 'WA_TADIR'.
  READ TABLE jtab WITH KEY obj_name = sy-lisel+1(20).
  MOVE jtab-object TO wf_object.
  CASE wf_object.
    WHEN 'SMOD'.
      SET PARAMETER ID 'MON' FIELD sy-lisel+1(10).
      CALL TRANSACTION 'SMOD' AND SKIP FIRST SCREEN.
    WHEN 'SXSD'.
      SET PARAMETER ID 'EXN' FIELD sy-lisel+1(20).
      CALL TRANSACTION 'SE18' AND SKIP FIRST SCREEN.
  ENDCASE.

Capture1.PNG

 

Capture2.PNG

Attach File to Standard Transaction VA02 Using Function Module SO_DOCUMENT_REPOSITORY_MANAGER


Scenario:

 

Many times there is a business requirement to link documents, enter notes, send notes or link an internet address to various SAP objects. These external attachments can be reference documents, pictures, email attachments, designs, diagrams or related spreadsheets. To meet this requirement SAP provides the Generic Object Services (GOS) toolbar.

 

Recently I came across a requirement where I had to create an attachment for an existing sales order (VA02) through a report, so I created a custom report for it.

 

Using this report you can attach a wide range of documents, such as Word documents, Excel sheets, PDFs and text files, including pictures.

 

Go to transaction SE38 and create the report.

 

Here is the Source Code.

 

*Declarations

*Structure Declarations
TYPES: BEGIN OF ty_table,        "Structure for filename
         fname(128) TYPE c,
       END OF ty_table.

*Data Declarations
DATA: w_prog        TYPE sy-repid,   "Current program name
      w_dynnr       TYPE sy-dynnr,   "Current dynpro number
      w_attachement TYPE borident,   "Work area for BOR object identifier
      ws_borident   TYPE borident,   "Work area for BOR object identifier
      w_document    TYPE sood4,      "Interface for send screen and MOM
      folder_id     TYPE soodk,      "Definition of object key
      w_h_data      TYPE sood2,      "Object definition work area
      w_fol_data    TYPE sofm2,      "Folder contents work area
      w_rec_data    TYPE soos6,      "Transfer information of folder work area
      ws_files      TYPE ty_table,
      wt_files      TYPE TABLE OF ty_table.

SELECTION-SCREEN BEGIN OF BLOCK b1 WITH FRAME TITLE text-001.
*Parameter Declarations
PARAMETERS: p_mandt TYPE sy-mandt,                                "Client number
            p_vbeln TYPE vbeln,                                   "Sales order number
            p_path  TYPE ibipparms-path MEMORY ID ad_local_path,  "File path
            p_name(30).                                           "Name of attachment
SELECTION-SCREEN END OF BLOCK b1.

*Initialization Event
INITIALIZATION.
  w_prog  = sy-repid.
  w_dynnr = sy-dynnr.

*/ Selection screen for file path selection
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_path.

*/ F4 help for file selection
  CALL FUNCTION 'F4_FILENAME'
    EXPORTING
      program_name  = w_prog
      dynpro_number = w_dynnr
      field_name    = 'P_PATH'
    IMPORTING
      file_name     = p_path.

*Start of Selection Event
START-OF-SELECTION.

*/ Client validations
  IF sy-mandt NE p_mandt.
    WRITE 'Mandt Error'.
    EXIT.
  ENDIF.

*/ Assign object keys to the BOR structure
  ws_borident-objkey  = p_vbeln.    "Sales order number
  ws_borident-objtype = 'BUS2032'.  "Object type (sales order BOR object)

*/ Filename assigned to the structure
  ws_files-fname = p_path.          "Path
  APPEND ws_files TO wt_files.

*/ Folder root
  CALL FUNCTION 'SO_FOLDER_ROOT_ID_GET'
    EXPORTING
      region    = 'B'
    IMPORTING
      folder_id = folder_id
    EXCEPTIONS
      OTHERS    = 1.

*/ Append data to the MOM structure
  w_document-foltp  = folder_id-objtp.
  w_document-folyr  = folder_id-objyr.
  w_document-folno  = folder_id-objno.
  w_document-objdes = p_name.       "Name of file
  w_document-objnam = p_name.       "Name of file

*/ Attachment filename assignment
  w_h_data-objdes = p_name.         "Name of file

*/ Use this function module to read the file from the presentation server
  CALL FUNCTION 'SO_DOCUMENT_REPOSITORY_MANAGER'
    EXPORTING
      method       = 'IMPORTFROMPC'
      ref_document = w_document
    TABLES
      files        = wt_files
    CHANGING
      document     = w_document
      header_data  = w_h_data
      folmem_data  = w_fol_data
      receive_data = w_rec_data.

*/ File creation OK codes
  IF w_document-okcode = 'CREA' OR w_document-okcode = 'CHNG'.
    w_attachement-objtype = 'MESSAGE'.
    w_attachement-objkey  = w_document(34).

    CALL FUNCTION 'BINARY_RELATION_CREATE_COMMIT'
      EXPORTING
        obj_rolea      = ws_borident
        obj_roleb      = w_attachement
        relationtype   = 'ATTA'
      EXCEPTIONS
        no_model       = 1
        internal_error = 2
        unknown        = 3
        OTHERS         = 4.

*/ Error handling
    IF sy-subrc <> 0.
      MESSAGE ID sy-msgid TYPE sy-msgty NUMBER sy-msgno
        WITH sy-msgv1 sy-msgv2 sy-msgv3 sy-msgv4.
    ENDIF.

  ENDIF.

 

Before Execution:


Go to transaction VA03 (Sales Order Display) in the system, as shown in the screenshot below.

 

1.png

No attachment exists for this sales order yet. Now execute the report; the following selection screen will be displayed.


2.png

 

Enter the client number, sales order number, file path and file name, as shown below.

 

In this example I have chosen a text document as the attachment.

 

3.png

 

Clicking the Execute button creates the sales order attachment.


4.png

Testing:


Now go to transaction VA03 and check whether the attachment has been added.

 

5.png

 

Attachment is added.


Table Storage:

 

The content of the file is converted to binary and stored in tables such as SOOD and SRGBTBREL.
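A quick way to verify the link from the ABAP side is to read SRGBTBREL directly. A sketch (the field names are those of the standard table; the selection values match what the report wrote, though note INSTID_A may require the order number in its internal, zero-padded form):

```abap
DATA: lt_links TYPE STANDARD TABLE OF srgbtbrel.

SELECT * FROM srgbtbrel INTO TABLE lt_links
  WHERE instid_a = p_vbeln       " key of the sales order
    AND typeid_a = 'BUS2032'     " BOR object type used above
    AND reltype  = 'ATTA'.       " attachment relation

IF lt_links[] IS NOT INITIAL.
  WRITE: / 'Attachment link exists for sales order', p_vbeln.
ENDIF.
```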

 

 

The SOOD table stores the information about the attached object.

 

6.png

 

The SRGBTBREL table stores the relationships in the GOS environment.


7.png

I hope this blog helps many ABAPers with attaching files.

 

Thanks & Regards,

Harikrishna M.
