Channel: ABAP Development

Antifragile software


 

Before proceeding further, I have a confession to make - this has mostly nothing to do with ABAP development and it even spans other areas of SAP. For the simulation of fault-tolerant systems, I used non-SAP software. However, since it concerns software development, and in the SAP space what better subspace than ABAP to get the opinions of developers, I'm putting it in ABAP Development. Hopefully it will be of some use.


I recently read "Antifragile" by Nassim Nicholas Taleb and it kept me gripped until my eyes were hurting. It is a very good read, even though I may not agree with all his notions. Taleb coined the term 'antifragile' because there was no English word for what he wanted to express, though there is a mathematical term for it - long convexity.

 

Taleb categorizes objects into the following triad:

 

- Fragile: This is something that doesn't like volatility. An example would be a package of wine glasses you're sending to a friend.

 

- Robust: This is the normal condition of most of the products we expect to work; it merely tolerates volatility. Examples include our bodies and computer systems.

 

- Antifragile: These gain from volatility. Their performance thrives when confronted with volatility.

 

Here volatility means an event that induces stress. If the fragile loses from volatility and the robust merely tolerates adverse conditions, the object that gains from volatility is antifragile. Our own bodies are healthier over time with non-linear exposure to temperature and food. Our immune systems become better when attacked by disease. And, as is now obvious in Australia, small naturally occurring fires prevent bigger fires. Spider webs are able to resist hurricane winds: a single thread breaks, allowing the rest of the web to remain unharmed.

 

Taleb's book mostly considers the notions of fragility and antifragility in biological, medical, economic, and political systems. He doesn't talk about how this can apply to software systems, but there are some valuable lessons we can draw for them. Failures can result from a variety of causes - mistakes are made, and software bugs can hibernate for a long time before showing up. As these failures are not predictable, the risk and uncertainty in any system increase with time. In some ways, the problem is like Taleb's turkey fed by the butcher: for a thousand days the butcher feeds the turkey, and each day the turkey feels that, statistically, the butcher will never hurt him. In fact, the confidence is highest just before Thanksgiving.

 

Traditionally we have designed software systems to be robust, expecting them to work under all conditions. This is becoming more challenging as software grows more complex and the number of components increases. We use technology stacks at ever higher levels of abstraction. Further, with the onset of the cloud, there might be parts that are not even under your direct control. Your office might be safe, but what happens if the data centers where the data and applications reside get hit by the proverbial truck?

 

We try to prove the correctness of a system through rigorous analysis using models and lots of testing. However, neither is ever sufficient, and as a result some bugs always show up in production - especially while interacting with other systems.

 

For designing many systems, we often look at nature - nature is efficient and wouldn't waste any resources. At the same time, it has antifragility built in: when we exercise, we're temporarily putting stress on the body. Consequently, the body overshoots in its prediction of the next stressful condition, and we become stronger. If you lift 100 kg, your body prepares itself to lift 102 kg next time.

 

We spend a great deal of effort making a system robust, but not much making it antifragile. The rough equivalent of antifragile in common language is resilience: an attribute of a system that enables it to deal with failure in a way that doesn't cause the entire system to fail. There are two ways to increase resilience in systems.

 

a) Create fault-tolerant applications: The following classical best practices aid this goal.

 

     - Focus is better than features: Keep classes small and focused - they should be created for a specific task and should do it well. If you see extraneous new features being added, it's better to create separate classes for them.

 

- Simplicity is better than anything: Keep the design simple. It may be fun to do dynamic programming with ABAP RTTI / Java reflection, but if it's not required, don't do it.

 

    - Keep high cohesion and loose coupling: If the application is tightly coupled, making a change is highly risky. It also makes the code harder to understand, because code becomes confusing when it tries to do two things at the same time (e.g. data access and business logic together): any change to the business logic then has to rip through the data access parts. As an example, whenever the system needs to communicate with an external system (say you're sending messages via an ABAP proxy to PI or some other external system), keep the sending part in a V2 update. You don't want to block the main business transaction processing or hang on to locks. If the receiving system is slow or unavailable, this ensures that your main business document processing doesn't get affected.
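As a sketch of this decoupling idea - the function module name and its parameter are hypothetical, and the module would have to be classified in SE37 as an update module with low priority (V2) start:

```abap
" Record the send as a low-priority (V2) update call. The call is only
" registered here; it executes in the update task after COMMIT WORK,
" outside the dialog transaction and the locks it holds.
CALL FUNCTION 'Z_SEND_MESSAGE_TO_PI' IN UPDATE TASK
  EXPORTING
    is_message = ls_message.

" The business document is saved first; the send runs afterwards, so a
" slow or unavailable receiver cannot block this transaction.
COMMIT WORK.
```
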

 

And keeping fault tolerance in mind, the following ideas can help.

 

- While putting any new code in production, make it unpluggable in case things go wrong.

 

- Create tools to deal with the scenarios in which things do go wrong. Taking the example scenario where we're unable to send messages because the external system is down or can't keep up with the throughput, we should have transactions that can identify these messages and resend them.

 

Replica Sets and Sharding: As developers we may not have to worry too much about building fault-tolerant infrastructure, but it's helpful to know the following concepts.

 

- Replica Sets: Create a set of replication nodes for redundancy. If the primary node fails, a secondary is promoted to primary. For instance, in a three-node scenario we can have a primary where all the writes happen (in green) and secondaries (in red) that are updated asynchronously. In case the primary fails, one of the secondaries becomes the primary. There are further variations in which reads can be delegated to secondaries when freshness of data is not a concern (e.g. writes to a data set happen very rarely, or at times when the likelihood of the application requiring the data is very small).

 

repl-set.png

 

 

 

For the simulation, I created a replica set and made the primary node fail. This is how things look when everything is going smoothly: DB writes are issued and the callbacks confirm that each write is successful.

 

 

normal.png

Now I made the primary node fail, so that a secondary becomes the primary. We keep issuing inserts, but as the secondary takes some time to become primary, the writes are cached in the DB driver until the failover completes and the callbacks confirm the updates.

 

failover.png

 

Sharding: This is horizontal partitioning of data - i.e. the data set is divided across multiple servers, or shards.

Vertical scaling, by contrast, adds more resources to a single node, which is disproportionately more expensive than using several smaller systems.

sharda.png

 

And sharding and replica sets can be combined.

 

shardb.png

 

    Integration: Here again, some very simple things help a lot.

 

    - Keep the communication asynchronous: while designing integration, always assume that the different parts will go down, and identify the steps needed to contain the impact. It's similar to the earlier example of the primary node failing.

 

    - In queuing scenarios, move bad messages to an error queue. Thankfully this feature has been added to SAP PI as of 7.3x.

 

However, there is a class of errors that we're still susceptible to: anything with a single point of failure. And these could be things external to your application - your firewall configuration, for example.

 

Digital circuits achieve fault tolerance with some form of redundancy. An example is triple modular redundancy (TMR).

 

TMR.png

 

 

The majority gate is a simple AND-OR circuit: if the inputs to the majority gate are denoted by x, y and z, then the output of the majority gate is xy + yz + zx. In essence we have three distinct pipelines, and the result is decided by majority voting.
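As a small sketch in ABAP (7.40 syntax; the variable names are mine), the gate is the OR of the pairwise ANDs:

```abap
" Three redundant 0/1 results from three independent pipelines.
DATA: lv_x TYPE i VALUE 1,
      lv_y TYPE i VALUE 0,
      lv_z TYPE i VALUE 1.

" Majority gate: xy + yz + zx is >= 1 exactly when at least two
" of the three inputs are 1.
DATA(lv_vote) = COND i(
  WHEN lv_x * lv_y + lv_y * lv_z + lv_z * lv_x >= 1
  THEN 1
  ELSE 0 ).
```
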

 

Application integration with an ESB is definitely better than point-to-point communication, but it is susceptible to single-node failure. Maybe we need a more resilient integration system?

 

b) Regularly induce failures to reduce uncertainty: Thinking of fault tolerance in the design certainly helps, but there will always be categories of problems that come with no warning. Further, the damage is greater when a service breaks down once in five years than when it fails every two weeks. Hence the assertion: by making a system fail constantly, the impact of failure can be minimized. 'DR tests' in enterprises are an effort in that direction. But what if we don't want the failure to be like a fire drill? In fact, most failures in the future are going to be the ones we can't predict. Companies like Netflix are already using this strategy. They have their own Simian Army with different kinds of monkeys: Chaos Monkey shuts down virtual instances in the production environment - instances that are serving live traffic to customers. Chaos Gorilla can bring down an entire data center, and Chaos Kong an entire region. Then there is Latency Monkey, which injects additional latency - a much more difficult problem to deal with.

         

                                       

                                             Mobile Development and Antifragile

My experience with mobile development covers only the last couple of years, but there are some distinct patterns I can see here. The languages, frameworks and technologies are fun, but the broader points that emerge are:

 

- Being antifragile is a feature: Users expect the application to perform even under volatile conditions - bad or low internet connectivity. We went into the application with a lot of features and then cut down to make it more performant - this was the most critical feature.

 

- There are parallels between antifragile and agile development: Agile processes have short iterations, test-driven design and rapid prototyping - all acknowledging that failure is inherent and that the best way out is to recognize it and learn from it to make corrections. In some ways, agile is more honest than the traditional methods, in which we think we can calculate all the future estimates, know all the risks and know what is going to fail. The emphasis is on failure as a source of strength rather than something to be hidden in the hope that it will never be discovered.

 

                                                                Cloud and Antifragile

I have very limited experience with cloud, and none of it in production: AWS for developer machines, SAP's HANA trial for the openSAP course, and another provider for trying out some other technologies. I can see two big benefits:

 

- It's easier to try out new iterations, making the software development process more agile.

- If a component fails, it's easier to replace it.

 

                                                                Thinking of Failure


Coming back to the original question of what is most useful: it is the notion of failure. An antifragile way of developing software does require a shift in thinking, though. Some of the more important shifts are:

 

- Seeing 'bugs' differently: A bug should be seen as how the system functions under a certain situation, with the emphasis on what we can learn from it.

 

- Adopting a 'blameless' culture: This follows from the law of unintended consequences. If we create incentives for people to come across as perfect and never failing, we end up annihilating change, sometimes slowing down to the point where we cannot make even much-needed changes.


These were some of my thoughts. Like any way of thinking, it may not be an elixir, but there are valuable lessons in being antifragile.


Where to find if LSMW recording exists for a Tcode


On my quest to find whether any LSMW recording exists for a chart of accounts, I researched the Internet and came across two tables.

 

 

/SAPDMC/LSGBDCA: This table stores the project, the recording, and the recording's transaction code.

 

/SAPDMC/LSOREC: This table stores the project and the individual objects in the recording.

 

We could write an ALV report for listing the details from these two tables.

 

Create a parameter where you can input the TCODE.

 

With that parameter, check whether the tcode is used in any recording from /SAPDMC/LSGBDCA.

 

Then, if successful, check whether the recordings found are assigned to any objects from /SAPDMC/LSOREC.

 

Use function module /SAPDMC/LSM_OBJ_STARTER to start the LSMW.

 

Check the field mappings to see whether it is suitable for your requirements.
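Putting the steps together, a minimal report could look like the sketch below. The field names of the /SAPDMC/* tables (TCODE, PROJECT) are assumptions based on the description above - verify them in SE11 before use:

```abap
REPORT zfind_lsmw_recording.

PARAMETERS p_tcode TYPE sy-tcode OBLIGATORY.

DATA: lt_rec TYPE STANDARD TABLE OF /sapdmc/lsgbdca,
      lt_obj TYPE STANDARD TABLE OF /sapdmc/lsorec.

" 1. Recordings that replay the given transaction code.
"    The selection field TCODE is an assumption - check SE11.
SELECT * FROM /sapdmc/lsgbdca INTO TABLE lt_rec
  WHERE tcode = p_tcode.

IF lt_rec IS INITIAL.
  WRITE: / 'No LSMW recording found for', p_tcode.
ELSE.
  " 2. Objects to which the found recordings are assigned.
  SELECT * FROM /sapdmc/lsorec INTO TABLE lt_obj
    FOR ALL ENTRIES IN lt_rec
    WHERE project = lt_rec-project.
  " 3. Display the result via ALV (e.g. cl_salv_table), or start
  "    the LSMW with function module /SAPDMC/LSM_OBJ_STARTER.
ENDIF.
```
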

how do you usually upload picture in SCN? A workaround for current SCN upload issue


Hello friends,

recently there has been a known issue in SCN where you cannot upload pictures from your local laptop when you write blogs (the insert-image button is disabled).
Currently the SCN team is investigating the issue.

http://farm4.staticflickr.com/3714/11186669724_827e399dde_o.png

 

So I have to insert the picture from the web.

http://farm4.staticflickr.com/3825/11186625575_c60730fb4f_o.png

 

I have to upload the pictures to http://www.flickr.com and paste the picture url to SCN blog.

Unfortunately, on the flickr website we can only obtain a URL like http://www.flickr.com/photos/53067560@N00/2658147888/in/set-72157606175084388/. Once opened, you will find the page contains both the uploaded picture itself and the frame of the flickr website. Such a URL cannot be used in SCN.

If we right-click on the uploaded picture, we can see a couple of picture-size choices. Clicking on "Original" means we would like to view the picture at its original size.

http://farm8.staticflickr.com/7398/11186669604_170270d4b0_o.png

 

In the new window, right-click on the picture and choose the menu item "Properties"; there you can get the static URL, which can be used in an SCN blog.

http://farm4.staticflickr.com/3794/11186669214_2b46d711d2_o.png

If you have lots of pictures to upload, these inefficient operations will drive you mad. So I finally wrote a small tool to do the URL conversion automatically.
It is written in Java but could easily be rewritten in ABAP.

First, paste all the uploaded picture URLs into a text file on your laptop, like below:

http://farm6.staticflickr.com/5525/11186672226_7aee893efd_o.png

 

http://farm6.staticflickr.com/5500/11186625055_54b7989b09_o.png

 

Import the attached Java file into your Eclipse and execute it; it will convert the URLs for you:

http://farm4.staticflickr.com/3735/11186779023_7936503806_o.png

You need to create an application on the flickr website; then you can get your own application key.

 

http://farm4.staticflickr.com/3733/11186669024_4b54d5b925_o.png

Just fill the application key into the Java class URLFetcher and that's all.

http://farm3.staticflickr.com/2872/11186624915_c57aca8e21_o.png

Still, I feel it is not as convenient as local picture uploading. What is your favourite way to upload pictures when you write a blog?

Progress Bar - avoid timeout dump


A progress bar is used to display the progress of a process. Sometimes a program takes so long to execute that it results in a timeout dump.

 

A progress bar can be used to prevent a timeout dump.

 

1.jpg

 

 

Use of the Function Module SAPGUI_PROGRESS_INDICATOR

 

 

Example after a Select statement:

 

2.jpg

 

3.jpg

 

 

 

Example in a Loop statement:

 

4.jpg
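In case the screenshot does not render, the loop variant can be sketched as follows (7.40 syntax; the table name and message text are only illustrative):

```abap
DATA: lv_pct  TYPE i,
      lv_text TYPE string.

" lt_orders: any internal table whose lines are being processed.
LOOP AT lt_orders INTO DATA(ls_order).
  lv_pct  = sy-tabix * 100 / lines( lt_orders ).
  lv_text = |Processing record { sy-tabix } of { lines( lt_orders ) }|.

  " Sends a progress message to the GUI status bar; the round trip to
  " the frontend is what keeps the long-running step from ending in a
  " timeout dump, per the premise of this post.
  CALL FUNCTION 'SAPGUI_PROGRESS_INDICATOR'
    EXPORTING
      percentage = lv_pct
      text       = lv_text.

  " ... actual processing of ls_order ...
ENDLOOP.
```
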

 

 

 

Result:

 

5.jpg

 

 

6.jpg

 

7.jpg

 

 

Please find sample code in the attachment.

Performance Tuning in BI Routines - ABAP Programming


  This blog gives you a detailed description of the performance tuning needed in ABAP routines (transformations) and hints for writing optimized code.


  ABAP Routines – Deployment in Transformation

 

  1. Characteristic or Field Routine
    • Not preferred, as it executes for each and every field.
  2. Expert Routine
    • Not preferred, as it requires ABAP coding for the entire transformation.
  3. Start Routine
    • Preferred, based on the requirement (used mostly when changes need to be made at source-package level).
  4. End Routine
    • Preferred, based on the requirement (used mostly when changes need to be made at result-package level).

Types of declarations:-

  1. Global Declaration
    • Declare data here only when it is required in more than one routine (start/field/end).
    • Once populated, it is carried across the routines.
    • It should be cleared (using CLEAR/REFRESH) when no longer needed.

   2.  Declaration in Routines

    • Only used within the specific routine.
    • Data cannot be transferred to other routines.
    • Data will be cleared at the end of the routine.

Declaration of data fields:

Structures –

    • Always try to declare data fields with reference to TYPES definitions instead of ad-hoc DATA declarations.
    • Avoid the TABLES statement for declaring internal tables and structures.
    • Use only the required fields for an internal table instead of the entire DB table.

Example:

TYPES: BEGIN OF ty_structure,
         field1 TYPE <data_element1>,
         field2 TYPE <data_element2>,
       END OF ty_structure.

DATA: it_table1 TYPE TABLE OF ty_structure,  " declaration of internal table
      wa_table1 TYPE ty_structure.           " declaration of work area


Field Symbols –

    • Field symbols are placeholders and do not physically reserve space.

    Syntax: FIELD-SYMBOLS <fs> TYPE <data object>.

    • A field symbol must be assigned a data object before it is used.
    • Addressing a field symbol addresses the field assigned to it.
    • When used as a work area for an internal table (ASSIGNING), changes apply directly to the table line, equivalent to a MODIFY statement.

Example:

FIELD-SYMBOLS <fs_wa> TYPE ty_structure.

READ TABLE it_table1 ASSIGNING <fs_wa> INDEX 1.
IF sy-subrc = 0.
  <fs_wa>-field2 = <fs_wa>-field1 + <fs_wa>-field2.
ENDIF.

ASSIGN wa_table1 TO <fs_wa>.


Internal Tables –

    • Type of Internal Tables should be based on the handling of data.
    • HASHED Tables are preferred for handling huge volume of data. Only unique entries can be loaded to it.
    • STANDARD Tables should be used when INDEX operations are required which is not possible in HASHED tables.
    • When multiple entries for the same key (Header and Detail Records) are required for processing STANDARD Tables should be used.

Example:  

DATA: it_table1_h TYPE HASHED TABLE OF ty_structure WITH UNIQUE KEY field1,
      it_table1   TYPE STANDARD TABLE OF ty_structure.

    • Read an entry from a STANDARD table with BINARY SEARCH; the table must be sorted beforehand by the fields of the condition.
    • Read an entry from a HASHED table with all its declared key fields, using the keyword TABLE KEY; no sorting is needed.
    • Check SY-SUBRC whenever an internal table is read using READ TABLE.

Example:  

READ TABLE it_table1_h ASSIGNING <fs_wa> WITH TABLE KEY field1 = '100'.

SORT it_table1 BY field1 ASCENDING.
READ TABLE it_table1 ASSIGNING <fs_wa> WITH KEY field1 = '100' BINARY SEARCH.


Nested Loops – Performance Killer

    • Nested loops should be avoided at all costs; they can be replaced by the parallel cursor technique.
    • The outer internal table can be of any type, but the inner internal table should be a STANDARD table, sorted beforehand by the fields of the where condition.
    • Any number of nested loops can be avoided by using the parallel cursor technique.

Parallel Cursor Concept

    • Loop over the inner internal table starting from an index located beforehand.
    • Use a READ TABLE ... BINARY SEARCH to locate that index instead of looping over both tables.


Example:      

DATA: l_tabix TYPE sy-tabix.
FIELD-SYMBOLS: <fs_wa>  TYPE ty_structure,
               <fs_wa2> TYPE ty_structure.

LOOP AT it_table1_h ASSIGNING <fs_wa>.
  READ TABLE it_table1 TRANSPORTING NO FIELDS
       WITH KEY field1 = <fs_wa>-field1 BINARY SEARCH.
  IF sy-subrc = 0.
    l_tabix = sy-tabix.
    LOOP AT it_table1 ASSIGNING <fs_wa2> FROM l_tabix.
      IF <fs_wa2>-field1 <> <fs_wa>-field1.
        EXIT.
      ENDIF.
      " ... <code for processing> ...
    ENDLOOP.
  ENDIF.
ENDLOOP.


Points to remember while using SELECT statements:
      • Fetch only the required fields from the DB table.
      • List the selection fields in the same order as in the DB table.
      • Avoid INTO CORRESPONDING FIELDS OF TABLE.
      • Restrict the selection to the required entries; use FOR ALL ENTRIES.
      • Entries in the internal table used for FOR ALL ENTRIES should be unique.
      • Fields in the WHERE condition should be primary key fields of the DB table, in the same order as in the DB.
      • Avoid conditions like <, >, <>, LIKE, IN in the WHERE condition.
      • Do not use SELECT statements inside LOOP statements.
      • Avoid SELECT ... ENDSELECT; use SELECT ... INTO TABLE instead to reduce the load on the database server.
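The FOR ALL ENTRIES points above can be sketched as follows; the table and field names (VBAP, VBELN, etc.) are only illustrative:

```abap
" The driver table must be deduplicated, and - critically - not empty:
" with an empty FOR ALL ENTRIES table the WHERE clause is ignored and
" the SELECT reads the entire database table.
SORT lt_keys BY vbeln.
DELETE ADJACENT DUPLICATES FROM lt_keys COMPARING vbeln.

IF lt_keys IS NOT INITIAL.
  SELECT vbeln posnr matnr          " only the required fields
    FROM vbap
    INTO TABLE lt_vbap
    FOR ALL ENTRIES IN lt_keys
    WHERE vbeln = lt_keys-vbeln.    " key field, in key order
ENDIF.
```
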


Use the transaction code SE30 – ABAP Runtime Analysis to check the performance of the program.

How to solve the problem of different length for the same field


Scenario

 

In table LFBK, the field BANKN is of length 18.

In table TIBAN, the field BANKN is of length 35.

 

The problem arises when we have a For All Entries like below:

 

  SELECT *
FROM tiban
INTO TABLE lt_tiban
FOR ALL ENTRIES IN lt_lfbk
WHERE banks EQ lt_lfbk-banks
AND   bankl EQ lt_lfbk-bankl
AND   bankn EQ lt_lfbk-bankn
AND   bkont EQ lt_lfbk-bkont.

 

The highlighted line (bankn) will result in an error because the size of TIBAN-BANKN is bigger than LFBK-BANKN.

 

 

Solution

 

A solution would be to create a new structure and add a new field with length 35:

 

1. Create a type. 


TYPES: BEGIN OF ty_lfbk_new.
        INCLUDE TYPE lfbk.
TYPES: bankn35 TYPE tiban-bankn,
       END OF ty_lfbk_new.

 

 

2. Create a new structure and table with the new type.


DATA: ls_lfbk_new TYPE ty_lfbk_new,
      lt_lfbk_new TYPE STANDARD TABLE OF ty_lfbk_new.

 

 

3. Fill the new field BANKN35 with the data from LS_LFBK-BANKN.

 

LOOP AT lt_lfbk INTO ls_lfbk.
  ls_lfbk_new = ls_lfbk.
  ls_lfbk_new-bankn35 = ls_lfbk-bankn.
  APPEND ls_lfbk_new TO lt_lfbk_new.
ENDLOOP.

 

 

4. In the Select statement, use the new table LT_LFBK_NEW for FOR ALL ENTRIES.

 

SELECT *
FROM tiban
INTO TABLE lt_tiban
FOR ALL ENTRIES IN lt_lfbk_new
WHERE banks EQ lt_lfbk_new-banks
AND   bankl EQ lt_lfbk_new-bankl
AND   bankn EQ lt_lfbk_new-bankn35
AND   bkont EQ lt_lfbk_new-bkont.

Theory about online language switching (without log off/log on)


Hello,

here I want to share some knowledge and hints on how an online language switch can be implemented.

  1. SET LOCALE LANGUAGE <spras> changes the current language of the current internal session; see help.sap.com/abapdocu_702/en/abapset_locale.htm.
  2. Transactions called by function module TH_REMOTE_TRANSACTION seem to belong to that internal session, so they use that language.
    1. Parameter DEST is not an RFC destination, but 'NONE' is accepted. If you want to execute the function on certain servers, you can use e.g. function TH_SERVER_LIST to get a list of servers (also by RFC).
    2. Using transaction ' ' just starts a session.
    3. 'SESSION_MANAGER' is also a transaction.
  3. LEAVE PROGRAM terminates the program.

 

 

 

So a code snippet to switch the language could look like this (use at your own risk / for evaluation only):


form SWITCH using FUW_SPRAS
                  FUW_TCODE.

   data: LW_TCODE type TSTC-TCODE.

   set locale language FUW_SPRAS.

   select single TCODE from TSTC into LW_TCODE
     where TCODE = FUW_TCODE.

   if not LW_TCODE is initial.
     call function 'AUTHORITY_CHECK_TCODE'
       exporting
         TCODE  = LW_TCODE
       exceptions
         OK     = 1
         NOT_OK = 2.
     if SY-SUBRC = 2.
       return.
     endif.
   endif.

   call function 'TH_REMOTE_TRANSACTION'
     exporting
       TCODE = LW_TCODE
       DEST  = 'NONE'.
   leave program.

endform.                    "SWITCH


 

-Jürgen-

Troubleshooting a Trfc production issue


It's been a long time since I wrote a blog. Through this one I am going to share the understanding of tRFCs I gained while handling a production issue.

Last week a strange ticket came in, with many process orders (COR2) stuck in SM58 (tRFC).


The first look at the log gave us the following information:

  1. The function module stuck in the queue is a custom one... custom code culprit.
  2. The status of the call is SYSFAIL.
  3. The message in the error text is "Process order ABCD locked".


A screenshot of table ARFCSSTATE is attached below.


arfc.png

Misunderstanding 1


One of our biggest misconceptions was that tRFC function calls are started as a background task in update mode even while the locks acquired by SAP have not yet been released.

 

Correct Understanding 1

 

The understanding was corrected when we found in update debugging that the aRFC calls are started as a new task at the end, once all locks have been released (cross-verified in SM12). tRFCs and qRFCs are executed last, via STARTING NEW TASK, after the transaction has released all its locks. So we wondered: if the locks are released, why do we still see lock errors?

 

ARFC call.png

So now it was time to check what was happening in our custom function module, which was called from an exit on save as a background task. While replicating the scenario in the test system, we found only one entry for our custom function module in SM58, with status Transaction Recorded, which is correct, as it was yet to be executed.

 

Everything looked fine: transaction locks released and only one function module running as a background task, so where was the issue coming from? We decided to think in another direction. Even if there is an error, a background job should retry these failed entries. On checking the system we found no job scheduled for RSARFCEX to restart such LUWs. But we saw that many background jobs had been created for other errored tRFC entries in SM37. So we decided to look into this flip of the coin first to increase our understanding.

 

Misunderstanding 2

Why were the ARFC background jobs picking up other entries for reprocessing, but not ours?

 

arf_jobs.png

jOB nAME.png

Correct Understanding 2

The next thing we stumbled upon was the different ARFC* jobs created for program RSARFCSE in SM37 to process SM58 entries. Why was this not happening in our case? Searching a bit, we found a nicely documented SAP KBA: 1902003 - Many ARFC* jobs in SM37 and many "Error when opening an RFC connection" in SM58 at the same time.

We learned that the system creates these jobs automatically only for entries that failed with connection issues, via the settings maintained for the RFC destination in SM59.

 

Quick fix (temporary solution, root cause still unknown)

Stuck with no solution in sight, we could either execute those LUWs via report RSARFCEX, or do some more custom development, such as storing the entries in a custom table and processing them later via a custom batch job.

 

Good luck prevails - we finally found the root cause


Until now we had been stuck, with only a temporary solution (not a good one) in mind. The only question was: what could be locking the process order? Brainstorming with one of my colleagues, it came down to two possible causes:

 

  1. Users locking the process order by going back in the transaction while our tRFC calls are being processed. We knew the chances of this were very rare.
  2. Some other process running in parallel blocking it - maybe some custom development that does not trigger in our development system.

 

So we decided to drill down into the second option. We checked whether any custom tables were being updated at the same time (this did the trick). We stumbled upon our savior, the SAP table DBTABLOG, which keeps a change log for custom tables when table logging is active. We found that entries were being created in one of the custom tables at the same time. Was our culprit FM updating it? No. Bingo - something else was also doing its part in the background, which we had not been able to replicate in the test system.

 

DBTABLOG

 

A where-used list on the table revealed another custom FM triggered in the same exit, also as a background task like ours. On taking a detailed look at this other FM, we found that it acquired locks on the process order and contained a lot of WAIT statements. From our initial learning we knew that tRFCs are called after all locks have been released, in parallel via STARTING NEW TASK. The second Z FM had not been triggering in our tests because some conditions were not satisfied before the call. We simply triggered this second FM in the test client as well, and we were able to replicate the issue.

 

Proposed Solution

  1. Since both function modules need to be called, call them from one wrapper FM, one after the other. No locking issue.
  2. Convert the tRFC call into a qRFC call using a queue, which has its own pros and cons.

 

The solution is yet to be finalized; we will share it once we have started. Meanwhile, please share any valuable ideas you have implemented for similar kinds of problems.

 

Takeaway from this issue

  1. tRFCs and qRFCs are called after the locks have been released. This means we can use them to perform further operations on the same object (PO/PR etc.).
  2. Why background jobs are not created for some failed tRFCs (they are created automatically only for connection errors, per the SM59 settings).
  3. All function modules called IN BACKGROUND TASK remain in status Transaction Recorded if you are in update debugging mode and checking SM58, with the exception of those called in update task (they will eventually get this status).

 

Please add your valuable opinions/comments. Let's share, let's learn.

 

 

The common syntax of a tRFC call is:


CALL FUNCTION func IN BACKGROUND TASK DESTINATION dest.


New ABAP feature in 740: LET expression


A LET expression defines variables var1, var2, ... or field symbols <fs1>, <fs2>, ... as local auxiliary fields of an expression and assigns values to them. Once declared, the auxiliary fields can be used in the operand positions of the expression. An auxiliary field cannot be accessed statically outside its expression.

 

See example below:

 

1. In lines 25 and 26 we define two auxiliary fields, date and sep, with the keyword LET; they are used in the expression in line 27.

2. In line 27 the keyword IN closes the LET part and introduces the expression in which the auxiliary fields are used.

3. Finally the value of the expression is calculated and assigned to the inline variable isodate defined in line 24. We use CONV string to explicitly give the inline variable isodate the type string.

clipboard1.png
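In text form, the example from the screenshot looks roughly like this (a sketch reconstructed from the description above; sy-datum is assumed as the date source):

```abap
" Build an ISO date string from sy-datum using LET auxiliary fields.
DATA(isodate) = CONV string(
  LET date = sy-datum
      sep  = '-'
  IN  date(4) && sep && date+4(2) && sep && date+6(2) ).
" e.g. '20131213' becomes '2013-12-13'
```

Note that date and sep exist only inside the CONV expression; referring to them after the statement is a syntax error.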

 


Execution result:

clipboard2.png

Another example:

 

The second example defines three local auxiliary variables, x, y, and z, in a constructor expression that constructs the value of a structure. The values of the auxiliary variables are used for the structure components.

clipboard3.png
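A textual sketch of this second example (component names, types, and values are assumptions based on the description, not the exact code from the screenshot):

```abap
TYPES: BEGIN OF t_struct,
         x TYPE i,
         y TYPE i,
         z TYPE i,
       END OF t_struct.

" Each auxiliary field may refer to the ones declared before it.
DATA(ls_struct) = VALUE t_struct(
  LET x = 1
      y = x + 1
      z = y + 2
  IN  x = x
      y = y
      z = z ).
```

On the left of each assignment after IN stands the structure component; on the right, the LET auxiliary field of the same name.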

Execution result:

clipboard4.png

FSCM - Adding Custom Fields to Formula Editor


This blog explains how to add custom attributes/fields, created through the Business Data Toolset (BDT) on additional custom tabs of BP General Data and BP Credit Segment Data, to the Formula Editor for new credit scoring formulas.

 

BP Transaction: UKM_BP

 

Ways to add custom attributes / fields to Formula Editor: (SAP Provided Documentation)

With BAdI: Formula Parameters and Functions (UKM_EV_FORMULA), we can enhance the field and function selection of the formula editor.

You can either integrate your own fields that you have defined with BAdI: Additional Attributes for Business Partner (UKM_BP_ADD_FIELDS), or integrate fields from the following business partner structures:

  • UKM_S_BP_CMS
  • BP1010
  • BAPIBUS1006_ADDRESS
  • BAPIBUS1006_CENTRAL
  • BAPIBUS1006_CENTRAL_GROUP
  • BAPIBUS1006_CENTRAL_ORGAN
  • BAPIBUS1006_CENTRAL_PERSON

  BAdI methods:

  • ADD_FIELDS
  • FILL_FIELD

 

SAP Reference IMG for BAdI “UKM_EV_FORMULA”:

 

Financial Supply Chain Management -> Credit Management -> Credit Risk Monitoring -> Enhancements -> BAdI: Formula Parameters and Functions

This post, however, explains how to add custom fields to the Formula Editor instead of fields from the BP structures mentioned above.

Define Formulas:

 

SAP Reference IMG to Define Formulas:

 

Financial Supply Chain Management -> Credit Management -> Credit Risk Monitoring -> Define Formulas

 

After defining a new formula (e.g. ZSCORE), click the "Formula Editor" button to define the required formulas for the custom fields.

 

Sample Code Snippet to add fields to Formula Editor:

Implement the BAdI “UKM_EV_FORMULA” with the code below in method “ADD_FIELDS”.

METHOD if_ex_ukm_ev_formula~add_fields.

  CONSTANTS: lc_empty  TYPE sfbefsym VALUE '',
             lc_bp_gen TYPE sfbefsym VALUE 'ZTABLE'.

  DATA: wa_operands TYPE sfbeoprnd.

  CASE i_key.
    WHEN lc_empty.
      CLEAR wa_operands.
      wa_operands-tech_name = 'ZTABLE'.            " Custom table
      wa_operands-descriptn = 'Table Description'.
      APPEND wa_operands TO ct_operands.

    WHEN lc_bp_gen.
      CLEAR wa_operands.
      wa_operands-tech_name = 'ZTABLE-ZFIELD1'.    " Custom table field
      wa_operands-descriptn = 'Z-Field1 Description'.
      wa_operands-type      = 'CHAR20'.
      APPEND wa_operands TO ct_operands.

      CLEAR wa_operands.
      wa_operands-tech_name = 'ZTABLE-ZFIELD2'.    " Custom table field
      wa_operands-descriptn = 'Z-Field2 Description'.
      wa_operands-type      = 'INT4'.
      APPEND wa_operands TO ct_operands.

    WHEN OTHERS.
  ENDCASE.

ENDMETHOD.

Sample Code Snippet to populate fields for Formula Calculations:

Implement the BAdI “UKM_EV_FORMULA” with the code below in method “FILL_FIELD”.

METHOD if_ex_ukm_ev_formula~fill_field.

  DATA: ls_but000 TYPE but000.
  DATA: lv_field1 TYPE ztable-zfield1,
        lv_field2 TYPE ztable-zfield2.
  DATA: dref1 TYPE REF TO data,
        dref2 TYPE REF TO data.

  FIELD-SYMBOLS: <fs_field1> TYPE any,
                 <fs_field2> TYPE any.

  CLEAR ls_but000.

* Read the partner from the BP screen
  CALL FUNCTION 'BUP_BUPA_BUT000_GET'
    IMPORTING
      e_but000 = ls_but000.

  " Field names must match the tech names registered in ADD_FIELDS
  CASE i_fieldname.
    WHEN 'ZTABLE-ZFIELD1'.
      CREATE DATA dref1 TYPE ztable-zfield1.
      ASSIGN dref1->* TO <fs_field1>.

      SELECT SINGLE zfield1
        INTO lv_field1
        FROM ztable
        WHERE partner = ls_but000-partner.
      IF sy-subrc = 0.
        <fs_field1> = lv_field1.
        GET REFERENCE OF <fs_field1> INTO rd_result.
      ENDIF.

    WHEN 'ZTABLE-ZFIELD2'.
      CREATE DATA dref2 TYPE ztable-zfield2.
      ASSIGN dref2->* TO <fs_field2>.

      SELECT SINGLE zfield2
        INTO lv_field2
        FROM ztable
        WHERE partner = ls_but000-partner.
      IF sy-subrc = 0.
        <fs_field2> = lv_field2.
        GET REFERENCE OF <fs_field2> INTO rd_result.
      ENDIF.

    WHEN OTHERS.
  ENDCASE.

ENDMETHOD.

 

Results:

 

Custom fields have been added to Formula Editor and required Formulas.

 

Transaction UKM_BP -> Select BP -> Select BP Role “SAP Credit Management” -> Go to “Credit Profile” tab -> Select the Rule -> Click on button “Calc. with Formula”.

 

Score will be calculated based on the field values populated from respective DB tables. We can toggle to Rule Evaluation to see the detailed scoring criteria.

 

Thanks

Gangadhar

ABAP Mesh in 740: Connect your internal table as BO node association


ABAP Mesh is also a new feature in 740. Let's use an example to demonstrate how it works:

 

I have defined two types, for developers and managers. The developer type has a field manager which points to the developer's manager, while the manager type has no reference to the employees he manages.

 

 

 

TYPES: BEGIN OF t_manager,
         name   TYPE char10,
         salary TYPE int4,
       END OF t_manager,
       tt_manager TYPE SORTED TABLE OF t_manager WITH UNIQUE KEY name.

TYPES: BEGIN OF t_developer,
         name    TYPE char10,
         salary  TYPE int4,
         manager TYPE char10,
       END OF t_developer,
       tt_developer TYPE SORTED TABLE OF t_developer WITH UNIQUE KEY name.

 

 

I also use another new feature, inline data declaration, to fill the developer and manager tables. So far nothing special.

 

DATA: lt_developer TYPE tt_developer,
      lt_manager   TYPE tt_manager.

DATA(Jerry)  = VALUE t_developer( name = 'Jerry' salary = 1000 manager = 'Jason' ).
DATA(Tom)    = VALUE t_developer( name = 'Tom' salary = 2000 manager = 'Jason' ).
DATA(Bob)    = VALUE t_developer( name = 'Bob' salary = 2100 manager = 'Jason' ).
DATA(Jack)   = VALUE t_developer( name = 'Jack' salary = 1000 manager = 'Thomas' ).
DATA(David)  = VALUE t_developer( name = 'David' salary = 2000 manager = 'Thomas' ).
DATA(John)   = VALUE t_developer( name = 'John' salary = 2100 manager = 'Thomas' ).
DATA(Jason)  = VALUE t_manager( name = 'Jason' salary = 3000 ).
DATA(Thomas) = VALUE t_manager( name = 'Thomas' salary = 3200 ).

INSERT Jerry INTO TABLE lt_developer.
INSERT Tom INTO TABLE lt_developer.
INSERT Bob INTO TABLE lt_developer.
INSERT Jack INTO TABLE lt_developer.
INSERT David INTO TABLE lt_developer.
INSERT John INTO TABLE lt_developer.
INSERT Jason INTO TABLE lt_manager.
INSERT Thomas INTO TABLE lt_manager.

Now I define an ABAP mesh t_team with two components, managers and developers. With the association my_employee I connect the internal table managers to developers, so that I can easily find all developers of a given manager. The association my_manager enables the connection in the opposite direction: finding the manager of a given developer.

clipboard1.png
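In text form, the mesh definition and the association accesses look roughly like this (a sketch reconstructed from the description above; the screenshot remains authoritative for the exact code):

```abap
" In the ON condition, the left side is a component of the target node,
" the right side a component of the source node.
TYPES: BEGIN OF MESH t_team,
         managers   TYPE tt_manager
           ASSOCIATION my_employee TO developers ON manager = name,
         developers TYPE tt_developer
           ASSOCIATION my_manager TO managers ON name = manager,
       END OF MESH t_team.

DATA ls_team TYPE t_team.
ls_team-managers   = lt_manager.
ls_team-developers = lt_developer.

" Jerry's manager, via the backward association:
DATA(ls_manager) =
  ls_team-developers\my_manager[ ls_team-developers[ name = 'Jerry' ] ].

" All developers reporting to Thomas, via the forward association:
LOOP AT ls_team-managers\my_employee[ ls_team-managers[ name = 'Thomas' ] ]
     INTO DATA(ls_developer).
  " ...
ENDLOOP.
```

The first pair of brackets after an association contains the start line of the mesh path.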


You can compare how I find Jerry's manager and all developers whose manager is Thomas, using the new ABAP mesh versus the traditional way.

clipboard2.png

The results are exactly the same.

clipboard3.png

New Open SQL Enhancement in 740


The following Open SQL statement looks a little weird; however, it really works in 7.40.

clipboard1.png

1. The field names of my structure ty_my_sflight differ from the fields defined in SFLIGHT, so in the SQL statement I use the format <field in DB table> AS <field in my own structure> to move the content from the database into the corresponding fields of my internal table.

 

2. I want to calculate the percentage of occupied seats and put the result into my field my_seatrate. Now I can push this calculation down to the database layer instead of calculating it on the ABAP side.

 

3. The logic to determine the flight price shows that we can put some application logic into the Open SQL statement.

 

4. Since we are using the enhanced Open SQL syntax of 7.40, all host variables defined in the application code must be escaped with the prefix "@" when used in the SQL statement, as shown in lines 28 and 33.
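For reference, a statement of this kind might look roughly as follows (field aliases and the discount logic are assumptions based on the description, not the exact code in the screenshot; whether each expression is accepted depends on your exact 7.40 SP level):

```abap
" SFLIGHT is the SAP demo table; lv_discount is an assumed host variable.
DATA lv_discount TYPE i VALUE 20.

SELECT carrid AS my_carrid,
       connid AS my_connid,
       fldate AS my_fldate,
       seatsocc * 100 / seatsmax AS my_seatrate,   " pushed to the DB layer
       CASE fldate
         WHEN '20130213' THEN price - price * @lv_discount / 100
         WHEN '20130313' THEN price - price * @lv_discount / 100
         ELSE price
       END AS my_price
  FROM sflight
  INTO TABLE @DATA(lt_flight).
```

Note the comma-separated select list and the "@" prefix on every host variable, both of which the new syntax requires.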

 

The original data displayed in SE16:

clipboard2.png


The content of the internal table lt_flight is listed below. We can observe that the price for 2013-02-13 and 2013-03-13 is reduced correctly, as is the seat occupation percentage.

clipboard3.png

Bitmap Processing : Stitching Images Horizontally


Two images can be stitched horizontally to create a bigger image.

The image processing is done at byte level, and utilizes Thomas Jung's ZCL_ABAP_BITMAP class.

Below image summarizes what is being done.

 

 

Prerequisites

Thomas Jung's ABAP Bitmap Image Processing Class (ZCL_ABAP_BITMAP) , available as SAPLink nugget.

Any Hex Editor to view image files, because the processing is done on hex string at byte level.

Code snippet can be written in Thomas Jung's test program ZABAP_BITMAP_TEST, or a new method can be created in ZCL_ABAP_BITMAP

 

Summary

Class ZCL_ABAP_BITMAP already has several single image processing options such as rotation, flipping, inversion etc.

For the 2 input images, 2 instances are created using method CREATE_FROM_EXT_FORMAT.

The bitmap headers are retrieved using method GET_HEADER_INFORMATION.

Individual image pixels are read using method GET_PIXEL_VALUE.

After creating hex stream that contains stitched image, method CREATE_FROM_BYTESTREAM is used to create image object.

Method DISPLAY_IN_SAPGUI is used to display final image.

 

Understanding of bitmap structure using small images

Although everything is well explained in ABAP Bitmap Image Processing Class and BMP file format - Wikipedia, it took some effort to understand how a bitmap image is structured.

So I created a very small image of resolution 3x4 using Microsoft Paint. (24-bit uncompressed bitmap format).

This is how the 3x4 image looks at 32x zoom.

 

After debugging the constructor and rotate methods on this image, I learned that:

  • A bitmap stream can be divided into 2 parts: header and pixel array.
  • The bitmap header contains size, height, width, and pixel array byte information. The Wikipedia article shows the exact offset at which each piece of information is stored.
  • The bitmap pixel array starts at an offset maintained in the header; for a 24-bit bitmap, 3 bytes per pixel are used.
  • A 3x4 image has 4 rows, each row having 3 pixels (12 pixels in total).
  • A row (3 pixels) requires 9 bytes, but the bitmap uses 12 bytes per row, because the row size in bytes is rounded up to the next multiple of 4.
  • 1 row = 12 bytes = 3 pixels * 3 bytes per pixel + 3 (padding with dummy data)
  • Since padding determines the exact bytes occupied, rotating an image can change the pixel array size.
  • For example, the 3x4 image's pixel array size is 48 { 4 * (3 * 3 + 3 padding) }, whereas the 4x3 image's pixel array size is 36 { 3 * (4 * 3 + 0 padding) }.
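The row-size arithmetic above can be written directly in ABAP (DIV is integer division, which here rounds the byte count up to the next multiple of 4):

```abap
DATA(lv_width)      = 3.
DATA(lv_height)     = 4.
" 3 bytes per pixel, row padded to a multiple of 4 bytes
DATA(lv_row_bytes)  = ( ( lv_width * 3 + 3 ) DIV 4 ) * 4.  " 12 bytes per row
DATA(lv_array_size) = lv_row_bytes * lv_height.            " 48 bytes total
```

For the rotated 4x3 image the same formula gives ( ( 4 * 3 + 3 ) DIV 4 ) * 4 = 12 bytes per row and 36 bytes in total, matching the bullet above.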

 

 

Hex mode comparison of input and output image bitmap header

I created two 3x4 input images, and stitched them manually to create 6x4 output image.

All 3 images were seen in hex mode to figure out the logic.

Below are the images at 32x zoom, and their hex mode view.

This is the pseudo code for stitching images horizontally, deduced by comparing the hex views.

  • Ensure the heights of the input images are the same.
  • Copy the header of the first image.
  • Since the images are joined horizontally edge to edge, the height does not need to be changed.
  • The new width is the sum of the widths of the input images.
  • The pixel array offset remains unchanged for a 24-bit uncompressed bitmap.
  • The size of the pixel array needs to be calculated for the (width1+width2) x (common height) image, taking padding bytes into consideration.
  • The pixel array needs to be constructed by putting together the pixels of image1, image2, and padding bytes.
  • The hex stream of the output image can be created by concatenating the new header and the new pixel array.
  • A new instance of ZCL_ABAP_BITMAP can be created from this hex content, so that the output can be shown in SAP GUI.
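The pixel-array construction can be sketched roughly as follows (GET_PIXEL_VALUE follows the method name in the text, but its exact signature in ZCL_ABAP_BITMAP may differ; header handling and the lv_* inputs are assumed):

```abap
DATA lv_pixels TYPE xstring.   " new pixel array
DATA lv_px     TYPE xstring.   " one 3-byte pixel
CONSTANTS lc_pad TYPE x VALUE '00'.

DATA(lv_new_width) = lv_width1 + lv_width2.
" Padding per output row: row length rounded up to a multiple of 4 bytes
DATA(lv_pad_len) = ( ( lv_new_width * 3 + 3 ) DIV 4 ) * 4 - lv_new_width * 3.

DO lv_height TIMES.
  DATA(lv_y) = sy-index.
  " Pixels of the left image, then the right image, for this row
  DO lv_width1 TIMES.
    lv_px = lo_bmp1->get_pixel_value( x = sy-index y = lv_y ).
    CONCATENATE lv_pixels lv_px INTO lv_pixels IN BYTE MODE.
  ENDDO.
  DO lv_width2 TIMES.
    lv_px = lo_bmp2->get_pixel_value( x = sy-index y = lv_y ).
    CONCATENATE lv_pixels lv_px INTO lv_pixels IN BYTE MODE.
  ENDDO.
  " Pad the row to a multiple of 4 bytes
  DO lv_pad_len TIMES.
    CONCATENATE lv_pixels lc_pad INTO lv_pixels IN BYTE MODE.
  ENDDO.
ENDDO.
```

The adjusted header and lv_pixels are then concatenated and passed to CREATE_FROM_BYTESTREAM.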

 

 

Stitching images vertically

For stitching images vertically, similar logic applies.

It should be relatively simpler, because in this case the width (and, by extension, the padding) does not change.

So we can directly concatenate the pixel arrays of the input images to get the output pixel array.

The size is the direct sum of the header, pixel array 1, and pixel array 2.

A simple example of using ABAP regular expression: Should I use it at all in this case?


Currently I am working on the project to enable CRM system with social media integration.

We need to extract all social posts with different social channels into CRM system like twitter, facebook and sina weibo etc.

The details about the integration can be found in my blog.

 

Sina Weibo is a very popular social media channel in China with more than half a billion users.

 

Users can create Weibo posts with multiple pictures uploaded from their local laptops.

clipboard1.png

clipboard2.png

I found that in the JSON string, two thumbnail picture URLs are available; however, only one original URL exists. This means I need to populate the original URL of the second picture based on its thumbnail URL http://ww3.sinaimg.cn/thumbnail/d19bb9dfgw1ebdk3zp82mj20bi07omxx.jpg.

 

The population logic should be:

 

original picture URL of picture 1: http://A/B/1.jpg - url1

thumbnail URL of picture 2: http://C/D/2.jpg - url2

 

The calculated original URL has the format <large picture URL prefix from url1>/<file name from url2>, i.e. http://A/B/2.jpg

 

 

 

I compared the normal way with the way using a regular expression. The normal way uses only 6 lines, while the regular expression version uses 11 lines.

In this particular case, it seems the regular expression does not show its power. Which one do you prefer? Please let me know your thoughts.

 

REPORT ztestreg1.

DATA: lv_origin_url1 TYPE string VALUE 'http://ww2.sinaimg.cn/large/d19bb9dfgw1ebdk3zbk3rj20ch0a5jrv.jpg',
      lv_thumbnail_2 TYPE string VALUE 'http://ww3.sinaimg.cn/thumbnail/d19bb9dfgw1ebdk3zp82mj20bi07omxx.jpg',
      lv_result1     TYPE string,
      lv_result2     TYPE string.

" Normal solution
SPLIT lv_thumbnail_2 AT '/' INTO TABLE DATA(lt_temp).
READ TABLE lt_temp ASSIGNING FIELD-SYMBOL(<file_name>) INDEX lines( lt_temp ).
FIND ALL OCCURRENCES OF '/' IN lv_origin_url1 RESULTS DATA(lt_match_result).
READ TABLE lt_match_result ASSIGNING FIELD-SYMBOL(<last_match>) INDEX lines( lt_match_result ).
DATA(lv_origin_prefix) = lv_origin_url1+0(<last_match>-offset).
lv_result1 = lv_origin_prefix && '/' && <file_name>.

" Regular expression solution
DATA(reg_pattern) = '(http://)([\.\w]+)/(\w+)/([\.\w]+)'.
DATA(lo_regex)    = NEW cl_abap_regex( pattern = reg_pattern ).
DATA(lo_matcher)  = lo_regex->create_matcher( text = lv_thumbnail_2 ).
CHECK lo_matcher->match( ) = abap_true.
DATA(lt_reg_match_result) = lo_matcher->find_all( ).
READ TABLE lt_reg_match_result ASSIGNING FIELD-SYMBOL(<reg_entry>) INDEX 1.
READ TABLE <reg_entry>-submatches ASSIGNING FIELD-SYMBOL(<match>) INDEX lines( <reg_entry>-submatches ).
DATA(file_name) = lv_thumbnail_2+<match>-offset(<match>-length).
DATA(lv_new) = |$1$2/$3/{ file_name }|.
lv_result2 = lv_origin_url1.
REPLACE ALL OCCURRENCES OF REGEX reg_pattern IN lv_result2 WITH lv_new.
ASSERT lv_result2 = lv_result1.

 

An example of AMDP( ABAP Managed Database Procedure ) in 740


ABAP Managed Database Procedures (AMDP) are a new feature available in AS ABAP 7.40, SP05, which enables you to manage and call stored procedures (database procedures) from AS ABAP. An ABAP Managed Database Procedure is a procedure written in a database-specific language (Native SQL, SQLScript, ...) and implemented in an AMDP method of an AMDP class.

 

 

I use a simple example to show how it works. In line 21 a check is done against the database type. Since I will demonstrate managing and calling a stored procedure written in SQLScript on HANA, the sample must be running on a HANA DB.

http://farm6.staticflickr.com/5473/11303408906_88f9f419c6_o.png

http://farm3.staticflickr.com/2809/11303432254_5a6ef6e018_o.png

Then in line 26 a popup is displayed that allows the user to enter by how much the price should be increased. In line 33, the price is increased by calling the HANA stored procedure, with the help of the AMDP class CL_DEMO_AMDP.

 

An AMDP class that handles HANA artifacts must implement the marker interface IF_AMDP_MARKER_HDB.

http://farm4.staticflickr.com/3832/11303376995_9ce72c0e7b_o.png

An AMDP method must specify the database type (HDB) and the language (SQLSCRIPT) using the keywords shown below.

The ":" in lines 4 and 5 is SQLScript-specific syntax: importing parameters must carry it as a prefix when referenced in the procedure body.
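Put together, an AMDP class of this kind can be sketched as follows (class, method, and parameter names here are illustrative assumptions, not the exact code from the screenshots):

```abap
CLASS zcl_demo_amdp DEFINITION PUBLIC FINAL CREATE PUBLIC.
  PUBLIC SECTION.
    " Marker interface that flags the class as an AMDP class for HANA
    INTERFACES if_amdp_marker_hdb.
    " AMDP parameters must be passed by value
    METHODS increase_price
      IMPORTING VALUE(iv_carrid)   TYPE s_carr_id
                VALUE(iv_increase) TYPE s_price.
ENDCLASS.

CLASS zcl_demo_amdp IMPLEMENTATION.
  " Database type and language are declared in the method header;
  " all database objects used must be listed after USING.
  METHOD increase_price BY DATABASE PROCEDURE
                        FOR HDB LANGUAGE SQLSCRIPT
                        USING sflight.
    -- Inside the SQLScript body, importing parameters carry the ':' prefix
    update sflight set price = price + :iv_increase
      where carrid = :iv_carrid;
  ENDMETHOD.
ENDCLASS.
```

Note that client handling is not automatic inside an AMDP body and has to be taken care of explicitly.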

http://farm8.staticflickr.com/7372/11303376945_36b91f586f_o.png

An important point: the AMDP class cannot be developed in SAP GUI, only in the ABAP Development Tools (i.e. ABAP in Eclipse).

http://farm6.staticflickr.com/5506/11303418906_be85370fe6_o.png

http://farm3.staticflickr.com/2844/11303408616_6ab2c8d047_o.png

Every AMDP method corresponds to a stored procedure which you can find in HANA Studio. You can also find it via ST05:

http://farm8.staticflickr.com/7314/11303432054_ff70a15540_o.png


A small tip to find all classes which are registered to a given event - And how I find this tip via SM50


I would like to get a list of all classes which have methods registered to the event NEW_FOCUS of CL_BSP_WD_COLLECTION_WRAPPER.

 

When I try to use the "where-used list", I run into a timeout exception. It seems the number of hits is too large.

Also, I only want the classes following a naming convention, for example CL_BSP_WD<XXXX>. However, the "where-used list" does not allow me to specify any filters.

http://farm8.staticflickr.com/7395/11306057494_806372ba8b_o.png

Then I try the Repository Information System in SE80:

http://farm6.staticflickr.com/5521/11306104493_0a3eaeb335_o.png

Unfortunately it just returns the class which defines that event.

http://farm3.staticflickr.com/2836/11305995385_d7da8ea6dc_o.png

I know I can use ST05 to find the transparent table which stores the definitions of class methods. However, there is a quicker way.

In transaction SM50 I found the long-running process for the where-used-list function. It clearly shows that the process is reading the view VSEOCOMPDF.

 

http://farm3.staticflickr.com/2821/11306057334_8f13a398a4_o.png

Have a look at this view in SE11:

http://farm6.staticflickr.com/5497/11306104303_499c27f352_o.png  

Then I try with table SEOCOMPODF:

http://farm4.staticflickr.com/3712/11306057214_e47db85d03_o.png

Then I got the list

http://farm8.staticflickr.com/7303/11306057264_04efc8250e_o.png

class documentation generator


The ABAP Class Documentation Generator works like the well-known JavaDoc tool for documenting Java classes. It is available in software component BBPCRM (starting with CRM 7.0).

 

Good documentation will
1. make handovers smoother and ease know-how transfer
2. help customers, e.g. as a BAdI implementation reference

 

How to use:

1. Documentation on class level: create a dummy private instance method following the naming convention CLASS_DOCU. Then place all class-level documentation before the first line of that method's implementation code.

http://farm6.staticflickr.com/5516/11336530844_488dacc2b7_o.png

 

2. In the method implementation source code, write comments in the well-known JavaDoc style before the first line of the method's source code, as below.

http://farm6.staticflickr.com/5494/11336530824_f6cf0159a9_o.png

3. After you finish all documentation, use transaction CLASS_DOCUGEN to generate it:

 

http://farm6.staticflickr.com/5528/11336569563_ef198d42af_o.png

http://farm6.staticflickr.com/5480/11336505366_0ab98162d4_o.png

4. In the Class Builder, click the button to view the generated documentation:

http://farm4.staticflickr.com/3749/11336569513_40b609c5fc_o.png

http://farm6.staticflickr.com/5522/11336505236_7d5e989231_o.png

 

http://farm6.staticflickr.com/5476/11336530594_40b37a5b6b_o.png

ABAP in jEdit - Offline ABAP Editor


Do you also keep your ABAP snippets in some kind of ASCII files, using Notepad or other editors, to look them up later? If you look up your ABAP code while offline, everything is displayed in one color (mainly black).

 

ABAP in Eclipse cannot be used to display ABAP code offline, because you are not connected to an ABAP project. That is because the pretty printing is done by the ABAP back end; only that way can ABAP in Eclipse keep track of different ABAP versions.

 

 

So I was very happy to find out that with jEdit, a programmer's editor written in Java, I can get keyword highlighting as well as folding and indentation for statement blocks.

 

 

Everything is controlled by a so-called mode. This is an XML file which tells the editor what the keywords are and how folding and indentation are done. Nathan Jones started with a small configuration; I added the keywords for ABAP 7.31 as well as folding and indentation.

 

The first picture shows typical jEdit coloring. If you want ABAP coloring, you have to adjust that manually in the settings, because the colors themselves cannot be controlled by the mode.

 

 

Finally you have to edit the catalog file, which connects the file extension with the mode. I have chosen .abap as well as the ABAP in Eclipse extension .asinc; you can add your own. Put the abap.xml file in the modes directory, e.g. c:/programs/jedit/modes, where you will also find the catalog.xml that you have to edit.

 

What do you think? Is this useful for you? What is your experience with this solution? Do you know other editors for displaying/editing ABAP?

Environment sensitive Job Spawning


Inspiration

For faster results, we often execute programs in parallel using background jobs. Even though this might be good for processing efficiency, it often results in high utilization of system resources and non-availability of background work processes for other key custom operations.

 

Solution

The technique below provides a controlled, conservative approach to multiple/parallel job generation. With it, we can set a maximum utilization percentage and thus always reserve some work processes for other operations. Attributes like the utilization threshold and the delay can be input as a parameter/select-option in the program, or maintained in selection variable table TVARVC or a custom table; to keep it simple, I use constants in the program.

 

The key function used in this method is TH_GET_WPINFO, which returns details of the active work processes in the system. It takes two optional parameters, SRVNAME (application server) and WITH_CPU. If job processing has to be limited to a specific server, these parameters can be used to limit the query, and the jobs can later be submitted to the specific server group; if no server restrictions are needed, they can be left blank. The function returns the list of work processes in the system, similar to the SM50 screen (except that SM50 shows only the work processes of the server the user is logged on to). From this information we can determine the total and available work processes and the background work process utilization level. Based on the percentage, we either generate the next job or wait a predefined amount of time and try again.

 

Assumptions

Utilization threshold : 60%

No. of jobs to be executed : 10

Wait before retry if utilization is too high : 10 seconds

 

Source Code

REPORT zcons_bgd_proc.

CONSTANTS:
  " Assuming the utilization limit to be 60%
  c_limit TYPE p DECIMALS 2 VALUE 60.

DATA:
  " Assuming a total of 10 jobs need to be generated
  v_jobs  TYPE i VALUE 10,
  " Assuming a wait time of 10 seconds to be inserted
  " if utilization is above the limit
  v_delay TYPE i VALUE 10.

DATA:
  lt_wplist TYPE TABLE OF wpinfo,
  v_total   TYPE i,
  v_utilz   TYPE i,
  v_uperc   TYPE p DECIMALS 2.

DO.
  CALL FUNCTION 'TH_GET_WPINFO'
*   EXPORTING
*     srvname    =       " Application server, optional
*     with_cpu   =       " CPU, optional
    TABLES
      wplist     = lt_wplist
    EXCEPTIONS
      send_error = 1
      OTHERS     = 2.
  IF sy-subrc = 0.

    " Retain details of background work processes only
    DELETE lt_wplist WHERE wp_itype NE 4.
    DESCRIBE TABLE lt_wplist LINES v_total.

    " Delete the work processes in waiting state
    DELETE lt_wplist WHERE wp_istatus = 2.
    DESCRIBE TABLE lt_wplist LINES v_utilz.
    v_uperc = v_utilz * 100 / v_total.

    " If utilization is within the set threshold, proceed
    IF v_uperc LE c_limit.
      " <Generate background job>
      v_jobs = v_jobs - 1.
      IF v_jobs = 0.
        EXIT.
      ENDIF.

    " If utilization crosses the threshold, wait
    ELSE.
      WAIT UP TO v_delay SECONDS.
    ENDIF.
  ENDIF.
ENDDO.

 

Improvements

Please suggest alternate approaches you have used to handle similar situations at program level.

Creating Excel the Java way


Hi all,

 

I would like to share some code I developed.

 

There is a constant request to generate Excel reports from SAP.


As far as I know, there is no built-in support to do that from ABAP.


There is also the requirement to run the whole show in a background job.

The regular "solutions" are to create HTML files, CSV files etc.

 

There is the ABAP project http://wiki.scn.sap.com/wiki/display/ABAP/abap2xlsx, but not every
organization would like to go this way.


Since I use Java as a hobby, I thought I would try using Java to do the job.


The code here is supplied as is. Try it and see if it works for you.

 

Steps

  • Create XML using ABAP.
    The XML will contain metadata and instructions for the Java program.
  • Write the XML to a file.
  • Call Java as an external command.
  • The Java program will parse the XML file and generate the Excel file.

 

Open source projects used

Apache POI - http://poi.apache.org/ - the Java API for Microsoft Documents.
This is the main workhorse; it is used to generate the Excel files.
At the time of writing I did not utilize the full potential of this project. It is full of goodies
worth exploring (I plan to try the "Formula Support").


Apache Commons CLI - http://commons.apache.org/proper/commons-cli -
parsing command line options.

 

Apache Commons Lang - http://commons.apache.org/proper/commons-lang/
StopWatch.


Required jars from the projects (Already included in XmlFileToExcel.jar )

commons-cli-1.2.jar
commons-lang3-3.1.jar
dom4j-1.6.1.jar
poi-3.9-20121203.jar
poi-ooxml-3.9-20121203.jar
poi-ooxml-schemas-3.9-20121203.jar
xmlbeans-2.3.0.jar


The environment
Java Editor - Eclipse -  http://www.eclipse.org/


The Java code is located here: https://drive.google.com/folderview?id=0B6Cb7sgVnODWNV9DS29kQXdPblE&usp=sharing

The reason for putting the code here is because of the file types involved.

(If someone can point me to a place within scn.sap.com I will be grateful...)


XmlFileToExcel.zip - The whole set of source code in one zip file; this way the directory structure of the Java project is preserved.


XmlFileToExcel.jar - This is the ".exe" equivalent for Java.
                     It contains all the required jars from the projects listed above.
                     The jar was created by Eclipse using the "Runnable JAR File Exporter";
                     class main.Main is the entry point of the program.
                     This file is actually a zip file with the extension "jar".

 

Y_R_EITAN_TEST_40_02.xml  - Sample XML created from SAP .

Y_R_EITAN_TEST_40_02.xlsx - Sample Excel generated by the java program .

            

Java Setup

 

  • Create, in a shared folder accessible from SAP, the following:
    • A folder named "jre" which will contain the Java Runtime Environment:
      - Install the Java JRE on your PC:
        http://www.oracle.com/technetwork/java/javase/downloads/jre7-downloads-1880261.html
      - Note the folder where you installed the JRE (usually C:\Program Files\Java\jre7).
      - Copy the folder "jre7" into the folder "jre".
    • A batch file named Y_JAVA_1.bat with the following content (note the "\bin"):

      path \\<path to jre>\bin
      java.exe -jar %1 %2 %3 %4 %5 %6 %7 %8 %9
    • Download XmlFileToExcel.jar and put it next to Y_JAVA_1.bat.

The folder will look like this:

capture_20131212_110900.png

 

 

SAP setup (Simple....)

Program (Y_R_EITAN_TEST_40_02, source included):

Upload the source to SAP and activate it.


The program uses table SBOOK as input (on our site it contains 100,000 records).
It reads the selected records (FORM get_data_1),
generates and writes the XML file (FORM write_1_to_xml),
and executes an external command to call Java (FORM write_2_to_excel).

 

 

External command

Use Transaction SM69 and add "Y_JAVA_1" as external command.

Y_JAVA_1 will call the batch file Y_JAVA_1.bat with the required parameters.

 

capture_20131212_094913.png


When you run the program you will be prompted for "My folder"; this also needs to be a shared folder accessible from SAP.

 

All paths need to be in \\host\directoryname (UNC, Universal Naming Convention) format.

 

That's all for now. Have fun!
