
ABAP Text Symbol Analysis Tool



SAP provides a built-in text symbol analysis tool.

 

This tool is used for the following purposes:

a) To delete text symbols that are not used in the report.

b) To create text symbols that need to be added to the text pool.

c) To correct text symbols that are defined repeatedly or differently.

 

Let's learn more about the ABAP Text Symbol Analysis Tool.

 

I have created a test report, Z_TEXTSYMBOL_TEST, to explain the tool, as shown below.


 

In this report you can see there are 10 texts: "Text1", "Text5", "Text6" and "Text9" with text IDs 001, 005, 006 and 009 respectively. However, "Text6" is defined differently in the text pool, as shown below:
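The report itself is only shown as a screenshot in the original post; a minimal sketch of how such text symbols typically appear in the ABAP source might look like this (the text IDs correspond to the ones described above):

REPORT z_textsymbol_test.

* Reference via the text pool only: text-001 prints "Text1" if maintained there.
WRITE: / text-001.

* Literal with text ID: the text pool entry is used if it exists,
* otherwise the literal acts as the fallback.
WRITE: / 'Text5'(005),
       / 'Text6'(006),   " defined differently in the text pool
       / 'Text9'(009).

* A text symbol such as 011 ("Unused Text") that is never referenced
* in the source code remains in the text pool as an unused entry.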



   


 

You can also notice that the text "Unused Text" with text ID 011 is not used in the report.

 

 

a) Delete unused text symbols from the text pool

     We can delete a text using the "Delete Row" button. However, for large reports it is difficult to find which text symbols are not used in the report.

     To delete unused text symbols from the text pool, go to SE38, type the report name (e.g. Z_TEXTSYMBOL_TEST), select the "Text elements" radio button and click on Change.


 

 

You can see the change text symbol screen. Now click on "Compare Text Symbols" as shown in the figure below, or press Ctrl+Shift+F7, to start the ABAP Text Symbol Analysis Tool.



 

You can see the ABAP Text Symbol Analysis screen as below.



You can also access this tool via transaction SE32, as shown below:

Go to transaction SE32 and click on "Analysis".

 

 

 

 

Now select the first radio button, i.e. "Text symbols that can be deleted from the text pool", and click on the "Edit" button.

 

 

You can see the screen below.

Now check the text symbols that you want to delete from the text pool and click on Delete Symbol. You will get the message "Text symbols may still be used elsewhere. Delete anyway?". Click on "Yes" to delete the selected texts.

 

 

If you select all the unused texts and delete them, you will see the screen below, which means there are now no unused text symbols left in the text pool.



 

 

b) Create text symbols that need to be added to the text pool

In the same way, select the second radio button "Text symbols that need to be added to the text pool" and click on "Edit". You will then see all the texts with text IDs, e.g. "Text1" (001), that are present in the report but not added to the text pool, as shown below.

 

 

Now check the texts that you want to add to the text pool and click on Create, as shown below.



 

After creating the selected texts in the text pool, you can see the screen below, which shows that there are no longer any texts with text IDs in the report that are not maintained in the text pool.

 

 

 

 

c) Correct text symbols that are defined repeatedly or differently

Select the third option "Text symbols defined repeatedly/differently in program" and click on Edit to see texts that are maintained differently in the report than in the text pool, as shown below:

 

 

You can see that for text ID 006 there are two texts. The unmarked text will be replaced with the marked text.

Here "Text6" is marked because we want to maintain "Text6" for text ID 006. Click on Replace as shown below to replace "Defined Differently in report" with "Text6".



 

Now you can see that all inconsistencies in the text pool have been corrected, as shown below.



 

Now click on Save to save the changes made to the text pool. If you go back, you can see the updated text pool as shown below:

 


Huge file reader now part of abap2xlsx package


 

This is a continuation of my blog A Way of Reading Huge Excel Files in which I described the usage of the sXML parser for an alternative Excel file reader with a massively reduced memory footprint. I followed the suggestion of some of my blog readers (Kay Streubel, Rainer Hübenthal) to add the code to the abap2xlsx repository at ivanfemia/abap2xlsx · GitHub. The abap2xlsx package now contains a subclass ZCL_EXCEL_READER_HUGE_FILE of the existing reader class ZCL_EXCEL_READER_2007.


I owe special thanks to Ivan Femia for helping me with the first steps into the abap2xlsx realm. :-)


Only the sheet data and shared string table are parsed differently in ZCL_EXCEL_READER_HUGE_FILE, as these usually are the heavy-load parts of an Excel file. The other parts - like style details - continue to be analyzed in the superclass and are therefore parsed with iXML parsers, as before.

 

How to Use It

 

The public interface - and therefore the usage - is the same as for ZCL_EXCEL_READER_2007. If you have an XSTRING with the .xlsx file content, you work with the LOAD( ) method; if you have a filename and want abap2xlsx to read it (from application or presentation server), you would use LOAD_FILE( ).

 

The test report ZZ_TEST_EXCEL_READER (code below) shows the idea.

 

 

It's Just Another Object Type

 

You can use the report to load an excel file from the presentation or application server, doing the parsing with the "standard" reader or with the new "huge file reader". As you see, the difference between the two reader versions is only in the CREATE OBJECT lo_reader TYPE ... statement.

 

Decision to Read Files From Presentation Server or Application Server

 

The possibility to decide explicitly between application server and presentation server is a new feature. Formerly, this distinction was made automatically, depending on the SY-BATCH flag. This behavior is still the default for the new optional import parameter I_FROM_APPLSERVER, but now the distinction can also be made explicitly by the caller.

 

It's good practice to have the reader as a local object in its own, separate code unit (here: a form routine). This way, all the memory that was necessary during the read process is freed when the Excel object has been built and the reading unit is left.

 

 

report zz_test_excel_reader.

parameters: p_file type string lower case default `C:\temp\test.xlsx`,
            p_appl type flag,
            p_huge type flag.

start-of-selection.
  perform start.

* ---
form start.
  data: lo_excel type ref to zcl_excel,
        lo_excpt type ref to zcx_excel.
  try.
      perform read using p_file p_appl p_huge
                   changing lo_excel.
      break-point. " <<<--- Inspect lo_excel here
* Parsed data are usually contained in the following ITAB:
* LO_EXCEL->WORKSHEETS->WORKSHEETS->COLLECTION[1]->SHEET_CONTENT
    catch zcx_excel into lo_excpt.
      message lo_excpt type 'I'.
  endtry.
endform.                    "start

* ---
form read using i_filename        type csequence
                i_from_applserver type flag
                i_huge            type flag
          changing e_excel        type ref to zcl_excel
          raising zcx_excel.
* Use the reader instance as a local variable in a separate unit,
* so its memory will be released after leaving the unit.
  data: lo_reader type ref to zif_excel_reader.
  if i_huge eq 'X'.
    create object lo_reader type zcl_excel_reader_huge_file.
  else.
    create object lo_reader type zcl_excel_reader_2007.
  endif.
  e_excel = lo_reader->load_file( i_filename        = i_filename
                                  i_from_applserver = i_from_applserver ).
endform.                    "read

 

The Main Loop

 

An Excel workbook is saved as a zip archive containing several XML files. The files sheet1.xml, sheet2.xml etc. contain the actual worksheet data, among other things like style definitions. Here is a typical example of a sheet.xml: As you see, the data start with a separate element named sheetData; in this example, the cells contain pointers to the shared string table (which is defined in another file sharedStrings.xml).

 

sheetData.png

 

The parser (we use "token-based sXML parsing", see ABAP Keyword Documentation) for this file can safely skip forward until it meets the opening of element sheetData. Once this is reached, it should read elements and their contents one after another, as follows:

  • When an opening c element is detected, a new cell data structure has to be filled from the element c's attribute values;
  • When a text content of an element is detected, this content should usually be treated as value of the current cell
    • in some cases, however, the value has to be fetched by index from the shared string table
  • When a closing c element is detected, the current cell data should be added into the ABAP worksheet representation
  • When a closing sheetData element is detected, the loop should be left.

 

And this is the above text written in ABAP:

 

method read_worksheet_data.
  data: ls_cell type t_cell.

* Skip to <sheetData> element
  skip_to( iv_element_name = `sheetData`  io_reader = io_reader ).

* Main loop: Evaluate the <c> elements and their children
  while io_reader->node_type ne c_end_of_stream.
    io_reader->next_node( ).
    case io_reader->node_type.
      when c_element_open.
        if io_reader->name eq `c`.
          ls_cell = fill_cell_from_attributes( io_reader ).
        endif.
      when c_node_value.
        case io_reader->name.
          when `f`.
            ls_cell-formula = io_reader->value.
          when `v`.
            if ls_cell-datatype eq `s`.
              ls_cell-value = get_shared_string( io_reader->value ).
            else.
              ls_cell-value = io_reader->value.
            endif.
          when `t` or `is`.
            ls_cell-value = io_reader->value.
        endcase.
      when c_element_close.
        case io_reader->name.
          when `c`.
            put_cell_to_worksheet( is_cell = ls_cell io_worksheet = io_worksheet ).
          when `sheetData`.
            exit.
        endcase.
    endcase.
  endwhile.
endmethod.

 

A caveat about the sXML parser is necessary: instance attributes like name and value are never re-initialized. The caller has to keep track of which values are outdated by looking at the three events "element open", "element close" and "node value".

 

When using the sXML reader, special caution is necessary for elements with mixed content. In the example,

 

<x><y>v</y>w</x>


the following events will be raised:


 

Event co_nt_element_open  name: x   value:
Event co_nt_element_open  name: y   value:
Event co_nt_value         name: y   value: v
Event co_nt_element_close name: y   value: v
Event co_nt_value         name: y   value: w
Event co_nt_element_close name: x   value: w
Event co_nt_final         name: x   value: w

 

Observe that when the event for value 'w' is raised, the name attribute still contains 'y', although y has already been closed and thus the name is an outdated value. The sXML parser doesn't keep track of this outdating - if it did, it would have to keep track of the document structure (for example by maintaining a stack of the parents of the current node). In the sXML philosophy, however, everything regarding knowledge of the document structure is left to the caller - the reading process is restricted to the detection of opening and closing elements, attribute values, text nodes, etc.
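This is not part of the abap2xlsx coding, but as an illustration: a caller that needs this knowledge could maintain such a parent stack itself. A minimal sketch, assuming lo_reader is a token-based if_sxml_reader instance:

data: lt_parents type string_table,
      lv_owner   type string.

while lo_reader->node_type ne if_sxml_node=>co_nt_final.
  lo_reader->next_node( ).
  case lo_reader->node_type.
    when if_sxml_node=>co_nt_element_open.
      append lo_reader->name to lt_parents.           " push
    when if_sxml_node=>co_nt_element_close.
      delete lt_parents index lines( lt_parents ).    " pop
    when if_sxml_node=>co_nt_value.
*     the element the text really belongs to is the top of the stack,
*     which is not necessarily lo_reader->name
      read table lt_parents index lines( lt_parents ) into lv_owner.
  endcase.
endwhile.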

 

Some Remarks on Tests

 

A parser is an ideal candidate for unit tests, since its output depends only on the input stream - there are usually no dependencies on other state (database, API calls, ...). So I implemented all the code of the reader using unit tests.

 

Testing Private Methods

 

The huge file reader has no public methods of its own - the only public methods are inherited from ZCL_EXCEL_READER_2007, which is out of the scope of my tests. For this reason, I decided to test private methods. There is a price for this decision: we are not as free as we could be in renaming and deleting private methods, since the test classes always have to be adapted as well.

 

To make private methods testable, we start the unit test section with a declaration of friendship:

 

class lcl_test definition deferred.
class zcl_excel_reader_huge_file definition local friends lcl_test.

Reading a Number

 

With some helper methods in the test class, I let the code of a test look more intentional and to-the-point: everybody can read from the code of the following test method that the cell datatype "number" is under test: a cell with numerical content (17) should propagate to the worksheet in the expected way:

method test_number.
  data lo_reader type ref to if_sxml_reader.
  lo_reader = get_reader( `<c r="A1" t="n"><v>17</v></c>` ).
  out->read_worksheet_data( io_reader = lo_reader io_worksheet = worksheet ).
  assert_value_equals( `17` ).
  assert_datatype_equals( `n` ).
endmethod.                    "test_number

 

Here, OUT is the "object under test". As usual for unit tests, a new reader instance is created for each test method in the setup method. The same holds for the worksheet instance here, which holds the result.
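The setup method itself is not shown in the blog; under the assumption that OUT and WORKSHEET are attributes of the test class and that the worksheet is obtained in the usual abap2xlsx way via ADD_NEW_WORKSHEET( ), it might look roughly like this:

method setup.
  data lo_excel type ref to zcl_excel.
* fresh object under test for every test method
  create object out.
* fresh worksheet instance that will receive the parsed cells
  create object lo_excel.
  worksheet = lo_excel->add_new_worksheet( ).
endmethod.                    "setup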

 

Many tests just call the private central method read_worksheet_data( ) and then inspect the produced worksheet instance. This method imports an instance of an sXML reader to read the input data, and an instance of ZCL_EXCEL_WORKSHEET to write the result. Apart from the expected changes in the worksheet's state, there are no side effects on other data. The only further external dependency is that the table of shared strings (supposed to be parsed earlier from the sharedStrings.xml file) should already be available in the private attribute shared_strings, a simple list of strings (of type STRINGTAB).

 

In order to restrict the code of the test methods to the essential, specific parts, the get_reader( ) method only receives the XML code snippet for the  cell to be parsed (or multiple cells in a row): before creating the reader instance, this nucleus is wrapped into a document skeleton, to emulate a full XML document:

 

 

method get_reader.
  data: lv_full type string.
  concatenate `<root><sheetData><row>` iv_xml `</row></sheetData></root>` into lv_full.
  eo_reader = cl_sxml_string_reader=>create( cl_abap_codepage=>convert_to( lv_full ) ).
endmethod.                    "get_reader

 

On Shared Strings and the Boundary Values Problem

 

As a local friend, the test class is allowed to pre-fill the shared_strings table attribute. Therefore, we can pretend to have already parsed the sharedStrings.xml file by adding some values to the shared_strings table. Afterwards, we call the reader and check that a cell with a string reference is correctly resolved. Here is the test for correct reading of the string table:

 

 

method test_shared_string.
  data lo_reader type ref to if_sxml_reader.
  append `Test1` to out->shared_strings.
  append `Test2` to out->shared_strings.
  lo_reader = get_reader( `<c r="A1" t="s"><v>1</v></c>` ).
  out->read_worksheet_data( io_reader = lo_reader io_worksheet = worksheet ).
  assert_value_equals( `Test2` ).
  assert_datatype_equals( `s` ).
endmethod.                    "test_shared_string

 

 

For shared strings, the content of the <v> element is the 0-based index for reading the string table. Therefore, the cell content in this example has to be 'Test2' after parsing.

 

In a former version, I had been too minimalistic: I used only one entry in the string table, and the <v> element contained 0, not 1, to point to this one entry.


The test in its former version passed, but it didn't detect a bug in the code, due to the "boundary value" anti-pattern: 0 is a special value, since it is also the initial value for integer variables. The bug was that the index was not transferred at all to the variable holding the index, which thus remained at its initial value, zero. Only by extending the test with a second string and working with the non-special index value 1 could I reproduce the bug - and then fix it.

 

Test Coverage

 

While working on the code and testing it, it's good to have a look at the test coverage. It shows uncovered code sections in which a bug might still be hiding.

 

coverge.png

Of course, some parts are intentionally uncovered. In this class, the methods load_shared_strings( ), load_worksheet( ) and get_sxml_reader( ) have zero coverage on purpose, since they provide data from outside. I only test how the class works on these data. But the rest of the methods have 100% statement coverage.

 

 

The branch coverage (a column which is hidden in the coverage analyzer's default layout) shows that some conditions could still be added. Indeed, I have no particular tests for invalid sheet.xml documents - for example, I always expect that there is a sheetData element, as the Open XML specification requires.

 

As the boundary value problem above shows, full statement coverage - or even full branch coverage - is not sufficient for detecting all bugs sleeping in the code. An instinct for potential bugs is necessary while writing the test method. It's an interesting change of hat: when writing test methods, we are supposed to turn into the most fervent critics of ourselves!

Update Purchase requisition based on external operation of production order


During our recent ECC implementation, the business requested that the Service Material value (populated in user field AFVGD-USR00) should be automatically populated into the purchase requisition for an external operation.

 

The purchase requisition (PR) is created as soon as a production order line with an operation control key relevant for external processing, such as 'EXTL' or 'EXTP', is saved. In standard SAP, the PR is created with the operation text as the material description and it does not carry a material code in the PR material code field. To provide better visibility to the buyer and supplier, the business requested that a PR created from a production order with external operations should address the following.

 

The purchase requisition should have:

  • The material code in the PR auto-populated with the material code maintained in the "Service Matl" field on the operation details screen for the external operation number in the production order (refer to the screenshot)

screenshot1.png

 

screenshot2.png

  • Item long text: the material text, both short and long, copied from the material master, and the engineering revision level copied from Basic Data 1 of the material master.

screenshot3.png

 

The business also requested some pre-validations when auto-populating the material code. For example:

1. The service material code should begin with 'SM_'.

2. The material should exist in table MARA.

3. The material and plant combination should exist in table MARC.

 

The validation errors should be visible on the operation details screen of the production order.
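A minimal sketch of how these checks might look inside the AT_SAVE method of BAdI WORKORDER_UPDATE (see the enhancement table further below); the variable names, plant handling and error mechanism are assumptions, not the productive coding:

DATA: lv_matnr TYPE matnr,      " value of the "Service Matl" field (AFVGD-USR00)
      lv_werks TYPE werks_d,    " plant of the production order
      lv_dummy TYPE matnr,
      lv_error TYPE abap_bool.

* 1. naming convention: service material must begin with 'SM_'
IF lv_matnr(3) <> 'SM_'.
  lv_error = abap_true.
ENDIF.

* 2. material must exist in MARA
SELECT SINGLE matnr FROM mara INTO lv_dummy WHERE matnr = lv_matnr.
IF sy-subrc <> 0.
  lv_error = abap_true.
ENDIF.

* 3. material/plant combination must exist in MARC
SELECT SINGLE matnr FROM marc INTO lv_dummy
       WHERE matnr = lv_matnr AND werks = lv_werks.
IF sy-subrc <> 0.
  lv_error = abap_true.
ENDIF.

* if lv_error is set, raise the error via the BAdI's message handling so that
* it shows up on the operation details screen of the production order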

 

It took me some time to debug the end-to-end transaction to find the appropriate enhancement points for the different business requirements. I am listing the enhancements used in the table below.

I hope this is useful for others attempting to map similar requirements where a PR is automatically created/changed when a production order is created/changed.

 

Enhancement Details:

1. User exit EXIT_SAPLCOVG_001
  • We used this exit to collect all modified external operation records.
  • This exit is triggered only when an external operation is modified. However, there are a few scenarios in which this exit is not triggered at all (e.g. when there is no change to an external operation, or when an external operation is deleted), and hence we also implemented EXIT_SAPLCOZF_001.
  • Populate the external operation data obtained in this exit into a global internal table using a custom method.

2. Enhancement spot ES_SAPLCOZF, point \PR:SAPLCOZF\EX:SAPLCOZF_OLC_002\EI
  • This enhancement spot is triggered when an external operation is deleted.
  • Export the deleted external operation record (AFVGD_IMP) and the respective deletion flag (DELKNZ_BANF) using an ABAP memory ID (see the sketch after this table).

3. User exit EXIT_SAPLCOZF_001
  • This exit is triggered once for each external operation.
  • We used this exit to collect external operation records that cannot be fetched using exit EXIT_SAPLCOVG_001.
  • However, this exit is not triggered when an existing external operation record is modified; in that case exit EXIT_SAPLCOVG_001 is used instead.
  • Also import the deleted external operations using the ABAP memory ID (this data is exported by enhancement 2).
  • Populate the external operation data obtained in this exit into a global internal table using a custom method.

4. BAdI WORKORDER_UPDATE, method AT_SAVE
  • This method is used for adding any custom validation on external operation fields.
  • Bypass the validation for deleted external operations with the help of the deletion flag.
  • Fetch all the external operations from the global internal table populated by the two exits above.
  • An appropriate error message can be displayed using this BAdI method.
  • If all the external operations are valid, export the external operation internal table using an ABAP memory ID for further processing.

5. Enhancement spot ES_SAPLEBNE, point \PR:SAPLEBNE\EX:LEBNEF08_01\EI
  • This enhancement spot is triggered before execution of the standard FM 'ME_UPDATE_REQUISITION', which is used for updating PR data.
  • Import the valid external operation table using the ABAP memory ID.
  • Modify the purchase requisition table XEBAN using the external operation data.
  • In our business scenario we modified the PR item material number and short description using the external operation fields.

6. BAdI WORKORDER_UPDATE, method IN_UPDATE
  • At this point the PR has already been created, so this method can be used to populate the various long texts available in the PR.
  • In our business scenario, we used this method to update the PR item long text based on the external operation records.
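As an illustration of the ABAP memory handoff between steps 2 and 3 above - a minimal sketch only; the memory ID name and the variable types are assumptions, not the original coding:

* In the enhancement spot of step 2, when an external operation is deleted:
EXPORT afvgd_imp   FROM afvgd_imp
       delknz_banf FROM delknz_banf
       TO MEMORY ID 'Z_EXT_OPR_DELETED'.

* In EXIT_SAPLCOZF_001 (step 3) the deleted operation is read back:
DATA: ls_afvgd_del TYPE afvgd,
      lv_delknz    TYPE flag.

IMPORT afvgd_imp   TO ls_afvgd_del
       delknz_banf TO lv_delknz
       FROM MEMORY ID 'Z_EXT_OPR_DELETED'.
IF sy-subrc = 0.
* collect the deleted operation into the global internal table as well
ENDIF.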

Checking for Duplicate Sales Orders in SAP CRM


1.   Introduction

 

While creating sales orders in SAP GUI or in the WebUI, it is quite possible that similar sales orders are placed more than once. In that case it would be nice to have a preventive measure that checks in the system whether there are any existing sales orders similar to the one being created. Finding a probable duplicate order and being warned about it before we save a new order ensures exactly that.

 

2. Major Features

 

When a sales order is entered in CRM, before saving it, it may be necessary to perform a duplicate order check in BADI ORDER_SAVE, depending on the conditions described below.

The duplicate order check should be activated for the combination of transaction type, sales area and media type. (For example: for standard orders within France the check is activated, for standard orders within the UK the check is deactivated).

Sales Area – The sales area provides a mechanism to turn the functionality on or off per country.

Transaction Type – Used to differentiate between sales orders (for example, to exclude debit or credit memos).

Media Type – E.g. fax, e-mail, telephone (for example, web orders and EDI will be excluded).

Contact Person – The check on contact person should be turned on or off per activation area above. Example: When the check on contact person is switched ON the check will only consider orders created for the same ship-to and contact person. When the check is turned OFF there is a check between all orders for a certain ship-to.

 

Number of days – A number of days should be included to define how many days back in time should be checked. This can differ per activation area, e.g. for France 8 days, for the UK 4 days.

Number of items – In the as-is situation 4 items are checked: if 4 items are identical in 2 orders, the duplicate check flags the order as a possible duplicate. In the to-be situation it must be possible to vary the number of items per activation area.

3. Technical Details

 

For the purpose of the duplicate order check, key figures like sales organization, distribution channel, division, transaction type and media type are looked at for the order that is to be saved.

 

This is done using a custom table where certain flags and ranges are stored. For example, if the flag duplicate_ord_chk is enabled, duplicates need to be checked for. Similarly, if contact_per_chk is enabled, the contact person has to be compared as well. The field days_chk determines how many days back we should check for duplicates at most, and items_chk determines how many items need to be identical for an order to qualify as a duplicate.
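A rough sketch of the switch table (referenced as yorder_check in the code below); the field names follow the SELECT used later in this blog, while the data types are assumptions:

TYPES: BEGIN OF ty_order_check,
         sales_org         TYPE c LENGTH 14, " sales organization
         dis_channel       TYPE c LENGTH 2,  " distribution channel
         division          TYPE c LENGTH 2,  " division
         transaction_type  TYPE c LENGTH 4,  " transaction (process) type
         media_type        TYPE c LENGTH 3,  " media type / category
         duplicate_ord_chk TYPE flag,        " duplicate check active?
         contact_per_chk   TYPE flag,        " also compare the contact person?
         days_chk          TYPE i,           " how many days to look back
         items_chk         TYPE i,           " identical items needed for a match
       END OF ty_order_check.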

 

scn1.jpg

 

Once a duplicate order is found, the customer would like to be warned that what they are trying to create might already exist in the system. This can be done simply by using a popup message box. We will come to this part later; the main requirement first is to find the details of the current sales order that is about to be saved.

 

 

CRM ABAP

In the SAP system this is possible using BAdI ORDER_SAVE, method CHECK_BEFORE_SAVE. So we need to implement this BAdI first.

 

 

scn1.jpg

In the method CHECK_BEFORE_SAVE the importing parameter of concern is IV_GUID. All the details of the order need to be fetched through this GUID.

 

First and foremost, this is only applicable in the foreground, so we check for background processing and return if necessary.

 

 

 


IF sy-batch IS NOT INITIAL.
RETURN.
ENDIF.

 

Some of the data can be retrieved from the SAP memory.

 

* Fetch partner data to find
* determine layout
CALL FUNCTION 'COM_PARTNER_LAYOUT_GET_UI'
EXPORTING
iv_subscreen_id   = lco_scr1000
IMPORTING
es_screen_layout  = lwa_screen_layout
EXCEPTIONS
no_layout_defined = 1
  OTHERS            = 2.

  IF sy-subrc NE 0.
* screen data could not be fetched so return
RETURN.
ENDIF.

* the duplicate check applies only to new orders; for an existing order the object id is already filled
IF lwa_screen_layout-object_id IS NOT INITIAL.
RETURN.
ENDIF.

 

 

This data helps us to find the details of the partners like sold to party, ship to party, contact etc.

 

* get partners
CALL FUNCTION 'COM_PARTNER_TO_DISPLAY_OW'
EXPORTING
iv_partnerset_guid   = lwa_screen_layout-partnerset
  iv_determ_proc       = lwa_screen_layout-determ_proc
IMPORTING
  et_partner           = lt_partner_to_disp
EXCEPTIONS
partnerset_not_found = 1
  OTHERS = 2.
IF sy-subrc NE 0.
* screen data could not be fetched so return
RETURN.
ENDIF.

lt_partner_to_display[] = lt_partner_to_disp[].

* get ship to party
READ TABLE  lt_partner_to_display
INTO lwa_partner_to_display
WITH KEY partner_fct = lco_partner_2.
IF sy-subrc EQ 0.
lv_ship_to_c = lwa_partner_to_display-partner_number.
ENDIF.


* get contact person
READ TABLE lt_partner_to_display
INTO lwa_partner_to_display
WITH KEY partner_fct = lco_partner_15.
IF sy-subrc EQ 0.
lv_contact_c = lwa_partner_to_display-partner_number.
ENDIF.

 

But we will need the GUID to fetch additional information

 

* Fetch sales org/dis channel/division from orgman
INSERT iv_guid
INTO TABLE lt_header_guid.

  CALL FUNCTION 'CRM_INTLAY_GET_DATA'
EXPORTING
iv_guid = iv_guid
  iv_kind           = lco_kind_a
iv_interfacename  = lco_int_orgman
IMPORTING
es_interfacevalue = lwa_orgman_ui
EXCEPTIONS
error_occurred    = 1
no_valid_guid     = 2.
IF sy-subrc EQ 0.
lv_sales_org_c        = lwa_orgman_ui-sales_org_short.
lv_dis_channel_c      = lwa_orgman_ui-dis_channel.
  lv_division_c         = lwa_orgman_ui-division.
ELSE.
* failed to fetch organization data so return
RETURN.
ENDIF.

* Fetch category(media type) from sales UI
CALL FUNCTION 'CRM_INTLAY_GET_DATA'
EXPORTING
  iv_guid           = iv_guid
  iv_kind           = lco_kind_a
iv_interfacename  = lco_int_sales
IMPORTING
es_interfacevalue = lwa_sales_ui
EXCEPTIONS
error_occurred    = 1
no_valid_guid     = 2.
IF sy-subrc EQ 0.
lv_media_type_c       = lwa_sales_ui-category.
ELSE.
* failed to fetch media type data so return
RETURN.
ENDIF.
* Transaction type
lv_transaction_type_c = lwa_screen_layout-process_type.

 

This gives us almost all information we require for checking the duplicate order, except the item level check. This is done using the standard function

 

CALL FUNCTION 'CRM_ORDER_READ'
  EXPORTING
    it_header_guid       = lt_header_guid
  IMPORTING
    et_orderadm_h        = lt_orderadm_h
    et_orderadm_i        = lt_orderadm
    et_product_i         = lt_product_i
  EXCEPTIONS
    document_not_found   = 1
    error_occurred       = 2
    document_locked      = 3
    no_change_authority  = 4
    no_display_authority = 5
    no_change_allowed    = 6
    OTHERS               = 7.

IF sy-subrc <> 0.
* unable to fetch posting date so return
  RETURN.
ELSE.
  READ TABLE lt_orderadm_h
       INTO lwa_orderadm_h
       INDEX 1.
ENDIF.

 

In the next step we check the duplicate order configuration table to see whether the check is enabled for the current order. If the duplicate check is disabled, we need not proceed further.

 

* Fetch data from switch table
SELECT SINGLE *
FROM yorder_check
INTO lwa_duplicate
WHERE sales_org         = lv_sales_org_c
AND dis_channel       = lv_dis_channel_c
AND division          = lv_division_c
AND transaction_type  = lv_transaction_type_c
AND media_type = lv_media_type_c.

* check if record is there and duplicate check is active else return
IF sy-subrc NE 0
OR lwa_duplicate-duplicate_ord_chk IS INITIAL.
RETURN.
ENDIF.

 

 

Once we know that we have to check for duplicates, we start looking for existing orders posted between today's date and the date that lies days_chk days in the past.

 

* determine posting date range
lwa_post_dt-sign         = lco_range_i.
lwa_post_dt-option       = lco_range_bt.
lwa_post_dt-low          = lwa_orderadm_h-posting_date - lwa_duplicate-days_chk.
lwa_post_dt-high         = sy-datum.
APPEND lwa_post_dt TO lt_post_dt.

* select GUIDs from orderadm_h table
SELECT *
FROM crmd_orderadm_h
INTO TABLE lt_header
WHERE posting_date IN lt_post_dt
AND process_type EQ lv_transaction_type_c.

  IF lt_header[] IS INITIAL.
RETURN.
ENDIF.

 

Now that we have lt_header[] with all the sales orders which fall within the same date range we can fetch all information related to these orders.

 

* header has values this means records exist within the date range
LOOP AT lt_header INTO lwa_header.
APPEND lwa_header-guid TO lt_guid_sorted.
ENDLOOP.
SORT lt_guid_sorted.
DELETE ADJACENT DUPLICATES FROM lt_guid_sorted.
lt_header_guid[] = lt_guid_sorted[].
* read the record details
CALL FUNCTION 'CRM_ORDER_READ'
EXPORTING
  it_header_guid       = lt_header_guid
IMPORTING
  et_activity_h        = lt_activity2
  et_orderadm_i        = lt_orderadm2
  et_product_i         = lt_product_i2
et_orgman = lt_orgman2_s
  et_partner           = lt_partner2
EXCEPTIONS
document_not_found   = 1
  error_occurred       = 2
document_locked      = 3
no_change_authority  = 4
no_display_authority = 5
no_change_allowed    = 6
  OTHERS = 7.
IF sy-subrc <> 0.
* unable to fetch record details so return
RETURN.
ENDIF.

lt_orgman2[] = lt_orgman2_s[].

 

Now that we have all the details for the selected orders, we simply need to compare them.

 

  LOOP AT lt_header INTO lwa_header.
* Clear variables and looping tables
CLEAR:
lv_ship_to_v,
lv_sales_org_v,
lv_dis_channel_v,
lv_division_v,
lv_transaction_type_v,
lv_media_type_v,
lv_contact_v.

* Now that we have the partner list in lt_partner2
* fetch ship to party and check
READ TABLE lt_partner2
INTO lwa_partner2
WITH KEY partner_fct     = lco_partner_2
  ref_guid        = lwa_header-guid.
IF sy-subrc EQ 0.
lv_ship_to_v = lwa_partner2-partner_no.
ENDIF.


IF lv_ship_to_c NE lv_ship_to_v.
CONTINUE.
ENDIF.

* fetch organizational data
READ TABLE lt_orgman2
INTO lwa_orgman2
WITH KEY sales_org_ori   = lco_sori_a
  ref_guid        = lwa_header-guid.
IF sy-subrc EQ 0.
  lv_sales_org_v         = lwa_orgman2-sales_org_short.
  lv_dis_channel_v       = lwa_orgman2-dis_channel.
  lv_division_v          = lwa_orgman2-division.
ENDIF.


* if organizational data does not match check next record
IF lv_sales_org_c   NE lv_sales_org_v
OR lv_dis_channel_c NE lv_dis_channel_v
OR lv_division_c    NE lv_division_v.
CONTINUE.
ENDIF.

* Transaction type
lv_transaction_type_v  = lwa_header-process_type.

* fetch media type
READ TABLE lt_activity2
INTO lwa_activity2
WITH KEY guid = lwa_header-guid.
IF sy-subrc EQ 0.
  lv_media_type_v        = lwa_activity2-category.
ENDIF.

* Compare and check items
IF lv_transaction_type_c = lv_transaction_type_v
AND lv_media_type_c       = lv_media_type_v.

* When similar record exist in existing and current line items
* create table for current line items
REFRESH: lt_items, lt_items2.
LOOP AT lt_orderadm INTO lwa_orderadm.
lwa_items-ordered_prod  = lwa_orderadm-ordered_prod.
CLEAR lwa_product_i.
READ TABLE lt_product_i
INTO lwa_product_i
WITH KEY guid = lwa_orderadm-guid.
IF sy-subrc EQ 0.
  lwa_items-net_weight    = lwa_product_i-net_weight.
ENDIF.
APPEND lwa_items TO lt_items.
ENDLOOP.
SORT lt_items.
* create table for existing line items
LOOP AT lt_orderadm2 INTO lwa_orderadm2.
IF lwa_orderadm2-header = lwa_header-guid.
lwa_items2-ordered_prod = lwa_orderadm2-ordered_prod.
CLEAR lwa_product_i2.
READ TABLE lt_product_i2
INTO lwa_product_i2
WITH KEY guid = lwa_orderadm2-guid.
IF sy-subrc EQ 0.
  lwa_items2-net_weight    = lwa_product_i2-net_weight.
ENDIF.
APPEND lwa_items2 TO lt_items2.
ENDIF.
ENDLOOP.
SORT lt_items2.
* now that we have the product and their quantities in sorted order
* we have to look for matching items
CLEAR lv_match.
LOOP AT lt_items INTO lwa_items.
READ TABLE lt_items2
TRANSPORTING NO FIELDS
WITH KEY ordered_prod    = lwa_items-ordered_prod
  net_weight      = lwa_items-net_weight.
IF sy-subrc = 0.
  lv_match = lv_match + 1.
DELETE lt_items2 INDEX sy-tabix.
ENDIF.
ENDLOOP.

* if there are more matching records lv_match than lwa_duplicate-items_chk
* or equal, then we say that its a duplicate record
IF lv_match GE lwa_duplicate-items_chk.
* check if contact person check is active, if yes then check contacts
IF lwa_duplicate-contact_per_chk IS NOT INITIAL.
* fetch contact data
READ TABLE lt_partner2
INTO lwa_partner2
WITH KEY partner_fct = lco_partner_15
  ref_guid = lwa_header-guid.
IF sy-subrc EQ 0.
lv_contact_v = lwa_partner2-partner_no.
ENDIF.

IF lv_contact_c = lv_contact_v.
* show message
lv_message = text-t01.
REPLACE ALL OCCURRENCES OF '&' IN lv_message WITH lwa_header-object_id.
CALL FUNCTION 'POPUP_TO_CONFIRM'
  EXPORTING
  text_question         = lv_message
  display_cancel_button = ''
  IMPORTING
  answer = lv_answer.
IF lv_answer NE lco_ans_1.
  RAISE do_not_save.
ENDIF.
RETURN.
ENDIF.
ELSE.
* when contact person check is not active
* show message
lv_message = text-t01.
REPLACE ALL OCCURRENCES OF '&' IN lv_message WITH lwa_header-object_id.
CALL FUNCTION 'POPUP_TO_CONFIRM'
EXPORTING
  text_question         = lv_message
  display_cancel_button = ''
IMPORTING
  answer = lv_answer.
IF lv_answer NE lco_ans_1.
RAISE do_not_save.
ENDIF.
RETURN.
ENDIF.
ENDIF.
ENDIF.
ENDLOOP.

As we can see, the program generates a popup message in case a possible duplicate order is found. The order number is shown in the popup message, which asks whether to continue saving or not. If we choose Yes, the order is saved irrespective of the possible duplication; otherwise, an exception is raised and the order is not saved.

 

In this case, we get a popup in the SAP system while trying to save a new (or copied) order using transaction CRMD_ORDER. If we answer Yes, the order is saved; otherwise it is not.

 

scn1.jpg

How does the number of table records influence performance


Recently we have been handling a customer complaint about a performance issue in their production system. We are investigating both the ABAP side and the DB side to seek potential improvements. Regarding the DB side, we are working together with the customer on possible DB index optimization.

 

On the other hand, we observed that the database tables which cause the performance pain have a huge number of records (between 50 and 200+ million). We do see the possibility of archiving some historical data; however, we are not sure whether the customer would get a performance gain after archiving.

So I did a small experiment to get a rough idea of the relationship between the number of table records and performance.

 

Test environment and test data preparation

 

Database: HANA DB, release 1.00.74.00.390550. The test is done on our internal system. It would make more sense to test in the customer's system, since the customer is currently using a non-HANA DB, but unfortunately I could not achieve that - I need to write several reports to generate the massive test data, and my request for a user in their sandbox system was not approved. I hope I will eventually have the chance to repeat this experiment on the customer's sandbox system.

 

The table causing the performance issue is PPFTTRIGG, a standard table in SAP CRM that stores transaction data with the processing details of the action framework.

In the customer system it has 75 million records. In ST12, its DB time always ranks first in the trace result.

 

For the table itself:

 

Red color: Key field

Blue color: Index field

Yellow color: Non index field

clipboard1.png

I copied the standard table into ten new Z tables with exactly the same technical settings, and filled the Z tables with 10 million to 100 million records.

For the fields os_guid, appl_oid and medium_oid, I use function module GUID_CREATE to generate new GUIDs and assign them.

For other fields like applctn, context and ttype, I read all possible values from their value tables and distribute them evenly over the table records.
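The fill reports themselves are not shown here; a rough sketch of how one of the Z tables could be filled might look like this (the package size, the even distribution of applctn/context/ttype, and the exact GUID field types are simplified assumptions):

DATA: lt_fill TYPE STANDARD TABLE OF zppfttrigg1,
      ls_fill TYPE zppfttrigg1,
      lv_idx  TYPE i.

DO 10000000 TIMES.                       " 10 million rows for ZPPFTTRIGG1
  CALL FUNCTION 'GUID_CREATE'
    IMPORTING
      ev_guid_32 = ls_fill-os_guid.      " pick the export parameter matching the field type
  CALL FUNCTION 'GUID_CREATE'
    IMPORTING
      ev_guid_32 = ls_fill-appl_oid.
  CALL FUNCTION 'GUID_CREATE'
    IMPORTING
      ev_guid_32 = ls_fill-medium_oid.
* applctn / context / ttype would be taken evenly from their value tables here
  APPEND ls_fill TO lt_fill.
  lv_idx = lv_idx + 1.
  IF lv_idx MOD 100000 = 0.              " insert in packages to limit memory usage
    INSERT zppfttrigg1 FROM TABLE lt_fill.
    COMMIT WORK.
    CLEAR lt_fill.
  ENDIF.
ENDDO.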

 

Table name      Table records
ZPPFTTRIGG1     10 million
ZPPFTTRIGG2     20 million
ZPPFTTRIGG3     30 million
ZPPFTTRIGG4     40 million
ZPPFTTRIGG5     50 million
ZPPFTTRIGG6     60 million
ZPPFTTRIGG7     70 million
ZPPFTTRIGG8     80 million
ZPPFTTRIGG9     90 million
ZPPFTTRIGG10    100 million

 

The following measurements are done against the ten Z tables. Unless stated otherwise, all times recorded in this blog are measured in milliseconds.
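The measurement coding itself is not shown in the blog; a minimal sketch of how a single timing could be taken with GET RUN TIME (which returns microseconds) might look like this - the GUID value used in the WHERE clause is of course a placeholder:

DATA: lv_start    TYPE i,
      lv_end      TYPE i,
      lv_appl_oid TYPE os_guid,          " one known GUID per Z table (placeholder)
      ls_result   TYPE zppfttrigg1.

GET RUN TIME FIELD lv_start.
SELECT SINGLE * FROM zppfttrigg1 INTO ls_result
       WHERE appl_oid = lv_appl_oid.
GET RUN TIME FIELD lv_end.
WRITE: / 'ZPPFTTRIGG1 read time (microseconds):', lv_end - lv_start.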

 

Read via Index field, single record returned for each table

 

I use the index field appl_oid, which ensures that only a single, unique record is returned for each Z table. The time is measured in microseconds.

 

clipboard2.png

I tested it repeatedly, and the result always shows that the time does not increase linearly with the number of table records.

 

Read via table key field OS_GUID

 

The time is measured in microseconds.

clipboard3.png

The result shows that on average it is a little faster to read via the table key field than via an index field.

 

Read via Non-index fields, one record returned

 

In this case I query the ten tables via the non-index field MEDIUM_OID. Each query returns only one result for each Z table. The time is measured in microseconds.

clipboard4.png

All three kinds of read operation return a single unique record from the SQL statement. Sorted by efficiency: read via key > read via index field > read via non-index field.

 

Read via Index fields, multiple and different number of records returned for each table

 

The read is performed on the index fields applctn and context.

clipboard5.png

The X axis represents the number of records returned via SELECT * INTO TABLE <record set> FROM <ztable > WHERE applctn = XX AND context = XXX for each Z table.

 

Read via Index fields, multiple and fixed number of records returned

 

Similar to the test above, but with UP TO 6000 ROWS added to force the SQL on each Z table to always return a fixed number of records.

clipboard6.png

Read via Non-index fields, multiple and fixed number of records returned

 

I perform the read to retrieve all records for each table which have the flag is_changed marked as abap_true. Before testing, I manually change the table entries for each Z table to ensure that all ten tables have exactly the same number of records with is_changed marked as abap_true.

clipboard7.png

Table entry mass update

 

The update operation is done on the non-index field is_changed. First I use SELECT ... UP TO XX ROWS to retrieve a record set that has is_changed = abap_true, and then UPDATE <Z table> FROM TABLE <record set> to update the table. The execution time of the statement UPDATE <Z table> FROM TABLE <record set> is measured.

When doing the mass change on the ten tables one by one in the same session, <record set> contains 10,000 entries to be updated:

clipboard8.png

clipboard9.png

clipboard10.png

Before my testing, my assumption was that the time consumed by the update would increase linearly with the number of table records.

However, according to this test result, it seems that the number of table records does not degrade the update performance, at least in HANA DB.
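As a rough sketch (not the original test coding), the mass-update timing described above could be taken like this; the choice of which field to change and the fixed package size of 10,000 are simplifications:

DATA: lt_upd   TYPE STANDARD TABLE OF zppfttrigg1,
      lv_start TYPE i,
      lv_end   TYPE i.
FIELD-SYMBOLS: <ls_upd> TYPE zppfttrigg1.

* record set: 10,000 entries with is_changed = abap_true
SELECT * FROM zppfttrigg1 INTO TABLE lt_upd
       UP TO 10000 ROWS
       WHERE is_changed = abap_true.

LOOP AT lt_upd ASSIGNING <ls_upd>.
  <ls_upd>-is_changed = abap_false.      " modify a non-key field
ENDLOOP.

* only the UPDATE ... FROM TABLE statement itself is timed
GET RUN TIME FIELD lv_start.
UPDATE zppfttrigg1 FROM TABLE lt_upd.
GET RUN TIME FIELD lv_end.
WRITE: / 'UPDATE time (microseconds):', lv_end - lv_start.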

 

Summary

 

Based on this test result, it seems that in HANA DB there would be no significant performance improvement for read operations that return only a single unique record, even if the number of table records is reduced dramatically (from 100 million to 10 million). The same holds true for update operations via UPDATE <DB table> FROM TABLE <record set>.

For read operations that return multiple records, no matter whether index or non-index fields are used and whether a fixed or varying number of records is returned, the time consumed by the read always increases almost linearly with the number of table entries. In this case the customer could still benefit from table archiving.

Fund Management Documents handling in ABAP


Introduction

 

Fund Management documents are generated based on the budgeting processes defined in Customizing. The following budgeting processes are pre-configured in SAP:

 

Process                        Meaning
Enter                          Enter new budget data
Balanced Entry                 Balanced budget increase
Carryover                      Carry over budget data from one year to another
Revenues Increasing Budget     Transfer budget data using a Revenues Increasing the Budget (RIB) rule
Transfer                       Transfer budget data
Return                         Return budget data
Supplement                     Supplement for budget data
Transfer Cover Eligibility     Transfer budget data using a manual cover eligibility rule
 

The process of managing Fund Management documents is primarily done using transaction code FMBB, the Budgeting Workbench. It covers all the activities: preposting a Fund Management document, posting a preposted document, undoing a preposted document and holding a document.

 

fm_01.JPG


The scope of this document is handling the posting of Fund Management documents using ABAP, following an object-oriented approach for handling the relevant documents.


Overview

For handling Fund Management documents, SAP provides the object-oriented class CL_FMKU_ENTRYDOC_HANDLER (FM entry document handler), which is used to handle all the activities related to Fund Management documents. The activities/methods include mass reversal check and post, document get contents and post, held document create and delete, and FM prepost document handling.

 

fm_02.JPG

 

Digging Deep

 

Simple ABAP Objects concepts can be used to create the reference objects and call the relevant methods provided for Fund Management document handling. Let's walk through the basic steps, taking as an example the posting or undoing of a preposted document.

 

1. Declaring the basic data for class before action.

 

DATA: REFOBJ             TYPE REF TO CL_FMKU_ENTRYDOC_HANDLER,
      F_DOCID            TYPE FMED_S_DOCID,
      E_F_HEADER         TYPE FMED_S_HEADER,
      E_T_LINES          TYPE FMED_T_LINES,
      E_T_LINE           TYPE FMED_S_LINE,
      E_FLG_FROM_ARCHIVE TYPE XFELD,
      I_REF_MSG          TYPE REF TO CL_BUBAS_APPL_LOG_CTX,
      I_LOG_HEADER       TYPE BAL_S_LOG,
      E_LOG_HANDLE       TYPE BALLOGHNDL.

 

2. Creating required objects for the functioning.

 

CREATE OBJECT REFOBJ
  EXPORTING
    I_FM_AREA = 'XXXX'.   "=> XXXX here is the FM area

CREATE OBJECT I_REF_MSG.

 

3. Making Log Handler ready for the call.

 

CALL METHOD I_REF_MSG->CREATE_LOG_HANDLE
  EXPORTING
    I_LOG_HEADER            = I_LOG_HEADER
*   I_ONLY_ERROR            =
  IMPORTING
    E_LOG_HANDLE            = E_LOG_HANDLE
  EXCEPTIONS
    LOG_HEADER_INCONSISTENT = 1
    OTHERS                  = 2.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

 

4. Reading the contents of an existing FM Document.

 

F_DOCID-FM_AREA = 'XXXX'.      "-> Same as above
F_DOCID-DOCYEAR = FDOCYEAR.    "-> From screen
F_DOCID-DOCNR   = DOCNR.       "-> From screen

CALL METHOD REFOBJ->DOC_GET_CONTENTS
  EXPORTING
    I_F_DOCID            = F_DOCID
*   I_FLG_ALLOW_ARCHIVED = ' '
  IMPORTING
    E_F_HEADER           = E_F_HEADER
    E_T_LINES            = E_T_LINES
    E_FLG_FROM_ARCHIVE   = E_FLG_FROM_ARCHIVE
  EXCEPTIONS
    NOT_FOUND            = 1
    OTHERS               = 2.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

5. After reading the FM document's content, it is time to change the document state from preposted to posted or undone.

 

*1  Posted

*2  Preposted

*3  Preposted posted

*4  Preposted undone


  E_F_HEADER-DOCSTATE = 4.

 

6. Calling Prepost document to be Posted or Prepost document to be Undone.

 

CALL METHOD REFOBJ->PREPOSTED_POST
  EXPORTING
    I_F_DOCID          = F_DOCID
    I_F_HEADER         = E_F_HEADER
    I_REF_MSG          = I_REF_MSG
    I_FLG_POPUP        = 'X'
    I_POST_W           = 'X'
  CHANGING
    C_T_LINES          = E_T_LINES
  EXCEPTIONS
    NOT_PREPOSTED      = 1
    ORIGINAL_NOT_FOUND = 2
    REF_MSG_INITIAL    = 3
    DOCNR_ERROR        = 4
    LOCK_ERROR         = 5
    OTHERS             = 6.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.


Or


CALL METHOD REFOBJ->PREPOSTED_UNDO
  EXPORTING
    I_F_DOCID          = F_DOCID
    I_F_HEADER         = E_F_HEADER
    I_T_LINES          = E_T_LINES
    I_REF_MSG          = I_REF_MSG
  EXCEPTIONS
    NOT_PREPOSTED      = 1
    ORIGINAL_NOT_FOUND = 2
    ALREADY_LOCKED     = 3
    DOCNR_ERROR        = 4
    OTHERS             = 5.
IF SY-SUBRC <> 0.
* Implement suitable error handling here
ENDIF.

 

7. At the end, don't forget to commit the document after changes.

 

CALL FUNCTION 'BAPI_TRANSACTION_COMMIT'.
IF SY-SUBRC IS INITIAL.
  MESSAGE S001(00) WITH 'Document:' DOCNR ' updated successfully.'.
ENDIF.

 

 

Concluding Remarks


For handling Fund Management documents in ABAP, ABAP Objects makes the work very simple. As shown here, the class CL_FMKU_ENTRYDOC_HANDLER provided by SAP proves to be a savior and makes the handling easy and smooth.

What to do before changing Code


Hi all,

Today I want to write about a theme every one of us developers has gone through.

 

Creating and changing code. To be more specific, the part where we think it will drive us crazy while trying to refactor it.

 

Not because we don't have a transport or technical documentation at hand. No, because our initial changes influenced further development and now it is our turn again, because we built the first pieces. If you took part in a development (and thought: "Please, never show up in my to-do list again"), it always comes back to you at some point.

That is an unwritten rule, isn't it?

 

But now, what can you do as a developer to improve this and reduce such messy work?

 

1st point Do not be just a recipient


Many of you might now be thinking: I'm a developer, so what, it is my job, and if someone requests it, I'll get it done!

 

That is an argument and yes, I agree with you to a degree, but not entirely.

 

Of course, there is always stuff which is really unnecessary but has to be done anyway. It is the same in every project. People love doing things the same way they did before. That is the reason for it, and if it doesn't affect SAP code I just let it happen. In these cases I see the great opportunity to develop from the very beginning and do it my way (the development, that is; the result is given by others).

 

But when it affects SAP or established source code, I always talk to the consultants and ask questions about it. The goal is not to argue with the consultants; I want to understand what they are trying to achieve in the end and why the stuff already available in the system doesn't match. You know, nobody knows everything.


2nd point Bring people together


What does that mean?

If you have a concept in front of you, just read the given task. Take a minute to remember similar requests. If you remember one, let both requesters know about each other (of course, they should work for your company). That's it.

I have seen it more than once that similar requests got different solutions. And most of the time, one solution is the better one. And if it is not better, it might be easier to implement... You get the point.


3rd point Take your time to prove the concept


You passed the second point and it is something new, so most of our requests start with a pre-technical concept (if not, there might be something wrong). This is a very important point to me. Take your time to prove the concept; of course, the points above imply that in a way, but you have to develop it and you are the last one in the row. Once you say OK, it is written in stone and it is a heavy lift to change the requester's (customer's) mind again.

 

4th point Expect the unexpected


A big problem is that some developers are afraid to ask questions. You know, you got the concept proven, now the development starts, and there it is: new questions come out of it. Now is not the time to hide behind your computer. Contact the relevant people and tell them what's going on. Nobody will kill you for asking the questions right now, but perhaps they will if you don't.


5th point Update the documents


A lot of requests don't match the concept in the end. You know, the consultant crosses your path and during a coffee break another point is added to the request. Most of the time it isn't a big story, and therefore we (me too, I know that and I'm working on it) just implement it right away and then it is gone. That's the point - no, it isn't! Weeks later you get a question why something works the way it does and not like what is written in the concept. If you're lucky, you can remember more than just the good coffee, can you?

You see, it is messy work but it's very important, and if you're a smart developer, most of the time others have to do it.

You just have to handle it.


6th point Keep it simple



Plan your code before you write it.

That is the best way to see the full picture as far as you can, and you will be able to keep it as simple as possible.

If you think it is crux, contact another developer around you and let him/her take a look at it.

I'm pretty sure that most of the time there is an easier way to be found.


7th point Don’t reinvent the wheel.


Many tasks you want to do have been done before, and many of them are available prebuilt. That means: before you work through a lot of tables, take your time and search for helpful documents. The internet, SCN especially, and older requests (if you have a ticket system) might help you save a lot of time.

Searching is the most undervalued timesaver at the moment. (But that is a personal statement.)



Some words in between
Now there are a few points which don't exactly fit the timeline of things to do before changing code. But I felt like adding them here as well, so I'll just go ahead.



8th point Know your tools


That means you need to use all the provided tools and plan the stuff you need to plan at the beginning. Unit tests, Code Inspector variants, analysis tools, breakpoint IDs, test sets and all the stuff I forgot to mention here. So take your time to work through your tools and play with them. You will be very surprised at what is possible. I can't tell you how often this has surprised me in the past... and will in the future.

 

9th point Always be a hungry developer


Use your time to try out new things; it doesn't matter whether you need them right now. Read a lot and try things out. Just because you don't need something right now doesn't mean you won't need it next week.


10th point Feel like the guy using your stuff


I know it is a very general statement, but I think development improves with such thinking. If you like the feeling of using it, I'm pretty sure the end user will like it too. That doesn't mean changing essential things written in the concept. Just talk with the relevant people about it and share your thoughts about the usability.

 

That's it. I know, all the stuff I told you here is not new, but it is very important to remember it. That's why I shared this blog. At the moment I am working through Managing Custom Code in SAP. A big topic there is, of course, how to identify unnecessary or cloned code in your system.

The simplest possibility: don't create cloned or unnecessary stuff in the first place.

 

Thank you for reading to the end. Feel free to add points you think are important, or just leave a comment.

 

Cheers

Florian

 

PS: I'm not sure if "crux" is a word, but I think you understand what I mean by it.

PPS: Thank you Matthew Billingham for putting that book under my pillow.




ABAP proxy to Consume PHP Webservices




Objective

 

   The purpose of this article is to show how to implement logical ports and how to create an executable program that consumes an ABAP proxy, after creating a web service in PHP and a service consumer in Enterprise Services. Before starting with this article, it is necessary to first build the PHP web service and define the service consumer in ES. This foundation is perfectly explained by Karthikeya Sastry in his great document: http://scn.sap.com/docs/DOC-46103. It is a very straightforward tutorial on how to consume PHP web services, and as always: the most complex solution is rarely the best one. After going through his document, I was looking for a way to consume the service he created from an executable program. So this tutorial is a complement that shows how to configure the logical port in SOAMANAGER and set it as the default to be used in ABAP programs that consume the developed proxy.

 

 

Configuration of Consumer Proxies    


   The configuration of consumer proxies is implemented via logical ports. A logical port is created based on the requirements of the provider. It refers to a service endpoint that is available under a unique location in the provider system, and it additionally contains a runtime configuration for accessing that service endpoint. Furthermore, the port contains the logon data that is required for calling the service methods. You can create multiple logical ports for each consumer proxy, but a logical port can only refer to one endpoint. You manage and configure consumer proxies via the SOA Management tool (transaction SOAMANAGER). The figure below illustrates the relationship between logical ports and service endpoints. A service consumer establishes a connection by sending a call via a logical port. A logical port can send a call to only one service endpoint, but a service endpoint can be called via various logical ports.

 

1.png 

After following the steps to develop the PHP web service in Karthikeya Sastry's tutorial, you should have the enterprise service below:

 

2.jpg

 

PHP development


In my case the MySQL DB has the structure below; given the titel column as a key, the PHP service must return the entire row:

 

  3.jpg

 

- My PHP program, adjusted according to the model from the tutorial:

 

 

<?php
class MyClass {
    function test($val) {
        $host    = "localhost";
        $user    = "root";
        $senhabd = "paulo";
        $db      = "cdcol";

        $connection = mysql_connect($host, $user, $senhabd) or die('Could not connect: ' . mysql_error());
        $db = mysql_select_db($db, $connection) or die("database error");

        $sql = "SELECT * FROM CDS WHERE TITEL = '".$val."'";
        $result = mysql_query($sql) or die(mysql_error());
        $tbl = mysql_fetch_array($result);

        $nome = $tbl["interpret"];
        echo $nome;
        return $nome;
    }
}

$server = new SOAPServer('service.wsdl');
$server->setClass('MyClass');
$server->handle();
?>

 

 

 

-  The WSDL that I generated from the tutorial:

 

wsdl.jpg

 

Configuration of Logical Port


      Go to transaction SOAMANAGER and select the Service Administration tab. Then choose Single Service Configuration and search for the consumer proxy you created. Create the logical port with the URL of your generated PHP WSDL. Up to this point you can only test your proxy, but you are not yet able to use it in your ABAP program. The trick here is to define this logical port as the default for your consumer proxy by checking the box 'Logical Port is Default', as shown in the pictures below. In older releases you could also do this with transaction LPCONFIG, although that is not advisable, since LPCONFIG must not be used to create logical ports for proxy classes generated in releases after SAP NetWeaver 2004s. SOAMANAGER overrides logical ports created in transaction LPCONFIG. This means that, when two logical ports with the same name exist, the logical port created in transaction SOAMANAGER will be used. This also applies to the behavior of default logical ports: if transaction SOAMANAGER is used to create a default logical port for a particular proxy class, the default logical port created for this class with transaction LPCONFIG is no longer used.

 

soa1.jpg

soa2.png
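If you prefer not to flag the port as default, a generated consumer proxy can also be instantiated with an explicit logical port name. A minimal sketch, assuming the consumer proxy class ZCONSCO_SERVICE used in the ABAP program below and a port name created in SOAMANAGER (the name is illustrative):

DATA lo_proxy TYPE REF TO ZCONSCO_SERVICE.

* 'MY_LOGICAL_PORT' stands for the logical port name chosen in SOAMANAGER
CREATE OBJECT lo_proxy
  EXPORTING
    logical_port_name = 'MY_LOGICAL_PORT'.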

 

ABAP Proxy


    Finally, after setting the logical port as default, we can consume the service in an ABAP program. According to the parameters we defined for the web service, we fill the input structure with the key for the records of the MySQL DB, retrieve the record from the output structure and display its columns.

 

 

REPORT  Z_PROXY_O_PHP.

DATA: o_proxy    TYPE REF TO ZCONSCO_SERVICE,
      o_ex1      TYPE REF TO CX_ROOT,
      v_val      TYPE ZCONSTEST,
      v_response TYPE ZCONSTEST_RESPONSE,
      v_text1    TYPE string.

START-OF-SELECTION.

  v_val-NEW_OPERATION_REQUEST = 'Beauty'.

  TRY.
      CREATE OBJECT o_proxy.

      CALL METHOD o_proxy->test
        EXPORTING
          input  = v_val
        IMPORTING
          output = v_response.

      COMMIT WORK.

    CATCH CX_AI_SYSTEM_FAULT INTO o_ex1.
      v_text1 = o_ex1->GET_TEXT( ).
  ENDTRY.

  IF v_response IS NOT INITIAL.
    WRITE: v_response-NEW_OPERATION_RESPONSE.
  ENDIF.

 

 

Result


According to our PHP web service, the second column of the table is displayed:

 

result.jpg


Sending Email with PDF Attachment



Hi All,

This blog refers to one of the threads (questions) on SCN (http://scn.sap.com/thread/3373098) where the requirement was to send an email with a PDF attachment.

As I had implemented the same for a similar requirement, I replied to this question and wanted to share the steps and code. I had documented all the steps and code in an MS Word document, but as SCN does not allow uploading MS Word documents, I asked people to send an email to my personal email ID (mentioned in the reply) so that I could send them the document.

 

Thereafter I kept receiving a lot of email requests to share the Word document, so I thought of creating this blog with all those steps and code, so that anyone who needs them can get them here on SCN itself, immediately and without waiting for my email reply.

 

I will maintain a link to this blog in that original thread/question.

 

Here are the steps and code for developing the class and method that send an email with a PDF attachment (the content of an internal table), and also a sample program to test it.

 

Step 1: Go to transaction SE24 and provide the class name

 

  class1.png

 

Properties:

 

2.png

 

Step 2: Declare the method SEND_PDF_FILE

 

Parameters of the method:

 

3.png

 

Parameters list to copy and paste:

 

*----------------------------------------------------------------------------------------------------------------------------------------------------------*

IM_FILE_HEADER_TEXT        Importing    Type    STRING                 Header Text

IM_EMAIL_SUBJECT_LINE     Importing   Type    SO_OBJ_DES       Subject Line

IM_EMAIL_BODY_TEXT         Importing    Type    BCSY_TEXT          Body Text

IM_EMAIL_ADDRESS            Importing   Type    ZTP_SMTP_TAB    Email Address

IM_EMAIL_DIST_NAME          Importing   Type   SO_OBJ_NAM       Email Distribution Name

IM_EMAIL_DIST_ID                Importing   Type    SO_OBJ_ID           Email Distribution Id

IM_ATTACHMENT_NAME       Importing   Type    SOOD-OBJDES     Attachment Name

IM_FILE_DELIMETER             Importing   Type    C                          File Delimiter

IM_ITAB_1                             Importing   Type    ZTP_TEXT_TAB      Itab with data content

IM_ITAB_2                             Importing   Type    SOLI_TAB             Itab with column headers

EX_SEND_TO_ALL                Exporting   Type    OS_BOOLEAN      Send Email

IM_EMAIL_DO_NOT               Importing   Type    CHAR1                  Flag to determine whether mail needs to be sent or not

IM_LAYOUT                          Importing    Type    SYPAART             Layout mode Portrait or Landscape

EX_PDF_TABLE                    Exporting   Type    RCL_BAG_TLINE   Contains converted data in PDF format

 

*-------------------------------------------------------------------------------------------------------------------------------------------------------------*

 

Custom types are highlighted , please find them below:

 

Step 3: Create the custom types used above:

Email Table Type :

4.png

 

Corresponding structure:

 

5.png

 

Data Content table type:

 

7.png

 

Corresponding structure:

 

8.png

 

Exceptions:

 

9.png

 

Step 4: Paste the code below into the method body.

 

*--------------------------------------------------------------------------------------------------------------------------------------------------*

Please find the code in the attached file , with file name : Source code _method.txt

*--------------------------------------------------------------------------------------------------------------------------------------------------*
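Since the attachment cannot be embedded here, the following is only a minimal sketch of the core logic using the standard BCS classes, not the attached source. All names, texts and the recipient address are illustrative, and lv_pdf is assumed to already hold the PDF content (e.g. from CONVERT_OTF or CONVERT_ABAPSPOOLJOB_2_PDF):

DATA: lo_bcs       TYPE REF TO cl_bcs,
      lo_document  TYPE REF TO cl_document_bcs,
      lo_recipient TYPE REF TO if_recipient_bcs,
      lx_bcs       TYPE REF TO cx_bcs,
      lt_body      TYPE bcsy_text,
      wa_body      TYPE soli,
      lt_pdf       TYPE solix_tab,
      lv_pdf       TYPE xstring,
      lv_msg       TYPE string.

TRY.
    lo_bcs = cl_bcs=>create_persistent( ).

*   Email body
    wa_body-line = 'Please find the report attached as a PDF file.'.
    APPEND wa_body TO lt_body.
    lo_document = cl_document_bcs=>create_document(
                    i_type    = 'RAW'
                    i_text    = lt_body
                    i_subject = 'Report as PDF attachment' ).

*   PDF attachment (lv_pdf is assumed to be filled already)
    lt_pdf = cl_bcs_convert=>xstring_to_solix( lv_pdf ).
    lo_document->add_attachment(
      i_attachment_type    = 'PDF'
      i_attachment_subject = 'Report'
      i_att_content_hex    = lt_pdf ).

    lo_bcs->set_document( lo_document ).

*   Recipient and send
    lo_recipient = cl_cam_address_bcs=>create_internet_address( 'someone@example.com' ).
    lo_bcs->add_recipient( lo_recipient ).
    lo_bcs->send( ).
    COMMIT WORK.

  CATCH cx_bcs INTO lx_bcs.
    lv_msg = lx_bcs->get_text( ).
    MESSAGE lv_msg TYPE 'I'.
ENDTRY.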

 

 

Step 5: Activate the method and class, then write the small executable test program below to test it. Go to SE38 and create the following program:

 

10.png

 

Code :

 

*-------------------------------------------------------------------------------------------------------------------------------------------------------------*

Please find the code in the attached file , with file name : Source code _program.txt

*--------------------------------------------------------------------------------------------------------------------------------------------------------------*

 

Step 6: Activate the program and execute it. Please change the email ID to your own to receive the email in Lotus Notes / Outlook / whatever email client is configured.

 

Execute the program and check whether the email was sent successfully (check in transaction SOST):

 

12.png

 

Open the PDF file attached in the email and make sure the content is correct:

 

13.png

 

Please let me know if you face any issues while developing or testing it.

 

Thanks & Regards,

Hafizul Mollah,

SAP ABAP Consultant @Capgemini India Pvt. Ltd.

Characteristics for Materials - Quotation


I recently had a lot of trouble finding the requested characteristics on a specific material. For some reason the class assigned to the material didn't contain the characteristics that I needed. I'm not an SD consultant so I had no idea where to even begin to resolve this problem. After running to every SD and MM consultant that I knew, I finally found one who helped me really quickly.

 

First, he indicated that there is a configurable master material that contains the characteristic class. But unfortunately that material wouldn't contain any data as it's the master of many child materials. After some digging he helped me find the link to MARA that CLAF_CLASSIFICATION_OF_OBJECTS for some reason couldn't find. This function module is a result of that.

 

For ZIBSYMBOL I just made a copy of the structure of table IBSYMBOL and added ATINN_S as type CHAR20.

 

Also, keep in mind that for some characteristics the value comes from the 'from' field (ATFLV). It's a floating point value and will need to be converted to be readable (see the conversion routine below).

 

Function - Get Master Characteristic Data for a Material
FUNCTION z_get_config_mat_char.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     REFERENCE(P_MATNR) TYPE  MATNR
*"  TABLES
*"      T_SYMBOLS TYPE  ZIBSYMBOL
*"----------------------------------------------------------------------

  DATA: lv_objmara TYPE mara-cuobf,
        lv_objibin TYPE ibin-instance,
        lv_recnr   TYPE ibin-in_recno,
        lv_sumid   TYPE ibinvalues-symbol_id,
        w_symbols  LIKE LINE OF t_symbols.

  DATA: it_symids TYPE TABLE OF ibinvalues,
        wa_symids LIKE LINE OF it_symids.

  DATA: it_selsym TYPE TABLE OF selopt,
        wa_selsym LIKE LINE OF it_selsym.

* Internal object number of the configuration from the material master
  SELECT SINGLE cuobf
    FROM mara
    INTO lv_objmara
    WHERE matnr = p_matnr.

  lv_objibin = lv_objmara.

* Record number of the configuration instance
  SELECT SINGLE in_recno
    FROM ibin
    INTO lv_recnr
    WHERE instance = lv_objibin.

* Symbol IDs of all characteristic values of this instance
  SELECT * FROM ibinvalues
    INTO CORRESPONDING FIELDS OF TABLE it_symids
    WHERE in_recno = lv_recnr.

* Build a selection table with the symbol IDs
  LOOP AT it_symids INTO wa_symids.
    wa_selsym-sign   = 'I'.
    wa_selsym-option = 'EQ'.
    wa_selsym-low    = wa_symids-symbol_id.
    APPEND wa_selsym TO it_selsym.
  ENDLOOP.

* Read the characteristic values themselves
  SELECT * FROM ibsymbol
    INTO CORRESPONDING FIELDS OF TABLE t_symbols
    WHERE symbol_id IN it_selsym.

* Convert the internal characteristic number to its external representation
  LOOP AT t_symbols INTO w_symbols.
    CALL FUNCTION 'CONVERSION_EXIT_ATINN_OUTPUT'
      EXPORTING
        input  = w_symbols-atinn
      IMPORTING
        output = w_symbols-atinn_s.

    MODIFY t_symbols FROM w_symbols.
  ENDLOOP.

ENDFUNCTION.

 

 

Conversion Routine

DATA: w_float      TYPE f,
      w_character  TYPE ausp-atwrt,
      w_number(10) TYPE p DECIMALS 0.

...

  w_float = wa_symbols-atflv.
  CALL FUNCTION 'CEVA_CONVERT_FLOAT_TO_CHAR'
    EXPORTING
      float_imp  = w_float      "Field TYPE F
      format_imp = w_number     "Field format
      round_imp  = ' '          "Round off
    IMPORTING
      char_exp   = w_character. "Field TYPE C
  <field> = w_character.
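A minimal sketch of how the function module and the conversion routine could be used together (the material number, variable names and the check on ATWRT are illustrative):

DATA: lt_symbols   TYPE STANDARD TABLE OF zibsymbol,
      wa_symbols   TYPE zibsymbol,
      w_float      TYPE f,
      w_character  TYPE ausp-atwrt,
      w_number(10) TYPE p DECIMALS 0.

CALL FUNCTION 'Z_GET_CONFIG_MAT_CHAR'
  EXPORTING
    p_matnr   = '000000000000000100'
  TABLES
    t_symbols = lt_symbols.

LOOP AT lt_symbols INTO wa_symbols.
  IF wa_symbols-atwrt IS NOT INITIAL.
*   Character value can be written directly
    WRITE: / wa_symbols-atinn_s, wa_symbols-atwrt.
  ELSE.
*   Numeric value: convert the float to a readable character string
    w_float = wa_symbols-atflv.
    CALL FUNCTION 'CEVA_CONVERT_FLOAT_TO_CHAR'
      EXPORTING
        float_imp  = w_float
        format_imp = w_number
        round_imp  = ' '
      IMPORTING
        char_exp   = w_character.
    WRITE: / wa_symbols-atinn_s, w_character.
  ENDIF.
ENDLOOP.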

ABAP to XLSX in less than 10 minutes


Sometimes you just want to get a simple internal table into a simple Excel file. You might want to store it on the client PC or on the application server, or even send it as an email attachment. And you might want to do it in the coming 10 minutes or less.

 

Prerequisite: it's all based on the fine functionality of these standard objects: CL_SXML_WRITER, SALV_BS_EXPORT, CL_OPENXML_PACKAGE.

 

You've got those packages? Let's get started. Unzip the sourceCodes.zip file, create 3 Simple Transformations and 1 Class. Then you can already start using it with the 2 demo programs.

 


  1. From the attachment simpleTransformations.txt, create these Simple Transformations in transaction STRANS:
    • ZSCN_XLSX_SHEET from the code in the ZSCN_XLSX_SHEET part.
    • ZSCN_XLSX_STYLE from the code in the ZSCN_XLSX_STYLE part.
    • ZSCN_XLSX_CUSTOM_DOC_PROPS from the code in the ZSCN_XLSX_CUSTOM_DOC_PROPS part
  2. Create and activate the class ZSCN_CL_XLSX_WRITER in Transaction SE24
    • Make sure your class builder editor is in "Source Code-Based Class Builder" mode. This makes the copy / paste much easier. You can set this from the menu in SE24, Utilities --> Settings, then in the tab "Class Builder", check the option "Source Code-Based Class Builder", or switch it as shown in the video.
    • Create the class ZSCN_CL_XLSX_WRITER from the code in ZSCN_CL_XLSX_WRITER.txt
  3. Celebrate, you're done!
  4. Create a first test program in SE38 ZSCN_XLSX_WRITER_DEMO_BASIC from the source code part ZSCN_XLSX_WRITER_DEMO_BASIC in the attachment programs.txt.
  5. Create a second test program ZSCN_XLSX_WRITER_DEMO_ADV with some more functionality in SE38 from the source code part ZSCN_XLSX_WRITER_DEMO_ADV in programs.txt.

 

* Only 3 attachments are allowed, so I needed to merge some content using some old-fashioned "begin of" and "end of" markers.
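If you only need the raw XLSX content and cannot install the attached objects, a standard-only alternative is the SALV XML export. This is not the ZSCN_CL_XLSX_WRITER class described above, just a rough sketch (file name and demo table are illustrative):

DATA: lt_sflight TYPE STANDARD TABLE OF sflight,
      lo_salv    TYPE REF TO cl_salv_table,
      lv_xlsx    TYPE xstring,
      lt_binary  TYPE solix_tab,
      lv_size    TYPE i.

SELECT * FROM sflight INTO TABLE lt_sflight UP TO 100 ROWS.

TRY.
    cl_salv_table=>factory( IMPORTING r_salv_table = lo_salv
                            CHANGING  t_table      = lt_sflight ).
  CATCH cx_salv_msg.
    RETURN.
ENDTRY.

* Let SALV render the table as XLSX (binary content as XSTRING)
lv_xlsx   = lo_salv->to_xml( xml_type = if_salv_bs_xml=>c_type_xlsx ).
lt_binary = cl_bcs_convert=>xstring_to_solix( lv_xlsx ).
lv_size   = xstrlen( lv_xlsx ).

cl_gui_frontend_services=>gui_download(
  EXPORTING bin_filesize = lv_size
            filename     = 'C:\temp\demo.xlsx'
            filetype     = 'BIN'
  CHANGING  data_tab     = lt_binary ).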

Enterprise Service Provider for SOAP Client


Objective


     This blog is about the implementation of provider proxies from enterprise services and consuming them through a PHP SOAP client. The communication via Web services is based on SOAP. Currently, SOAP is only supported by HTTP(S). SOAP requests are processed via the Internet Communication Framework (ICF). For this purpose, SAP NetWeaver AS ABAP uses HTTP in the ICF for the communication between consumer and provider. SAP NetWeaver AS ABAP can be used as a provider for Web services and as a consumer of Web services. The ABAP proxy runtime supports Web services for which an integration server is used as well as P2P connections via SOAP. In both cases, a consumer proxy is required to send the message to the receiver or a provider proxy that implements the desired function. By configuring an ABAP consumer, you can define whether the connection is a SOAP-based P2P connection or whether the message is supposed to be sent via the SAP NetWeaver XI protocol. In this example we'll configure the endpoint URL to the ES provider proxy to be used as a P2P connection, so that it is possible to consume this service through a PHP SOAP client. The figure below summarizes the landscape we described above:

P2P_XI.png

 

 

Configuration of Provider Proxies    

  

     A service definition itself is not a unit that can be called. To consume a Web service, you first have to create a runtime representation of the service definition, which is also referred to as a service endpoint. The service endpoint contains the configuration settings for the Web service definition and is located on the provider system at a specific location, the so-called service endpoint URL. The consuming application uses this URL to call the configured Web service.

To create service endpoints, you can use the SOA Management tool, which can be called via Transaction SOAMANAGER.

The service endpoints allow for the following configuration settings:

 

  • Provider Security:  You can implement the settings for the transport guarantee (e.g., without transport guarantee, HTTPS, signature and encryption, etc.) and for the authentication method (e.g., no authentication, HTTP authentication via the user ID/password, X.509 SSL client certificate, logon ticket, or message authentication with SAML 1.1).
  • Web Service Addressing:  Here you can select the protocol for the Web service addressing.
  • Messaging: You can define the protocol for reliable messaging and the duration of the confirmation interval (in seconds) here. The confirmation interval is the period within which the provider must confirm to the consumer the receipt of a message.
  • Transport settings: In addition to the determined access URL, you can specify an alternative access URL. If the service is not locally available (e.g., behind a firewall), you must specify the alternative path.
  • Message attachments:  You can define whether message attachments are supposed to be processed. In this case, several files can be sent with one message.
  • Operation specific: Here you can specify a SOAP action for an operation. The SOAP action is defined as a URI and transferred as an HTTP header if the Web service is called via HTTP.

     You can assign multiple service endpoints with different configuration settings to a Web service definition. This enables you to provide identical Web service definitions with different configuration settings to consumers. Services define groups of service endpoints. A service definition may include several services, which in turn may consist of several service endpoints. This relationship is shown in the figure below:

1.png

 

ES Development


     In this case, the enterprise service provider is going to be built based upon a remote function module. This example uses the SFLIGHT table data in order to be consumed by the PHP SOAP client. Create the FM as described in the figures below:


serv3.png

Create the Z_SFLIGHT FM with the parameters:

serv2.png
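The parameter details are only visible in the screenshots, so the following is just a plausible minimal version of the remote-enabled function module; the names CARRID and TSFLIGHT are inferred from the PHP call shown later in this blog:

FUNCTION z_sflight.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(CARRID) TYPE  S_CARR_ID
*"  TABLES
*"      TSFLIGHT STRUCTURE  SFLIGHT
*"----------------------------------------------------------------------

  DATA lv_carrid TYPE s_carr_id.
  lv_carrid = carrid.

* Return all flights of the requested airline
  SELECT * FROM sflight
    INTO TABLE tsflight
    WHERE carrid = lv_carrid.

ENDFUNCTION.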

Next, create a web service for the FM via the Utilities menu. The output of the ES provider will look like this:

 

serv1.png

serv4.png

 

     Activate your service definition in the ES and go to the SOAMANAGER transaction in order to create the endpoint URL. In SOAMANAGER go to service administration and search for the service you created.

 

soa.png

 

soa1.png

     You can generate a WSDL document for each service endpoint. In contrast to the port type WSDL, which doesn’t contain configuration information yet, this WSDL document already contains the binding information. The trick here is to change the settings from ws_policy to standard, so that the WSDL can be called.

soa2.png

Activate your endpoint URL definition

 

PHP Development

 

     After creating the WSDL endpoint URL in SOAMANAGER, the next step is to create the client to consume this service through the SOAP protocol. We'll be using PHP since there are plenty of SOAP function libraries already developed for that purpose. The SoapClient class will consume the endpoint URL and bind it to the object $client. The simplified code is as follows:

 

<?php
$url   = "http://pcbv:8000/sap/bc/srt/wsdl/srvc_0800279E7F2C1ED3B5F4C63E39130EA3/wsdl11/allinone/standard/document?sap-client=001";
$login = "xxxx";
$pass  = "xxxx";

// Call the client
$client = new SoapClient($url, array('login' => $login, 'password' => $pass, 'trace' => true, 'exceptions' => true));

try {
    $air    = 'AA';
    $flight = $client->ZSflight(array('Carrid' => $air));

    $value  = get_object_vars($flight);
    $arrayf = array_map('objectToArray', $value);
    ?><table border="1">
    <tr>
    <td><?php echo ('Carrid'); ?></td>
    <td><?php echo ('Connid'); ?></td>
    <td><?php echo ('Price'); ?></td>
    <td><?php echo ('Currency'); ?></td>
    <td><?php echo ('Planetype'); ?></td>
    <td><?php echo ('Seatsmax'); ?></td>
    <td><?php echo ('Seatsocc'); ?></td>
    <td><?php echo ('Paymentsum'); ?></td>
    </tr>
    <?php
    foreach ($arrayf['TSflight']['item'] as $arr) {
        //echo $arr['TSflight']['item']['0']['Fldate'];
        ?>
        <tr>
        <td><?php echo $arr['Carrid']; ?></td>
        <td><?php echo $arr['Connid']; ?></td>
        <td><?php echo $arr['Price']; ?></td>
        <td><?php echo $arr['Currency']; ?></td>
        <td><?php echo $arr['Planetype']; ?></td>
        <td><?php echo $arr['Seatsmax']; ?></td>
        <td><?php echo $arr['Seatsocc']; ?></td>
        <td><?php echo $arr['Paymentsum']; ?></td>
        </tr>
        <?php
    }
    ?></table><?php
}
catch (SoapFault $e) {
    echo 'Caught an Error: [' . $e->faultcode . '] - ' . $e->faultstring;
}

function objectToArray( $object )
{
    if( !is_object( $object ) && !is_array( $object ) )
    {
        return $object;
    }
    if( is_object( $object ) )
    {
        $object = get_object_vars( $object );
    }
    return array_map( 'objectToArray', $object );
}
?>

 

     Note that this code has the functionality of converting the object returned from the provider service to an array to be used in an HTML table display. For that purpose the function objectToArray does the conversion.

 

 

Results


     Here is the output from my eclipse IDE:

 

result.png

Internal Excel date format and date calculations in SAP


It is commonly known that Excel stores dates internally as numbers. So, when uploading XML files based on Excel or when using Excel tables without date formatting, you might get something like this:

 

Expected        Provided
2014/01/01      41640
2014/05/15      41774
2013/06/25      41450
1998/01/01      35796

 

So, with a little bit of code like this, we can reconstruct the requested date in SAP by adding the number of days to the base date used by Excel, which is January 1st, 1900:

 

REPORT  z_excel_date_to_sap.

DATA:
  gv_mydate TYPE sy-datum.

PARAMETERS:
  p_days TYPE i OBLIGATORY.

START-OF-SELECTION.
  gv_mydate = '19000101'.     "Base date used by Excel
  ADD p_days TO gv_mydate.
  WRITE: / 'Excel-Basedate 1900/01/01 +', p_days,
           'days gives', gv_mydate.

 

So let's try the first example:

 

blog01.jpg

 

What's that? We are two days wrong!

 

A little investigation, e.g. by entering 1900/01/01 in Excel, shows that the counting of days in Excel starts with 1. Therefore, day 1 is the first of January 1900, so when adding days we have to subtract one.

 

Ok, so change the code and let's hope that something magic will happen:

 

  gv_mydate = gv_mydate + p_days - 1. "Because 1900/01/01 is day 1.

 

blog02.jpg

 

Quite close, but still very wrong. The reason for this is explained by Chip Pearson in http://www.cpearson.com/excel/datetime.htm

 

The Excel day counting assumes that there was a February 29th, 1900 (which there was not!) in order to stay compatible with Lotus 1-2-3 formulas (where it was a bug) and worksheets.

 

So when using the internal date representation of Excel for date calculations, we have to subtract 2 in order to get a correct result:

 

  gv_mydate = gv_mydate + p_days - 2. "Because 1900/01/01 is day 1 and Excel
                                      "counts a non-existing 1900/02/29.

 

Now we get the correct result for our examples, here shown  for the first one:

 

blog03.jpg
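For reuse, the conversion in both directions can be written as a small sketch (variable names are illustrative):

DATA: gv_base       TYPE sy-datum VALUE '19000101',
      gv_excel_days TYPE i,
      gv_sap_date   TYPE sy-datum.

* Excel serial number -> SAP date
gv_excel_days = 41640.
gv_sap_date   = gv_base + gv_excel_days - 2.  "day 1 = 1900/01/01 plus the fictitious 1900/02/29
WRITE: / gv_sap_date.                         "2014/01/01

* SAP date -> Excel serial number
gv_excel_days = gv_sap_date - gv_base + 2.
WRITE: / gv_excel_days.                       "41640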

A complete guide to OpenSQL statements - Step-by-step tutorial with screenshots


In my first blog post on SCN, I’d like to give an overview of the OpenSQL query syntax. Starting from the simplest SELECT statement I’d like to demonstrate all language elements of OpenSQL, gradually building up a very complex database query. For each step, I’ll provide a business example and a screenshot so you can see the result of each SELECT query. I have used the Flight demo database available in every SAP system, so you can test the queries for yourself.


Note: in this post I will not discuss performance-related topics in detail. That would make this post way too long. Maybe in a future post.


So let’s begin!


Example 1: the simplest SELECT statement


There are three mandatory parts of a SELECT statement, basically defining what you want to read from which table and where to put the result:


1, After the SELECT keyword, we must specify the fields of the so called “result set” (“resulting set” in the SAP help). Here we define which fields we want to see in the result of the selection. In Example 1, we entered “*”, a special character that means that all fields from the defined table(s) will be returned. This is a handy feature, but keep in mind that if you select all fields from a table that consists of 250 columns and you use only five of them, you waste a lot of CPU, memory and network resources. In this case it is better to list the five fields explicitly (especially if you select a lot of records).


2, the FROM clause defines which table(s) to read the data from. Here you must specify at least one  table or view that exists in the Data Dictionary. In the first example we will only access one table, and the later examples will use more tables.


3, The INTO clause defines where to put the results, which is usually called a "work area". Here you must define the data object that will hold the result set using one of the options below:


- INTO x, where x is a structure.

- INTO TABLE y, where y is an internal table.

- APPENDING TABLE z, where z is an internal table. In this case the result set is appended to the internal table (without clearing the existing contents).

- INTO (a,b,c, ... ), where a, b, c etc. are variables with elementary types


There are several instructions in the SAP help regarding the prerequisites of work areas (data type etc.) and assignment rules (automatic conversions etc.) which I don't want to copy and paste here. I'd anyway recommend either using a structure identical to the field selection, or using the "CORRESPONDING FIELDS OF" addition. This will assign the selected fields to the fields of the work area based on the field name (not from left to right).


For all constructs except INTO TABLE, if the result set is empty, the target remains unchanged.


Hint: the INTO clause can be omitted in a special case, when you use a group function to count the number of lines that match the WHERE clause. In this case system variable SY-DBCNT will hold the number of records found. See Example 5.


Note: The INTO clause is missing from the screenshots because the tool I’ve used automatically generates it. Anyway, all the examples would use the INTO TABLE variant to fetch all matching records at once.


Business requirement: we want to select all data for 100 bookings from the booking table. Really simple, right?


How to achieve this? Simply define "*" after the SELECT keyword, specify table SBOOK in the FROM clause and limit the number of records fetched using the “UP TO N ROWS” clause. This can be used to define the maximum number of lines we want the query to return. This is typically used for existence checks (UP TO 1 ROWS), but in this example we have limited the query to return maximum 100 records. Unless you specify the ORDER BY clause, this will return 100 arbitrary records, so you cannot know which 100 it will return.


1_simple_select.jpg

Screenshot 1: Simply select 100 bookings from table SBOOK
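The same statement as plain ABAP code (a sketch; the screenshots come from a query tool that generates the INTO clause automatically):

DATA lt_bookings TYPE STANDARD TABLE OF sbook.

SELECT * FROM sbook
  INTO TABLE lt_bookings
  UP TO 100 ROWS.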


Example 2: Adding a WHERE clause and a table join


The WHERE clause


Usually SELECT statements do not read all the contents of a table (typical exceptions are small customizing tables holding a few records), but return only specific entries. The WHERE clause is used to filter the result set, or in other words tell the database which records to retrieve. Here you define a logical expression that the database will evaluate for each row in the database.


Business requirement: return only bookings of Lufthansa (field CARRID must contain “LH” ) and cancelled bookings are not needed (CANCELLED must be equal to space).


The database will evaluate this condition for each record in the table and if the condition is true, it will be placed into the result set. You can write much more complex logical conditions as we will see in the following examples. Any number of logical expressions can be linked to a logical expression using keywords AND or OR and the result of a logical expression can be negated using keyword NOT. Keep the order of evaluation in mind (ensure the proper use of parentheses). Simple operators used to compare field values are EQ, NE, GT, LT, GE and LE (equivalent to =, <>, >, <, >=, <=).


Table joins


The second interesting addition in this example is the table join. Often data required for a specific business process is stored in several tables. Although it could be an option to select data from each table using separate SELECT commands and combine the results using ABAP code executed on the application server, many times it is more convenient and performant to use only one SELECT statement to read all the tables at once.


Business requirement: read customer data from table SCUSTOM for the customer of each booking.


This is achieved using a so called “table join”: a construct that instructs the database to access a further table based on a condition. This so called “join condition” (or “join expression”) represents the logical link between the two tables. In this example it is the customer number: for each booking in table SBOOK the database will search for the customer data in table SCUSTOM based on the customer number.


Because we use “*” to define the fields of the result set and we read from two tables in this example, the result set contains all the fields from both tables.  If there is a field with the same name in both tables, only one will be returned (the one from the last table in the FROM clause - each join overwrites the field contents).


Note: the tool I’ve used for the screenshots automatically generates a separate field in this case. This is the reason why you can see duplicate fields.


The syntax of a join condition is almost the same like in a WHERE clause with some notable differences which I don’t want to copy and paste here from the SAP Help (can not use subqueries, must use AND to link logical expressions etc).


There are two kinds of table joins in OpenSQL: inner joins and outer joins. We will discuss the difference in the next example.


Hint: SELECT statements using table joins bypass SAP buffering.


2_inner_join_and_where_clause.jpg

Screenshot 2: adding a WHERE clause and a table join
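A sketch of the same statement in plain ABAP (a few explicit fields are used here instead of "*" to keep it short):

TYPES: BEGIN OF ty_booking_cust,
         bookid   TYPE sbook-bookid,
         customid TYPE sbook-customid,
         name     TYPE scustom-name,
       END OF ty_booking_cust.
DATA lt_result TYPE STANDARD TABLE OF ty_booking_cust.

SELECT sbook~bookid sbook~customid scustom~name
  FROM sbook
  INNER JOIN scustom ON scustom~id = sbook~customid
  INTO TABLE lt_result
  UP TO 100 ROWS
  WHERE sbook~carrid    = 'LH'
    AND sbook~cancelled = space.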


Example 3: Adding data from another two tables


Fortunately OpenSQL allows you to join multiple tables together at once: a maximum of 25 tables can be joined in a SELECT statement. Here in this example we add data from two more tables: T005T, which holds country data, and GEOT005S, which contains geographic information of regions.


Business requirement: Display the country name instead of country code and display the latitude and longitude of the region of the customer.


One special thing in this example is the join condition of table T005T. This table is language-dependent, so a language key is needed to get the textual description of a country in a specific language. Remember: the join condition tells the database how to get a record from table B based on a record from table A (A being the left hand side and B being the right hand side table). This is special now because we use the logon language of the current user from field SY-LANGU. All fields of the SY structure can be used in join conditions (current date, time, system ID etc.) as well as WHERE and HAVING clauses.


What would happen if we would omit the language key specification? The database would return multiple rows with the same booking, customer and region information. Why? Simply because there are more than one entries in table T005T for the same country. Let’s say we have two entries for country ‘DE’: it is ‘Deutschland’ in German and it is ‘Germany’ in English. In case of a German customer, the database engine would evaluate the join condition (T005T-LAND1 = SCUSTOM-COUNTRY) for both records, and both would be true, so two rows would be returned: one with English text and one with German text.


3_another_inner_join_geocode.jpg

Screenshot 3: multiple table joins. Notice that all lines contain a latitude and a longitude.
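A sketch of the statement with the additional language-dependent join (again only a few fields are listed; the coordinates from GEOT005S are left out here):

TYPES: BEGIN OF ty_booking_country,
         bookid TYPE sbook-bookid,
         name   TYPE scustom-name,
         landx  TYPE t005t-landx,
       END OF ty_booking_country.
DATA lt_result TYPE STANDARD TABLE OF ty_booking_country.

SELECT sbook~bookid scustom~name t005t~landx
  FROM sbook
  INNER JOIN scustom ON scustom~id  = sbook~customid
  INNER JOIN t005t   ON t005t~land1 = scustom~country
                    AND t005t~spras = sy-langu
  INTO TABLE lt_result
  UP TO 100 ROWS
  WHERE sbook~carrid    = 'LH'
    AND sbook~cancelled = space.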


Example 4:  Another kind of table join - Left outer join


Here comes the difference between the two kinds of table joins in OpenSQL: the inner join and the outer join (for ex. “SELECT * FROM A INNER JOIN B” / “SELECT * FROM A LEFT OUTER JOIN B”).


Basically the difference is the behavior in case there is no corresponding entry in table B for a record in table A. In case of an inner join, there would be no record placed into the result set. In case of a left outer join, there would be a record in the result set, but all fields coming from table B would be empty.


This behavior is very easy to see in this example: we added geographic coordinates to our query in the previous example using an inner join. Table GEOT005S contains coordinates for country regions. Whenever a record was not found in table GEOT005S for the region of a customer, the whole line was dropped from the result set. This is why you can only see customers with non-empty latitude and longitude.


In the current example we add the latitude and longitude of the region of the customer using a left outer join. As you can see in the screenshot, there is only one customer for whom coordinates are returned. For all other records, the DB did not find a suitable record in GEOT005S, so the coordinates are empty. If GEOT005S were accessed using an INNER JOIN, these records would be excluded from the result set.


4_left_outer_join_error.jpg

Screenshot 4: table join using a left outer join. Notice that now customers with empty latitude and longitude appear on the list.


Note: if you have carefully checked the screenshot, you could notice that there is a customer with a region defined (‘IL’) but there are no coordinates displayed. The reason could be that there is no entry in GEOT005S for country ‘US’ region ‘IL’, but it is not the case. Somehow the standard flight demo database contains incorrect entries having an unnecessary space. This is the reason why the DB does not find the matching record, since ‘IL’ <> ‘ IL’.


5_error_reason.jpg

Screenshot 4 b: incorrect entries for region ‘IL’ in table GEOT005S


I corrected these entries using this simple update statement:


6_error_correction.jpg

After correcting the entries in table GEOT005S, the query fills the coordinates for all customers in region ‘IL’:


7_left_outer_join_corrected.jpg

Screenshot 4 c:  table join using a left outer join after correcting the problematic database entries. Notice that now every customer which has a region defined has the coordinates filled.
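A minimal sketch of the LEFT OUTER JOIN behaviour (not the exact query from the screenshots): customers are listed with their bookings, and customers without any booking still appear, with the booking fields left initial.

TYPES: BEGIN OF ty_cust_booking,
         id     TYPE scustom-id,
         name   TYPE scustom-name,
         bookid TYPE sbook-bookid,
       END OF ty_cust_booking.
DATA lt_cust_bookings TYPE STANDARD TABLE OF ty_cust_booking.

SELECT scustom~id scustom~name sbook~bookid
  FROM scustom
  LEFT OUTER JOIN sbook ON sbook~customid = scustom~id
  INTO TABLE lt_cust_bookings
  UP TO 100 ROWS.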


Example 5: adding a simple group function


Many times the business is not interested in all individual data records, but want to see an aggregation based on a specific field. For example, the sum of all sales per salesperson, costs per project etc. In this case we have two options: either retrieve all relevant records from the database and perform the calculations on the application server using our own ABAP code, or perform the calculation using the database engine.


Business requirement: we want to see the number of bookings that match our selection criteria. (bookings of Lufthansa that are not cancelled).


In order to achieve this using the database, we define the aggregate function (also called a group function) COUNT in the SELECT clause. A GROUP BY clause is only needed once we want the result broken down by specific fields, as we will see in the next example.


The GROUP BY clause combines groups of rows in the result set into a single row. In this very simple case, we want to see the total number of bookings, not broken down by any other field.


The COUNT( * ) function determines the number of rows in the result set or in the current group. Right now we don’t have any groups defined so in our case it returns the total number of bookings.


8_simple_count_function.jpg

Screenshot 5: a simple group function counting the number of bookings that match the selection criteria.


Note: The COUNT function can be also used to determine the number of  different values of a specific field in the result set. In this case, simply put the field name in the parentheses and add the DISTINCT keyword. For ex. to count how many countries are the customers from, use “COUNT( DISTINCT country )” in the SELECT clause.


It is important to note that the group functions are always performed after the evaluation of the WHERE clause. The database engine first reads the records matching the WHERE clause, then forms groups (see next example) and then performs the group function.


Using the GROUP BY clause and aggregate functions ensures that aggregates and groups are assembled by the database system, not the application server. This can considerably reduce the volume of data that has to be transferred from the database to the application server. Of course on the other side, this needs more resources from the database.


Hint: with the use of GROUP BY, the SELECT statement bypasses SAP buffering.
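A sketch of the counting statement in plain ABAP; as mentioned in the hint further above, the INTO clause could also be omitted and SY-DBCNT read afterwards:

DATA lv_count TYPE i.

SELECT COUNT( * ) FROM sbook
  INTO lv_count
  WHERE carrid    = 'LH'
    AND cancelled = space.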


Example 6: defining groups for the group functions


We’ve learned that group functions perform certain calculations on groups of database records. If we do not explicitly specify a group, then all the records in the result set are considered as one big group, as in the previous example.


Business requirement: display the number of (non-cancelled Lufthansa) bookings per customer.


In this case we form groups of database records based on the customer number by listing field ID in the group by clause. Also, we have to enter the ID field of table SCUSTOM in the SELECT clause before the group function.


9_group_by_clause_1.jpg

Screenshot 6: Grouping rows by customer ID. The COUNT group function is performed on every group to count the number of bookings for each customer.


As you can see, now the database engine returns as many records as many customer IDs we have in the result set, and the number of relevant bookings next to them. Exactly what we wanted.


Note: in all of our previous examples, we’ve used the “*” sign in the SELECT clause (field list). However, here we have to explicitly define the fields needed to create groups of records. It is mandatory to add the table name and the ~ sign before the name of a database field, if more than one table has a field with the same name.
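A sketch of the grouped statement (explicit fields are mandatory here, since the grouping fields must be listed):

TYPES: BEGIN OF ty_cust_count,
         id       TYPE scustom-id,
         bookings TYPE i,
       END OF ty_cust_count.
DATA lt_counts TYPE STANDARD TABLE OF ty_cust_count.

SELECT scustom~id COUNT( * )
  FROM sbook
  INNER JOIN scustom ON scustom~id = sbook~customid
  INTO TABLE lt_counts
  WHERE sbook~carrid    = 'LH'
    AND sbook~cancelled = space
  GROUP BY scustom~id.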


Example 7: Adding extra information


Business requirement: display additional fields in the result list (name, city, country, region code, coordinates).


How to do it? Simply add them to the SELECT clause after the customer number before the COUNT function. Keep in mind to add the field to the GROUP BY clause too, otherwise you will encounter a syntax error. The fields you use to form groups must be in the SELECT clause, and nothing else should be in the SELECT clause that is not in the GROUP BY clause and is not a group function.


12_group_by_clause_4.jpg

Screenshot 6: additional fields are displayed in the list.


Note: you might wonder: in the previous example I told you that adding fields before the group function is how we define groups, but here the number of bookings did not change. The reason is that we have added fields from tables that have a 1:1 relation to the customer. A customer has only one name, a city is in one country and region and has one pair of coordinates. If we had chosen to add the Flight Class field (First class / Business / Economy), then the result set could contain more than one line per customer: as many lines per customer as the number of flight classes he/she has booked. We will see how this works in example 15.


Example 8: Defining the order of records in the result set


You can use the ORDER BY clause in a query to define the sort order of the records returned. Simply list all the fields that you want to use as a sort criteria. You can use keywords ASCENDING and DESCENDING for each field to specify the sort mode (ASCENDING is the default so it can be omitted).


Business requirement: display the customers with the most bookings.


What do we do now? We sort the list by the result of the COUNT function in descending order and then by the name of the customer.

 

13_order_by_clause.jpg

Screenshot 7: the result set is sorted using the ORDER BY clause.


As you can see, there are several customers who have more than ten bookings (non-cancelled, Lufthansa).


Note: If all key fields are in the field list and a single database table is specified after FROM (not a view or join expression), the addition PRIMARY KEY can be used to sort the result set in ascending order based on the primary key of the table.
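A sketch of the sorted statement (not a copy of the tool-generated query); the aggregate gets an alias so it can be referenced in the ORDER BY clause:

TYPES: BEGIN OF ty_top_cust,
         id       TYPE scustom-id,
         name     TYPE scustom-name,
         bookings TYPE i,
       END OF ty_top_cust.
DATA lt_top TYPE STANDARD TABLE OF ty_top_cust.

SELECT scustom~id scustom~name COUNT( * ) AS bookings
  FROM sbook
  INNER JOIN scustom ON scustom~id = sbook~customid
  INTO CORRESPONDING FIELDS OF TABLE lt_top
  WHERE sbook~carrid    = 'LH'
    AND sbook~cancelled = space
  GROUP BY scustom~id scustom~name
  ORDER BY bookings DESCENDING scustom~name ASCENDING.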


Example 9: Filtering based on a group function


Business requirement: the boss is only interested in customers having more than ten non-cancelled Lufthansa bookings.


How do we do this? I guess the first idea would be to add a new condition to the WHERE clause to filter records where the COUNT function is higher than ten. However, this will not work because of how OpenSQL (and SQL in general) is defined.


The reason is that the WHERE clause filters the database records before the groups are created by the database engine. After the groups are created, the group functions are calculated and the result set is created. The WHERE clause cannot be used to filter based on the group function results.


In cases like these, the HAVING clause must be used. This is similar to the WHERE clause, but the difference is that it is evaluated after the groups are created and the group functions are performed. To put it simply: to filter based on the result of group functions, the HAVING clause must be used (also called a group condition).



14_having_clause.jpg

Screenshot 8: using the HAVING clause to filter the result set based on group functions.


As you can see on the screenshot, now only 23 records returned, although we have allowed to have 100 records by using the UP TO N ROWS addition. So this means that there are 23 customers having more than ten non-cancelled Lufthansa bookings.


Note: If you don’t specify any groups using the GROUP BY clause, the HAVING clause will consider the whole result set grouped into one line. For a quick example assume we have 10 records in table SCARR. The query “SELECT COUNT( * ) FROM scarr HAVING count( * ) GT 0” will return one single line with the number of records in the table, but the query “SELECT COUNT( * ) FROM scarr HAVING count( * ) GT 10” will not return any lines (empty result set).
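The corresponding sketch with the HAVING clause added (types and the target table as in the Example 8 sketch above):

SELECT scustom~id scustom~name COUNT( * ) AS bookings
  FROM sbook
  INNER JOIN scustom ON scustom~id = sbook~customid
  INTO CORRESPONDING FIELDS OF TABLE lt_top
  WHERE sbook~carrid    = 'LH'
    AND sbook~cancelled = space
  GROUP BY scustom~id scustom~name
  HAVING COUNT( * ) > 10
  ORDER BY bookings DESCENDING.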


Example 10: using subqueries


The possibility to combine multiple SELECT statements into one is a very handy feature in OpenSQL. This is mostly used when you don’t know for exactly the filter criteria during design time or when you have to use a comparison to a dynamically calculated value.


Business requirement: the list must include only customers who have never cancelled a flight (any airline, not only Lufthansa). At first glance, you could argue that this is already done, since we have “cancelled EQ space” in our WHERE clause. This is not correct, because this only influences our group function so that only non-cancelled bookings are counted. This means that if a customer has 20 bookings with one cancelled, he/she will be on our list with 19 bookings. According to our requirement, we don’t want this customer to be on our list, so how do we achieve that?


An easy way to solve this is to add a so called subquery to our WHERE clause which checks if there is a cancelled booking for the customer.


Subqueries in general


Basically a subquery is a SELECT statement in parentheses used in a logical expression. There is no need to have an INTO clause because the result of the subquery will be processed by the database engine.


How are the results of subqueries evaluated? This depends on whether the subquery is correlated or not. If a subquery uses fields from the surrounding SELECT statement in its WHERE condition, it is called a correlated subquery. In this case, the result of the subquery will be evaluated for each line in the result set of the surrounding SELECT statement (in whose WHERE clause the subquery is placed). This implies that the subquery is executed for each record in the result set of the surrounding SELECT statement. On the other hand, a subquery without any reference to the surrounding SELECT statement is executed only once.


There are different operators which one can use with subqueries, we will use the “EXISTS” operator (negated) in this example. I’ll discuss the others in Example 13.


Subqueries can be nested which means that you can put a subquery in the WHERE clause of a subquery. We will see an example of this later in Example 13.


Now take the current example: we need to check if a customer has a cancelled booking, so we have to create a relation between the customer ID in the result set of our outer SELECT statement and the subquery (so we use a correlated subquery). This is done by adding a condition to the WHERE clause of the subquery to match the customer IDs.


15_subquery.jpg
Screenshot 9: using a subquery. As you can see, now only 20 records are returned, so three customers had a cancelled flight on our previous list.


Table aliases


Notice that here we must use so called “table aliases” because we select from the same table in both the surrounding query and the subquery. This means that we must somehow explicitly define the table name for the fields to be compared, otherwise the DB engine would not know which table field we refer to. This is done with the use of table aliases. Basically you can give a name to your tables and refer to them using the alias. Here I’ve defined “sq” as an alias for the table in the subquery. You have to use the ~ character between the table alias and the field name (and of course between the table name and field name as in the previous examples).


Notes:

  • Subqueries cannot be used when accessing pool tables or cluster tables.
  • the ORDER BY clause cannot be used in a subquery.
  • If a subquery is used, the Open SQL statement bypasses SAP buffering.


Note: subqueries can be used in the HAVING clause too as seen in example 14.


Hint: we could have solved the requirement without a correlated subquery. In this case, the subquery would select all the customers who had a cancelled booking, and the surrounding SELECT statement would check every customer if it is in the result set of the subquery:


Simplified example:


SELECT ... WHERE customid NOT IN ( SELECT customid FROM sbook WHERE cancelled EQ 'X' ).
is equal to
SELECT ... WHERE NOT EXISTS ( SELECT bookid FROM sbook AS sq WHERE sq~customid EQ sbook~customid AND cancelled EQ 'X' ).

Example 11: Special operators


In all the previous examples we have only used the ‘EQ’ (same as ‘=’) operator and ‘GT’ (>) to compare field values. However, there are more complex ones too.


BETWEEN


This operator is very simple, it is basically “<=” and “>=” together.


Business requirement: exclude only those customers from our list who have a cancelled booking in the first quarter of 2005.


The syntax for this is “field BETWEEN value1 AND value2”.


Hint: from the business perspective we expect more customers in the result set, since we exclude fewer customers due to a more restrictive subquery.


16_between_operator.jpg

Screenshot 10: using the BETWEEN operator. As you can see, there are 21 customers on our list, so there is one customer who appears again (had cancellation before or after Q1 of 2005).


LIKE


This expression is a very flexible tool for character string comparisons based on a pattern. The pattern can be defined using wildcard characters: "%" represents any character string (even an empty one) and "_" represents any single character. The LIKE expression is case sensitive, and blank characters at the end of the pattern are ignored (LIKE ‘__’ is equal to LIKE ‘__   ‘).


Business requirement (quite strange…): restrict our list further to only contain customers with a name starting with “A”.


We add “name LIKE ‘A%’” to the WHERE clause to achieve this.


17_like_operator.jpg

Screenshot 11: using the LIKE operator to filter the list based on the name of the customer.


Note: What to do if we want to search for records that contain ‘_’ in a specific field? Since this is a reserved symbol, we have to use the addition ESCAPE, which allows an escape character to be defined. This escape character cancels the special function of a wildcard character (simply place the escape character before the wildcard character to be cancelled).


A quick example: select all material numbers which contain an underscore:


Wrong approach (returns every material number):
SELECT matnr FROM mara WHERE matnr LIKE '%_%'


Good approach:
SELECT matnr FROM mara WHERE matnr LIKE '%~_%' ESCAPE '~'


Hint: It is not possible to specify a table field as a pattern.


Business requirement (even more strange): now we want to only see customers with a name starting with ‘A’ and having ‘d’ as the third letter.


18_like_operator_2.jpg

Screenshot 12: using the LIKE operator with both special characters “%” and “_”. As you can see, we still have three customers who match all our selection criteria.


IN


This operator allows you to compare a field to a set of fixed values. This comes handy as it can replace a longer and much less readable expression:


“field IN (‘01’, ‘02’, ‘03’)” is equal to the much longer “field EQ ‘01’ OR field EQ ‘02’ OR field EQ ‘03’”


Business requirement: extend our selection to customers of American Airlines and United Airlines too.
19_in_operator.jpg

Screenshot 13: using the IN operator to count bookings of other airlines too. As expected, we now have much more customers who match the less restrictive selection criteria.


Note: the IN operator can be used with a selection table too, as seen in Example 17.
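A small sketch combining the three operators of this example in one WHERE clause (the concrete values are illustrative):

DATA lt_bookings TYPE STANDARD TABLE OF sbook.

SELECT * FROM sbook
  INTO TABLE lt_bookings
  WHERE carrid IN ('LH', 'AA', 'UA')
    AND fldate BETWEEN '20050101' AND '20050331'
    AND class  LIKE 'C%'.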


Example 12: Other group functions


So far we have only used the COUNT group function to count the number of bookings that match our selection criteria. There are a total of five group functions available in OpenSQL. The remaining four that we haven’t seen yet are all mathematical calculations: SUM, MIN, MAX and AVG that calculate the total, minimum, maximum and average of a field respectively.


There are some restrictions related to the use of group functions:

  • If the addition FOR ALL ENTRIES is used in front of WHERE, or if cluster or pool tables are listed after FROM, no other aggregate expressions apart from COUNT( * ) can be used.
  • Columns of the type STRING or RAWSTRING cannot be used with aggregate functions.
  • Null values are not included in the calculation for the aggregate functions. The result is a null value only if all the rows in the column in question contain the null value.


Business requirement: the boss wants to see the total, average, minimum and maximum of the booking price for each customer (in the currency of the airline).


20_other_group_functions.jpg

Screenshot 14: using all group functions (MIN, MAX, SUM, AVG, COUNT).


Note: just like with the COUNT function, the DISTINCT keyword can be used to perform the group function only on distinct values (so the result of SUM( DISTINCT A ) for two records having value 10 in a field A would be 10).


Note: the data type for AVG and SUM must be numerical. The data type of MIN, MAX and SUM is the data type of the corresponding table field in the ABAP Dictionary. Aggregate expressions with the function AVG have the data type FLTP, and those with COUNT have the data type INT4.


Hint: the tool I’ve used for demonstration replaces the field types of these aggregate functions for a more user friendly display (instead of the exponential notation).
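A sketch of the statement with all group functions; AVG is placed into a FLTP field as described above:

TYPES: BEGIN OF ty_price_stats,
         customid TYPE sbook-customid,
         total    TYPE sbook-loccuram,
         minimum  TYPE sbook-loccuram,
         maximum  TYPE sbook-loccuram,
         average  TYPE f,
       END OF ty_price_stats.
DATA lt_stats TYPE STANDARD TABLE OF ty_price_stats.

SELECT customid
       SUM( loccuram ) AS total
       MIN( loccuram ) AS minimum
       MAX( loccuram ) AS maximum
       AVG( loccuram ) AS average
  FROM sbook
  INTO CORRESPONDING FIELDS OF TABLE lt_stats
  WHERE carrid = 'LH' AND cancelled = space
  GROUP BY customid.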



Example 13: Nesting subqueries


Subqueries can be nested which means that you can put a subquery in the WHERE clause of a subquery. A maximum of ten SELECT statements are allowed within one OpenSQL query (a SELECT statement may have maximum nine subqueries).


Business requirement: exclude only customers who have cancelled bookings in Q1 of 2005 and the language of the travel agency is English, where the cancellation was made. Pretty awkward, but I had to figure out something


We can implement this logic using a nested subquery: we add this criteria to the WHERE clause of the outer subquery (select all agency numbers where the language is English). This is different from our previous subquery, because of the keyword we use for evaluating it.


Logical expressions for subqueries


- EXISTS: this is what we have used in our first subquery. This returns TRUE if the subquery returns any records (one or more) in its result set, otherwise this returns FALSE.


- EQ, GT, GE, LT, LE: these operators can be used to compare a field with the result of the subquery. If the subquery returns more than one row, obviously the database engine will not know which one to use for comparison: a non-catchable exception will occur.


- In order to use subqueries that return multiple rows, you either have to use the IN operator (checks if the field is equal to any of the values returned by the subquery) or one of the ALL, ANY, and SOME keywords together with EQ, GT, GE, LT or LE. These will influence the comparison in a pretty self-explaining way: the comparison will be carried out with all the records returned by the subquery, and the comparison will return TRUE if all (ALL) or at least one (ANY, SOME) records return TRUE for the comparison. There is no difference between keywords SOME and ANY.


What result do we expect? Since our outer subquery is used to filter out customers with cancelled bookings, a less restrictive subquery (achieved by the nested subquery) would mean more customers on our list. Practically: the less agencies we include in our search for cancellations, the more customers we get on the list.


21_nesting_subqueries.jpg

Screenshot 15: nesting subqueries. As you can see, we actually have one more customer on our list, who cancelled his booking at an agency where the language is not English.


Example 14: HAVING and GROUP BY in a subquery


Business requirement: only exclude customers who have at least three cancellations (Lufthansa flight in Q1 of 2005 at an English speaking agency).


Since we have to count the number of these bookings, we have to use group function COUNT and group the bookings by the customer ID. This way we get the number of matching bookings per customer. Then we simply add the HAVING clause to make sure we only exclude customers having more than two cancellations from our main query.


We can expect to have more customers in our result set, since we have a more restrictive subquery that we use to filter out customers.


 

22_having_gb_in_subquery.jpg

Screenshot 16: using the GROUP BY and HAVING clauses in a subquery. As you can see, we have two more customers on our list (who have one or two matching cancelled bookings).


Example 15: Let’s extend the GROUP BY clause


Business requirement: include only customers who have more than 10 bookings for the same airline. It doesn’t matter which, but it should be more than ten.


So far we have counted all the bookings of the customers who satisfy all our criteria (for example having more than 10 bookings of any airline). This could be like someone having 5 bookings for Lufthansa, 5 for American Airlines and 2 for United Airlines (the total being higher than 10). Now we want to see something like 11 for Lufthansa.


It is very simple to solve this by adding the airline code (CARRID) to the field list of our main query. Remember, the database engine will create groups of records based on all fields listed before the first group function in the field list (SELECT clause). If we add the airline code here, groups will be made per airline for each customer and the COUNT function will count the number of bookings per customer and airline.


What changes do we expect in our result set? There should be much less customers on our list, because they must have more than ten bookings for the same airline.


23_add_carrid_to_group_by.jpg

Screenshot 17: adding the carrier ID to the GROUP BY clause (and the SELECT clause as well).


The result shows exactly this: we only have three (very loyal) customers who match our selection criteria. Notice that the highest number of bookings is now only 12, while in the previous example it was 19.


Example 16: Going back to the LEFT OUTER JOIN


In order to have some coordinates displayed in the result list, we make two changes:


- change the WHERE condition of the main query: instead of checking the name of the customer and the airline code, we select customers from the US. This way we will have much more customers on our list (100 which is limited by the UP TO N ROWS addition) and since they are from the US, we will see some region codes for the customers (coordinates are maintained for the US regions).


- remove the HAVING clause to include customers with less than 11 matching bookings.


24_remove_having_clause.jpg

Screenshot 18: first change: removing the check to have at least ten bookings.


25_change_where_clause.jpg

Screenshot 19: second change: select customers from the US.


Now you can see again the behaviour of the LEFT OUTER JOIN: coordinates are filled for all records, where the region code is filled and coordinates are found for the region in table GEOT005S.


Example 17: Using a selection table in the WHERE clause


Selection tables are used to define complex selections on a field. They are mostly used together with selection screens (using the SELECT-OPTIONS statement). Any selection the user makes on the user interface will be converted to a complex logical expression using the operators that we have worked with in this tutorial (EQ, GT, LE, GE, LT, NE, BETWEEN, NOT etc.). This conversion is made by the OpenSQL engine automatically.


In order to compare the values of a table field with a selection table, you have to use the “IN” operator.


Business requirement: only count bookings of Business Class and Economy Class (‘C’ and ‘Y’) in a complex time range.


26_use_range_tables.jpg

Screenshot 20: performing complex selections using selection tables. Notice the “IN” keyword used as a comparison operator.


Note: the tool used for this demonstration offers a UI to define the selection tables as on selection screens. Also, the generated WHERE clause is visible on the right side, next to the selection tables R_SBOOK_FLDATE and R_SBOOK_CLASS.
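A sketch of the same idea with a locally filled selection table (on a real selection screen, a SELECT-OPTIONS statement would fill it):

DATA: lt_bookings TYPE STANDARD TABLE OF sbook,
      r_class     TYPE RANGE OF sbook-class,
      wa_class    LIKE LINE OF r_class.

wa_class-sign   = 'I'.
wa_class-option = 'EQ'.
wa_class-low    = 'C'.
APPEND wa_class TO r_class.
wa_class-low    = 'Y'.
APPEND wa_class TO r_class.

SELECT * FROM sbook
  INTO TABLE lt_bookings
  WHERE carrid = 'LH'
    AND class  IN r_class.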


Example 18: The “FOR ALL ENTRIES IN” construct


This construct is widely used in ABAP and is similar to a table join in a way that it is used to read data from a table for records we already have (typically selected from another table or returned by a Function Module). However, there are big differences between the two constructs.


The “FOR ALL ENTRIES IN internal table” construct allows you to use an internal table as the basis of your selection, but not like a selection table from Example 17. If you use this addition, you can (actually, must) refer to the fields of the internal table in the WHERE clause to perform comparisons with the fields of the database table(s) being read. Naturally the fields used in the comparison must have compatible data types.


As this construct is ABAP specific, there is a mechanism that translates the OpenSQL command to one or more native SQL statements. Actually the WHERE clause(es) that will be passed to the database engine will be generated based on the contents of the internal table and the WHERE clause you define. There are several profile parameters that influence this conversion, which you can check in SAP Note 48230 - Parameterization for SELECT ... FOR ALL ENTRIES statement.


The main difference between table joins and this construct is that table joins are carried out by the database server and all data is passed to the application server at once. On the other hand, in the case of the FOR ALL ENTRIES IN construct, the entire WHERE clause is evaluated for each individual row of the internal table. The result set of the SELECT statement is the union of the result sets produced by the individual evaluations. It is very important to note that duplicate records are automatically removed from the result set (but on the application server, not on the database server).


Syntactically, note that you have to use the "-" sign instead of the "~" sign between the internal table name and its field name in the WHERE clause.


Very important note: if the referenced internal table is empty, the entire WHERE clause is ignored and all lines of the database table are placed in the result set. Always check that the internal table is not empty before executing a SELECT query using this construct.


Business requirement: read airline information for all airlines that appear on our list.


How to implement this? We already have a SELECT statement from the previous example, so we create a second SELECT statement using the FOR ALL ENTRIES IN construct. Simply add the carrier ID as the link in the WHERE clause (similar to the join condition of a table join) and that's it.


27_for_all_entries.jpg

Screenshot 21: using the “FOR ALL ENTRIES IN internal table” construct.


Note: the tool I’ve used for demonstration uses “outer_table” as the name of the internal table. The contents of it are coming from the select query of example 17.
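As a rough sketch only (the internal table, type and variable names are assumed; SCARR is the airline table of the flight data model), the second SELECT could look like this in plain ABAP, including the emptiness check mentioned above:

TYPES: BEGIN OF ty_booking,
         carrid TYPE s_carr_id,
       END OF ty_booking.
DATA: lt_bookings TYPE STANDARD TABLE OF ty_booking,
      lt_carriers TYPE STANDARD TABLE OF scarr.

* ... lt_bookings is assumed to hold the result of the previous query ...

* The WHERE clause is only evaluated if the driver table has rows;
* with an empty table, ALL rows of SCARR would end up in the result.
IF lt_bookings IS NOT INITIAL.
  SELECT * FROM scarr
    INTO TABLE lt_carriers
    FOR ALL ENTRIES IN lt_bookings
    WHERE carrid = lt_bookings-carrid.
ENDIF.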


Note: As of release 6.10, the same internal table can be specified after FOR ALL ENTRIES and after INTO. However, be careful because in this case all fields of the internal table that are not filled by the SELECT query will be cleared.


Note: performance-wise, there are endless discussions on SCN about whether a table join or the FOR ALL ENTRIES IN construct is better. It really depends on the buffering settings of the tables you select from, the fields you use for joins and selections, the available indexes, the number of records in both tables, the profile parameters of the SAP system, etc. In general I prefer joins, since they are a "tool" designed specifically for reading data from multiple tables at the same time, and the work is done on the database layer. Of course, certain situations speak against a table join. Also, you have no choice if you use a BAPI/Function Module/class method to get records from the database: in that case you obviously cannot use a table join and have to use the FOR ALL ENTRIES IN construct.


Other keywords


SINGLE and FOR UPDATE


If you use the SINGLE addition, the result set will contain at most one record. If the remaining clauses of the SELECT statement would return more than one line, only the first one is returned.


The FOR UPDATE addition can only be used together with the SINGLE addition and sets an exclusive database lock on the selected record. However, it is rarely used, and I also prefer using lock (enqueue) function modules instead.


Note: the SINGLE addition is not permitted in a subquery, and the ORDER BY clause cannot be used together with it.
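A minimal sketch (table and key value chosen only for illustration):

* Read exactly one airline record; FOR UPDATE additionally sets an
* exclusive database lock on the selected row until the next commit.
DATA ls_carr TYPE scarr.

SELECT SINGLE FOR UPDATE * FROM scarr
  INTO ls_carr
  WHERE carrid = 'LH'.

IF sy-subrc <> 0.
* no record found
ENDIF.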


CLIENT SPECIFIED


This addition switches off the automatic client handling of Open SQL. When using the addition CLIENT SPECIFIED, the first column of the client-dependent database tables can be specified in the WHERE and ORDER BY clauses.
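A short sketch, with the client number made up for illustration:

* Read bookings from client 100 explicitly instead of the logon client.
DATA lt_sbook TYPE STANDARD TABLE OF sbook.

SELECT * FROM sbook CLIENT SPECIFIED
  INTO TABLE lt_sbook
  WHERE mandt  = '100'
    AND carrid = 'LH'.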


BYPASSING BUFFER


This addition causes the SELECT statement to bypass SAP buffering and to read directly from the database and not from the buffer on the application server.
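For example (sketch only):

* Force reading from the database even if SCARR is buffered on the
* application server.
DATA ls_carr TYPE scarr.

SELECT SINGLE * FROM scarr BYPASSING BUFFER
  INTO ls_carr
  WHERE carrid = 'LH'.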


ENDSELECT


A SELECT statement may retrieve database records one by one (functioning as a loop, using the INTO keyword) or all at once (this is called an "array fetch" and uses INTO TABLE/APPENDING TABLE). In the first case, the ENDSELECT statement closes the loop started with SELECT. Both constructs retrieve the same result.
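A small sketch contrasting the two variants (both read the same data):

DATA: ls_carr TYPE scarr,
      lt_carr TYPE STANDARD TABLE OF scarr.

* Variant 1: loop processing, one record per pass, closed by ENDSELECT
SELECT * FROM scarr INTO ls_carr.
  WRITE: / ls_carr-carrid, ls_carr-carrname.
ENDSELECT.

* Variant 2: array fetch, the whole result set in one step
SELECT * FROM scarr INTO TABLE lt_carr.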


Out of scope for this blog post


Database hints


Hints are used to tell the database engine how it should execute our query. If you omit them, the DB engine uses its own optimizer to determine the best strategy for executing the SELECT statement. Using hints to override this default strategy is quite common outside of SAP, but it is seldom done in ABAP.


One reason is that not many developers know that it is possible to use hints with Open SQL statements (not only with native ones). Also, there are certain drawbacks (problems during a DB upgrade or a change of DB server) and there is more room for human error.


There is a very good overview of database hints in SAP Note 129385 - Database hints in Open SQL
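Just to show the shape of such a statement – a hedged sketch only; the hint text itself is database-specific and purely illustrative (see the note above for the details):

* The %_HINTS addition passes the hint string through to the database optimizer.
DATA lt_sbook TYPE STANDARD TABLE OF sbook.

SELECT * FROM sbook
  INTO TABLE lt_sbook
  WHERE carrid = 'LH'
  %_HINTS ORACLE 'FIRST_ROWS(10)'.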


Dynamic token specification


It is possible to assemble Open SQL statements at runtime. Instead of coding a static SELECT statement, you can use character-type variables to hold the SELECT, FROM, WHERE etc. clauses. This may come in handy in certain cases, but it has disadvantages regarding performance, maintainability and code readability. Also, there are certain SAP release-dependent restrictions on the allowed syntax elements. There are some nice materials on SCN and other sites that deal with this topic.
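A minimal sketch of what such a dynamic statement can look like (table, field list and condition are assembled in character variables; the values are made up):

DATA: lv_table  TYPE string VALUE 'SCARR',
      lv_fields TYPE string VALUE 'CARRID CARRNAME',
      lv_where  TYPE string,
      lt_result TYPE STANDARD TABLE OF scarr.

lv_where = 'CARRID = ''LH'''.

SELECT (lv_fields)
  FROM (lv_table)
  INTO CORRESPONDING FIELDS OF TABLE lt_result
  WHERE (lv_where).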


Conclusion


As you can see, even though Open SQL is a rather limited subset of modern SQL, it still allows you to execute quite complex queries. In most cases the whole business logic of a complex report cannot be mapped to a single SELECT query; however, if you know what possibilities you have, you can write much more elegant and compact program code with better performance.


Thanks for reading and have fun using Open SQL.




Does SAP really want NON SAP Developers ? 1 of 3

$
0
0

I believe that this blog series will show that, despite the talk, SAP are not serious about enabling non-SAP developers to download, install and run SAP systems for development purposes. If SAP really did want non-SAP developers using their tools, they would

  • Provide clear, unambiguous instructions on how to get the software,
  • Provide clear, unambiguous paths to actual running systems,
  • Provide clear, unambiguous descriptions of the software to enable non-SAP developers to choose the appropriate package,
  • Remove (the appearance of) geographic restrictions,
  • Provide a mechanism for keeping the software up to date and appropriately patched.

 

I apologise for the length of this post, but it could have been even longer. As it is, it's broken into a series of three. The subsequent entries in this series are

Does SAP really want NON SAP Developers ? 2 of 3

Does SAP really want NON SAP Developers ? 3 of 3

 

Edit (March 28, 6:45 PM) - start

Please bear in mind that this is NOT how I would go about installing a Developer / Trial ABAP system, and I wouldn't recommend anyone follow this path through the labyrinth. The point of this series is to show how people NOT experienced with SAP products, naming conventions, SAP web sites or SAP UX, and without a large network of fellow SAPpers, would see the process.

Edit - end

 

My scenario is that Alice is a non-SAP developer, physically located in Australia. She is familiar with web front-end development on both Windows and Linux systems. Alice wants to install the latest ABAP developer platform to learn about SAP, and even to develop something using the latest, greatest SAP development tools – using APIs (OData / JSON / XML etc.) and SAPUI5 (which she knows is built on jQuery, a tool she is already familiar with).

 

How does she go about doing this, and once she has it installed, how does she go about maintaining it?

 

 

Getting access, using the SAP provided documentation

First things first: our intrepid heroine has to install, set up or get access to a system. She has help from a relatively experienced SAP BASIS person (i.e. me), but IANAD (I Am Not a Developer). Our first assumption is that Alice knows where to find documentation on the Developer Editions – I suggest she starts at the SAP site.

 

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-70.png

It’s not that obvious, but there is a link to free trials. Let's go…

 

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-71.png

Now, I thought there was a heading Developer Trials here. This seems to have disappeared. There is an "SAP Netweaver Business Warehouse" (which Alice does not want – she wants to develop in ABAP, and a full-blown Business Warehouse is overkill). However, she sees, in the fine print at the bottom, a link: "Looking for more free trials? Visit our Developer Center in the SAP Community Network (SCN)".

This takes her to

 

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-72.png

Where she finds a link (in an image) saying "ABAP 7.40 Trial Available". So she clicks on it, and…

 

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-73.png

Voila. She is now in the "ABAP Developer Centre", where she sees a button that promises to Get the new SAP Netweaver ABAP 7.40 Trial.

 

Issue 1 We seek him here, we seek him there ….

The Learn more link takes her to “Developer & Trial Editions: SAP Application Server ABAP and SAP Business Warehouse”

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-25.png

 

Selecting "AS ABAP 7.4 SP2 on SAP MaxDB 7.9" takes her to a blank page.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-26.png

 

 

Where the hell are these free downloads (and what’s the difference between a Developer Edition and a Trial Edition, anyway ?).

 

Issue 2 SAP institutes new KPI called page view ?

Never fear, there’s always more links…. For example, on the Header of these Community pages there is a link labelled downloads.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-74.png

 

 

This takes her to Software Downloads

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-11.png

 

Pretty much the first entry on this page is "The Software Download Search or Where did this Package Go?", so she follows that link. It's a mess of complaints about stuff not being available. Not that promising. However, further down the Software Downloads page is "General Process Description for the SCN SAP Software Downloads on SAP Store", which does look promising. So she opens it. And the result tells her to open another document (funny thing is, up till now she thought THIS was the hard part!).

 

First entry in the PDF (how quaint) is "How to find a free of charge SCN SAP Software Download Package"

 

Anyway, both of these places tell her "How to find a free of charge SCN SAP Software Download Package". Sounds like what she is looking for (and the price is right).

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-13.png

 

Issue 3 It’s 2014; how hard is search ?

Remember, Alice doesn’t know what her package name is, but searching for "Developer Edition" using the "Software Download Search” ...

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-11.png

… results in

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-14.png

 

Other attempts include

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-15.png

And

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-16.png

So Alice gives up on this and looks for the "SAP Store" link that she has seen referred to in several places.

 

What she finds is something very, very small. It's in the header of the Community page

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-75.png

I included the arrow to help you find it. But at least she is starting to get some results.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-21.png

Too many results; which of these 411 pieces of software does Alice want (or need)? She knows enough to know she wants an ABAP system, so ….

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-22.png

But this doesn't show the latest (7.4) version. Let's try something else...

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-23.png

 

Issue 5 Distinction between HANA and Max DB

But now Alice has a decision to make – the page has returned 21 entries, and if she pages down there are TWO types of ABAP 7.4 system – one on HANA and another on SAP MaxDB. Which one does she want?

 

Issue 6 Availability of HANA Components by Country

Now, checking the country availability, only 15 items are available in Australia. For example, "SAP NetWeaver Application Server ABAP 7.4 on SAP HANA - Developer Edition" is available in Australia, but "SCN - SAP HANA Studio Developer Edition" is not.

http://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-23a.png

 

Can we expect someone like Alice, new to SAP, to know the difference between HANA and MaxDB, to know SAP's strategic direction and how it relates to what she wants to do? Does Alice know enough to know there is a separate set of HANA sites, and if so, what impact would that have on her subsequent decisions?

 

Alice's adviser (me) knows that she needs the HANA Developer Studio if she is going to do any HANA work, including maintenance and general care and feeding of a HANA-based system (in other words, even for an ABAP on HANA system). Does the BW 7.4 on HANA system come with a HANA Studio installed? Neither of us knows yet. Additionally, remember that Alice is in Australia, so the HANA Developer Studio is (apparently) not available to her.

 

Issue 7 Trial v Developer Edition ?

Is there any significance in the naming conventions? Where does Alice find out what (if any) the difference is, and more importantly, the impact of these differences on how she can use what she develops?

 

Issue 8 What does the SAP Store login do ?

BTW, Alice has not actually logged on to the SAP store yet.

 

Issue 9 SAP 101 Finding is not Getting

Alice selects the "SAP NetWeaver Application Server ABAP 7.4 on SAP MaxDB - Trial Edition" (which IS available in Australia) and gets taken to a more detailed description.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-24.png

 

The Learn More button takes her to a familiar page.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-25.png

We’ve been here before, remember, so Alice goes back and tries the Contact Me button…

 

Issue 10 Silly Alice, why would the SAP web site support Firefox?

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-27.png

Firefox does not show Alice the submit button for this form.

 

One more left. What happens if she selects the Demo Now button? AHA!!! A request to log on to the "SAP Cloud Appliance Library"...

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-76.png

 

Issue 11 Mean what you say does not mean say what you mean

So the Cloud version of SAP is MaxDB ?

Or is MaxDB the Cloud version of SAP ?

Or what ?

Alice's adviser (me) intervenes and explains the difference between the real world (i.e. IAAS, PAAS and SAAS) and SAP's love of naming things differently, with a diversion about how terribly clever the marketing people are. Any references I make regarding Mad Hatter tea parties are treated with the contempt they deserve.

 

 

Anyway, Alice enters her registration data and accepts some legalese.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-77.png

 

The activation email arrives quite quickly, but instead of logging her on to anything, she gets taken to this screen.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-79.png

 

But Alice doesn't have an SAP ID or an SCN number. What the hell, maybe it's just naming conventions, so she tries logging on with the credentials she specified in the registration form for the "SAP Cloud Appliance Library" (the "SAP Cloud Appliance Library", right? Got that? We have screen dumps and we aren't afraid to use them!!)

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-80.png

 

Issue 12 Silly Alice thinks registration means registration

And finds out she is not really registered after all

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-81.png

 

She's not going to BUY anything. She is already registered (she thinks...) with the SAP ID service, so she tries one of the links (the SAP HANA Cloud Platform home page for developer accounts).

 

 

Issue 13 SAP tries very hard (NOT) to distinguish between HANATRIAL and HANA

The answer is that Alice registered at hanatrial.ondemand.com not hana.ondemand.com. It would be nice if there was some way of distinguishing the two sites. By, say, including the word TRIAL somewhere ?

 

Ignoring the mechanics of how Alice works out that there are two almost identical sites, she logs on at HANATRIAL.ondemand.com. Some more legalese about the SAP HANA Cloud. Not to worry; Alice isn't using HANA in the cloud (is she?)

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-83.png

  

As it turns out, she still doesn't have an account yet (let alone access to her "free" trial).

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-84.png

It better be worth it….

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-85.png

 

Issue 14 But it’s not what we asked for !!!!

Except it's not actually what she wants, is it? This is not the SAP NetWeaver Application Server ABAP 7.4 on SAP MaxDB - Trial Edition. In fact, it's not even the "SAP Cloud Appliance Library" that she registered for.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-86.png

 

 

Summary

So Alice has followed the bouncing ball, from the Free Trial link on sap.com, all the way through to find that she has been given access to a free trial of something she didn't want. Alice has to take a break for work (paying work, that is), but she is nothing if not persistent.

 

What happens next in Alice's exciting adventure ?

Will she get her ABAP system up and running before she kills me (someone, anyone) in frustration ?

Click for more epic exploits in the adventures of our intrepid explorer.


Does SAP really want NON SAP Developers ? 2 of 3

$
0
0

For full details of our remarkable journey, I refer you to the first post in this series, but to summarize: Alice is a non-SAP developer, physically located in Australia, who is familiar with web front-end development on both Windows and Linux systems. Alice wants to install the latest ABAP developer platform to learn about SAP, and even to develop something using the latest, greatest SAP development tools – using APIs (OData / JSON / XML etc.) and SAPUI5 (which she knows is built on one of her favourite JavaScript tools, jQuery).

 

In her first attempt to install a Developer Edition of the Netweaver 7.4 ABAP system on MaxDB, she ended up with a trial account on the HANA Platform.

 

 

After reading this, you may want to continue on to Does SAP really want NON SAP Developers 3 of 3? (EDIT - link added)

 

  

Getting access, using the SAP provided documentation (2nd go)

Yesterday, Alice found her NW 7.4 ABAP system in the SAP Store. She registered for the SAP Cloud Appliance Library, but ended up in the HANA Platform. Today is another day – she has spoken to her trusted adviser (i.e. me), who has suggested that she find her package on the SAP Store and try again, but this time using the credentials she used to register for the SAP Cloud Appliance Library.

 

Issue 15 SAP Store inspires much confidence in SAP products

Unfortunately, I don’t have screenshots of this, because, well,

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-93.png

.. but let's pretend the site is working. Hopefully, in real life, it will take us to the Cloud Appliance Library, but we have to fake it. Because, well, SAP.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-94.png

 

So Alice logs on with yesterday's credentials and gets a set of legalese (this is quite clever, and Alice is going to borrow the design pattern – the user has to scroll to the bottom before the 'Accept' button becomes active; it gives the impression that someone has read all the fine print, whether they have or not).

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-95.png

 

Issue 16 What part of I registered an Account do you not understand ?

Come on. Alice has registered, but STILL has to create an account ?

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-96.png

 

So Alice goes to the Solutions tab, scrolls down and finds her free trial

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-32b.png

She clicks on the Try now link and gets more Terms and Conditions (and again with the pretending to read all the way to the bottom before she can hit the Accept button), and we get to

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-32c.png

 

Now Alice didn't capture this screen properly, but when she moused over the greyed-out Activate link, it said "You cannot activate the solution because you are not an account owner". WTF?? Well, she didn't say that, but she is much politer than I am.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-32d.png


It turns out Alice has to create a User for her Account. She finds the reason here, on the Accounts tab

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-33.png

 

Issue 17 What part of I registered an Account do you not understand (part 2)?

So Alice registers another user / account / whatever by clicking on the Create Account link. From Alice's experience, the result looks like the start of a customized Amazon Web Services wizard. There are a few drop-down menus that can't actually be changed, but quite frankly things are moving forward, so she isn't going to complain.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-34.png

 

Issue 18 SAP makes use of flexible Cloud (for very small values of flexible)

One of the drop-down menus that can't be changed is the physical location of the SAP instance. It is only configured to run in one specific AWS region (US-East-1). Remember that Alice is in Australia; assuming she is in Sydney, she is maybe 50 km from the AP-SouthEast-2 datacentre. She is nearly 10,000 km from the US-East-1 datacentre. This will impact her system response time.

Note that if Alice was in Perth, Australia, then the Singapore datacentre would be closer than either the AP-SouthEast-2 or the US-East-1 datacentres.

 

 

Issue 19 Amazon Web Services Secret Keys is NOT AWS best practice

Alice thought she could use her standard AWS configuration (silly girl). Specifically, Amazon recommend using their IAM product rather than AWS access keys. More to the point, you cannot retrieve existing access keys and secret keys. See http://aws.amazon.com/blogs/aws/important-manage-your-aws-secret-access-keys. Some organizations using Amazon Web Services may not / will not share these keys with users. Alice is lucky: she is her own boss, so she goes to AWS and recreates her key (just like Amazon tell you to in that link).

Anyway, Alice ends up with an Account user

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-37.png

Scroll down to see more ..

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-36.png

 

And back to the Solutions tab, where Alice sees…

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-38.png

 

Issue 20 User Guide (including prerequisites) is not available till AFTER you start a system ?

Now Alice can see her user guide: https://caldocs.hana.ondemand.com/caldocs/help/SAP_NetWeaver_AS_ABAP_740_SP2_MaxDB_Trial_Appliance_EndUser_Documentation.pdf. The first (and only) place it is visible is after Alice starts creating the SAP system.

 

Creating a VPN is easy stuff for someone like Alice (especially using the Amazon one), and it took about half an hour to get the VPN created and the SAP instance started. For safety's sake, she created the VPN before she activated her SAP system (which she understood to be creating the Amazon Machine Image) and created her SAP instance (which she understands to be her Amazon image).

 

Issue 21 No SAP GUI for YOU

The user guide says the SAP GUI is available on the server at location /sapmnt/A4H/custom/SAP_GUI_for_Windows_7.30_Patchlevel_4_Hotfix_1_for_SAP_SCN_(Trial)_20130611_0830.exe

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-97.png

Ummm... No it's not.

 

  

Summary

It takes a real effort to forget all the tricks and shenanigans and crap that you learn about or have to put up with when using SAP's support sites. Unfortunately, when you do, you see how poorly documented the Developer Editions are, the generally poor user experience that we have got used to, the poor search engines etc. I would be surprised if any non-SAP Developers would actually persevere to the extent that our mythical Alice has.

 

And she hasn't even been able to log on to the SAP system yet.

 

We have seen a number of issues with creating a running, usable instance of the latest SAP ABAP system suitable for a developer. I can't even say we have created a Developer Edition using this process, because the documentation is so unclear that, of the paths I could choose, I ended up going down one that only let me choose a Trial Edition. Now, none of these issues (twenty-one, so far) are on their own enough to stop a determined developer in their tracks. However, each one adds friction which, when combined, would stop most if not all non-SAP Developers from following the process through to completion.

 

For SAP AG, what I have just shown is not the problem. The real killer is the alternatives. Remember how I said Alice is familiar with web front-end development on both Windows and Linux systems? She can get a web server up and running in 10 minutes with Ruby or Python by downloading and executing a file, then running the following command lines.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-90.png

 

By comparison, Alice cannot even log on to her SAP system yet, let alone look at the mechanics of caring for and feeding it; that's the next entry in this series.

Does SAP really want NON SAP Developers ? 3 of 3

$
0
0

Previous entries in this series

Does SAP really want NON SAP Developers ? 1

Does SAP really want NON SAP Developers ? 2

 

 

Remember our scenario? Alice is a non-SAP developer. She is familiar with web front-end development on both Windows and Linux systems, and wants to install the latest ABAP developer platform to learn about SAP, and even to develop something using the latest, greatest SAP development tools – using APIs (OData / JSON / XML etc.) and SAP's user interface library built on jQuery (i.e. SAPUI5).

Remember that the latest non HANA system available is the "SAP NetWeaver Application Server ABAP 7.4 on SAP MaxDB - Trial Edition". There is no developer edition. Also, remember that Alice deliberately ignored the HANA version, because she is in Australia and the SAP Store tells her that the HANA Developer Studio is not available in Australia.

 

 

Alice has an SAP system up and running.

Now to clear something up. Remember how the SAP GUI wasn't where the user guide said it should be?

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-98.png

Turns out it's in a different location. Doesn't matter, she's got a copy from me.

 

Now, some outstanding issues (Still)

 

Issue 22 Licensing

So Alice has an SAP GUI. The first thing is to get the system licensed. But what is an SDN ID? What is SDN?

http://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-61.png

I told her to use the P number from when she registered for the SAP Cloud

 

Now Alice has (finally) got her SAP system up and running. I tell her to look at transaction SPAM, which will show full details of what her system has and what it is capable of. But, of course, the first thing she sees is

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-51.png

 

Ah well, it still lets us look at stuff. We can see exactly what her system has, and Gateway and UI5 are included…

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-52.png

 

Issue 23 System Configuration

Before the system will work (such things as Web Dynpro and ICF), configuration is required. There are no instructions on what to do and how to do it; the only instructions given in the user guide are for ensuring that the hostname and domain are consistent. For other things, if you do a full profile configuration (via transaction RZ10), you get the following screen.

http://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-99.png

 

 

Issue 24 Not all SICF services are active

Ran report RSICF_SERVICE_ACTIVATION via SE38 to activate them.

 

 

Issue 25 HTTPS is not enabled

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-A1.png

The quick fix is available at Suppress "Protocol cannot be switched to HTTPS". Note that this doesn't implement HTTPS; it only gets rid of the ugly message.

 

Maintenance and Patching

Remember that without a Maintenance Certificate, she can't update Gateway, UI5 or even SAP itself. In fact, even with a Maintenance Certificate, there is another bunch of yaks to be shaved.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-92.gif

 

 

To summarise:

Adding / updating / upgrading plugins, components, EHPs, etc

  • requires Solution Manager
  • Solution Manager requires a Service Market Place user-id that is related to a customer number and that has specific access criteria to the Software Download Centre of the Service Market Place
  • Downloading software via Solution Manager requires an SAP Router
  • which requires a customer number

 

 

Installing Solution Manager

  • requires maintenance of Solution Manager (via the Software Download Centre of the Service Market Place)
  • which requires a SAP Router
  • which requires a customer number

 

 

Applying patches to repository or configuration data ….

  • Requires SNOTE
  • which requires a Service Market Place id related to a  customer number

 

 

Database and kernel patches don’t require Solution Manager, but they do require access to the Software Download Centre of the Service Market Place, which requires a Service Market Place id related to a customer number. The same requirement applies for access to http://service.sap.com/notes.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-91.png

This may be a blessing in disguise for Alice, but it doesn't help SAP get developers on board.

 

 

And we haven't even covered backup and log file management, disk space management in general, tuning of the operating system or database and so on.

 

 

Summary

By comparison, as mentioned elsewhere in this series, you can get a web server up and running in 10 minutes with Ruby or Python by downloading and executing a file, then running the following command lines.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-90.png

That is the competition that SAP faces for Developers. Alice is no longer interested in SAP. She has found something else (anything else) that lets her start producing results now.

 

EDIT:

Thanks for making it this far, and thanks to everyone for the positive feedback, both here and in social media.

 

Please bear in mind that this is NOT how I would go about installing a Developer / Trial ABAP system, and I wouldn't recommend this path through the labyrinth to anyone. The point of this series was to show how people NOT experienced with SAP products, naming conventions, SAP web sites or SAP UX, and without a large network of fellow SAPpers, would see the process.

 

hth

Currency conversions

$
0
0

In this article I'd like to show an easy approach to converting an amount given in one currency into another currency, and to warn you about a possible bug when using the SAP standard FM for currency conversions.


DATA: l_in(15)  TYPE p DECIMALS 5,
      l_out(15) TYPE p DECIMALS 5.

l_in = 1.

CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum
    foreign_amount   = l_in
    foreign_currency = 'EUR'
    local_currency   = 'DKK'
  IMPORTING
    local_amount     = l_out
  EXCEPTIONS
    no_rate_found    = 1
    overflow         = 2
    no_factors_found = 3
    no_spread_found  = 4
    derived_2_times  = 5
    OTHERS           = 6.

IF sy-subrc = 0.
  WRITE: l_in, 'EUR = ', l_out, 'DKK'.
ENDIF.


Result of the program is:

CURRENCY_CONVERSION01.png

IMPORTANT NOTE (valid as of 2014-05-27): it is VERY important to use variables that are defined identically – having the same precision/decimal places and length. Otherwise the results might be VERY surprising; check the following modification with different decimal precision on the INPUT and OUTPUT variables.


DATA: l_in(15)  TYPE p DECIMALS 5,
      l_out(15) TYPE p DECIMALS 3.

l_in = 1.

CALL FUNCTION 'CONVERT_TO_LOCAL_CURRENCY'
  EXPORTING
    date             = sy-datum
    foreign_amount   = l_in
    foreign_currency = 'EUR'
    local_currency   = 'DKK'
  IMPORTING
    local_amount     = l_out
  EXCEPTIONS
    no_rate_found    = 1
    overflow         = 2
    no_factors_found = 3
    no_spread_found  = 4
    derived_2_times  = 5
    OTHERS           = 6.

IF sy-subrc = 0.
  WRITE: l_in, 'EUR = ', l_out, 'DKK'.
ENDIF.


Result of the second example:

CURRENCY_CONVERSION02.png

Original of the article is at my blog (oprsteny.com)

Web Service using ADOBE forms

$
0
0

Scenario: Fetching the Material Description for a Material in Adobe Form using Web Service.


Consider a scenario where the user enters a Material Number in an Adobe Form and the Material Description has to be fetched from the material master table; for this we use a web service to fetch the description.


I. Create a Web Service:

  Step 1: We need to create a remote-enabled function module.

 

1.jpg

 

 

Step 2: Define one import parameter, Material Number (MATNR), and two export parameters: Material Description (MAKTX) and BAPIRETURN (mandatory).

 

2.jpg

3.jpg

 

  Step 3: Write the code to fetch the material description in the source code of the function module.

 

1.jpg
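In case the screenshot is hard to read, the logic is roughly the following – a minimal sketch in which the parameter names IM_MATNR, EX_MAKTX and EX_RETURN are assumptions, not necessarily the names used above:

* Read the material description in the logon language
SELECT SINGLE maktx
  FROM makt
  INTO ex_maktx
  WHERE matnr = im_matnr
    AND spras = sy-langu.

IF sy-subrc <> 0.
  ex_return-type    = 'E'.
  ex_return-message = 'No description found for this material'.
ENDIF.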

Then activate and check the function module

 

Step 4: Go to Utilities -> More Utilities -> Create Web Service -> From the Function Module

 

1.jpg

 

Step 5: A Web Service Wizard  will be displayed

 

Give the Service name and Description

 

Press continue

 

1.jpg
Press continue

 

1.jpg
Press continue

 

1.jpg
Check local object or package name

 

Press continue

 

1.jpg
Press complete

 

1.jpg

Step 6: Now go to transaction SOAMANAGER

 

The browser will start

 

Go to the Business Administration tab -> Web Service Administration

 

1.jpg
To find your web service, type your service name in the Search Pattern field, select your service, and then click

 

1.jpg
Step 7: Click Open WSDL Document for selected binding

 

1.jpg
A Browser Window opens with XML code

 

Copy the URL; this is the generated WSDL link.

 

1.jpg

II . Create Adobe Forms

 

Step 1: Go to transaction SFP, create an empty interface and then create a form

 

Go to the Layout tab

 

Edit ->New data connections

 

1.jpg

 

Enter a New Data Connection Name

 

Select WSDL File and click next

 

1.jpg


  Paste your URL and press next

 

1.jpg

Step 2 : Select your web service and press finish

 

The web service is added to the form.

 

Step 3: In the layout, create a new button and change its control type to Execute

 

1.jpg

In the Execute tab, select your data connection name

 

1.jpg

Step 4: Drag and drop the MATNR and MAKTX fields to the Form.

 

1.jpg
  activate the form

 

Step 5 : Create a print program for the form.

 

 

In the place marked '####', paste your form name

 

DATA: ie_outputparams TYPE sfpoutputparams,
      i_name          TYPE fpname,
      i_funcname      TYPE funcname,
      fp_docparams    TYPE sfpdocparams,
      fp_formoutput   TYPE fpformoutput,
      data_tab        TYPE solix_tab.

* Return the PDF to the program and suppress the printer dialog popup
ie_outputparams-getpdf   = 'X'.
ie_outputparams-nodialog = 'X'.

CALL FUNCTION 'FP_JOB_OPEN'
  CHANGING
    ie_outputparams = ie_outputparams.

TRY.
    i_name = '######################'.                "Your Form Name

*   Determine the generated function module of the form
    CALL FUNCTION 'FP_FUNCTION_MODULE_NAME'
      EXPORTING
        i_name     = i_name
      IMPORTING
        e_funcname = i_funcname.
  CATCH cx_fp_api_repository.
  CATCH cx_fp_api_usage.
  CATCH cx_fp_api_internal.
ENDTRY.

fp_docparams-langu    = 'E'.
fp_docparams-country  = 'US'.
fp_docparams-fillable = 'X'.       "keep the form interactive

* Render the form
CALL FUNCTION i_funcname
  EXPORTING
    /1bcdwb/docparams  = fp_docparams
  IMPORTING
    /1bcdwb/formoutput = fp_formoutput
  EXCEPTIONS
    usage_error        = 1
    system_error       = 2
    internal_error     = 3.

CALL FUNCTION 'FP_JOB_CLOSE'
  EXCEPTIONS
    usage_error    = 1
    system_error   = 2
    internal_error = 3
    OTHERS         = 4.

* Ask the user where to save the PDF
DATA: filename          TYPE string,
      path              TYPE string,
      fullpath          TYPE string,
      default_extension TYPE string VALUE 'PDF'.

cl_gui_frontend_services=>file_save_dialog(
  EXPORTING
    default_extension = default_extension
  CHANGING
    filename          = filename
    path              = path
    fullpath          = fullpath ).

CHECK fullpath IS NOT INITIAL.

* Convert the PDF xstring to a binary table and download it
CALL FUNCTION 'SCMS_XSTRING_TO_BINARY'
  EXPORTING
    buffer     = fp_formoutput-pdf
  TABLES
    binary_tab = data_tab.

cl_gui_frontend_services=>gui_download(
  EXPORTING
    filename = filename
    filetype = 'BIN'
  CHANGING
    data_tab = data_tab ).

* Open the downloaded PDF
cl_gui_frontend_services=>execute(
  EXPORTING
    document = filename ).

 

 

Output :

 

Step 6: Run the print program and then save the PDF form to your drive or desktop, then open the form using Adobe Reader (7.0 or above).

 

Enter the material number and press the Description button; the form then fetches the description of the material entered.


1.jpg

 

Please let me know if you face any issues while developing or testing it.

 

Thanks & Regards,

Arun,

Arunkumar Chandrasekar,

Senior Consultant - SAP Practice @Kaavian Systems Pvt. Ltd.

Does SAP really want NON SAP Developers ? 4 of 3

$
0
0

Based on the social media feedback and the activity on the previous blogs in this series, I've obviously hit a sore point with a lot of people. However, it was a bit rude of me to rant about how bad things are and demand someone fix it without providing some ideas and suggestions (plus every good trilogy requires at least four parts).

 

Please take this opportunity to tell me what you think of these ideas, add your own suggestions, and maybe even rank what you see here and in the comments. The SAP Developer Relations team are aware of how ugly the existing situation is, but they can't provide us with what we want (or at least decent reasons why they can't provide what we want) until they hear from us.

 

 

Alice is getting there...

First a progress report; Alice has got her SAP Developer Edition up and running, she has got her head around the fact that it is an entire server and development system in one, and she's coding away. There were some minor issues with stupid things like the difference between SAP namespaces and Customer namespaces....

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-01.png

 

 

but it took Alice only a few minutes to produce

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-02.png

 

 

What do we want and when do we want it?

Now, as a developer, I'm stuck at simple ABAP procedural reporting, so I'm going to leave any critique about the documentation and training paths that Alice should now follow to someone who knows better. One thing I will say, though, is that while there is a lot of good stuff in the Cross-Technology Developer Center and ABAP Platform Developer Center Community Spaces, there are a lot of places that will confuse the unguided Developer.

 

For example, some documents point Alice at old releases of the ABAP system, and other documents and pages appear to confuse ABAP on HANA with ABAP on MaxDB. Alice started off with the "Develop an on-premise business application with HTML5 & in-memory persistence" tutorial (again with the naming confusion – in-memory persistence? Is this different to database systems? And does this imply the existence of some kind of out-of-memory or even out-of-body persistence? No wonder people get confused about what happens to your HANA in-memory data when you turn the box off...) and while it talks in terms of an ABAP on HANA system, the requisite tables for working through this document were also available in her ABAP on MaxDB system.

 

However, one problem that the issue with Customer v SAP namespaces raised is that unless Alice jumps through the hoops to become a certified partner, she can't get her own namespace for her developments. What happens if she builds the next 2 Clicks, and her object names clash with something that is already installed at a customer's site ?

 

This leads into code management in general – fifteen or twenty years ago, the Transport Management System was light years ahead of anything else in change management and control of source code, but modern tools have caught up and passed it by. SAP needs to be open to Git, Subversion and other repository tools. SAP needs to provide an ABAP client or interface that serializes SAP content to push and pull it from the non-ABAP repository, and at the very least integrate with the ABAP in Eclipse tools. In a perfect world, it would also work from within the SAP GUI, but modern non-SAP Developers won't be using SE80 unless they have to.

 

While we are on the subject of modern tools: despite SAP's flagship accounts being ones where the business runs SAP, the vast majority of customers use SAP for a specific function within their business and run most of their processing on other software. Integrating SAP systems into other software should be as easy as making a Twitter API call. The easier this integration is, the more "Innovation at the Edge" will occur, providing more value for existing SAP customers and increasing the value for new SAP customers. SAP needs to provide a wider range of APIs or pre-configured OData interfaces to allow on-premise systems in general to integrate better with other systems; everything from Salesforce to monitoring tools like BMC to configuration software like Chef and Puppet.

 

Easy Installation

The most obvious problem that many people identified is that there are "too many steps" required to install a Developer Edition. Apart from discouraging potential developers, this also introduces opportunities for error by the installer (by incorrectly performing or missing a step) and by SAP (incorrect links and documentation).

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-06.gif

(from KIDS REACT TO OLD COMPUTERS)

 

In a perfect world, all SAP on-premise systems (whether on a physical server or Amazon, Rackmount, OpenStack, Azure...) should be able to connect to SAP and download notes, bug fixes, patches, support packs and upgrades, and apply them themselves. Just like Windows Update, or "npm update" or "git pull". No Solution Manager, no SAPRouter, no nothing except an HTTP connection to SAP.

 

In a perfect world, installation should consist of running sapinst, filling out the fields and hitting go. With no questions asked after you hit the go button. Checking against a licence or control file would be a good way of ensuring that only the appropriate components are installed on appropriate hardware; for example, a "Developer Edition Licence" would allow installation on any hardware key, but would not install or run the full blown ECC or SRM components.

 

In a perfect "Innovation without disruption" world, the install would also give the option to update our installation to the latest Enhancement Pack automagically (if that is what the installer wants), but that's a bigger issue than a technical upgrade. Anyway, regardless of this, there would be an option to update the DBMS and Operating System to the appropriate patch levels for the current EHP.

 

This is how server and desktop operating systems from Windows to SUSE already work.

 

I understand that making this work would require a bit more control over the target environment than SAP has. For example, if there's an issue with the installation process, where does the blame lie? With SAP, or the OS vendor, or the DBMS vendor? The only way this could even hope to work is for SAP to own or have some control over the DBMS and the operating system. Like they already do with HANA. Like the relationship they have already built with SUSE for the HANA environment. Even if it's not practical for all combinations of hardware, operating system and database system, it's manageable for ABAP on HANA or MaxDB. As an aside, making this one-shot installer only available for ABAP on SUSE on HANA would be useful in marketing ABAP on HANA (make it easier to install HANA / ABAP on SUSE than on any other platform...)

 

In fact the Developer Editions would be an excellent place to start this with, but could we please have the systems in a downloadable form, not tied to a specific cloud platform in a specific location, like the Corporate Network versions of both the NW 7.40 ABAP systems currently are.

 

Easy Documentation

Can we have a simple and accurate path through the process? One from a link on sap.com that says in large letters "ABAP Developer Editions - no licence fees forever", which goes to a one-pager that distinguishes between the on-premise systems and other SAP products, and includes the download link and installation instructions for Azure or SUSE 11 (with a pointer to the appropriate SUSE image for Amazon Web Services, Rackmount etc.), and a VMware version (in a format that allows people to choose either VMware or VirtualBox).

 

Given that the current (NW 7.4) on-premise Developer Editions appear to have had some of the traditional post-implementation work already done (such as SGEN and Transport Management configuration), why stop there? For example, I noted that there was no simple way of downloading the SAP GUI from within the Developer Edition that Alice installed. ABAP systems are web servers. The User Guide should incorporate a link to http://<ip-address>:50000/sap/bc/gui/sap/its/webgui/! to allow Alice to log on and begin her journey as soon as possible.

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-03.png

 

Another suggestion is to build on the example of the older SAP Developer Editions, which came with a small set of help files that could be configured (ABAP systems are Web Servers) via the icm/HTTP/file_access profile parameter to be accessible via a web browser.
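For what it's worth, such an entry in the instance profile looks roughly like this – a sketch only; the URL prefix and directory are made-up examples:

# serve static files from the given directory under the /doc/ URL prefix
icm/HTTP/file_access_0 = PREFIX=/doc/, DOCROOT=/usr/sap/NPL/doc, DIRINDEX=index.html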

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-04.png

 

Apart from documentation links, this is an appropriate place to include links to the various SAP GUIs included in the system. In the example below, I just copied the files from an older Developer system, but I could include any static html or javascript (including UI5 ...)

https://s3.amazonaws.com/notes.basissap.com/sdn/SAP-dev-4-05.png

 

Easy Information

If SAP want to attract modern developers, they have to be open in spirit, not just in word. This means that all developers - freelance, employed by an SAP customer, an official SAP partner or even SAP themselves - must have the same access to how-tos, workarounds and patches. Any suggestion that this is not so suggests that SAP are playing favourites among their partners and friends at the big System Integrators.

 

SAP needs to decide what to do with SCN. SAP Support needs to either commit to SCN and properly curate support documentation on SCN, or stop pinching community-created SCN content to respond to Service Market Place messages and to support basic installation scenarios. They need to be all in or all out. If they are going to leverage SCN, then they owe it to the community to return the favour by making their Service Market Place content (not product downloads) as accessible as SCN is.

 

Summary

I've already been approached by a member of SAP's Developer Relations team who sounds very enthusiastic about changing the process of accessing and installing Developer Editions. I'm taking the opportunity to provide his team with some feedback on what I want to see. Now it's your turn.
