ADM355 APO System Administration
ADM355
APO System Administration
THE BEST-RUN BUSINESSES RUN SAP
© SAP AG 2003
SAP R/3 4.6C SAP APO 3.1 liveCache 7.4 2003/Q2 50062596
Copyright
Copyright 2003 SAP AG. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose without the express permission of SAP AG. The information contained herein may be changed without prior notice.
Trademarks: Some software products marketed by SAP AG and its distributors contain proprietary software components of other software vendors. Microsoft®, WINDOWS®, NT®, EXCEL®, Word®, PowerPoint® and SQL Server® are registered trademarks of Microsoft Corporation. IBM®, DB2®, DB2 Universal Database, OS/2®, Parallel Sysplex®, MVS/ESA, AIX®, S/390®, AS/400®, OS/390®, OS/400®, iSeries, pSeries, xSeries, zSeries, z/OS, AFP, Intelligent Miner, WebSphere®, Netfinity®, Tivoli®, Informix and Informix® Dynamic ServerTM are trademarks of IBM Corporation in USA and/or other countries. ORACLE® is a registered trademark of ORACLE Corporation. UNIX®, X/Open®, OSF/1®, and Motif® are registered trademarks of the Open Group. Citrix®, the Citrix logo, ICA®, Program Neighborhood®, MetaFrame®, WinFrame®, VideoFrame®, MultiWin® and other Citrix product names referenced herein are trademarks of Citrix Systems, Inc. HTML, DHTML, XML, XHTML are trademarks or registered trademarks of W3C®, World Wide Web Consortium, Massachusetts Institute of Technology. JAVA® is a registered trademark of Sun Microsystems, Inc. JAVASCRIPT® is a registered trademark of Sun Microsystems, Inc., used under license for technology invented and implemented by Netscape. MarketSet and Enterprise Buyer are jointly owned trademarks of SAP AG and Commerce One. SAP, SAP Logo, R/2, R/3, mySAP, mySAP.com, and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries all over the world. All other product and service names mentioned are the trademarks of their respective companies.
Course Objectives
At the conclusion of this course, you will be able to:
Understand the fundamentals of APO, from the architecture to day-to-day administration techniques
Perform administration tasks, and monitor and troubleshoot APO systems using tools provided by SAP
Course Content
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
Duration, Target Group, Prerequisites
Duration: 2 days
Target group:
APO system administrators
APO project team members
Technical consultants
Prerequisites: SAPTEC, ADM100
APO Overview
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
© SAP AG
ADM355
1-1
APO Overview
Contents:
Supply Chain Planning
APO System Landscape and Architecture
Core Business Processes
Objectives
At the end of this unit, you will be able to:
Describe the approach of mySAP Supply Chain Management (SCM) and mySAP Advanced Planner and Optimizer (APO)
Describe the APO system architecture and landscape
Explain typical core business processes within APO
Explain the application components of APO
Check the versions of APO system components
Supply Chain
INFORMATION MATERIAL
Transfer
Transfer
Suppliers
Manufacturers
Transfer
Distributors
Transfer
Retail Outlets
Consumers
CASH
A supply chain is a set of business processes and assets (production, transportation, storage, and inventory) that links buyers and sellers.
A supply chain includes all parties involved in our business processes, from raw material suppliers through to consumers. The general goal of supply chain management (SCM) is to effectively handle and manage all of the operations and processes in the supply chain environment. Possible levels of SCM include:
Intra-enterprise
Extended enterprise
Inter-enterprise
Supply Chain Planning: Complexity of the Problem
100 plants
50 DCs
100,000 customers
50,000 materials
10,000 resources
In real life, a supply chain takes the form of a supply network rather than a linear chain and can be very complex. It can include many plants and distribution centers, as well as tens of thousands of customers, materials, and resources. Often there are several SAP or other ERP systems supporting the individual tasks in the supply chain. Our goal is to map such a network into a relational data structure and to enable fast navigation between the objects. Because of the complexity, these goals usually cannot be achieved using traditional methods such as SQL.
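The navigation goal described above can be illustrated with a toy sketch: a supply network held as an in-memory adjacency mapping, so downstream navigation is a direct lookup rather than repeated SQL joins. All location names here are invented for illustration; this is not how liveCache is implemented internally.

```python
# Toy sketch (invented names): a supply network as an adjacency mapping,
# allowing fast downstream navigation (plant -> DC -> customer).
from collections import deque

network = {
    "plant_ES": ["dc_central", "dc_north"],
    "dc_central": ["customer_1", "customer_2"],
    "dc_north": ["customer_3"],
}

def reachable(node: str) -> list:
    """All downstream nodes reachable from `node`, breadth-first."""
    seen, queue, order = {node}, deque([node]), []
    while queue:
        for nxt in network.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(reachable("plant_ES"))
# ['dc_central', 'dc_north', 'customer_1', 'customer_2', 'customer_3']
```

The point of the sketch is only that graph-shaped navigation is cheap when the structure is memory-resident, which is the motivation behind liveCache.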
Example: Supply Chain Scenario Can my system automatically trigger production in Spain and schedule shipment to my customer instead of me spending hours on the phone checking availability? Customers
Sales
Distribution Production
Real-time available-to-promise
Global, multi-site production & inventory visibility
Seamless integration
Questions often asked in a supply chain environment:
Can we get information about the availability of a product immediately?
Can production be triggered automatically by sales orders?
How can we manage production for another company location?
mySAP SCM Landscape Today
SAP CRM SAP R/3
SAP EM Plug-In
R/3 Plug-In
R/3 Plug-In
SAP R/3
OLTP CIF
Non-SAP XML, BAPI...
SAP BW
SAP Data Extractors
SAP APO
IDoc, Business Connector
SAP R/2
mySAP Supply Chain Management (mySAP SCM) offers a user-friendly, powerful, and competitive solution that enables modeling and optimization of the whole logistics chain. An Online Transaction Processing (OLTP) system (for example, an SAP R/3 Enterprise system), used mostly to support Materials Management and Financials, serves in the mySAP SCM environment as a backend system. Data is exchanged between the SAP APO system and a connected OLTP system via the Core Interface (CIF). SAP APO is an advanced planning and scheduling tool that enables real-time decision support and collaborative network optimization across the extended supply chain. SAP APO helps companies synchronize their supply chain activities with their partners, improving customer service and order fulfillment. SAP Business Information Warehouse (SAP BW) is a powerful, flexible data-warehousing solution that gathers and refines information from internal and external sources. Data exchange between APO and BW is through BW Data Extractors. The SAP Event Manager (SAP EM) offers the possibility to track events for specific objects and processes throughout the entire supply chain. It allows you to monitor, measure, and evaluate business processes, and it can automatically inform decision makers in critical situations when action is required. SAP EM is a new additional component in mySAP SCM. It runs on the standard SAP Web Application Server 6.20 architecture and requires a BASIS Plug-in.
mySAP SCM Software Components
mySAP Supply Chain Management software components: OLTP system (SAP R/3) SAP Advanced Planner and Optimizer (SAP APO) SAP Business Information Warehouse (SAP BW) SAP Event Manager (SAP EM) SAP Event Manager – Web Communication Layer (WCL)
The software components in mySAP SCM include:
OLTP system (SAP R/3)
SAP Advanced Planner and Optimizer
SAP Business Information Warehouse
SAP Event Manager
SAP Event Manager - Web Communication Layer: the WCL is the SAP EM-specific frontend, which offers information access and full supply chain visibility for SAP EM via the Internet. It is based on Java Server Pages technology.
SAP APO System
SAP APO (Advanced Planner and Optimizer) is part of the mySAP SCM solution Based on SAP Basis system, has an integrated BW engine Powerful analytical engine → APO Optimizers liveCache Open interfaces to both SAP and non-SAP systems
SAP’s Advanced Planner and Optimizer (SAP APO) plays a very important role in the mySAP SCM solution. liveCache is a data management system developed by SAP. It combines features of relational and object-oriented databases and resides in main memory of the server. Optimizers are used in the business processes to optimize the process flow and minimize costs.
SAP APO System Components SAPGUI
SAPGUI
APO
Core R/3
SAP APO APO Optimizer
liveCache
Legacy system
The SAP APO 3.1A system consists of the following components:
SAP APO (based on Basis 4.6D / BW 2.1C) running on an RDBMS
liveCache data management system 7.4
APO Optimizer
SAPGUI (>= 6.10)
APO is based on R/3 Basis (same ABAP and system administration functionality) and SAP BW. It is a much scaled-down R/3 system, containing only 4,394 tables in APO 1.1A versus 14,000+ tables in R/3 4.5A, and 7,911 tables in APO 3.0A versus 23,015 in a 4.6C IDES system. Supported core SAP R/3 releases are: 3.1H, 3.1I, 4.0B, 4.5B, and 4.6B/C. SAP APO communicates with core R/3 synchronously and with legacy systems asynchronously.
SAP APO System Landscape I Web Browser
Web server
SAPGUI
ITS
SAP APO
SAP OLTP SAP OLTP Database
SAPGUI
Database
A typical system landscape consists of one or more SAP OLTP systems and one APO system. The OLTP systems send their data to the APO system and vice versa. In a collaborative scenario, there may be an ITS Server linked to the APO system.
SAP APO System Landscape II Browser
Presentation Client
Dedicated HW/SW system for liveCache
Low impact on OLTP performance
Specialized and optimized solutions
ITS
App. Server
SAP APO
SAP OLTP SAP OLTP Database
Presentation Client
APO Application
Database Server
Web server
Optimizer
BW Layer
APO DB
Database
APO Application
liveCache
Database
liveCache uses object-oriented, memory-based computing and is based on SAP DB. Along with the ability to store data in the liveCache using ODBMS and RDBMS techniques, liveCache also contains application and business logic, thus improving speed and processing times. The APO relational database management system provides BW functionality that is mainly used by Demand Planning. SAP R/3 uses the APO Core Interface (CIF) to handle data transfer to APO; the APO CIF is part of the R/3 Plug-in. The APO modules SNP (Supply Network Planning), CTM (Capable-to-Match), PP/DS (Production Planning / Detailed Scheduling), ND (Network Design), and VSR (Vehicle Scheduling) use standalone programs called optimizers. These programs provide sophisticated optimization algorithms and also communicate with liveCache.
R/3 and APO Integration: Seamless Integration
SAP R/3 side: SD (Order Entry), MM (Inventory Management, Purchasing), PP (MRP, Production Control), LIS / BW
SAP APO side (liveCache): GATP, DP (Demand Planning, Release Demand), SNP (Distribution Planning, Planning Horizon), PP/DS (Production Planning, Scheduling)
OLTP and APO
SAP SD: Online ATP Request (ATP)
SAP: Collect Statistics (DP)
Non-SAP: Distr. Request Transport Orders
PP, Opt.
APO: Submit Production Plan
This is an example of a distributed system that integrates several standard SAP OLTP systems, a legacy system and SAP APO.
Databases in APO
An APO system has two databases: The APO DB liveCache (based on SAP DB)
The APO work processes connect to both databases If you replace the SAP kernel you may need to import two database libraries: The library for the APO DB The library for the SAP DB
As of SAP kernel release 4.5A, the database-dependent part of the SAP database interface is stored in a separate library that is dynamically linked to the SAP kernel. The file naming convention for this library is db<dbs>slib.<ext>. For example, for SAP DB the library is DBADASLIB, and the file name is dbadaslib.dll for SAP DB on Windows NT / 2000, or dbadaslib.so on Solaris or Linux. If the SAP kernel is replaced, for example through the application of a kernel patch, you may have to import the current database library patch so that it corresponds to the kernel patch.
An APO system has two databases: the APO DB, and liveCache, which is based on SAP DB. The APO work processes connect to both databases. Therefore, if you replace the SAP kernel you may need to import two database libraries: the library for the APO DB and the library for the SAP DB. However, not every new liveCache support package requires a new SAP DB library. If you need to install it, in most cases a prompt will appear to inform you about this.
The patch numbers of disp+work and of the database libraries are listed when you run the command disp+work -V. You can also get this information from the developer traces of the work processes, or in transaction SM51 by choosing Release Info. For more information about the DBADASLIB import, see SAP Notes 400818 and 325402.
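The naming convention can be sketched as a small helper. This is an illustration only, not an SAP tool; the platform-to-extension mapping is an assumption based on the dbadaslib.dll / dbadaslib.so examples in the text.

```python
# Illustration (not an SAP tool): build the expected database-library file name
# from the db<dbs>slib.<ext> convention described above. The extension mapping
# is assumed from the examples in the text (.dll on Windows, .so on UNIX/Linux).
def db_library_name(dbs: str, platform: str) -> str:
    """Return the expected library file name, e.g. 'dbadaslib.dll'."""
    ext = {"windows": "dll", "solaris": "so", "linux": "so"}[platform.lower()]
    return f"db{dbs.lower()}slib.{ext}"

print(db_library_name("ada", "windows"))  # dbadaslib.dll (SAP DB / liveCache library)
print(db_library_name("ada", "linux"))    # dbadaslib.so
```

The same pattern would yield the library name for other database types by substituting a different <dbs> token.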
Database Server
APO Data
APO DB live Ca c he
Master data: resources, materials, locations, setup matrices, PPMs, planning books, transport relations, customizing data, historical data, and demand planning forecasts
Parts of the master data used for planning, without resource names or material descriptions
Transaction data such as customer and transport orders, and data resulting from the explosion of bills of material
In SAP APO, some data is stored only in the APO DB, some only in liveCache, and some in both. liveCache contains the data needed for the planning steps. Data generated from planning is transported from liveCache back into the APO DB and the OLTP systems. The APO DB and liveCache must always be consistent; this is called internal data consistency. Consistency between the APO system and external SAP systems (or other OLTP systems that exchange data with APO) is called external data consistency.
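The idea of internal consistency can be illustrated with a toy sketch: every planning-relevant object must exist on both sides. This is not an SAP tool (the real checks are the SAP-delivered consistency reports covered later in this course), and the order numbers are invented.

```python
# Toy sketch (not an SAP tool): internal consistency means every
# planning-relevant order exists in both the APO DB and liveCache.
# Order numbers below are invented for illustration.
def consistency_report(apo_db_orders: set, livecache_orders: set) -> dict:
    return {
        "only_in_apo_db": sorted(apo_db_orders - livecache_orders),
        "only_in_livecache": sorted(livecache_orders - apo_db_orders),
        "consistent": apo_db_orders == livecache_orders,
    }

report = consistency_report({"4711", "4712", "4713"}, {"4712", "4713", "4714"})
print(report["consistent"])        # False
print(report["only_in_apo_db"])    # ['4711']
print(report["only_in_livecache"]) # ['4714']
```

Orders present on only one side are exactly the inconsistencies such a check would flag for repair.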
What is liveCache?
A data management system:
Manages large amounts of data in main memory
Has hybrid functionality (relational and object-oriented databases)
Business logic runs in liveCache applications (COM routines) in the context of the liveCache process
Integration with the SAP system: same as standard database integration, via the SQL interface
Standard transaction handling: locks, rollbacks, commits
(Architecture diagram: Presentation Client, App. Server with application buffer, liveCache, Database Server with database buffer)
The built-in business functions to access and process objects stored in liveCache are implemented in C++ using the COM architecture (Component Object Model). These routines are compiled into native machine code and dynamically linked (DLLs or shared libraries) with the liveCache kernel. The new name for COM routines is liveCache applications (LcApps). Processing of the LcApps is invoked by stored procedures called from ABAP reports. In the operating system, the files sap*.dll and sap*.lst contain the COM routines. The routines are stored in the directory /sapdb//db/sap. Since January 2002 there have been no differences between liveCache on MS Windows Server and UNIX. This is valid for all SAP APO releases >= SAP APO 3.0A.
Performance Benefits of liveCache
Avoids/reduces disk I/O: Data is kept in main memory (large data cache is necessary) All operations are run in memory
Reduces dataflow between application and data management: Application logic runs where the data is and minimizes network load
Reduces runtime of operations on data: Application logic operates with COM routines on object data faster than SQL commands on relational data
APO Optimizers
A wide variety of computational solvers applied to specific planning functions with industry specific variations Available optimizers: Supply Network Planning (SNP) Detailed Scheduling (PP/DS) Capable-to-Match (CTM) Sequencing (SEQ) Network Design (ND) Vehicle Scheduling and Routing (VSR) Model Mix Planning (MMP)
C++ instead of ABAP: faster programs, complex data structures
Built on top of C libraries from ILOG
Only available on Windows platforms
SAP APO 3.0A / 3.1 Supported Platforms
Heterogeneous OS combinations between liveCache, APO DB, and application server are supported
Presentation server: MS Windows, SAP GUI for HTML, thin clients supported
Application & DB server: AIX, HP-UX, Solaris, Tru64, MS Windows Server, Linux as of APO 3.0A SR3
Databases: SAP DB, DB2/UDB, DB2/390 (as of APO 3.0A since February 2001, DB server only), MS SQL Server, Informix, Oracle
SAP liveCache: 64-bit AIX, 64-bit HP-UX, 64-bit Solaris, Tru64 UNIX, MS Windows Server (also with AWE)
Optimizer: MS Windows Server
64-bit Windows 2003 Server on IA-64 is planned for most SCM architecture components
Please see http://service.sap.com/scm → mySAP SCM Technology → Platforms and System Requirements → link Availability of DB & OS Platforms for SAP APO 3.1. Here you will find the supported DB & OS versions for all SAP APO architecture components (SAP APO DB, application server, liveCache, optimizer), minimum required OS patch levels and OS parameterization for liveCache, SAP GUI information, and so on. Note that the DB & OS platform support matrix for SAP APO 3.1 is similar to, but not exactly the same as, the one for SAP APO 3.0A. Differences are mostly minor and relate only to operating system releases for liveCache and the optimizers, or to dependencies on liveCache 7.4. For liveCache, some new operating system releases will only be released for liveCache >= 7.4. Optimizers in SAP APO 3.0A run on Windows NT Servers; SAP APO 3.1 optimizers are now supported only on Windows 2000 Advanced Server as a minimum. New Windows operating system releases are planned to be released only with the SAP APO 3.1 optimizers.
APO Release Status
APO 3.0A is based on BW 2.0 and SAP Basis 4.6C with SAP kernel 4.6D Supported with liveCache 7.4 Support of liveCache 7.2 ends in December 2003
APO 3.1 is based on BW 2.1C and SAP Basis 4.6D Supported with liveCache >= 7.4
Components of an SAP APO 3.x System
An SAP APO system comprises many components, and each component may require maintenance and/or support packages:
SAP GUI with APO add-on (on its own operating system)
APO 3.x with BW 2.x on SAP Basis 4.x, running on a database (operating system)
Optimizers (operating system)
liveCache with COM routines (operating system)
SAP APO 3.1 SP Components and Versions
SAP APO 3.1 support package requirements include:
ABAP SPs: SPAM update 4.6D, SAP Basis 4.6D, SAP ABA 4.6D, SAP BW 2.1C, SAP APO 3.1
SAP APO component SPs
liveCache 7.4 COM version (relative to the APO SP and liveCache build)
SAP APO Optimizer build level
SAP kernel & other (R/3) executable builds: SAP kernel 4.6D, dbadaslib (database interface library for liveCache)
SAP GUI / frontend upgrades: core SAP GUI upgrades (6.10 or higher), SAP APO 3.1-specific upgrades / .ocx files
Besides SAP component SPs, don't forget the operating system and database patches
The SAP APO component Support Package is a transport, SAPKY. In addition to the SAP component support packages, you may also have to install the operating system or the database support packages. For more information, see the SAP Note corresponding to the SAP APO support package you are installing.
Compatibility Between APO Software Components
SAP kernel 4.6 and its patches are downward compatible with SAP Basis 4.6 and can be exchanged independently of the rest of the APO components. In general, SAP BW support packages for APO can also be exchanged independently of the rest. Each SAP APO Optimizer patch is shipped with one particular SAP APO SP: downward compatibility with lower SAP APO SPs cannot be guaranteed.
New qRFC versions are downward compatible.
See http://service.sap.com/r3-plug-in → SAP R/3 Plug-In → Integration of SAP R/3 and mySAP.com components → SAP APO
APO SP ABAP / COM / liveCache Compatibility
Each COM Build is supported with ONE specific liveCache version:
A new liveCache version requires a new COM Build version
A new COM Build version does not necessarily require a new liveCache version
The "COM Build - liveCache" combination is only downward compatible with the ABAP part of the SAP APO Support Packages:
You can upgrade the COM Build AND liveCache without upgrading the ABAP part of the SAP APO Support Package (the latest COM Build and liveCache work with SPn, SPn+1, SPn+2, ...)
You cannot apply a new SAP APO Support Package without upgrading to the corresponding COM Build AND liveCache releases
Exceptions to these rules will be properly documented
To Find out Versions of APO System Components
Use System → Status to check the current patch level of an SAP APO system
Use transaction LC10 to check the liveCache kernel version
Use transaction /SAPAPO/OM04 to display the COM object version
Use transaction /SAPAPO/OPT09 to view the versions of the optimization programs
Use About SAP Logon to find the SAP GUI release on your local PC
To display the current patch level of the APO system, choose System → Status and select Component information. To check the liveCache kernel version, call transaction LC10 and choose liveCache Monitoring. Every new APO support package includes a new COM object build; the Changelist number identifies the COM object build. To display the changelist number of the currently installed SAP APO COM object, choose Tools → APO Administration → liveCache/COM routines → Tools → COM Version (transaction /SAPAPO/OM04). To view the identifiers and versions of the optimization programs available on the optimization servers, choose Tools → APO Administration → Optimization → Version Display (transaction /SAPAPO/OPT09). Applications in APO use new additional frontend functionality, so the APO add-on is required in the APO frontend. If the frontend release does not match the APO version, this may cause runtime errors during application transactions. If you find runtime errors such as CNTL_ERROR in your APO system, check the frontend release of your SAPGUI; SAP recommends using at least SAP GUI 6.10 with the current patch. To find information about the frontend release on your local PC, call SAPLOGON, choose About SAP Logon, and check the File Version number (such as 4640.3.0.8841). The first two digits represent the SAP Basis release, the third digit is the release letter, and the single digit after the first dot is the compilation of the SAP GUI.
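The file-version scheme just described can be decoded mechanically. A minimal sketch, assuming only the rules stated above; the function and key names are invented for illustration, and the release digit is returned as-is rather than mapped to a letter.

```python
# Hedged helper (invented name): decode a SAP GUI file-version string such as
# '4640.3.0.8841' using the scheme described above: first two digits = SAP
# Basis release, third digit = release indicator, first digit after the
# dot = compilation.
def parse_sapgui_version(file_version: str) -> dict:
    head, compilation = file_version.split(".")[:2]
    return {
        "basis_release": f"{head[0]}.{head[1]}",  # e.g. '4.6'
        "release_digit": head[2],                 # release indicator as shipped
        "compilation": int(compilation),
    }

print(parse_sapgui_version("4640.3.0.8841"))
# {'basis_release': '4.6', 'release_digit': '4', 'compilation': 3}
```

For the example in the text, 4640.3.0.8841 decodes to Basis release 4.6, release digit 4, compilation 3.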
Overview: System Status
An SAP APO system always includes these components
To display the current patch level of the SAP APO system, choose System → Status and select Component information.
liveCache Kernel Version: Transaction LC10
To check the liveCache kernel version, run transaction LC10 and choose liveCache Monitoring or liveCache: Console.
COM Routines: Transaction /SAPAPO/OM04
Example output of transaction /SAPAPO/OM04 for SAP APO 3.1:

CLIST    BUILD  SP   LC               NOTE     DIR
287644   01     01   7.4.1-build-015  455402   /SP01/SAPCOM31_01.SAR
289050   02     02   7.4.1-build-016  488298   /SP02/SAPCOM31_02.SAR
         03     === not released ===
291822   04     03   7.4.2-build-003  494681
293080   05     04   7.4.2-build-003  502934
294704   06     04   7.4.2-build-005  514569
See SAP Note 455457
To display the changelist number of the currently installed SAP APO COM object, choose Tools → APO Administration → liveCache / COM Routines → Tools → COM Version (transaction /SAPAPO/OM04). The changelist number identifies the COM object build. For example, the changelist number 294704 shown in the graphic corresponds to COM object build 06 of APO 3.1. To check the assignment between changelist numbers and COM object build numbers (and also the locations of COM object builds on sapservX), see SAP Note 326494 for SAP APO 3.0A and 455457 for SAP APO 3.1. Every new APO support package includes a new COM object build. Additionally, intermediate COM object builds may be released between support packages. The intermediate builds usually contain performance improvements but may also include important repairs. COM object builds are usually downward compatible and so you can upgrade to higher COM object builds without upgrading to a higher support package. However, you cannot install a support package together with a COM build older than the build shipped with that support package. For example, with APO 3.1 Support Package 03, you need COM object build 04 or higher, with Support Package 04, you need COM object build 05 or higher, and so on. To check whether your COM routines correspond to your APO Support Package, choose Tools → APO Administration → liveCache / COM Routines → Tools → liveCache Test Program (transaction /SAPAPO/OM03, or run report /SAPAPO/OM_LCCHECK in SE38). If your COM build level is not high enough, you get the message COM routines and/or liveCache are buggy. For a detailed description of the procedure for replacing COM objects for liveCache in an SAP APO system, see SAP Note 157265 for APO 3.0A and 456744 for APO 3.1.
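The build/SP dependency described above follows a simple pattern in the examples given (Support Package 03 needs COM build 04 or higher, SP 04 needs build 05 or higher). A hedged sketch of that rule for APO 3.1; the function names are invented, and you should always confirm against the SAP Note for the specific support package, since exceptions are possible.

```python
# Sketch of the compatibility rule quoted above for SAP APO 3.1.
# Assumption: "Support Package n requires COM build >= n + 1", which matches
# the examples in the text (SP 03 -> build 04, SP 04 -> build 05). Exceptions
# would be documented in the relevant SAP Notes; this is illustration only.
def min_com_build(support_package: int) -> int:
    return support_package + 1

def com_build_ok(support_package: int, com_build: int) -> bool:
    return com_build >= min_com_build(support_package)

print(com_build_ok(3, 4))  # True  (SP 03 with COM build 04)
print(com_build_ok(4, 4))  # False (SP 04 needs COM build 05 or higher)
```

COM builds are downward compatible, so a build higher than the minimum also passes the check.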
Optimizer Version: Transaction /SAPAPO/OPT09
To view the identifiers and versions of the optimization programs available on the optimization servers, choose Tools → APO Administration → Optimization → Version Display (transaction /SAPAPO/OPT09).
How to Find SAP GUI Release
About SAP Logon
For information about the frontend release on your local PC, call SAPLOGON and choose About SAP Logon.
SAP GUI Release
Check the File Version number: the first two digits represent the SAP Basis release, in this case 6.10. The single digit after the first dot gives the compilation of the SAP GUI. Frontend packages for Windows are now delivered as compilations.
Runtime Errors Caused by Incomplete Frontends
If the frontend release does not match the APO version, this might cause runtime errors during application transactions. Some transactions need OCX files on the frontend; these files are also called controls (OCX = Object Component Extension). If you find runtime errors such as CNTL_ERROR in your APO system, check the frontend release of the PC and look for the latest SAP Notes about the APO frontend. For an overview of the frontend packages delivered on CDs, see SAP Note 166130.
APO Frontend Patch
Additionally: APO SAP GUI patch for update of OCX files Procedure for import described in SAP Note 422446 Path to OCX files: choose Program Files >> Common Files >> SAP [ Shared ] >> System
There is a special APO SAP GUI patch for updating the controls (OCX files). To access an APO server, you must have this patch installed on your frontend. For a detailed description of how to import the patch, see SAP Note 422446. To check the controls installed on your local PC, choose Program Files >> Common Files >> SAP Shared or SAP (depending on the installation type) >> System.
APO Restrictions: Client Concept
APO is client-dependent, with restrictions.
Current restriction: BW InfoCubes are client-independent. An InfoCube can only be used within one single client, which is determined by the first call of the BW-dependent application. It is not possible to change the productive client at a later stage.
All APO applications using BW InfoCubes are client-independent:
Demand and Supply Planning (planning area)
Product allocation check (Global ATP) if BW InfoCubes are used
Collaborative Demand and Supply Planning
Network Design if data is supplied through a BW InfoCube
InfoCube-dependent applications such as DP and/or SNP are restricted to one client only!
Since SAP APO is based on SAP BW technology, it is a single client system. SAP APO 3.0A made all components except Demand Planning multi-client. However, because of the interaction among these components, whenever DP data is used the multi-client concept does not apply. Note: There is an effort underway to make Demand Planning multi-client as well.
APO - Product Map
(Product map: planning functions arranged along an operational, tactical, and strategic time axis)
SCC: Global Visibility & Performance Measurement, Collaborative Planning, Global ATP, CTP
DP: Demand Planning; DP/SNP: Sales & Operations Planning
PP: MPS, Block Planning; DS: SFS, PFS, RS; Deployment/TLB
SNP: Concurrent DRP/MPS/MRP/CRP
Network Design, VMI, Transportation Planning & Vehicle Scheduling, Scheduling
Abbreviations:
SCC: Supply Chain Cockpit
DP: Demand Planning
SNP: Supply Network Planning
PP: Production Planning
DS: Detailed Scheduling
VMI: Vendor Managed Inventory
CTP: Capable to Promise
MPS: Master Production Scheduling
TLB: Transport Load Builder
PFS: Process Flow Scheduler
RS: Repetitive Manufacturing
SAP APO Application Components
Application Components Network Design (ND) Demand Planning (DP) Supply Network Planning (SNP) Production Planning and Detailed Scheduling (PP/DS) Transportation Planning and Vehicle Scheduling (TP/VS) Global Available To Promise (GATP) Collaborative Planning (CP)
Network Design
Design and Redesign Your Supply Chain
Within Network Design, you can make tactical and strategic decisions about your supply chain. With the help of simulations and what-if analysis, you can redesign your supply chain. A supply chain network includes: Existing and potential locations (suppliers, plants, distribution centers, customers) Existing and potential transportation lanes Products (final, semi-final, raw materials) Resources (production, handling) Production process models Required delivery times per location and product Example: product A in location 0001: 10% < 8 hours, 90% < 24 hours Strategic aspects/decisions: Which products should we assign to which locations? How can we satisfy the demands of customers and distribution centers (DC)? What are the capacity requirements for plants and DCs? What is the cost situation in the supply chain network? Where should we place new locations? How many new locations minimize the total costs? Where should we locate production to minimize transportation times?
Demand Planning
Multiple demand streams
Statistical modeling: time series models, causal models, pick the best
Promotion planning Pattern database % & Unit based lifts
OLAP+
Models
Macros
Profile estimation
Life cycle planning
InfoCubes
Like modeling Phase in/out profiles
Administration Workbench
Demand Planning (DP) identifies and analyzes patterns and fluctuations in demand and creates accurate, dynamic demand forecasts. DP forecasts future demand for products using historical and current sales data. It also uses sophisticated statistical models such as time series. The sales order information can be received from SAP OLTP, SAP BW, or legacy systems. The forecast can be adjusted for promotion and product life cycle planning. For make-to-stock environments, DP drives the entire manufacturing planning process by forecasting future requirements.
Supply Network Planning
100 plants
50 DCs
100,000 customers
50,000 materials
10,000 resources
Supply Network Planning (SNP) matches purchasing, production and transportation processes with demands, optimizing and balancing your entire supply network. SNP makes use of advanced heuristics, constraint based programming, and optimization techniques such as mixed integer linear programming to simultaneously optimize distribution, production, and procurement. SNP consists of: Global Load Balancing Vendor-Managed Inventory (VMI) Optimized Distribution
Production Planning / Detailed Scheduling
Production Planning and Detailed Scheduling optimizes the use of resources and creates accurate plant-by-plant production schedules. This shortens production life cycles and helps you respond rapidly to the changes needed to meet market demands.
Production Planning: Rapid-response production planning uses dynamic pegging and optimization techniques to generate executable plans.
Detailed Scheduling: Real-time scheduling for finite sequencing and final assignment of production resources, creating an optimal production schedule.
Transportation Planning / Vehicle Scheduling APO
Short-term order-based planning
Transportation Planning
Consolidate route transportation requirements in terms of days
Vehicle Scheduling
BW
LES (Logistics Execution System)
TP/VS helps to optimize the use of transport resources (vans, trains, ships, airplanes) to perform deliveries as punctually and economically as possible. TP/VS is used for short-term, order-based planning (on a scale of days). For mid- or long-term planning, DP and SNP are typically used to determine aggregate transportation requirements (weeks/months/years). Together, TP/VS (planning), LES (execution), and SAP BW (monitoring) deliver a complete solution for Transportation Management within Supply Chain Management.
APO Planning Process
Demand Planning
Unconstrained Aggregate Planning
Supply Network Planning
Constrained Mid-Term
Production Planning Detailed Scheduling
Transportation Planning / Deployment
Sourcing/Replenishment
Forecast
Planned Prod. Orders
Stock Transfer Orders
Constrained Detailed Planning (Plant Level)
Constrained Load Consolidation Delivery Scheduling
Planned Delivery Items Planned Shipments
Vehicle Scheduling
APO Planning Horizon Suppliers
Plants
Distr. centers
Retailers
Planning Horizon Demand Planning (6-24 month)
Supply Network Planning (1-6 month)
Production Planning (4-8 weeks)
Detailed Production Scheduling (1 week)
Deployment (1 day)
Global Available To Promise
(1) Enter customer order: Customer (Lille)
(2) Check availability: SAP APO Global ATP (confirmation for Spain)
(3) Create sales order: Sales (Nice)
(4) Trigger production order: Plant (Madrid)
(5) Ship to customer
Global Available To Promise (ATP) matches supply with demand on a worldwide scale and gives your customers reliable delivery commitments by providing:
Real-time checks against the current production plan
Sophisticated simulation methods that take capacity constraints into account
Global ATP's multi-level, rule-based availability check considers allocations, production and transportation capacities, and costs in your environment. It also takes alternative locations, products, or components into consideration.
The example shows three areas from a supply chain: customer, sales department, and plant. The processing steps are:
A customer calls to order some goods.
The salesman answering the call enters the sales order temporarily, without saving it (1).
The availability is checked in the APO system; possible problems are solved by the salesman (2).
The salesman creates the sales order permanently (3).
The APO system triggers the creation of a production order in the production system (4).
The product is shipped to the customer (5).
Important for step 2: the APO system always has the latest information about the production situation and availability.
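The availability check in step 2 can be pictured as stock plus all receipts scheduled up to the requested date. This is only a toy model of the rule-based GATP check (it ignores allocations, alternative locations, and substitutions); the quantities and dates are invented:

```python
from datetime import date

def atp_check(stock, receipts, requested_qty, requested_date):
    """Confirm the quantity available by the requested date:
    current stock plus receipts scheduled on or before that date."""
    available = stock + sum(qty for d, qty in receipts if d <= requested_date)
    return min(requested_qty, available)

# Hypothetical stock level and scheduled production receipts
receipts = [(date(2003, 5, 10), 30), (date(2003, 5, 20), 50)]
confirmed = atp_check(stock=20, receipts=receipts,
                      requested_qty=80, requested_date=date(2003, 5, 15))
```

With a requested date of May 15, only the first receipt counts, so just 50 of the 80 requested units are confirmed; a later requested date would confirm the full quantity.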
Collaborative Supply Chain Management Collaborative Production Planning Collaborative Purchasing Planning
Production Partners
Suppliers
Procurement
E-Procurement
Suppliers
Production
Development Partners Collaborative Engineering
Budget Planning
Sales Forecast
Logistics Service Providers Collaborative Transportation Management
Collaborative Forecasting
Local Customers Subsidiaries
Sales Force
Customers Internet Sales
APO Demand Planning also supports collaborative planning among business partners such as distribution centers and manufacturers. The collaboration partners can provide joint inputs to the planning modules. Partners can exchange data in two ways:
Automatically, using time series data exchange between SAP systems
Manually, via a Web browser, for collaboration between SAP and non-SAP systems
Time series data exchange: planning book data can be exchanged directly between two APO systems. In this step, you transfer planning data stored in a planning book from one APO system to another. This type of data transfer can be used, for example, in the following collaborative scenarios:
CPFR consensus-based forecasting between manufacturer and customers
Exchange of demand data between manufacturer and supplier
Exchange of inventory data between manufacturer and supplier
Using only a Web browser, the external partner can:
View planning data
Enter, edit, or delete data, for example the weekly forecast
Choose and execute a macro for data evaluation
Drill down with standard functionality on one characteristic at a time
Sort columns of characteristic values
Access APO alerts that are displayed in the Alert Monitor MiniApp
For data exchange via a Web server, you need the SAP Internet Transaction Server (ITS).
Supply Chain Cockpit
Network Map
Planning Objects
APO
Control Panel
Toolbar
Alert Monitor
Lens
The Supply Chain Cockpit models, monitors, and manages a supply chain with a specially designed graphical user interface, providing users with a bird's eye view of all activities and applications. The Supply Chain Engineer builds a graphical model of the supply chain.
Business Scenario 1: Sales Order Management with APO
SAP OLTP System
SAP APO
Create inquiry
Global availability check (GATP)
Non-SAP System
Create quotation Create sales order Create delivery Generate picking list / request Send / print delivery documents Post goods issues Create invoice Send / print billing document
In SAP OLTP, the creation of a sales order line item initiates the shipment schedule and activates APO ATP. ATP first checks product availability according to the predefined rules. Global ATP also supports product allocation, which can be executed before or after the product availability check. The result is written to a temporary quantity assignment.
If the available quantity does not meet the requirement, ATP explodes the PPM and creates a Capable To Promise (CTP) production schedule. As an alternative, ATP can be configured to check against the forecast rather than stock or incoming receipts. CTP creates temporary planned orders or purchase requisitions in APO. The resulting ATP delivery proposal can be adapted and confirmed. The confirmation of the order creates schedule lines in the OLTP system. When the sales order is saved, the temporary orders are converted to permanent planned orders or purchase requisitions in both the OLTP system and APO. At this point, the reserved quantity updates the time series in liveCache and the temporary quantity assignment is deleted.
In SAP OLTP, a local ATP check can be used to check availability for products that are not planned in APO. The local ATP check covers primary locations and specific products. The APO GATP check is capable of checking alternative locations and substitution products, as well as exploding a PPM for a planned order and returning a CTP proposal. CTP creates planned production orders for in-house production products, and planned purchase requisitions for externally procured products.
Business Scenario 2: Sales Order with CRM + ATP with APO
SAP OLTP System
SAP APO
(1) Create sales order (SAP CRM)
(2) Call up ATP: global availability check (GATP), create temporary quantity assignment (SAP APO)
(3) Save sales order, replicate sales order in SAP OLTP
(4) Update liveCache / delete quantity assignment
(5) Create sales order with confirmed quantities (SAP OLTP)
In this scenario, a sales order is created in mySAP CRM. The order line item activates APO ATP and initiates the shipment schedule. The result is written to a temporary quantity assignment which reserves the quantity against other ATP requests. When the sales order is confirmed and saved in CRM, a duplicate sales order is created in the OLTP system. The replicated order in the OLTP system initiates the schedule lines and activates APO ATP. The ATP check retrieves the reserved quantity from the temporary quantity assignment. As the sales order is saved in the OLTP system, the reserved quantity updates time series in liveCache and deletes the temporary quantity assignment.
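The role of the temporary quantity assignment described above can be sketched as a two-phase reservation: the CRM-side ATP check reserves quantity so that parallel checks cannot confirm it twice, and saving the replicated OLTP order consumes the reservation. This is only a toy model with invented order IDs and quantities, not the liveCache implementation:

```python
class AtpServer:
    """Toy model of APO's temporary quantity assignment."""
    def __init__(self, stock):
        self.stock = stock
        self.reservations = {}

    def check_and_reserve(self, order_id, qty):
        # Confirm only what is not already reserved by other checks.
        free = self.stock - sum(self.reservations.values())
        confirmed = min(qty, free)
        self.reservations[order_id] = confirmed
        return confirmed

    def consume(self, order_id):
        # Called when the replicated sales order is saved in the OLTP
        # system: reduce stock (standing in for the liveCache time
        # series update) and delete the temporary quantity assignment.
        qty = self.reservations.pop(order_id)
        self.stock -= qty
        return qty

server = AtpServer(stock=100)
first = server.check_and_reserve("CRM-1", 60)   # confirms 60
second = server.check_and_reserve("CRM-2", 60)  # only 40 still free
server.consume("CRM-1")
```

Without the reservation step, two concurrent ATP checks against the same stock could both confirm the full quantity, which is exactly what the temporary quantity assignment prevents.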
Business Scenario 3: Production Planning in APO (DP)
SAP OLTP System Sales Information System (SIS)
Perform Material Requirements Planning (MRP)
SAP APO Demand Planning Data Load into APO Data Mart Execute Promotion Planning
Determine Realignment Procedures
Execute Forecast Run
SAP BW Update Info cubes
Legacy / Flat File Extract Data
Forecast Accuracy Reporting Manufacturing / Purchasing
Execute Consensus Meeting
Confirmation of Final Forecast
Collaboration Partner Forecast / Promotion Data Collection/ Distribution
Release to OLTP or SNP SNP * Constrained Demand Plan
* Alternative steps, capacity constraint-based planning
This example shows the usage of some major planning modules in APO, especially Demand Planning (DP). DP forecasts future demands for products based on both historical and current sales data. Sales order information can be received from the OLTP system, BW, or legacy systems. DP supports collaborative planning among business partners. Collaboration partners provide inputs to the forecasting run and perform interactive consensus on forecasting results. Before the final forecast is confirmed, it can be released to SNP for rough-cut capacity planning and then released to the OLTP system (or legacy ERP systems) for MRP planning.
Business Scenario 4: Production Planning in APO (SNP)
SAP OLTP System
SAP APO
Non-SAP System
Demand Planning Perform Sales and Operation Planning (SOP)
Supply Network Planning
Supplier
SNP Planning Run
Create/Update Planned Order/ Purchase Order
Perform Material Requirements Planning (MRP)
Interactive Planning Exchange Requirements with supplier * Create/Update Planned Order/ Purchase Order Release Constrained Demand Plan back to DP Release to PP/DS *
* Alternative steps
The SNP component plans supply throughout the entire supply network to meet the forecasted demands. In this scenario, the SNP planning run creates or updates planned purchase requisitions, planned production orders or planned transport orders. These orders are created for the end items and for the components modeled in the SNP PPMs. The independent and dependent requirements of these orders are transferred to the OLTP system where confirmed SAP purchase requisitions, planned orders and transport orders are created. In the OLTP system, MRP planning explodes the BOMs and creates the dependent requirements for the remaining components that are not planned in APO. In addition, the SNP planning result can be distributed to the supplier for review and update. The feedback from the supplier allows for additional adjustments of the SNP planning results.
Business Scenario 5: Production Order Processing with APO
SAP OLTP System Create planned orders Change planned orders Create production orders Release production orders
SAP APO
Non-SAP System
Production Planning Release from SNP to PP/DS Perform PP Planning Run
Print shop floor control documents
Detailed Scheduling Issue material components
DS Scheduling Run
Confirm Order processing
Global availability check (GATP)
Receive produced goods into stock
Settle production order
The PP planning run includes automatic planning, manual planning, and order processing. The DS planning run includes sequence scheduling and optimization. For products requiring in-house production, APO planning results are transferred to the SAP OLTP system as confirmed planned orders. In the OLTP system, the confirmed orders are converted to production orders. During order creation or order release, an ATP check at component level is activated. PP routines are used to process the orders through their production life cycles.
In this scenario, planned orders can be created in the OLTP system from MRP runs or in APO from PP/DS planning. The planner adjusts the planning results against capacity constraints in APO and updates the planned orders in the OLTP system. When the planned orders are converted to production orders in the OLTP system, the APO planned orders are deleted and replaced with production orders. Before the production order is released in the OLTP system, an ATP check is executed in APO for component availability. The order release updates statistics in the BW system. Once the order is released, components are issued against the order. The goods issue posts material costs to general ledger accounts and updates the inventory level, material requirements, and BW statistics. Upon completion of the production process, the order is confirmed and the capacity load is relieved.
Integration with other Applications
Demand & Supply Planning
Collaborative Planning
Production Planning
Transportation Planning
Profitable to Promise
Integration Layer Core Interface, Business Connector, APX, BAPI, XML, …
Data Warehouse
ERP
CRM
PLM
Marketplace
APO Core Interface
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
APO Core Interface
Contents:
System Roles in ERP and APO
APO Core Interface
Business System Groups
Integration Model
Data Transfer
Objectives: At the end of this unit, you will be able to:
Explain system roles and their integration
Explain the APO Core Interface
Set up a business system group
Set up an integration model
Integration: System Roles
Planning & Optimization System
Enterprise Resource Planning System
ATP Demand planning / forecast Production scheduling ...
Sales Production ...
Integrating an SAP APO system into your environment enables you to offload planning functionality from your OLTP/ERP system. SAP APO provides very powerful planning and optimization features. You should identify the role of each system in your enterprise so that all of the systems can be integrated to work together in your supply chain.
Integration: ERP and APO
Master data (ERP → APO):
Plants, vendors, customers
Materials, products
Bills of material and routings (production process models)
Characteristics
Capacities
Transactional data (ERP → APO):
Planned/production orders
Sales orders
Purchase orders
Stocks
ATP requests
Planning results (APO → ERP):
ATP results
Manufacturing orders
Procurement orders
VMI sales orders
The OLTP/ERP system provides master and transactional data to the APO system for planning, and receives back the planning results. The OLTP system remains the dominant system for master data. The APO system cannot change the master data stored in the SAP OLTP (ERP) system. However, the APO system can create and modify its own master data (for example, it can maintain some table fields). Simulation results for planning runs are not sent back to the SAP OLTP.
Interface Scenario
SAP OLTP
Non-SAP System
LO
SD
qRFC
HR
BAPI
CIF (Core Interface)
BAPI
SAP APO
Comm. IDoc
BAPIs
BAPI
RFC
Internet
Two different integration techniques are used to link SAP APO with OLTP systems.
Linking APO with one or more SAP OLTP systems: The required interface is the APO Core Interface (CIF). CIF defines and controls the data transfer between SAP OLTP and APO. The CIF interface is an add-on to the SAP R/3 System that is installed using the relevant R/3 Plug-In. CIF uses queued remote function calls (qRFC), which guarantees serialized transfer and consistent updates in the target systems.
Linking APO with non-SAP systems: The required interfaces are implemented using Business Application Programming Interfaces (BAPIs). BAPIs are documented standard interfaces that enable object-oriented access to SAP systems (via C++, Java, Visual Basic, IDocs, and COM/DCOM).
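The serialization guarantee of qRFC mentioned above can be illustrated with a toy queue model: calls sharing a queue name are delivered strictly in enqueue order, so dependent updates (create an order, then change it) never overtake each other. The queue name below is invented for illustration; real CIF queue names follow their own conventions:

```python
from collections import OrderedDict, deque

class QueuedRfc:
    """Toy model of queued RFC: strict FIFO delivery per queue name."""
    def __init__(self):
        self.queues = OrderedDict()

    def enqueue(self, queue_name, call):
        # Calls for the same object land in the same named queue.
        self.queues.setdefault(queue_name, deque()).append(call)

    def process_all(self):
        # Drain each queue front to back, preserving enqueue order.
        processed = []
        for name, queue in self.queues.items():
            while queue:
                processed.append((name, queue.popleft()))
        return processed

rfc = QueuedRfc()
rfc.enqueue("ORDER_4711", "create order 4711")
rfc.enqueue("ORDER_4711", "change order 4711")
delivered = rfc.process_all()
```

With plain (transactional) RFC, the "change" call could be executed before the "create" call under load; the per-queue FIFO is what makes the transfer order safe.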
APO Interface: General Requirements
Supply APO with planning and optimization relevant (master and transactional) data Return APO planning results to OLTP system Perform initial and incremental data transfer Transfer transactional data changes in real-time Perform data transfer to APO in a transactional manner (consistency of data in APO and OLTP) Allow uninterrupted production operation of an OLTP system Allow simultaneous access to multiple APO systems
The key function of an interface between the OLTP system and APO is to ensure that data is exchanged in a controlled environment. Features of CIF:
Transfer only relevant data objects from OLTP to APO.
Enable both initial and incremental data transfer to APO.
Forward transactional data changes to APO in real time.
Forward master data changes periodically.
Return planning results to the OLTP system either in real time or periodically. By default, PP/DS results are returned in real time and SNP results are returned periodically.
Transfer data to APO in a transactional manner to ensure data consistency.
Allow the OLTP system supplying the relevant data to operate without interruption.
Allow the OLTP system to supply data to more than one APO system simultaneously.
APO Core Interface
APO Core Interface (CIF) Connects an APO and a standard SAP OLTP system Determines source and target systems within complex system environments through Integration Models Supplies APO with the relevant master and transaction data Transfer of planning relevant data only Initial and incremental data transfer Real-time interface Returns planning results to the OLTP system
The SAP APO Core Interface:
Is used to connect APO to SAP OLTP systems
Provides a tight coupling between APO and one or more SAP OLTP systems
Is a real-time interface
Ensures the supply of relevant incremental data changes to APO
An integration model is defined in an SAP OLTP system. This model is used to select the data objects that are needed in APO's streamlined data structures and that must be transferred from the SAP data tables via the Core Interface.
APO-CIF / Plug-In Technology
APO-CIF (Core Interface) offers the technology to integrate OLTP R/3 in real time with APO. APO-CIF is installed in core R/3 with the R/3 Plug-In. APO-CIF has the following components:
Integration Model: controls the transfer of master and transaction data from OLTP to APO and back
System-internal Active Data Channel (ADC)
Message serialization (qRFC)
Event Channel, Middleware Adapter
For the compatibility matrix between Plug-In releases, APO releases, and R/3 releases / R/3 Support Packages, see http://service.sap.com/r3-plug-in → SAP R/3 Plug-In → Integration of SAP R/3 and mySAP.com components → SAP APO.
SAP R/3 Plug-in
An SAP plug-in is an interface that enables the exchange of data between two SAP systems. The SAP R/3 Plug-In, which is installed on an SAP R/3 System, integrates this system with one or more mySAP.com components (SAP APO, SAP B2B, SAP BW, SAP CRM, SAP SEM). It allows you to use several components concurrently. The SAP R/3 Plug-In supplies the mySAP.com components with transaction data and master data in real time.
Technically, the SAP R/3 Plug-In is an add-on. Add-ons are enhancements to the standard SAP R/3 software with additional functions. They are developed on the basis of SAP Basis releases in special add-on systems. Add-ons consist of newly developed add-on objects, add-on-specific customizing, and possibly also modified SAP R/3 standard objects. These objects are modified in order to adjust SAP standard functionality to meet add-on requirements. They are not Basis objects but objects from the SAP business applications such as FI and MM. Add-on functionality is based on and integrated into the SAP business applications. Because neither the SAP kernel nor the SAP Basis objects are modified, there are generally no add-on objects that are database- or operating-system-specific; instead they are based on SAP Basis interfaces.
Within SAP R/3 Enterprise, the SAP R/3 Plug-In requires the SAP Basis Plug-In as a prerequisite. The SAP Basis Plug-In is an add-on that can be installed on an SAP Web Application Server or any other product based on SAP_BASIS 620 and SAP_ABA 620 or higher. The releases of the SAP R/3 Plug-In and of the SAP Basis Plug-In depend on each other: both components must always have the same release level.
qRFC Versions (1)
When transferring data between SAP R/3 and APO, the CIF / Plug-In components use a Basis technology called queued RFC (qRFC). The qRFC is normally updated via SAP Basis Support Packages, but you can also upgrade to the latest version of qRFC (in both R/3 and APO) independently of Basis SPs.
R/3 Plug-In
Component version
APO - CIF
qRFC
qRFC version
qRFC
Basis SP
Basis release and SP level
Basis SP
Make sure that you have the correct qRFC version in your system: there are different qRFC versions depending on which SAP Basis release you are using.
!
qRFC Versions (2)
Why update just the qRFC version instead of implementing a Basis SP? A new qRFC version may be available and required in your systems before the Basis Support Package that contains it is released, or before you can implement the Basis SP in your environment. Newer qRFC versions usually have improved performance and functionality and help to improve:
Queue handling
Queue monitoring
Overall system performance
System stability
Integration Model: Data Mapping APO
APO Model Generator APO Planning Model
Automatic Mapping
SAP ERP
Modeling Tool
SAP ERP
Legacy ERP
Non-SAP ERP
To enable the integration between APO and OLTP systems, the SAP APO software includes a communication layer. This layer includes a technical component called the integration model. You define the integration model within the SAP OLTP system. Its main role is to set up and maintain a consistent data pool in both APO and SAP OLTP. The functions of the integration model are to:
Determine the source and target system
Select a consistent master data and transaction data set for the initial data load (to ensure referential integrity of data in APO)
Filter objects and routes for the incremental transfer of master and transactional data
Route APO planning results to execution (standard SAP OLTP) systems
To integrate two systems, data mapping must take place. Data mapping includes matching table/structure names and field names between systems. CIF integration models provide automatic data mapping between the complex relational model in the OLTP system and the object model in APO. Between non-SAP ERP systems and APO, other interfaces such as BAPI or ALE are used. The BAPI interface also provides automatic data mapping.
Data Transmission
Standard SAP OLTP
SAP APO
APO
APO DB Master Data Transactional Data
Initial Transfer
Incremental Transfer
liveCache
Master data representing the current factory layout on the OLTP side is mapped to a consistent data pool that is transmitted into APO. This data is made up of information such as products, business partners, and work centers. Running operations in the OLTP system are not interrupted when this transfer is made.
Modes of operation are separated into initial data transfer and incremental transfer. The first data transfer, including the basic data sets, is called initial; the transfer of changes is called incremental. Any necessary incremental transfers that include data changes from the OLTP side are filtered and routed to APO automatically (either periodically or immediately if there is a need). The referential integrity of these data pools is guaranteed.
Transactional data, which changes often, includes:
Sales orders
Process orders
Production orders
The procedure for transmitting transactional data is similar to that of the master data and includes two transmission modes: initial and incremental. After the initial transfer, only data changes need to be forwarded. Transactional data is sent in real time, or as close to real time as possible. Incremental transfer is event-driven: for transactions that change a planning-relevant element in the OLTP system, the changed data is sent immediately to APO.
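The distinction between the two transfer modes can be sketched as follows: the initial transfer ships the complete selected data set, while an incremental transfer ships only records that were created or changed since the last run. The material records below are invented, and this is a conceptual sketch rather than CIF's actual change-pointer mechanism:

```python
def initial_transfer(master_data):
    """Initial transfer: send the complete selected data set."""
    return dict(master_data)

def incremental_transfer(previous, current):
    """Incremental transfer: send only created or changed records."""
    return {key: value for key, value in current.items()
            if previous.get(key) != value}

# Hypothetical material master records before and after a change
old = {"M1": {"desc": "Screw", "weight": 0.01}}
new = {"M1": {"desc": "Screw", "weight": 0.02},   # changed
       "M2": {"desc": "Nail",  "weight": 0.005}}  # created
delta = incremental_transfer(old, new)
```

Only M1 (changed) and M2 (new) end up in the delta; an unchanged record set produces an empty incremental transfer.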
Logical Systems and Integration Models
SAP OLTP A Logical System 1 SAP APO Integ.Model 1
SAP OLTP C Logical System 4 Logical System 3
Integ.Model 3
SAP OLTP B Logical System 2
Integ.Model 2
A system within the APO system landscape is a logical system. It is either a client in the APO system, a client in a standard SAP OLTP system, or a non-SAP ERP system. Logical systems are defined and assigned unique names within the APO system and any SAP OLTP system. SAP recommends the naming convention <SID>CLNT<client>. Within standard SAP systems, an integration model is used to integrate the corresponding SAP client with the APO client. This involves creating a link from a logical system in the OLTP system to a logical system in APO, based on an RFC destination matching the name of the target logical system.
In the example shown in the graphic, each OLTP system needs a logical system defined for its own client and a logical system for the APO client. Therefore, logical systems 1 and 4 must be defined in the first OLTP system (A). The APO system needs a logical system defined for its own client and for every OLTP system that is going to be linked to it. Logical systems 1, 2, 3, and 4 must be defined in the APO system.
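The recommended <SID>CLNT<client> naming convention can be sketched as a small helper; the system IDs and client numbers below are invented examples:

```python
def logical_system_name(sid, client):
    """Build a logical system name following the recommended
    <SID>CLNT<client> convention, with a three-digit client number."""
    return f"{sid.upper()}CLNT{int(client):03d}"

# Hypothetical APO and OLTP systems
apo_ls = logical_system_name("AP1", 1)     # client 001 of system AP1
oltp_ls = logical_system_name("r31", 800)  # client 800 of system R31
```

Sticking to one convention matters because the RFC destination defined for a target system must match the name of the target logical system exactly.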
Business System Groups SAP OLTP A Logical System 1 M1 - Screw
Integ.Model 1
Business System Group (BSG): Area of the same naming convention SAP APO
Assigned to either BSG1 or BSG2 SAP OLTP C
Logical System 4 M1 - Screw
Logical System 3 M1 - Hammer
BSG1 M2 - Nail
SAP OLTP B
BSG2
Integ.Model 3
M1 - Hammer
Logical System 2 M1 - Screw M2 - Nail
Integ.Model 2
A Business System Group (BSG) integrates the APO logical system and one or more SAP OLTP logical systems into a higher-level logical unit from the point of view of APO. An RFC destination must be defined in the APO system for every target OLTP logical system. APO uses BSGs to differentiate master data from different source systems with identical material numbers. In the example shown in the graphic, material number M1 identifies (the same type of) a screw in logical systems 1 and 2 whereas the same material number identifies a hammer in logical system 3. To resolve this conflict and to be able to transfer all these SAP material numbers into APO, logical system 3 needs to be assigned to a different business system group than logical systems 1 and 2. The APO system internally uses BSG names to create unique material numbers. There must always be at least one business system group defined within APO. Both APO and each linked SAP OLTP system must be assigned to exactly one BSG. For APO itself, it does not matter which BSG it is assigned to. Each BSG contains at least one SAP source system and the different master data objects must have unique names within the group. More than one BSG is necessary if there is no unique naming convention among the SAP systems. By default, business transaction events (BTEs) are inactive for APO integration. Changes to transaction data are not transferred from SAP OLTP to APO, although the initial transfer works when you activate the corresponding integration model. To activate the events, call transaction BF11 to display the view Application indicator and flag the box ND-APO. For details, see SAP Note 322800.
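The way BSG names disambiguate identical material numbers can be sketched as a qualified key. This is only an illustration of the idea; APO's real internal naming scheme for products is more involved:

```python
def bsg_product_key(bsg, material):
    """Qualify a material number with its business system group so
    identically numbered materials from different BSGs stay distinct."""
    return f"{bsg}:{material}"

# Material M1 is a screw in the BSG1 systems but a hammer in the
# BSG2 system; the qualified keys no longer collide.
screw = bsg_product_key("BSG1", "M1")
hammer = bsg_product_key("BSG2", "M1")
```

Within one BSG the material numbers themselves must already be unique, which is why systems without a common naming convention need separate BSGs.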
Creation of a Business System Group (1)
In the SAP APO system Define all logical systems (transaction SALE) Assign the local logical system to the corresponding client (transaction SCC4) Define an RFC destination for every target system (transaction SM59)
SAP APO
Steps to be done in the APO system:
Define a local logical system and one logical system for every combination of target SAP R/3 System and client: Choose Tools → AcceleratedSAP → Customizing → Carry Out Project (transaction SPRO) → SAP Reference IMG → SAP APO – Implementation Guide → SAP R/3 Basis Customizing → Application Link Enabling (ALE) (transaction SALE) → Sending and Receiving Systems → Logical Systems → Define Logical System.
Assign the local logical system to the corresponding client: Choose Tools → AcceleratedSAP → Customizing → Carry Out Project → SAP Reference IMG → SAP APO – Implementation Guide → SAP R/3 Basis Customizing → Application Link Enabling (ALE) → Sending and Receiving Systems → Logical Systems → Assign Client to Logical System (transaction SCC4).
Define an RFC destination for every target system: Choose Tools → AcceleratedSAP → Customizing → Carry Out Project → SAP Reference IMG → SAP APO – Implementation Guide → SAP R/3 Basis Customizing → Application Link Enabling (ALE) → Sending and Receiving Systems → Systems in Network → Define Target Systems for RFC Calls (transaction SM59).
For the third step, you must create an RFC user with full authorizations in every target system. See SAP Note 352844.
Creation of a Business System Group (2)
Define a Business System Group (transaction /SAPAPO/C1) Assign every logical system to the Business System Group (transaction /SAPAPO/C2)
Steps to be done in the APO system (continued):
Define a business system group: Choose Tools → AcceleratedSAP → Customizing → Carry Out Project → SAP Reference IMG → SAP APO – Implementation Guide → SAP Advanced Planner and Optimizer (SAP APO) → Basic Settings → Integration → Business System Group → Maintain Business System Group (transaction /SAPAPO/C1). Create a new entry with a unique name for the BSG and a description.
Assign the local logical system and every target logical system to one BSG: Choose Tools → AcceleratedSAP → Customizing → Carry Out Project → SAP Reference IMG → SAP APO – Implementation Guide → SAP Advanced Planner and Optimizer (SAP APO) → Basic Settings → Integration → Business System Group → Assign Logical System and Queue Type (transaction /SAPAPO/C2). Create a new entry for each logical system.
- For an SAP R/3 System, enter an X into the field R/3 flag, and the SAP Basis release into the field SAP Rel.
- For an SAP APO system, leave the field R/3 flag empty, and enter the SAP APO release into the field SAP Rel.
In general, APO can use more than one client in the system. However, DP depends on SAP BW, and BW is limited to using only one client.
SAP R/3: Integration with an SAP APO System (1)
In the SAP OLTP system Define two logical systems: One for the APO system A local one
Assign the local logical system to the corresponding client Define an RFC destination for the APO system
Steps to be done in the standard SAP R/3 OLTP system: Define two logical systems, one representing the APO system and the local one. Call transaction SALE and choose Sending and Receiving Systems >> Logical Systems >> Define Logical System. Assign the local logical system to the corresponding client. Call transaction SALE and choose Sending and Receiving Systems >> Logical Systems >> Assign Client to Logical System. Define an RFC destination for the APO system. Call transaction SM59 (or in SALE choose Sending and Receiving Systems >> Systems in Network >> Define Target Systems for RFC Calls). For the third step, you must create an RFC user in the APO system with full authorizations. See SAP Note 352844. If ATP checks are started in APO from within an SAP OLTP system, the RFC user in the APO system must be a DIALOG user or a SERVICE user. If you wish to use this RFC destination for debugging, all issues mentioned for the SAP OLTP system as target system apply.
SAP R/3: Integration with an SAP APO System (2)
- Assign the type SAP_APO and the APO release to the logical system of APO (transaction NDV2)
- Assign an operation mode to the target system APO (transaction CFC1)
- Activate the business transaction event for APO integration in BF11
Steps to be done in the standard SAP OLTP system (continued): Assign the type SAP_APO to the logical system of APO. Choose Logistics >> Central Functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Settings >> APO Releases (transaction NDV2). Create a new entry with the corresponding logical system name and system type SAP_APO. As the OLTP system performs the integration in different ways for different APO releases, you must also enter the release level of the target APO system. Assign an operation mode. Choose Logistics >> Central Functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Settings >> Target Systems (transaction CFC1). Create a new entry with the appropriate logical system name. You cannot enter the operation mode manually. It is assigned automatically during the activation of the integration model. The system can also change it later automatically. Possible values for the operation mode are: I for initial data transfer (standard setting for integration models covering master data), T for activating incremental transfer of transaction data (standard setting for integration models covering transaction data), or D for down (no data transfer). Depending on the R/3 Plug-in version, business transaction events (BTEs) may be inactive for APO integration. If they are inactive, changes to transactional data are not transferred from SAP OLTP to APO, although the initial transfer works when you activate the corresponding integration model. To activate the events, call transaction BF11 to display the view Application indicator and flag the box ND-APO. For details, see SAP Note 322800.
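The three operation mode values described above can be summarized in a small sketch. This is plain Python purely to illustrate the I / T / D semantics; the mode is assigned by the SAP system itself during integration model activation, and this mapping is not SAP logic.

```python
# Sketch of the CFC1 operation modes (illustrative only, not SAP code):
# I = initial data transfer (master-data integration models)
# T = incremental transfer of transaction data active
# D = down, no data transfer
def operation_mode(model_kind, active):
    if not active:
        return "D"  # no data transfer
    return "I" if model_kind == "master" else "T"

assert operation_mode("master", active=True) == "I"
assert operation_mode("transaction", active=True) == "T"
assert operation_mode("master", active=False) == "D"
```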
SAP R/3: Creation of an Integration Model
- Define an integration model, specify data for transfer (transaction CFM1)
- Activate the integration model (transaction CFM2)
Steps to be done in the standard SAP OLTP system (continued): Define an integration model. Choose Logistics >> Central Functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration model >> Generate >> Create (transaction CFM1 or report RIMODGEN). Supply a name for the new model, assign the logical system of APO as target system, type a reasonable application for which data is included in the model (for example, MATERIALS, VENDORS, TRANS_DATA) and specify the data to be transferred (first choose the data types and then specify selection criteria for data). Activate the integration model. Choose Logistics >> Central Functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration model >> Activate (transaction CFM2 or report RIMODAC2). Choose the integration model and execute. The initial transfer of data to the APO system is started immediately.
Creation of Integration Models
General strategy:
- Define separate integration models for master data and transaction data
- Use unique combinations of integration model name and application to transfer different parts of the data
- Do not create integration models with large data pools
General strategy: An integration model distinguishes between master data and transaction data. SAP recommends putting the two types of data in separate integration models so that they are transferred separately. An integration model is not uniquely defined by its name. If you wish, you can create several integration models with the same name but different applications and put different data in them. Having several integration models active at a time improves data processing and error handling. Technically, you can even create integration models with the same name, the same application but different target logical systems. To facilitate error handling, keep the data pools of the integration models as small as possible.
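The identification rule above — a model is keyed by name, application, and target logical system together, not by its name alone — can be sketched as follows. The class and field names are illustrative stand-ins, not an SAP API.

```python
# Sketch: an integration model is identified by the triple
# (name, application, target logical system), so several models may
# share a name. Names/structures here are illustrative, not SAP APIs.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModelKey:
    name: str          # e.g. "PUMPS"
    application: str   # e.g. "MATERIALS" or "TRANS_DATA"
    target: str        # logical system, e.g. "APOCLNT800"

class ModelRegistry:
    def __init__(self):
        self.models = {}  # ModelKey -> set of data objects

    def define(self, key, objects):
        # The same name is fine as long as application or target differs.
        self.models[key] = set(objects)

registry = ModelRegistry()
registry.define(ModelKey("PUMPS", "MATERIALS", "APOCLNT800"), {"MAT_A", "MAT_B"})
registry.define(ModelKey("PUMPS", "TRANS_DATA", "APOCLNT800"), {"STOCK_B"})
assert len(registry.models) == 2  # two distinct models despite the same name
```

Keeping each model's data pool small, as recommended above, then simply means keeping each key's object set small.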
Master Data: Initial Data Transfer
[Slide graphic: In SAP OLTP, an integration model named PUMPS (target system APOCLNT800, application MATERIALS, timestamp 10:00:00) contains material masters A and B. Activating the model and choosing Start transfers the data, creating products A and B as master data in APO.]
To transfer data into APO initially, simply activate the corresponding integration model using transaction CFM2. When you choose Start, the system triggers the data transfer into APO. Only one data channel is currently available for initial data loads. This ensures data consistency and prevents two users from creating the same data object. This also means that only one initial data load integration model can be actively passing data at a time. Incremental loads are not restricted in this way.
Master Data: Transfer of New Data (1)
[Slide graphic: A new material master Q with MRP type X0 matches the selection criteria of the existing integration model PUMPS (target system APOCLNT800, application MATERIALS). The model is regenerated and activated, so two versions are briefly active (timestamps 10:00:00 and 11:00:00). Only the difference is transferred, creating product Q as new master data in APO.]
New master data that corresponds to the selection criteria of an existing integration model can be transferred into APO by regenerating the existing model and activating it. The graphic shows an example where materials with the MRP type X0 are selected in the integration model. Two models with the same name are then temporarily active, differing in date and time. In this case, during data transfer the system compares the new model with the old one and transfers just the new data that is not included in the old model. After the data transfer, the system deactivates the old model and leaves the new model as the active one. The system does not allow two versions of an integration model with the same name to coexist while they are both active. If you want the system to retransfer all the master data of an existing integration model, you must first deactivate the old model and then activate the new one. All active models are always compared. In this case, if the model with the old timestamp is not active then all data is transferred again. Another special case for integration models with different names: If you have activated model 1 with material masters A and B and then create model 2 with material masters B and C, at the time you activate model 2 only material master C is transferred. If, later on, you decide to deactivate model 1, the integration for material B remains valid. To ensure that the system has transferred all the APO-relevant master data, you can periodically regenerate and activate the existing integration models.
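The delta logic described above — only objects not covered by any currently active model are transferred, and an object stays integrated as long as at least one active model still contains it — can be sketched with sets. This is an illustration of the comparison, not SAP code.

```python
# Sketch of the integration model delta comparison (illustrative only):
# on activation, transfer only objects not covered by any active model.
def objects_to_transfer(new_model, active_models):
    covered = set().union(*active_models) if active_models else set()
    return new_model - covered

# Model 1 (active) holds material masters A and B;
# newly activated model 2 holds B and C.
model1 = {"A", "B"}
model2 = {"B", "C"}
assert objects_to_transfer(model2, [model1]) == {"C"}  # only C is transferred

# If model 1 is deactivated later, B stays integrated because model 2
# (still active) contains it.
active = [model2]
assert "B" in set().union(*active)
```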
Master Data: Transfer of New Data (2)
[Slide graphic: Execute the integration model periodically. Step 1: generate the integration model (report RIMODGEN, for example job JOB_1 with variant PUMP_MAT for model PUMPS, target system APOCLNT800, application MATERIALS). Step 2: activate the integration model (report RIMODAC2, for example job JOB_2 with the same variant). Alternative: one job JOB_1_AND_2 containing both steps.]
As the SAP OLTP system continues to create new APO-relevant master data, you should regenerate and activate the integration models at regular intervals. To do so, you can define appropriate background jobs. Executing an integration model consists of two steps: generation and activation. The system generates an integration model with report RIMODGEN. Define a variant for this report and schedule the variant as a job. The system activates an integration model with report RIMODAC2. Define a variant for this report, too, and schedule the variant as a job. You can schedule these two steps with one job and run the two variants as consecutive steps.
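The two-step job described above — generation with RIMODGEN followed by activation with RIMODAC2, run as consecutive steps of one job — can be sketched as a simple step runner. Function names and the variant are illustrative stand-ins, not real SAP report calls.

```python
# Sketch of one background job with two consecutive steps
# (RIMODGEN then RIMODAC2), as recommended above. Illustrative only.
def run_job(steps):
    log = []
    for name, func, variant in steps:
        func(variant)        # each step runs with its own report variant
        log.append(name)     # steps execute strictly in order
    return log

generated, activated = [], []
job_log = run_job([
    ("RIMODGEN", lambda v: generated.append(v), "PUMP_MAT"),
    ("RIMODAC2", lambda v: activated.append(v), "PUMP_MAT"),
])
assert job_log == ["RIMODGEN", "RIMODAC2"]   # generation before activation
assert generated == activated == ["PUMP_MAT"]
```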
Master Data: Incremental Data Transfer
Configuration procedure for the transfer of master data changes: transaction CFC5. Two options:
- Changed SAP master data objects are transferred into APO in real time, when the changes are saved
- Changes to SAP master data objects are recorded, and the transfer of the changes is triggered later (periodically, for example)
Any changes made to master data in the R/3 System that are APO-relevant (data contained in an active integration model) must be transferred into the APO system. You do not need a new initial data transfer, just a transfer of the individual changes made to the relevant master data. Similarly, a deletion flag for a material must also be transferred into APO. This is done by an incremental data transfer. You can use transaction CFC5 to control incremental data transfer of master data. You can decide whether changes to material masters, customers, and vendors are transferred to the APO system immediately (in real time), periodically, or not at all. Depending on the extent of the changes, immediate data transfer may impact the performance of the system, so in most cases you may prefer periodic data transfer. However, if you choose periodic data transfer, you must also maintain the ALE change pointer settings. In future releases, when you choose periodic incremental data transfer in transaction CFC5, the change pointers will be activated automatically. With the transfer of master data changes, the system always transfers complete data records. For example, if only one field in the material master is changed, the entire material master is retransferred within the incremental data transfer.
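The CFC5 choice — immediate, periodic, or no transfer of master data changes — amounts to a small dispatch on each change event. The sketch below is illustrative only; the callback names are hypothetical.

```python
# Sketch of the CFC5 modes (illustrative, not SAP code): a master-data
# change is sent immediately, recorded as an ALE change pointer for a
# later periodic transfer, or ignored.
def on_master_data_change(mode, send_now, record_pointer):
    if mode == "immediate":
        send_now()
    elif mode == "periodic":
        record_pointer()  # change pointer; transferred later (CFP1/RCPTRAN4)
    # mode == "none": do nothing

sent, recorded = [], []
on_master_data_change("periodic", lambda: sent.append(1), lambda: recorded.append(1))
assert (len(sent), len(recorded)) == (0, 1)  # recorded, not sent yet
```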
Master Data: Periodic Incremental Data Transfer
[Slide graphic: A master data change (for example, the planned delivery time of material master A changed from 10 to 11 days) writes a change pointer, provided change pointers are generally active and the relevant message type is active in Customizing. Report RCPTRAN4 (with a variant such as DELTA_MAT, target system APOCLNT800, object type material master) then executes the incremental data transfer, and the changed master data (product A, planned delivery time 11 days) appears in APO. Note: delete change pointers regularly.]
Periodic incremental data transfer uses ALE change pointers. The change pointers select the master data to be re-transferred. If you select periodic incremental data transfer, you must specify in SAP R/3 Customizing that ALE change pointers are to be written for master data changes. Customize ALE change pointers as follows: Activate change pointers: in transaction SALE, choose Activate Change Pointers → Generally (transaction BD61). Determine which master data objects should have change pointers: choose Activate Change Pointers for Message Types (transaction BD50). The relevant message types are: CIFMAT for material masters, CIFVEN for vendors, CIFCUS for customers, CIFSRC for info records, CIFPPM for BOMs and routings, and so on. The availability of certain message types depends on the installed Support Package level. You can also initiate incremental data transfer of the master data manually in transaction CFP1. You must specify the logical target system and the master data objects (material masters, vendors, sources of supply, customers) for which changes are to be transferred. The transfers include changes to all master data specified in CFP1 that belong to an active integration model. To schedule incremental data transfer as a job, save the settings as a variant in report RCPTRAN4 (the report used by CFP1). For performance reasons, delete change pointers regularly (approximately once a week), either manually with BD22 or by scheduling report RBDCPCLR. Always delete all processed change pointers that are more than two weeks old.
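The cleanup rule above — delete processed change pointers older than two weeks — can be sketched as a simple filter. This only illustrates the selection rule that BD22/RBDCPCLR apply; the record layout is hypothetical.

```python
# Sketch of the change pointer cleanup rule (illustrative only):
# delete pointers that are processed AND more than two weeks old.
from datetime import datetime, timedelta

def pointers_to_delete(pointers, now, max_age=timedelta(weeks=2)):
    return [p for p in pointers
            if p["processed"] and now - p["created"] > max_age]

now = datetime(2003, 6, 30)
pointers = [
    {"id": 1, "processed": True,  "created": now - timedelta(weeks=3)},  # delete
    {"id": 2, "processed": True,  "created": now - timedelta(days=3)},   # keep: too recent
    {"id": 3, "processed": False, "created": now - timedelta(weeks=5)},  # keep: unprocessed
]
assert [p["id"] for p in pointers_to_delete(pointers, now)] == [1]
```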
Transactional Data Transfer
[Slide graphic: An integration model PUMPS (target system APOCLNT800, application TRAN_DATA, timestamp 10:00:00) containing storage location stock B and planned order A is activated and started in SAP OLTP. The initial transfer moves the transactional data into APO; afterwards, incremental data transfer runs automatically.]
To transfer transactional data between OLTP and APO, simply activate the corresponding integration model using transaction CFM2. To trigger the initial data transfer, choose Start. The transactional data that you have selected is transferred into APO for the first time. After this initial transfer, a real-time link is set up automatically between the OLTP system and the APO system for the selected transaction data. Whenever a storage location stock of a selected material changes due to a goods movement posting, the new stock is transferred into APO. In the same way, production orders that are generated in APO are immediately transferred to the OLTP system. In other words, incremental transfer of transactional data is processed automatically. No explicit action is needed to initiate this.
Integration Model: Transactions Summary
- CFM1/CFM2 – Consistency check: check the consistency of the selected data in the integration model
- CFM2 – Deactivate integration model: the connection between R/3 and APO for the relevant master and transactional data is cancelled
- CFM5 – Filter object search: check whether data objects are already contained in an integration model
- CFM7 – Delete integration model: deactivated models can be deleted
Deactivating the integration model: For example, after you have deactivated the integration model for transactional data (which contains sales and planned orders), relevant sales orders created in OLTP are no longer transmitted to APO. Likewise, planned orders that are created in APO after deactivation are not transmitted to the OLTP system. After you have deactivated the integration model for master data, no data changes are transmitted. Delete integration model: Before you can delete an integration model, you must deactivate it. Deleting an integration model does not delete the previously transmitted data in APO.
Data Transmission
[Slide graphic: Non-SAP systems transfer master data and transactional data into the APO database and liveCache, through an initial transfer followed by incremental transfers.]
Methods for the transfer of master and transactional data include: BAPI, ALE, batch input (BDC), and RFC.
Interfacing Using APO BAPIs
Pros:
- Can be used with multiple programming languages (special libraries available)
- Immediate reaction in case of error is possible
- Calling system immediately knows if the method call was successful
- No conversion into different data formats required
Cons:
- External system has to take care of error handling / system availability
- No monitoring function of data flow (inbound and outbound)
Interfacing Using IDocs
Pros:
- Complete overview of transferred data
- Errors can be fixed on the APO side
- Asynchronous communication (transfer can be independent of execution)
- Parallel booking possible after data transfer
- Workflow can be used in case of errors
- Data transfer with files
Cons:
- IDoc data has to be stored on the database
- Conversion into IDoc format necessary
- No immediate reaction on errors
- External system does not know when IDocs are booked
Use of IDocs depends on the company's current system landscape.
Further Documentation
For additional information about mySAP components, including mySAP APO, see: service.sap.com/scm and service.sap.com/r3-plug-in
Core Interface: Summary
You are now able to:
- Explain system roles and their integration
- Explain the APO Core Interface
- Set up a business system group
- Set up an integration model
APO Core Interface Exercises Unit: APO Core Interface
At the conclusion of this exercise, you will be able to: • Create a Business System Group (BSG) in an APO system • Create and activate an integration model in an SAP OLTP system • Transfer data from SAP R/3 to SAP APO You create your own BSG in the APO system. Create your own integration model in the SAP OLTP system and then activate it.
1-1
Create a Business System Group (BSG) in the APO system, client 001 1-1-1 Define all logical systems in your APO environment. Set up TT3CLNT800 for the SAP OLTP system and TTOCLNT001 for the APO system. 1-1-2 Assign the local logical system to client 001. 1-1-3 Check and test the RFC destination for the SAP OLTP system. 1-1-4 Define a new Business System Group with the name BSG1 and a description of your choice. 1-1-5 Assign the logical systems defined in step 1-1-1 to your Business System Group.
1-2
Create and activate an integration model in your SAP OLTP system, client 800 1-2-1 Check that business transaction events are active for APO integration. 1-2-2 Define all the logical systems in your environment. Set up TT3CLNT800 for the SAP OLTP system and TTOCLNT001 for the APO system. 1-2-3 Assign the local logical system to client 800. 1-2-4 Check and test the RFC destination for the APO system.
1-2-5 Assign the type SAP_APO to the logical system of APO. 1-2-6 Assign an operation mode to the logical system of APO. 1-2-7 Define an integration model that includes plant 1000 and material master T-F2xx . 1-2-8 Activate the integration model.
1-3
Check that the data included in the integration model is transferred to the APO system, using the materials that you noted in step 1-2-7.
1-4
Perform incremental data transfer. 1-4-1 Activate change pointers and define periodic transfer for material masters. 1-4-2 Change material master T-B1xx in plant 1000: in the MRP2 view, change the planned delivery time from 10 to 15 days. 1-4-3 Start the incremental data transfer. 1-4-4 Go to the APO system and display the planned delivery time for the product T-B1xx in plant 1000.
Solutions
APO Core Interface Unit: APO Core Interface
1-1
Create a Business System Group in the APO system 1-1-1 Define the logical systems: Log on to your APO system, client 001, and choose Tools → Business Engineer → Customizing (or call transaction SPRO) → SAP Reference IMG → APO – Implementation Guide → R/3 Basis Customizing → Application Link Enabling (ALE) (or call transaction SALE) → Sending and Receiving Systems → Logical Systems → Define Logical System. If TTOCLNT001 and TT3CLNT800 are not in the list, select New entries and create the two entries, one for the local system and one for the SAP OLTP system. 1-1-2 Assign the local logical system TTOCLNT001 to client 001: Choose Tools → Business Engineer → Customizing → SAP Reference IMG → APO – Implementation Guide → R/3 Basis Customizing → Application Link Enabling (ALE) → Sending and Receiving Systems → Logical Systems → Assign Client to Logical System, or call transaction SCC4. Edit client 001 and assign the logical system TTOCLNT001 to it. 1-1-3 Define and test the RFC destination for the SAP OLTP system: Choose Tools → Business Engineer → Customizing → SAP Reference IMG → APO – Implementation Guide → R/3 Basis Customizing → Application Link Enabling (ALE) → Sending and Receiving Systems → Systems in Network → Define Target Systems for RFC Call, or call transaction SM59. If TT3CLNT800 does not exist, create the RFC destination. Test the connection and the remote logon.
1-1-4 Define a new business system group with the name BSG1 and a description of your choice: Choose Tools → Business Engineer → Customizing → SAP Reference IMG → APO – Implementation Guide → Advanced Planner and Optimizer (APO) → Basic Settings → Integration → Business System Group → Maintain Business System Group, or call transaction /SAPAPO/C1. Check whether the business system group BSG1 already exists; if not, create a new entry with the name BSG1 and a description.
1-1-5 Assign the logical systems defined in step 1-1 to your business system group. Choose Tools → Business Engineer → Customizing → SAP Reference IMG → APO – Implementation Guide → Advanced Planner and Optimizer (APO) → Basic Settings→ Integration → Business System Group → Assign Logical System, or call transaction /SAPAPO/C2. Create two new entries for your BSG, one with logical system TTOCLNT001 and another one with logical system TT3CLNT800. For the remote system (TT3CLNT800), enter an X in the field R/3 flag (it would remain empty if this was a non-SAP system). For both systems, enter the SAP release in the field SAP Rel.
1-2
Create and activate an integration model in your SAP OLTP system 1-2-1 Check that business transaction events are active for APO integration: Log on to your SAP OLTP system, client 800, and call transaction BF11. If the New Dimension PlugIn APO (ND-APO) is not activated, flag the corresponding box and save the changes. 1-2-2 Define the two logical systems you are going to need: Choose Tools → AcceleratedSAP → Customizing → Edit Project (transaction SPRO) → SAP Reference IMG → Basis Components → Application Link Enabling (ALE) (transaction SALE) → Sending and Receiving Systems → Logical Systems → Define Logical System. Select New entries and create the two entries, one for the local system and one for your APO system, using the same names as in 1-1.
1-2-3 Assign the local logical system TT3CLNT800 to client 800: Choose Tools → AcceleratedSAP → Customizing → Edit Project → SAP Reference IMG → Basis Components → Application Link Enabling (ALE) → Sending and Receiving Systems → Logical Systems → Assign Client to Logical System, or call transaction SCC4. Edit client 800 and assign the local logical system TT3CLNT800 to it. 1-2-4 Define and test an RFC destination for the APO system: Choose Tools → AcceleratedSAP → Customizing → Edit Project → SAP Reference IMG → Basis Components → Application Link Enabling (ALE) → Sending and Receiving Systems → Systems in Network → Define Target Systems for RFC Call, or call transaction SM59. If TTOCLNT001 does not exist, create the RFC destination. Test the connection and the remote logon. 1-2-5 Assign the type SAP_APO to the logical system TTOCLNT001: Choose Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Settings → APO Releases (transaction NDV2). Create a new entry with the logical system name TTOCLNT001 and system type SAP_APO. As the SAP system performs the integration in different ways for different APO releases, you must also enter the release level of the target APO system. 1-2-6 Assign an operation mode to the logical system TTOCLNT001: Choose Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Settings → Target Systems (transaction CFC1). Create a new entry with the logical system name TTOCLNT001. The system will assign an operation mode to the target APO system. 1-2-7 Define an integration model: Choose Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Integration model → Generate → Create (transaction CFM1).
Supply a name for the new model, assign the logical system TTOCLNT001 as target system, type a reasonable "application" name according to which data will be included in the model (example: MATERIAL), and specify the data that should be transferred. In the section Add to integration model, select Material masters and Plants. In the section General Selection Options for materials, type 1000 for Plnt and select a material T-F2xx. Then click Execute and Save. Double-click Material to see the list of material master records and make a note of the names.
1-2-8 Activate the integration model: Choose Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Integration model → Activate (transaction CFM2). Type or choose the names of the integration model, the logical system and the APO application, then execute. In the next screen, double-click the last line with the creation date of your integration model (or click it once and use the Active/Inactive button), and then click Start. The initial transfer of data to the APO system is started immediately.
1-3
Check that the data included in the integration model is transferred to the APO system Go to the APO system, client 001: Choose Master Data → Product (transaction /SAPAPO/MAT1). Use the names that you noted in step 1-2-7, enter them under Product Name, and click Display.
1-4
Perform incremental data transfer. 1-4-1 Activate change pointers and define periodic transfer for material masters: in R/3 Customizing in OLTP (transaction SPRO → SAP IMG), choose Basis Components → Application Link Enabling (ALE) → Modeling and Implementing Business Processes → Master Data Distribution → Replication of Modified Data → Activate Change Pointers - Generally (transaction BD61) and Activate Change Pointers for Message Types. Then use transaction CFC5 and select ALE change pointer, Periodic for material masters. 1-4-2 Change material master T-B1xx in plant 1000: Use transaction MM02; enter material T-B1xx and plant 1000. Select view MRP2 and change the planned delivery time from 10 to 15 days. 1-4-3 Start the incremental data transfer: In the R/3 CIF menu, choose Integration Model → Incremental Data Transfer → Master Data (transaction CFP1). Select material masters and execute the incremental data transfer. 1-4-4 Go to the APO system and display the product T-B1xx: Choose Master Data → Product, enter product T-B1xx and location 1000, and choose Display.
Choose the Procurement tab page, you should see 15 days has been entered as the planned delivery time.
CIF Monitoring
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
Contents / Objectives
Contents:
- Core Interface and Queued RFC
- Inbound and Outbound Queues
- Queue Management
- Application Logging
Objectives — at the end of this unit, you will be able to:
- Describe the components of CIF
- Describe the technology of CIF
- Set up and use monitoring tools for CIF
Interface Scenario
[Slide graphic: Application components of the SAP OLTP system (LO, SD, HR) communicate with SAP APO through the CIF Core Interface using qRFC.]
CIF uses an SAP Basis technology called queued RFC (qRFC), which guarantees serialization and updating in the target system. An update is either processed completely as a logical unit of work (LUW) in the target system, or not at all. CIF internal components: the active data channel (sends data changes immediately to APO), message serialization, and the event channel (receives event-driven changes from APO).
Core Interface with Help of Queued RFC (1)
[Slide graphic: In the sending system, tRFC calls are buffered in named outbound queues (Queue1 to Queue4); the receiving system processes the qRFC LUWs from each queue.]
The queued Remote Function Call (qRFC) technique uses transactional RFCs. The tRFC technique offers the following features: All tRFC calls terminated with the statement COMMIT WORK belong to one LUW (logical unit of work), which automatically receives a unique transaction ID. Within a LUW, all function modules are executed in the target system asynchronously in the same sequence in which they were called in the sending system. However, with tRFC there is no guarantee that several LUWs are executed in the receiving system in the sequence dictated by the application of the sending system. qRFC with send queue is an enhancement of tRFC: tRFC calls are serialized by buffering them in a named queue. The names of the queues are determined by the application. The sequence of LUWs within a queue is determined by a counter; a new value is created at COMMIT WORK. LUWs belonging to one queue are processed in the target system asynchronously in the same sequence in which they were called. This serialization is often required to assure data consistency. Example: changes to production orders in an SAP OLTP system. Because of asynchronous processing, the sending system does not have to wait for the update to be completed in the target system. If the sending system is the SAP OLTP system, the addition of CIF causes minimal impact on OLTP functionality.
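The serialization rule above — a counter assigned at COMMIT WORK, with LUWs of one named queue processed strictly in counter order, and no ordering guarantee across queues — can be sketched as follows. Queue names and the data model are illustrative, not SAP internals.

```python
# Sketch of qRFC send-queue serialization (illustrative only):
# COMMIT WORK assigns an increasing counter; LUWs within one named
# queue are processed in counter (FIFO) order.
from collections import defaultdict
from itertools import count

class OutboundQueues:
    def __init__(self):
        self.queues = defaultdict(list)
        self.counter = count(1)

    def commit_work(self, queue_name, luw):
        # COMMIT WORK stamps the LUW with the next counter value.
        self.queues[queue_name].append((next(self.counter), luw))

    def process(self, queue_name):
        # LUWs of one queue execute in the order they were committed.
        return [luw for _, luw in sorted(self.queues.pop(queue_name))]

q = OutboundQueues()
q.commit_work("ORDER_QUEUE", "create order")   # hypothetical queue names
q.commit_work("STOCK_QUEUE", "post stock")
q.commit_work("ORDER_QUEUE", "change order")
assert q.process("ORDER_QUEUE") == ["create order", "change order"]
```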
Core Interface with Help of Queued RFC (2)
[Slide graphic: the same outbound queue scenario as before; here, one LUW combines calls that belong to several queues.]
In some cases, due to complex logic of applications that create the queues, multiple calls in the sending system can be combined into one LUW independently of the queue names, which can generate interdependences between different queues. All the calls that belong to one LUW must be processed in the target system again as a unit. Example: For a change of a purchase order and the corresponding inventory posting, two different queues are used, but these two changes may only be processed together.
Core Interface with Help of Queued RFC: Errors
[Slide graphic: An error occurs while processing Queue1 in the receiving system. Because of a dependency between Queue1 and Queue2 (a LUW spanning both), Queue2 is blocked as well; Queue3 and Queue4 continue to be processed.]
If there is a problem in transferring or processing the queue, the whole queue is stopped. After the problem is corrected, saved function calls can be rerun without generating data inconsistencies. If a LUW spans more than one queue, an error in one queue can potentially block a large number of related queues. For example, if a qRFC in Queue1 and a qRFC in Queue2 belong to one LUW (they can only be processed together), an error in processing Queue1 blocks Queue2. A collection of queues with interdependences between them is sometimes called a thread of queues. An error in one queue can block the related queues in the same thread but not queues in other threads. Generally, multiple threads of stacked CIF queues can arise within the data channels.
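The blocking behavior above — an error stops its queue, and LUWs spanning several queues tie those queues into one "thread", so blocking one blocks the others while unrelated threads keep running — can be sketched as a propagation over queue sets. Illustrative only.

```python
# Sketch of error propagation across a thread of queues (illustrative):
# luw_spans lists, per cross-queue LUW, the set of queues it touches.
def blocked_queues(failed_queue, luw_spans):
    blocked = {failed_queue}
    changed = True
    while changed:  # propagate transitively through dependent LUWs
        changed = False
        for span in luw_spans:
            if span & blocked and not span <= blocked:
                blocked |= span
                changed = True
    return blocked

# One LUW spans Queue1 and Queue2; another spans Queue3 and Queue4.
spans = [{"Queue1", "Queue2"}, {"Queue3", "Queue4"}]
# An error in Queue1 blocks its thread, but not the Queue3/Queue4 thread.
assert blocked_queues("Queue1", spans) == {"Queue1", "Queue2"}
```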
Inbound Queues
In the standard APO CIF delivery, only outbound queues are used.
[Slide graphic: outbound queues on the sending side paired with inbound queues on the receiving side, connected by tRFC, in both directions.]
Implement the usage of inbound queues in Customizing:
- Transaction CFC1 in SAP R/3
- Transaction /SAPAPO/C2 in SAP APO
During data transfer between the SAP OLTP and APO systems, there may be heavy load on the target system for mass transactions (planning in the SAP OLTP system, planning in APO, backorder processing, automatic planning). This is because all arriving qRFCs are processed immediately. The sending system waits and the network connection remains open until the CIF task is finished. When just outbound queues are used on the sending side (standard delivery APO CIF), this results in a poor system load distribution for qRFCs. This can cause a capacity overload and system breakdown. For this reason, it is strongly recommended to implement the usage of inbound (receive) queues on the receiver side, which makes it possible to store the incoming RFCs in the database and process them later. This facilitates better load distribution in the recipient system, which should improve overall performance and system availability. However, if inbound queues are used in the APO and SAP OLTP systems and errors occur during the communication, they are no longer visible in the outbound queue monitor (SMQ1) in the sending system. You must use the inbound queue monitor (SMQ2) in the recipient system instead. Even if inbound queues are implemented in the APO and SAP OLTP systems, they are not used for the initial data supply. The initial data transfer from the SAP OLTP system is always through the outbound queue, due to a high serialization level. Up to R/3 Plug-In 2000.2 and APO 3.0A Support Package 13, inbound queues can only be implemented with the help of advanced corrections described in SAP Notes 388001 (for the SAP R/3 System) and 388528 (for the APO system). As of Plug-In 2001.1 and APO 3.0A Support Package 14 or APO 3.1, inbound queues can be activated in Customizing – see SAP Notes 430725 and 416475. In R/3, transaction CFC1 (for the assignment of an operation mode to the target APO system) is used for this purpose.
In APO, use transaction /SAPAPO/C2 (assignment of a logical system to a business system group). © SAP AG
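The load decoupling described above can be sketched as a toy model in Python (not SAP code; the Receiver class and the queue contents are invented for illustration): with outbound-only processing the sender is released only after the receiver has executed the call, whereas with an inbound queue the call is merely stored and worked off later, the way the QIN scheduler does.

```python
from collections import deque

class Receiver:
    """Toy receiver (invented for illustration). With an inbound queue,
    an arriving call is only stored, so the sender is released at once;
    processing happens later, at the receiver's own pace."""

    def __init__(self, use_inbound_queue):
        self.use_inbound_queue = use_inbound_queue
        self.inbound = deque()
        self.processed = []

    def receive(self, luw):
        if self.use_inbound_queue:
            self.inbound.append(luw)      # just store it; sender returns immediately
        else:
            self.processed.append(luw)    # sender waits for the full processing

    def run_scheduler(self):
        # QIN-scheduler-like step: work off the stored LUWs later
        while self.inbound:
            self.processed.append(self.inbound.popleft())

rx = Receiver(use_inbound_queue=True)
for luw in ["order-1", "order-2"]:
    rx.receive(luw)
assert rx.processed == []                 # nothing executed yet, sender already free
rx.run_scheduler()
assert rx.processed == ["order-1", "order-2"]
```

The point of the sketch is the decoupling: storing a call is cheap and quick, so the sender never blocks on the receiver's application processing.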
ADM355
3-7
Transactional RFC Options Transaction SM59
In the APO environment, do not suppress background job on connection error
Set time between two tries to 2 or 3 minutes
As queued RFC uses transactional RFC, the tRFC options must be set correctly. The settings determine how calls are handled that could not be processed successfully due to connection errors or locks in the target system. To set the tRFC options, call transaction SM59 and choose R/3 connections (select connection) >> Change >> Destination >> TRFC options.
If tRFC options are not set explicitly, the defaults are:
Background job on connection error not suppressed
30 connection attempts (tries)
15 minutes between two consecutive tries
Empirical values that have proven appropriate for achieving the best performance are:
10 connection attempts
Time between two tries set to 2 or 3 minutes
In special situations, you might have to use different values.
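As an arithmetic check of these settings, the following sketch (plain Python, not SAP code) computes the minute offsets at which the connection retries would run:

```python
def retry_schedule(start, attempts, interval_minutes):
    """Return the minute offsets at which connection retries run,
    mirroring the tRFC options 'connection attempts' and
    'time between two tries' (toy calculation, not SAP code)."""
    return [start + i * interval_minutes for i in range(1, attempts + 1)]

# SAP default: 30 attempts, 15 minutes apart -> retries span 450 minutes
default = retry_schedule(0, 30, 15)
assert len(default) == 30 and default[-1] == 450

# Empirical recommendation: 10 attempts, 2 minutes apart -> retries span 20 minutes
tuned = retry_schedule(0, 10, 2)
assert len(tuned) == 10 and tuned[-1] == 20
```

The comparison shows why the tuned values react faster: a permanently failing destination is given up after 20 minutes instead of seven and a half hours.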
QOUT Scheduler
Transaction SMQS (as of qRFC version 6.10.042)
Normally, all LUWs are processed immediately and in parallel by the qRFC manager in the source system. It can happen that the receiving system is overloaded by the large number of parallel incoming RFCs. On the other hand, in a small sending system, it can happen that all dialog work processes are used by the parallel RFCs (status SYSLOAD). For these reasons, the qRFC version 6.10.042 enables you to define a maximum number of parallel links (tRFC or qRFC) to a certain destination via the QOUT scheduler (transaction SMQS, Max. Connections). You can also control the maximum number of RFCs that can be run via this destination. The destination that you want to control must be registered in transaction SMQS. If you have defined a server group in your sending system (transaction RZ12), you can also define which application servers are used for the transmission: choose Edit >> Change AS group. By default, a destination is registered with the server group DEFAULT, which means that all active application servers are used to send the tRFCs/qRFCs to the registered destination. You do not have to maintain the group DEFAULT in RZ12. You can check the qRFC version in transactions SMQ1, SMQ2, SMQS, or SMQR by choosing Information >> Version. For more information about the QOUT scheduler, see http://service.sap.com/scm >> mySAP SCM Technology >> Integration >> qRFC Monitoring (qRFC 6.10) : qOUT Scheduler.
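The effect of a Max. Connections limit can be illustrated with a semaphore (a toy Python model, not SAP code; MAX_CONNECTIONS and the LUW names are invented values):

```python
import threading
import time

MAX_CONNECTIONS = 3              # like 'Max. Connections' in SMQS (toy value)
slots = threading.BoundedSemaphore(MAX_CONNECTIONS)
state = {"active": 0, "peak": 0, "sent": []}
state_lock = threading.Lock()

def send_luw(name):
    with slots:                  # the scheduler admits a limited number of parallel sends
        with state_lock:
            state["active"] += 1
            state["peak"] = max(state["peak"], state["active"])
        time.sleep(0.01)         # pretend to transmit the LUW
        with state_lock:
            state["active"] -= 1
            state["sent"].append(name)

threads = [threading.Thread(target=send_luw, args=(f"LUW-{i}",)) for i in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert state["peak"] <= MAX_CONNECTIONS   # never more parallel sends than allowed
assert len(state["sent"]) == 10           # but every LUW is eventually transmitted
```

The design point is the same as for the QOUT scheduler: throughput is throttled so that neither the receiver nor the sender's dialog work processes are flooded, while no call is lost.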
QIN Scheduler
Transaction SMQR
Normally, qRFC applications are responsible for activating their inbound queues (using API calls) so that LUWs written to these queues are executed automatically. The QIN scheduler (transaction SMQR) manages the task of activating registered inbound queues and controlling the usage of resources in the receiving system. The QIN scheduler only starts the inbound queues registered in SMQR. Unlike the QOUT scheduler, you do not register a destination in SMQR but a queue. When specifying the queue name for registration, using wildcards is recommended to improve the performance of the QIN scheduler. Further registration parameters include:
Execution mode (D for execution of LUWs in a dialog work process, B for execution in a background work process)
Maximum runtime in seconds
Logical destination (if you need to change the user context, client, or language)
Number of retries
Delay between retries in seconds
When processing in dialog work processes, the registered inbound queues are started via parallel RFCs. To control the usage of resources, it is recommended to define a group of application servers and their dialog work processes, and to assign this group to the QIN scheduler in SMQR. If no server group is assigned here, the QIN scheduler uses all application servers and all available dialog work processes. For a more detailed description of the QIN scheduler, see SAP Note 369007.
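Wildcard registration as recommended above can be sketched with shell-style pattern matching (toy Python, not SAP code; the registered patterns and queue names are illustrative examples following the CIF naming convention shown later in this unit):

```python
from fnmatch import fnmatch

# A few wildcard registrations instead of thousands of individual
# queue names (patterns are illustrative examples).
registered = ["CF*", "CIF*"]

def is_registered(queue_name):
    """True if any registered pattern matches the queue name."""
    return any(fnmatch(queue_name, pat) for pat in registered)

assert is_registered("CFPO000000000942")    # purchase order queue
assert is_registered("CFSLS000000000017")   # sales order queue
assert not is_registered("XYZ_QUEUE")       # not covered by any registration
```

One wildcard entry covers all queues generated under a naming convention, which is why registering patterns rather than individual names keeps the scheduler's registration table small and fast.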
RFC Connection Parameters / Gateway Parameters
Important SAP instance parameters for restricting the number of dialog work processes used for tRFC/qRFC communication:
rdisp/rfc_min_wait_dia_wp: Minimum number of dialog work processes held free for dialog users
rdisp/rfc_max_own_used_wp: Percentage of available dialog work processes that can be occupied by one user for RFC calls
...
For a description of all RFC connection parameters, see SAP Note 74141
Use transaction RZ12 or SM59 to define RFC server groups and set these parameters dynamically
Maximum number of connections to an SAP instance can be limited by gateway parameters
See SAP Note 384971
It may be useful to install a separate instance just for transferring data between APO and SAP OLTP. This is an easy way to limit resource consumption by RFC calls. For a detailed description of RFC connection parameters, see SAP Note 74141.
CIF Data Channel Monitoring
qRFC monitor:
Display transfer queues
Display waiting qRFC calls
Restart waiting calls
qRFC problem causes:
Communication errors
Dialog work processes unavailable
Missing RFC entry
Network problems
Application errors (non-posting of data to APO):
Missing master data
Locking of objects
Bugs
Application problems must be solved by a system administrator in cooperation with an application manager
In SAP R/3 and SAP APO systems, a qRFC monitor is available. It displays all transfer channels (queues) for all target systems, including waiting qRFC calls, and it can be used to restart waiting calls. Use the qRFC monitor to monitor a variety of errors connected with data transfer through the CIF, including:
Communication/network problems (status CPICERR)
Failure of the application to post data to the target system because of missing master data for transaction data, locking of objects, or program errors or bugs (status SYSFAIL)
In the case of a connection error, the data can usually be transferred successfully after the problem is corrected, simply by executing the function call again. However, application errors require intensive analysis. Under some conditions, the function call in the target system cannot be made to run correctly and the entry must be deleted from the queue to enable transfer of the data following it. Deletion of function calls from queues may result in inconsistencies, so this should be avoided if possible. The preferred solution is to resolve the problem and unlock the queue. As return parameters cannot be delivered back to the sending system for qRFC activities, potential error messages cannot be returned there directly. For example, even if you find no error related to CIF in the qRFC monitor on the SAP OLTP side, you may find errors recorded in the application log on the APO side. CIF queues are client-dependent.
Outbound Queue Overview Transaction SMQ1
Both in R/3 and APO systems, you can start the qRFC monitor for outbound queues with transaction SMQ1 (report RSTRFCM1). Alternatively, in the OLTP system, you can call transaction CFQ1, but this only shows queues within the current client. The qRFC monitor presents an overview of queues that are not empty, the number of LUWs in each one, and the target system. For more detailed information (status, date/time of the first and last LUW written into the queue, and possibly the name of a queue that must be processed first), choose a queue and select Display selection. In the next screen, double-clicking the queue displays the individual calls. Queue names are generated by the application programs.
The qRFC monitor only displays the waiting calls. Because of message serialization, if an error occurs, the highest entry in the queue blocks all other entries. For any qRFC error, a detailed error log is always saved in the application log of the system. To find this entry in the application log:
For the call with the qRFC error, copy the value in field TID (transaction ID).
In the selection screen of transaction /SAPAPO/C3 (APO application log) or CFG1 (OLTP application log), enter this value in the field External ID, select a time period, and execute. The next screen displays all messages related to the erroneous qRFC call.
An error can appear in the APO application log without appearing in the qRFC monitor. In R/3, you can also monitor CIF channels with transaction CFP2 (report RCPQUEUE): choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration Model >> Change Transfer >> Transaction Data.
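The serialization behavior described above (an erroneous entry at the head of the queue blocks everything behind it) can be modeled in a few lines of Python (a toy model, not SAP code; the LUW names are invented):

```python
from collections import deque

def process_queue(luws, execute):
    """FIFO with qRFC-style serialization: if the head LUW fails,
    everything behind it keeps waiting (toy model, not SAP code)."""
    queue = deque(luws)
    done = []
    while queue:
        try:
            execute(queue[0])
        except RuntimeError:
            break                     # head stays in the queue and blocks the rest
        done.append(queue.popleft())
    return done, list(queue)

def execute(luw):
    if luw == "bad":
        raise RuntimeError("SYSFAIL")  # simulate a serious application error

done, waiting = process_queue(["ok-1", "bad", "ok-2", "ok-3"], execute)
assert done == ["ok-1"]
assert waiting == ["bad", "ok-2", "ok-3"]   # the highest entry blocks the others
```

This is why a single SYSFAIL entry must be analyzed (or, as a last resort, deleted) before the rest of the queue can flow again.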
Columns of Outbound Request List
Cl.: Client for the request
User: User who created the request
Function module: Function module to be processed in the target system
Queue name: Name of the queue
Destination: Logical name of the target system
Date: Creation date of the request
Time: Creation time of the request
Status txt: Possible error message
TID: Transaction ID of the request (each request gets a TID)
Host: Name of the application server that created the request
Tktn: Transaction code that created the request
Program: Program that created the request
Rpts: Number of retries
To track a specific request, you can use its transaction ID (TID).
Common Statuses of Outbound Queues
READY Queue is ready for transmission This should only be a temporary status
RUNNING The first LUW of the queue is currently being processed
EXECUTED The first LUW of the queue has been processed Before further LUWs are processed, the system waits for a confirmation from the target system
STOP The queue was stopped explicitly
The most common statuses displayed in SMQ1 are:
READY: Queue is ready for transmission. This should only be a temporary status. If a queue was locked manually and then unlocked without being activated, the queue stays ready until it is activated explicitly.
RUNNING: The first LUW of the queue is currently being processed. If a queue in this status hangs for more than 30 minutes, activate the queue again. This status can mean that the work process that sent this LUW has terminated. Activating a queue in this status can cause a LUW to be executed several times, so always wait at least 30 minutes before you activate the queue again.
EXECUTED: The first LUW of the queue is processed. The system waits for a qRFC-internal confirmation from the target system before further LUWs are processed. If a queue in this status hangs for more than 30 minutes, the work process responsible for sending this LUW may have terminated. However, the current LUW has been executed successfully and you can activate the queue. The qRFC Manager automatically deletes the executed LUW from the queue and sends the next LUW.
STOP: A lock was set explicitly (via SMQ1 or a program). qRFC never locks a queue in its processing. Inform the corresponding application, then unlock and activate this queue using SMQ1.
Error Statuses of Outbound Queues (1)
SYSLOAD At the time of the qRFC call, no dialog work processes were free in the sending system for sending the LUW asynchronously
SYSFAIL A serious error occurred in the target system while the first LUW of the queue was executed. The execution was interrupted
SYSLOAD: At the time of the qRFC call, no dialog work processes were free in the sending system, so the LUW could not be transmitted to the target system immediately. The system automatically retries sending the queue object by creating and scheduling a batch job. The number and frequency of retries depend on the chosen tRFC options. Check the number of dialog work processes that can be used by tRFC/qRFC: it is determined for each application server by the number of existing dialog work processes and by the profile parameters rdisp/rfc* described in SAP Note 74141. Also check the gateway parameters for the number of connections allowed (see SAP Note 384971). For more details, see SAP Notes 319860 and 384077.
SYSFAIL: A serious error occurred in the target system while the first LUW of the queue was executed. The execution was interrupted. No batch job is scheduled for an automatic retry, and the queue is stopped. When you double-click the status field in SMQ1, the system displays an error text. You can find additional information on this error in the corresponding short dump (ST22) or system log (SM21) in the target system. For an explanation of the error text Connection closed and a list of situations that can prompt it, see SAP Note 335162.
Error Statuses of Outbound Queues (2)
CPICERR During transmission or processing of the first LUW in the target system, a network or communication error occurred
WAITSTOP The first LUW of the queue has dependencies to other queues, and at least one of these queues is locked
WAITING The first LUW of this queue has dependencies to other queues, and at least one of these queues contains other LUWs with higher priorities
See SAP Notes 378903 and 366869
CPICERR: During transmission or processing of the first LUW in the target system, a network or communication error occurred. Depending on the definition in SM59 for the destination used, a batch job is scheduled to send the queue object later. Double-click the status field in SMQ1 to display the corresponding error text. For more information on this error, see the syslog (SM21) and the trace files dev_rd or dev_rfc*. Generally, you should check the network and the user authorizations in the target system.
WAITSTOP: The first LUW of this queue has dependencies on other queues, and at least one of these queues is currently locked.
WAITING: The first LUW of this queue has dependencies on other queues, and at least one of these queues contains other LUWs with higher priority. If one queue has status SYSFAIL, all the queues that depend on it get status WAITING.
For a complete list of statuses of both outbound and inbound queues, see SAP Note 378903.
Inbound Queue Overview Transaction SMQ2
Both in OLTP and APO, you can start the qRFC monitor for inbound queues with transaction SMQ2 (report RSTRFCM3). You use it in the same way as the qRFC monitor for outbound queues.
Columns of Inbound Request List
Cl.: Client for the request
User: User who created the request
Function module: Function module to be processed in the target system
Queue name: Name of the queue
Date: Creation date of the request
Time: Creation time of the request
Status txt: Possible error message
TID: Transaction ID of the request (each request gets a TID)
Original TID: TID that the request had in the dedicated outbound queue
Host: Name of the application server that created the request
Tktn: Transaction code that created the request
Program: Program that created the request
In an inbound queue, each request carries both its own TID and the former TID it had in the dedicated outbound queue.
Common Statuses of Inbound Queues
READY Queue is ready for transmission This should only be a temporary status
RUNNING The first LUW of the queue is currently being processed
STOP The queue was stopped explicitly
The most common statuses displayed in SMQ2 are:
READY: Queue is ready for transmission. This should only be a temporary status. If a queue was locked manually (in SMQ2 or via a program) and then unlocked without being activated, the queue stays ready until it is activated explicitly.
RUNNING: The first LUW of the queue is currently being processed. If a queue in this status hangs for more than 30 minutes, activate it again. This status can mean that the work process that sent this LUW has terminated. Activating a queue in this status can cause a LUW to be executed several times, so always wait at least 30 minutes before you activate the queue again.
STOP: A lock was set explicitly (via SMQ2 or a program). qRFC never locks a queue in its processing. Inform the corresponding application, then unlock and activate this queue using SMQ2.
Error Statuses of Inbound Queues
SYSFAIL A serious error occurred in the target system while the first LUW of the queue was executed. The execution was interrupted
CPICERR During transmission or processing of the first LUW in the target system, a network or communication error occurred
WAITING The first LUW of this queue has dependencies to other queues, and at least one of these queues contains other LUWs with higher priorities
SYSFAIL: A serious error occurred in the target system while the first LUW of the queue was executed. The execution was interrupted. No batch job is scheduled for an automatic retry, and the queue is stopped. When you double-click the status field in SMQ2, the system displays an error text. You can find additional information on this error in the corresponding short dump (ST22), the system log (SM21), and the developer traces in files dev_rd and dev_rfc* in the target system. For an explanation of the error text Connection closed and a list of situations that can prompt it, see SAP Note 335162.
CPICERR: During transmission or processing of the first LUW in the target system, a network or communication error occurred. Depending on the registration of this queue in SMQR, a batch job may be scheduled for repetition. Double-click the status field in SMQ2 to display the error text R/3 logon failed. For more information on this error, see the syslog (SM21) and the trace files dev_rd or dev_rfc*. A reason for this problem and a reference to its solution are given in SAP Note 369524.
WAITING: The first LUW of this queue has dependencies on other queues, and at least one of these queues contains other LUWs with higher priority.
Queue Manager: Systemwide Monitoring
Queue Manager (transaction /SAPAPO/CQ):
Makes systemwide CIF queue monitoring possible
Is especially appropriate for monitoring from the application point of view: queues are classified according to object types
Contains a link to the more technical monitors SMQ1/SMQ2 in the APO system
Queue Manager: Monitoring of Inbound Queues
Queue Manager is: Available in APO 3.0A as of Support Package 14 Standard in APO 3.1
Queue Manager supports monitoring of inbound queues In APO 3.0A if the advanced correction published in SAP Note 460538 is implemented In APO 3.1 as of Support Package 3
R/3 and APO: Queue Management in SMQ1 / SMQ2
Activate the qRFC Manager for a selected queue. The LUWs in the queue will be sent immediately.
Lock a selected queue. A stop mark will be set at the end of the existing queue. All previously recorded LUWs will be processed up to the stop mark.
Unlock a selected queue. The first stop mark in the queue will be removed. The qRFC Manager will be started immediately and will execute the LUWs until the next stop mark, or the end of the queue if no stop mark is set.
Lock a selected queue immediately. The stop mark will be set at the very first line in the queue, so the complete queue will be stopped.
Unlock a selected queue without activation. The stop mark will be removed without activating the qRFC Manager.
Queues can be stopped and restarted without losing data changes
In transaction SMQ1/SMQ2, the outbound/inbound queue overview enables you to perform the following actions on selected queues (plus Choose and Delete, from the buttons or from menu Edit):
Activate - activates the queue. Once the cause of any error state is removed, you can use this button to activate the qRFC Manager. The LUWs in the queue are sent immediately, if there are no stop marks. (Stop marks must be removed with Unlock.)
Lock - locks the queue. A stop mark is set at the end of the queue. All further LUWs are written behind the stop mark. All previously recorded LUWs are processed up to the stop mark.
Unlock - unlocks the queue. The first stop mark in the queue is removed. If more than one stop mark is set, they are removed one by one from the top down. The qRFC Manager is started immediately and executes the LUWs until the next stop mark, if one is set, or until the end of the queue otherwise.
Lock immediately - locks the queue immediately. The stop mark is set at the very first line in the queue. The available LUWs are also stopped.
Unlock without activation - unlocks the queue without activation. The stop mark is removed without activating the qRFC Manager.
Even if a queue is stopped, corresponding data changes are saved there for later processing so that they are not lost. Alternatively, you can disable CIF queues by deactivating the integration model (transaction CFM2). However, when the model is deactivated, no incremental transfer is performed and the data changes are not stored in the CIF queues.
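The stop-mark semantics of Lock, Lock immediately, Unlock, and Activate can be sketched as a toy Python model (not SAP code; the Queue class and LUW names are invented for illustration):

```python
STOP = object()   # sentinel object representing a stop mark

class Queue:
    """Toy queue with stop marks (invented for illustration)."""

    def __init__(self):
        self.entries = []
        self.processed = []

    def write(self, luw):
        self.entries.append(luw)       # new LUWs land behind any stop mark

    def lock(self):
        self.entries.append(STOP)      # stop mark at the end of the queue

    def lock_immediately(self):
        self.entries.insert(0, STOP)   # stop mark before the very first LUW

    def unlock(self):
        if STOP in self.entries:
            self.entries.remove(STOP)  # removes the first (topmost) stop mark

    def activate(self):
        while self.entries and self.entries[0] is not STOP:
            self.processed.append(self.entries.pop(0))

q = Queue()
q.write("LUW-1")
q.write("LUW-2")
q.lock()                 # stop mark set after LUW-2
q.write("LUW-3")         # recorded behind the mark, not lost
q.activate()
assert q.processed == ["LUW-1", "LUW-2"]   # processed only up to the stop mark
q.unlock()
q.activate()
assert q.processed == ["LUW-1", "LUW-2", "LUW-3"]
```

The model mirrors the key property stated above: data written while the queue is locked stays behind the stop mark and is processed after unlocking, so nothing is lost.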
Stopping and Starting Queues in Batch
Queues can be stopped and restarted conveniently through event-driven jobs created with the help of the corresponding reports
Wildcards can be used in queue names
Stopping: To stop outbound queues use report RSTRFCQ1 To stop inbound queues use report RSTRFCI1
Starting: To start outbound queues use report RSTRFCQ3 To start inbound queues use report RSTRFCI3
RSTRFCQ1
RSTRFCQ3
Report RSTRFCQ1 can be used to stop a long-running outbound CIF queue. A normal stop waits until all active LUWs are finished. The FORCE mode rolls back all active queue object processes. If all CIF queues are stopped at the same time using RSTRFCQ1 with FORCE mode, you may need to run report RSTRFCQ3 several times. This is because some of the CIF queues depend on other queues and cannot be restarted if the others are not running. The FORCE option does not guarantee that all queues are restarted at the same time. In report RSTRFCQ3, if NO_ACT is left empty, the report activates the relevant queues. If NO_ACT is set to X, queues are unlocked without activation. In reports RSTRFCQ1 or RSTRFCQ3 in R/3 Systems, using CFSTK* has the same effect as using report RCPQUEUE or transaction CFP2 to stop and start the CFSTK* data channel. Create selection variants to run reports RSTRFCQ1 and RSTRFCQ3 as batch jobs. When restarting erroneous CIF queues, try not to restart them all at the same time. Start them one after another due to the multithreading (interdependencies) in the CIF queues. Inbound queues can be stopped or started using reports RSTRFCI1 and RSTRFCI3.
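Why RSTRFCQ3 may need several runs can be illustrated with a toy dependency model (plain Python, not SAP code; the queue names and dependencies are invented): each pass only starts queues whose prerequisite queues are already running, so a dependency chain needs one pass per level.

```python
def restart_pass(queues, deps, running):
    """One restart pass (toy model of repeated RSTRFCQ3 runs): start every
    stopped queue whose prerequisite queues are already running."""
    started = set()
    for q in queues:
        if q not in running and all(d in running for d in deps.get(q, [])):
            started.add(q)
    return running | started

queues = ["A", "B", "C"]
deps = {"B": ["A"], "C": ["B"]}      # C waits on B, B waits on A (invented chain)
running = set()
passes = 0
while running != set(queues):
    running = restart_pass(queues, deps, running)
    passes += 1

assert passes == 3                   # one pass per dependency level
assert running == {"A", "B", "C"}
```

This matches the advice in the notes: with interdependent CIF queues, restart them one after another, or simply repeat the restart report until all queues are running.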
Application Logging
Display Application Log in SAP R/3 System: Transaction CFG1
Display Application Log in SAP APO System: Transaction /SAPAPO/C3
Maintain logging level in R/3: Transaction CFC2
Maintain logging level in APO: Transaction /SAPAPO/C41
Display entries in the application log to get detailed information about:
Date and time of transmission
Data object and integration model
Source system, user, and SAP transaction
Application success and error messages
To find the transactions for reading the application log:
In R/3, choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Monitoring >> Application Log >> Display Entries (transaction CFG1)
In APO, choose Tools >> APO Administration >> Integration >> Monitor >> Application Log >> Display Entries (transaction /SAPAPO/C3)
You can maintain the logging level as follows:
In R/3, choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Settings >> User Parameters (transaction CFC2)
In APO, choose Tools >> APO Administration >> Integration >> Monitor >> Application Log >> Switch On System Logging (transaction /SAPAPO/C41)
If you want the detailed information mentioned above, set the logging level to Detailed. If you set the logging level to Normal, only the number of data records is logged.
Clearing Application Log
Delete entries in the application log regularly
Report SBAL_DELETE
Delete entries in the application log regularly by scheduling the following as background jobs:
In the OLTP system, report RDELALOG
In the APO system, report /SAPAPO/RDELLOG
Alternatively, in both OLTP and APO you can schedule report SBAL_DELETE.
To delete entries manually:
In OLTP, choose Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Monitoring >> Application Log >> Delete Entries (transaction CFGD)
In APO, choose Tools >> APO Administration >> Integration >> Monitor >> Application Log >> Delete Entries (transaction /SAPAPO/C6)
SAP R/3: CIF Application Log Customizing Transaction CFC6
To customize the application log in an SAP R/3 System, start transaction CFC6, choose a function module for sending data through CIF (naming convention: CIF_*_SEND), and in the tree structure displayed in the next screen choose what should be saved in the application log.
SAP APO: qRFC Alert Monitor
For outbound queues: Transaction /SAPAPO/CW (report /SAPAPO/RCIFQUEUECHECK)
For inbound queues: Report /SAPAPO/RCIFINQUEUECHECK
The qRFC alert monitor checks the selected local or remote queues in the selected destination systems. If there are incorrect queue entries, the report sends a message about the queues to a specific user. To view the qRFC alert monitor for outbound queues, call transaction /SAPAPO/CW in your APO system, choose Tools >> APO Administration >> Integration >> Monitor >> QRFC Alert (Output Queue), or run report /SAPAPO/RCIFQUEUECHECK. There is no such monitor in SAP R/3 systems; to monitor the SAP R/3 systems connected to an APO system, monitor them as remote systems from within the APO system. It is a good idea to schedule the qRFC alert monitor as a background job using report /SAPAPO/RCIFQUEUECHECK, for example every 15 minutes. If you have implemented inbound queues and want to implement alert monitoring, use report /SAPAPO/RCIFINQUEUECHECK. In SAP APO 3.0A, you must create this report as an advanced development according to SAP Note 392197. Check also SAP Note 393574.
SAP R/3: CIF Data Channel Control
Transaction CFP2 or report RCPQUEUE
Start / Stop data channels without losing data changes
Monitor / Display details
In an SAP R/3 System, you can monitor, start, and stop CIF data channels by using transaction CFP2 (Logistics >> Central functions >> Supply Chain Planning Interface >> Core Interface Advanced Planner and Optimizer >> Integration Model >> Change Transfer >> Transaction data). First choose the target APO system and Execute, then select the symbol in the last column and Execute again. As in SMQ1, if a data channel is stopped here, corresponding data changes are saved for later processing.
tRFC/qRFC Monitoring in Alert Monitor RZ20
The CCMS monitoring architecture has new functionality for monitoring transactional and queued RFC. This functionality is available with the following SAP Basis support packages:
Release 4.5B: SAPKH45B47 (SP 47)
Release 4.6B: SAPKB46B35 (SP 35)
Release 4.6C: SAPKB46C26 (SP 26)
Release 4.6D: SAPKB46D15 (SP 15)
Release 6.10: SAPKB61008 (SP 08)
Once the support package is installed, the new monitoring functionality is activated automatically at system restart For more details, including optional customizing and functionality extension, see SAP Note 441269
After installing the required support package and restarting the central instance (the server that provides the enqueue service), the new tRFC/qRFC monitor is displayed in transaction RZ20 (the alert monitor). The SAP Basis support package delivers the tRFC/qRFC monitor without any customizing. This means that all queue errors are monitored in a single, default monitoring subtree. Also, no exit function modules are executed to extend the monitoring functionality or change alert values. You can make the following customizing changes to the monitoring for inbound and outbound queue errors:
You can create separate monitoring subtrees for reporting on groups of queues specified by name. For example, you can have queues whose names begin with CF* reported on separately. Error messages for these queues are no longer reported in the default monitoring subtree.
For each separate monitoring subtree, you can perform extended monitoring by specifying a function module that runs each time the monitor runs. For example, the monitoring package includes function modules for extending the queue error monitoring to check queue age (age of the oldest call waiting to be processed) and the number of calls waiting in a queue.
For each separate monitoring subtree, you can specify a function module that evaluates queue error messages before they are reported to the monitoring architecture. This function module can change the alert level from the default red alert to a lower level. For example, if STOP alerts are unimportant for queues whose names begin with CRM_SITE*, you can change the alert value from red (alert generated) to green (message reported, but no alert generated).
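The alert-evaluation customizing described above can be sketched as a toy Python function (not SAP code; the evaluate_alert helper is invented to mimic the exit function module, and the CRM_SITE* example is taken from the notes):

```python
from fnmatch import fnmatch

GREEN, YELLOW, RED = 0, 1, 2   # toy alert levels

def evaluate_alert(queue_name, status, default=RED):
    """Toy version of the evaluation hook: queue errors default to a red
    alert, but STOP messages for selected queue-name patterns are
    downgraded to green (message reported, no alert generated)."""
    if status == "STOP" and fnmatch(queue_name, "CRM_SITE*"):
        return GREEN
    return default

assert evaluate_alert("CRM_SITE0001", "STOP") == GREEN       # downgraded
assert evaluate_alert("CFPO000000000942", "STOP") == RED     # other queues unaffected
assert evaluate_alert("CRM_SITE0001", "SYSFAIL") == RED      # real errors stay red
```

The design point is that the hook runs before the message reaches the monitoring architecture, so known-harmless conditions can be filtered without losing the message itself.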
tRFC/qRFC Monitor in RZ20 (1)
The new tRFC/qRFC monitor should be displayed in transaction RZ20 under SAP CCMS Monitor Templates, monitor Communications. If the tRFC/qRFC monitor does not appear there, you can find it under SAP CCMS Technical Expert Monitors, monitor All Monitoring Contexts. In this case, you can create your own tRFC/qRFC monitor, or copy and modify the Communications monitor to include the tRFC/qRFC data.
tRFC/qRFC Monitor in RZ20 (2)
The tRFC/qRFC monitor reports the following data:
The number of outbound transactional RFC calls that cannot be processed because of errors. These errors include communication errors (the server that was to process the calls could not be reached), execution errors (there was an error in the function module that was to be executed in a tRFC call), or resource errors (the RFC server group did not have enough servers). Alerts are generated if the number of calls with errors exceeds thresholds.
The number of inbound transactional RFC and queued RFC calls waiting to be processed. An alert is triggered if the number of calls waiting exceeds a threshold.
Error messages for inbound and outbound qRFC queues. An error message means that the affected queue cannot be processed and that any additional calls added to the queue must wait until the error is corrected.
Communication errors or execution errors for inbound and outbound queued RFC schedulers. The monitoring tree for inbound schedulers also reports on queues that have not been registered for processing. Calls in these inbound queues are not executed until the queues are registered.
tRFC/qRFC Monitor in RZ20 (3)
Assignment of Objects to Queue Names
Object: Queue Name
Stock: CFSTK...
Sales Order: CFSLS...
Reservation: CFRSV...
Purchase Order: CFPO...
Planned Independent Requirements: CFPIR...
Planned / Production Order: CFPLO...
Materials: CFMAT...
Confirmation: CFCNF...
Delivery: CFDL...
Queue names are created automatically by applications. Every single document is transferred through its own queue: each document is assigned a unique number that becomes part of the queue name (for example, CFPO000000000942). In APO 3.1, an improved naming convention for queues that contain APO planning data makes monitoring easier. In many cases, external order numbers are used in queue names instead of the GUIDs that are created in the APO system for internal data handling. This makes queue names for APO planning data more descriptive.
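Based on the naming convention in the table above, a monitoring script could classify queues by prefix. The following is a toy Python sketch (not SAP code; the classify helper is invented for illustration, using the prefixes and example name from this unit):

```python
# Prefix-to-object mapping taken from the table above
PREFIXES = {
    "CFSTK": "Stock",
    "CFSLS": "Sales Order",
    "CFRSV": "Reservation",
    "CFPIR": "Planned Independent Requirements",
    "CFPLO": "Planned / Production Order",
    "CFMAT": "Materials",
    "CFCNF": "Confirmation",
    "CFPO":  "Purchase Order",
    "CFDL":  "Delivery",
}

def classify(queue_name):
    """Return (object type, document number) for a CIF queue name,
    or (None, queue_name) if the prefix is unknown."""
    # Check longer prefixes first so a short prefix never shadows a longer one
    for prefix in sorted(PREFIXES, key=len, reverse=True):
        if queue_name.startswith(prefix):
            return PREFIXES[prefix], queue_name[len(prefix):]
    return None, queue_name

obj, number = classify("CFPO000000000942")   # example name from the notes
assert obj == "Purchase Order" and number == "000000000942"
assert classify("CFPIR000000000001")[0] == "Planned Independent Requirements"
assert classify("XYZ_QUEUE") == (None, "XYZ_QUEUE")
```

Splitting the name into object type and document number shows why the convention is useful for monitoring: the queue name alone tells you which document type is stuck.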
Monitoring qRFC/CIF: Summary (1)
Task: R/3 transaction/report | APO transaction/report
tRFC options: SM59 | SM59
Outbound queue monitoring: SMQ1 | SMQ1
Stopping selected outbound queues: Report RSTRFCQ1 | Report RSTRFCQ1
Restarting selected outbound queues: Report RSTRFCQ3 | Report RSTRFCQ3
Outbound scheduler: SMQS | SMQS
Inbound queue monitoring: SMQ2 | SMQ2
Stopping selected inbound queues: Report RSTRFCI1 | Report RSTRFCI1
Restarting selected inbound queues: Report RSTRFCI3 | Report RSTRFCI3
Inbound scheduler: SMQR | SMQR
Monitoring qRFC/CIF: Summary (2)

Task                       R/3 transaction/report   APO transaction/report
qRFC inbound queue alert   ----                     Report /SAPAPO/RCIFINQUEUECHECK
qRFC outbound queue alert  ----                     Report /SAPAPO/RCIFQUEUECHECK
SCM Queue Manager          ----                     /SAPAPO/CQ
CIF Data Channel           CFP2                     ----
Application log            CFG1                     /SAPAPO/C3
Delete application log     Report SBAL_DELETE       Report SBAL_DELETE

The connection to RZ20 Alert Monitoring is available with Basis 4.6C Support Package 26, Basis 4.6D Support Package 15, or Basis 6.10 Support Package 08.
Standard System Monitoring
Standard SAP Basis monitoring (SAP APO, SAP R/3):
- System log - SM21
- ABAP dumps - ST22
- System/process overview - SM50, SM66, SM51
- Locking - SM12, DB01
- Update - SM13
- Batch - SM37
- Database - DB02
- RFC destinations - SM59
- Gateway - SMGW
To monitor the inbound qRFC of the CIF user specified in the RFC destinations, use transaction SM50. If there is a qRFC communication error, check it using transaction SM59.
Further Documentation
For additional information about CIF, see:
- http://service.sap.com/scm >> mySAP SCM Technology >> Integration
- http://service.sap.com/r3-plug-in >> Integration of SAP R/3 and mySAP.com components >> SAP APO
- http://service.sap.com/r3-plug-in >> Release Notes
- SAP Note 384077: APO: Optimizing CIF communication
Summary
You are now able to:
- Describe the components of CIF
- Describe the technology of CIF
- Set up and use monitoring tools for CIF
CIF Monitoring Exercises
Unit: CIF Monitoring
At the conclusion of this exercise, you will be able to: • Use basic CIF monitoring tools
You create and activate a new integration model that causes errors during data transfer. You monitor the data transfer and resolve the problem.
1-1
Create and activate a new integration model in your SAP OLTP system that causes an error, and monitor the data transfer.
1-1-1 Create a new integration model in your SAP OLTP system, client 800, for transferring the same materials as in the previous unit, but for plant 1300. (Plant 1300 should not have been transferred to the APO system before.)
1-1-2 Activate the integration model.
1-1-3 Use monitoring tools for analysis.
1-1-4 Optional: Monitor the corresponding queue in the SAP OLTP system from the APO system.
1-1-5 To release the blocked data channel, delete the erroneous LUW. Then check whether your integration model is activated.
CIF Monitoring Solutions
Unit: CIF Monitoring
1-1
Create and activate a new integration model in your SAP OLTP system that causes an error, and monitor the data transfer.
1-1-1 Define a new integration model that includes some material masters from plant 1300: Start transaction CFM1. Supply a name for the new model (for example, MAT_X), assign the logical system of APO TTOCLNT800 as the target system, type an appropriate application name (for example, M/1300), and specify the data to be transferred: In the section Add to integration model, select only Material masters. In the section Relevant materials, type 1300 for Plant and make a choice for Material in the same way as in the previous unit, exercise 2-9. Then choose Execute and Save. Note: You should not have transferred plant 1300 to the APO system before.
1-1-2 Activate the integration model using transaction CFM2.
1-1-3 Monitor the data transfer: Choose Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Monitoring → qRFC Monitor (transaction CFQ1), or alternatively use transaction SMQ1. The queue CF_ADC_LOAD appears in the list if it is not empty, for example because there is a problem with processing a call. Display the queue and, on the next screen, its LUWs. Use the transaction ID of the LUW that caused an error to find the corresponding entry in the application log. In our case, because the error was generated in the target system, you do not find any information in the log of the source system. Log on to the APO system, client 800, and choose Tools → APO Administration → Integration → Monitor → Application Log → Display Entries (transaction /SAPAPO/C3).
1-1-4 Optional: Monitor the corresponding queue in the SAP OLTP system from the APO system: In the APO system TTO, client 800, choose Tools → APO Administration → Integration → Monitor → QRFC Alert (transaction /SAPAPO/CW). Delete the entries in the upper part of the screen (Local output queue). In the bottom part (Remote output queue), type or choose the name of the SAP OLTP logical system (Remote systems) and the target logical system of the queue (Destinations in remote systems), choose to receive a notification, and execute.
1-1-5 Release the blocked data channel and check whether your integration model is activated: Use the qRFC monitor in the source system (transaction SMQ1 or CFQ1), double-click the queue CF_ADC_LOAD to display it, and in the overview of LUWs delete those you created (check the User column). Use the Delete pushbutton for the entire queue only if no other course participants are using the same system. Generally speaking, you could delete just the first LUW in the queue, the one that caused the error, and then activate the queue with the corresponding button, or process (execute) the following LUWs individually. After you have deleted your own LUWs, the activation of the integration model completes even though no data transfer took place. You can check that your integration model was activated in transaction CFM4 (Logistics → Central Functions → Supply Chain Planning Interface → Core Interface Advanced Planner and Optimizer → Integration Model → Display), where you can, for example, display a list of all activated integration models.
APO Optimizers
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
Optimizers
Contents:
- APO Optimizers
- IMG Settings for Optimizers

Objectives: At the end of this unit, you will be able to:
- Describe the role of optimizers in APO
- Maintain customizing settings for optimizers
- Check optimizer versions
- Plan optimizer installations
Review: APO System Architecture
(Diagram: SAP GUI front ends connect to the APO system, which consists of the APO application servers, the database, liveCache, and the optimizers.)
The APO optimizer is not a required APO architecture component. Heuristic approaches can be used for optimization runs.
Optimizers: Introduction
APO optimizers are complex mathematical optimization algorithms written in C++. They are built on top of C libraries from ILOG SA (co-development). There are 7 optimizers:
- Supply Network Planning (SNP)
- Detailed Scheduling (PP/DS)
- Network Design (ND)
- Vehicle Scheduling and Routing (VSR)
- Sequencing (SEQ)
- Capable-to-Match (CTM)
- Model Mix Planning (MMP)
They can be located on their own optimization server(s) to improve performance. They are currently available only on Windows NT 4.0 / 2000.
APO optimizers are complex programs based on mathematical models and algorithms for optimization. For example, they can be used to optimize routes and costs of transports and production. These programs are written in C++ and have no relation to database optimizers. Reasons for using C++ instead of ABAP: faster programs, and data structures that are best suited for optimization. At the operating system level, optimizers consist of executable files (such as snpopsvr.exe) and dynamic link libraries (cplex*.dll). The optimization process is CPU-intensive; therefore, SAP recommends using separate multiprocessor machines for the SNP, PP/DS, and CTM optimizers. APO optimizers are available only on Windows platforms; there are no plans to develop UNIX versions of the optimizers. A variety of optimization algorithms are available, some developed by SAP, others industry standard. Examples of computational solvers:
- Demand Planning: exponential smoothing, Holt-Winters, multiple linear regression
- PP/DS: linear programming, mixed integer linear programming
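As an illustration of the kind of solver named above, the following is a textbook sketch of simple exponential smoothing, one of the Demand Planning techniques listed. It is not SAP's implementation; the smoothing factor alpha is an assumed value.

```python
# Simple exponential smoothing: each new level is a weighted average of
# the latest observation and the previous level.

def exponential_smoothing(series, alpha=0.5):
    """Return the smoothed series s, with s[0] = series[0]."""
    smoothed = [series[0]]
    for value in series[1:]:
        # New level = alpha * observation + (1 - alpha) * previous level
        smoothed.append(alpha * value + (1 - alpha) * smoothed[-1])
    return smoothed

print(exponential_smoothing([100, 110, 90, 105], alpha=0.5))
# [100, 105.0, 97.5, 101.25]
```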
Integration of the Optimizers in APO
(Diagram: front end with GUI and optimizer OCX, APO optimization server, APO application server, APO database server, and APO liveCache.)
APO Optimization Extension Workbench
- Open interface
- Seamless integration of third-party and legacy optimization algorithms
- OCX integration into the APO GUI for input/output
- Flexible data elements in APO structure (>2.0)
- Co-development with ILOG SA (Optimization Cartridge)
You can integrate external non-SAP optimizers into your APO environment. The Optimization Extension Workbench was first introduced in APO Release 2.0. It allows non-SAP optimization solutions to become integral parts of the APO systems. ILOG SA provides a toolset to build these optimization solutions in the form of plug-in optimization cartridges.
Optimizer: Communication Method
(Diagram: An APO application server, from a dialog or batch work process, reaches the optimizer executable (SNP, PP/DS, CTM, SEQ, ND, VSR) through an RFC call via the SAP gateways. Data requests are answered either directly from liveCache via the database interface (Native SQL) or through RFC calls to the APO application server.)
An SAP gateway must be installed on the optimizer host. A request from an APO application server to the optimizer can originate either from a dialog work process or from a batch work process. The application server calls the optimizer executable via its own SAP gateway and the optimizer's gateway using RFC. There are two methods for supplying data to the optimizer executable:
- Data supply via a direct liveCache connection and Native SQL (PP/DS)
- Data supply from the APO application server via RFC calls (SNP, CTM, SEQ, ND, VSR)
IMG Settings for Optimizers
Transaction /SAPAPO/COPT01
To maintain customizing data for optimization server(s), choose SPRO >> SAP Reference IMG >> SAP APO - Implementation Guide >> SAP Advanced Planner and Optimizer (SAP APO) >> Basis Settings >> Optimization >> Basic Functions >> Maintain Master Data for Optimization Server (transaction /SAPAPO/COPT01). This customizing transaction lets you specify:
- Which standard optimizers are available for APO
- Where they are located (RFC destination)
- Which of them are active and for which ones logging is activated (Status)
- The path to the log file (Log file) and how access to them is controlled (MaxNoUsers)
In each row, information for just one optimizer is maintained. As of SAP APO 3.0A, these settings are preconfigured; in previous versions, they had to be maintained.
Optimizer High Availability
Transactions /SAPAPO/COPT01 and /SAPAPO/COPT00
To increase the availability of a particular optimizer, you can install it twice on two different servers (A and B). Configure two different RFC destinations for these optimizers, one pointing to server A, the other one to B. In the customizing transaction for optimization servers (/SAPAPO/COPT01), copy the line corresponding to this optimizer, use a different name (Identifier) for the new one and select the other RFC destination. To set the feature Ping before Optimization run, choose SPRO >> SAP Reference IMG >> SAP APO - Implementation Guide >> SAP Advanced Planner and Optimizer (SAP APO) >> Basis Settings >> Optimization >> Basic Functions >> Maintain Global Settings (transaction /SAPAPO/COPT00) and flag the box Check server availability. If one server fails, this is recognized automatically by the ping feature and the alternative server defined in /SAPAPO/COPT01 is used. Load balancing between optimizers of the same type can only be implemented manually by defining which optimizing job will run on which optimizer server. Automatic load balancing is planned to be supported in future APO releases.
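The failover behavior described above (ping each configured destination before the run and fall back to the alternative server) can be sketched as follows. The `ping` callable and destination names are stand-ins for the RFC layer, not real SAP calls.

```python
# Sketch of the "Ping before Optimization run" failover: try each RFC
# destination configured for an optimizer type in turn and use the first
# one that answers.

def pick_destination(destinations, ping):
    """Return the first reachable destination, or None if all fail."""
    for dest in destinations:
        if ping(dest):        # check server availability before the run
            return dest
    return None

# Server A is down, so the run falls back to server B.
reachable = {"OPT_A": False, "OPT_B": True}
print(pick_destination(["OPT_A", "OPT_B"], lambda d: reachable[d]))  # OPT_B
```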
Checking Optimizer Versions
Transaction /SAPAPO/OPT09
Versions of optimizers can be found using transaction /SAPAPO/OPT09.
Optimizer Installation
- Optimizers are available only on Windows platforms. For SAP APO 3.1, the only supported platform is Windows 2000 Advanced Server.
- Optimizers are installed with R3Setup. There are two installation options: on an APO application server, or as a standalone optimizer server (recommended by SAP). In the standalone case, an SAP gateway must be installed and started.
- All programs are installed to subdirectories of :\apoopt\
- A PP/DS optimizer should be installed on every optimizer machine, even if, for example, only SNP is used on the machine. Reason: the PP/DS optimizer has internal remote administration functionality (such as OS process kill via /SAPAPO/OPT12).
- For a PP/DS installation that is never used, set MaxNoUsers to 0 in /SAPAPO/COPT01.
To enable you to administer a standalone optimizer server through the SAP GUI, the PP/DS optimizer must be installed on such a machine even if it will not be used directly. This is because the ABAP code of corresponding transactions uses a connection to PP/DS optimizer only. Example: If you have a standalone SNP optimizer server with PP/DS optimizer installed and activated and choose Tools >> APO Administration >> Optimization >> System Monitoring >> Process Overview (transaction /SAPAPO/OPT12), a complete list of all operating system processes running on both hosts is displayed, and any of them can be killed. If you deactivate the PP/DS optimizer on the standalone SNP optimizer server, processes running on that host are no longer displayed in transaction /SAPAPO/OPT12. If you do not want the PP/DS optimizer on the standalone optimizer server to be used for optimization runs, in transaction /SAPAPO/COPT01 set the maximum number of users to 0.
SNP/ND Optimizer User
- Up to SAP APO Release 2.0, a special user was required in the APO system to run the SNP or ND optimizer. These optimizers had to make an RFC call to an application server for a data request. User information (name and password) for this call was stored on the optimizer server in the file :\apoopt\optuser.ini.
- As of SAP APO 3.0A, these RFC calls are no longer used by the SNP or ND optimizer, but the installation still requires an SNP user name and password. You must enter a name and password even though the user is not created in the SAP APO system.
For older APO releases (up to 2.0), communication between the SNP optimizer (or the ND optimizer) and the APO system was based on two channels. After an APO application server had made an RFC call to the optimizer and sent a request to it, the optimizer connected back to the APO system to download the model data. Later, the optimizer used this connection channel to send the result data back to the APO system. To open this channel, an SAP user defined in the APO system was required. This user was created during installation, and the user information was saved on the optimizer server in the file :\apoopt\optuser.ini. This communication mechanism between the two processes was used even if they ran on the same machine: a user was required for an SNP or ND optimizer connection to the APO system even if the optimizer was installed on the same machine as an application server. As of APO 3.0A, the communication mechanism has changed. However, the installation procedure still asks you for the SNP user name and password. This information is entered into the optuser.ini file, but no corresponding user is created in the SAP APO system.
SAP APO 3.1 Optimizers
There are no significant technology updates for the SAP APO 3.1 optimizers compared to the SAP APO 3.0A optimizers. Only the operating system support matrix is different:
- The optimizer server is planned to be supported on 64-bit Windows 2003 Server (IA-64) only for SAP APO 3.1.
- The optimizer server is no longer supported on Windows NT 4.0 as of SAP APO 3.1.
Most updates for SAP APO 3.1 are functional: for example, as of SAP APO 3.1, the optimizers take stocks into account.
Optimizer Sizing: CPU
CPU requirements:
- The PP/DS optimizer is the only one that can handle multithreading; the other optimizers cannot take advantage of multiple CPUs.
- If several optimizers run in parallel, the optimizer server effectively uses multiple CPUs.
- If you have a CPU bottleneck even though there are enough CPUs for the number of optimizers running simultaneously, you may need to install the optimizers on faster processors.
- SNP optimizations contain especially complex functions, so the SNP optimizer has the highest hardware requirements.
- To minimize the optimization run time, the recommended SNP optimizer CPU speed is at least 700 MHz.
The SNP optimizer uses rather complex optimization functions and therefore needs more hardware resources. The resource requirement depends largely on the number of product/location combinations, but also on the number of variables and constraints required for the optimization run. The recommended minimum CPU speed for the SNP optimizer is 700 MHz. This provides acceptable run times for optimization models with up to 400 000 variables and 200 000 constraints. In other cases, you should use:
- CPUs with at least 1 GHz for optimization models with a larger number of variables and constraints, up to 1 200 000 variables and 600 000 constraints
- CPUs with at least 1.4 GHz for optimization models with more than 1 200 000 variables and 600 000 constraints
Optimizer Sizing: Main Memory and Disk
Main memory requirements:
- PP/DS: 512 MB is normally enough, depending on the time horizon
- SNP: up to 2 GB, or even more for some customers, depending on the number of product/location combinations and the number of variables and constraints required for the optimization run

Disk requirements:
- 1 GB should normally be enough
- Better: as large as the optimizer memory (for big traces and dumps)
Amount of main memory recommended for the SNP optimizer:
- 512 MB to 1 GB for optimization models with up to 400 000 variables and 200 000 constraints
- 1 to 2 GB for optimization models with up to 1 200 000 variables and 600 000 constraints
- More than 2 GB for optimization models with more variables and constraints
If there is not enough main memory for the SNP optimizer and logging was activated in transaction /SAPAPO/COPT01, an entry appears in the log file.
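The CPU and memory tiers quoted in this unit can be condensed into a small lookup. This helper is an illustration derived from the figures above, not an SAP sizing tool; the function name and return format are assumptions.

```python
# Rough sizing helper based on the SNP tiers quoted in this unit:
# 400 000 / 1 200 000 variables with 200 000 / 600 000 constraints.

def snp_sizing(variables, constraints):
    """Return (min CPU clock, main memory) recommendation for an SNP model."""
    if variables <= 400_000 and constraints <= 200_000:
        return ("700 MHz", "512 MB - 1 GB")
    if variables <= 1_200_000 and constraints <= 600_000:
        return ("1 GHz", "1 - 2 GB")
    return ("1.4 GHz", "> 2 GB")

print(snp_sizing(300_000, 150_000))  # ('700 MHz', '512 MB - 1 GB')
```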
Discrete and Continuous Models
There are two categories of optimization models:
- Continuous models with continuous variables
- Discrete models with discrete variables
Critical hardware parameters:
- Discrete models: CPU clock speed on the optimization server
- Continuous models: amount of main memory available
There are two categories of optimization models based on the simulation version and selection criteria for the optimizer: Discrete models with discrete variables Continuous models with continuous variables. Depending on the number of variables and constraints required for the optimization run, these models can be small or large. For larger optimization models (in terms of the number of constraints and variables), the main memory requirements for the APO optimization server are bigger. Discrete models have a more complex structure and are only able to handle smaller numbers of constraints and variables compared to continuous models. Critical hardware requirements: Discrete models: CPU clock speed on the optimization server Continuous models: available amount of main memory
Further Documentation
For additional information about optimizers, see:
- http://service.sap.com/scm >> mySAP SCM Technology >> DB & OS Platforms and System Requirements >> System Requirements for SAP APO 3.0A and SAP APO 3.1 >> Database server, application server, SAP liveCache and Optimizer >> System Requirements for the SAP APO Optimizer
- http://service.sap.com/instguides
Optimizers: Summary
You are now able to:
- Describe the role of optimizers in APO
- Maintain customizing settings for optimizers
- Check optimizer versions
- Plan optimizer installations
Optimizers Exercises
Unit: Optimizers
At the conclusion of this exercise, you will be able to: • Find out information about the optimizer settings in the SAP APO system
Use SAP APO transactions to find out the optimizer’s settings.
1-1
Check the optimizer settings in the APO system.
1-1-1 Find out how many optimizer servers are active in the APO system: What is the Priority and Max Number of users for each optimizer?
1-1-2 Check the RFC destination for the SNP optimizer.
1-1-3 Check the versions of the optimizers.
1-1-4 Check the history of the optimization runs.
Optimizers Solutions
Unit: Optimizers
1-1
Check the optimizer settings in the APO system.
1-1-1 Find out how many optimizer servers are active in the APO system: What is the Priority and Max Number of users for each optimizer? In the APO system, choose SPRO → SAP Reference IMG → APO Implementation Guide → Advanced Planner and Optimizer → Basis Settings → Optimization → Basic Functions → Maintain Master Data for Optimization Server (transaction /SAPAPO/COPT01). Look up the Priority and MaxNoUsers columns.
1-1-2 Check the RFC destination for the SNP optimizer: Use transaction SM59, expand TCP/IP connections, and double-click OPTSERVER_SNP01.
1-1-3 Check the versions of the optimizers: Use transaction /SAPAPO/OPT09.
1-1-4 Check the history of the optimization runs: Choose Tools → APO Administration → Optimization → Log Display (transaction /SAPAPO/OPT11).
APO & BW
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
APO and BW
Contents:
- BW Architecture
- How BW is Used in APO
- Integration of BW and APO

Objectives: At the end of this unit, you will be able to:
- Explain the basics of how APO and BW are integrated
APO, BW and OLTP
(Diagram: An external BW and SAP APO (with the Supply Chain Cockpit, Network Design, Demand Planning, Supply Network Planning, PP/DS Production Planning and Detailed Scheduling, Transportation Planning, and Available to Promise) are connected to the OLTP layer. Sales data is uploaded via BW extractors; integration with SAP OLTP systems uses the CIF and the R/3 Plug-In; non-SAP OLTP systems can also be connected.)
Both SAP APO and an external SAP BW can be linked to the same OLTP system, which provides basic information for different functions: BW extracts transaction and master data from SAP systems for reporting purposes. To support this function, you need to implement the R/3 Plug-In on SAP systems, which provides extraction programs and related functions. These Plug-Ins are not available for non-SAP systems. SAP APO can utilize the same Plug-In functions in the SAP system to extract data for DP. Since SAP APO includes all the functions of an SAP BW system, the extraction procedure works in the same way as in SAP BW. The Core Interface (CIF) provides master data as well as production and inventory information from an SAP R/3 System that are crucial for APO applications. Both systems also provide interfaces to non-SAP systems to incorporate their data for the corresponding processing. If sales history data is extracted directly from SAP system to DP, an SAP BW extractor is used. The SAP BW extractor and CIF do not conflict with each other. SAP Business Information Warehouse (SAP BW) is a component in the SAP APO environment so the standard SAP BW Administrator Workbench is delivered with SAP APO. SAP APO 3.0A is based on the SAP Basis 4.6C Kernel and SAP BW 2.0B. SAP APO 3.1 is based on the SAP Basis 4.6D Kernel and SAP BW 2.1C.
Source System Connections
- A BW system can serve as a source system for another BW system. This scenario is called a Data Mart.
- The Operational Data Store (ODS) can be a DataSource for another BW system.
- An InfoCube can be a DataSource for another BW system.
- A BW system can be a source system for an APO system.
- An APO system can be a source system for a BW system; two-way data exchange is possible.
- BW can be a source system for other mySAP components, such as SEM and CRM.
An SAP data mart is an InfoCube that can be used as a data source for SAP APO. InfoCubes are the central data containers in SAP BW. All data kept in an InfoCube can be extracted and delivered to the SAP APO system. SAP BW can interact with SAP APO in two ways:
- As an APO component that provides functions to simulate and store different planning cycles based on information coming either from a data warehouse or from different ERP systems
- As an external data provider that supplies consolidated information from different ERP systems
Business Information Warehouse: Architecture
The bottom layer within an SAP BW environment contains the data sources for the SAP BW system. This can be any OLTP system, such as an SAP R/3 System, or any external data source. SAP BW includes functionality for loading data from many different sources:
- SAP DataSources and InfoSources (SAP R/3, SAP BW, SAP APO, mySAP CRM, and so on)
- Flat files
- DB link
- Extraction tools using BAPIs
- Customer programs communicating via BAPIs
- RemoteCubes (real-time OLTP data access during reporting only)
The Administrator Workbench (transaction RSA1) contains tools for SAP BW administration, configuration, monitoring, and scheduling. End-user interfaces:
- Business Explorer Analyzer
- Business Explorer Browser
- Web Reporting
- Portals integration
- Third-party front ends
For detailed information on SAP BW, refer to the SAP BW online help and documentation.
InfoCube: Concept
(Diagram: a cube with the dimensions Region (North, South, East), Customer group (Dept. Stores, Wholesale, Retail), and Division (Glassware, Ceramics, Plastics).)
An InfoCube can be thought of as an object used for data storage and designed to facilitate multidimensional analysis. InfoCubes are maintained using the SAP BW Administrator Workbench (RSA1).
Basic InfoCube
- Central data store for reporting and analysis
- Contains two types of data: key figures and characteristics
- 1 fact table and up to 16 dimension tables
- 3 dimensions are predefined by SAP: Time, Unit, Data package ID
InfoCubes contain two types of data: characteristics and key figures. Characteristics are master data or organizational elements (or their attributes) that are used for analysis and reporting; they usually store information such as company code, product, material, customer group, month, or region. Key figures (also known as facts) are values or amounts, that is, numerical data. Examples of key figures are costs, number of hours worked, profit, sales, or order quantity. An InfoCube contains several relational tables that can be logically described as a star schema; the InfoCube's underlying database tables contain the InfoCube data. A star schema contains one fact table and several dimension tables. The fact table holds the numerical data (key figures) and the dimension tables hold characteristic data. These tables are joined at query time to return results. InfoObjects are used to build InfoCubes; they are also used to build transfer structures and communication structures.
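The fact/dimension join described above can be demonstrated with a toy star schema. The table and column names are invented for the example; a real BW fact table would reference dimension tables via DIMIDs rather than plain keys.

```python
import sqlite3

# Minimal star schema: one fact table (key figures) joined to a dimension
# table (characteristics) at query time.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, region TEXT);
    CREATE TABLE fact_sales (region_id INTEGER, quantity INTEGER);
    INSERT INTO dim_region VALUES (1, 'North'), (2, 'South');
    INSERT INTO fact_sales VALUES (1, 10), (1, 5), (2, 7);
""")
# Join dimension to fact and aggregate the key figure per characteristic.
rows = con.execute("""
    SELECT d.region, SUM(f.quantity)
    FROM fact_sales f JOIN dim_region d ON f.region_id = d.region_id
    GROUP BY d.region ORDER BY d.region
""").fetchall()
print(rows)  # [('North', 15), ('South', 7)]
```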
InfoCube: Multi-Dimensional Analysis
(Diagram: starting from an InfoCube with characteristics such as product group, customer group, division, area, company code, region, period, profit center, and business area, an initial query (1) selects customer group, division, and region; successive drilldowns analyze the Ceramics division (2), the Plastics division (3), and the Plastics division in the Southern region (4).)
The primary method of accessing the data stored in the Business Information Warehouse is to define queries to read the data from the InfoCubes. Think of querying as the act of “slicing” a certain portion of a cube to obtain the relevant results from the characteristics and key figures stored in the cube. In this example, the InfoCube consists of many characteristics (product group, area, etc.), but only the customer groups, division and region are chosen for initial query execution (step 1). The result set of this initial query execution would be a summarized view of the data for these particular characteristics. In steps 2, 3, and 4, drill down occurs. Drilling down means to limit the result set of the query to only data which pertains to a particular characteristic value. This means, a more detailed level of data is displayed. SAP BW querying allows multi-dimensional analysis. That means you can create various views of the data in an InfoCube based on selection criteria and drilldowns. The OLAP processor controls access to the data and summarizes results from the presentation of the dataset. Each navigation through a dimension (drilldown) is treated as a separate query execution.
SAP BW Data Model
- An InfoCube is designed or "modeled" to meet a set of business reporting requirements.
- Modeling is the process by which reporting requirements are structured into an object with the facts and characteristics that will meet the reporting needs.
- Characteristics are structured together in related branches called dimensions.
- The key figures form the facts.
- The configuration of dimension tables in relation to the fact table results in what is known as the star schema.
(Diagram: dimension tables 1 through n arranged around a central fact table.)
The primary function of SAP BW is to provide reporting and analysis of transactional data that has been summarized and stored in an efficient manner. In data modeling terms, the table structure in SAP BW is referred to as a star schema. In reality, since there are other tables (not shown) such as SID tables, hierarchy tables, and master data tables in the SAP BW data model, it can be referred to as a snowflake schema, a common data warehousing industry term. In SAP's implementation of the star schema, characteristics are grouped together into dimensions. The facts are the statistics (which could be key performance indicators, KPIs) analysts use to understand their businesses. The dimensions answer the questions "Who? What? When?"; the facts answer the questions "How much money? How many people? How much did we pay them?" This star schema approach is used for all InfoCubes in SAP BW, no matter which application area they represent.
BW: Extended Star Schema / Snowflake Schema
(Diagram: a central fact table surrounded by dimension tables; each dimension table is linked via SID tables to master data, text, and hierarchy tables that are shared among InfoCubes.)
The SAP BW extended star schema differs from the industry star schema. It is divided by a solution dependent part (InfoCube) and a solution independent part, which is shared among other InfoCubes (master data attribute, text and hierarchy tables). The attributes of a dimension table can be linked to master data tables. These master data attributes can be implemented as navigational attributes and then they can be used in analysis like dimension characteristics. Physically, an InfoCube is a set of database tables that can be joined together. There are special values (DIMIDs or Dimension IDs) that are used to relate records in the fact and dimension tables of the star schema. The relationship between dimension and characteristic attribute values is facilitated by SIDs (Set Identification). These SIDs are special values that allow faster join and union operations. SID values have no meaning alone but only exist to map the relationship between dimension and characteristic attribute values at the time of query execution. At the time of query execution, the optimizer will use the DIMIDs to join relevant dimension tables to the fact table and use the SIDs to join the tables that hold the characteristic values with the corresponding dimension tables. The sole purpose of DIMIDs and SIDs is to map existing relationships between fact and characteristic values from the different tables.
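The SID indirection described above can be illustrated with a toy surrogate-key table: characteristic values are replaced by integers so that joins compare integers instead of strings. The data structures here are purely illustrative, not the BW table layout.

```python
# Toy illustration of SID indirection: assign each characteristic value
# a surrogate integer ID on first sight, and reuse it afterwards.

sid_table = {}          # characteristic value -> SID

def sid_for(value):
    """Return the SID for a characteristic value, creating it if needed."""
    if value not in sid_table:
        sid_table[value] = len(sid_table) + 1
    return sid_table[value]

# The dimension table stores SIDs, not the original text values.
dimension_rows = [sid_for(v) for v in ["North", "South", "North"]]
print(dimension_rows)  # [1, 2, 1]
```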
Transaction LISTSCHEMA: a listing of all InfoCube tables and supporting tables
- Review table design and links
- Navigate to table contents
- Review nested structure
- Example for the Costs and Allocations InfoCube
Note the BW table naming convention. All BW object tables start with the prefix "/BI*/". "/BI0/" denotes SAP-delivered objects and "/BIC/" denotes customer-created objects. "F" stands for "fact table", "D" for "dimension table".
The LISTSCHEMA transaction shows not only the InfoCube tables but also the other tables in the extended star schema, down through the attribute tables. Hierarchy and text tables are not shown. To display InfoCube contents, use transaction LISTCUBE or the Data Browser (SE16).
Data Targets: InfoCubes
(Figure: data flow into the BW server. Transaction data from a mySAP.com component or a flat file source system passes from the extract structure through the transfer structure of the DataSource, via the transfer rules into the communication structure of the InfoSource, and via the update rules into the InfoCube.)
A DataSource contains a number of fields in a flat structure used to transfer data into SAP BW (the extract structure). When a DataSource replication is performed, the properties of the DataSource are copied into SAP BW.

An extract structure temporarily holds data from a DataSource in the source system. It contains all the fields that are offered by an extractor in the source system for the data loading process. A transfer structure contains selected fields from the source system's extract structure; these fields determine the makeup of the data (from this DataSource) that flows into the SAP BW system. The transfer structure in the source system is a database table and corresponds to a transfer structure in the SAP BW system. A DataSource is activated in the source system and then replicated into the SAP BW system; thus, the transfer structure in SAP BW contains a field mapping from the source system's transfer structure.

Transfer rules describe how fields from the DataSource are mapped into SAP BW. Transfer rules can also contain routines for data cleansing, data validation, or data transformation. These rules are contained within the transfer structure. Data flows from the transfer structure in the source system through the SAP BW transfer structure into the BW communication structure using the transfer rules. A transfer structure always refers to a DataSource in a source system and to an InfoSource in SAP BW.
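As a rough sketch, a transfer rule can be thought of as a per-field mapping, optionally with a routine attached. The field names and the cleansing routine below are invented for illustration; real transfer rules are maintained in the Administrator Workbench and routines are written in ABAP.

```python
# Simplified model of transfer rules: each target field is filled
# either by direct assignment from a source field or by a routine.
# All field names and the routine are invented for illustration.

def clean_material(record):
    """Example cleansing routine: normalize the material number."""
    return record["MATNR"].strip().upper()

# target field -> source field name, or a routine taking the whole record
transfer_rules = {
    "0MATERIAL": clean_material,
    "0QUANTITY": "MENGE",
}

def apply_transfer_rules(record, rules):
    """Map one source-system record into the communication structure."""
    out = {}
    for target, rule in rules.items():
        out[target] = rule(record) if callable(rule) else record[rule]
    return out

src = {"MATNR": "  pump-100 ", "MENGE": 12}
print(apply_transfer_rules(src, transfer_rules))
# {'0MATERIAL': 'PUMP-100', '0QUANTITY': 12}
```

The point of the sketch is the separation of concerns: the transfer structure fixes *which* fields arrive, while the transfer rules fix *how* each target field is derived from them.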
APO/BW Interfaces
There are three interfaces between BW and APO:
- Demand Planning (data mart interface)
- Supply Chain Cockpit (SCC): calling KPIs
- Business Content: extracts data from the APO liveCache and from Data Dictionary tables
Demand Planning – Data Mart Interface
(Figure: an external BW system holds historical sales data for sales reporting, loaded through BW staging from SAP OLTP and non-SAP OLTP source systems via extractors and the CIF. The historical sales data is passed to the APO planning area, whose planning data is held as time series and orders in the liveCache.)
An external BW system can be used to keep historical sales data for reporting purposes. The same data can also be used as a starting point for future sales planning, so it does not have to be extracted a second time by APO from the underlying OLTP system but can be retrieved directly from the BW system. To upload data from any source system to APO, you must create a BW-like InfoCube in the APO system. BW-like InfoCubes contain both transaction data (key figures in the fact table) and master data (characteristics in dimension tables). BW-like InfoCubes can also be used as backup InfoCubes.

To store DP planning data in APO, a different type of InfoCube is generated automatically during customizing (creation of the planning object structure). If several planning object structures are used, the same number of InfoCubes is generated. As of APO 3.0A, Demand Planning data is stored partly in the APO database and partly in liveCache:
- Master data is stored in the InfoCube, that is, in the APO database. There is an entry in the InfoCube for each characteristic combination (that is, for each row with unique values for the characteristics stored in this InfoCube).
- Transaction data is not stored in the fact table of the InfoCube (in contrast to BW-like InfoCubes). By default, transaction data is stored in liveCache as time series.
- The InfoCube stores a pointer that links the master data to the transaction data stored in liveCache. When Demand Planning information is collected from liveCache, the InfoCube is accessed first to read the pointer to the liveCache data.
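The pointer mechanism described above can be sketched as follows. The structures, names, and data are invented for illustration; real liveCache access goes through COM routines, not Python dictionaries.

```python
# Toy model of DP data storage: the InfoCube row holds the
# characteristic combination plus a pointer (time-series ID);
# the key-figure values live in liveCache time series.
# All names and data are invented for illustration.

# "InfoCube": one row per characteristic combination, with a pointer.
infocube = [
    {"product": "PUMP-100", "location": "1000", "ts_pointer": 4711},
]

# "liveCache": time series keyed by pointer, one value per period.
livecache = {
    4711: {"DEMAND": [100, 110, 95]},  # three planning periods
}

def read_key_figure(product, location, key_figure):
    """Read DP data: find the combination in the InfoCube first,
    then follow its pointer into the liveCache time series."""
    for row in infocube:
        if row["product"] == product and row["location"] == location:
            return livecache[row["ts_pointer"]][key_figure]
    return None

print(read_key_figure("PUMP-100", "1000", "DEMAND"))  # [100, 110, 95]
```

This two-step access is why the InfoCube read must be fast even when all key figures live in liveCache: every liveCache lookup starts from the pointer stored in the APO database.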
Demand Planning Configuration
(Figure: a planning view is part of a planning book, which selects characteristics and key figures from a planning area. The planning area determines which and how many key figures are used and where they are stored, and is based on a planning object structure. Example DP/SNP characteristics: region, sales area, product hierarchy, sold-to party, location. Example key figures: demand plan, actual sales, overrides, promotions, production quantity. Data is held in versions: the active version 000 and further planning versions such as 001, each with version-dependent master and transaction data.)
A planning object structure contains characteristics (characteristic combinations) that can be used in one or more planning areas. Demand Planning characteristics are stored in APO-generated InfoCubes. In Demand Planning and Supply Network Planning, data is divided into planning areas and subdivided into versions. A planning area contains characteristics and key figures for planning and must be initialized for every planning version.

A key figure is a numerical value that is either a quantity or a monetary amount, for example, projected sales value in dollars or projected sales quantity in pallets. Characteristics are the objects by which you aggregate, disaggregate, and evaluate business data. They define the levels at which you can plan and save data. For example, your master data could include all the products, product families, regions, and customers that your company is going to plan with APO Demand Planning, plus all the appropriate combinations of these (for example, which customers buy which products in which regions).

To be able to save data for a planning area version, time series must be created for that version. The system creates a time series in the liveCache for each characteristic value combination and each key figure. Each planning version (or DP version) is a separate set of data. You can display only one version at a time in interactive planning.
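The rule about time series creation implies a simple multiplication that matters for liveCache sizing: one time series per characteristic value combination and key figure, for every initialized version. A sketch, with invented numbers:

```python
# Number of liveCache time series created when planning area
# versions are initialized: one per characteristic value combination
# and key figure, per version. The figures below are invented
# for illustration.

def time_series_count(char_combinations, key_figures, versions):
    """Time series needed for the given planning area setup."""
    return char_combinations * key_figures * versions

# e.g. 20 000 combinations, 10 key figures in liveCache, 2 versions
n = time_series_count(20_000, 10, 2)
print(n)  # 400000
```

Each factor is a sizing lever: fewer combinations, fewer key figures kept in liveCache, or fewer initialized versions all reduce the count linearly.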
Planning Areas in /SAPAPO/MSDP_ADMIN
Menu path: Demand Planning → Environment → Current Settings → S&DP Administration (transaction /SAPAPO/MSDP_ADMIN)
With transaction /SAPAPO/MSDP_ADMIN, you can display a list of all planning areas. To display a planning area: right-click it and choose Change; in the dialog box that appears, choose Confirm; then choose Key figures → Details.
Key Figures: InfoCube or liveCache?
If a key figure has an InfoCube entry, its data is stored in that InfoCube in the APO database; otherwise, its data is stored in liveCache.
With the planning book and the planning view, the customer decides which key figures are relevant for planning. If all key figures are stored in liveCache, only the default InfoCube generated in the APO database exists; it contains the characteristic combinations and the pointers into liveCache, and reading it is very fast. For reporting, you might want to transfer the data into the SAP BW system; to do so, you must back up the data from liveCache into InfoCubes created in the APO database. In liveCache 7.2, there is no logging for Demand Planning data, so in this case you can back up the data into InfoCubes.
Storage of Demand Planning Data
(Figure: transaction data (key figures) is stored either in InfoCubes in the APO database, as set in Customizing, or, by default, object-oriented in liveCache.)
Application specialists decide which data is important for planning. Data relevant for planning is stored in liveCache and other data is stored in InfoCubes in APO DB.
Saving DP Data into an InfoCube
1. Generate an export DataSource for the planning area
2. In the Administrator Workbench, right-click the source system and choose Replicate DataSources
3. In the source systems view of the Administrator Workbench, right-click the source system and choose DataSource Overview
4. Assign an InfoSource to the DataSource
5. Go to the InfoSource view and assign the communication structure to the transfer structure
6. Activate the InfoCube and load data
Generate an export DataSource for the planning area:
- From the SAP Easy Access menu, choose Demand Planning → Environment → Current Settings → Administration of Demand Planning and Supply Network Planning (or use transaction /SAPAPO/MSDP_ADMIN).
- Generate the export DataSource. Specify which fields you want to be selectable later for reporting purposes. You must select the field for the version, such as /BI0/9AVERSION.
- Refer to the BW310 course for detailed information on InfoCube setup.
Loading Data from InfoCube into liveCache
From the SAP Easy Access menu, choose Demand Planning → Environment → Load planning area version. Alternatively, use transaction /SAPAPO/TSCUBE.
To copy data occasionally from an InfoCube to the planning area, choose Demand Planning → Environment → Load planning area version (transaction /SAPAPO/TSCUBE).
Planning Versions in Demand Planning
SAP does not recommend using planning version 000 for your Demand Planning run, especially if you are using further SAP APO applications such as PP/DS, CTM, or SNP together with DP. The reason is that planning version 000 is always active and, as such, always contains orders. This is described in SAP Note 429400 (Recommendations about Planning Versions for SAP APO).
BW and SCC: Integration of Workbooks into APO
(Figure: the APO Supply Chain Cockpit calls workbooks in the BW Excel front end.)
To check their current operational status, APO users can call BW reporting from the Supply Chain Cockpit. The workbooks are stored on the BW server, and you need to create a link between the context menu in the SCC and the workbooks on the BW server.
APO-BW Content
Standard APO-BW Business Content is available to help interpret the APO planning results. Examples:
- Overview and details of the actual (or historic) plan: "What will be produced? What is the corresponding resource load?"
- Optimization: compare plan A with plan B (APO planning versions)
- Comparison with execution data
- History: keep information on the master plan
- Store (aggregated) information on the production plan
Scenario 1: Single Cube
(Figure: sales data from the source systems and plan data from the liveCache planning areas are combined in a single BW cube in the APO system, on which reporting runs.)
Scenario 2: Multi Cube
(Figure: sales data from the source systems and plan data are stored in separate cubes; a MultiCube in the APO system combines them for reporting, alongside the liveCache plan data of the planning areas.)
Scenario 3: Multi + Remote Cube
(Figure: sales data from the source systems is stored in a basic cube, while plan data is accessed directly from the liveCache planning areas through an SAP RemoteCube; a MultiCube combines both for reporting.)
Scenario 4: MultiCube + Basic Cubes
(Figure: sales data and plan data from the source systems and the liveCache planning areas are each stored in basic cubes; a MultiCube in the APO system combines them for reporting.)
BW Reporting Frontend - Business Explorer
(Figure: the Business Explorer front end (MS Excel) comprises the BEx Browser, the BEx Analyzer with query definition, and the BEx Map. It communicates with the OLAP processor on the Business Information Warehouse server, which reads from InfoCubes/ODS objects and the document storage.)
The Business Explorer Analyzer is the SAP-delivered reporting front end, a modified OLAP-enabled version of Microsoft Excel. In the Business Explorer Analyzer, you define queries that are based on a selection of InfoObjects or predefined InfoCube global structures. By navigating through the queries, you can generate various views of the data, which enable you to analyze and present the InfoCube data in different ways. You can save the result of the navigation in a workbook, which you can call up the next time you start the Business Explorer Analyzer. You can manage and execute workbooks that you stored on the server directly from the Business Explorer Browser. The BEx Map is an additional reporting tool that you can use to display your query data graphically according to geographical criteria; you can do this from the Business Explorer Analyzer.
Creating a Query with the BEx Analyzer
(Figure: screenshots of the SAP Business Explorer query creation dialogs: first choosing New Query and selecting an InfoCube from the list of InfoCubes available for the new query, then the query definition screen with the available objects, filter, free characteristics, rows, and columns.)
The Business Explorer Analyzer can be started from the SAP GUI front end or by using transaction RRMX. It is a tool for defining queries. Queries can be saved into a role or simply created and run ad hoc. The BEx Analyzer provides the ability to select from the characteristics, key figures, and navigational attributes of any InfoCube. When you launch the BEx Analyzer and try to open a query, you are prompted to log on to the SAP BW system; thus, the BEx Analyzer session serves as the client in the client/server interface.

To create a query, proceed as follows:
1. Choose the New icon in the toolbar. A selection screen containing all of the available queries is displayed.
2. Choose New query again. The selection screen containing all the InfoCubes for which you can define a new query is displayed.
3. Choose the InfoCube whose data is to be used as the basis for your query. You can also display the technical names of the InfoCubes by right-clicking the "InfoCubes for new query" text (the top node in the tree) and choosing the corresponding option.
APO and BW: Summary
You are now able to: Explain the basics of how APO and BW are integrated
For detailed information on SAP BW, refer to SAP BW online help and documentation or attend the SAP BW courses.
APO and BW Exercises Unit: APO and BW
At the conclusion of this exercise, you will be able to: • Query an InfoCube
Use Administrator Workbench and BEx Analyzer to investigate and query an InfoCube.
1-1
Using the Administrator Workbench, investigate the Sales InfoCube.
1-1-1 Find out the data model structure of the Sales InfoCube. What are the dimensions, characteristics, and key figures?
1-1-2 Find out the source system of the Sales InfoCube.
1-2
Find out the names of the Fact table and Dimension tables of the Sales InfoCube.
1-3
Display the contents in APO-Location 1000 of the Sales InfoCube Fact table.
1-4
Evaluate the aggregated historical data for product T-F2##. Log on to the APO system using the BEx Analyzer and open the query SALES DATA for the SALES InfoCube. Display report on product T-F2## with Sold-to party 1000 and Sales Organization 1000.
APO and BW Solutions Unit: APO and BW
1-1
Using the Administrator Workbench, investigate the Sales InfoCube.
1-1-1 Find out the data model structure of the Sales InfoCube. What are the dimensions, characteristics, and key figures?
In the APO system, use transaction RSA1 to go into the Administrator Workbench → Modeling → Data Targets → expand Sales → double-click the Sales InfoCube (or right-click the Sales InfoCube and choose Display Data Model).
1-1-2 Find out the source system of the Sales InfoCube.
In the Administrator Workbench, select Modeling → Data Targets → expand Sales. Right-click the Sales InfoCube and choose Show Data Flow. The source system should be at the very bottom. Double-click the source system to find out the system name.
1-2
Find out the names of the Fact table and Dimension tables of the Sales InfoCube. Use transaction LISTSCHEMA to display the tables.
1-3
Display the contents in APO-Location 1000 of the Sales InfoCube Fact table. Use transaction LISTCUBE. Enter B in InfoCube Type and Sales in InfoCube Name. Choose Execute. Enter 1000 in APO-location.
1-4
Evaluate the aggregated historical data for the product T-F2##. Log on to the APO system using the BEx Analyzer and open the query SALES DATA for the SALES InfoCube. Display the report on product T-F2## with Sold-to party 1000 and Sales Organization 1000. Menu path: Start → Programs → SAP Frontend → SAP Business Explorer Analyzer. Enable macros and choose the Open icon. Log on to the APO system (or use transaction RRMX). Select the Queries icon. Expand the Sales InfoArea and the Sales InfoCube. Select the Sales Data query and choose OK. You are given an aggregated view of the invoiced sales quantity and invoiced sales value of the three sales organizations. Right-click the field next to APO Product to restrict the evaluation to your product T-F2## using Select Filter Value. By right-clicking Sold-to party, you can drill down to the details.
APO Sizing & Performance
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
APO Sizing & Performance
Contents
- APO Sizing
- Tuning Measures
- Monitoring Tools: workload monitoring, operating system monitoring, database performance monitoring

Objectives
At the end of this unit, you will be able to:
- Use SAP performance monitoring tools for the components of an APO system
- Detect the most critical performance bottlenecks
- Plan your performance monitoring strategy
SAP APO Sizing Sizing should be performed if you are starting a new SAP APO project, changing the platform, or upgrading to a new release The SAP APO 3.0A Quick Sizer is also valid for SAP APO 3.1
See http://service.sap.com/sizing and http://service.sap.com/quicksizer. See also the sizing examples at http://service.sap.com/scm → mySAP SCM Technology → Platforms & System Requirements → System Requirements for SAP APO 3.0A and SAP APO 3.1.
SAP APO Sizing Models
Available sizing models for SAP APO:
- DP
- SNP
- PP/DS
- CIF: new in the latest Quick Sizer
- ATP: new in the latest Quick Sizer
- TP/VS: in progress
The available sizing models in the Quick Sizer are based on heuristic approaches. For the Optimizer itself, there are sizing examples. The available sizing models focus on batch planning (mass processing) and are order-volume based. Overhead for interactive planning is also taken into consideration, based on the number of concurrent users.
CIF Quick Sizer
CIF sizing is important to ensure adequate performance of the integration. In the CIF model, the Quick Sizer provides additional information: the required CPU hardware resources for the SAP R/3 and SAP APO application servers.
Quick Sizing of mySAP APO
http://service.sap.com/sizing → choose Quick Sizer >> Start Quick Sizing >> SAP Advanced Planner and Optimizer
A sizing project normally starts by using the Quick Sizer program in SAPNet (see screenshot above). The Quick Sizer is based on the results of publicly available SAP standard application benchmarks and considers only the dialog workload. It can therefore only reflect the standard components of the SAP System. The Quick Sizer only considers the average dialog usage of an SAP System, since sizing for an SAP System varies according to factors such as peak workload versus average workload, number of users, user behavior, amount of background processing, user customizing and reporting. After the Quick Sizer has given you a rough estimate, your hardware partner(s) will give you a detailed sizing analysis for your system landscape. Your hardware partner is in continuous communication with SAP so as to obtain the latest sizing information. During the Analysis Session of the GoingLive Check service, SAP asks for detailed information on user and document volume, and information on ALE/EDI volume and legacy interfaces. This information is needed to counter-check the hardware partner’s sizing.
Installation on One Host
For a production SAP APO system:
- On a 32-bit Windows platform, liveCache and the SAP APO system must be installed on different hosts to avoid memory bottlenecks.
- On UNIX, installation of the SAP APO system and liveCache on one host is recommended to avoid network bottlenecks.
- Optimizers must be installed on at least one separate server. Reason: they run on Windows only, and new optimizer versions need plenty of memory.
- Never install a production SAP APO system on a single host.
- Between all servers and SAP APO components, use a high-speed network connection. Use FDDI with high-performance switches.
Regarding installation of all SAP APO components on one server, see SAP Note 392852 (Several Applications on one liveCache server).
APO Scalability
S - Small system (for evaluation purposes): all APO components on one server (currently only under Microsoft Windows 2000 / NT 4.0)
M - Medium system: on 32-bit Windows, liveCache on a separate server; on 64-bit UNIX, all APO components except the optimizer on one server; the optimizer always on a separate server
L - Large system: liveCache on 64-bit UNIX due to the RAM required; all APO components (except the optimizer) possibly on one server; possibly additional application servers; the optimizer on a separate server
XL - Very large system: liveCache on 64-bit UNIX due to the RAM required; all APO components (except the optimizer) possibly on one server; possibly additional application servers; possibly the database on a separate server; possibly several optimizer servers
APO Sizing: Small System Implementation (S)
Frontends (generally):
- APO GUI
- Pentium 350 MHz or faster
- Min. 128 MB main memory
- Graphics card: screen resolution 1024x768 pixels, min. 32768 colors, 4 MB memory
APO server (APO GUI / application server / DB server / liveCache / Optimizer on one host); hardware requirements:
- 2 to 3 processors (depending on type and speed; minimum speed 700 MHz)
- 1 GB main memory
- 20 GB hard disk space
A small system with all APO components installed on one machine, as in the graphic, can handle approximately:
- 1 000 characteristic combinations (of, for example, 50 products, 10 locations, 10 customers)
- 10 key figures stored in liveCache
- 500 location products
- 2 000 sales orders
- 1 000 procurement orders
- 1 000 transfer orders
- 1 000 forecast orders
- 2 000 production orders
- a 1-hour timeframe for the background planning run

Independently of the size of the APO system, frontends accessing APO should have:
- the special APO GUI components installed
- a CPU with 350 MHz or faster
- at least 128 MB main memory
- a graphics card with a screen resolution of at least 1024x768 pixels, a color palette with at least 32 768 colors (True Color is better), and 4 MB memory
APO Sizing: Medium System Implementation (M)
Optimizers on a separate server; hardware requirements per optimizer running simultaneously:
- 1 processor (at least 700 MHz)
- 512 MB to 1 GB main memory
- 2 GB hard disk space
liveCache (for Windows, on a separate server); hardware requirements:
- 2 to 3 processors (depending on type and speed)
- 4 GB main memory
- 50 GB hard disk space
Application server / DB server on one host; hardware requirements:
- 4 to 6 processors (depending on type and speed)
- 3 GB main memory
- 50 GB hard disk space
A medium APO system implementation is required for (approximately) the following key figures:
- 20 000 characteristic combinations (of, for example, 500 products, 20 locations, 100 customers)
- 10 key figures stored in liveCache
- 20 000 location products
- 40 000 sales orders
- 20 000 procurement orders
- 20 000 transfer orders
- 20 000 forecast orders
- 10 000 production orders
- a 1-hour timeframe for the background planning run

Because of the memory limitations of a 32-bit Windows system, optimizers should always be installed on a separate server. The values shown in the slide are valid for the SNP optimizer, which has the highest hardware requirements. Recall that the sizing of optimizers depends mainly on the number of variables and constraints required for the optimization runs. For the same reason, if your operating system is 32-bit Windows, install liveCache on a separate host; in a Windows environment, you will need 3 servers to install a medium APO system. For a 64-bit UNIX system, installation of all APO components (except optimizers) on one server is recommended; in this case, you need just one additional server for the optimizers.
APO Sizing: Large System Implementation (L)
Optimizers; hardware requirements per optimizer running simultaneously:
- 1 processor (at least 1 GHz)
- 1 to 2 GB main memory
- 2 GB hard disk space
liveCache on a UNIX host (one host for liveCache and application server recommended); hardware (example):
- 4 to 5 processors (depending on type and speed)
- 16 GB main memory
- 50 GB hard disk space
Application server / DB server on one host; hardware requirements:
- 8 to 10 processors (depending on type and speed)
- 4 GB main memory
- 50 GB hard disk space
A large APO system implementation is required for (approximately) the following key figures:
- 200 000 characteristic combinations (of, for example, 5 000 products, 50 locations, 1 000 customers)
- 10 key figures stored in liveCache
- 100 000 location products
- 200 000 sales orders
- 100 000 procurement orders
- 100 000 transfer orders
- 100 000 forecast orders
- 100 000 production orders
- a 1-hour timeframe for the background planning run

Because of the memory limitations of a 32-bit Windows system, optimizers should be installed on a separate server. The values shown in the slide are valid for the SNP optimizer, which has the highest hardware requirements. Recall that the sizing of optimizers depends mainly on the number of variables and constraints required for the optimization runs. If the PP/DS optimizer is used regularly, reserving 2 CPUs for its run may improve performance. For the same reason, and due to the high requirements on liveCache memory, liveCache should be installed on a 64-bit UNIX system. Installation of all APO components (except optimizers) on one 64-bit UNIX server is recommended. Additional application servers on separate hosts may improve scalability.
APO Sizing: Very Large System Implementation (XL)
Optimizers: separate servers, especially for the SNP, PP/DS, and CTM optimizers; hardware requirements per optimizer running simultaneously:
- 1 processor (at least 1.4 GHz)
- >= 2 GB main memory
- 2 GB hard disk space
liveCache on a UNIX host; hardware requirements:
- >= 16 processors
- >= 80 GB main memory
- >= 250 GB hard disk space
Database server and (multiple) application servers; hardware requirements:
- >= 20 processors
- >= 8 GB main memory
- >= 300 GB hard disk space
A very large APO system implementation is required for (approximately) the following key figures:
- 1 000 000 characteristic combinations (of, for example, 50 000 products, 100 locations, 10 000 customers)
- 10 key figures stored in liveCache
- 500 000 location products
- 1 000 000 sales orders
- 1 000 000 procurement orders
- 500 000 transfer orders
- 500 000 forecast orders
- 500 000 production orders
- a 3-hour timeframe for the background planning run

Due to the memory limits of a 32-bit Windows system, optimizers should be installed on a separate server. The values shown in the slide are valid for the SNP optimizer, which has the highest hardware requirements. Recall that the sizing of optimizers depends mainly on the number of variables and constraints required for the optimization runs. If the PP/DS optimizer is used regularly, reserving 2 CPUs for its run may improve performance. For the same reason, and due to the high requirements on liveCache memory, liveCache should be installed on a 64-bit UNIX system. Installation of all APO components (except optimizers) on one 64-bit UNIX server is possible. A database server and additional application servers on separate hosts may improve scalability.
SAP APO Sizing: Tips & Tricks (1)
- Characteristic combinations (affect liveCache main memory): the number of characteristic combinations is not the product of all products, locations, customers, and so on; not all combinations are relevant for your planning run.
- Key figures (affect liveCache main memory): in the planning area, you can distribute your key figures between liveCache and a BW InfoCube; the default as of APO 3.0A is 100% of the key figures in liveCache.
- Orders (affect liveCache main memory).
- DP planning versions (affect liveCache main memory): do you really need all your versions in liveCache? You can also store versions in a BW InfoCube.
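The sizing levers above all scale the same underlying product, so a back-of-the-envelope liveCache estimate can be sketched as follows. The bytes-per-value figure and all input numbers are invented assumptions for illustration; real sizing should use the SAP Quick Sizer.

```python
# Rough, illustrative estimate of liveCache time-series memory:
# combinations x key figures x versions x periods x bytes per value.
# The 16-bytes-per-value figure and all inputs are invented
# assumptions; use the SAP Quick Sizer for real sizing.

BYTES_PER_VALUE = 16  # assumed storage per key-figure value

def ts_memory_mb(combinations, key_figures, versions, periods):
    """Back-of-the-envelope time-series memory footprint in MB."""
    values = combinations * key_figures * versions * periods
    return values * BYTES_PER_VALUE / (1024 * 1024)

base = ts_memory_mb(20_000, 10, 2, 104)   # 2 years of weekly buckets
halved = ts_memory_mb(20_000, 5, 2, 104)  # half the key figures in liveCache
print(round(base), round(halved))  # halving key figures halves the footprint
```

Because every factor enters linearly, moving half the key figures (or versions) out of liveCache into InfoCubes halves this part of the footprint, which is exactly the trade-off the tips describe.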
SAP APO Sizing: Tips & Tricks (2)
- SNP versions (affect liveCache main memory): are you really using 100% copies of your active versions? Are all versions equally big? Are all users using all versions in parallel?
- Periods in the planning horizon (affect liveCache main memory).
- Duration of the planning run (affects liveCache CPU): can you afford a longer planning run?
- Retention period (affects disk space).
- Optimizer: number of variables (affects optimizer main memory).
- Optimizer: number of constraints (affects optimizer main memory).
- Duration of the optimizer run (affects optimizer CPU).
Possible Performance Bottlenecks in an APO System
Potential performance problems:
1. Frontend communication (GUI, WTS)
2. CIF communication with the SAP R/3 instance
3. SAP APO instance
4. APO database
5. liveCache (memory and devspaces)
6. Network within the APO system
7. Optimizer
In an SAP APO environment, performance problems can occur in different areas:
- during frontend communication
- during data transfer between APO and the OLTP system and the processing of RFCs
- in the ABAP code on the SAP instance
- in the APO database
- in the liveCache
- during network communication between the SAP APO instance and the database server, or between the SAP APO instance and the liveCache server
- on an optimizer server

Considering all the components involved, performance analysis is more complex in APO than in a standard SAP system. SAP performance analysis experience can still be applied in the APO environment, but extra background information is required, especially for the liveCache. In this unit, only the most important aspects of performance monitoring and analysis of an APO system are described.
Tuning Measures: Basis Tuning
Optimize system parameters:
- SAP Basis (memory management, ...)
- Database (database buffer sizes, ...)
- liveCache
- Optimizer
- Operating system and network
Optimize the database disk layout (I/O balancing):
- Database server
- liveCache server
Optimize workload distribution:
- Number of work processes, background scheduling, logon groups
Check hardware sizing:
- Hardware bottlenecks?
Tuning Measures: Application Tuning
- Apply SAP Notes: explore bug fixes, corrections, patches, hints
- Optimize the SAP Customizing (in APO: selection variants, ...)
- Optimize the customer's ABAP code (Z reports, user exits, ...)
- Create, change, or drop indexes
- Use table buffering
Tuning Measures: Responsibility
Basis tuning
- Goal: distribute the workload optimally to avoid performance bottlenecks
- Responsible parties: technical team lead, system/database administrator, technical consultant
Application tuning
- Goal: avoid unnecessary workload by optimizing programs and the usage of applications
- Responsible parties: ABAP developers, application consultant, system/database administrator, technical consultant
Performance tuning is a joint effort between the application team and the Basis administrators. Procedures for monitoring and tuning should always be defined and well documented.
Support Package Level
Determine the latest available support package for SAP APO Go to http://service.sap.com/scm >> mySAP SCM Technology >> Availability of mySAP SCM Support Packages: SAP APO SPs, SAP liveCache and COM Builds, SAP EM, R/3 Plug-In >> Overview Matrix SAP APO 3.1 [3.0A] SP / COM / liveCache / Optimizer versions
Show current patch level with SPAM Upgrade to the latest support package as soon as possible
SAP recommends that you maintain your system at the current support package level.
Typical Performance Problems Overall poor response time General problem caused by a system-wide bottleneck (such as memory bottleneck on the liveCache server due to inadequate hardware sizing) General problem caused by a serious specific performance problem (such as memory bottleneck on the liveCache server caused by named consistent views that are open for many hours) Solution approach: analyze average system-wide response time
Long response time of a specific transaction Solution approach: analyze the response time of individual dialog steps of the transaction
Long running background jobs. Solution approach:
- Analyze the overall system-wide parameters first (SAP Basis, RDBMS, liveCache)
- Analyze the particular job (SQL, ...)
- Schedule large background jobs for times of low system load
Long response time for a specific transaction: In a standard SAP system, a response time longer than 1 to 1.5 seconds is considered to be too long. In contrast to this, for APO transactions there is no overall value that is considered a maximum acceptable runtime. Runtimes of APO transactions can vary a lot. In a standard SAP system, expensive SQL statements may cause performance problems for specific transactions that can sometimes escalate to general problems. In an APO system, such problems can be caused by long-running COM routines. You can tune expensive SQL statements in an RDBMS but you cannot tune long-running COM routines.
Monitors for Detection and Analysis (1) Workload analysis transaction ST03/ST03N Workload overview: identify a system-wide bottleneck (SAP instance, database, in ST03N also liveCache) Transaction profile: identify long-running report/transaction and the corresponding system component causing a bottleneck Single records in ST03N / Transaction STAD: identify most expensive COM routines in liveCache
liveCache monitoring transaction LC10 Memory Areas: monitor usage of data cache and heap Kernel threads / liveCache Console: monitor liveCache threads and tasks OMS Monitor: monitor frequency of calls, runtimes and memory usage of COM routines
To analyze APO system performance: You can use all the monitoring transactions available in standard SAP systems. Some of them (such as ST03N) may contain additional, APO-specific information. You can also use the APO-specific transaction LC10 for monitoring liveCache.
Monitors for Detection and Analysis (2) Work Processes / Users: SM50 / SM66 / AL08 / SM04 Identify table name, report / transaction name Critical transaction must be running at the time of monitoring
Shared SQL Area of RDBMS: in ST04 choose Detailed Analysis >> SQL Request Identify expensive SQL statements Find table name, index used, report where the statement is called
SQL Trace on application servers: ST05 Must be used for some RDBMSs (such as SAP DB, Microsoft SQL Server) instead of ST04 Like ST04, it shows the statement, table name, index used Report / transaction name or user name can be used for filtering
ABAP runtime analysis: SE30
Time Measurement for a 'Simple' Transaction Step

[Figure: response time of a dialog step broken down across presentation server, application server, and database server: wait time, roll-in, load time, processing time (including CPU time), and database time, with network transfer at both ends]

Workload time statistics are presented in the workload monitor ST03 or ST03N.
Workload time statistics presented by the workload monitor in a standard SAP system include the following average times per step, given in milliseconds:
- Response time: Starts when a user request enters the dispatcher queue; ends when the next screen is returned to the user. For a "simple" transaction step (one that does not include synchronous RFCs or processing of SAP GUI controls), the response time does not include the time to transfer information between the frontend and the application server.
- Wait time: The time a user request sits in the dispatcher queue. It starts when the user request is entered in the dispatcher queue and ends when the request starts being processed.
- Roll-in time: The amount of time needed to roll user context information into a work process.
- Load time: The time needed to load objects such as ABAP source code, CUA, or screens from the database and generate them.
- Database request time: Starts when a database request is put through to the database interface; ends when the database interface has delivered the result.
- Enqueue time: Time spent waiting for the assignment of an enqueue lock, if necessary.
- Processing time: Response time minus the sum of wait time, roll time, load time, database request time, and enqueue time. This must be calculated, as the workload monitor does not show it directly.
- CPU time: The time of effective CPU usage by the SAP work process.
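Processing time is derived, not measured. As an illustration (a minimal sketch with hypothetical millisecond values, not an SAP tool), the rule above can be expressed as:

```python
# Sketch: derive processing time from ST03 averages, since the workload
# monitor does not show it directly. All inputs are hypothetical average
# times per dialog step, in milliseconds.

def processing_time(response, wait, roll, load, db_request, enqueue=0.0):
    """Processing time = response time minus all separately measured parts."""
    return response - (wait + roll + load + db_request + enqueue)

step = {"response": 1200.0, "wait": 50.0, "roll": 30.0,
        "load": 20.0, "db_request": 600.0, "enqueue": 0.0}
print(processing_time(**step))  # 500.0
```

A high processing time relative to CPU time would then point at wait situations inside the work process (for example, RFC waits) rather than at the database.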
Overall Response: Workload Monitor (ST03)
To start the workload monitor, choose Tools → Administration → Monitor → Performance → Workload → Analysis (transaction ST03), then select Detail Analysis and Workload. SAP R/3 Release 4.6C contains a preliminary version of the new transaction ST03N, which replaces ST03 in SAP Basis Release 6.10. The proportion of database calls to database requests in the workload monitor is quite important. If data from a database table is buffered in the SAP buffers, a call to the database server is not required, so the ratio of database calls to database requests gives an overall indication of the efficiency of table buffering. The lower this ratio, the better; as a guideline, it should not exceed 1:10.
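The buffering efficiency check above is a simple ratio. The following sketch (hypothetical ST03 counter values, not an SAP API) shows the calculation:

```python
def call_request_ratio(db_calls, db_requests):
    """Fraction of logical requests that actually reach the database server.
    Requests satisfied from the SAP table buffers need no database call."""
    return db_calls / db_requests

# Hypothetical ST03 figures: 2 million requests, of which 150,000
# reach the database server.
ratio = call_request_ratio(150_000, 2_000_000)
print(f"{ratio:.3f}")  # 0.075 -> better than the 1:10 guideline
```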
Time Measurement: Simple Transaction Step in APO

[Figure: the same response-time breakdown, extended by a liveCache server: in addition to the database time, the response time now contains the DB procedure time spent in liveCache]

You need the workload monitor ST03N to see the time spent in liveCache. General rules for average response times in dialog tasks that apply to SAP OLTP systems do not apply to APO systems.
Response time on an APO application server also includes time spent on the liveCache which is mostly time spent with processing COM routines. As of SAP Basis 4.6C SP 20 or SAP Basis 4.6D SP 10, this time is presented in ST03N as DB proc.time.
Workload Monitor ST03N in an APO System
Transaction ST03N replaces ST03 in SAP Basis release 6.10. SAP Basis releases 4.6C and 4.6D contain a preliminary version of this new transaction. To get an overview of the performance of your APO system, start transaction ST03N. In expert mode select the instance (or Total to see the response time for the entire system) and the period of interest. In section Analysis views choose Workload overview. Check Average time per step in ms for high average response times. If the runtime in specific areas such as the APO database or liveCache is too high, check whether this is due to incorrect configuration. For example, check whether the APO database data buffer or the liveCache data cache is too small.
ST03N: Transaction Profile
After detecting a long runtime for a specific task type, you can use the transaction profile in ST03N to analyze time statistics for individual programs or SAP transactions. You can use it to identify programs or transactions that cause high response times, and possibly also the reasons for this. In section Analysis views choose Standard. To see the transactions causing the highest load on the system, sort by total response time (column T response time). To see the transactions for which the end users have to wait the longest, sort by average response time (column Avg. Response~). Within this view, you can see which transaction(s) cause the highest response time. To analyze in which areas the largest part of the runtime was spent, use the tab Parts of response time. To display single executions of a particular transaction, choose Single records (only available for the current day and for a specific APO instance, not for Total), or call transaction STAD.
Collect DB Procedure / COM Routine Statistics
1. Define number of DB procedures to be logged
2. Define APO instance(s) where parameter will be changed
3. Choose Enter
A separate statistical record is automatically created for every transaction step. As of APO 3.0, you can also activate special subrecord writing, for example to collect information about the most expensive COM routines of every individual APO transaction. To do so, assign a positive value to the SAP instance parameter stat/dbprocrec.
The instance parameter stat/dbprocrec specifies the number of expensive COM routines to be displayed for a transaction. The COM routines with the highest accumulated runtime are displayed, not those with the highest individual runtime for one execution. The default value is 0 (subrecord writing for DB procedure calls is deactivated). The parameter must be set before the transaction is executed.
You can change the parameter permanently in the instance profile of the APO instance, or temporarily as follows: In ST03N, choose Expert mode >> Collector and performance DB >> Statistics Records and File >> Online Parameters >> Dialog step statistics.
- Enter the new value for stat/dbprocrec
- Select the APO instances for which the parameter is to be changed
- Choose Enter
The prerequisite is Basis Support Package 24 (4.6C) for APO 3.0A and Basis Support Package 12 (4.6D) for APO 3.1. Because subrecord writing for DB procedures results in a higher workload, SAP recommends setting the parameter to a positive value in production systems only for the duration of the analysis. A reasonable value of stat/dbprocrec for monitoring purposes is 5, which means that information about the 5 most expensive distinct DB procedure calls is collected.
Workload Analysis for Single Executions
To check statistics for single executions of a transaction or a report, either call transaction STAD or in transaction ST03N choose Single records. The screen of single statistics records displays each execution of an SAP business transaction and its processing by different task types (dialog, background, ...). By default, only statistics records for the last two hours are displayed. To change this time window for the current day, choose Edit → New selection. For the detailed contents of a single statistics record, double-click a line.
APO Transactions and Expensive COM Routines
DB procedure                   Log. DB   No. of exec.   Exec. time (ms)   Time/exec. (ms)
SAPAPO_PP_ORDER_GET_DATA       LCA       101            2,580             25.5
SAPAPO_PP_ORDER_CHANGE         LCA       23             757               32.9
SAPAPO_CHANGE_SCHEDULE_LINES   LCA       8              345               43.1
SAPAPO_TLO_CHANGE              LCA       24             205               8.5
SAPAPO_CHANGE_PEGAREAS         LCA       26             203               7.8
If the selected transaction called liveCache and executed COM routines there while the instance parameter stat/dbprocrec was set, the most expensive COM routines are displayed. The value to examine is the aggregated runtime over all calls of a COM routine in the selected business transaction.
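The Time/exec. column is simply the accumulated execution time divided by the number of executions. This can be verified with a short sketch (values taken from the first two rows of the table above):

```python
# Recompute the "Time/exec." column of the COM routine statistics:
# average runtime = accumulated execution time / number of executions.
routines = {
    "SAPAPO_PP_ORDER_GET_DATA": (101, 2580),  # (executions, total ms)
    "SAPAPO_PP_ORDER_CHANGE": (23, 757),
}
for name, (execs, total_ms) in routines.items():
    print(f"{name}: {total_ms / execs:.1f} ms per call")
# SAPAPO_PP_ORDER_GET_DATA: 25.5 ms per call
# SAPAPO_PP_ORDER_CHANGE: 32.9 ms per call
```

A routine with a low time per call but many executions (like the first row) loads the system through call volume, not through individual runtime.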
Transaction Step with Synchronous RFC

[Figure: dialog step with a synchronous RFC to an external system: after wait, roll-in, and load time, the program starts the RFC and rolls out of the work process during the CPI-C communication; the time during which the program is not in a work process is measured as roll wait time (within RFC+CPIC time), and processing continues after the roll-in when the RFC ends]
For dialog steps with synchronous RFC, RFC+CPIC time shown in ST03 is the run time of the RFCs plus the time for setting up the connections. During a synchronous RFC, the ABAP program is rolled out from the work process. The time span during which the program is not in the work process can be observed in roll wait time. In ST03, the roll wait time is the sum in seconds of all such time intervals with wait time, not the average per step. For dialog steps with asynchronous RFC, RFC+CPIC time is the run time for setting up the connections to the external system. No rollout occurs. The program continues the work after starting the asynchronous RFC or as soon as the receiver confirms reception of the data.
Transaction Step with Roundtrip

[Figure: dialog step with a roundtrip to the SAP GUI: after processing and database time, the roundtrip to the GUI starts and the program rolls out (roll wait time); the GUI time comprises the time spent on network transfer and the time spent building up the screen in the GUI]
In SAP Basis Release 4.6, SAP introduced SAP GUI controls, which handle some of the tasks formerly processed by the application server, such as scrolling, navigating in a tree, and sorting. During a transaction step, several communication steps may occur between the application server and the front end. These steps are called roundtrips. In the course of a roundtrip, the application server first transfers the control data to be processed in the front end, and then the R/3 context is rolled out, which releases the work process. The time up to the next roll-in is again measured as roll wait time. The entire time for each roundtrip is measured on the application server as GUI time. If the dialog step does not include RFC communication with an external system (only RFC communication to the GUI), the GUI time is greater than the roll wait time. If the dialog step includes RFC communication with an external system, the GUI time may be less than the roll wait time: roll wait time covers the roll-wait situations from the RFCs, whereas GUI time only covers the roll-wait situations from the SAP GUI communication. For more details, see SAP Notes 8963 and 364625.
Very High Roll Wait Time
High roll wait time
Symptoms: transaction ST03 shows very high roll wait times and a high average GUI time. Solution: the high roll wait time is due to a slow GUI display at the front-end PC. Possible reasons for high GUI time are:
- Network bottleneck (too much time spent on network transfer). To check the network, in transaction OS06 choose Detail analysis menu → LAN Check by Ping → Presentation server, and so on. For a ping with 4 KB of data (to set this, choose Edit → Block size), the response time should be less than 50 ms for a LAN, less than 150 ms for a WAN, and less than 500 ms when using a 56K modem connection.
- CPU or memory bottleneck on the local desktop computer (too much time spent on building up the screens for the GUI).
Note: Synchronous RFC (sRFC) is used for communication between APO application servers and SAP GUIs. The SAP GUI control technology requires much more sRFC time and CPU to build the screen trees.
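The ping guideline above can be captured in a small sketch (illustrative helper with assumed names, not part of any SAP tool):

```python
# Check a 4 KB ping round-trip time against the guideline values quoted
# above: LAN < 50 ms, WAN < 150 ms, 56K modem < 500 ms.
THRESHOLDS_MS = {"LAN": 50, "WAN": 150, "modem": 500}

def ping_ok(connection, rtt_ms):
    """True if the measured round-trip time meets the guideline."""
    return rtt_ms < THRESHOLDS_MS[connection]

print(ping_ok("LAN", 30))   # True  -> network looks fine
print(ping_ok("WAN", 200))  # False -> investigate the WAN link
```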
Roll Wait Time in ST03 – Transaction Profile
[Screenshot annotation: sum of the measured time components is 689 ms; the remaining delta of 688 ms — roll wait time? GUI time?]
Time statistics can be analyzed for individual programs or SAP transactions using the transaction profile in transaction ST03. The programs or transactions causing high response times can be identified here.
Example: Performance Trace
Set GUI progress indicator in status bar (such as message Loading data) RFC trace record
Deactivate if GUI connection is slow
RFC OLE_FLUSH_CALL indicates a roundtrip to GUI to build up the tree control
In transaction ST05, you can activate the following performance traces for particular users: SQL trace Enqueue trace RFC trace Buffer trace Traces are written to files in the instance subdirectory log. SAPGUI_PROGRESS_INDICATOR: As of SAP Basis Release 3.0F, application progress indicators can be switched off by setting the SPA/GPA value SIN to 0 in the user master record. See SAP Note 51373. In an APO system, the parameter SIN is missing. It must be entered manually into the table TPARA as described in SAP Note 358649. OLE_FLUSH_CALL indicates a roundtrip to GUI to build up the screen. The number of roundtrips is determined by the application program. Roundtrips are processed in sequence. If the number is too high, the GUI time increases. - For a normal screen in a standard SAP system, the number of roundtrips should be 2 or less. - For complex screens, the number should be at most 10. Ideal case: - One call of DP_PUT_CLIENT_TABLE per control - One call of OLE_FLUSH_CALL per dialog step
Identifying Hardware Bottlenecks
Hardware bottleneck types:
- High paging rates
- High CPU utilization
- High disk read/write (I/O) times
- High network times

Reasons:
- Incorrect sizing (physical main memory, CPU)
- Poor workload distribution
- Expensive programs (SQL statements, external programs, ...)
- Incorrect disk layout or slow disks
- Poor network topology or slow line connections
In an APO system, the optimizer server usually has a higher CPU demand. With SAP APO 3.0A, the demand on liveCache server increases as well due to storing of DP data in the liveCache. An APO system has higher network traffic because of increased graphical data transfer between front-end PC and the application server.
Operating System Monitor (OS06)
CPU: Load average
CPU utilization: Idle %
Memory: Pages in/s Swap: Actual swap space KB
To start the operating system monitor, choose Tools → Administration → Monitor → Performance → Operating system → Local → Activity (transaction OS06). Transaction OS06 replaces transaction ST06.
Important statistics are: CPU utilization and load average (= average length of the run queue), memory utilization (physical memory free compared to physical memory available) and paging, system swapping information, disk utilization information, and OS configuration parameters (Detail analysis menu → HW Info).
Make sure there is enough memory for the APO application server: RAM + swap space. On a Windows NT platform, pages in/s should be less than 20% of RAM. On a UNIX platform, pages out/s should be less than 20% of RAM.
Memory bottlenecks can be recognized through either of the following:
- Increased paging in or paging out
- Physical memory free < 2 MB while CPU utilization (system) > 30-40%
CPU bottlenecks:
- CPU idle ~ 0% during several snapshots
- Load average > 3 (the average number of threads/processes waiting for the CPU is larger than 3 for the last 1 / 5 / 15 minutes)
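These rules of thumb can be sketched as two small checks (assumed function and parameter names, purely illustrative; the actual figures come from OS06 snapshots):

```python
# Flag a memory bottleneck when free memory is very low while the system
# CPU share is high, and a CPU bottleneck when idle stays near 0% or the
# load average exceeds 3, per the guidelines above.
def memory_bottleneck(free_mb, cpu_system_pct):
    return free_mb < 2 and cpu_system_pct > 30

def cpu_bottleneck(idle_pct, load_average):
    return idle_pct < 5 or load_average > 3

print(memory_bottleneck(free_mb=1, cpu_system_pct=45))  # True
print(cpu_bottleneck(idle_pct=40, load_average=1.2))    # False
```

Such checks only make sense across several snapshots; a single spike is not a bottleneck.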
OS Monitoring of Non-SAP Remote Hosts
Install RFCOSCOL and SAPOSCOL on remote host In case of a Windows NT platform, also install a standalone gateway
Create RFC destination of type TCP/IP in APO system (transaction SM59) Create SAPOSCOL destination in APO system (AL15) Information about operating system workload is available in APO system through transaction OS07
To monitor the operating system workload on servers where no R/3 instance is installed (that is, where no work process is running), such as a standalone optimizer server or the liveCache server, you must install the programs RFCOSCOL and SAPOSCOL on the remote host as described in SAP Notes 20624 and 202934. If the remote operating system is Windows NT, a standalone gateway has to be installed as well. In the APO system (or another SAP R/3 System) from which the monitoring will be done, a corresponding RFC destination of type TCP/IP and a SAPOSCOL destination must be created. When SAPOSCOL is started, it creates a shared memory segment and periodically writes performance data into it. An R/3 System (including an APO system) can then call RFCOSCOL using an RFC and transfer information about the operating system from the shared memory of the remote host to the SAP work process over the network. To perform such an operating system workload analysis for a server such as the liveCache server, choose Tools → Administration → Monitor → Performance → Operating system → Remote → Activity (transaction OS07) and select the corresponding SAPOSCOL destination from the list.
Identifying Heavy Load in Work Processes
From the operating system monitor (transaction OS06), choose Detail analysis menu → Top CPU. This gives you an overview of the top CPU users with the process names (column Command), their PIDs, current CPU utilization, and other characteristics. You can easily identify a process with high CPU utilization and its PID. In the R/3 process overview (transaction SM50), use the PID to identify the ABAP program causing the heavy load.
Check for I/O Bottleneck on Database Server
In the operating system monitor, if you see high disk utilization or response time, select Detail analysis menu >> Disk to get the corresponding disk I/O information for all disks in the server. In this way, you can check whether the I/O is distributed equally across all disks. Choose the server for monitoring carefully: usually there is no need to monitor disks on application servers. In an APO system, make sure the disk information comes from the APO database server or from the liveCache server.
Database Performance Monitor (ST04) - Oracle
To start the database performance monitor, choose Tools → Administration → Monitor → Performance → Database → Activity (transaction ST04). Important statistics are the buffer quality (hit ratio) and the calls statistics. The size of the data buffer is defined by the parameter DB_BLOCK_BUFFERS (number of 8 KB blocks) in init&lt;SID&gt;.ora. The buffer quality should be at least 95% when the cost-based optimizer is used in APO.
The shared pool consists of the shared SQL area (shared cursor cache) and the row cache (DD cache). Its overall size is configured by SHARED_POOL_SIZE in init&lt;SID&gt;.ora. The shared SQL area caches parsed SQL statements, whereas the row cache stores administrative information about accessed Oracle objects. Their hit ratios should be as follows:
- DD cache quality > 80%
- SQL area getratio > 95%
The quality of the row cache is also indicated by the ratio of user calls to recursive calls, which should be greater than 2.0. A user call is any call sent to the database (especially by a work process). A recursive call retrieves administrative information from the hard disk if it is not available in the row cache.
From the Detail analysis menu:
- For further file system statistics, select File system requests
- To analyze the shared SQL area, select SQL request
- For an overview of lock waits, select Exclusive lockwaits
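The quality figures above are ratios over raw counters. A minimal sketch (hypothetical counter values; ST04 derives the real figures from Oracle's dynamic performance views):

```python
def hit_ratio(logical_reads, physical_reads):
    """Data buffer quality: share of reads served from the buffer cache, in %."""
    return 100.0 * (logical_reads - physical_reads) / logical_reads

def user_recursive_ratio(user_calls, recursive_calls):
    """Should be greater than 2.0 for a well-sized row cache."""
    return user_calls / recursive_calls

print(round(hit_ratio(1_000_000, 30_000), 1))  # 97.0 -> above the 95% target
print(user_recursive_ratio(500_000, 200_000))  # 2.5  -> above the 2.0 target
```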
Common Oracle Problems
Redo log too small Rollback segments: Snapshot too old Expensive SQL statements Table buffering Fragmented indexes and tables Missing / old / inaccurate Oracle optimizer statistics Oracle parameters (BW settings not used)
Redo log too small: Since APO updates the planning results back into the InfoCube, a lot of redo information is generated by the INSERTs. You may see redo log contention during parallel runs. For solutions in the case of frequent occurrence of the error message Checkpoint not completed, see SAP Note 79341.
Snapshot too old: Often caused by long-running queries. Before you add more rollback segments or make them larger, check whether one or more expensive statements are causing the problem.
Expensive SQL statements: Perform an SQL cache analysis with ST04 (choose Detail analysis menu → SQL analysis). Look for statements that produce more than 5% of the total buffer gets (reads).
Fragmented objects: Fragmented index segments can cause unnecessary reads in the database. Check for fragmentation only if you have found expensive statements. Before statistics about the space allocated by a table or index are available in DB02, you must perform an analysis. Note, however, that this creates optimizer statistics, which affects the behavior of the cost-based optimizer. If a space analysis is available for a table or index, call DB02 and in Tables and indexes select Detailed analysis. For an index, the storage quality should be better than about 30%.
Missing / old / inaccurate Oracle optimizer statistics: Refresh the statistics at least once a week.
SQL Trace in APO System
SQL statements can be analyzed using the shared SQL area, but nothing similar is available for COM routines. COM routines do not use any index, because pointers reference the objects, so you cannot tune them with the corresponding tools. However, you can record transactions with an SQL trace in an APO system in the same way as in a standard SAP system. An SQL trace created in an APO system contains more than SQL statements: calls to COM routines are also recorded. You can recognize a COM routine call in the trace: choose Extended list and observe the column Conn, which contains the name of the connection to liveCache, either LCA or LDA. To view complete program or transaction names, select Long names.
The Explain function does not work for database procedures. The runtime of database procedures varies; there is no common rule for runtime duration. If liveCache is not on the same server as the APO instance, the runtime includes network time (as for SQL statements). Processing of a COM routine consists of several steps; the total processing time is the sum of the durations of all the steps.
Incorrect Oracle Parameter Settings
Symptom: expensive SQL InfoCube SELECT statement
Solution: check the Oracle parameters (init.ora):

always_anti_join          = hash
hash_area_size            = sort_area_size * 2
hash_join_enabled         = true
hash_multiblock_io_count  = 32
Because SAP APO uses SAP BW InfoCubes for storage, the database parameter settings are crucial; a BW setup is very different from an OLTP setup. The example above shows Oracle settings. A very important feature for data warehousing is the hash join: it performs well in a data warehouse with InfoCubes and repeated queries, but it is not suitable for OLTP. Note: For detailed lists of parameter values for all relevant Oracle releases, see SAP Note 180605.
Missing InfoCube Statistics
Symptoms Long running transactions (MC94, Mass Processing, Copy Key Figures, Version Copy ...) Very high DB request time Expensive SQL InfoCube SELECT statement Solution Schedule program SAP_ANALYZE_ALL_INFOCUBES
Query performance depends largely on the database optimizer, which is responsible for choosing the most cost-effective plan for your query. The Oracle cost-based optimizer works best if it can use statistics with histograms. Unfortunately, such statistics cannot be created using SAPDBA. Report SAP_ANALYZE_ALL_INFOCUBES analyzes all tables in the SAP APO system that are related to APO (InfoCubes, master data, and aggregates) and creates statistics for the database optimizer. It has one input parameter: the size of the sample to be drawn from the database tables, defined as a percentage. For Oracle and Informix APO databases, all other tables that are not directly related to APO need to be analyzed using the SAPDBA tool. You should schedule a job with report SAP_ANALYZE_ALL_INFOCUBES to run weekly with a sample size of 10%. You should also schedule a weekly analysis of all non-APO tables with SAPDBA. When you schedule, make sure that the SAPDBA analysis finishes before the report SAP_ANALYZE_ALL_INFOCUBES starts. Note: For more details, see SAP Note 129252. If you need to load into the InfoCubes (file import, initial data load), delete the indexes before loading and create them again after the load. Transaction RSA1 can be used for that purpose. See SAP Note 126459.
R/3 Memory Areas (ST02)
Transaction code is ST02.
Guidelines for memory usage:
- R/3 roll area: Max. Use should not exceed In Memory; the roll file should not be used.
- R/3 extended memory: Max. Use should stay below 80% of In Memory, so that sufficient extended memory remains free. Allocate 6-10 MB of extended memory for each user, and allocate 70-120% of RAM as extended memory.
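The two sizing rules of thumb can be combined in a small sketch (illustrative helper with assumed names, not an SAP sizing tool):

```python
# Return the extended-memory guideline ranges in MB: 6-10 MB per user,
# and 70-120% of physical RAM, per the rules of thumb above.
def extended_memory_guideline_mb(users, ram_mb):
    per_user = (users * 6, users * 10)
    ram_based = (int(ram_mb * 0.7), int(ram_mb * 1.2))
    return per_user, ram_based

per_user, ram_based = extended_memory_guideline_mb(users=500, ram_mb=8192)
print(per_user)   # (3000, 5000)
print(ram_based)  # (5734, 9830)
```

When the two ranges disagree, as here, the sizing needs a closer look: the user count suggests less extended memory than the RAM rule would allocate.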
Memory Access Sequence (Dialog)
Heap Memory Allocation for Dialog Work Process Exclusive Use of Heap Memory PRIV Mode and Its Effects Exclusive Use of WP
Once the entire roll area and extended memory have been exhausted, the system is forced to allocate R/3 heap memory (local memory) for the dialog work process. Heap memory allocated by one work process is not accessible to any other work process. If a user has been forced to allocate heap memory in one work process, he cannot access that memory from any other work process and is therefore unable to continue his transaction elsewhere: the user is effectively locked to that work process. This situation is called PRIV mode. A dialog work process that was forced to allocate R/3 heap memory automatically enters PRIV mode. While the transaction is locked in PRIV mode, no other user can access this work process. Since the R/3 architecture uses a limited number of work processes to serve a larger number of users, all other users suffer when a user goes into PRIV mode. If too many users go into PRIV mode, certain users can work very well (those in PRIV mode) while others can barely work at all (those competing for the few remaining work processes).
COM Routines Monitoring: liveCache Monitor In transaction LC10, enter "LCA" for Logical connection and choose liveCache monitor
To see how often a COM routine was executed and other runtime information, in the initial screen of transaction LC10 choose liveCache monitor. The average run time (column Runtime (average) in the graphic) does not include network time. Exception: if any FETCH statements are included in the database procedure, the network time for those statements is included in the calculation of the average. Outside transaction LC10, you can switch on and analyze traces for COM routines. To switch on a trace and set the trace level, choose Tools → APO Administration → liveCache / COM Routines → Tools → Change Trace Level (transaction /SAPAPO/OM02). To display a trace, choose Tools → APO Administration → liveCache / COM Routines → Tools → Display Trace File (transaction /SAPAPO/OM01).
Performance Tips: Large Background Jobs
Large background jobs:
- Initial loading of data
- Calculation of proportional factors
- Copying planning versions or key figures
- All mass processing jobs

Advice on execution:
- Run during low system load
- Do not run concurrently with other jobs
- Run as background jobs
- Avoid debugging in production systems
Performance Tips: Release from DP to SNP
Implement parallel processing for releasing from DP to SNP:
- Schedule several batch jobs for different products
- Schedule the jobs to run during the same time period
- Make sure the variants have roughly equal numbers of products and demands
- Make sure an individual product does not belong to more than one variant
Define mass processing jobs for your release from DP to SNP and run them during hours of minimal usage of DP interactive planning.
The release from DP to SNP can lock data both in the InfoCube and in the liveCache. To avoid performance and functional problems, do not run other job types concurrently. Avoid scheduling batch jobs for the SNP release together with any of the following: any data import job; SNP heuristic runs; the daily liveCache reorganization; any DP transaction or mass processing job that uses the same data as the release.
Performance Tips: InfoCube Improvement
- Use several small InfoCubes instead of one large InfoCube
- Fix any missing secondary indexes on the InfoCube
- Define the time dimension of the InfoCube as coarsely as possible; the finest granularity should be weeks or longer
- Drop bitmap indexes in Oracle before a file import, then recreate them
- Analyze InfoCube statistics by running report /SAPAPO/RMDP_ICUBE_PERFORM
- Minimize the number of dimensions and characteristic combinations, especially those with many-to-many relationships
- Use fewer key figures
Use /SAPAPO/PARA to find the active InfoCube. Use transaction DB02 to check whether an index is missing in the database. Set the flags in /SAPAPO/RSA1 to drop the indexes before a file import and recreate them afterwards. Note: most customers cannot change the last two points at short notice, so it is important to be aware of them early in the implementation.
Operating System Recommendations for liveCache
IF (main memory = data cache + heap) << 4 GB
THEN a 32-bit MS Windows server is OK
IF (main memory = data cache + heap) >> 4 GB
THEN use a 64-bit operating system (currently UNIX; in the near future, 64-bit Windows .NET Server)
IF 3 GB < (main memory = data cache + heap) < 8 GB
THEN it depends:
- Could you optimize your sizing requirements?
- Is this UNIX?
- If it is Windows NT, you can temporarily use Windows 2000 AWE
- In the long term, a 64-bit operating system is recommended
Accurate heap sizing depends on the APO application. Rule of thumb: heap <= data cache.
Windows 2000 32-bit Memory Limits
Windows 2000 memory limits due to the 32-bit platform:
- Each operating system process (for example, an SAP work process or the liveCache process) can address a maximum of 4 GB of virtual memory. By default, the application can only use 2 GB; the other 2 GB are used by the operating system in kernel mode.
- The boot.ini setting /3GB expands the maximum virtual memory accessible by an application on Windows 2000 Advanced Server or Windows 2000 Datacenter Server to 3 GB.
- By default, the operating system can access a maximum of 4 GB of physical memory.
- The boot.ini setting /PAE expands the maximum physical memory accessible by the operating system to 8 GB on Windows 2000 Advanced Server and 32 GB on Windows 2000 Datacenter Server.
Itanium Chip: Together with a 64-bit application, much more than 4 GB of virtual and physical memory (up to several TB) can be addressed
A process on a 32-bit Windows operating system can only address 4 GB of virtual memory. Of these 4 GB, only the lower 2 GB are accessible in user mode for the application. The upper 2 GB of the address space are reserved for the operating system and can only be accessed in kernel mode. To increase the part of a process address space that can be used by the application, use the /3GB switch in the file boot.ini. This parameter changes the division of the address space to 3 GB for the application and 1 GB for the operating system. On Windows 2000, the /3GB switch may be used in a production environment only with Windows 2000 Advanced Server and Windows 2000 Datacenter Server; on Windows 2000 Professional or Windows 2000 Server, the user-mode memory space remains limited to 2 GB (see Microsoft article Q291988). By default, a 32-bit Windows operating system can only access 4 GB of physical memory. To allow the processor to access more physical memory under Windows 2000 Advanced Server or Windows 2000 Datacenter Server, use the /PAE switch (Physical Address Extension) in the boot.ini file. Windows 2000 Advanced Server can then use up to 8 GB of RAM, Windows 2000 Datacenter Server up to 32 GB. However, if the /PAE switch is used together with the /3GB switch, the operating system can use at most 16 GB of physical memory.
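The limits described above can be double-checked with plain arithmetic; this small sketch simply encodes the figures from the slide and notes.

```python
# The 32-bit Windows 2000 address-space and physical-memory limits
# from the slide, expressed as plain arithmetic.
GB = 2**30

def user_address_space(three_gb_switch):
    """Virtual address space available to a 32-bit application."""
    total = 4 * GB                       # 32-bit virtual address space
    kernel = (1 if three_gb_switch else 2) * GB
    return total - kernel

def max_physical_memory(edition, pae, three_gb_switch):
    """Physical memory a 32-bit Windows 2000 server can address."""
    if not pae:
        return 4 * GB
    limit = {"advanced": 8, "datacenter": 32}[edition] * GB
    # /PAE combined with /3GB caps the OS at 16 GB of physical memory.
    return min(limit, 16 * GB) if three_gb_switch else limit

print(user_address_space(True) // GB)                       # 3
print(max_physical_memory("datacenter", True, True) // GB)  # 16
```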
Address Windowing Extensions on Windows 2000
- On Windows 2000 Advanced Server or Windows 2000 Datacenter Server, the address space accessible by certain applications, including Microsoft SQL Server and SAP liveCache, can be further extended by the 36-bit AWE solution.
- AWE enables the application to use extended memory above 4 GB for swapping data out of the 4 GB address space.
- In the special case of liveCache, data stored in the data cache can be swapped to extended memory.
- The combined size of the liveCache data cache and heap is still limited to 3 GB.
- The activation of AWE for liveCache is described in SAP Note 384680 for liveCache >= 7.2.4 Build 5 and < 7.4, and in SAP Note 560528 for liveCache >= 7.4.2 Build 4.
Address Windowing Extensions (AWE) is a set of application programming interfaces in the memory manager that enables programs to address more memory than the 4 GB available through standard 32-bit addressing. AWE enables programs to reserve physical memory as non-paged memory and then dynamically map portions of that non-paged memory into the program's working set. Together, the liveCache data cache and heap are limited to 3 GB on a 32-bit Windows operating system (if the /3GB switch in boot.ini is used). With AWE enabled, liveCache can swap data from the data cache into extended memory instead of swapping to devspaces. The /PAE switch in boot.ini must be set to be able to use AWE. For details about activating AWE for liveCache on Windows 2000, see SAP Notes 384680 and 560528.
32-Bit Windows 2000 / liveCache with 36-Bit AWE
[Figure: 32-bit liveCache main memory (data cache + heap, <= 3 GB) next to Windows 2000 AWE extended memory (<= 32 GB). Data cache pages can be copied (memcopy) to AWE extended memory; heap memory cannot. If there is not enough virtual memory, pages are swapped to disk.]
Other Windows 2000 Issues
Number of CPUs supported on Windows 2000:
- Windows 2000 Professional: 2 CPUs
- Windows 2000 Server: 4 CPUs
- Windows 2000 Advanced Server: 8 CPUs
- Windows 2000 Datacenter: 32 CPUs
Regarding the option "/3GB" for an SAP R/3 System running on a Windows 2000 server, see Microsoft article Q304887
Microsoft article with the PSS ID number Q304887: The information in this article applies to: - Microsoft Windows versions 2000, 2000 SP1, 2000 SP2 Professional - Microsoft Windows versions 2000, 2000 SP1, 2000 SP2 Server - Microsoft Windows versions 2000, 2000 SP1, 2000 SP2 Advanced Server SYMPTOMS When you run SAP R/3 on a Windows 2000-based server that has been started with the /3GB switch in effect, some of the SAP processes may peg the CPU and not terminate. Because of the nature of SAP, the occurrence of this behavior on one server may also negatively affect remote servers, making it necessary to restart those servers also. CAUSE This behavior occurs if there is not enough virtual address space reserved to manage all of the potential non-direct working-set entries. Starting the server with the /3GB switch is more likely to expose this behavior because starting the server with the /3GB switch limits the memory that is available for the hash table of the process's working set that the operating system maintains. RESOLUTION A support fix is now available from Microsoft, but it is only intended to correct the problem described in this article and should be applied only to systems experiencing this specific problem. This fix may receive additional testing at a later time, to further ensure product quality. Therefore, if you are not severely affected by this problem, Microsoft recommends that you wait for the next Windows 2000 service pack that contains this fix. To resolve this problem immediately, contact Microsoft Product Support Services to obtain the fix.
Guidelines for a Successful GoingLive
The following tests must be successful:
- Basis, IT, and system tests
- Administration tests
- Configuration tests (parameters, settings, tuning)
- Application tests
- Integration tests
- Backup and recovery tests
- High availability tests
- Consistency check tests
- Maintenance and change management tests
- Security tests
- Globalization tests
- Volume and stress tests with the expected production volume (Best Practices documentation is available)
- GoingLive check
SAP GoingLive Check: Steps
- Planning (only for complex solutions)
- Solution Management (only for complex solutions)
- Analysis: OS parameters, user distribution, work process distribution; sizing plausibility check. Contact SAP Support two months before going live.
- Optimization: core business processes and key transactions with high resource consumption. To be performed four weeks before the start of production.
- Verification: re-examines the system components and validates all the recommended changes from the two previous sessions. To be performed when the system is in production operation.
Refer to the SAP Support Web page at http://service.SAP.com/GoingLiveCheck.
Useful Transactions
SM50: Process Overview
ST02: SAP Memory Configuration Monitor
ST03: Workload Monitor
ST04: Database Performance Monitor
DB02: Database Analysis
ST05: Performance Trace
OS06: Operating System Monitor
ST10: Table Call Statistics
STAD: Single Record Statistics
LC10: liveCache Administration / Monitoring
Further Documentation
For additional information about mySAP APO performance, go to:
http://service.sap.com/scm >> mySAP SCM Technology >> Performance & Configuration
http://service.sap.com/ATG
APO Performance: Summary
You are now able to: Use SAP performance monitoring tools for the components of an APO system Detect the most critical performance bottlenecks Plan your performance monitoring strategy
APO Performance Exercises Unit: APO Sizing & Performance
At the conclusion of this exercise, you will be able to: • Monitor and find out information about your APO system
Use various monitors to gather information about the system performance of your APO system.
1-1
Using the workload monitor (transaction ST03), find out the CPU time, response time, and DB time for all programs run today.
1-2
From the liveCache monitor, find out information about the database procedures called. 1-2-1 Which database procedure was called the most often ? 1-2-2 Which database procedures had the highest average run time ? 1-2-3 How often was the database procedure FORCE_CHECKPOINT executed?
1-3
Find out the values of all R/3 parameters that influence RFC connections, for example the maximum number of work processes allowed for RFC and the percentage of work processes that one user can use for RFC calls.
1-4
Optional. (one group at a time) Use SQL Trace to trace ABAP program /SAPAPO/OM_CHECKPOINT_WRITE.
APO Performance Solutions Unit: APO Sizing & Performance
1-1
Using the workload monitor (transaction ST03), find out the CPU time, response time, and DB time for all programs run today. In the APO system, use transaction ST03. Select Today's workload and go into the Transaction Profile. Sort the statistics in descending order to see the programs that caused the most CPU time, response time, and DB time.
1-2
From the liveCache monitor, find out information about the database procedures called. 1-2-1 Which database procedure was called the most often ? In transaction LC10, enter LCA for Logical Connection and choose liveCache monitor. Click on the column header Calls and select Sort in descending order. 1-2-2 Which database procedures had the highest average run time ? 1-2-3 How often was the database procedure FORCE_CHECKPOINT executed ?
1-3
Find out the values of all R/3 parameters that influence RFC connections, for example the maximum number of work processes allowed for RFC and the percentage of work processes that one user can use for RFC calls. Run transaction SE38 and execute report RSPFPAR. Two instance parameters related to RFC connections are rdisp/rfc_min_wait_dia_wp (the number of dialog work processes that must remain free, limiting how many can be occupied by RFC) and rdisp/rfc_max_own_used_wp (the percentage of work processes that a single user may use for RFC calls).
1-4
Optional (one group at a time). Use the SQL trace to trace ABAP program /SAPAPO/OM_CHECKPOINT_WRITE. Call transaction ST05 and switch on the SQL trace for your user. Start program /SAPAPO/OM_CHECKPOINT_WRITE in SE38 in a second session and wait until it finishes. Switch off the SQL trace in ST05 and choose List Trace. Choose Extended list and check the column Conn: lines showing LCA come from the liveCache.
Data Consistency
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
Data Consistency
Contents How to check and reconcile consistency in SAP APO landscapes Internal consistency External consistency
Objectives At the end of this unit, you will be able to: Explain how internal and external consistency of SAP APO systems is checked and restored
SAP APO Internal and External Data Consistency
An SAP APO system has two definitions of consistency:
- External: between the APO system and the connected R/3 System (SAP R/3 DB versus SAP APO DB)
- Internal: between the APO DB and the liveCache
APO internal consistency refers to consistency between data in the APO DB and data in the liveCache. Data in an APO system is stored either in the APO DB, in the liveCache, or redundantly in both; it must have a consistent status in both stores. Redundant data stored in both the APO DB and the liveCache must always be consistent. Example: a resource stored in the APO DB must also exist in the liveCache, and vice versa. Non-redundant data stored in the APO DB and in the liveCache must be logically consistent. Example: when a production order is stored in the liveCache, the corresponding texts for the order must also exist in the APO DB. External (OLTP) consistency refers to consistency between data in the OLTP DB and data in the APO DB. One or more OLTP systems can be connected to an APO system, and the data stored in the APO system must be consistent with the data in the connected OLTP systems. This applies not only to data transferred from an OLTP system to the APO system but also to planning results created in the APO system, since these can be released back to the connected OLTP systems.
SAP APO Internal Consistency After Crashes There is no two-phase commit in APO systems APO transactions are committed in liveCache first
[Figure: two commit sequences. The application sends COMMIT 1 to the liveCache (recorded in the liveCache log devspaces) and COMMIT 2 to the APO RDBMS (recorded in the APO DB log). Case 1: the liveCache transaction fails, the APO DB transaction is rolled back, no inconsistency. Case 2: a crash between COMMIT 1 and COMMIT 2 leaves a potential inconsistency.]
An APO application that changes data generates two separate transactions: one in the APO database and one in the liveCache. The transaction on the APO database is only committed if the liveCache transaction committed successfully. There is no two-phase commit between APO DB and liveCache. If one or more work processes crash then data inconsistencies can occur between APO DB and liveCache. However, this happens very rarely in reality. Case 1. If liveCache crashes, all changes in the APO transaction are rolled back. This ensures the internal data integrity of the APO/liveCache system. Case 2. In a very unlikely case, if the APO system, APO work processes, or the APO database crashes after the liveCache transaction was committed but before the transaction on the APO database was committed, the changes exist then in the liveCache only but not in the APO database. This will lead to an inconsistency.
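The commit ordering described above, and why case 2 can leave an inconsistency, can be sketched with hypothetical interfaces. This is an illustration only, not APO code.

```python
# Sketch of the commit ordering described above: the liveCache transaction
# is committed first, and the APO DB commit follows only on success.
# A crash between the two commits (case 2) leaves the change in liveCache
# but not in the APO DB -- there is no two-phase commit.
class CrashBetweenCommits(Exception):
    pass

def apo_commit(livecache, apo_db, change, crash_after_lc=False):
    livecache.append(change)            # COMMIT 1: liveCache
    if crash_after_lc:
        raise CrashBetweenCommits(change)
    apo_db.append(change)               # COMMIT 2: APO DB

lc, db = [], []
apo_commit(lc, db, "order-1")           # normal case: both stores committed
try:
    apo_commit(lc, db, "order-2", crash_after_lc=True)
except CrashBetweenCommits:
    pass                                # work process "crashed" here
print(set(lc) - set(db))                # {'order-2'}: the resulting inconsistency
```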
How Consistency of Systems is Maintained
1. The liveCache commits the transaction and notifies the APO application server that it can continue.
2. The APO application server instructs the APO DB to commit.
3. The APO DB commits and confirms to the application server.
4. The APO application server confirms to the R/3 System, via qRFC, that the queues were processed successfully.
5. The R/3 System acknowledges the qRFC confirmation.
If an application program triggers a commit (or rollback) on the application server, this commit will be transmitted to the liveCache first and then (almost simultaneously – after the confirmation from liveCache) to the APO DB, and possibly also back to the OLTP system.
Why Check Consistency? Reasons for internal inconsistencies: Incomplete recovery of APO DB or liveCache (also point-in-time recovery) APO instance crash (very rarely leads to inconsistencies) Program errors or handling errors
Reasons for external inconsistencies: Incomplete recovery (also point-in-time recovery) of one system of the system group Program errors, handling errors, deleting CIF queues Repair of internal inconsistencies between APO DB and liveCache Initialization of liveCache
It is not necessary to schedule regular consistency checks. Checks are needed only after special events. Due to the architecture of the APO system and of qRFC, a complete recovery of either the liveCache, the APO database or the database of a linked R/3 System normally does not cause internal or external inconsistencies.
Internal Consistency Check
Transaction /SAPAPO/OM17 enables you to check and re-establish the internal consistency of an APO system. This procedure should also be incorporated into the disaster recovery strategy. Re-establishing internal consistency must be a joint task between application consultants and technical consultants/administrators, because it may result in deleting some data; application know-how is required to handle missing data.
Consistency checks should be run during times of minimal system activity, otherwise there may be inconsistencies caused by the currently active system processing
To correct internal inconsistency, that is, to synchronize APO DB and liveCache, choose Tools → APO Administration → liveCache / COM Routines → Tools → liveCache Consistency Check (transaction /SAPAPO/OM17).
Restoration of Internal Consistency (1)
New frontend portal as of APO 3.1 SP 4
/SAPAPO/OM17
During the consistency check and comparison, no other activities that affect the liveCache should be carried out. Before the consistency check, make sure that: user activities have been stopped; no new user activities can be started (by locking the users); no released background jobs can start during the consistency check; and no data can be transferred from a linked SAP R/3 System. To stop inbound queues in the APO system, choose Stop CIF Queue. If you are not using inbound queues, you must stop the outbound queues directly in each linked R/3 System. With the exception of stopping and starting CIF queues, all the other new functions are available in /SAPAPO/OM17 as of APO 3.0A with SP 19 installed. The new portal delivered with APO 3.1 Support Package 4 is not planned to be released backward to APO 3.0A. If you perform the check in a system where activities cannot be completely stopped and the result shows inconsistencies, run the check once more to verify that the inconsistencies really exist. Differences between data in the APO DB and the liveCache can be caused by temporary discrepancies due to concurrently running transactions that have already been committed in the liveCache but not yet in the APO DB.
Restoration of Internal Consistency (2)
/SAPAPO/OM17
When you lock users, this does not affect those users already logged in to the system, it only prevents new connections.
Restoration of Internal Consistency (3)
/SAPAPO/OM17
The Overview of Active Users/Tasks/Jobs shows which users are currently logged on, which processes are currently active, and which background jobs are scheduled to run or due to start in the next two hours. Before you can perform the consistency check, users still working in the system must stop their activities; you can inform them using a system message (SM02) or mail. Also check the system for tasks that are still active; these must be ended before the consistency check. Active background jobs must likewise be prevented from running during the check: select the background jobs and remove their scheduling with the corresponding function. Using the function Release Jobs Again from the main menu, you can once again release the background jobs you stopped before the consistency check.
Restoration of Internal Consistency (4)
/SAPAPO/OM17
To perform the consistency check, select all the checkboxes and choose a planning version. If you do not specify a planning version, the check is carried out for all planning versions. Choose Execute, or Program >> Execute in Background. The check compares transactional and master data in the APO DB and the liveCache.
Restoration of Internal Consistency (5)
/SAPAPO/OM17
Possible actions after the consistency check has finished: Transactional data that exists only in the APO DB or only in the liveCache has to be deleted. If transactional data was modified and the modifications are available only in the APO DB or only in the liveCache, the objects should be returned to their previous status (the modifications are then lost). If master data is available in the APO DB but missing in the liveCache, it can be copied from the APO database back into the liveCache; this also includes modified data. Master data available in the liveCache but missing in the APO DB has to be deleted, because the information in the liveCache is not complete.
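The repair rules listed above can be summarized as a small decision function. This is a sketch of the rules only; the actual /SAPAPO/OM17 logic is more involved.

```python
# The /SAPAPO/OM17 repair rules from the notes as a small decision function
# (a sketch for illustration, not the actual transaction logic).
def repair_action(kind, in_apo_db, in_livecache):
    """kind: 'transactional' or 'master'. Returns the suggested repair."""
    if in_apo_db and in_livecache:
        return "consistent - no action"
    if kind == "transactional":
        return "delete the one-sided copy"      # exists on one side only
    if in_apo_db:                               # master data missing in liveCache
        return "copy from APO DB to liveCache"
    return "delete from liveCache"              # master data missing in APO DB

print(repair_action("master", in_apo_db=True, in_livecache=False))
```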
Internal Consistency: Demand Planning Depending on the APO support package level, not all data can be checked using transaction /SAPAPO/OM17. Special reports or transactions must be used for special purposes Report /SAPAPO/TS_LCM_CONS_CHECK_ALL Checks all existing time series networks of Demand Planning Shows inconsistencies for each planning area and version No option to repair inconsistencies
Report /SAPAPO/TS_LCM_CONS_CHECK Check only one selected planning area Set a flag to repair inconsistencies automatically
Internal Consistency: Time Series in SNP
If time series are used in the SNP planning area, report /SAPAPO/TS_LCM_CONS_CHECK can also be used to: Check time series network of Supply Network Planning Eliminate inconsistencies if necessary Check and correct SNP master data
Report /SAPAPO/TS_LCM_CONS_CHECK_ALL is not available for SNP planning areas
Internal Consistency: Other Issues /SAPAPO/OM17 does not check resources and allocations Use transaction /SAPAPO/REST02 to compare resources and generate missing resources in liveCache
Report /SAPAPO/VS_CONS_CHECK or transaction /SAPAPO/VSCC can be used to check consistency of: APO master data relevant for TP/VS TP/VS anchor tables (orders and transport requests)
Any inconsistencies found can be repaired: the relevant orders can be descheduled or published/unpublished in the liveCache, and inconsistent entries can be deleted from the anchor tables. For more details on consistency checks, see SAP Note 425825 (for APO 3.0A) or go to http://service.sap.com/scm >> mySAP SCM Technology >> Consistency Checks >> SAP APO 3.1 OM17 (Internal Consistency) Documentation
Resources and allocations cannot be checked in /SAPAPO/OM17; use transaction /SAPAPO/REST02 to compare resources and, if necessary, generate missing resources in the liveCache. Report /SAPAPO/VS_CONS_CHECK (transaction /SAPAPO/VSCC) can be used to check the following data by making a selection in the initial screen: APO master data relevant for TP/VS (such as vehicle resources, transportation lanes, locations); Customizing and optimization profiles; R/3 OLTP and APO TP/VS integration and carrier selection settings; consistency between the liveCache and the TP/VS anchor tables (orders and transport requests). Inconsistent entries can be deleted from the anchor tables. SAP Note 425825 describes internal and external consistency checks in detail for SAP APO 3.0A. For the corresponding documentation for SAP APO 3.1, go to http://service.sap.com/scm >> mySAP SCM Technology >> Consistency Checks >> SAP APO 3.1 OM17 (Internal Consistency) Documentation.
External Consistency Check: CIF Delta Report
External consistency (between the SAP R/3 and SAP APO systems) for transactional data is checked and restored in APO 3.1 using the compare/adjust report /SAPAPO/CIF_DELTAREPORT3. Master data and Customizing data cannot be compared.
To run the report, call transaction /SAPAPO/CCR (CIF Compare and Reconcile)
Again, this must be a joint task between application consultants and technical consultants or administrators
How the CIF Delta Report Works
1. Collect data from R/3; collect data from APO.
2. Compare the collected data: check existence, check attributes.
3. Display the results.
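The collect-and-compare flow can be sketched as an existence check followed by an attribute check over two object sets. The order numbers and quantities below are made up for illustration; this is not real CIF code.

```python
# Sketch of the compare step: an existence check, then an attribute check on
# objects present in both systems (made-up order data, not real CIF code).
def delta_compare(r3_orders, apo_orders, attributes=("quantity",)):
    only_r3 = sorted(set(r3_orders) - set(apo_orders))
    only_apo = sorted(set(apo_orders) - set(r3_orders))
    mismatched = sorted(
        oid for oid in set(r3_orders) & set(apo_orders)
        if any(r3_orders[oid][a] != apo_orders[oid][a] for a in attributes)
    )
    return {"only_r3": only_r3, "only_apo": only_apo, "mismatched": mismatched}

r3 = {"4711": {"quantity": 100}, "4712": {"quantity": 50}}
apo = {"4712": {"quantity": 60}, "4713": {"quantity": 10}}
print(delta_compare(r3, apo))
# {'only_r3': ['4711'], 'only_apo': ['4713'], 'mismatched': ['4712']}
```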
CIF Delta Report Version 3: New Functions
Versions 1 and 2 only compare the existence of objects in SAP R/3 and SAP APO. New features in version 3:
- Compare additional objects (such as shipments or confirmations in purchasing)
- Compare the contents of objects (such as quantities of production orders)
- Display transaction data (orders) in SAP APO that is not contained in active integration models
- Check purchase requisitions and planned orders that are flagged for conversion in APO but not yet converted
Not all objects can be compared yet, and the comparison covers only the most important data of each object. For example, production order operation dates are not compared between SAP R/3 and SAP APO. Issue: comparison detail versus performance.
Major performance improvements: Collection of SAP R/3 and SAP APO data executed in parallel New modules collect and compare relevant SAP APO and SAP R/3 data Display of messages about differences has been improved
Run External Consistency Check
Tips:
- Run several CIF delta report comparisons in parallel to improve performance. To do so, use parameters such as the integration model, and/or restrict the object types. The job can then use several work processes and data communication channels simultaneously.
- Run the report at times of minimal activity in both the SAP R/3 and SAP APO systems to reduce the risk of transient inconsistencies.
You can choose the target system (logical system name) and the objects to be checked. The check can be applied to the following objects: stock, sales orders, planned orders, production orders, purchase orders, purchase requisitions, and manual reservations. If you select the field Use Table VBBE for Sales Order Comparison, the delta report uses the current entries in table VBBE (Vertriebsbedarfseinzelsätze = Sales Requirements: Individual Records) to find the demands, that is, the contents of the table are not rebuilt. In this case it is important to run report SDRQCR21 in the R/3 System shortly before you start the delta report; otherwise the comparison could deliver false results. If the field Use Table VBBE for Sales Order Comparison is not selected, table VBBE is refreshed automatically before the comparison starts, which corresponds to running report SDRQCR21 on the R/3 side. If more than one R/3 system is connected, you need to execute the report for each of these systems.
Display of Results
Display of results (transaction /SAPAPO/CCR): an overview, plus single tabs for objects with differences.
Possible resolutions for differences in existence or data:
- Send to SAP APO
- Send to SAP R/3
- Delete in SAP APO
The comparison function generates a list of all objects. Single, multiple, or all objects can be selected for the refresh (posting the data from the SAP OLTP system back to the APO system).
External Consistency: Important Issues CIF delta report displays differences that are not real Reason: Data was being changed during report execution Example: You start the CIF compare/reconcile tool, and at the same time an order is deleted in SAP APO. When the order data is read in SAP R/3, the order is not yet deleted in R/3 but it is already deleted in APO Tip: Run the report during low system activity (at night)
Handling high data volumes: Run the report in several parallel jobs with fewer object types in each Example: Start the report separately for inhouse production, for external procurement, ...
If you want to reconcile, you must run the CIF delta report interactively. The report does not reconcile when run in the background.
CIF Delta Report Version 3 for SAP APO 3.0A
The new CIF delta report version 3 delivered with SAP APO 3.1 is also available in SAP APO 3.0A as of SP 19 For SAP APO 3.0A prior to SP 19, download the transport mentioned in SAP Note 459402 from sapservX In the SAP R/3 System, you need plug-in PI 2001.1 as minimum Always implement SAP Note 458487 To be able to use all features of CIF delta report version 3, install PI 2001.2 with the highest available SP level In APO 3.0A, you must use the new report /SAPAPO/CIF_DELTAREPORT3 directly Do not use transaction /SAPAPO/CCR because it uses the old version of the report
Further Documentation
For additional information about consistency checks, go to URL: http://service.sap.com/scm >> mySAP SCM Technology >> Consistency Checks http://help.sap.com >> mySAP Cross Industry Solutions >> SCM >> SAP APO >> SAP APO 3.1 >> Integration SAP APO and SAP R/3 >> Technical Integration >> SAP Core Interface >> Administration >> Compare / Reconcile
Summary
Now you are able to: Explain how internal and external consistency of SAP APO systems is checked and restored
Disaster Recovery
1 APO Overview
2 APO Core Interface
3 CIF Monitoring
4 APO Optimizers
5 APO and BW
6 APO Sizing & Performance
7 Data Consistency
8 Disaster Recovery
Disaster Recovery
Contents Backup/Recovery Considerations Backup of Individual Components Recovery of Individual Components Backup/Recovery Concepts
Objectives At the end of this unit, you will be able to: Describe the backup and recovery of components in a mySAP.com landscape Describe backup and recovery strategies in the mySAP.com landscape
APO System Architecture
[Figure: presentation clients access the landscape via SAP GUI or via browser through a web server and the ITS. The SAP OLTP system consists of application servers and the OLTP database. The SAP APO system consists of the APO application servers, the Optimizer, the BW layer, the APO DB, and the liveCache, each with its own database server.]
General Backup and Restore Strategies
Single system component Complete system environment
General backup and restore strategies: - Single component - Complete system environment System components include: - OLTP - APO - BW - Operating system - DBMS - Software and configuration files - SAP file systems - Middleware components A consistent backup of the complete environment is normally required for disaster recovery (in case the whole environment is destroyed) or for setting up system copies.
Important Questions
Do we need a consistent backup of the complete environment? For normal operation: No. Since a restore is done only for an individual system, each system can be backed up individually.
Do we need a consistent backup in case of a point-in-time recovery? In general: No. A consistent backup does not reflect an arbitrary point in time, so restoring one would cause much more data loss than a point-in-time restore and would affect all systems instead of just one.
Do we need a consistent backup at all? For special situations: Yes. A consistent backup can provide a savepoint before an upgrade or during a data migration, or serve as the basis for consistent test systems.
If a single system component fails, you do not need to restore the complete environment; only the affected component has to be restored and recovered. After recovery of this component, data consistency in the system environment should be restored: all open transactions should be rolled back, and all committed transactions should be recovered. In case of a point-in-time recovery of one system component, do not restore the consistent backup of the complete environment, as this may cause additional data loss in other components. Avoid restoring directly on the production system: try the restore on another system first and fix any problems there. If that restore fails, the original production system is still available. Do not test restores in the production environment.
Requirements on Backup and Restore Backup & Restore of a single system component Minimize data loss Ensure recovery to a point-in-time (point of failure) Ensure data consistency between systems No impact on production system caused by backup Fast backup/restore/recovery Easy handling
Consistent Backup & Restore of the complete system environment Synchronization points Consistent system copies
Backup/Recovery and Data Consistency IT High Availability concepts Standard Backup and Recovery strategies for R/3 and SAP APO and BW Database server The SAP APO Optimizer is not Backup & Recovery relevant liveCache behaves as a standard Database Data consistency strategy for the whole SCM landscape Middleware backup – ITS
High Availability Configurations RAID setup Redundant hardware paths Implementation of HA solutions Additional offline copies of database Different devices for logs and data DB log mirroring (on different controllers) Logs saved twice on disk and twice on tape Test restore and recovery regularly Execute database consistency checks regularly Avoid restore on production system
Document the backup and restore process. Ensure operability with regular tests and training.
Backup/Recovery of Individual Components
Backup/Recovery of Individual Components
Disaster Recovery Concepts
Point-in-time Recovery
Data Backup for System Components
Component | Data type | Data class | Backup method
- APO System: DB + liveCache; O + R; traditional DB backup methods + liveCache backup
- OLTP (R/3) System: DB; O; traditional DB backup methods
- ITS: flow & service files; R; multiple systems and/or traditional backup and/or publishing
- Web server: multimedia files; R; multiple systems and/or traditional backup and/or publishing
(O = Original, R = Replicated)
The graphic lists all system components that hold data and that must be considered with regard to their backup needs. A detailed backup strategy always depends on the specific customer’s situation and demands, such as amount of data, implemented processes, service level agreements. In general, all systems that hold originals of data must be secured and backed up carefully, while systems that hold only replicated data can be rebuilt from the original data sources. In case of multiple installations of the same component (holding the same data), each one can serve as a data backup of the other.
APO System Backup SAP APO System
SAP OLTP System
SAP Instance
Database Instance
SAP Instance
liveCache Instance
Database Instance
Storage Mgmt
Storage Mgmt
Back up the APO database as in a standard SAP system. Back up liveCache.
An SAP APO system consists of the same components as an SAP R/3 System (SAP_BASIS, SAP_ABA), of an RDBMS (the APO DB), and of liveCache. To back up an SAP APO system, use the same methods that you use to back up an SAP system (depending on the database system used), and back up liveCache. These two backups are created independently of each other, and also independently of backups of linked SAP R/3 Systems. If you lose data either in the APO DB or in liveCache and you must recover this data, no inconsistencies will normally occur as long as you perform a complete recovery, that is, you recover up to the last committed transaction. If you lose data either in the APO system or in a linked SAP R/3 System and you must recover this data, no inconsistencies will occur as long as you perform a complete recovery, that is, you recover up to the last committed transaction.
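The two independent backups can be sketched as a shell fragment. The system name, medium name, and credentials are hypothetical, and brbackup (BR*Tools, here assuming an Oracle-based APO DB) and dbmcli are only examples; in DRYRUN mode the script merely prints the commands it would run:

```shell
#!/bin/sh
# Sketch: the two independent backups of an APO system.
# DRYRUN=1 (default) only prints the commands instead of executing them.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

# 1) Back up the APO database with the standard tool for its RDBMS
#    (here BR*Tools for an Oracle-based APO DB, as one example).
run brbackup -t online -m all

# 2) Back up liveCache independently via its DBM client.
#    "LC1", "control,secret", and "CompleteMedium" are placeholders.
run dbmcli -d LC1 -u control,secret backup_start CompleteMedium DATA
```

The order of the two backups does not matter, because they are independent; consistency after a restore relies on complete recovery up to the last committed transaction, as described above.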
APO System Recovery
Recovery procedure:
- Recover the APO database like a database in a standard SAP system
- Recover liveCache with the appropriate method for your liveCache release
- Start up the APO instances without batch jobs to avoid data synchronization between the APO and OLTP systems
- Check the APO system; if OK, restart APO with the default profile
After the recovery, test all components that are connected to your APO system OLTP backend systems ITS server
Test recovery of the APO system regularly. Validate tapes / tape drives, restore duration, completeness of backup.
To start up an SAP instance without batch jobs, remove the parameter rdisp/wp_no_btc from the instance profile or set its value to 0.
Location of the instance profile:
- UNIX: /sapmnt/<SID>/profile
- Windows NT/2000: <drive>:\usr\sap\<SID>\SYS\profile
Name of the instance profile: <SID>_DVEBMGS<instance number>_<host name>
- Example: P30_DVEBMGS00_saphost
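Disabling batch work processes can be scripted against a copy of the instance profile. This is a minimal sketch: the profile path and its sample contents below are illustrative only, not a real instance profile:

```shell
#!/bin/sh
# Sketch: set rdisp/wp_no_btc = 0 in an instance profile.
# PROFILE points to a sample file here; use your real profile path instead.
PROFILE=${PROFILE:-/tmp/P30_DVEBMGS00_saphost}

# Create an illustrative profile if none exists yet.
if [ ! -f "$PROFILE" ]; then
    printf 'rdisp/wp_no_dia = 10\nrdisp/wp_no_btc = 4\n' > "$PROFILE"
fi

# Set the batch work process count to 0, or append the parameter if missing.
if grep -q '^rdisp/wp_no_btc' "$PROFILE"; then
    sed -i 's|^rdisp/wp_no_btc.*|rdisp/wp_no_btc = 0|' "$PROFILE"
else
    echo 'rdisp/wp_no_btc = 0' >> "$PROFILE"
fi
grep '^rdisp/wp_no_btc' "$PROFILE"
```

Once the checks after recovery succeed, restore the original parameter value and restart the instance with the default profile.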
Restoring Web Middleware Component Data
Copy files from second ITS
ITS
Redistribute flow and service files Restore a backup Copy files from second Web server
Web server
Restore a backup Republish multimedia files
If there are several identical servers (for example, for scalability or high availability reasons), data from the Web middleware (ITS and Web server) does not need to be backed up: if one server fails, the data can be copied back from another server. A backup may only be needed if all servers of one type have failed or are damaged; in that case, the data has to be published again from the APO or OLTP system. ITS flow and service files are integrated into the correction management of the corresponding SAP System, so they are backed up together with the SAP System. Version data for the components must match the version of the corresponding leading system.
ITS Server Recovery
Recovery: full restore of file system and registry. Stop all SAP services (ITS Admin Service, IACOR Service), stop all Web server services, republish templates.
Test ITS server recovery regularly: validate tapes / tape drives, restore duration, completeness of backup.
If no current backup is available: re-install the Web server, re-install ITS in the same drive / directory and with the same parameters, re-install the IACOR service, publish the ITS templates via SE80.
(Diagram labels: Registry, Web Server, WGate, Internet Web Server, MIME objects, Internet Transaction Server, AGate, HTML business templates, Service Files, "Contains SAP IACs and customer developed IACs")
Binaries are installed only once on the ITS server. If you upgrade one ITS instance, all others are affected and should therefore be backed up. Web server services: WWW Publishing Service, FTP Publishing Service, IIS Admin Service. A reboot may be required after recovering the registry. As of SAP Basis Release 4.6B, ITS templates are also stored in the SAP database, so a copy of the current templates is always available there. If you change ITS files manually (without using SE80), make sure to resend them (check-in / check-out) from ITS to the SAP database to ensure a consistent state ready for emergency use.
Components of the restore: <drive>:\Inetpub\wwwroot, <drive>:\Inetpub\scripts, <drive>:\Inetpub\<...>, <drive>:\Inetpub\<...>\IMS_DOCS, <drive>:\Program Files\sap\its, %SystemRoot%\system32\inetsrv, %SystemRoot%\system32; registry backup; operating system backup (all files).
Registry hives for ITS: HKLM\Software\SAP\ITS, HKLM\Software\SAP\ITSConverter. Some DLLs (such as SAPBasis20.dll) are stored in the subdirectory %SystemRoot%\system32 of Windows NT.
Disaster Recovery Concepts
Backup/Recovery of Individual Components
Disaster Recovery Concepts
Point-in-time Recovery
Architecture APO Server: OLTP APO Server
OLTP
SAP Instance
SAP Instance RFC Plug-In
Three data transfer layers Application Database Storage
Database Instance
Database Instance
Storage Mgmt
Storage Mgmt
Data
Data
For simplicity, only two production systems are displayed here. Similar procedures can be used if there are more production systems in the environment. To get a consistent environment copy, all methods presented in this unit use some mechanism to freeze any modifications that could result in inconsistencies between systems. Data transfer layers:
- Application layer
- Database layer
- Storage layer
Application Level: Stopping All SAP Instances SAP R/3 System
SAP APO System
SAP Instance
Advantages All database instances stay online
RFC
SAP Instance Plug-In
Easy to implement
Disadvantages No 7x24 availability
liveCache Instance
Storage Mgmt
Database Instance
Storage Mgmt
Database Instance
Storage Mgmt
Performance loss due to reset of SAP buffers Batch processing stopped
Application Level: Stopping All but One SAP System SAP R/3 System
SAP APO System
SAP Instance
RFC
Advantages All databases instances stay online
SAP Instance Plug-In
At least one SAP system stays online Easy to implement
liveCache Instance
Database Instance
Disadvantages Database Instance
Running instance: Communication not possible
Storage Mgmt
Storage Mgmt
Storage Mgmt
Manual restart of pending RFCs Stopped instances: No 7x24 Performance loss Batch processing
To get a consistent state, with systems synchronized, all systems except one must be down during the backup process of all databases and liveCache. A restore using this backup will provide this consistent state again. However, this is only true if all APO instances are down so that no activities are possible in the APO system. If you allow working with the APO system, the backup of liveCache and the backup of APO database are no longer consistent. Therefore, you cannot choose APO as the system that stays online. - Recall that point-in-time recovery of an APO system is generally not supported. This scenario may apply for a customer who does not use the APO system in a 7x24 timeframe. There may be a serious performance impact if the application that is online creates many queue entries while the other applications are down. Processing of these messages could decrease performance after restart. Therefore this method should not be used during times of high communication load.
Application Level: Delayed Synchronization Start online backup
System 2 System 3 finished finished Shutdown n-1 systems and start log backup System 1 finished
System 1 Online Backup System 2 Online Backup System 3 Online Backup
Delta Data
Delta Data
Log Backup
Log Backup
Delta Data
Log Backup
Systems synchronized Consistent backup
The time during which all systems but one are offline can be reduced by taking an online database backup while all systems are running, followed by a log backup created while all systems but one are stopped (including APO), that is, while the systems are synchronized. To get a consistent state, all systems except one must be down at one point in time. A consistent state, with the systems synchronized, is reached after the last of the n-1 systems is shut down. Then the log backup of the one system that stays online can be started. The log backups of the shut-down systems can be started earlier, because there cannot be any changes in these systems. For the restore of the systems, you must choose recovery up to a point in time after the shutdown of the last system and before the restart of the first one. This recovery will use the backed-up logs. Because all systems but one were down, the recovery provides a consistent state of the whole system group.
Steps in detail:
- Take an online backup of all systems.
- After the online backup of all systems is finished, shut down the SAP instances of all systems but one (application shutdown).
- In each of these systems, start a database log backup immediately after the shutdown.
- In the system that stays online, start the log backup after the last of the other systems has been shut down. (Oracle: initiate a log switch before you start the log backup.)
- After the log backup on the online system has started, the SAP instances can be restarted in all systems.
Do not forget to document the time range that can be used for point-in-time recovery.
Advantage: short downtime, compared to any method where the database backup is created while the SAP instances are down. Disadvantages: the same as for the methods described previously.
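The steps above can be sketched as a small driver script. The system IDs and the backup_online / backup_logs / stopsap / startsap helpers are placeholders for site-specific tools, so the sketch runs in dry-run mode and only prints the intended sequence:

```shell
#!/bin/sh
# Dry-run sketch of the delayed-synchronization backup sequence.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

ONLINE_SID="R3P"          # the one system that stays online (never APO)
OTHER_SIDS="APO BWP"      # all remaining systems, including APO

# Step 1: online database backup of every system.
for sid in $ONLINE_SID $OTHER_SIDS; do
    run backup_online "$sid"
done

# Steps 2+3: shut down the n-1 systems; start each log backup immediately.
for sid in $OTHER_SIDS; do
    run stopsap "$sid"
    run backup_logs "$sid"
done

# Step 4: log backup on the online system, after the last shutdown.
run backup_logs "$ONLINE_SID"

# Step 5: restart the stopped systems and document the sync point.
for sid in $OTHER_SIDS; do
    run startsap "$sid"
done
echo "synchronization point: $(date -u)"
```

The printed synchronization point is what you would record as the usable window for a later point-in-time recovery.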
Database Level: Offline Backup SAP R/3 System
SAP APO System
SAP Instance
RFC
Contents of SAP buffers not lost
SAP Instance Plug-In
liveCache Instance
Storage Mgmt
Database Instance
Storage Mgmt
Advantages
Easy to implement
Disadvantages No 7x24 availability (several hours downtime)
Database Instance
Performance loss on database level
Storage Mgmt
When you shut down the APO database, no activities are possible in the APO system because the work processes enter the reconnect status. That is why you do not have to stop liveCache to create a consistent backup of the whole environment. Alternatively, a backup of liveCache 7.4 can also be created in ADMIN mode. If you are using a database system that does not support offline backups, you can shut down the database and create a backup of all database files at operating system level. The performance loss after restart is due to the buffer reset of the database instance.
Database Level: Suspend Write Advantages SAP BW System
SAP R/3 System
SAP Instance
SAP Instance RFC Plug-In
Low impact on performance Very short standstill of systems Potentially possible at any time Read access possible (not with every DBMS)
Database Instance
Storage Mgmt
Database Instance
Storage Mgmt
Disadvantages Synchronization of suspend writes between databases not easy to handle The more systems involved the more complicated to handle Not possible with SAP APO system
All implementations of the suspend-write technique have in common that they create a database checkpoint and prevent any further writes to the database system. Applications connected to the database are merely throttled until writing is allowed again and the applications can continue. A checkpoint is not a point of consistency: it is a point of known status, where the database system knows which transactions were "in flight" and which were already committed but not yet physically written to the database. You cannot recover to a checkpoint in SAP without losing the transactions that were in flight and the transactions that committed after the checkpoint. If all systems are suspended and backed up during the same period of time, a consistent copy of the complete environment can be achieved, on the basis of the last checkpoint before the backup. Because all write activity is suspended until the backup is finished, the database can be brought up on this copy without a log recovery. Disadvantages of suspend write: suspending writes in all databases must be synchronized by scripts. The more systems are involved in the environment, possibly with heterogeneous database management systems, the more complicated and error-prone this procedure becomes. This technique may not be possible with all databases; in particular, it is not supported for SAP DB and liveCache. As this procedure has not yet been tested in a production environment, it cannot be generally recommended.
Suspend Write Not Possible With APO SAP R/3 System
SAP APO System
Suspend write is not supported by liveCache
!
SAP Instance
RFC
SAP Instance Plug-In
liveCache Instance
Storage Mgmt
Database Instance
Storage Mgmt
Database Instance
Storage Mgmt
Suspend write cannot be used to create a consistent backup of a system group that includes an APO system. As writing cannot be suspended in liveCache, transactions would be committed there but not in the APO DB, so the backup would not be consistent. A possible combination of suspending writes in all RDBMS in the landscape and bringing liveCache into ADMIN mode has not been tested for creation of a consistent backup.
Suspend Database Writes: Possible Implementation Start suspend
Start re-sync
Re-sync finished
Fast disk copy
Suspend System 2
Suspend System 1
Resume Systems
Split
System 1 System 2 All systems suspended Start suspend
Start backup
Backup finished
Pointin-Time
Suspend System 2
Suspend System 1
Resume Systems
System 1 System 2 All systems suspended
Possible procedure using fast disk copies (done simultaneously for all databases):
- Resynchronize the disk mirror (for all systems)
- Suspend write on all databases (after the synchronization is finished)
- Split off the disk mirror
- Resume write activity
A consistent copy of the complete environment can be achieved if all systems are suspended during the same period of time and operation only resumes after all systems have split off their disk mirrors. These disk copies can then be used to bring up a consistent copy of all included systems. Implementation techniques for fast local disk copies vary, so the procedure may differ from the one shown here; the techniques used for the different database systems also vary.
Possible procedure using standard backup functionality (done simultaneously for all databases):
- Take an online backup of all involved systems
- Suspend write on all databases (after the backups of all databases are finished)
- Resume write activity
- Back up the logs
By recovering all systems to a specific point in time during the period when all systems were suspended, a consistent copy of all included systems can be generated. The point in time may differ, within the suspension period, between systems. Do not forget to document the time range that can be used for point-in-time recovery.
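The split-mirror variant can be sketched the same way. mirror_resync, db_suspend_write, and the other helpers stand in for the storage vendor's and the DBMS's actual tools and exist only for illustration; in dry-run mode the sketch just prints the sequence:

```shell
#!/bin/sh
# Dry-run sketch of a consistent split-mirror copy via suspend write.
DRYRUN=${DRYRUN:-1}
run() {
    if [ "$DRYRUN" = 1 ]; then echo "WOULD RUN: $*"; else "$@"; fi
}

SIDS="R3P BWP"            # no APO here: suspend write is not supported there

run mirror_resync                 # 1. resynchronize the disk mirror
for sid in $SIDS; do
    run db_suspend_write "$sid"   # 2. suspend writes once resync is complete
done
run mirror_split                  # 3. split off the disk mirror
for sid in $SIDS; do
    run db_resume_write "$sid"    # 4. resume write activity
done
run backup_from_mirror            # back up later from the split-off copy
```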
Required Database System Capabilities
DB System | Suspend Write | Resume Write
- DB2 UDB S/390: SET LOG SUSPEND | SET LOG RESUME
- DB2 UDB open: set write suspend for database | set write resume for database
- Informix: ONMODE -C BLOCK | ONMODE -C UNBLOCK
- Oracle: ALTER DATABASE ... BEGIN BACKUP, ALTER SYSTEM SUSPEND | ALTER SYSTEM RESUME, ALTER DATABASE ... END BACKUP
- SQL Server: dbcc freeze_io (<dbname>) | dbcc thaw_io (<dbname>)
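For scripting, the statement pairs from the table can be looked up with a small helper. This is a convenience sketch only: for Oracle it returns just the ALTER SYSTEM variant, and the <dbname> argument for SQL Server is left as a placeholder:

```shell
#!/bin/sh
# Look up the suspend/resume statements per DBMS, as listed in the table.
suspend_cmd() {
    case "$1" in
        db2-zos)   echo "SET LOG SUSPEND" ;;
        db2-open)  echo "set write suspend for database" ;;
        informix)  echo "ONMODE -C BLOCK" ;;
        oracle)    echo "ALTER SYSTEM SUSPEND" ;;
        sqlserver) echo "dbcc freeze_io (<dbname>)" ;;
        *)         echo "unknown DBMS: $1" >&2; return 1 ;;
    esac
}

resume_cmd() {
    case "$1" in
        db2-zos)   echo "SET LOG RESUME" ;;
        db2-open)  echo "set write resume for database" ;;
        informix)  echo "ONMODE -C UNBLOCK" ;;
        oracle)    echo "ALTER SYSTEM RESUME" ;;
        sqlserver) echo "dbcc thaw_io (<dbname>)" ;;
        *)         echo "unknown DBMS: $1" >&2; return 1 ;;
    esac
}

suspend_cmd oracle   # prints: ALTER SYSTEM SUSPEND
resume_cmd informix  # prints: ONMODE -C UNBLOCK
```

A suspend script would feed the returned statement to the matching database client (sqlplus, onmode, and so on) for each system in the landscape.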
Suspend Write on Storage Level SAP R/3
SAP BW
SAP APO
SUSPEND / RESUME WRITE
Mirrored Data
Logical Volumes
Storage Level: Stopping Write Access SAP R/3 System
SAP APO System
SAP Instance
RFC
Low impact on performance
SAP Instance Plug-In
liveCache Instance
Database Instance
Advantages
Database Instance
Potentially possible at any time File systems can be included in backup Read access possible
Prerequisites All systems must write to the same storage system (SAN, NAS)
Storage Mgmt
Backup is created from mirrored data after splitting the mirror
SAN/NAS
Some storage systems enable you to stop write accesses. If all systems store their data in the same storage system, this can be used to create a consistent copy. Compared to the methods presented before, these concepts, which stop write accesses at storage level, offer the additional advantage that file systems can also be included in the backup. Possible procedure using a mirrored disk system:
- Resynchronize the disk mirror
- After synchronization is finished, stop write accesses (this step may not even be needed if the split is done as an atomic operation)
- Split off the disk mirror
- Resume write activity to the primary disk system
- Create a backup of the data from the split-off mirror
This procedure has not yet been tested by SAP, and additional actions may be required to implement it. The procedure is hardware dependent; check with your storage partner for possible solutions. This scenario does not work if the two systems are connected to different storage subsystems and write access cannot be stopped at exactly the same time. One system may still commit and write changes while the second system is already suspended. The changes made by the first system can be transmitted to the second system and processed after writing resumes. The disk copies of both systems then contain the modifications made on the first system, but not those that still need to be made on the second system. The copy is thus inconsistent.
Storage Level: Consistency Group SAP R/3 System
SAP APO System
Advantages Low impact on performance
SAP Instance
RFC
SAP Instance Plug-In
liveCache Instance
Storage Mgmt
Database Instance
Storage Mgmt
Potentially possible at any time Filesystems can be included in backup Read access possible
Database Instance
Storage Mgmt
Prerequisites New storage system functionality needed
Consistency Group
The concept of consistency groups can be used to overcome the limitation that only one storage system may be used when implementing suspend write at the storage level. A consistency group can span multiple storage systems and can thus provide a consistent backup of all data belonging to this group. This can include data of several different databases and file systems.
Consistency Groups SAP R/3
Consistency Group
SAP BW
SAP APO
SUSPEND / RESUME WRITE
Live Data
Mirrored Data
Logical Volumes
Logical Volumes
Failover/Second Instance Distributed Systems
SAP R/3
SAP BW
SAP APO
SAP R/3
SAP BW
SAP APO
Remote Copy
Consistency Group
The concept of consistency groups can also be extended to include remote copies to another site.
Multiple Components in One Database (MCOD) SAP BW System
SAP R/3 System
SAP Instance
SAP Instance RFC Plug-In
Advantages Consistent backup very easy Consistent recovery to every point in time Easy copying of complete landscape
Disadvantages Database Instance
All systems affected by a crash or a restore No point-in-time restore for single components
Storage Mgmt
Several other restrictions
Not possible with APO
When considering multiple components in one database (MCOD), take the following into account: All components are affected by a crash or a restore. A point-in-time restore is usually not possible for a single component. A component's performance depends on that of all other components. Resource consumption may be quite challenging (64-bit architecture and symmetric multiprocessing necessary). Database administration can no longer be done independently. Upgrades cannot be done with logging switched off (this does not apply to all database platforms). Multiple components in one database are supported for SAP BW as of release 3.0. - Remember that OLTP and OLAP components may require different database parameterization, depending on the database management system used. SAP APO 3.0A/3.1 does not yet support MCOD for the APO DB, because APO 3.0A/3.1 has an embedded SAP BW 2.0B/2.1C. - Also, liveCache needs a separate instance. Middleware components are still not included. An installation with multiple components in one database is not a general solution; its use is largely limited to test, demonstration, and development environments. MCOD is currently available only under controlled availability (CA). Production use may be considered for systems with strong consistency requirements. For more information about MCOD, see http://service.sap.com/oneDB.
Synchronization Points: Summary
Application level (offline) Stop all systems during backup time Stop one system during backup time Create backups, stop application, save logs
Suspend write on database level Suspend write on storage management level SAN, NAS Consistency groups
Multiple components in one database
Point-in-time Recovery
Backup/Recovery of Individual Components
Disaster Recovery Concepts
Point-in-time Recovery
Point-in-time Recovery
Full recovery not possible. Possible reasons: log files corrupt, tapes destroyed, DB logically corrupt.
Full recovery not an option if, say, single tables were dropped or wrong transports were imported.
Consequences: data loss, inconsistencies between systems.
All database systems guarantee that it is always possible to recover to the current point in time after a system crash. This implies that all committed transactions can be recovered after a crash and that the system will be in a consistent state after recovery. Data between systems can only become inconsistent if it is impossible to recover one of the systems to the point of the crash and a recovery to an earlier point in time was performed instead. Avoid a point-in-time restore if possible. Usually the reasons for an incomplete recovery are of a physical nature: for example, the log files are corrupted, destroyed, or missing and therefore cannot be applied; or the database is logically corrupted and cannot be repaired at all.
Alternatives to a Point-in-Time Recovery
To avoid a point-in-time recovery, consider the following:
Single tables lost or damaged: reconstruct the table from a test system, from redundant data in other tables, or from redundant data in other systems; or restore the system on a sandbox and reconstruct the table from there.
Wrong transports imported: apply correcting transports.
Other handling errors: repair the inconsistent data.
If another solution seems possible, avoid point-in-time recovery.
Point-in-time Recovery: Incomplete Recovery of OLTP - Alternatives
(Diagram: last synchronization point; point-in-time APO / last consistent backup for APO)
1. Point-in-time restore for one system (A): APO: data lost, inconsistencies; OLTP: unchanged
2. Restore of a consistent environment backup (B): APO: data lost; OLTP: data lost
3. Point-in-time restore for all systems (C): APO: data lost, inconsistencies; OLTP: data lost
Point-in-time recovery of one system always involves data loss and therefore causes inconsistencies in your environment. Weigh the amount of inconsistencies that will arise against the amount of data loss before doing the recovery. Data inconsistencies can be prevented by restoring a consistent backup of the complete landscape. However, the synchronization point of such a backup is critical: with current techniques, a consistent backup of the whole system environment is only available for certain points in time (which may be unsuitable) and not for an arbitrary point in time. If this time is long before the necessary recovery point, the data loss would be unnecessarily large. In that case (or if no consistent backup of the environment is available), alternative A could be an option. If APO and OLTP are both restored to a point in time, minimizing the time difference between the two restores minimizes the inconsistencies. However, inconsistencies are possible even if the systems are restored to the same point in time, because server clocks are not synchronized (unless both systems run on one host). A point-in-time recovery of an APO system can always result in internal inconsistencies, which must be fixed first, before repairing external inconsistencies.
Point-in-time Recovery: Tradeoffs
Factor | (A) Point-in-time restore for one system | (B) Restore a consistent backup | (C) Point-in-time restore for all systems
- Data loss: (A) minimum, only one system affected; (B) maximum, all systems affected; (C) medium, all systems affected
- Inconsistencies: (A) most inconsistencies; (B) no inconsistencies; (C) few inconsistencies
- Downtime: (A) only one system down, downtime depends mainly on the amount of logs; (B) all systems down, downtime possibly quite short (depending on the method used); (C) all systems down, downtime depends mainly on the amount of logs
Point-in-time Recovery: Comparison of Strategies Point-in-time recovery needed for one system Consequence: data loss and/or data inconsistencies
B: Restore consistent environment backup. Data loss in all systems; possibly more data lost than due to PIT recovery. No inconsistencies between the restored systems (though possibly inconsistencies with other components; orders are lost).
C: Point-in-time restore also for the other systems. Data loss since the PIT in all systems; about the same data loss in all systems. Few inconsistencies.
A: Point-in-time restore for one system. Data loss in one system; least data loss; chance to salvage data from the surviving systems. Most inconsistencies (some orders / business documents inconsistent).
(Original diagram axes: data inconsistency versus data loss.)
It is not possible to achieve a consistent restore for all systems by doing point-in-time restores for all participating systems: there will always be some modifications that were already made on one system but not yet on the others, leading to an inconsistent state. All systems will lose the data entered after the point in time to which recovery is done. Keeping all systems as they are maximizes the extent of data inconsistencies but minimizes data loss; it also enables you to rebuild some of the lost data from the data that is still available in the untouched systems. Inconsistencies may lead to errors during business processing, but you can perform manual corrections based on the data in the other systems. Automatic identification and correction of inconsistencies may be possible. A backup strategy must include concepts for dealing with point-in-time recovery of the different system components. Which of the above concepts is applicable for a specific customer depends mainly on the situation on site. Relevant questions include: Is the data loss acceptable? Can data be recovered from other systems? Can operations continue with partly inconsistent data? How much downtime is acceptable? Suitable strategies differ depending on the failing system component.
Point-in-time Recovery: General Recommendations In case of a point-in-time-recovery of one system, SAP recommends the following procedure: Keep a copy of the old productive system in order to manually reconstruct the lost data at a later stage Keep the other systems running Do not reset other systems holding originals of data Fix the inconsistencies as well as possible Components holding only replicated data can be initialized
Advantages: Data loss is kept to the absolute minimum. Downtime is kept to a minimum, and downtime of other systems is avoided. Business can go on (although some processes may be affected, long restores of the whole environment are harder to accept). Additional problems or errors that could be caused by resetting other systems are avoided.
The above is a very general SAP recommendation on how to proceed in a situation where a point-in-time recovery of one system is necessary. It is impossible to give a recommendation that covers all customers' needs. There is also no general approach that fixes all inconsistencies automatically. Each project must consider what needs to be done from an application point of view if specific application data is lost in one of the systems.
© SAP AG
ADM355
8-38
Point-in-time Recovery of APO System (1)
Consequences:
Data loss in the APO system
The OLTP system has data that is no longer available in APO
If you reset the OLTP system (the leading system), this data is lost as well
Alternatives:
A: 1. Restore APO to the latest possible point in time, keep OLTP running. 2. Fix inconsistencies.
B: Restore a consistent backup of both systems.
C: 1. Reset the OLTP system to the same or a slightly later point in time than APO. 2. Fix inconsistencies.
If you must do a point-in-time recovery on the APO system, try to use the OLTP system as the leading system, as most of the time it still has the relevant data (except for data that has not yet been replicated to the OLTP, or data that is not replicated to the OLTP at all).
Point-in-time Recovery of APO System (2)
[Diagram: timelines for alternatives A, B, and C relative to the last synchronization point / last consistent backup. A: APO is restored to its recovery point while the OLTP keeps running; data not yet replicated and APO-only data (such as APO online data) are lost, OLTP data becomes inconsistent, and some data can be rebuilt from the OLTP. B: both APO and OLTP are reset to the last consistent backup; data in both systems is lost. C: APO and OLTP are both restored to (nearly) the same point in time; data in both systems is lost.]
Three alternatives:
A: Restore the APO system to the latest possible point in time. APO data that was not yet transferred to the OLTP will be lost. Identify data that was already transferred to the OLTP; this data can be transferred back to the APO system.
B: Restore a consistent system backup. Data is consistent, but more data from OLTP and APO will be lost, and there is downtime in the other systems as well. It may be possible to reconstruct lost data from a copy of the former OLTP system.
C: Restore the APO system to the latest possible point in time and restore the OLTP system(s) to a slightly later point in time. Data from OLTP and APO will be lost, and there is downtime in the other systems as well.
Another conceivable alternative is to discard all data ever entered into the APO system (since the start of production) and perform a complete initial data transfer from the OLTP without an APO restore. In that case, all APO data that was not replicated to the OLTP will be lost. There are two ways to perform this alternative:
Delete all application data manually from APO but preserve the customizing data.
Set up the APO system completely from scratch, including customizing and so on.
Point-in-time Recovery of OLTP System (1)
Consequences:
Data loss in the leading system
APO references OLTP data that no longer exists
Alternatives:
A: 1. Restore the OLTP to the latest possible point in time, keep APO running. 2. Fix inconsistencies.
B: Restore a consistent backup of both systems.
C: 1. Reset the APO system to the same or a slightly earlier point in time than the OLTP. 2. Fix inconsistencies.
Point-in-time Recovery of OLTP System (2)
[Diagram: timelines for alternatives A, B, and C relative to the last synchronization point / last consistent backup. A: the OLTP is restored to its recovery point while APO keeps running, leaving inconsistent APO data. B: both systems are reset to the last consistent backup. C: APO and OLTP are both restored to (nearly) the same point in time.]
Three alternative approaches:
A: Restore the OLTP system to the latest possible point in time. Data from the OLTP will be lost, and APO references data that no longer exists in the OLTP. Identify lost changes that were made to data that existed in the OLTP before the point in time; this data can then be corrected manually in the OLTP. Some lost data in the OLTP can be reconstructed from APO.
B: Restore a consistent system backup. Data is consistent, but typically more data from OLTP and APO will be lost, and there is downtime in the other systems as well. It may be possible to reconstruct lost data from a copy of the former APO system.
C: Point-in-time recovery for all systems. Data from OLTP and APO will be lost, and there is downtime in the other systems as well.
Point-in-time Recovery: Impact
Data loss in OLTP or APO can cause inconsistencies in their dependent components.
OLTP data loss:
APO/liveCache: Synchronization might be needed
BW: Corrections might be needed
APO data loss:
ITS: Publish flow & service files
Web server: Publish multimedia files
BW: Corrections might be needed
You only need to act if a component's data was affected by changes made during the lost period
After data loss in one of the main systems (OLTP or APO), the data in all dependent components must be examined and might need to be restored to reflect the new situation. If no changes were made to the data in a dependent component during the period of the data loss, there is no need to adjust the component's data.
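The rule above can be sketched in a few lines of Python. This is a hedged illustration only: the timestamps and the idea of collecting change timestamps per component are invented for the example.

```python
from datetime import datetime

def needs_adjustment(component_change_times, recovery_point, crash_time):
    """A dependent component (BW, ITS, Web server, ...) only needs attention
    if any of its data changed during the lost period, i.e. after the point
    in time recovered to and up to the moment of the failure."""
    return any(recovery_point < t <= crash_time for t in component_change_times)

recovery_point = datetime(2003, 5, 12, 14, 0)   # point in time recovered to
crash_time = datetime(2003, 5, 12, 18, 30)      # moment of the failure

# A BW load ran inside the lost period -> corrections might be needed
assert needs_adjustment([datetime(2003, 5, 12, 16, 0)], recovery_point, crash_time)
# ITS service files were last published before the recovery point -> nothing to do
assert not needs_adjustment([datetime(2003, 5, 11, 9, 0)], recovery_point, crash_time)
```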
Point-in-time Recovery: Summary
Point-in-time recovery is generally not an option from a business perspective:
Modern storage systems prevent hardware failures
Database system backup mechanisms ensure complete recovery
In case of a point-in-time recovery of one system, distributed data offers the chance to salvage data from the surviving system
A backup strategy should also include ways to handle inconsistencies
It is important to have strategies to deal with inconsistencies. With collaborative business scenarios that implement cross-enterprise processes, it will be hard to avoid inconsistencies if an enterprise needs to do a point-in-time recovery on all of its systems.
Backup and Recovery: Summary
Architecture: APO; location of data (original and replicated)
Data Flow and Data Integrity: RFC communication ensures consistency
Backup and Recovery: protection from data loss due to hardware errors; traditional backup for each individual component
Consistent System Copies: synchronization points
Point-in-time Recovery: incomplete recovery is an extremely rare situation; alternatives for avoiding a point-in-time recovery; alternatives in case of an incomplete recovery
Further Documentation
For additional information about disaster recovery, see the following URLs: service.sap.com/ATG and service.sap.com/onedb
Disaster Recovery: Summary
You are now able to: Describe the backup and recovery of components in a mySAP.com landscape Describe backup and recovery strategies in the mySAP.com landscape
For more detailed information on backup, restore and recovery, refer to R/3 Basis documentation or Basis courses (for example, BC505—Oracle Database Administration)
Appendix: liveCache Backup/Recovery
Refer to course ADM555 (liveCache Administration)
liveCache Devspace Configurations
[Diagram: devspace layout in liveCache versions 7.2 and 7.4. Version 7.2: 1-256 Data-Devspaces, one System-Devspace (holding the Config pages, the Converter, and the Restart record), and 1-32 Archive-Log-Devspaces plus redo information. Version 7.4: Data-Devspaces (the Restart record and the Converter now reside on the Data-Devspaces) and Archive-Log-Devspaces.]
liveCache is based on SAP DB. liveCache version 7.4 works without a System-Devspace. In version 7.2, the System-Devspace contains the Config pages, the Restart record, and the Converter:
The Config pages are no longer needed. They were used for configuration information (for example, names of devspaces) in SAP DB version 6.1.
The Restart record is now stored on the first Data-Devspace.
liveCache stores the Converter on the Data-Devspaces. The Converter pages are distributed across the devspaces, which prevents hotspots on one devspace during savepoints. Savepoints flush all modified data to the devspaces.
After-images of modified objects or SQL records are written to the archive log. Several Archive-Log-Devspaces can be used in parallel. Each User Kernel Thread (UKT) can use its own Archive-Log-Devspace. User tasks inside a UKT do not run in parallel, so archive log writing will not reduce scalability. This is not implemented in the first builds of version 7.4.
liveCache can mirror the Archive-Log-Devspaces. In productive systems the log must be mirrored; use log mode DUAL or mirroring at the operating system / hardware level (for example, RAID 1).
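The point about distributing Converter pages can be illustrated with a small sketch. The actual placement algorithm is internal to liveCache; the round-robin scheme below is only an assumption used to show why spreading the pages avoids a hotspot on a single devspace.

```python
def distribute_converter_pages(page_ids, devspaces):
    """Spread converter pages across the data devspaces (round-robin here,
    as an illustrative assumption) so that a savepoint, which flushes many
    converter pages at once, does not create a hotspot on one devspace."""
    placement = {name: [] for name in devspaces}
    for i, page in enumerate(page_ids):
        placement[devspaces[i % len(devspaces)]].append(page)
    return placement

placement = distribute_converter_pages(list(range(10)), ["DATA1", "DATA2", "DATA3"])
assert placement["DATA1"] == [0, 3, 6, 9]   # no single devspace takes all pages
assert placement["DATA2"] == [1, 4, 7]
```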
liveCache 7.4 Features
liveCache 7.4 is released with SAP APO 3.1
As of December 2003, SAP APO 3.0 will be supported only with SAP liveCache 7.4
liveCache 7.4 supports OMS full logging
No more checkpoints, but savepoints
An improved LC10 is available for SAP APO 3.1 and also for SAP APO 3.0A as of SAP Basis 4.6C SP 30
liveCache Hot Standby is planned for a later 7.4 version
liveCache 7.4 is released with SAP APO 3.1, but is also released backward for SAP APO 3.0A. Details can be found at http://service.sap.com/scm -> mySAP SCM Technology -> Backup & Recovery.
SAP liveCache is now able to log all transactions and changes for both the OMS (Object Management System) and RDBMS data into the log devspaces. It is therefore always able to recover to the most current point in time. Changes that are logged include Demand Planning, SNP time series, and all planning versions. Transactional simulations and simulation versions are not logged.
liveCache 7.4 now supports savepoints; checkpoints no longer exist, so there is no more waiting for checkpoints to complete. Savepoints write to the data devspaces all pages that were modified in the data cache since the last savepoint. They start automatically every few minutes (according to a schedule controlled by the SAP liveCache kernel) and ensure that only a short period of time needs to be restored should the system crash.
The program /SAPAPO/OM_CHECKPOINT_WRITE is obsolete in SAP APO 3.1, and will also be obsolete for SAP APO 3.0A after migration to SAP liveCache 7.4. All other repository objects that were responsible for synchronous liveCache logging are no longer available. This includes:
Report /SAPAPO/OM_LC_LOGAREA_CHECK
Report /SAPAPO/OM_LC_ARCHIVELOGAREA_DELETE
Report /SAPAPO/OM_LC_LOGGING_SET
Tables /SAPAPO/LC_LOG*
Online backup using a checkpoint in transaction OM06
liveCache Backup & Recovery
Backup strategies:
Save Data: full backup of the data
Save Pages: incremental backup of the data
Log Backup: full backup of the log
Restore strategies:
Restore Data
Restore Pages
Restore Log
A backup cycle should be set up to meet recovery needs. The following minimum backup and recovery cycle is therefore recommended:
Daily full online backup of SAP liveCache
Periodic backup of the log
Storage of backups for at least 30 days
Backups can be set up using the DBMGUI or through the SAP GUI using the report RSLVCBACKUP.
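The 30-day retention rule from the cycle above could be expressed as follows. This is a minimal sketch: the dates are invented, and real housekeeping is done by the backup tool, not by hand-written code.

```python
from datetime import date, timedelta

def backups_to_keep(backup_dates, today, retention_days=30):
    """Keep every backup that is at most retention_days old (the course
    recommends storing backups for at least 30 days)."""
    cutoff = today - timedelta(days=retention_days)
    return [d for d in backup_dates if d >= cutoff]

today = date(2003, 6, 30)
backup_dates = [today - timedelta(days=n) for n in range(0, 60, 10)]
kept = backups_to_keep(backup_dates, today)
assert kept == [date(2003, 6, 30), date(2003, 6, 20),
                date(2003, 6, 10), date(2003, 5, 31)]
```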
liveCache Backup: Save Data
[Diagram: a complete backup (Save Data) reads all data devspaces (Data 1 ... Data n) and writes them to the backup medium under one label, DAT_001. The parameter file from /sapdb/data/config/ is included in the backup; the log devspaces (archive log) are not.]
SAVE DATA saves all occupied pages of the data devspaces to the backup medium. The parameter file is also written to the backup. Each backup gets a label, and the Database Manager knows the sequence of the backups. The database kernel writes a protocol of each backup to the file control.knl in the run directory. The protocol is also available in the Database Manager.
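Conceptually, SAVE DATA copies only the occupied pages plus the parameter file, which the following sketch illustrates. The devspace contents, the dictionary layout, and the database name LC1 in the path are all made up for the example.

```python
def save_data(data_devspaces, paramfile_path):
    """SAVE DATA sketch: copy every occupied page of all data devspaces,
    plus the parameter file, to the backup; unoccupied pages (None) are
    skipped. All names and contents are invented for the example."""
    return {
        "paramfile": paramfile_path,
        "pages": {
            name: {no: img for no, img in pages.items() if img is not None}
            for name, pages in data_devspaces.items()
        },
    }

devspaces = {
    "DATA1": {0: "page-0", 1: None, 2: "page-2"},   # page 1 is unoccupied
    "DATA2": {0: "page-0"},
}
backup = save_data(devspaces, "/sapdb/data/config/LC1")  # LC1 is a made-up DB name
assert backup["pages"]["DATA1"] == {0: "page-0", 2: "page-2"}
```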
liveCache Backup: Save Pages
[Diagram: after the full backup DAT_001, incremental backups (Save Pages) get the labels PAG_002 and PAG_003; a subsequent full backup gets DAT_004, followed by incremental backups PAG_005 and PAG_006. The label version increases across both backup types.]
SAVE PAGES saves all pages that have been changed since the last SAVE DATA. The label version is increased with each SAVE DATA and SAVE PAGES.
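The label sequence shown in the diagrams (DAT_001, PAG_002, PAG_003, DAT_004, ...) follows one running version number across both backup types, which can be mimicked like this. This is a toy model of the protocol the Database Manager keeps, not its real implementation.

```python
class BackupHistory:
    """Toy model of the backup protocol (in reality kept by the Database
    Manager in control.knl): one running version number shared by full
    (DAT_*) and incremental (PAG_*) backups."""
    def __init__(self):
        self.version = 0
        self.entries = []

    def record(self, kind):
        """kind is 'DATA' for SAVE DATA or 'PAGES' for SAVE PAGES."""
        self.version += 1
        prefix = "DAT" if kind == "DATA" else "PAG"
        label = f"{prefix}_{self.version:03d}"
        self.entries.append(label)
        return label

hist = BackupHistory()
assert hist.record("DATA") == "DAT_001"
assert hist.record("PAGES") == "PAG_002"
assert hist.record("PAGES") == "PAG_003"
assert hist.record("DATA") == "DAT_004"   # same counter continues, as in the diagram
```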
liveCache Backup: Log Backup
[Diagram: a log backup (Save Log, Autosave Log) reads the occupied pages of the archive log devspaces and writes them to the backup medium under the label LOG_001; the data devspaces are not touched.]
SAVE LOG saves all occupied log pages from the archive log that have not been saved before. Tapes or version files are supported as the backup medium; we recommend saving the log into version files. One version file is created for each log segment, and the version files get a number as extension (for example, SAVE.LOG.0001, SAVE.LOG.0002, ...). The label versions are independent of the labels generated with SAVE DATA and SAVE PAGES.
When the autosave option is activated, SAP liveCache saves log information from the log devspaces to a storage medium as soon as the log devspace fill rate reaches a specific value. This means it is not necessary to constantly monitor the usage level of the log devspace, and a log overflow should not occur.
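The autosave trigger and the version-file numbering can be sketched as follows. The 50% fill threshold is an invented illustration value; the real trigger value is determined by liveCache.

```python
def autosave_needed(used_pages, total_pages, threshold=0.5):
    """With AUTOSAVE LOG active, a log backup starts as soon as the fill
    rate of the log devspace reaches a certain value (0.5 is only an
    illustration; the real value is liveCache-internal)."""
    return used_pages / total_pages >= threshold

def next_version_file(existing_files):
    """Log backups into version files are numbered sequentially:
    SAVE.LOG.0001, SAVE.LOG.0002, ..."""
    return f"SAVE.LOG.{len(existing_files) + 1:04d}"

assert autosave_needed(600, 1000)            # 60% full -> trigger a log backup
assert not autosave_needed(100, 1000)        # 10% full -> keep writing
assert next_version_file(["SAVE.LOG.0001", "SAVE.LOG.0002"]) == "SAVE.LOG.0003"
```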
liveCache Recovery
If there is no data loss when liveCache crashes (that is, all devspaces are available), simply restart the liveCache; automatic recovery occurs when liveCache is restarted.
If there is data loss, find out where the problem is and use the DBMGUI to restore the lost data devspaces from the most recent data/log backup.
Try to avoid initializing the liveCache.
Restore strategies:
Restore Data
Restore Pages
Restore Log
Only initialize the liveCache in an SAP APO system after severe software or hardware errors, or after a failed recovery.
liveCache Restart
[Diagram: timeline with savepoints and a crash. T1 committed before the last savepoint: no action needed. T2 and T3 were open at the last savepoint and still open at crash time: undo (read from the undo file). T4 was open at the last savepoint and committed before the crash: redo (read from the archive log). T5 started and was rolled back after the last savepoint: no action. T6 started and committed entirely after the last savepoint: redo. Use LC10 to start the liveCache.]
Automatic recovery occurs when liveCache is restarted. Always use transaction LC10 to start liveCache.
The restart performs a redo of transactions that were open at the time of the last savepoint and committed by the time of the crash. Transactions that were still open at crash time are only rolled back if they were already open at the time of the last savepoint. The starting point for the redo/undo is the last savepoint; all data written to the data devspaces after the last savepoint is not considered.
In our example:
Transactions 1 and 5 are not relevant for redo/undo. Transaction 1 was committed at the time of the last savepoint, so its modifications were already written to the data devspaces. The modifications of transaction 5 are not in the data area of the last savepoint.
Transactions 2, 3, and 4 were not completed at the time of the last savepoint. liveCache will redo transaction 4 (REDO). Transactions 2 and 3 will be rolled back, beginning at the time of the last savepoint (UNDO).
The restart will completely redo transaction 6; its modifications are not in the data area of the last savepoint (REDO).
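The redo/undo rules for transactions T1 through T6 can be written down as a small classification function. This is a conceptual sketch with invented numeric timestamps, not liveCache code.

```python
def restart_action(start, end, outcome, savepoint, crash):
    """Classify a transaction at restart, following the rules above:
    redo what committed after the last savepoint, undo what was still
    open at the savepoint and never committed; everything fully covered
    by the savepoint needs no work."""
    committed = outcome == "commit" and end is not None and end <= crash
    if committed and end <= savepoint:
        return "none"          # changes already flushed by the savepoint (T1)
    if committed:
        return "redo"          # T4, T6: reapply from the archive log
    if start <= savepoint:
        return "undo"          # T2, T3: roll back via the undo file
    return "none"              # T5: started after the savepoint, never committed

SP, CRASH = 100, 200
assert restart_action(10, 50, "commit", SP, CRASH) == "none"     # T1
assert restart_action(80, None, "open", SP, CRASH) == "undo"     # T2
assert restart_action(90, None, "open", SP, CRASH) == "undo"     # T3
assert restart_action(95, 150, "commit", SP, CRASH) == "redo"    # T4
assert restart_action(110, 160, "rollback", SP, CRASH) == "none" # T5
assert restart_action(120, 180, "commit", SP, CRASH) == "redo"   # T6
```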
Preparing for liveCache Recovery
Ask APO users to log off
Prevent users from logging on to APO using report /SAPAPO/OM_LOCKUSER
Cancel all programs scheduled to run in the background
Preventing users from logging on and cancelling scheduled programs can be done using transaction /SAPAPO/OM17 as of SAP APO 3.0A SP17.
Restoring liveCache
[Diagram: restore sequence. First the full backup DAT_004 is restored, then an incremental backup (PAG_005 or PAG_006), then the log backups LOG_010 and LOG_011 from version files (log.010, log.011) into the data devspaces (Data 1 ... Data n); finally a restart reapplies the remaining entries in the archive log.]
During RESTORE DATA, pages are written back to the devspaces. RESTORE PAGES checks the position of the data pages in the converter and overwrites the pages in the devspaces with the modified images. After the last RESTORE DATA/PAGES, liveCache immediately performs a restart if the log entries belonging to the savepoint still exist in the archive log; the restart reapplies the log entries. RESTORE LOG is run if the savepoint belonging to the SAVE DATA/PAGES was already overwritten in the archive log. liveCache then reads the log entries from the backup medium until it finds the next entry in the log.
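The restore order shown on the slide (full backup, then the newest incremental, then the log backups) can be derived from a backup history like this. This is a sketch over the label list only; real restores are driven by the Database Manager.

```python
def restore_sequence(history):
    """Pick the most recent full backup (DAT_*), the newest incremental
    taken after it (PAG_* backups are cumulative since the last SAVE DATA,
    so only the latest one is needed), and all newer log backups (LOG_*)."""
    last_full = max(i for i, lbl in enumerate(history) if lbl.startswith("DAT"))
    pags = [lbl for lbl in history[last_full + 1:] if lbl.startswith("PAG")]
    steps = [history[last_full]] + pags[-1:]
    steps += [lbl for lbl in history[last_full + 1:] if lbl.startswith("LOG")]
    return steps

history = ["DAT_001", "PAG_002", "LOG_003", "DAT_004", "PAG_005", "PAG_006", "LOG_007"]
assert restore_sequence(history) == ["DAT_004", "PAG_006", "LOG_007"]
```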
After liveCache Recovery
Activate the CIF queues using SMQ1/SMQ2 if they were stopped before the recovery
Unlock users using report /SAPAPO/OM_LOCKUSER and allow them to log back on to the APO system
Reschedule all the cancelled background jobs
Check internal data consistency using /SAPAPO/OM17 and external consistency using /SAPAPO/CIF_DELTAREPORT3
The program /SAPAPO/OM_LC_RECOVERY will restart the liveCache and transfer data from the current active log area into the liveCache. If the archive log is activated and is required for the recovery, the program prompts the user for a decision. CIF queues are also activated. When the liveCache recovery is completed, the user is notified. The system administrator should also check the following:
Internal data consistency, using transaction /SAPAPO/OM17
The job log, for possible errors
The application log
Carry out general testing to ensure the system is functioning properly
Reschedule the checkpoint procedure
liveCache/Internal Consistency Check (/SAPAPO/OM17)
Use transaction /SAPAPO/OM17 (liveCache/internal consistency check) to check whether data is consistent between selected objects in the SAP APO database and SAP liveCache.
The liveCache consistency check should be carried out in the following situations:
If a recovery is incomplete after a system crash
If you suspect inconsistencies
Review the result overview when the check is completed and resolve any inconsistencies
SAP AG 2003
Perform the liveCache consistency check (internal consistency check) regularly. Depending on the object that you want to check, you have to meet the following prerequisites:
Ensure that no processes are active in the system during the check
Check other dependent objects, if necessary, before executing the liveCache consistency check
It is more efficient to start the consistency check in the background so that you can execute several object checks in parallel and reduce the runtime. As soon as the background job has finished, you automatically receive an email containing the results of the check. If there are any inconsistencies, you can start automatic correction for all objects or just for selected objects from the result overview.
After the liveCache consistency check, make sure you re-release the system by restarting the CIF queues and releasing any background jobs again. To restart the CIF queue, use the Start CIF Queue pushbutton to go to the RSTRFCQ3 program. If you are not using CIF inbound queues, you have to restart the data transfer in the source system. To do this, use transaction SMQ1 and program RSTRFCQ3. This should be a joint effort between the application consultants and the technical consultants or administrators.