About Me

I am Suresh Chinta, an SAP HANA Cloud and SAP BTP / AWS / Azure cloud consultant. I have experience in SAP Basis/NetWeaver and S/4HANA Cloud implementations and support, and I am a certified Microsoft Azure and AWS professional. I started this blog to share my knowledge with everyone interested in learning and enhancing their career.

Tuesday, March 24, 2020

How to resolve java dispatcher startup issues

How do you resolve Java dispatcher startup issues?
What checks need to be done if the Java dispatcher does not start during Java startup?
What actions need to be performed to fix Java dispatcher startup issues?

-----------------------------------------------------------------------------------------
If the Java dispatcher doesn't start during Java startup, check the std_dispatcher log in the Java work directory: go to /usr/sap/<SID>/<instance>/work and look for error messages. If you find the error message below, the cause could be a conflict between the values maintained for the thread manager.

Loading: ThreadManager returned false! Kernel not loaded. System halted.

To resolve this issue, please proceed as follows:
  •  Log in to the config tool
  •  Navigate to config tool -> dispatcher -> managers -> ThreadManager
  •  Verify the values maintained for MinThreadCount, InitialThreadCount and MaxThreadCount
  •  Make sure that MinThreadCount <= InitialThreadCount <= MaxThreadCount
  •  If you are not sure which values to set, maintain the default values and check whether the dispatcher starts; an illustrative set of values follows below
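For illustration only, here is a consistent set of values that satisfies the rule above. The numbers are placeholders, not release defaults; check the SAP documentation for your release before changing them:

MinThreadCount     = 50
InitialThreadCount = 100
MaxThreadCount     = 300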


If the issue is not resolved even after performing the above steps, perform the additional checks below and retry starting the system:

  • In the config tool, navigate to Cluster-data --> Instance_IDxx --> dispatcher_IDxx --> managers --> ConfigurationManager
  • In the local properties section, make sure the path maintained for rdbms.driverLocation points to an existing sqljdbc.jar (a quick check from the OS is shown below)
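A minimal check from the operating system; the path here is a placeholder, so use the value actually maintained for rdbms.driverLocation:

ls -l /usr/sap/<SID>/<instance>/j2ee/jdbc/sqljdbc.jar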

START SUM TOOL ON LINUX

STARTING SUM TOOL ON LINUX SYSTEM
The starting procedure of the SUM tool differs between Windows and UNIX/Linux.
On Windows we can start the tool directly by running the batch file (SUMSTART.BAT).
On Linux you cannot start the SUM tool directly.

Software Required:
SAPHOSTAGENT.SAR
SUM TOOL
Extract both files using SAPCAR:
./SAPCAR -xvf <path to SAR file> -R <target directory to extract to>
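For example (the media paths and archive names below are placeholders; substitute your actual download directory and file names):

./SAPCAR -xvf /sapmedia/SAPHOSTAGENT.SAR -R /sapmedia/hostagent
./SAPCAR -xvf /sapmedia/SUM.SAR -R /usr/sap/<SID>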

INSTALLATION OF SAP HOSTAGENT
Go to the path where SAPHOSTAGENT was extracted.
Run the following command as root:
./saphostexec -install
The host agent will be installed.
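To verify the installation, you can ask the installed host agent for its version (by default the host agent is installed under /usr/sap/hostctrl):

/usr/sap/hostctrl/exe/saphostexec -version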

EXTRACTING AND GIVING PERMISSIONS TO SUM
After extracting the SUM tool SAR file, give the required permissions and ownership to the SUM folder.
We have to change the ownership of the SUM folder from root to <sid>adm.

Use the following command:

chown -R <sid>adm:sapsys <SUM directory>

Changing the permissions of the SUM folder

Use the following command:

chmod -R 755 <SUM directory>
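To confirm that the new ownership and permissions took effect:

ls -ld <SUM directory>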

STARTING SUM TOOL

The SUM GUI cannot be started directly on Linux; it is accessed through a browser once the services are running.

From ROOT USER

Go to the SUM Folder

Run the following command:

./SUMSTART confighostagent <SID>

It will start the services.
Now switch from the root user to the <sid>adm user:
su - <sid>adm
Run the following command:
./SUMSTART
Now open the browser on a Windows system.

Enter the following URL:

https://<hostname>:1129/lmsl/sumabap/<SID>/doc/sluigui

hostname: the hostname of the Linux server where the SUM tool is running.


It prompts you to enter the username and password:
USER: <sid>adm
PWD: ******
The SUM tool is now started.

Friday, March 20, 2020

R3trans, tp and transports – how are they all connected with SP/SAINT/SUM


R3trans, tp and transports – how are they all connected.
It is important that a transport administrator or a Basis administrator understands these concepts.
tp: tp is a utility for controlling transports. R3trans is usually called by tp, in particular in the following sequence:
  1. R3trans – imports the DDIC objects into the database
  2. sapevt – an SAP event is triggered and RDD jobs are scheduled, which activate the DDIC objects (SAP work processes are allocated)
  3. R3trans – the main import takes place
  4. sapevt – XPRAs are executed (SAP work processes are allocated)
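Because R3trans connects to the database on its own, a quick standalone connectivity check is often useful before troubleshooting imports. A minimal sketch, run as <sid>adm (R3trans writes its log to trans.log in the current directory):

R3trans -d
echo $?    # 0 = connection OK, 4 = warnings, 8 or higher = errors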
When we assign parallel R3trans processes to support packages, SAINT installations or SUM upgrades, how are the R3trans processes allocated?
R3trans is an operating system process, so when we assign parallel processes, that many operating system processes will be performing the transports.
Only while activating the DDIC objects and executing the XPRAs do these operating system processes call SAP work processes.
For example, let us assume that while applying support packages we have allocated 16 parallel R3trans processes.
Counting the R3trans processes at operating system level should then show 18 (16 workers + the parent process + the grep process itself):
$> ps -ef | grep R3trans | wc -l
18
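A common shell idiom excludes the grep process itself from the count, so only the R3trans processes are counted:

$> ps -ef | grep '[R]3trans' | wc -l
17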
Note: We cannot assign as many parallel processes as we wish; too many will saturate the CPU and drive I/O consumption to its maximum.

Thursday, March 19, 2020

SUM Upgrades Configuration : Tuning/Process Counts

Configuration Screen of SUM : Parameters for procedure.
OK, so the problem was the process count to use for our updates and upgrades. I turned to many fellows in my circle; all of them had calculations of a random and arbitrary nature. I was not convinced. We had to have something concrete for each and every process type which could be used for direct calculations. I tried multiple runs on a sandbox, referred to many notes, initiated an OSS incident, and also turned to in-house experts on the database and SAP.
The final results provided the reason for the random, arbitrary nature of the view taken by my colleagues. You can't have something conclusive like (number of CPUs x 1.3 = R3trans processes to use), although a lot of industry veterans do so. What one can do is fall into the 'thought process' of researching, tuning, observing, and testing.
One of the things I found myself in great need of, but missing, was a good SCN blog on the topic. There were tidbits here and there, but hardly any good guidance.
The reason I initiate this blog and discussion is just that: to get thoughts from any and all, so the end page is an ever-evolving starting point for everyone at the above screen of SUM for their respective SP/EHP/release upgrade.
Let's discuss, process by process, the thought process I used:
1. ABAP Processes :
Pretty straightforward. Configure according to the BGD (background) processes available in the main system. Make sure enough are left for the normal jobs users have to run. For downtime, you can use the maximum available. As per the SUM guide, the returns stagnate after a value of 8. So below is what I used for a system with 10 BGD processes available:
UPTIME : 6
DOWNTIME : 10
I could have increased the BGD count in the system, but since values above 8 should not have had much impact, the above counts seemed optimal to me. (A quick way to check the configured BGD count is shown below.)
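To see how many background work processes an instance is configured with, you can read the instance profile from the OS. A sketch; the profile path and instance name are placeholders for your system's values:

grep rdisp/wp_no_btc /sapmnt/<SID>/profile/<SID>_DVEBMGS00_<hostname>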
2. SQL Processes :
This part looks simple, but it was the trickiest for me. Sizing it appropriately can do good for the DBCLONE and PARCONV_UPG phases. But size it too large and you may experience frequent deadlocks in various phases, logging-space-full errors, a stuck archiver, or severely impacted performance.
The problem in my case, when using nZDM with a very high SQL count, was "Transaction Log is full" – the DB2 database running out of logging space. If you are working with a database like DB2 – where the "active logging space" is constrained by DB parameters – make sure to size this process count small. Too many parallel SQL statements and the logging space will fill up quickly, resulting in the aforementioned error, which can only be bypassed by decreasing the count. To unthrottle, increase the logging space or the primary/secondary logs. Also, the log archiving has to be fast, with plenty of buffer space in the archive directory.
As for the count: if one can take care of the logging space and log archives, the next step is CPU. Different databases may differ slightly when executing SQL in parallel, but the core concept remains the same: more CPUs help. Once you have a number, like 8 cores in my example, you next need to finalize the degree of parallelism (DOP – an Oracle term) – the number of parallel threads each CPU will be executing. For example, if 16 SQL processes had been used in my case, 2 threads would be executing per CPU – a choice I didn't take, as I wanted minimal impact on the productive operation of the system during the uptime phases.
Referring to the standard documentation of the Oracle and DB2 databases, what I noticed was that the default and recommended DOP is 1-2 times the number of online CPUs. Also, the returns stagnate after an increase to a particular number, beyond which the negative effects (performance deterioration) grow as usual while the returns are minimal.
After increasing the logging space, taking enough archiving space directory, following is the number I used for 8 CPUs.
UPTIME : 8 (DOP=1)
DOWNTIME : 12 (DOP = 1.5); will make this 16 in the next system.
DBCLONE was done in a couple of hours with the above – good for me. (A quick way to inspect the DB2 logging configuration is sketched below.)
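If you are on DB2 and worried about the active logging space discussed above, the relevant database configuration can be checked as the DB2 instance owner. A sketch; <SID> stands for the database name:

db2 get db cfg for <SID> | grep -iE 'LOGPRIMARY|LOGSECOND|LOGFILSIZ'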
3. R3trans Processes :
So, the big one now. This process count has the biggest impact. TABIM_UPG, SHADOW_IMPORTS, DDIC_UPG – the phases with the biggest contribution to runtime/downtime – go faster or slower based on how well this is tuned. The KBA below is the first step to understanding how tp forks these processes during imports. There is a parameter "Mainimp_Proc" which is used in the backend to control the number of packages imported in parallel, and the KBA below explains just that – the entire concept.
1616401 – Understanding parallelism during the Upgrades, EhPs and Support Packages implementations
1945399 – performance analysis for SHADOW_IMPORT_INC and TABIM_UPG phase
Now, how to tune it. This was one of the most confusing ones. There are notes which say to keep it equal to the number of CPUs (refer to the above notes – they say this). The SUM guide seems to love the value of 8 (a value larger than 8 does not usually decrease the runtime). You also have to keep the memory in mind: 512 MB of RAM per R3trans process seems a good guideline (a quick sanity check for it is sketched after the counts below). The end result for me was the same process count as the SQL processes:
UPTIME : 8
DOWNTIME : 12
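Purely as a sanity check for the 512 MB rule of thumb, you can see how many R3trans processes would fit into the memory currently available on the host. A sketch, assuming a procps version of free that reports an "available" column:

free -m | awk '/^Mem:/ {print int($7/512), "R3trans processes fit in available memory"}'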
One other thing still left unexplored, but next on my radar, is playing with "Mainimp_Proc". The link below talks about changing it using the parameter file TABIMUPG.TPP. Since this controls the number of tp processes, tuning it should be done after results from one system; the readings in the logs can help here.
http://wiki.scn.sap.com/wiki/display/ERP6/Performance+during+upgrade+phase+TABIMUPG
4. R3load Processes :
For an EHP update/SPS update, I don't think this plays any part. From what I understood, this is relevant mainly to the release upgrade. Anyway, this one was a bummer. I didn't find any helpful documentation on R3load relevant specifically to upgrades. However, communicating with SAP over an OSS incident, the guideline below was received and used:
"There is no direct way to determine the optimal number of processes. A rule of thumb though is to use 3 times the number of available CPUs." The counts I used:
UPTIME : 12
DOWNTIME : 24
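Taking the rules of thumb quoted in this post literally, a trivial sketch computes starting points from the online CPU count (these are starting values to observe and tune, not a sizing formula):

CPUS=$(nproc)
echo "SQL / R3trans starting point: ${CPUS}"
echo "R3load starting point: $((CPUS * 3))"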
But anyone from the community is welcome to answer and increase my understanding: which phases use this in upgrades, if any?
5. Parallel Phases :
Another one of a random nature, with scarce details. This one is about the number of SUM sub-phases which SAPup can be allowed to execute in parallel. Again, I had to refer to SAP via an OSS incident for this.
"The phases that can run in parallel will be dependent on the upgrade/update that you will be performing and there is no set way to calculate what the optimum number would be." The recommendation was to use the default, and that is what I did.
UPTIME : Leave default (Default for “Standard” mode – 3, Default for “Advanced” mode – 6)
DOWNTIME : Leave default (Default for “Standard” mode – 3, Default for “Advanced” mode – 6)

SUM Tool Phases Explained

Extraction
CHECK4NOTES_TOOL phase: SUM asks to implement some necessary SAP Notes
CHECKPROF_INI phase: SUM checks the system profiles for problems
CHECKSYSSTATUS phase: SUM reads the profiles and checks the state of the running instances
DBCHK_PRE phase: SUM determines the database version and SAP release
DBCONNCHK_INI phase: SUM tests if new tools can connect to the database
DBQUERY_PRE phase: SUM checks the database state and asks database dependent questions
DETMAINMODE phase: SUM checks the stack xml file and decides about the main program mode
EXTRACTKRN_PRE phase: SUM tests a kernel DVD to install SUM in /usr/sap/<dir>/SUM/abap/exe
INITPUT_PRE phase: SUM reads profiles and initializes knowledge about the system
INSTANCELIST_PRE phase: SUM will gather information about the instances of the system
JOB_RSUPDTEC phase: In this phase a batch job RSUPDTEC is started. This job resolves inconsistencies in the TABART-TABSPACE mapping. Log files: PSUPDTEC.LOG, PSUPDTEC.ELG
KX_CPYORG phase: SUM copies the original kernel to $(PUTPATH)/exe
PROFREAD phase: SUM reads the profiles and prompts for required passwords
READCVERS_DUMP phase: SUM reads the CVERS table content
READDATA_EXP phase: SUM tests a kernel DVD to install SUM in /usr/sap/<dir>/SUM/abap/exe
SCAN_DOWNLOADDIR phase: SUM scans the download directory and extracts the packages
TOOLCHECKXML_INI phase: SUM determines and checks the tool versions in SYS (the active SAP kernel directory)
TOOLIMPD phase: SUM prepares the ABAP dictionary for importing upgrade tools on the standard instance
TOOLVERSXML_UNI phase: SUM checks the tool versions if the system is UNICODE
VALCHK_INI phase: SUM checks if the source and target system is valid for update
VERSCHK_INI phase: SUM checks whether the SAP system release is supported for this update
Configuration
ADDON_QCALC phase: SUM calculates the queue for selected add-ons
ADJUSTPRP phase: This phase prepares adjustment calculation: Imports command file flagged in other system, if necessary
EHP_INCLUSION phase: SUM calculates the Enhancement Package (EHP) included into the stack xml file
INITSUBST phase: In this phase you can configure the SUM tool parameters which influence the update runtime and resource requirements
IS_SELECT phase: During the IS_SELECT phase, you will have to decide how the installed and delivered add-ons are to be treated during the upgrade
LANG_SELECT phase: SUM determines the languages to be handled during the update
LIST_LOAD phase: SUM retrieves information regarding the tables in the database
SHDINST_DB_PREP phase: SUM checks database-specific settings
SHDINST_OS phase: SUM performs operating system-specific actions
Checks
ACTREF_CHK phase: This phase checks whether activation errors might occur during the installation. Log file: <SUM_DIR>/log/SAPupConsole.log
BATCHCHK_GEN phase: SUM tests whether background server can access the upgrade directory
CHECK4NOTES_PACKAGES phase: SUM asks to implement some necessary SAP Notes
ENVFILES_CHECK phase: This phase checks the environment of <sid>adm and whether the profiles of user <sid>adm can be modified
FREECHK phase: This phase checks free space in the file system
JOB_RSAUCHK_DUP phase: This phase checks for double F rules in the table RSUPDINFO
JOB_RSUPGRCHECK_PRE phase: This phase checks consistency of generated repository
LIST_LOAD_SPC phase: In this phase SUM retrieves information regarding the tables in the database
RUN_CHECK_SLT_TRIGGER_PRE phase: SUM will check the switch tables for existing triggers
SPACECHK_INI or SPACECHK_OPT phase: checks the database free space
TABSPC_PREP phase: calculates which tables are part of import into old tables
TR_GET_SPCREQ_ADD phase: calculates the amount of data in add-on and language requests
TR_GET_SPCREQ_IMP phase: calculates the amount of data from the upgrade requests
Pre-processing
ACT_UPG phase and SPDD:
BATCHCHK phase: SUM tests whether the background server can access the upgrade directory. To do this, the background job RDDIT008 is started on the specified background server.
This job writes a test log in the tmp subdirectory of the abap subdirectory of the update directory.
DBCLONE phase: In this phase the target system (shadow system) is copied from the current system. The Software Update Manager starts a number of background jobs to create copies of the tables required for operating the shadow system.
Depending on your system and your hardware, this operation can take several hours. These background jobs are executed on the background server that is configured with the BATCH HOST parameter. The number of jobs is determined by the parameter MAX BATCH PROCESSES. It cannot be higher than nine or the number of background processes configured on the background server.
Each table is copied with a single INSERT statement. Therefore, the administrator has to take care that the undo logs can grow accordingly.
DOWNCONF_DTTRANS phase: SUM executes checks and asks all the questions necessary for entering the downtime
EU_CLONE_CRE_SHDVIEWS phase: In this phase SUM creates views on the shadow system
EU_CLONE_MIG_UT_RUN phase: In this phase, entries of the *UT* group tables are exported from the shadow repository and imported into HANA in parallel. R3load pairs do the export and import: the first R3load (part of the shadow kernel) exports the data, and the second R3load (part of the target kernel) imports the data into the SAP HANA DB.
EU_IMPORT1 or EU_IMPORT2 phases: In these phases, (parallel) R3load processes are started to import data from the DVDs into the database.
If you chose the preconfiguration mode single system, these phases run during downtime.
Log file: SAPup.log
<update directory>/log/
EU_IMP1.ELG
EU_IMNDB.*
EX00000x.DPR
ICNVINIT phase: SUM checks the volumes of the ICNV candidates and initializes successful candidates
ICNVREQ phase: SUM prompts you to start transaction ICNV if there are candidates for the ICNV, but ICNV has not yet been started
JOB_RSVBCHCK2 phase: If there are any outstanding or incomplete updates, the update stops in phases JOB_RSVBCHCK2 (in Preprocessing roadmap) or in JOB_RSVBCHCK_D (Execution roadmap) with a message
PARDIST_SHD phase: SUM starts the distributor on the shadow instance in parallel jobs
PARMVNT_SHD phase: SUM activates nametab entries
REPACHK_CLONE phase: This step is relevant if you perform an enhancement package installation or an SPS update.
If you have chosen preconfiguration mode standard or advanced, the Software Update Manager asks you in this phase to confirm the locking of the ABAP Workbench on all SAP instances. In contrast to the release upgrade, the Software Update Manager requires the lock in this phase only.
This lock prevents development objects (for example, ABAP reports, table definitions, and so on) from being changed during the update since these modifications would be lost.
You can continue to use your SAP system in production operation, even if you confirm that the ABAP Workbench can be locked. However, after you have confirmed the ABAP Workbench lock, no more transports can be made into or out of the SAP system. Some further actions might be blocked that either check for this lock as well or for the running update. This is especially known in the area of Business Intelligence and SAP Solution Manager.
This phase displays all the repairs that are still in open transport requests. They are also written to the REPACHK2.LOG file.
Release these transport requests so that you can continue; otherwise, the objects contained in these repairs remain locked. Note that SUM also checks for inactive development objects in this phase.
REPACHK2 phase: This phase is relevant in release upgrades
This phase displays all the repairs and corrections that are not released and writes them to the REPACHK2.LOG file.
Before you continue, you have to release and confirm all the open repairs; otherwise, the objects in them are locked.
RUN_CHECK_SLT_TRIGGER_DTTRANS phase: SUM will check before downtime the switch tables for existing triggers
RUN_FDC_STRUCT_ANALYZER: SUM analyses structure changes to prepare Fast Data Copy
RUN_FDC4UPG_PREPROC phase:
RUN_FDCT_TRANSFER phase: This phase is executed by SUM in FDC scenario.
Fast Data Copy (FDC) is a table copy procedure that makes use of (partly database-specific) optimizations to get maximum copy performance. In SUM with the nZDM option switched on it is used to reduce the time needed for copying tables into the shadow.
RUN_RDDIT006 phase: In this phase SUM runs report RDDIT006. The report determines deviations of the current system from the future standard SAP system (objects and modifications that need to be copied)
RUN_RSDROPCDSBAS phase: SUM deletes CDS views
RUN_RSGEN phase: SUM triggers the generation of ABAP loads (as with transaction SGEN)
RUN_RSPTBFIL_TRINIT phase: SUM runs report RSPTBFIL to generate trigger names and to create transports
RUN_RSUMOD10_SPAU_SHD phase: In this phase SUM runs report RSUMOD10 for SPAU preparation on the shadow instance
SCEXEC_ALIAS phase: SUM creates aliases/views/synonyms
SCEXEC_GRANT phase: SUM creates grants on the shadow tables
SHADOW_IMPORT_INC phase: The shadow import phases run while the shadow instance exists. In these phases, data is imported into the shadow tables and into new tables.
The shadow import is a feature aimed at reducing the application downtime caused by the import of the support packages. The idea behind this feature is to import, activate and convert all objects belonging to the support packages into a shadow repository and, at the end of all phases, switch to the new coding. The import of these packages is performed via the transport tools tp and R3trans.
SHADOW_IMPORT* phases: data is imported into the shadow tables and into new tables
SHDUNINST_DB phase: SUM deletes the shadow schema DB user
::SPDD in upgrade::
START_SHDI_FIRST phase: SUM starts the shadow instance the first time
START_SHDI_PREPUT phase: SUM starts the shadow instance the first time
START_SHDI_SHD2 phase: In this phase SUM starts the shadow instance the second time to execute further actions as replicating changes or running the SGEN on shadow instance.
STOP_SHDI_RES phase: SUM stops the shadow instance if the SUM reset was selected
TABIM_POST_SHD phase: During the TABIM (table import) phase, additional data is loaded into tables that belong to the SAP name range. All SAP table classes S, W, E, C and G are affected by row insertions, modifications and deletions – except for class A, which is protected against row modifications and where only insertions are allowed.
Execution
ACT_TRANS phase and SPDD:
REMARK: If you selected the downtime-minimized update strategy, the activation is executed in the ACT_UPG phase
EU_CLONE_MIG_DT_RUN phase: SUM migrates data from the source database to the target HANA database
JOB_RSVBCHCK_D phase: If there are any outstanding or incomplete updates, the update stops in phases JOB_RSVBCHCK2 (in Preprocessing roadmap) or in JOB_RSVBCHCK_D (Execution roadmap) with a message.
KX_SWITCH phase: SUM installs the standard instance target release kernel
MODPROF_TRANS or MODPROFP_UPG phase: SUM will modify the system profiles for the upgrade
MVNTAB_UPG phase: SUM converts application views and activates the remaining nametab entries
PARCONV_UPG or PARCONV_UPG_DS or PARCONV_TRANS phase: In this phase, the application tables are adjusted to the structure of the target release. Here, several conversion program processes (in the SAP system) and tp processes run simultaneously.
PARDISTPRE_TRANS phase: This phase is executed by SUM if Resource-Minimized Update strategy – Single system preconfiguration mode was selected. SUM starts the distributor in parallel jobs.
PARMVNT_TRANS phase: In this phase SUM activates nametab entries
PARMVNT_XCNV phase: In this phase SUM activates nametab entries for external conversions
RUN_CRR_LAST phase: SUM performs the final data transfer of the change recording framework.
RUN_RENAME_KONV_AVOID_CONVERS phase: In this phase SUM executes steps to avoid conversion of table “KONV”
RUN_RUTCNVFUNCCRE phase: SUM runs job RUTCNVFUNCCRE which creates necessary DB-functions for DDIC SQL-views
RUN_RUTDDLSCREATE phase: SUM runs job RUTDDLSCREATE which creates CDS Views after the migration
RUN_SYSTEM_SHUTDOWN phase: SUM runs BW-related checks before entering downtime
::SPDD in upgrade if Single system mode was selected::
SQLRUNTASK_DROP_CDSBASVIEWS phase: Drop CDS-views on switch tables
START_SHDI_SHD3 phase: SUM starts the shadow system (third time)
STARTSAP_NBAS phase: SUM starts the standard instance after kernel switch with the target kernel
STARTSAP_PUPG or STARTSAP_TBUPG phase: SUM starts the standard instance for postprocessing
STARTSAP_TRANS phase: SUM starts the standard instance before kernel switch with the source release kernel
TABIM_POST_CLONE phase:
TABIM_POST_UPG phase:
TABIM_TRANS phase: The TABIM_TRANS phase is executed if the Single system preconfiguration mode was selected
TABIM_UPG phase:
XPRAS_TRANS, XPRAS_UPG and XPRAS_AIMMRG phases: an XPRA ("report after put") is an ABAP program or function module that is executed during a transport request import sequence, during the import of a support package, and during upgrades/enhancement package installations.
The reports that are executed during XPRA are application-related conversions, adjustments, data mergers or alignments for the conversion of release-specific SAP-shipped customizing that need to be adjusted during the upgrade. The runtime of the XPRA phase differs from one application component to another – however XPRAs are used by all SAP modules and applications.
The XPRA phase reports are run as one of the last steps of the upgrade. However, due to the fact that table content is adjusted, the XPRA phase needs to be done at application downtime. XPRA can also run as part of the activation of extension sets and IS add-ons.
All XPRAs are executed by a system job named RDDEXECL.
Post processing
RUN_RSREGENLOD phase: executed if, in the Configuration roadmap step (phase INITSUBST), the Advanced preconfiguration mode with expert option 03 ("Generate ABAP loads on shadow system during uptime and start asynchronously in post downtime") was selected
RUN_RSUPG_ISU_CRR_CLEAN phase: SUM runs cleanup of change recording framework
TOOLIMP_DELETE_ZDM_CRR phase: SUM deletes ZDM, CRR and internal tools
ISSUE 1: Hanging situation in the last roadmap step in SUM
Symptom
The upgrade executed by the SUM tool has finished the postprocessing phase, and it runs for a long time in the last roadmap step
Error examples:
Checking the logs in <DIR_SUM>\sdt\log, the trace file server.err contains an error similar to:
Exception in thread “ProcessWorker” java.lang.OutOfMemoryError: GC overhead limit exceeded at java.util.HashMap.keySet(HashMap.java:869)
Root cause
Insufficient memory for the SUM SL controller java process
Solution
Stop the SUM GUI and server (SUM menu - Stop update), or kill the running Sdtserver command prompt/process
Make a backup copy from the batch file <DIR_SUM>\sdt\exe\DSUService.bat
Edit the file and increase the parameter value
set JAVA_OPTS=-Xmx1024m
to
set JAVA_OPTS=-Xmx2048m
Restart the SUM tool
Repeat the phase.
Related SAP notes/KBAs
SAP KBA 1768708 – Error “java.lang.OutOfMemoryError: Java heap space” during the export
ISSUE 2: Old exchange tablespace is not empty after update
Overview
The standard tablespace layout is described in SAP Note 541542 and in the online help, SAP Naming Conventions for Tablespaces and Data Files.
After the update, empty tablespaces can be deleted as a follow-up activity (follow the SUM guide, chapter Oracle/DB2: Deleting Tablespaces).
In some cases the old exchange tablespace is not empty. In such cases, follow the troubleshooting, symptoms and solutions below.
Troubleshooting
Please follow these SAP notes/KBAs:
SAP note 1805195 – Handling and troubleshooting of tablespaces during Upgrades, EhPs and SPs updates
SAP note 1715052 – Tablespace cannot be deleted after upgrade
SAP note 1848753 – SUM 1.0: exchange tablespace (e.g. PSAPSR3702X) handling during release upgrade/update
FAQ
CHK_POSTUP phase logfile LONGPOST.LOG:
You can solve some of the problems that occur during an update after you have completed the update. This type of problem is indicated by a P in the second column of the .ELG logs.
You can find a complete list of these P messages in the LONGPOST.LOG file, which is written in the CHK_POSTUP phase.
You usually have to remove the cause of these problems before you start using your SAP applications again.
Do I need to uninstall additional application server instances before update?
Uninstalling additional application server instances is only requested in exceptional cases. The SUM guide describes these specific cases:
For a release upgrade only, and for heterogeneous systems – that is, systems which have different operating systems on the primary and additional application server instances – you have to uninstall the additional application server instances before you start the upgrade procedure.
For a release upgrade only, if you are upgrading from a source release based on SAP NetWeaver 2004 and you have additional application server instances, back them up in case a system restore is required and then uninstall them.
To uninstall the additional application server instances, proceed as described in the installation guide for your source release. If you want to use the profiles of the additional application server instances to adapt the profiles of the target system, save them before you uninstall the additional application server instances.
Can I apply higher SP to only some of the components on my NetWeaver system?
Software Update Manager allows you to update the whole system to a higher support package stack using a stack XML file generated in SAP Solution Manager's Maintenance Optimizer (MOPZ). You can also update one or more software components to a higher patch (not support package) level, using transaction SPAM for the ABAP stack and the "manually prepared directory" option for the Java stack (see SAP note 1641062 for more information).
Why are DDIC objects (tables, domains, etc.) displayed in SPAU?
All DDIC objects which were not adjusted in SPDD will be displayed in SPAU. Please make sure that all DDIC objects are adjusted in the SPDD phase to avoid data loss.
Note
Any time during the update procedure, you can increase or decrease the limits that have been specified for the different types of parallel processes.
For some phases, these changes will have an immediate effect.
For example, changing the values for R3trans processes during downtime will influence the phase TABIM_UPG immediately.
For other phases, you have to carry out the change before the corresponding phase is running. For example, the values for parallel background processes during uptime have to be set before the profiles for the shadow system are created in order to take effect.
In the browser, enter the following internet address: http://<hostname>:1128/lmsl/sumabap/<SID>/set/procpar
Access via Command Line Interface
To access the command line interface in scroll mode (for example, using telnet), enter the following commands:
cd <update directory>/abap/bin
SAPup set procpar gt=scroll


Friday, December 7, 2018

SAP HANA Administration – Topics

SAP HANA Introduction
  • SAP HANA – A short Introduction
  • SAP HANA Information sources
  • Revision strategy of SAP HANA
Preparing Installation
  • Sizing of the SAP HANA
  • Requirements
Installation
  • Introduction SAP HANA Lifecycle Management Tools
  • Advanced installation options
  • SAP HANA Studio installation
  • SHINE – SAP HANA Interactive Education
  • Performing a Distributed System Installation
Post Installation
  • Post-Installation Steps
  • Updating SAP HANA
Architecture and Scenarios
  • SAP HANA Memory Management and Data Persistence
  • Software Packaging
  • SAP HANA Roadmap and Scenarios
  • Deployment Options
Admin Tools for SAP HANA
  • SAP HANA studio for administrator
  • DBA Cockpit
  • HDBSQL command line tool
Operate SAP HANA
  • Starting and stopping SAP HANA
  • Configuring SAP HANA
  • Periodic manual tasks
  • Transporting changes
Backup and recovery
  • Concepts of Backup and Recovery
  • Data Area backup
  • Log Area backup
  • Additional backup topics
  • Recovery & Database copy
  • Backup and Recovery using storage snapshot
Monitoring and Troubleshooting
  • Configuring Traces
  • Working with Diagnosis Information and Diagnosis Files
  • SQL Console & Query Analysis
  • Remote Support
Maintaining Users and Authorizations
  • User Management
  • Types of Privileges
  • Roles
  • Administrative tasks
  • Information Sources for Administrators
  • SAP HANA Live Authorization Assistant
High Availability and Disaster Recovery
  • High Availability
  • SAP HANA Scale Out
Multitenant Database Containers
  • Administration of Multitenant Database Containers
  • Backup and Recovery of Multitenant Database Containers
Data Provisioning
  • Configure data replication with SAP Landscape Transformation (SLT)
SAP BW HANA Migration
  • SAP Certification Guidance

Tuesday, December 4, 2018

Delta Merge in SAP HANA


A delta merge is an operation that moves data from the write-optimized delta memory to the read-optimized and compressed main memory. This can be done automatically by HANA using smart merge technology, manually using the MERGE DELTA OF SQL statement, or via the right-click option in HANA Studio; a sketch of the manual variant follows below.
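A minimal sketch of triggering the merge manually with hdbsql; it assumes a user key MYKEY was created with hdbuserstore, and the schema and table names are placeholders:

# Hard merge: move this table's delta storage into main storage now
hdbsql -U MYKEY "MERGE DELTA OF MYSCHEMA.MYTABLE"
# Smart merge: let HANA decide based on its merge criteria
hdbsql -U MYKEY "MERGE DELTA OF MYSCHEMA.MYTABLE WITH PARAMETERS ('SMART_MERGE' = 'ON')"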

The Delta Merge Operation is an operation on a table column store data structure.

The purpose of the delta merge operation is to move changes collected in the delta storage to the read-optimized main storage.

After the delta merge operation, the content of the main storage is persisted to disk and its compression recalculated and optimized if necessary.

A further result of the delta merge operation is the truncation of the delta log (i.e. redo operations).

It is important to understand that even if a column store table is unloaded or partly loaded, the whole table is loaded into memory to perform the delta merge.


During the delta merge operation, every partition of a partitioned table is treated internally as a standalone table with its own data and delta store.
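To see which tables currently hold a lot of data in delta storage (and are therefore merge candidates), the M_CS_TABLES monitoring view can be queried; again a stored hdbsql user key is assumed and the schema name is a placeholder:

hdbsql -U MYKEY "SELECT TABLE_NAME, RECORD_COUNT, MEMORY_SIZE_IN_MAIN, MEMORY_SIZE_IN_DELTA FROM M_CS_TABLES WHERE SCHEMA_NAME = 'MYSCHEMA' ORDER BY MEMORY_SIZE_IN_DELTA DESC"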