Oracle Database 12c New Features


Cost Based Optimizer (CBO) Parameter Settings

The parameters that control the optimizer behavior are:


  • CURSOR_SHARING

determines what kinds of SQL statements can share the same cursor. The possible values are:

FORCE
FORCE forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.

SIMILAR
SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications, or in applications that use stored outlines.

EXACT
EXACT only allows statements with identical text to share the same cursor. This is the default.
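For example, the parameter can be changed at the session or instance level; the statements below are only an illustration, not a recommendation for any particular system:

-- Force literal replacement for the current session
ALTER SESSION SET CURSOR_SHARING = FORCE;

-- Return the instance to the default behavior
ALTER SYSTEM SET CURSOR_SHARING = EXACT;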



  • DB_FILE_MULTIBLOCK_READ_COUNT

is one of the parameters you can use to minimize I/O during table scans or index fast full scans. It specifies the maximum number of blocks read in one I/O operation during a sequential scan. The total number of I/Os needed to perform a full table scan or an index fast full scan depends on factors such as the size of the segment, the multiblock read count, and whether parallel execution is being used for the operation.

As of Oracle Database 10gR2, the default value of this parameter corresponds to the maximum I/O size that can be performed efficiently. This value is platform dependent and is 1 MB for most platforms.

Because the parameter is expressed in blocks, the database automatically computes a value equal to the maximum I/O size that can be performed efficiently divided by the standard block size.

Note that if the number of sessions is extremely large, the multiblock read count value is decreased to avoid flooding the buffer cache with too many table scan buffers. Even though the default may be a large value, the optimizer does not favor large plans if you do not set this parameter; it does so only if you explicitly set this parameter to a large value. Basically, if this parameter is not set explicitly (or is set to 0), the optimizer uses a default value of 8 when costing full table scans and index fast full scans.

OLTP and batch environments typically use values in the range of 4 to 16 for this parameter. DSS and data warehouse environments tend to benefit most from maximizing the value of this parameter. The optimizer is more likely to select a full table scan over an index if the value of this parameter is high.
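As a sketch of how the default is derived, assuming an 8 KB standard block size and a 1 MB maximum efficient I/O size, the automatically computed value would be 1048576 / 8192 = 128. An explicit setting, shown here only as an illustration, overrides that computation:

-- Typical OLTP-style value; an explicit setting makes the optimizer use it when costing scans
ALTER SYSTEM SET DB_FILE_MULTIBLOCK_READ_COUNT = 16;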



  • PGA_AGGREGATE_TARGET

specifies the target aggregate PGA memory available to all server processes attached to the instance. Setting PGA_AGGREGATE_TARGET to a nonzero value has the effect of automatically setting the WORKAREA_SIZE_POLICY parameter to AUTO. This means that SQL work areas used by memory-intensive SQL operators, such as sort, group by, hash join, bitmap merge, and bitmap create, are automatically sized.

A nonzero value for the PGA_AGGREGATE_TARGET parameter is the default; unless you specify otherwise, the system sets it to 20% of the SGA or 10 MB, whichever is greater.

Setting PGA_AGGREGATE_TARGET to 0 automatically sets the WORKAREA_SIZE_POLICY parameter to MANUAL. This means that SQL work areas are sized using the *_AREA_SIZE parameters. The system attempts to keep the amount of private memory below the target specified by this parameter by adapting the size of the work areas to private memory.

When you increase the value of this parameter, you indirectly increase the memory allotted to work areas. Consequently, more memory-intensive operations can run fully in memory, and fewer of them work their way over to disk. When setting this parameter, examine the total memory on your system that is available to the Oracle instance and subtract the SGA. You can assign the remaining memory to PGA_AGGREGATE_TARGET.
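A minimal sizing sketch, with hypothetical numbers: on a host with 8 GB available to the instance and a 3 GB SGA, roughly 5 GB could be assigned to the aggregate PGA target.

-- Hypothetical value derived from: memory available to Oracle (8 GB) - SGA (3 GB)
ALTER SYSTEM SET PGA_AGGREGATE_TARGET = 5G;
-- A nonzero value implicitly sets WORKAREA_SIZE_POLICY = AUTO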



  • STAR_TRANSFORMATION_ENABLED

determines whether a cost-based query transformation is applied to star queries.
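For example (illustrative setting only):

ALTER SESSION SET STAR_TRANSFORMATION_ENABLED = TRUE;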



  • RESULT_CACHE_MODE

The query optimizer manages the result cache mechanism depending on the settings of the RESULT_CACHE_MODE parameter in the initialization parameter file. You can use this parameter to determine whether or not the optimizer automatically sends the results of queries to the result cache. The possible parameter values are:


MANUAL
When set to MANUAL (the default), you must specify, by using the RESULT_CACHE hint, that a particular result is to be stored in the cache.

FORCE
When set to FORCE, all results are stored in the cache. With the FORCE setting, if the statement contains a [NO_]RESULT_CACHE hint, the hint takes precedence over the parameter setting.
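For example, with the default MANUAL mode a statement opts in through the hint, and with FORCE a statement can opt out; the table and column names below are only illustrative:

SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
FROM employees
GROUP BY department_id;

SELECT /*+ NO_RESULT_CACHE */ * FROM employees;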



  • RESULT_CACHE_MAX_SIZE
  • RESULT_CACHE_MAX_RESULT

The memory size allocated to the result cache depends on the memory size of the SGA as well as the memory management system. You can change the memory allocated to the result cache by setting the RESULT_CACHE_MAX_SIZE parameter. The result cache is disabled if you set its value to 0. The value of this parameter is rounded to the largest multiple of 32 KB that is not greater than the specified value. If the rounded value is 0, the feature is disabled.

Use the RESULT_CACHE_MAX_RESULT parameter to specify the maximum amount of cache memory that can be used by any single result. The default value is 5%, but you can specify any percentage value between 1 and 100.
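Illustrative settings (the sizes are examples, not recommendations):

ALTER SYSTEM SET RESULT_CACHE_MAX_SIZE = 64M;
ALTER SYSTEM SET RESULT_CACHE_MAX_RESULT = 10;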



  • RESULT_CACHE_REMOTE_EXPIRATION

Use the RESULT_CACHE_REMOTE_EXPIRATION parameter to specify the time (in minutes) for which a result that depends on remote database objects remains valid. The default value is 0, which implies that results using remote objects should not be cached. Setting this parameter to a nonzero value can produce stale answers, for example, if the remote table used by a result is modified at the remote database.
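For example, to allow results that reference remote objects to remain valid for 15 minutes (an illustrative value only):

ALTER SYSTEM SET RESULT_CACHE_REMOTE_EXPIRATION = 15;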



  • OPTIMIZER_INDEX_CACHING

OPTIMIZER_INDEX_CACHING controls the costing of an index probe in conjunction with a nested loop or an inlist iterator. The value, which ranges from 0 to 100, indicates the percentage of index blocks assumed to be in the buffer cache, which modifies the optimizer's assumptions about index caching for nested loops and inlist iterators.

A value of 100 means that 100% of the index blocks are likely to be found in the buffer cache, and the optimizer adjusts the cost of an index probe or nested loop accordingly. The default for this parameter is 0, which results in default optimizer behavior. Use caution with this parameter because execution plans can change in favor of index caching.
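For example (illustrative value only):

-- Assume 90% of index blocks are cached when costing nested loop and inlist iterator probes
ALTER SESSION SET OPTIMIZER_INDEX_CACHING = 90;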



  • OPTIMIZER_INDEX_COST_ADJ

OPTIMIZER_INDEX_COST_ADJ lets you tune optimizer behavior for access path selection to be more or less index friendly. That is, to make the optimizer more or less prone to selecting an index access path over a full table scan.

The range of values is 1 to 100. The default for this parameter is 100, at which the optimizer evaluates index access paths at the regular cost. Any other value makes the optimizer evaluate the access path at that percentage of the regular cost. For example, a setting of 50 makes the index access path look half as expensive as normal.
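For example (illustrative value only):

-- Make index access paths appear half as expensive as the regular cost
ALTER SESSION SET OPTIMIZER_INDEX_COST_ADJ = 50;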


  • OPTIMIZER_FEATURES_ENABLE

OPTIMIZER_FEATURES_ENABLE acts as an umbrella parameter for enabling a series of optimizer features based on an Oracle release number.

For example, if you upgrade your database from release 10.1 to release 11.1 but want to keep the release 10.1 optimizer behavior, you can do so by setting this parameter to 10.1.0. At a later time, you can try the enhancements introduced in releases up to and including release 11.1 by setting the parameter to 11.1.0.6.

However, it is not recommended to explicitly set the OPTIMIZER_FEATURES_ENABLE parameter to an earlier release. To avoid possible SQL performance regression that may result from execution plan changes, consider using SQL plan management instead.
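As a sketch of the upgrade scenario above:

-- Keep the release 10.1 optimizer behavior immediately after the upgrade
ALTER SYSTEM SET OPTIMIZER_FEATURES_ENABLE = '10.1.0';

-- Later, enable the optimizer enhancements up to and including release 11.1
ALTER SYSTEM SET OPTIMIZER_FEATURES_ENABLE = '11.1.0.6';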



  • OPTIMIZER_MODE

OPTIMIZER_MODE establishes the default behavior for selecting an optimizing approach for either the instance or your session. The possible values are:

ALL_ROWS
The ALL_ROWS value causes the optimizer to use a cost-based approach for all SQL statements in the session, regardless of the presence of statistics, and optimizes with a goal of best throughput (minimum resource use to complete the entire statement). This is the default value.

FIRST_ROWS_n
When you use the FIRST_ROWS_n value, the optimizer uses a cost-based approach, regardless of the presence of statistics, and optimizes with a goal of best response time to return the first n rows; n can equal 1, 10, 100, or 1000.

FIRST_ROWS
The FIRST_ROWS value causes the optimizer to use a mix of cost and heuristics to find the best plan for fast delivery of the first few rows. Using heuristics sometimes leads the query optimizer to generate a plan whose cost is significantly larger than the cost of a plan without the heuristic. FIRST_ROWS is available for backward compatibility and plan stability; use FIRST_ROWS_n instead.
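For example (illustrative settings):

-- Optimize for returning the first 10 rows quickly in this session
ALTER SESSION SET OPTIMIZER_MODE = FIRST_ROWS_10;

-- Revert to the default throughput-oriented goal
ALTER SESSION SET OPTIMIZER_MODE = ALL_ROWS;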



  • OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES
  • OPTIMIZER_USE_SQL_PLAN_BASELINES
  • OPTIMIZER_DYNAMIC_SAMPLING

OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES enables or disables the automatic recognition of repeatable SQL statements, as well as the generation of SQL plan baselines for such statements.

OPTIMIZER_USE_SQL_PLAN_BASELINES enables or disables the use of SQL plan baselines stored in the SQL Management Base. When enabled, the optimizer looks for a SQL plan baseline for the SQL statement being compiled. If one is found in the SQL Management Base, the optimizer costs each of the baseline plans and picks the one with the lowest cost.

OPTIMIZER_DYNAMIC_SAMPLING controls the level of dynamic sampling performed by the optimizer. If OPTIMIZER_FEATURES_ENABLE is set to 10.0.0 or later, the default value is 2. If it is set to 9.2.0, the default value is 1, and if it is set to 9.0.1 or earlier, the default value is 0.
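Illustrative settings for these three parameters:

ALTER SYSTEM SET OPTIMIZER_CAPTURE_SQL_PLAN_BASELINES = TRUE;
ALTER SYSTEM SET OPTIMIZER_USE_SQL_PLAN_BASELINES = TRUE;
-- Raise the dynamic sampling level for the current session only
ALTER SESSION SET OPTIMIZER_DYNAMIC_SAMPLING = 4;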


  • OPTIMIZER_USE_INVISIBLE_INDEXES
  • OPTIMIZER_USE_PENDING_STATISTICS

OPTIMIZER_USE_INVISIBLE_INDEXES enables or disables the use of invisible indexes.


OPTIMIZER_USE_PENDING_STATISTICS specifies whether or not the optimizer uses pending statistics when compiling SQL statements.
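Both can be changed at the session level, for example:

ALTER SESSION SET OPTIMIZER_USE_INVISIBLE_INDEXES = TRUE;
ALTER SESSION SET OPTIMIZER_USE_PENDING_STATISTICS = TRUE;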


General Database Tuning Tasks

Many SQL tuning tasks should be performed on a regular basis. You may see a way to rewrite a WHERE clause, but it may depend on a new index being built. The following list provides background on some important tasks that must be performed and gives you an idea of the dependencies you may have as you tune SQL:

1. Identifying high-load SQL statements

Identifying high-load SQL statements is one of the most important tasks you should perform. The ADDM is the ideal tool for this particular task.

2. Scheduling jobs to gather optimizer statistics

By default, Oracle Database gathers optimizer statistics automatically. For this, a job is scheduled to run in the maintenance windows.
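If statistics need to be refreshed outside the maintenance window, they can also be gathered manually with the DBMS_STATS package; the schema name below is hypothetical:

EXEC DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'HR');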

3. Identifying usage and performance of hardware components

OS statistics provide information about the usage and performance of the main hardware components as well as the performance of the OS itself.

4. Rebuilding indexes

Often, there is a beneficial impact on performance from rebuilding indexes, for example, removing nonselective indexes to speed up DML, or adding columns to the index to improve selectivity.


5. Using stored statistics, outlines, or SQL plan baselines

You can maintain the existing execution plan of SQL statements over time by using stored statistics, outlines, or SQL plan baselines.


When you tune your system, it is important that you compare the CPU time with the wait time of your system. By comparing CPU time with wait time, you can determine how much of response time is spent on useful work and how much on waiting for resources potentially held by other processes.
As a general rule, systems where CPU time is dominant usually need less tuning than those where wait time is dominant. On the other hand, high CPU usage can be caused by badly written SQL statements.

High CPU time = possibly needs SQL tuning
High wait time = needs instance/RAC tuning

Although the proportion of CPU time to wait time always tends to decrease as the load on the system increases, steep increases in wait time are a sign of contention and must be addressed for good scalability. Scalability is a system's ability to process more workload with a proportional increase in system resource use.

Adding more CPUs to a node, or nodes to a cluster, would provide very limited benefit under contention. Conversely, a system where the proportion of CPU time does not decrease significantly as load increases can scale better, and would most likely benefit from adding CPUs or RAC instances if needed.

AWR reports display CPU time together with wait time in the Top Timed Events section, if the CPU time portion is among the top five events.

Poor application design, implementation, and configuration have a significant impact on scalability. This results in poor SQL and index design, leading to a higher number of logical I/Os for the same number of rows returned. It also results in reduced availability because database objects take longer to maintain.

However, design is not the only problem. The physical implementation of the application can be the weak link, as in the following examples:
  • Systems can move to production environments with poorly written SQL that causes high I/O.
  • Infrequent transaction COMMITs or ROLLBACKs can cause long locks on resources.
  • The production environment can use different execution plans than those generated in testing.
  • Memory-intensive applications that allocate a large amount of memory without much thought for freeing it can cause excessive memory fragmentation.
  • Inefficient memory usage places high stress on the operating virtual memory subsystem, and this affects performance and availability.

There are some common mistakes made on customer systems. Although these mistakes are not directly detected by ADDM, ADDM highlights the resulting high-load SQL. These mistakes are:

1. Bad Connection Management

The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance and is totally unscalable.

2. Bad use of cursors and the shared pool

Not using cursors results in repeated parses. If bind variables are not used, there may be hard parsing of all similar SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.

3. Bad SQL

Bad SQL is SQL that uses more resources than appropriate for the application. This can be a DSS query that runs for more than 24 hours or a query from an online application that takes more than a minute.
SQL that consumes significant system resources should be investigated for potential improvement. ADDM identifies high-load SQL and the SQL Tuning Advisor can be used to provide recommendations for improvement.


4. Use of nonstandard initialization parameters

The use of nonstandard initialization parameters might have been implemented based on poor advice or incorrect assumptions. Most systems give acceptable performance using only the set of basic parameters. In particular, undocumented optimizer features can cause a great deal of problems that may require considerable investigation.
Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.


5. Getting the database I/O wrong

Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly because they configure disks by disk space and not by I/O bandwidth.

6. Redo log setup problems

Many sites run with redo log files that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and the I/O system. If there are very few redo logs, the archiver cannot keep up, and the database waits for the archive process to catch up.

7. Excessive serialization

Serialization of data blocks in the buffer cache due to a shortage of undo segments is particularly common in applications with a large number of active users and few undo segments. Use Automatic Segment Space Management (ASSM) and automatic undo management to solve this problem.
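A minimal sketch (the file name and size are hypothetical): ASSM is chosen at tablespace creation time, and automatic undo management is an instance setting.

CREATE TABLESPACE app_data
  DATAFILE '/u01/oradata/app_data01.dbf' SIZE 1G
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;

ALTER SYSTEM SET UNDO_MANAGEMENT = AUTO SCOPE = SPFILE;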


8. Long full table scans

Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.


9. High amount of recursive (SYS) SQL

Large amounts of recursive SQL executed by SYS could indicate that space management activities, such as extent allocations, are taking place. This is unscalable and impacts user response time. Use locally managed tablespaces to reduce recursive SQL due to extent allocation. Recursive SQL executed under another user ID is probably SQL and PL/SQL, so this is not a problem.


10. Deployment and migration errors

In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to suboptimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.
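A sketch of exporting statistics with DBMS_STATS (the schema and statistics table names are hypothetical):

EXEC DBMS_STATS.CREATE_STAT_TABLE(ownname => 'HR', stattab => 'MIG_STATS');
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MIG_STATS');
-- After moving the MIG_STATS table to the target database:
EXEC DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'HR', stattab => 'MIG_STATS');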


Proactive Tuning

Tuning usually implies fixing a performance problem. However, tuning should be part of the life cycle of an application, through the analysis, design, coding, production, and maintenance stages. The tuning phase is often left until the system is in production. At that time, tuning becomes a reactive exercise, where the most important bottleneck is identified and fixed.

Applications are no different from any other designed and engineered product. If the design looks right, it probably is right. This principle should always be kept in mind when building applications. Consider some of the following design issues that should be tuned proactively instead of reactively:


1. Simple Design

If the table design is so complicated that nobody can fully understand it, the table is probably designed badly.

If SQL statements are so long and involved that it would be impossible for any optimizer to effectively optimize them in real time, there is probably a bad statement, underlying transaction, or table design.

If there are many indexes on a table and the same columns are repeatedly indexed, there is probably a bad index design.

If queries are submitted without suitable qualification (the WHERE clause) for rapid response for online users, there is probably a bad user interface or transaction design.


2. Data Modeling

Data modeling is important to successful relational application design. It should be done in a way that quickly and accurately represents the business practices. Apply your greatest modeling efforts to those entities affected by the most frequent business transactions. Use of modeling tools can then rapidly generate schema definitions and can be useful when a fast prototype is required.

Normalizing data prevents duplication. When data is normalized, you have a clear picture of the keys and relationships. It is then easier to perform the next step of creating tables, constraints, and indexes. A good data model ultimately means that your queries are written more efficiently.

3. Tables and Indexes

Table design is largely a compromise between flexibility and performance of core transactions. To keep the database flexible and able to accommodate unforeseen workloads, the table design should be very similar to the data model, and it should be normalized to at least third normal form. However, certain core transactions can require selective denormalization for performance purposes.

Use the features supplied with Oracle Database to simplify table design for performance, such as storing tables prejoined in clusters, adding derived columns and aggregate values, and using materialized views or partitioned tables. Additionally, create check constraints and columns with default values to prevent bad data from getting into the tables.

Design should be focused on business-critical tables so that good performance can be achieved in the areas that are used the most. For noncritical tables, shortcuts in design can be adopted to enable more rapid application development. If, however, a noncore table becomes a performance problem during prototyping and testing, remedial design efforts should be applied immediately.

Index design is also largely an iterative process, based on the SQL that is generated by application designers. However, it is possible to make a sensible start by building indexes that enforce foreign key constraints (to reduce response time on joins between primary key tables and foreign key tables) and creating indexes on frequently accessed data, such as a person's name.

Primary keys and unique keys are automatically indexed except for the DISABLE VALIDATE and DISABLE NOVALIDATE RELY constraint states. As the application evolves and testing is performed on realistic volumes of data, certain queries need performance improvements, for which building a better index is a good solution.

When building a new index, one of the easiest ways to speed up a query is to reduce the number of logical I/Os by eliminating a table scan from the execution plan. This can be done by appending to the index all the columns of the table referenced by the query. These columns are the select list columns and any required join or sort columns.

This technique is particularly useful in speeding up an online application's response time when time-consuming I/Os are reduced. It is best applied when testing the application with properly sized data for the first time. The most aggressive form of this technique is to build an index-organized table (IOT).
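A minimal sketch, with hypothetical table and column names: if a query selects only cust_name for a given cust_id, appending the selected column to the index lets the query be answered from the index alone, avoiding the table access entirely.

CREATE INDEX customers_id_name_ix ON customers (cust_id, cust_name);

SELECT cust_name FROM customers WHERE cust_id = :id;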

4. Views

Views can speed up and simplify application design. A simple view definition can mask data model complexity from the programmers whose priorities are to retrieve, display, collect, and store data.

However, though views provide clean programming interfaces, they can cause suboptimal, resource-intensive queries when nested too deeply. The worst type of view use is creating joins on views that reference other views, which in turn reference other views. In many cases, developers can satisfy the query directly from the table without using a view. Because of their inherent properties, views usually make it difficult for the optimizer to generate the optimal execution plan.

5. Writing Efficient SQL

An application designed for efficient SQL execution must support the following characteristics:

5.1. good database connectivity

Connecting to the database is an expensive operation that is highly unscalable. Therefore, the number of concurrent connections to the database should be minimized as much as possible. A simple system, where a user connects at application initialization, is ideal.

However, in a web-based or multi-tiered application, where application servers are used to multiplex database connections to users, this can be difficult. With these types of applications, design efforts should ensure that database connections are pooled and not re-established for each user request.

5.2. good cursor usage and management

Maintaining user connections is equally important for minimizing the parsing activity on the system. Parsing is the process of interpreting a SQL statement and creating an execution plan for it. This process has many phases, including syntax checking, security checking, execution plan generation, and loading shared structures into the shared pool.

5.3. minimal parsing

There are two types of parse operations: hard and soft. In hard parsing, a SQL statement is submitted for the first time, and no match is found in the shared pool. Hard parses are the most resource-intensive and unscalable because they perform all the operations involved in a parse.

In soft parsing, a SQL statement is submitted for the first time, and a match is found in the shared pool. The match can be the result of previous execution by another user. The SQL statement is shared, which is good for performance. However, soft parses are not ideal because they still require syntax and security checking, which consume system resources.

Because parsing should be minimized as much as possible, application developers should design their application to parse SQL statements once and execute them many times. This is done through cursors. Experienced SQL programmers should be familiar with the concept of opening and re-executing cursors.

5.4. usage of bind variables

Application developers must also ensure that SQL statements are shared within the shared pool. To do this, use bind variables to represent the parts of the query that change from execution to execution. If this is not done, the SQL statement is likely to be parsed once and never reused by other users. To ensure that SQL is shared, use bind variables and do not use string literals with SQL statements.
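As a sketch (the table and column names are from the common HR sample schema and are only illustrative):

-- Literals: each distinct value produces a separate statement and, potentially, a hard parse
SELECT * FROM employees WHERE employee_id = 101;
SELECT * FROM employees WHERE employee_id = 102;

-- Bind variable: one shared cursor, parsed once and executed many times
SELECT * FROM employees WHERE employee_id = :emp_id;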

6. Cursor Sharing

For Oracle to share cursors, the code must be written exactly the same way, character for character, unless you use special initialization parameters such as CURSOR_SHARING. This allows the system to recognize that two statements are the same and can therefore be shared. You should, therefore, develop coding conventions for SQL statements in ad hoc queries, SQL scripts, and Oracle Call Interface (OCI) calls.

7. Bind Variables

Use bind variables.


SQL code readability can be improved by adhering to these guidelines:


1. Developing format standards for all statements, including those in PL/SQL code.
2. Developing rules for the use of uppercase and lowercase characters.
3. Developing rules for the use of whitespace (spaces, tabs, returns).
4. Developing rules for the use of comments (preferably keeping them out of the SQL statements themselves).
5. Using the same names to refer to identical database objects. If possible, prefix each object with a schema name.

You can also create the following performance checklist:

Set minimal initialization parameters

Set the minimal number of initialization parameters. Ideally, most initialization parameters should be left at their defaults. If there is more tuning to perform, this shows up when the system is under load. Set storage options for tables and indexes in appropriate tablespaces.

Verify SQL statements

Verify that all SQL statements are optimal and understand their usage.

Validate middleware and program efficiency

Validate that middleware and programs that connect to the database are efficient in their connection management and do not log on and off repeatedly.

Validate the use of cursors

Validate that the SQL statements use cursors efficiently. Each SQL statement should be parsed once and then executed multiple times. Most often this does not happen because bind variables are not used properly and the WHERE clause predicates are sent as string literals.

Validate migration of schema objects

Validate that all schema objects are correctly migrated from the development environment to the production database. This includes tables, indexes, sequences, triggers, packages, procedures, functions, Java objects, synonyms, grants, and views. Ensure that any modifications made in testing are made to the production system.

Establish a baseline set of statistics

As soon as the system is rolled out, establish a baseline set of statistics from the database and operating system. This first set of statistics validates or corrects any assumptions made in the design and rollout process.



Poorly Performing SQL Statements

SQL statements can perform poorly for a variety of reasons:

1. Stale optimizer statistics

SQL execution plans are generated by the cost-based optimizer (CBO). For the CBO to effectively choose the most efficient plan, it needs accurate information on the data volume and distribution of the tables and indexes referenced in the queries. Without accurate optimizer statistics, the CBO can easily be misled and generate suboptimal execution plans.

2. Missing access structures

Absence of access structures, such as indexes, materialized views, and partitions, is a common reason for poor SQL performance. The right set of access structures can significantly improve SQL performance.
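A minimal example, assuming queries frequently filter the SALES table by CUST_ID, would be to add a matching index:

-- Column choice should be driven by the predicates actually used
CREATE INDEX sales_cust_ix ON sales (cust_id);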

3. Suboptimal execution plan selection

The CBO can sometimes select a suboptimal execution plan for a SQL statement. This happens mostly because of incorrect estimates of some attributes of the SQL statement, such as its cost, cardinality, or predicate selectivity.
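A quick way to see the plan the optimizer has chosen, and so spot such misestimates, is EXPLAIN PLAN with DBMS_XPLAN (the statement shown is only a placeholder):

EXPLAIN PLAN FOR
  SELECT COUNT(*) FROM sales WHERE cust_id = :cust;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);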

4. Poorly constructed SQL

If the SQL is designed poorly, there is not much that the optimizer can do to improve its performance. A missing join condition leading to a Cartesian product, or the use of more expensive SQL constructs like UNION in place of UNION ALL, are just a couple of examples of inefficient SQL design.

These four main causes of poor SQL optimization can have a drastic impact on performance.

Additional reasons for poor performance might be connected with hardware-related issues, such as memory, I/O, CPUs, and so on.

Example of a poorly constructed SQL query:

This query answers a common type of business question: how many products have list prices less than 15% above the average cost of the product? The statement contains a correlated subquery, which means that the subquery is run for every row found in the outer query.
SELECT COUNT(*) FROM product p
WHERE prod_list_price < 1.15 * (SELECT AVG(unit_cost) FROM costs c
     WHERE c.prod_id = p.prod_id);

The query is better written as follows:
SELECT COUNT(*)
FROM product p,
    (SELECT prod_id, AVG(unit_cost) auc FROM costs
     GROUP BY prod_id) c
WHERE p.prod_id = c.prod_id
AND p.prod_list_price < 1.15 * c.auc;

Use simple equality or function-based indexes

The query in this example applies functions to the join columns, restricting the conditions under which indexes can be used. Use a simple equality if you can. Otherwise, a function-based index may be necessary.

SELECT * FROM job_history jh, employees e
  WHERE SUBSTR(TO_CHAR(e.employee_id),2) = SUBSTR(TO_CHAR(jh.employee_id),2);
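If the expression cannot be removed from the predicate, a function-based index on the same expression (shown here purely as a sketch) allows an index access path:

-- Must match the expression used in the WHERE clause exactly
CREATE INDEX emp_id_substr_ix
  ON employees (SUBSTR(TO_CHAR(employee_id), 2));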

In this example, the query has a condition that forces an implicit data type conversion: the ORDER_ID_CHAR column is a character type and the constant is a numeric type. You should make the literal match the column type.

SELECT * FROM orders WHERE order_id_char = 1000;
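Assuming ORDER_ID_CHAR is a character column, the predicate is better written with a character literal, so that no conversion of the column is needed:

SELECT * FROM orders WHERE order_id_char = '1000';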

The next example is a query that uses a data type conversion function to make the data types match in the comparison.

The problem here is that the TO_CHAR function is applied to the column values rather than to the constant. This means that the function is called for every row in the table. It would be better to convert the literal once and not convert the column.

SELECT * FROM employees WHERE TO_CHAR(salary) = :sal

The query is better written using this SQL statement:

SELECT * FROM employees WHERE salary = TO_NUMBER(:sal)

Unnecessary unique sort

In this query, the UNION operator, as opposed to the UNION ALL operator, ensures that there are no duplicate rows in the result set. However, this requires an extra step, a unique sort, to eliminate any duplicates. If you know there are no rows in common between the two UNIONed queries, use UNION ALL instead of UNION. This eliminates the unnecessary sort.

SELECT * FROM parts_old
UNION
SELECT * FROM parts_new

The query is better written using this SQL statement:

SELECT * FROM parts_old
UNION ALL
SELECT * FROM parts_new


