Which two are the fastest methods for fetching a single row from a table based on an equality predicate?
A. Fast full index scan on an index created for a column with a unique key
B. Index unique scan on an index created for a column with a unique key
C. Row fetch from a single table hash cluster
D. Index range scan on an index created on a primary key column
E. Row fetch from a table using rowid
Correct Answer: CE
An index scan is slower than a direct row fetch (by hash cluster key or by rowid).
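As an illustrative sketch of why C and E are the fastest paths (the cluster, table, column, and bind names below are assumptions, not part of the question):
-- C: single-table hash cluster -- the cluster key hashes directly to the block holding the row
CREATE CLUSTER orders_hc (order_id NUMBER) SINGLE TABLE HASHKEYS 1000;
CREATE TABLE orders (order_id NUMBER PRIMARY KEY, status VARCHAR2(10))
  CLUSTER orders_hc (order_id);
SELECT status FROM orders WHERE order_id = 42;        -- plan: TABLE ACCESS HASH
-- E: fetch by rowid -- the rowid already encodes the file, block, and row position
SELECT status FROM orders WHERE rowid = :saved_rowid; -- plan: TABLE ACCESS BY USER ROWID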
Question 82:
An application user complains about statement execution taking longer than usual. You find that the query uses a bind variable in the WHERE clause as follows:
You want to view the execution plan of the query that takes into account the value in the bind variable PCAT. Which two methods can you use to view the required execution plan?
A. Use the DBMS_XPLAN.DISPLAY function to view the execution plan.
B. Identify the SQL_ID for the statement and use DBMS_XPLAN.DISPLAY_CURSOR for that SQL_ID to view the execution plan.
C. Identify the SQL_ID for the statement and fetch the execution plan from PLAN_TABLE.
D. View the execution plan for the statement from V$SQL_PLAN.
E. Execute the statement with different bind values and AUTOTRACE enabled for the session.
Correct Answer: BD
D: V$SQL_PLAN contains the execution plan information for each child cursor loaded in the library cache.
B: The DBMS_XPLAN package supplies five table functions:
DISPLAY_SQL_PLAN_BASELINE - to display one or more execution plans for the SQL statement identified by SQL handle
DISPLAY - to format and display the contents of a plan table.
DISPLAY_AWR - to format and display the contents of the execution plan of a stored SQL statement in the AWR.
DISPLAY_CURSOR - to format and display the contents of the execution plan of any loaded cursor.
DISPLAY_SQLSET - to format and display the contents of the execution plan of statements stored in a SQL tuning set.
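A sketch of how B and D could be applied once the SQL_ID is known (the LIKE pattern and the &sql_id substitution variable are illustrative assumptions):
-- Locate the cursor of the slow statement
SELECT sql_id, child_number, sql_text FROM v$sql WHERE sql_text LIKE '%PCAT%';
-- B: display the plan actually used by that cursor (ADVANCED format also shows the peeked bind values)
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'ADVANCED'));
-- D: query the plan rows directly from the library cache
SELECT id, operation, options, object_name FROM v$sql_plan WHERE sql_id = '&sql_id' ORDER BY id;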
Question 83:
You are administering a database supporting an OLTP application. The application runs a series of extremely similar queries on the MYSALES table in which only the value of CUST_ID changes.
Examine Exhibit 1 to view the query and its execution plan.
Examine Exhibit 2 to view the structure and indexes for the MYSALES table. The MYSALES table has 4 million records.
Data in the CUST_ID column is highly skewed. Examine the parameters set for the instance:
Which action would you take to make the query use the best plan for the selectivity?
A. Decrease the value of the OPTIMIZER_DYNAMIC_SAMPLING parameter to 0.
B. Use the /*+ INDEX(CUST_ID_IDX) */ hint in the query.
C. Drop the existing B*-tree index and re-create it as a bitmap index on the CUST_ID column.
D. Collect histogram statistics for the CUST_ID column and use a bind variable instead of literal values.
Correct Answer: D
Using Histograms: In some cases, the distribution of values within a column of a table affects the optimizer's decision to use an index versus perform a full-table scan. This scenario occurs when a value referenced in the WHERE clause accounts for a disproportionately large share of the rows, making a full-table scan cheaper than index access.
A column histogram should be created only when data skew exists or is suspected.
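A minimal sketch of option D (the SIZE 254 bucket count and the bind name are illustrative choices, not mandated by the question):
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => USER,
    tabname    => 'MYSALES',
    method_opt => 'FOR COLUMNS CUST_ID SIZE 254');  -- build a histogram on the skewed column
END;
/
VARIABLE cid NUMBER
EXEC :cid := 1234
SELECT COUNT(*) FROM mysales WHERE cust_id = :cid;  -- bind variable instead of a literal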
Question 84:
Which three are tasks performed in the hard parse stage of SQL statement execution?
A. Semantics of the SQL statement are checked.
B. The library cache is checked to find whether an existing statement has the same hash value.
C. The syntax of the SQL statement is checked.
D. Information about location, size, and data type is defined, which is required to store fetched values in variables.
E. Locks are acquired on the required objects.
Correct Answer: BDE
Parse operations fall into the following categories, depending on the type of statement submitted and the result of the hash check:
A) Hard parse
If Oracle Database cannot reuse existing code, then it must build a new executable version of the application code. This operation is known as a hard parse, or a library cache miss. The database always performs a hard parse of DDL.
During the hard parse, the database accesses the library cache and data dictionary cache numerous times to check the data dictionary. When the database
accesses these areas, it uses a serialization device called a latch on required objects so that their definition does not change (see "Latches"). Latch contention
increases statement execution time and decreases concurrency.
B) Soft parse
A soft parse is any parse that is not a hard parse. If the submitted statement is the same as a reusable SQL statement in the shared pool, then Oracle Database
reuses the existing code. This reuse of code is also called a library cache hit.
Soft parses can vary in the amount of work they perform. For example, configuring the session cursor cache can sometimes reduce the amount of latching in the
soft parses, making them "softer."
In general, a soft parse is preferable to a hard parse because the database skips the optimization and row source generation steps, proceeding straight to
execution.
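A minimal sketch of making parses "softer" (the cache size, table, and bind names are illustrative assumptions):
ALTER SESSION SET SESSION_CACHED_CURSORS = 100;     -- cache repeatedly parsed cursors in the session
VARIABLE cid NUMBER
EXEC :cid := 10
SELECT COUNT(*) FROM mysales WHERE cust_id = :cid;  -- first execution: hard parse
EXEC :cid := 20
SELECT COUNT(*) FROM mysales WHERE cust_id = :cid;  -- identical text and bind: soft parse (library cache hit)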
Incorrect: A, C: During the parse call, the database performs the following checks: syntax check, semantic check, and shared pool check. The hard parse occurs within the shared pool check, when no reusable statement is found.
Reference: Oracle Database Concepts 11g, SQL Parsing
Question 85:
Examine the Exhibit to view the structure of and indexes on the SALES table.
The SALES table has 4594215 rows. The CUST_ID column has 2079 distinct values. What would you do to influence the optimizer for better selectivity?
A. Drop the bitmap index and create a balanced B*-tree index on the CUST_ID column.
B. Create a height-balanced histogram for the CUST_ID column.
C. Gather statistics for the indexes on the SALES table.
D. Use the ALL_ROWS hint in the query.
Correct Answer: D
OPTIMIZER_MODE establishes the default behavior for choosing an optimization approach for the instance.
Values:
FIRST_ROWS_N - The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100,
1000).
FIRST_ROWS - The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.
ALL_ROWS - The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource
use to complete the entire statement).
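A minimal sketch of option D against the scenario's SALES/CUST_ID predicate (the bind name is an assumption):
SELECT /*+ ALL_ROWS */ cust_id, COUNT(*)
FROM   sales
WHERE  cust_id = :b_cust_id
GROUP  BY cust_id;   -- optimizes for best throughput rather than fastest first rows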
Question 86:
View Exhibit 1 and examine the structure and indexes for the MYSALES table.
The application uses the MYSALES table to insert sales records, but the table is also used extensively for generating sales reports. The PROD_ID and CUST_ID columns are frequently used in the WHERE clause of the queries. These columns have few distinct values relative to the total number of rows in the table. The MYSALES table has 4.5 million rows.
View Exhibit 2 and examine one of the queries and its autotrace output.
Which two methods can be used to improve the performance of this query?
A. Drop the current standard balanced B*-tree indexes on the CUST_ID and PROD_ID columns and re-create them as bitmap indexes.
B. Use the INDEX_COMBINE hint in the query.
C. Create a composite index involving the CUST_ID and PROD_ID columns.
D. Rebuild the index to rearrange the index blocks to have more rows per block by decreasing the value of the PCTFREE attribute.
E. Collect histogram statistics for the CUST_ID and PROD_ID columns.
Correct Answer: BC
B: The INDEX hint explicitly chooses an index scan for the specified table. You can use the INDEX hint for domain, B-tree, bitmap, and bitmap join indexes. However, Oracle recommends using INDEX_COMBINE rather than INDEX for the combination of multiple indexes, because it is a more versatile hint.
C: Combining the CUST_ID and PROD_ID columns into a composite index would improve performance.
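A sketch of options C and B (the index name, table alias, and binds are illustrative assumptions):
-- C: one composite index covering both frequently filtered columns
CREATE INDEX mysales_cust_prod_ix ON mysales (cust_id, prod_id);
-- B: ask the optimizer to combine the existing single-column indexes
SELECT /*+ INDEX_COMBINE(m) */ *
FROM   mysales m
WHERE  cust_id = :cid
AND    prod_id = :pid;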
Question 87:
You enabled auto degree of parallelism (DOP) for your instance.
Examine the query:
Which two are true about the execution of this query?
A. Dictionary DOP will be used, if present, on the tables referred to in the query.
B. DOP is calculated if the calculated DOP is 1.
C. DOP is calculated automatically.
D. Calculated DOP will always be 2 or more.
E. The statement will execute with auto DOP only when PARALLEL_DEGREE_POLICY is set to AUTO.
Correct Answer: AC
* PARALLEL (AUTO): The database computes the degree of parallelism (C), which can be 1 or greater (not D). If the computed degree of parallelism is 1, then the statement runs serially.
* You can use the PARALLEL hint to force parallelism. It takes an optional parameter: the DOP at which the statement should run. In addition, the NO_PARALLEL hint overrides a PARALLEL parameter in the DDL that created or altered the table.
The following example illustrates computing the DOP the statement should use:
SELECT /*+ parallel(auto) */ ename, dname
FROM emp e, dept d
WHERE e.deptno = d.deptno;
* When the parameter PARALLEL_DEGREE_POLICY is set to AUTO, Oracle Database automatically decides if a statement should execute in parallel or not and what DOP it should use. Oracle Database also determines if the statement can be executed immediately or if it is queued until more system resources are available. Finally, Oracle Database decides if the statement can take advantage of the aggregated cluster memory or not.
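For reference, a sketch of the related settings (the table name is carried over from the example above and remains an assumption):
ALTER SYSTEM SET PARALLEL_DEGREE_POLICY = AUTO;  -- let the database decide DOP, statement queuing, and in-memory PX
ALTER TABLE emp PARALLEL 4;                      -- dictionary DOP stored with the table definition (option A)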
Question 88:
Examine the following query and execution plan:
Which query transformation technique is used in this scenario?
A. Join predicate push-down
B. Subquery factoring
C. Subquery unnesting
D. Join conversion
Correct Answer: A
* Normally, a view cannot be joined with an index-based nested loop (i.e., index access) join, since a view, in contrast with a base table, does not have an index defined on it. A view can only be joined with other tables using three methods: hash, nested loop, and sort-merge joins.
* Join predicate pushdown is currently supported for the following types of views: UNION ALL/UNION view, outer-joined view, anti-joined view, semi-joined view, DISTINCT view, and GROUP BY view.
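A sketch of join predicate push-down on a GROUP BY view (the view, table, column, and bind names are assumptions):
CREATE OR REPLACE VIEW cust_sales_v AS
  SELECT cust_id, SUM(amount_sold) AS total_sold
  FROM   sales
  GROUP  BY cust_id;
SELECT /*+ PUSH_PRED(v) */ c.cust_id, v.total_sold
FROM   customers c, cust_sales_v v
WHERE  c.cust_id = v.cust_id           -- this join predicate can be pushed inside the view,
AND    c.country_id = :cty;            -- enabling an index-driven nested loop into SALES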
Question 89:
Examine the following anonymous PL/SQL block of code:
Which two are true concerning the use of this code?
A. The user executing the anonymous PL/SQL code must have the CREATE JOB system privilege.
B. ALTER SESSION ENABLE PARALLEL DML must be executed in the session prior to executing the anonymous PL/SQL code.
C. All chunks are committed together once all tasks updating all chunks are finished.
D. The user executing the anonymous PL/SQL code requires execute privilege on the DBMS_JOB package.
E. The user executing the anonymous PL/SQL code requires execute privilege on the DBMS_SCHEDULER package.
F. Each chunk will be committed independently as soon as the task updating that chunk is finished.
Correct Answer: AF
A (not D, not E):
To use DBMS_PARALLEL_EXECUTE to run tasks in parallel, your schema will need the CREATE JOB system privilege.
F (not C): DBMS_PARALLEL_EXECUTE provides the ability to break up a large table according to a variety of criteria, from ROWID ranges to key values and
user-defined methods. You can then run a SQL statement or a PL/SQL block against these different "chunks" of the table in parallel, using the database scheduler
to manage the processes running in the background. Error logging, automatic retries, and commits are integrated into the processing of these chunks.
Note:
* The DBMS_PARALLEL_EXECUTE package allows a workload associated with a base table to be broken down into smaller chunks which can be run in parallel. This process involves several distinct stages:
1. Create a task
2. Split the workload into chunks (CREATE_CHUNKS_BY_ROWID, CREATE_CHUNKS_BY_NUMBER_COL, CREATE_CHUNKS_BY_SQL)
3. Run the task (RUN_TASK, user-defined framework, task control)
4. Check the task status
5. Drop the task
* The workload is associated with a base table, which can be split into subsets or chunks of rows. There are three methods of splitting the workload into chunks: CREATE_CHUNKS_BY_ROWID, CREATE_CHUNKS_BY_NUMBER_COL, and CREATE_CHUNKS_BY_SQL. The chunks associated with a task can be dropped using the DROP_CHUNKS procedure.
* CREATE_CHUNKS_BY_ROWID: The CREATE_CHUNKS_BY_ROWID procedure splits the data by rowid into chunks specified by the CHUNK_SIZE parameter. If the BY_ROW parameter is set to TRUE, CHUNK_SIZE refers to the number of rows; otherwise it refers to the number of blocks.
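A condensed sketch of these stages (the task name, table, STATUS column, and UPDATE statement are illustrative assumptions):
DECLARE
  l_sql VARCHAR2(400);
BEGIN
  DBMS_PARALLEL_EXECUTE.CREATE_TASK(task_name => 'UPD_MYSALES');
  -- 2. Split the table into rowid-range chunks of roughly 10,000 rows each
  DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
    task_name   => 'UPD_MYSALES',
    table_owner => USER,
    table_name  => 'MYSALES',
    by_row      => TRUE,
    chunk_size  => 10000);
  -- 3. Run the task; :start_id/:end_id are bound by the framework for each chunk,
  --    and each chunk is committed in its own transaction as it completes
  l_sql := 'UPDATE mysales SET status = ''PROCESSED''
            WHERE rowid BETWEEN :start_id AND :end_id';
  DBMS_PARALLEL_EXECUTE.RUN_TASK(
    task_name      => 'UPD_MYSALES',
    sql_stmt       => l_sql,
    language_flag  => DBMS_SQL.NATIVE,
    parallel_level => 4);              -- runs as scheduler jobs, hence the CREATE JOB privilege
  -- 4./5. Check status, then drop the task
  DBMS_PARALLEL_EXECUTE.DROP_TASK(task_name => 'UPD_MYSALES');
END;
/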
Reference: TECHNOLOGY: PL/SQL Practices, On Working in Parallel
Question 90:
Examine the exhibit to view the query and its execution plan.
Which two statements are true?
A. The HASH GROUP BY operation is the consumer of the HASH operation.
B. The HASH operation is the consumer of the HASH GROUP BY operation.
C. The HASH GROUP BY operation is the consumer of the TABLE ACCESS FULL operation for the CUSTOMER table.
D. The HASH GROUP BY operation is the consumer of the TABLE ACCESS FULL operation for the SALES table.
E. The SALES table scan is a producer for the HASH JOIN operation.
Correct Answer: AE
A, not C, not D: Line 3, HASH GROUP BY, consumes line 6 (HASH JOIN BUFFERED).
E: Line 14, TABLE ACCESS FULL (Sales), is one of the two producers for line 6 (HASH JOIN).