Calling a Web Service from an Oracle Database

Database Web Service

Note:

The purpose of this article is to provide step-by-step instructions on how to install the DBWS Callout Utilities on Oracle Database Server 11g and how to verify that they work properly.

All of these environments may be placed on different machines. However, it is convenient to have them all on one machine for an initial installation and test of the DBWS Callout Utilities, as this rules out network problems affecting their proper operation.

11g Database Webservice Callout Steps

Preparing the Oracle Client and Oracle Database Server

  1. Set up the appropriate environment for the Oracle Client:

ORACLE_HOME = D:\app\database\product\11.2.0\dbhome_1
PATH = %PATH%;%ORACLE_HOME%\bin;%ORACLE_HOME%\jdk\bin

  2. Install Oracle Database 11g Release 2 (11.2) for Microsoft Windows (32-bit) with installation type “Administrator”.

This will install all the tools and jars required to work with the DBWS Callout Utilities.

  3. Drop the package UTL_DBWS in the SYS schema, if it exists:

drop package UTL_DBWS;

  4. Check the OracleJVM for invalid classes

The schema “SYS” and the schema into which the DBWS Callout Utilities should be installed must not have invalid classes. The status of all the java classes installed on the Oracle Database Server instance can be checked as SYSDBA with the following SQL statement:

SELECT owner, status, count(*) FROM DBA_OBJECTS
WHERE OBJECT_TYPE='JAVA CLASS'
GROUP BY owner, status;

For a standard installation of Oracle Database Server 11.1 this SQL statement should return about 20000 valid classes for the owner/schema “SYS”.

It is very difficult to resolve invalid classes once they are in schema “SYS”. There is a huge number of classes in this schema and it is not easy to find out the dependencies. It is a lot easier to resolve invalid classes in schemas that are not system schemas of the Oracle Database Server, because in this case all uploaded jar files and classes with their dependencies should be known to the user. Recompile invalid classes in batches of at most 100 per compilation run.
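As a sketch (assuming the invalid classes live in a schema named SCOTT), the recompilation statements can be generated from DBA_OBJECTS and run in batches:

SELECT 'ALTER JAVA CLASS "' || object_name || '" RESOLVE;'
FROM   dba_objects
WHERE  object_type = 'JAVA CLASS'
AND    status = 'INVALID'
AND    owner = 'SCOTT'
AND    rownum <= 100;

Run the generated statements as the schema owner, then repeat the query until no invalid classes remain.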

The complete initialization of the OracleJVM is usually the most efficient approach to resolve problems with invalid classes in schema “SYS”, even though this is a risky and time consuming procedure.

  5. Configure the OracleJVM

Preparing the database consists mainly of loading the Web Services client stack into the database. Loading the client stack requires extra Java space inside the OracleJVM to resolve and store the loaded jar files. Make sure the initialization parameters SHARED_POOL_SIZE and JAVA_POOL_SIZE are set to at least 132M and 80M respectively:

shared_pool_size=132M
java_pool_size=80M

Depending on the number of Java classes already loaded into the database, the two parameters may need to be increased beyond 132M and 80M.

In SQL*Plus you can check the size of these initialization parameters as SYSDBA:

show parameter SHARED_POOL_SIZE
show parameter JAVA_POOL_SIZE

alter system set SHARED_POOL_SIZE=132M scope=both;
alter system set JAVA_POOL_SIZE=80M scope=both;

  6. Download the Oracle DBWS Callout Utility 10.1.3.1; this will work with both Oracle Database 10g and 11g.
  7. Take a backup of the %ORACLE_HOME%\sqlj\ folder first.
  8. Unzip the contents of the ZIP file into $ORACLE_HOME (Oracle Database Server environment):

Unzip dbws-callout-utility-10131.zip into %ORACLE_HOME%\sqlj\

  9. Modify the CLASSPATH in Client_Home\sqlj\bin\jpub:

D:\dbhome_2\product\11.2.0\client_1\bin;D:\app\database\product\11.2.0\dbhome_2;D:\app\database\product\11.2.0\dbhome_1\bin;D:\DevSuiteHome_1\jdk\jre\bin\classic;D:\DevSuiteHome_1\jdk\jre\bin;D:\DevSuiteHome_1\jdk\jre\bin\client;D:\DevSuiteHome_1\jlib;D:\DevSuiteHome_1\bin;D:\DevSuiteHome_1\jre\1.4.2\bin\client;D:\DevSuiteHome_1\jre\1.4.2\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;D:\ORANT\BIN;%ORACLE_HOME%\bin;%ORACLE_HOME%\OPatch;%ORACLE_HOME%\sqlj\bin;%JAVA_HOME%\bin;D:\dbhome_2\product\11.2.0\client_1\jdk\bin

 

  10. Make JPublisher and JDK 1.5.0 available in an Oracle Client operating system shell

Oracle Client 11g already ships with a JDK 1.5.0. The JDK used by JPublisher must match the version of the OracleJVM to which the generated classes will be deployed.

e.g.:
PATH=%ORACLE_HOME%\jdk\bin;%PATH%

echo %PATH%
D:\dbhome_2\product\11.2.0\client_1\bin;D:\app\database\product\11.2.0\dbhome_1;D:\app\database\product\11.2.0\dbhome_1\bin;;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;%ORACLE_HOME%\bin;%ORACLE_HOME%\OPatch;%ORACLE_HOME%\sqlj\bin;%JAVA_HOME%\bin;D:\app\database\product\11.2.0\dbhome_1\sqlj\lib;

 

  11. Load the web service client into the Oracle Database Server:

loadjava -u SYS/MiracleWorker7 -r -v -f -s -grant public -genmissing dbwsclientws.jar dbwsclientdb11.jar

Output:

[oracle@sracanov-au2 lib]$ tail -8 loadjava.txt
Classes Loaded: 4061
Resources Loaded: 81
Sources Loaded: 0
Published Interfaces: 0
Classes generated: 63
Classes skipped: 0
Synonyms Created: 0
Errors: 0
[oracle@sracanov-au2 lib]$

 

Note:

The loadjava command above loads the Oracle JAX-RPC client into the target schema. By default, these classes are available only to the target user. Other users require execute privileges and synonyms for these classes if they are also to make web service callouts.

e.g.:
loadjava -u SYS/MiracleWorker7 -r -v -f -s -grant public -genmissing dbwsclientws.jar dbwsclientdb11.jar

  12. Once the database web service callout classes have been loaded, run the PL/SQL wrapper scripts that call these classes. The scripts are found in the %ORACLE_HOME%\sqlj\lib folder.

Load the following scripts in the SYS schema (these are the wrapper script names shipped in %ORACLE_HOME%\sqlj\lib):

  1. utl_dbws_decl.sql
  2. utl_dbws_body.sql

Verify:

desc utl_dbws

  13. Here is the PL/SQL code using UTL_DBWS to call an external web service:

CREATE OR REPLACE FUNCTION celciusToFahrenheit(temperature NUMBER) RETURN VARCHAR2 AS
  service_      scott.utl_dbws.SERVICE;
  call_         scott.utl_dbws.CALL;
  service_qname scott.utl_dbws.QNAME;
  port_qname    scott.utl_dbws.QNAME;
  response      sys.XMLTYPE;
  request       sys.XMLTYPE;
BEGIN
  scott.utl_dbws.set_http_proxy('myproxy.com:80');
  service_qname := scott.utl_dbws.to_qname(null, 'CelciusToFahrenheit');
  service_      := scott.utl_dbws.create_service(service_qname);
  call_         := scott.utl_dbws.create_call(service_);
  scott.utl_dbws.set_target_endpoint_address(call_, 'http://webservices.daehosting.com/services/TemperatureConversions.wso');
  scott.utl_dbws.set_property(call_, 'OPERATION_STYLE', 'document');
  request       := sys.XMLTYPE('<celciustofahrenheit xmlns="http://webservices.daehosting.com/temperature"><ncelcius>'||temperature||'</ncelcius></celciustofahrenheit>');
  response      := scott.utl_dbws.invoke(call_, request);
  RETURN response.extract('//CelciusToFahrenheitResult/child::text()', 'xmlns="http://webservices.daehosting.com/temperature"').getstringval();
END;
/

Now test:

SELECT celciusToFahrenheit(30) FROM dual;

CELCIUSTOFAHRENHEIT(30)
--------------------------------------------------------------------------------
86

 

 


Tablespace Encryption in Oracle

Oracle 11g Database Release 1

The Transparent Data Encryption (TDE) feature was introduced in Oracle 10g Database Release 2 to simplify the encryption of data within datafiles, preventing access to it from the operating system. Tablespace encryption extends this technology, allowing encryption of the entire contents of a tablespace, rather than having to configure encryption on a column-by-column basis.

  • Wallet Creation
  • Tablespace Creation
  • Test Encryption

Wallet Creation

The environment setup for tablespace encryption is the same as that for transparent data encryption. Before attempting to create an encrypted tablespace, a wallet must be created to hold the encryption key. The search order for finding the wallet is as follows:

  1. If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
  2. If present, the location specified by the WALLET_LOCATION parameter in the sqlnet.ora file.
  3. The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).

Although encrypted tablespaces can share the default database wallet, Oracle recommends you use a separate wallet for transparent data encryption functionality by specifying the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file. To accomplish this we add the following entry into the sqlnet.ora file on the server and make sure the specified directory has been created.

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/u01/app/oracle/admin/DB11G/encryption_wallet/)))

This parameter can also be used to identify a Hardware Security Module (HSM) as the location for the wallet.

The following command creates and opens the wallet.

CONN sys/password@db11g AS SYSDBA

ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "myPassword";

Wallets must be reopened after an instance restart and can be closed to prevent access to encrypted data.

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "myPassword";

ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

Tablespace Creation

Encrypted tablespaces are created by specifying the ENCRYPTION clause with an optional USING clause to specify the encryption algorithm. If the USING clause is omitted, the encryption algorithm defaults to 'AES128'. In addition, the default storage clause of ENCRYPT must be specified. Tablespace encryption does not allow the NO SALT option that is available in TDE. The following statement creates an encrypted tablespace by explicitly naming the 'AES256' encryption algorithm in the USING clause.

CREATE TABLESPACE encrypted_ts
  DATAFILE '/u01/app/oracle/oradata/DB11G/encrypted_ts01.dbf' SIZE 128K
  AUTOEXTEND ON NEXT 64K
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE(ENCRYPT);

ALTER USER test QUOTA UNLIMITED ON encrypted_ts;

The ENCRYPTED column of the DBA_TABLESPACES and USER_TABLESPACES views indicates if the tablespace is encrypted or not.

SELECT tablespace_name, encrypted FROM dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
ENCRYPTED_TS                   YES

6 rows selected.

SQL>

Regular tablespaces cannot be converted to encrypted tablespaces. Instead, data must be transferred manually using export/import, "ALTER TABLE … MOVE …" or "CREATE TABLE … AS SELECT * FROM …".
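For example (a minimal sketch; my_table and my_table_idx are hypothetical names), an existing table can be moved into the encrypted tablespace and its index rebuilt there:

-- Rewrite the table's blocks into the encrypted tablespace.
ALTER TABLE my_table MOVE TABLESPACE encrypted_ts;

-- Moving a table invalidates its indexes, so rebuild them.
ALTER INDEX my_table_idx REBUILD TABLESPACE encrypted_ts;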

Test Encryption

With the tablespace in place, we can create some objects to test the encryption. The following code creates a table and index in the encrypted tablespace and inserts a single row into the table.

CONN test/test@db11g

CREATE TABLE ets_test (
  id    NUMBER(10),
  data  VARCHAR2(50)
)
TABLESPACE encrypted_ts;

CREATE INDEX ets_test_idx ON ets_test(data) TABLESPACE encrypted_ts;

INSERT INTO ets_test (id, data) VALUES (1, 'This is a secret!');

COMMIT;

Flush the buffer cache to make sure the data is written to the datafile.

CONN sys/password@db11g AS SYSDBA

ALTER SYSTEM FLUSH BUFFER_CACHE;

When the file is opened using a hex editor (like UltraEdit) only non-printable characters are present. The 'This is a secret!' string is not visible in the table or index data within the encrypted tablespace.

When you are finished testing the encrypted tablespace, be sure to clean up the tablespace and associated datafile.

DROP TABLESPACE encrypted_ts INCLUDING CONTENTS AND DATAFILES;

ALL, ANY and SOME Comparison Conditions in SQL

It is quite possible you could work with Oracle databases for many years and never come across the ALL, ANY and SOME comparison conditions in SQL because there are alternatives to them that are used more regularly. If you are planning to sit the Oracle Database SQL Expert (1Z0-047) exam you should be familiar with these conditions as they are used frequently in the questions.

All you need to know for the exam is the usage and characteristics of these comparison conditions. The article also includes information about the transformations the optimizer does when processing them. This extra information is not a requirement for the exam, but goes some way to explaining why you rarely see people using the conditions. Each of the SQL statements will be run against the “SCOTT.EMP” table to show the transformations are equivalent.

SQL> SELECT * FROM emp;

     EMPNO ENAME      JOB	       MGR HIREDATE		       SAL	 COMM	  DEPTNO
---------- ---------- --------- ---------- -------------------- ---------- ---------- ----------
      7369 SMITH      CLERK	      7902 17-DEC-1980 00:00:00        800		      20
      7499 ALLEN      SALESMAN	      7698 20-FEB-1981 00:00:00       1600	  300	      30
      7521 WARD       SALESMAN	      7698 22-FEB-1981 00:00:00       1250	  500	      30
      7566 JONES      MANAGER	      7839 02-APR-1981 00:00:00       2975		      20
      7654 MARTIN     SALESMAN	      7698 28-SEP-1981 00:00:00       1250	 1400	      30
      7698 BLAKE      MANAGER	      7839 01-MAY-1981 00:00:00       2850		      30
      7782 CLARK      MANAGER	      7839 09-JUN-1981 00:00:00       2450		      10
      7788 SCOTT      ANALYST	      7566 19-APR-1987 00:00:00       3000		      20
      7839 KING       PRESIDENT 	   17-NOV-1981 00:00:00       5000		      10
      7844 TURNER     SALESMAN	      7698 08-SEP-1981 00:00:00       1500	    0	      30
      7876 ADAMS      CLERK	      7788 23-MAY-1987 00:00:00       1100		      20
      7900 JAMES      CLERK	      7698 03-DEC-1981 00:00:00        950		      30
      7902 FORD       ANALYST	      7566 03-DEC-1981 00:00:00       3000		      20
      7934 MILLER     CLERK	      7782 23-JAN-1982 00:00:00       1300		      10

SQL>
  • ALL
  • ANY
  • SOME

ALL

The ALL comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <=, >= and followed by a list or subquery.

When the ALL condition is followed by a list, the optimizer expands the initial condition to all elements of the list and strings them together with AND operators, as shown below.

SELECT empno, sal
FROM   emp
WHERE  sal > ALL (2000, 3000, 4000);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement without ALL.

SELECT empno, sal
FROM   emp
WHERE  sal > 2000 AND sal > 3000 AND sal > 4000;

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

When the ALL condition is followed by a subquery, the optimizer performs a two-step transformation as shown below.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ALL (SELECT e2.sal
                     FROM   emp e2
                     WHERE  e2.deptno = 20);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement using ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  NOT (e1.sal <= ANY (SELECT e2.sal
                           FROM emp e2
                           WHERE e2.deptno = 20));

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement without ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  NOT EXISTS (SELECT e2.sal
                   FROM emp e2
                   WHERE e2.deptno = 20
                   AND   e1.sal <= e2.sal);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

Assuming subqueries don’t return zero rows, the following statements can be made for both list and subquery versions:

  • “x = ALL (…)”: The value must match all the values in the list to evaluate to TRUE.
  • “x != ALL (…)”: The value must not match any values in the list to evaluate to TRUE.
  • “x > ALL (…)”: The value must be greater than the biggest value in the list to evaluate to TRUE.
  • “x < ALL (…)”: The value must be smaller than the smallest value in the list to evaluate to TRUE.
  • “x >= ALL (…)”: The value must be greater than or equal to the biggest value in the list to evaluate to TRUE.
  • “x <= ALL (…)”: The value must be smaller than or equal to the smallest value in the list to evaluate to TRUE.

If a subquery returns zero rows, the condition evaluates to TRUE.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ALL (SELECT e2.sal FROM emp e2 WHERE e2.deptno = 100);

     EMPNO        SAL
---------- ----------
      7369        800
      7900        950
      7876       1100
      7521       1250
      7654       1250
      7934       1300
      7844       1500
      7499       1600
      7782       2450
      7698       2850
      7566       2975
      7788       3000
      7902       3000
      7839       5000

SQL>

ANY

The ANY comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <=, >= and followed by a list or subquery.

When the ANY condition is followed by a list, the optimizer expands the initial condition to all elements of the list and strings them together with OR operators, as shown below.

SELECT empno, sal
FROM   emp
WHERE  sal > ANY (2000, 3000, 4000);

     EMPNO        SAL
---------- ----------
      7566       2975
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7902       3000

SQL>

-- Transformed to equivalent statement without ANY.

SELECT empno, sal
FROM   emp
WHERE  sal > 2000 OR sal > 3000 OR sal > 4000;

     EMPNO        SAL
---------- ----------
      7566       2975
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7902       3000

SQL>

When the ANY condition is followed by a subquery, the optimizer performs a single transformation as shown below.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ANY (SELECT e2.sal
                     FROM   emp e2
                     WHERE  e2.deptno = 10);

     EMPNO        SAL
---------- ----------
      7839       5000
      7902       3000
      7788       3000
      7566       2975
      7698       2850
      7782       2450
      7499       1600
      7844       1500

SQL>

-- Transformed to equivalent statement without ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  EXISTS (SELECT e2.sal
               FROM emp e2
               WHERE e2.deptno = 10
               AND   e1.sal > e2.sal);

     EMPNO        SAL
---------- ----------
      7839       5000
      7902       3000
      7788       3000
      7566       2975
      7698       2850
      7782       2450
      7499       1600
      7844       1500

SQL>

Assuming subqueries don’t return zero rows, the following statements can be made for both list and subquery versions:

  • “x = ANY (…)”: The value must match one or more values in the list to evaluate to TRUE.
  • “x != ANY (…)”: The value must differ from at least one value in the list to evaluate to TRUE.
  • “x > ANY (…)”: The value must be greater than the smallest value in the list to evaluate to TRUE.
  • “x < ANY (…)”: The value must be smaller than the biggest value in the list to evaluate to TRUE.
  • “x >= ANY (…)”: The value must be greater than or equal to the smallest value in the list to evaluate to TRUE.
  • “x <= ANY (…)”: The value must be smaller than or equal to the biggest value in the list to evaluate to TRUE.

If a subquery returns zero rows, the condition evaluates to FALSE.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ANY (SELECT e2.sal FROM emp e2 WHERE e2.deptno = 100);

no rows selected

SQL>

SOME

The SOME and ANY comparison conditions do exactly the same thing and are completely interchangeable.
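For example, the earlier “> ANY” list query can be rewritten with SOME and returns exactly the same six rows:

SELECT empno, sal
FROM   emp
WHERE  sal > SOME (2000, 3000, 4000);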

Autonomous Transactions

Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.

The following types of PL/SQL blocks can be defined as autonomous transactions:

  • Stored procedures and functions.
  • Local procedures and functions defined in a PL/SQL declaration block.
  • Packaged procedures and functions.
  • Type methods.
  • Top-level anonymous blocks.

The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.

CREATE TABLE at_test (
  id           NUMBER       NOT NULL,
  description  VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2

2 rows selected.

SQL>

Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.

DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 3 .. 10 LOOP
    INSERT INTO at_test (id, description)
    VALUES (i, 'Description for ' || i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

10 rows selected.

SQL>

As expected, we now have 10 rows in the table. If we now issue a rollback statement we get the following result.

ROLLBACK;
SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

8 rows selected.

SQL>

The 2 rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback was still able to undo the DML issued by the current transaction.

Autonomous transactions are commonly used by error logging routines, where the error messages must be preserved, regardless of the commit/rollback status of the transaction. For example, the following table holds basic error messages.

CREATE TABLE error_logs (
  id             NUMBER(10)     NOT NULL,
  log_timestamp  TIMESTAMP      NOT NULL,
  error_message  VARCHAR2(4000),
  CONSTRAINT error_logs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE error_logs_seq;

We define a procedure to log error messages as an autonomous transaction.

CREATE OR REPLACE PROCEDURE log_errors (p_error_message  IN  VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_logs (id, log_timestamp, error_message)
  VALUES (error_logs_seq.NEXTVAL, SYSTIMESTAMP, p_error_message);
  COMMIT;
END;
/

The following code forces an error, which is trapped and logged.

BEGIN
  INSERT INTO at_test (id, description)
  VALUES (998, 'Description for 998');

  -- Force invalid insert.
  INSERT INTO at_test (id, description)
  VALUES (999, NULL);
EXCEPTION
  WHEN OTHERS THEN
    log_errors (p_error_message => SQLERRM);
    ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test WHERE id >= 998;

no rows selected

SELECT * FROM error_logs;

        ID LOG_TIMESTAMP
---------- ---------------------------------------------------------------------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
         1 28-FEB-2006 11:10:10.107625
ORA-01400: cannot insert NULL into ("TIM_HALL"."AT_TEST"."DESCRIPTION")

1 row selected.

SQL>

From this we can see that the LOG_ERRORS transaction was separate from the anonymous block. If it weren’t, we would expect the first insert in the anonymous block to be preserved by the commit statement in the LOG_ERRORS procedure.

Global Temporary Tables

Applications often use some form of temporary data store for processes that are too complicated to complete in a single pass. Often, these temporary stores are defined as database tables or PL/SQL tables.

  • Creation of Global Temporary Tables
  • Miscellaneous Features

Creation of Global Temporary Tables

The data in a global temporary table is private, such that data inserted by a session can only be accessed by that session. The session-specific rows in a global temporary table can be preserved for the whole session, or just for the current transaction. The ON COMMIT DELETE ROWS clause indicates that the data should be deleted at the end of the transaction.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  column1  NUMBER,
  column2  NUMBER
) ON COMMIT DELETE ROWS;

In contrast, the ON COMMIT PRESERVE ROWS clause indicates that rows should be preserved until the end of the session.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  column1  NUMBER,
  column2  NUMBER
) ON COMMIT PRESERVE ROWS;

Miscellaneous Features

  • If the TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated. There is no effect on the data of other sessions.
  • Data in temporary tables is stored in temp segments in the temp tablespace.
  • Data in temporary tables is automatically deleted at the end of the database session, even if it ends abnormally.
  • Indexes can be created on temporary tables; the index content is session-private, just like the table data (see the example after this list).
  • Views can be created against temporary tables and combinations of temporary and permanent tables.
  • Temporary tables can have triggers associated with them.
  • Export and Import utilities can be used to transfer the table definitions, but no data rows are processed.
  • Statistics on temporary tables are common to all sessions. Oracle 12c allows session-specific statistics.
  • There are a number of restrictions related to temporary tables, but these are version specific.
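For example, an index on the temporary table defined earlier is created with the usual syntax (the index name below is illustrative); its entries are session-private, just like the row data:

CREATE INDEX my_temp_table_idx ON my_temp_table (column1);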

What is a Global Temporary Table?

First of all, it is a table, it looks like an ordinary (heap) table when you describe it, and you can access it like an ordinary table.  Its definition is persistent, but its content is temporary. 

There are two flavours of GT table: 

•      Delete on Commit: Data is lost at the end of each transaction, as well as the end of the session.  This is the default action. 

•      Preserve on Commit: The data is preserved at the end of a transaction and only lost when the session terminates. 

This flavour is specified in the ‘create table’ statement. 

CREATE TABLE heap (a number) TABLESPACE USERS; 

CREATE GLOBAL TEMPORARY TABLE gt (a NUMBER) ON COMMIT PRESERVE ROWS; 

Physically, a GT table exists in the temporary segment.  Hence it is not possible to specify any physical storage parameters, or tablespace for the GT table. The values for all these parameters will be reported as NULL on DBA_TABLES for GT tables.  Like an ordinary table, you can create indexes, constraints (other than referential constraints), comments and triggers.  Indexes on GT tables also reside in the temporary segment.  There is no special syntax to be added, but the same restrictions exist on specifying physical storage parameters, which will also be reported as null on DBA_INDEXES. 

There is nothing to stop you from analysing a GT table, but no cost-based optimiser statistics will be generated or stored.  The statistics columns in DBA_TABLES and DBA_INDEXES will also be null.  The only way to specify optimiser statistics is with the DBMS_STATS package (see Oracle8i Supplied PL/SQL Packages Reference).
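A minimal sketch of setting statistics manually with DBMS_STATS, using the GT table created above (the statistic values here are illustrative assumptions):

BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'GT',
    numrows => 1000,  -- assumed typical row count
    numblks => 20,    -- assumed block count
    avgrlen => 100);  -- assumed average row length in bytes
END;
/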

The content of the GT is only visible to the session or transaction that created it. Two different sessions can insert identical rows into what is apparently the same GT table with a unique constraint. Physically, each session has its own version of the GT table in the temporary segment, which was created when the session or transaction first referenced it.  If one session inserts two identical rows into the same table then a unique constraint error will still be produced as usual.  When a session terminates the temporary segment is released.  For ON COMMIT DELETE tables, the segment is released at the end of the transaction. The space overhead in the temporary segment is comparable to that required for a similar heap table.  The instantaneous space overhead for GT tables, and any other temporary object, can be observed in v$sort_usage.  It is not reported in DBA_SEGMENTS.  When a transaction is committed, entries for ON COMMIT DELETE tables disappear. 

GT tables are not recoverable, and so redo logging is greatly reduced.  Hence also, it is not possible to specify the LOGGING, NOLOGGING, MONITORING or NOMONITORING options when creating a GT table.  There is still some log information written when accessing a GT table, which is mainly related to rollback segment activity. 

The SQL Reference Manual also lists restrictions on partitioning, clustering, index-organisation, parallel DML, distributed transactions, and nested table or varray type columns. 

How much redo logging is saved? 

In order to quantify the saving in redo logging I constructed a test.  I created a number of heap and GT tables with identical structures, but with different numbers of indexes and columns in the indexes.  I populated them with a simple PL/SQL loop, and measured the difference in the ‘redo size’ parameter on v$mystat for different numbers of rows inserted into the tables.  I used v$mystat rather than v$sysstat to remove the background noise of other database activity. 
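A sketch of the measurement query (it requires SELECT privilege on the V$ views); it reports the session's cumulative redo generation, so run it before and after the insert loop and take the difference:

SELECT sn.name, ms.value
FROM   v$mystat ms, v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name = 'redo size';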

The proportionate saving in redo is fairly constant for larger numbers of rows, but there is a saving for as few as 20 rows.  I have tried this test on different platforms and with different sizes of tables, numbers of indexes, and indexed columns. The results are always similar, but the savings when only working with a small number of rows depend upon the volume of data. 

How did we use GT tables? 

Detailed testing and measurement had shown that introducing Global Temporary tables would significantly improve payroll performance.  Their introduction became the major driver for the upgrade from Oracle 8.0 to 8.1.6. 

We changed the intermediate result tables to be global temporary tables.  The payroll calculation commits between the sixty or so payroll ‘rules’.  The temporary calculation results are written to ‘temporary calculation result’ tables that are truncated between rules.  These became DELETE ON COMMIT tables.  The retained results are written to ‘permanent result tables’ that are copied to the final balance table at the end of the calculation for each month. The permanent result tables are truncated between calculations for each month. These became PRESERVE ON COMMIT tables. 

No High Water Mark Problems 

Batch processes sometimes temporarily store data in tables.  Such tables are sometimes keyed by a process instance number, so that many instances of the process can run in parallel.  Such tables often drive the processing, and so the queries on them tend to access them by full table scans, which scan the table up to the High Water Mark (HWM).  An unusually large volume of data to be processed can raise the high water mark, and so increase the process time. Even an insert into a table that is rolled back, rather than committed, can raise the HWM.  This sort of table is a good candidate for a GT table.  Physically, a copy of the table is created afresh for each instance of the process.  Thus the HWM is always set to zero at the start of the process. 

In the payroll system we observed that the performance of a payroll process, run in isolation, improved when the intermediate result tables were truncated at the start of the process for each month. 

Truncate Performance Problems 

All DDL that performs space management (CREATE TABLE, DROP TABLE etc.) takes out a lock on the Space Transaction enqueue. Thus, only one session can perform space management at a time, and so DDL that performs space management will serialise.  This includes the TRUNCATE command. In Oracle 8.0.5 the truncate command repeatedly calls an internal function, kcbcxx(), and generates significant quantities of redo log (bug 650614, fixed in Oracle 8.1.4). This degrades its performance, and aggravates the serialisation. 

The payroll process is broken into 14 streams.  Each stream processes a different set of employees and the 14 calculation processes are run in parallel.  In order to reset high water marks, and improve the performance of certain scans, the intermediate result tables were truncated at points within the calculation.  This created contention between the streams.  In Oracle 8.0, we had found it was faster to delete from the tables than to truncate them, even though this further increased redo logging, and we had to reduce the number of concurrent streams to 7. 

After introducing GT tables we could have removed some of the truncate or delete statements altogether.  However, truncating a GT table is much faster than truncating a heap table because the space management occurs inside the temporary segment, and the space is not truly free for other segments.  These truncates were no longer a problem, so we reverted to truncate.  Thus, we retained the ability to run the same code on heap tables for debugging purposes in the development and test environments.  Overall performance increased further when we went back to 14 streams. 

Quirks 

As I have already said, GT tables are new in Oracle 8.1.  I have encountered one genuine bug, and discovered some slightly surprising behaviour. 

If you truncate a PRESERVE ON COMMIT table with the REUSE STORAGE option, then nothing happens.  The table is not truncated, any data remains in the table, and no error is raised (Oracle Bug 1396741). This bug does not apply to DELETE ON COMMIT tables, because the truncate implies a commit.  I encountered this because our payroll process truncated the intermediate result tables with the REUSE STORAGE option, and I had simply replaced heap tables with GT tables. The process then crashed with a unique constraint error. 

With a PRESERVE ON COMMIT table, you cannot drop it if DML has been performed on it in any current session, even if all transactions are committed. You get ‘ORA-14452: attempt to create, alter or drop an index on temporary table already in use’, and all the sessions must be terminated before the object can be dropped.  The message itself is misleading because I was trying to drop the table, and there was no index involved.  This behaviour does not occur with DELETE ON COMMIT tables, because the DDL implies a commit.  When converting a process to use GT tables, beware of performing DDL (other than truncate) on GT tables.  Some processes create and drop working storage tables on-the-fly so that the table names are distinct.  This approach is no longer necessary with GT tables, and all processes can share the same table name. 

If your default tablespace is a temporary tablespace (not that it ever should be) you will be unable to create a permanent object unless you specify a permanent tablespace.  You will also be unable to create an index on a global temporary table, because you will still get ‘ORA-02195: Attempt to create PERMANENT object in a TEMPORARY tablespace’, and you cannot specify a tablespace because you get ‘ORA-14451: unsupported feature with temporary table’.  While this is not a problem, it does indicate how this new feature has been bolted onto the existing CREATE TABLE command. 

Benefits 

Minimal Code Change: No code change was made to the payroll calculation in order to implement GT tables, although another process, which issues the DDL to build the sets of intermediate result tables, was changed. 

Reduction in Redo Logging: Overall, redo logging generated by the payroll process reduced by 40% due to GT tables alone, and 60% in combination with the truncate bug fix on upgrading to 8.1.6.  Execution time reduced by 67%. 

High Water Marks: GT tables naturally start with a reset HWM.  This improves full scans on them and their indexes. 

Truncate Table: Where it is still necessary to truncate a GT table, it will be even faster than on a heap table. 

Optimiser: Use of GT tables does not force use of the cost based optimiser.

Drawbacks 

You cannot pass data between processes via a GT table.  Therefore they are not suitable for use in multi-threaded processes, such as an application server. 

During development or debugging it is frequently useful to see data written by a process.  If this is written to a GT table it will be lost when the process terminates. With the payroll process it was possible to switch between GT and heap tables in specific streams. 

It is not possible to capture cost-based optimiser statistics for a global temporary table with the ANALYZE command.  It may be necessary to hint queries that reference GT tables, or to explicitly set statistics with the DBMS_STATS package. 

Materialized View

A materialized view is a database object that contains the results of a query. They are local copies of data located remotely, or are used to create summary tables based on aggregations of a table’s data. Materialized views that store data based on remote tables are also known as snapshots.

The FROM clause of the query can name tables, views, and other materialized views. Collectively these objects are called master tables (a replication term) or detail tables (a data warehousing term). This reference uses “master tables” for consistency. The databases containing the master tables are called the master databases.

For replication purposes, materialized views allow you to maintain copies of remote data on your local node. The copies can be updatable with the Advanced Replication feature and are read-only without this feature. You can select data from a materialized view as you would from a table or view. In replication environments, the materialized views commonly created are primary key, rowid, object, and subquery materialized views.

For data warehousing purposes, the materialized views commonly created are materialized aggregate views, single-table materialized aggregate views, and materialized join views. All three types of materialized views can be used by query rewrite, an optimization technique that transforms a user request written in terms of master tables into a semantically equivalent request that includes one or more materialized views.
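As a sketch, a materialized aggregate view enabled for query rewrite might look like this (assuming the SCOTT.EMP table used elsewhere in this article and the QUERY REWRITE privilege):

CREATE MATERIALIZED VIEW mv_dept_sal
ENABLE QUERY REWRITE
AS SELECT deptno, SUM(sal) AS total_sal
   FROM   emp
   GROUP  BY deptno;

With QUERY_REWRITE_ENABLED set to TRUE, a query such as SELECT deptno, SUM(sal) FROM emp GROUP BY deptno can then be transparently answered from mv_dept_sal instead of the master table.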

Prerequisites:

The privileges required to create a materialized view should be granted directly rather than through a role.

To create a materialized view in your own schema:

  • You must have been granted the CREATE MATERIALIZED VIEW system privilege and either the CREATE TABLE or CREATE ANY TABLE system privilege.
  • You must also have access to any master tables of the materialized view that you do not own, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

To create a materialized view in another user’s schema:

  • You must have the CREATE ANY MATERIALIZED VIEW system privilege.
  • The owner of the materialized view must have the CREATE TABLE system privilege. The owner must also have access to any master tables of the materialized view that the schema owner does not own (for example, if the master tables are on a remote database) and to any materialized view logs defined on those master tables, either through a SELECT object privilege on each of the tables or through the SELECT ANY TABLE system privilege.

To create a refresh-on-commit materialized view (ON COMMIT REFRESH clause), in addition to the preceding privileges, you must have the ON COMMIT REFRESH object privilege on any master tables that you do not own or you must have the ON COMMIT REFRESH system privilege.

To create the materialized view with query rewrite enabled, in addition to the preceding privileges:

  • If the schema owner does not own the master tables, then the schema owner must have the GLOBAL QUERY REWRITE privilege or the QUERY REWRITE object privilege on each table outside the schema.
  • If you are defining the materialized view on a prebuilt container (ON PREBUILT TABLE clause), then you must have the SELECT privilege WITH GRANT OPTION on the container table.

The user whose schema contains the materialized view must have sufficient quota in the target tablespace to store the master table and index of the materialized view or must have the UNLIMITED TABLESPACE system privilege.

When you create a materialized view, Oracle Database creates one internal table and at least one index, and may create one view, all in the schema of the materialized view. Oracle Database uses these objects to maintain the materialized view data. You must have the privileges necessary to create these objects.

In this article, we shall see how to create a Materialized View and discuss Refresh Option of the view.

In replication environments, the materialized views commonly created are primary key, rowid, and subquery materialized views.

Primary Key Materialized Views

The following statement creates the primary-key materialized view on the table emp located on a remote database.

SQL>    CREATE MATERIALIZED VIEW mv_emp_pk
        REFRESH FAST START WITH SYSDATE 
        NEXT  SYSDATE + 1/48
        WITH PRIMARY KEY 
        AS SELECT * FROM emp@remote_db;

Materialized view created.

Note: When you create a materialized view using the FAST option you will need to create a materialized view log on the master table(s), as shown below:

SQL> CREATE MATERIALIZED VIEW LOG ON emp;
Materialized view log created.

Rowid Materialized Views

The following statement creates the rowid materialized view on table emp located on a remote database:

 

SQL>    CREATE MATERIALIZED VIEW mv_emp_rowid 
        REFRESH WITH ROWID 
        AS SELECT * FROM emp@remote_db; 

Materialized view created.

Subquery Materialized Views

The following statement creates a subquery materialized view based on the emp and dept tables located on the remote database:

SQL> CREATE MATERIALIZED VIEW  mv_empdept
AS SELECT * FROM emp@remote_db e
WHERE EXISTS
     (SELECT * FROM dept@remote_db d
     WHERE e.dept_no = d.dept_no);

REFRESH CLAUSE

[refresh [fast|complete|force]
            [on demand | commit]
            [start with date] [next date]
            [with {primary key|rowid}]]

The refresh option specifies:

  1. The refresh method used by Oracle to refresh data in the materialized view
  2. Whether the view is primary key based or row-id based
  3. The time and interval at which the view is to be refreshed

Refresh Method – FAST Clause

FAST refresh uses the materialized view logs (as seen above) to send the rows that have changed from the master tables to the materialized view.

You should create a materialized view log for the master tables if you specify the REFRESH FAST clause.

SQL> CREATE MATERIALIZED VIEW LOG ON emp;

Materialized view log created.

Materialized views are not eligible for fast refresh if the defined subquery contains an analytic function.

Refresh Method – COMPLETE Clause

The complete refresh re-creates the entire materialized view. If you request a complete refresh, Oracle performs a complete refresh even if a fast refresh is possible.

Refresh Method – FORCE Clause

When you specify a FORCE clause, Oracle will perform a fast refresh if one is possible or a complete refresh otherwise. If you do not specify a refresh method (FAST, COMPLETE, or FORCE), FORCE is the default.
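Whatever the automatic schedule, a refresh can also be requested manually through the DBMS_MVIEW package; the method flag is 'F' (fast), 'C' (complete) or '?' (force):

EXEC DBMS_MVIEW.REFRESH('MV_EMP_PK', '?');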

PRIMARY KEY and ROWID Clause

WITH PRIMARY KEY is used to create a primary key materialized view i.e. the materialized view is based on the primary key of the master table instead of ROWID (for ROWID clause). PRIMARY KEY is the default option. To use the PRIMARY KEY clause you should have defined PRIMARY KEY on the master table or else you should use ROWID based materialized views.

Primary key materialized views allow materialized view master tables to be reorganized without affecting the eligibility of the materialized view for fast refresh.

Rowid materialized views must be based on a single master table and cannot contain any of the following:

  • DISTINCT or aggregate functions
  • GROUP BY clauses, subqueries, joins, or set operations

Timing the refresh

The START WITH clause tells the database when to perform the first replication from the master table to the local base table. It should evaluate to a future point in time. The NEXT clause specifies the interval between refreshes.

SQL>     CREATE MATERIALIZED VIEW mv_emp_pk
            REFRESH FAST 
            START WITH SYSDATE 
            NEXT  SYSDATE + 2
            WITH PRIMARY KEY 
            AS SELECT * FROM emp@remote_db;

Materialized view created.

In the above example, the first copy of the materialized view is made at SYSDATE and the interval at which the refresh has to be performed is every two days.

Summary

Materialized views thus offer us the flexibility of basing a view on a primary key or ROWID, specifying refresh methods, and specifying the time of automatic refreshes.


 

Transaction Control Statements Description

Transaction control statements manage the changes made by DML statements and group DML statements into transactions. They enable you to:

  • Make a transaction’s changes permanent (COMMIT)
  • Undo the changes in a transaction, either since the transaction started or since a savepoint (ROLLBACK)
  • Set a point to which you can roll back (SAVEPOINT)
  • Establish properties for a transaction (SET TRANSACTION)
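For example (using the SCOTT.EMP table from earlier sections), a savepoint lets part of a transaction be undone while the rest is kept:

INSERT INTO emp (empno, ename, job, deptno)
VALUES (9999, 'TEMP', 'CLERK', 10);

SAVEPOINT after_insert;

UPDATE emp SET sal = sal * 1.1 WHERE deptno = 10;

-- Undo the update but keep the insert.
ROLLBACK TO SAVEPOINT after_insert;

-- Make the insert permanent.
COMMIT;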

Data Definition Language Statements Description

Data definition language (DDL) statements define, alter the structure of, and drop schema objects. DDL statements enable you to:

  • Create, alter, and drop schema objects and other database structures, including the database itself and database users (CREATE, ALTER, DROP)
  • Change the names of schema objects (RENAME)
  • Delete all the data in schema objects without removing the objects’ structure (TRUNCATE)
  • Grant and revoke privileges and roles (GRANT, REVOKE)
  • Turn auditing options on and off (AUDIT, NOAUDIT)
  • Add a comment to the data dictionary (COMMENT)

DDL statements implicitly commit the preceding transaction and start a new one. Some examples of DDL statements are:

CREATE TABLE plants  
    (COMMON_NAME VARCHAR2 (15), LATIN_NAME VARCHAR2 (40)); 

DROP TABLE plants; 

GRANT SELECT ON emp TO scott; 
 

REVOKE DELETE ON emp FROM scott;

Data Manipulation Language Statements Description

Data manipulation language (DML) statements query or manipulate data in existing schema objects. They enable you to:

  • Retrieve data from one or more tables or views (SELECT); in Oracle9i, Release 1 (9.0.1), fetches can be scrollable (see “Scrollable Cursors”)
  • Add new rows of data into a table or view (INSERT)
  • Change column values in existing rows of a table or view (UPDATE)
  • Update or insert rows conditionally into a table or view (MERGE)
  • Remove rows from tables or views (DELETE)
  • See the execution plan for a SQL statement (EXPLAIN PLAN)
  • Lock a table or view, temporarily limiting other users’ access (LOCK TABLE)

DML statements are the most frequently used SQL statements. Some examples of DML statements are:

SELECT ename, mgr, comm + sal FROM emp; 

INSERT INTO emp VALUES 
    (1234, 'DAVIS', 'SALESMAN', 7698, '14-FEB-1988', 1600, 500, 30); 

DELETE FROM emp WHERE ename IN ('WARD','JONES');