Call a Web Service from the Oracle Database

Database Web Service

Note:

The purpose of this article is to provide step-by-step instructions on how to install the DBWS Callout Utilities on Oracle Database Server 11g and how to verify that they work properly.

All of these environments may be placed on different machines. However, it is convenient to have them all on one machine for an initial installation and test of the DBWS Callout Utilities, as this rules out network problems affecting their proper operation.

11g Database Webservice Callout Steps

Preparing the Oracle Client and Oracle Database Server

  1. Set up the appropriate environment for the Oracle Client

ORACLE_HOME = D:\app\database\product\11.2.0\dbhome_1
PATH = %PATH%;%ORACLE_HOME%\bin;%ORACLE_HOME%\jdk\bin

  2. Install 11g Release 2 (11.2) for Microsoft Windows (32-bit) with installation type "Administrator"

This will install all the tools and jars required to work with the DBWS Callout Utilities.

  3. Drop the package UTL_DBWS in schema SYS, if it exists:

drop package UTL_DBWS;

  4. Check the OracleJVM for invalid classes

The schema SYS and the schema into which the DBWS Callout Utilities are to be installed must not contain invalid classes. The status of all the Java classes installed on the Oracle Database Server instance can be checked as SYSDBA with the following SQL statement:

SELECT owner, status, count(*) FROM DBA_OBJECTS
WHERE OBJECT_TYPE='JAVA CLASS'
GROUP BY owner, status;

For a standard installation of Oracle Database Server 11.1 this SQL statement should return about 20000 valid classes for the owner/schema “SYS”.
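To list just the invalid classes, a helper query along these lines can be used (narrow the owner filter as needed):

SELECT owner, object_name
FROM   dba_objects
WHERE  object_type = 'JAVA CLASS'
AND    status = 'INVALID'
ORDER  BY owner, object_name;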

It is very difficult to resolve invalid classes once they are in schema SYS. There is a huge number of classes in this schema and it is not easy to work out the dependencies. It is a lot easier to resolve invalid classes in schemas that are not system schemas of the Oracle Database Server, because in that case all uploaded jar files and classes, with their dependencies, should be known to the user. Recompile the invalid classes in batches of at most 100 per pass.
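One hedged approach is to generate the resolve statements from the dictionary and run the resulting script in the affected schema (ALTER JAVA CLASS ... RESOLVE is standard syntax; the double quotes are needed because stored Java class names are case sensitive):

SELECT 'ALTER JAVA CLASS "' || object_name || '" RESOLVE;'
FROM   user_objects
WHERE  object_type = 'JAVA CLASS'
AND    status = 'INVALID'
AND    ROWNUM <= 100;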

The complete initialization of the OracleJVM is usually the most efficient approach to resolve problems with invalid classes in schema “SYS”, even though this is a risky and time consuming procedure.

  5. Configure the OracleJVM

Preparing the database consists mainly of loading the Web Services client stack into the database. Loading the client stack requires extra Java space inside the OracleJVM to resolve and store the loaded jar files. Make sure the initialization parameters SHARED_POOL_SIZE and JAVA_POOL_SIZE are set to at least 132M and 80M respectively:

shared_pool_size=132M
java_pool_size=80M

Depending on the number of Java classes already loaded into the database, the two parameters may need to be increased beyond 132M and 80M.

In SQL*Plus you can check the size of these initialization parameters as SYSDBA:

show parameter SHARED_POOL_SIZE
show parameter JAVA_POOL_SIZE

alter system set SHARED_POOL_SIZE=132M scope=both;
alter system set JAVA_POOL_SIZE=80M scope=both;

  6. Download the Oracle DBWS Callout Utility 10.1.3.1; this will work with both Oracle Database 10g and 11g.
  7. Take a backup of the %ORACLE_HOME%\sqlj\ folder first.
  8. Unzip the contents of the ZIP file into $ORACLE_HOME (Oracle Database Server environment):

Unzip dbws-callout-utility-10131.zip into %ORACLE_HOME%\sqlj\

  9. Modify the CLASSPATH in Client_Home\sqlj\bin\jpub

D:\dbhome_2\product\11.2.0\client_1\bin;D:\app\database\product\11.2.0\dbhome_2;D:\app\database\product\11.2.0\dbhome_1\bin;D:\DevSuiteHome_1\jdk\jre\bin\classic;D:\DevSuiteHome_1\jdk\jre\bin;D:\DevSuiteHome_1\jdk\jre\bin\client;D:\DevSuiteHome_1\jlib;D:\DevSuiteHome_1\bin;D:\DevSuiteHome_1\jre\1.4.2\bin\client;D:\DevSuiteHome_1\jre\1.4.2\bin;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;D:\ORANT\BIN;%ORACLE_HOME%\bin;%ORACLE_HOME%\OPatch;%ORACLE_HOME%\sqlj\bin;%JAVA_HOME%\bin;D:\dbhome_2\product\11.2.0\client_1\jdk\bin

 

  10. Make JPublisher and JDK 1.5.0 available in an Oracle Client operating system shell

Oracle Client 11g already ships with a JDK 1.5.0. The JDK used by JPublisher must match the version of the OracleJVM to which the generated classes will be deployed.

e.g.:
PATH=%ORACLE_HOME%\jdk\bin;%PATH%

echo %PATH%
D:\dbhome_2\product\11.2.0\client_1\bin;D:\app\database\product\11.2.0\dbhome_1;D:\app\database\product\11.2.0\dbhome_1\bin;;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;%ORACLE_HOME%\bin;%ORACLE_HOME%\OPatch;%ORACLE_HOME%\sqlj\bin;%JAVA_HOME%\bin;D:\app\database\product\11.2.0\dbhome_1\sqlj\lib;

 

  11. Load the web service client stack into the Oracle Database Server

loadjava -u SYS/MiracleWorker7 -r -v -f -s -grant public -genmissing dbwsclientws.jar dbwsclientdb11.jar

Output:

[oracle@sracanov-au2 lib]$ tail -8 loadjava.txt
Classes Loaded: 4061
Resources Loaded: 81
Sources Loaded: 0
Published Interfaces: 0
Classes generated: 63
Classes skipped: 0
Synonyms Created: 0
Errors: 0
[oracle@sracanov-au2 lib]$

 

Note:

The loadjava command above loads the Oracle JAX-RPC client into the target schema. By default these classes are only available to the target user. Other users require execute privileges and synonyms for these classes if they are also to make web service callouts.

e.g.:
loadjava -u SYS/MiracleWorker7 -r -v -f -s -grant public -genmissing dbwsclientws.jar dbwsclientdb11.jar

  12. Once the database web service callout classes have been loaded, run the scripts that create the PL/SQL wrappers calling these classes. The scripts are found in the $ORACLE_HOME\sqlj\lib folder and must be run in the schema into which the jars were loaded (SYS in this example):

  1. utl_dbws_decl.sql
  2. utl_dbws_body.sql

Verify:

desc utl_dbws

  13. Here is the PL/SQL code using UTL_DBWS to call an external web service:

CREATE OR REPLACE FUNCTION celciusToFahrenheit(temperature NUMBER) RETURN VARCHAR2 AS
  service_      scott.utl_dbws.SERVICE;
  call_         scott.utl_dbws.CALL;
  service_qname scott.utl_dbws.QNAME;
  port_qname    scott.utl_dbws.QNAME;
  response      sys.XMLTYPE;
  request       sys.XMLTYPE;
BEGIN
  scott.utl_dbws.set_http_proxy('myproxy.com:80');
  service_qname := scott.utl_dbws.to_qname(null, 'CelciusToFahrenheit');
  service_      := scott.utl_dbws.create_service(service_qname);
  call_         := scott.utl_dbws.create_call(service_);
  scott.utl_dbws.set_target_endpoint_address(call_, 'http://webservices.daehosting.com/services/TemperatureConversions.wso');
  scott.utl_dbws.set_property(call_, 'OPERATION_STYLE', 'document');
  request       := sys.XMLTYPE('<celciustofahrenheit xmlns="http://webservices.daehosting.com/temperature"><ncelcius>'||temperature||'</ncelcius></celciustofahrenheit>');
  response      := scott.utl_dbws.invoke(call_, request);
  RETURN response.extract('//CelciusToFahrenheitResult/child::text()', 'xmlns="http://webservices.daehosting.com/temperature"').getstringval();
END;
/

Now test:

SELECT celciusToFahrenheit(30) FROM dual;

CELCIUSTOFAHRENHEIT(30)
--------------------------------------------------------------------------------
86


ADF Mobile Design: 10 Mobile Design Principles

  1. Know your end users. 
    Before designing, spend time getting to know the role of your end users and the specific needs that they will have while mobile. Understanding key characteristics of your users, their work environments, and the tasks that they perform will help ensure that your product has the right features and optimal user experience. We have created a set of mobile personas that are based on international ethnographic research for the following roles: sales representative, field service technician, manager, retail merchandiser, and Generation Y.
  2. Define the essential mobile tasks. 
    A mobile application must be able to stand alone and not depend on an understanding of a related desktop version. When assessing how to convert an existing desktop application into a mobile design, be aware that the key task for the desktop may be quite different from that of the mobile. We strongly encourage you to think through the mobile use case rather than relying on the desktop workflows. Do not hesitate to eliminate tasks that are not essential to the mobile requirements. Successful mobile applications are frequently simplified to primary tasks, such as searching for coworkers or managing task lists. 
  3. Design contextually.
    Mobile applications are used in trains, warehouses, taxis, oil platforms, and more. Designs must work in the target work environment and maximize context awareness of the mobile device. A GPS-enabled device that maps sales representatives’ locations helps them arrive at their next appointments on time. A device that scans and displays information about an asset, such as details of the last service, helps a technician bring the right parts and make the appropriate repairs.
  4. Create a flattened navigation model. 
    Users do not want to traverse deep structures to complete a task. Flatten the navigation model for quick access to key task flows. Once users navigate to begin their work, provide a clear understanding of where they are and how they can return to their starting point. Instead of having a user log in, access a section, find an object, and then perform a task, a flattened navigation model provides quick access to the mobile task immediately after login.
  5. Design for the two-minutes-to-get-it-done mobile work style. 
    Typically, mobile devices are used in short bursts, while personal computers are used for extended work sessions. Mobile users will not tolerate designs that require a lot of data entry or multistep flows because they need to get to work faster and complete their tasks more quickly. Tasks should be simple and quick to complete.
  6. Integrate analytics. 
    Analytics and business intelligence are not limited to the desktop. Mobile users need analytics that work for small screens. A regional sales manager might see a screen that highlights store locations with the biggest sales data from last quarter or last year. A color-coded grid of locations versus key metrics draws attention to good, moderate, and bad situations. The first step in determining which analytics will be useful is to understand the mobile use case and how to integrate analytics that help decision making. A needless insertion of analytics takes up valuable real estate and makes it harder for mobile workers to do their jobs.
  7. Simplify search requirements. 
    Search is an important part of mobile applications and must be quickly accessible. Because mobile data entry is more difficult, simplify search entry requirements to a single field when possible and place the field above related data. If a user is on an inventory screen within a handheld application and initiates a search for an item, all results should relate to inventory attributes. When multiple-field searches are required, load field values with default data that the user can change with choice lists and lists of values. Do not require the user to enter text in multiple fields.
  8. Embed collaboration. 
    Embed collaboration into workflows, and include triggers to call a person, connect to a social network, and text using SMS and IM. The proliferation of social networking within the work environment demonstrates the increased importance of keeping in touch with colleagues and affiliated professionals. Mobility extends this trend by keeping coworkers in contact more often and in more places.
  9. Progressively disclose information. 
    Because screen real estate is precious on handheld devices, carefully consider the type and quantity of data that you display when designing the application. Information must be concisely summarized with basic overviews and limited actions. Details and additional actions should be available in drill-down pages. An example of progressive disclosure is used by mobile email programs that simplify functions in the overview interface and require a user to select an email address and only then provide functions for reply, delete, forward, and so on from the resulting details screen.
  10. Leverage the mobile platform. 
    Mobile applications can be built to run in the browser or as native applications installed on the device. Enterprise applications should leverage mobile capabilities that enable a user to tap a phone number to call or text, touch an address to map its location, and rotate the device for an alternate view. Native enterprise applications enable more integration than those applications run in the browser and provide the ability to transfer enterprise data to local built-in applications, such as calendars and contacts, so that users can view important business information without signing in. Understanding each platform and maximizing the appropriate mobile actions will ensure a productive and natural mobile experience.

Oracle 12c New Features.

Oracle 12c means different things to different people; it all depends on which areas you are looking at, as there are improvements in many of them. Summarized below are the top 12 features of Oracle 12c that I found most interesting.

01. Pluggable Databases Through Database Consolidation:

Oracle is doing everything it can to jump on the cloud bandwagon. With 12c, Oracle is trying to address the problem of multitenancy through this feature. There is a radical change in the core database architecture through the introduction of Container Databases (CDB) and Pluggable Databases (PDB). The memory and processes are owned by the container database. The container holds the metadata, while the PDBs hold the user data. You can create up to 253 PDBs, including the seed PDB.

In a large setup, it is common to see 20 or 30 different instances running in production environment. With these many instances, it is a maintenance nightmare as all these instances have to be separately

•Upgraded
•Patched
•Monitored
•Tuned
•RAC Enabled
•Adjusted
•Backed up and
•Data Guarded.

With the Pluggable Databases feature, you just have to do all this for ONE instance. Prior to 12c, without this feature, you would have to create separate schemas, and there was always a security threat however much isolation you built in, along with namespace conflicts (a given public synonym can only be created once per database). With PDBs you can have a separate HR or SCOTT schema in each PDB, separate EMP and DEPT tables, and separate public synonyms. Additionally, two PDBs can talk to each other through the regular DB link feature. There is no longer a high startup cost to creating a database; the shift is from one instance per database to one instance serving many databases. The developer community can remain oblivious to all this and continue to use a PDB as if it were a traditional database, but for DBAs the world will look like it has changed a lot.
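As a minimal sketch of the new DDL (the PDB name, admin user and file paths are illustrative), creating and opening a PDB from the seed looks like this:

CREATE PLUGGABLE DATABASE hr_pdb
  ADMIN USER pdb_adm IDENTIFIED BY pdb_pwd
  FILE_NAME_CONVERT = ('/u01/oradata/cdb1/pdbseed/', '/u01/oradata/cdb1/hr_pdb/');

ALTER PLUGGABLE DATABASE hr_pdb OPEN;
ALTER SESSION SET CONTAINER = hr_pdb;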

Another cool feature is that you can allocate a CPU percentage to each PDB.

On a related note, Oracle announced a strategic tie-up with salesforce.com during the first week of July 2013.

 

02. Redaction Policy:

This is one of the top features in Oracle 12c. Data redaction, in simple terms, means masking of data. You can set up a data redaction policy; for example, the SSN column in an Employee table can be masked. From SQL Developer you can do this by going to the table: Employee -> right-click Security Policy -> New -> Redaction Policy -> enter SSN.
When you then run select * from employee, the SSN data is shown masked.
The new data masking uses a package called DBMS_REDACT; it is an extension of the FGAC and VPD features present in earlier versions.
With a policy in place, users who need to view the data can still see it, while all other users cannot.
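The same kind of policy can also be created directly in PL/SQL; a minimal sketch (the schema, table and column names are illustrative):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEE',
    policy_name   => 'redact_ssn',
    column_name   => 'SSN',
    function_type => DBMS_REDACT.FULL,  -- full redaction; partial and regexp variants also exist
    expression    => '1=1');            -- applies to every query
END;
/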

 

03. Top N Query, and FETCH/OFFSET as a Replacement for ROWNUM:

With the release of Oracle Database 12c, Oracle has introduced new SQL syntax to simplify fetching the first few rows: the "FETCH FIRST x ROWS ONLY" clause.
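For example, against the standard SCOTT.EMP table, the top five earners and then the next five:

SELECT empno, sal FROM emp ORDER BY sal DESC
FETCH FIRST 5 ROWS ONLY;

SELECT empno, sal FROM emp ORDER BY sal DESC
OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;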

 

04. Adaptive Query Optimization and Online Stats Gathering:

This feature helps the optimizer make runtime adjustments to the execution plan, which leads to better plans based on better statistics. For statements like CTAS (Create Table As Select) and IAS (Insert As Select), the statistics are gathered online so that they are available immediately.
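A quick way to see this (a sketch using SCOTT.EMP): after a CTAS in 12c, the statistics should already be in place without a separate DBMS_STATS call.

CREATE TABLE emp_copy AS SELECT * FROM emp;

SELECT table_name, num_rows, last_analyzed
FROM   user_tab_statistics
WHERE  table_name = 'EMP_COPY';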

 

05. Restore a Table easily through RMAN:

Previously, if you had to restore a particular table, you had to resort to things like restoring a whole tablespace and/or doing an export and import. The new RECOVER TABLE command in RMAN simplifies this task.
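A hedged sketch of the 12c syntax (the table, SCN and auxiliary destination are illustrative):

RMAN> RECOVER TABLE scott.emp
        UNTIL SCN 1853267
        AUXILIARY DESTINATION '/u01/aux'
        REMAP TABLE scott.emp:emp_recovered;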

 

06. Size Limit on Varchar2, NVarchar2, Raw Data Types increased:

The previous limit on these data types was 4K. In 12c, it has been increased to 32,767 bytes. Up to 4K, the data is stored inline. I am sure everyone will be happy with this small and neat enhancement.
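Note that the extended limit has to be enabled first through the MAX_STRING_SIZE=EXTENDED parameter (a one-way change that, per the documentation, is performed in UPGRADE mode together with the utl32k.sql script). Once enabled:

CREATE TABLE big_note (
  id   NUMBER,
  note VARCHAR2(32767)  -- values beyond about 4K are stored out of line
);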

 

07. Inline PL/SQL Functions and Procedures:

The inline feature is extended in Oracle 12c. In addition to views, we can now define PL/SQL procedures and functions as inline constructs in the WITH clause of a query. The query can be written as if it were calling a real stored function, yet the function does not actually exist in the database: you will not find it in ALL_OBJECTS. I think this will be a very good feature for developers to explore, as there is no stored code that needs to be compiled.
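A minimal sketch of the 12c WITH FUNCTION syntax (run as a standalone statement; in SQL*Plus terminate it with a slash):

WITH
  FUNCTION dept_label(p_deptno NUMBER) RETURN VARCHAR2 IS
  BEGIN
    RETURN 'DEPT-' || TO_CHAR(p_deptno);
  END;
SELECT empno, dept_label(deptno) AS dept
FROM   emp
/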

 

08. Generated as Identity/Sequence Replacement:

You can now create a column with the GENERATED AS IDENTITY clause; that's it. Doing this is equivalent to creating a separate sequence and calling sequence.NEXTVAL for each row. This is another handy, neat feature that will help the developer community; it is sometimes described as an auto-increment primary key with no explicit sequence.
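For example (an illustrative table):

CREATE TABLE orders (
  id   NUMBER GENERATED ALWAYS AS IDENTITY,
  item VARCHAR2(30)
);

INSERT INTO orders (item) VALUES ('Widget');  -- id is assigned automatically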

 

09. Multiple Indexes on a Single Column:

Prior to 12c, a column could not be part of more than one index. In 12c, you can include a column in both a B-tree index and a bitmap index. Note, however, that only one of the indexes is usable (visible) at a given time.
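The second index on the column has to be created INVISIBLE (illustrative names, using SCOTT.EMP):

CREATE INDEX emp_deptno_ix ON emp(deptno);

CREATE BITMAP INDEX emp_deptno_bix ON emp(deptno) INVISIBLE;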

 

10. Online Migration of Table Partition or Sub Partition:

You can very easily migrate a partition or sub-partition from one tablespace to another. Similar to how online migration was achieved for a non-partitioned table in prior releases, a table partition or sub-partition can now be moved to another tablespace online or offline. When the ONLINE clause is specified, all DML operations can be performed without any interruption on the partition or sub-partition involved. In contrast, no DML operations are allowed if the partition or sub-partition is moved offline.
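For example (the partition and tablespace names are illustrative):

ALTER TABLE sales MOVE PARTITION sales_q1_2013
  TABLESPACE new_ts ONLINE UPDATE INDEXES;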

 

11. Temporary UNDO:

Prior to 12c, undo records generated by temporary tables were stored in the undo tablespace. With the temporary undo feature in 12c, those undo records can be stored in the temporary tablespace instead. The benefits are reduced undo tablespace usage and reduced redo generation. The feature is controlled by the TEMP_UNDO_ENABLED parameter:

SQL> ALTER SYSTEM SET TEMP_UNDO_ENABLED=TRUE;
SQL> ALTER SESSION SET TEMP_UNDO_ENABLED=TRUE;  -- can also be enabled per session

 

12. In Database Archiving:

This feature enables archiving of rows within a table by marking them as inactive. The inactive rows remain in the database and can be optimized using compression, but they are not visible to the application and are skipped during a full table scan (FTS).
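A short sketch of the feature (the table name is illustrative). Enabling ROW ARCHIVAL adds a hidden ORA_ARCHIVE_STATE column that controls row visibility:

ALTER TABLE orders ROW ARCHIVAL;

UPDATE orders SET ora_archive_state = '1' WHERE id = 42;  -- mark the row inactive

-- Archived rows are invisible by default; to see them as well:
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;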

Tablespace Encryption in Oracle

Oracle 11g Database Release 1

The Transparent Data Encryption (TDE) feature was introduced in Oracle 10g Database Release 2 to simplify the encryption of data within datafiles, preventing access to it from the operating system. Tablespace encryption extends this technology, allowing encryption of the entire contents of a tablespace, rather than having to configure encryption on a column-by-column basis.

  • Wallet Creation
  • Tablespace Creation
  • Test Encryption

Wallet Creation

The environment setup for tablespace encryption is the same as that for transparent data encryption. Before attempting to create an encrypted tablespace, a wallet must be created to hold the encryption key. The search order for finding the wallet is as follows:

  1. If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
  2. If present, the location specified by the WALLET_LOCATION parameter in the sqlnet.ora file.
  3. The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).

Although encrypted tablespaces can share the default database wallet, Oracle recommends using a separate wallet for transparent data encryption functionality by specifying the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file. To accomplish this, we add the following entry into the sqlnet.ora file on the server and make sure the specified directory has been created.

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/u01/app/oracle/admin/DB11G/encryption_wallet/)))

This parameter can also be used to identify a Hardware Security Module (HSM) as the location for the wallet.

The following command creates and opens the wallet.

CONN sys/password@db11g AS SYSDBA

ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "myPassword";

Wallets must be reopened after an instance restart and can be closed to prevent access to encrypted data.

ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "myPassword";

ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

Tablespace Creation

Encrypted tablespaces are created by specifying the ENCRYPTION clause with an optional USING clause to specify the encryption algorithm. If the USING clause is omitted, the encryption algorithm defaults to 'AES128'. In addition, the default storage clause of ENCRYPT must be specified. Tablespace encryption does not allow the NO SALT option that is available in TDE. The following statement creates an encrypted tablespace by explicitly naming the 'AES256' encryption algorithm in the USING clause.

CREATE TABLESPACE encrypted_ts
  DATAFILE '/u01/app/oracle/oradata/DB11G/encrypted_ts01.dbf' SIZE 128K
  AUTOEXTEND ON NEXT 64K
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE(ENCRYPT);

ALTER USER test QUOTA UNLIMITED ON encrypted_ts;

The ENCRYPTED column of the DBA_TABLESPACES and USER_TABLESPACES views indicates if the tablespace is encrypted or not.

SELECT tablespace_name, encrypted FROM dba_tablespaces;

TABLESPACE_NAME                ENC
------------------------------ ---
SYSTEM                         NO
SYSAUX                         NO
UNDOTBS1                       NO
TEMP                           NO
USERS                          NO
ENCRYPTED_TS                   YES

6 rows selected.

SQL>

Regular tablespaces cannot be converted to encrypted tablespaces. Instead, data must be transferred manually using export/import, "ALTER TABLE ... MOVE ..." or "CREATE TABLE ... AS SELECT * FROM ...".

Test Encryption

With the tablespace in place, we can create some objects to test the encryption. The following code creates a table and index in the encrypted tablespace and inserts a single row into the table.

CONN test/test@db11g

CREATE TABLE ets_test (
  id    NUMBER(10),
  data  VARCHAR2(50)
)
TABLESPACE encrypted_ts;

CREATE INDEX ets_test_idx ON ets_test(data) TABLESPACE encrypted_ts;

INSERT INTO ets_test (id, data) VALUES (1, 'This is a secret!');
COMMIT;

Flush the buffer cache to make sure the data is written to the datafile.

CONN sys/password@db11g AS SYSDBA

ALTER SYSTEM FLUSH BUFFER_CACHE;

When the file is opened using a HEX editor (like UltraEdit) only non-printable characters are present. The 'This is a secret!' string is not visible in the table or index data within the encrypted tablespace.

When you are finished testing the encrypted tablespace, be sure to clean up the tablespace and associated datafile.

DROP TABLESPACE encrypted_ts INCLUDING CONTENTS AND DATAFILES;

Data Encryption in Oracle

Transparent Data Encryption (TDE) in Oracle 10g database

Oracle has many security features available within the database, but until now there has been no “out-of-the-box” method for protecting the data at the operating system level. The Transparent Data Encryption (TDE) feature introduced in Oracle 10g Database Release 2 allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. This article presents some basic examples of its use.

  • Setup
  • Normal Column
  • Encrypted Column
  • Performance
  • External Tables
  • Views

Setup

In order to show the encryption working we need to open a datafile in a HEX editor. Rather than trying to open a huge datafile, it makes sense to create a small file for this test.

CONN sys/password AS SYSDBA

CREATE TABLESPACE tde_test
  DATAFILE '/u01/oradata/DB10G/tde_test.dbf' SIZE 128K
  AUTOEXTEND ON NEXT 64K;

Next, create a user with a quota on the new tablespace.

CREATE USER test IDENTIFIED BY test DEFAULT TABLESPACE tde_test;
ALTER USER test QUOTA UNLIMITED ON tde_test;
GRANT CONNECT TO test;
GRANT CREATE TABLE TO test;

Normal Column

First we will prove that the data from a normal column can be seen from the OS. To do this create a test table and insert some data.

CONN test/test

CREATE TABLE tde_test (
  id    NUMBER(10),
  data  VARCHAR2(50)
)
TABLESPACE tde_test;

INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');
COMMIT;

Then flush the buffer cache to make sure the data is written to the datafile.

CONN sys/password AS SYSDBA

ALTER SYSTEM FLUSH BUFFER_CACHE;

Open the datafile using a HEX editor (like UltraEdit) and the sentence "This is a secret!" is clearly visible amongst all the non-printable characters.

Encrypted Column

Before attempting to create a table with encrypted columns, a wallet must be created to hold the encryption key. The search order for finding the wallet is as follows:

  1. If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
  2. If present, the location specified by the WALLET_LOCATION parameter in the sqlnet.ora file.
  3. The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).

Although encrypted columns can share the default database wallet, Oracle recommends using a separate wallet for transparent data encryption functionality by specifying the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file. To accomplish this, we add the following entry into the sqlnet.ora file on the server and make sure the specified directory has been created.

ENCRYPTION_WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)(METHOD_DATA=
    (DIRECTORY=/u01/app/oracle/admin/DB10G/encryption_wallet/)))

The following command creates and opens the wallet.

CONN sys/password AS SYSDBA

-- 10g version
ALTER SYSTEM SET ENCRYPTION KEY AUTHENTICATED BY "myPassword";

-- 11g version
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "myPassword";

Wallets must be reopened after an instance restart and can be closed to prevent access to encrypted columns.

-- 10g version
ALTER SYSTEM SET ENCRYPTION WALLET OPEN AUTHENTICATED BY "myPassword";

-- 11g version
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "myPassword";

ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;

Create a test table with an encrypted column and insert some data. Using the ENCRYPT clause on its own is the same as using the ENCRYPT USING 'AES192' clause, as AES192 is the default encryption method.

CONN test/test

DROP TABLE tde_test;
PURGE RECYCLEBIN;

CREATE TABLE tde_test (
  id    NUMBER(10),
  data  VARCHAR2(50) ENCRYPT
)
TABLESPACE tde_test;

INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');
COMMIT;

Flush the buffer cache to make sure the data is written to the datafile.

CONN sys/password AS SYSDBA

ALTER SYSTEM FLUSH BUFFER_CACHE;

When the file is opened using a HEX editor only non-printable characters are present. The test sentence cannot be seen anywhere, but the data is still clearly visible from a database connection.

SELECT * FROM tde_test;

        ID DATA
---------- --------------------------------------------------
         1 This is a secret!

1 row selected.

Performance

There is a performance overhead associated with the encryption/decryption process. The following tables are used in a performance comparison.

CONN test/test

CREATE TABLE tde_test_1 (
  id    NUMBER(10),
  data  VARCHAR2(50)
)
TABLESPACE tde_test;

CREATE TABLE tde_test_2 (
  id    NUMBER(10),
  data  VARCHAR2(50) ENCRYPT
)
TABLESPACE tde_test;

The following script uses these tables to compare the speed of regular and encrypted inserts, and of regular and decrypted queries. Each test repeats 1000 times, with the timings reported in hundredths of a second.

SET SERVEROUTPUT ON SIZE UNLIMITED
DECLARE
  l_loops  NUMBER := 1000;
  l_data   VARCHAR2(50);
  l_start  NUMBER;
BEGIN
  EXECUTE IMMEDIATE 'TRUNCATE TABLE tde_test_1';
  EXECUTE IMMEDIATE 'TRUNCATE TABLE tde_test_2';

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    INSERT INTO tde_test_1 (id, data)
    VALUES (i, 'Data for ' || i);
  END LOOP;
  DBMS_OUTPUT.put_line('Normal Insert   : ' || (DBMS_UTILITY.get_time - l_start));

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    INSERT INTO tde_test_2 (id, data)
    VALUES (i, 'Data for ' || i);
  END LOOP;
  DBMS_OUTPUT.put_line('Encrypted Insert: ' || (DBMS_UTILITY.get_time - l_start));

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    SELECT data
    INTO   l_data
    FROM   tde_test_1
    WHERE  id = i;
  END LOOP;
  DBMS_OUTPUT.put_line('Normal Query    : ' || (DBMS_UTILITY.get_time - l_start));

  l_start := DBMS_UTILITY.get_time;
  FOR i IN 1 .. l_loops LOOP
    SELECT data
    INTO   l_data
    FROM   tde_test_2
    WHERE  id = i;
  END LOOP;
  DBMS_OUTPUT.put_line('Decrypted Query : ' || (DBMS_UTILITY.get_time - l_start));
END;
/

Normal Insert   : 31
Encrypted Insert: 45
Normal Query    : 42
Decrypted Query : 58

PL/SQL procedure successfully completed.

SQL>

The results clearly demonstrate that encrypted inserts and decrypted queries are slower than their normal counterparts.

External Tables

External tables can be encrypted in a similar way to regular tables. First, we make sure the default data pump directory is available to the test user.

CONN sys/password AS SYSDBA

GRANT READ, WRITE ON DIRECTORY data_pump_dir TO test;

Next, we create the external table as a copy of an existing table, using the ENCRYPT clause.

CONN test/test

CREATE TABLE tde_test_1_ext (
  id,
  data ENCRYPT IDENTIFIED BY "myPassword"
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_DATAPUMP
  DEFAULT DIRECTORY data_pump_dir
  LOCATION ('tde_test_1_ext.dmp')
)
AS
SELECT id,
       data
FROM   tde_test_1;

Views

The %_ENCRYPTED_COLUMNS views are used to display information about encrypted columns.

SET LINESIZE 100
COLUMN owner FORMAT A15
COLUMN table_name FORMAT A15
COLUMN column_name FORMAT A15

SELECT * FROM dba_encrypted_columns;

OWNER           TABLE_NAME      COLUMN_NAME     ENCRYPTION_ALG                SAL
--------------- --------------- --------------- ----------------------------- ---
TEST            TDE_TEST_2      DATA            AES 192 bits key              YES
TEST            TDE_TEST_1_EXT  DATA            AES 192 bits key              YES

2 rows selected.

ALL, ANY and SOME Comparison Conditions in SQL

It is quite possible you could work with Oracle databases for many years and never come across the ALL, ANY and SOME comparison conditions in SQL because there are alternatives to them that are used more regularly. If you are planning to sit the Oracle Database SQL Expert (1Z0-047) exam you should be familiar with these conditions as they are used frequently in the questions.

All you need to know for the exam is the usage and characteristics of these comparison conditions. The article also includes information about the transformations the optimizer performs when processing them. This extra information is not a requirement for the exam, but it goes some way to explaining why you rarely see people using these conditions. Each of the SQL statements will be run against the SCOTT.EMP table to show that the transformations are equivalent.

SQL> SELECT * FROM emp;

     EMPNO ENAME      JOB	       MGR HIREDATE		       SAL	 COMM	  DEPTNO
---------- ---------- --------- ---------- -------------------- ---------- ---------- ----------
      7369 SMITH      CLERK	      7902 17-DEC-1980 00:00:00        800		      20
      7499 ALLEN      SALESMAN	      7698 20-FEB-1981 00:00:00       1600	  300	      30
      7521 WARD       SALESMAN	      7698 22-FEB-1981 00:00:00       1250	  500	      30
      7566 JONES      MANAGER	      7839 02-APR-1981 00:00:00       2975		      20
      7654 MARTIN     SALESMAN	      7698 28-SEP-1981 00:00:00       1250	 1400	      30
      7698 BLAKE      MANAGER	      7839 01-MAY-1981 00:00:00       2850		      30
      7782 CLARK      MANAGER	      7839 09-JUN-1981 00:00:00       2450		      10
      7788 SCOTT      ANALYST	      7566 19-APR-1987 00:00:00       3000		      20
      7839 KING       PRESIDENT 	   17-NOV-1981 00:00:00       5000		      10
      7844 TURNER     SALESMAN	      7698 08-SEP-1981 00:00:00       1500	    0	      30
      7876 ADAMS      CLERK	      7788 23-MAY-1987 00:00:00       1100		      20
      7900 JAMES      CLERK	      7698 03-DEC-1981 00:00:00        950		      30
      7902 FORD       ANALYST	      7566 03-DEC-1981 00:00:00       3000		      20
      7934 MILLER     CLERK	      7782 23-JAN-1982 00:00:00       1300		      10

SQL>
  • ALL
  • ANY
  • SOME

ALL

The ALL comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <= or >= and followed by a list or subquery.

When the ALL condition is followed by a list, the optimizer expands the initial condition to all elements of the list and strings them together with AND operators, as shown below.

SELECT empno, sal
FROM   emp
WHERE  sal > ALL (2000, 3000, 4000);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement without ALL.

SELECT empno, sal
FROM   emp
WHERE  sal > 2000 AND sal > 3000 AND sal > 4000;

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

When the ALL condition is followed by a subquery, the optimizer performs a two-step transformation as shown below.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ALL (SELECT e2.sal
                     FROM   emp e2
                     WHERE  e2.deptno = 20);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement using ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  NOT (e1.sal <= ANY (SELECT e2.sal
                           FROM emp e2
                           WHERE e2.deptno = 20));

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

-- Transformed to equivalent statement without ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  NOT EXISTS (SELECT e2.sal
                   FROM emp e2
                   WHERE e2.deptno = 20
                   AND   e1.sal <= e2.sal);

     EMPNO        SAL
---------- ----------
      7839       5000

SQL>

Assuming subqueries don't return zero rows, the following statements can be made for both list and subquery versions:

  • “x = ALL (…)”: The value must match all the values in the list to evaluate to TRUE.
  • “x != ALL (…)”: The value must not match any values in the list to evaluate to TRUE.
  • “x > ALL (…)”: The value must be greater than the biggest value in the list to evaluate to TRUE.
  • “x < ALL (…)”: The value must be smaller than the smallest value in the list to evaluate to TRUE.
  • “x >= ALL (…)”: The value must be greater than or equal to the biggest value in the list to evaluate to TRUE.
  • “x <= ALL (…)”: The value must be smaller than or equal to the smallest value in the list to evaluate to TRUE.

If a subquery returns zero rows, the condition evaluates to TRUE.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ALL (SELECT e2.sal FROM emp e2 WHERE e2.deptno = 100);

     EMPNO        SAL
---------- ----------
      7369        800
      7900        950
      7876       1100
      7521       1250
      7654       1250
      7934       1300
      7844       1500
      7499       1600
      7782       2450
      7698       2850
      7566       2975
      7788       3000
      7902       3000
      7839       5000

SQL>

ANY

The ANY comparison condition is used to compare a value to a list or subquery. It must be preceded by =, !=, >, <, <= or >= and followed by a list or subquery.

When the ANY condition is followed by a list, the optimizer expands the initial condition to all elements of the list and strings them together with OR operators, as shown below.

SELECT empno, sal
FROM   emp
WHERE  sal > ANY (2000, 3000, 4000);

     EMPNO        SAL
---------- ----------
      7566       2975
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7902       3000

SQL>

-- Transformed to equivalent statement without ANY.

SELECT empno, sal
FROM   emp
WHERE  sal > 2000 OR sal > 3000 OR sal > 4000;

     EMPNO        SAL
---------- ----------
      7566       2975
      7698       2850
      7782       2450
      7788       3000
      7839       5000
      7902       3000

SQL>

When the ANY condition is followed by a subquery, the optimizer performs a single transformation as shown below.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ANY (SELECT e2.sal
                     FROM   emp e2
                     WHERE  e2.deptno = 10);

     EMPNO        SAL
---------- ----------
      7839       5000
      7902       3000
      7788       3000
      7566       2975
      7698       2850
      7782       2450
      7499       1600
      7844       1500

SQL>

-- Transformed to equivalent statement without ANY.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  EXISTS (SELECT e2.sal
               FROM emp e2
               WHERE e2.deptno = 10
               AND   e1.sal > e2.sal);

     EMPNO        SAL
---------- ----------
      7839       5000
      7902       3000
      7788       3000
      7566       2975
      7698       2850
      7782       2450
      7499       1600
      7844       1500

SQL>

Assuming subqueries don’t return zero rows, the following statements can be made for both list and subquery versions:

  • “x = ANY (…)”: The value must match one or more values in the list to evaluate to TRUE.
  • “x != ANY (…)”: The value must not match one or more values in the list to evaluate to TRUE.
  • “x > ANY (…)”: The value must be greater than the smallest value in the list to evaluate to TRUE.
  • “x < ANY (…)”: The value must be smaller than the biggest value in the list to evaluate to TRUE.
  • “x >= ANY (…)”: The value must be greater than or equal to the smallest value in the list to evaluate to TRUE.
  • “x <= ANY (…)”: The value must be smaller than or equal to the biggest value in the list to evaluate to TRUE.

If a subquery returns zero rows, the condition evaluates to FALSE.

SELECT e1.empno, e1.sal
FROM   emp e1
WHERE  e1.sal > ANY (SELECT e2.sal FROM emp e2 WHERE e2.deptno = 100);

no rows selected

SQL>

SOME

The SOME and ANY comparison conditions do exactly the same thing and are completely interchangeable.
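For example, the following two queries are interchangeable:

SELECT empno, sal
FROM   emp
WHERE  sal > SOME (2000, 3000, 4000);

SELECT empno, sal
FROM   emp
WHERE  sal > ANY (2000, 3000, 4000);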

Autonomous Transactions

Autonomous transactions allow you to leave the context of the calling transaction, perform an independent transaction, and return to the calling transaction without affecting its state. The autonomous transaction has no link to the calling transaction, so only committed data can be shared by both transactions.

The following types of PL/SQL blocks can be defined as autonomous transactions:

  • Stored procedures and functions.
  • Local procedures and functions defined in a PL/SQL declaration block.
  • Packaged procedures and functions.
  • Type methods.
  • Top-level anonymous blocks.

The easiest way to understand autonomous transactions is to see them in action. To do this, we create a test table and populate it with two rows. Notice that the data is not committed.

CREATE TABLE at_test (
  id           NUMBER       NOT NULL,
  description  VARCHAR2(50) NOT NULL
);

INSERT INTO at_test (id, description) VALUES (1, 'Description for 1');
INSERT INTO at_test (id, description) VALUES (2, 'Description for 2');

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2

2 rows selected.

SQL>

Next, we insert another 8 rows using an anonymous block declared as an autonomous transaction, which contains a commit statement.

DECLARE
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  FOR i IN 3 .. 10 LOOP
    INSERT INTO at_test (id, description)
    VALUES (i, 'Description for ' || i);
  END LOOP;
  COMMIT;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         1 Description for 1
         2 Description for 2
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

10 rows selected.

SQL>

As expected, we now have 10 rows in the table. If we now issue a rollback statement we get the following result.

ROLLBACK;
SELECT * FROM at_test;

        ID DESCRIPTION
---------- --------------------------------------------------
         3 Description for 3
         4 Description for 4
         5 Description for 5
         6 Description for 6
         7 Description for 7
         8 Description for 8
         9 Description for 9
        10 Description for 10

8 rows selected.

SQL>

The two rows inserted by our current session (transaction) have been rolled back, while the rows inserted by the autonomous transaction remain. The presence of the PRAGMA AUTONOMOUS_TRANSACTION compiler directive made the anonymous block run in its own transaction, so the internal commit statement did not affect the calling session. As a result, the rollback was still able to undo the DML issued by the current transaction.

Autonomous transactions are commonly used by error logging routines, where the error messages must be preserved, regardless of the commit/rollback status of the transaction. For example, the following table holds basic error messages.

CREATE TABLE error_logs (
  id             NUMBER(10)     NOT NULL,
  log_timestamp  TIMESTAMP      NOT NULL,
  error_message  VARCHAR2(4000),
  CONSTRAINT error_logs_pk PRIMARY KEY (id)
);

CREATE SEQUENCE error_logs_seq;

We define a procedure to log error messages as an autonomous transaction.

CREATE OR REPLACE PROCEDURE log_errors (p_error_message  IN  VARCHAR2) AS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  INSERT INTO error_logs (id, log_timestamp, error_message)
  VALUES (error_logs_seq.NEXTVAL, SYSTIMESTAMP, p_error_message);
  COMMIT;
END;
/

The following code forces an error, which is trapped and logged.

BEGIN
  INSERT INTO at_test (id, description)
  VALUES (998, 'Description for 998');

  -- Force invalid insert.
  INSERT INTO at_test (id, description)
  VALUES (999, NULL);
EXCEPTION
  WHEN OTHERS THEN
    log_errors (p_error_message => SQLERRM);
    ROLLBACK;
END;
/

PL/SQL procedure successfully completed.

SELECT * FROM at_test WHERE id >= 998;

no rows selected

SELECT * FROM error_logs;

        ID LOG_TIMESTAMP
---------- ---------------------------------------------------------------------------
ERROR_MESSAGE
----------------------------------------------------------------------------------------------------
         1 28-FEB-2006 11:10:10.107625
ORA-01400: cannot insert NULL into ("TIM_HALL"."AT_TEST"."DESCRIPTION")

1 row selected.

SQL>

From this we can see that the LOG_ERRORS transaction was separate from the anonymous block. If it weren't, we would expect the first insert in the anonymous block to be preserved by the commit statement in the LOG_ERRORS procedure.

Global Temporary Tables

Applications often use some form of temporary data store for processes that are too complicated to complete in a single pass. Often, these temporary stores are defined as database tables or PL/SQL tables.

  • Creation of Global Temporary Tables
  • Miscellaneous Features

Creation of Global Temporary Tables

The data in a global temporary table is private, such that data inserted by a session can only be accessed by that session. The session-specific rows in a global temporary table can be preserved for the whole session, or just for the current transaction. The ON COMMIT DELETE ROWS clause indicates that the data should be deleted at the end of the transaction.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  column1  NUMBER,
  column2  NUMBER
) ON COMMIT DELETE ROWS;

In contrast, the ON COMMIT PRESERVE ROWS clause indicates that rows should be preserved until the end of the session.

CREATE GLOBAL TEMPORARY TABLE my_temp_table (
  column1  NUMBER,
  column2  NUMBER
) ON COMMIT PRESERVE ROWS;
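To see the ON COMMIT DELETE ROWS behaviour in action (a short sketch against the first table above):

INSERT INTO my_temp_table (column1, column2) VALUES (1, 2);

SELECT COUNT(*) FROM my_temp_table;  -- 1: the inserting session sees its row

COMMIT;

SELECT COUNT(*) FROM my_temp_table;  -- 0: the commit removed the rows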

Miscellaneous Features

If the TRUNCATE statement is issued against a temporary table, only the session-specific data is truncated. There is no effect on the data of other sessions.
  • Data in temporary tables is stored in temp segments in the temp tablespace.
  • Data in temporary tables is automatically deleted at the end of the database session, even if it ends abnormally.
Indexes can be created on temporary tables. The content of the index has the same session-specific scope as the data in the underlying table.
  • Views can be created against temporary tables and combinations of temporary and permanent tables.
  • Temporary tables can have triggers associated with them.
  • Export and Import utilities can be used to transfer the table definitions, but no data rows are processed.
Statistics on temporary tables are common to all sessions, although Oracle 12c allows session-specific statistics.
  • There are a number of restrictions related to temporary tables but these are version specific.

What is a Global Temporary Table?

First of all, it is a table: it looks like an ordinary (heap) table when you describe it, and you can access it like an ordinary table. Its definition is persistent, but its content is temporary.

There are two flavours of GT table: 

•      Delete on Commit: Data is lost at the end of each transaction, as well as the end of the session.  This is the default action. 

•      Preserve on Commit: The data is preserved at the end of a transaction and only lost when the session terminates. 

This flavour is specified in the CREATE TABLE statement.

CREATE TABLE heap (a NUMBER) TABLESPACE users;

CREATE GLOBAL TEMPORARY TABLE gt (a NUMBER) ON COMMIT PRESERVE ROWS;

Physically, a GT table exists in the temporary segment.  Hence it is not possible to specify any physical storage parameters, or tablespace for the GT table. The values for all these parameters will be reported as NULL on DBA_TABLES for GT tables.  Like an ordinary table, you can create indexes, constraints (other than referential constraints), comments and triggers.  Indexes on GT tables also reside in the temporary segment.  There is no special syntax to be added, but the same restrictions exist on specifying physical storage parameters, which will also be reported as null on DBA_INDEXES. 

There is nothing to stop you from analysing a GT table, but no cost-based optimiser statistics will be generated or stored.  The columns in DBA_TABLES and DBA_INDEXES will also be null.  The only way to specify optimiser statistics is with the DBMS_STATS package (see Oracle8i Supplied PL/SQL Packages Reference). 

The content of the GT is only visible to the session or transaction that created it. Two different sessions can insert identical rows into what is apparently the same GT table with a unique constraint. Physically, each session has its own version of the GT table in the temporary segment, which was created when the session or transaction first referenced it. If one session inserts two identical rows into the same table then a unique constraint error will still be produced as usual. When a session terminates the temporary segment is released. For ON COMMIT DELETE tables, the segment is released at the end of the transaction.

The space overhead in the temporary segment is comparable to that required for a similar heap table. The instantaneous space overhead for GT tables, and any other temporary object, can be observed in v$sort_usage. It is not reported in DBA_SEGMENTS. When a transaction is committed, entries for ON COMMIT DELETE tables disappear.

GT tables are not recoverable, and so redo logging is greatly reduced.  Hence also, it is not possible to specify the LOGGING, NOLOGGING, MONITORING or NOMONITORING options when creating a GT table.  There is still some log information written when accessing a GT table, which is mainly related to rollback segment activity. 

The SQL Reference Manual also lists restrictions on partitioning, clustering, index-organisation, parallel DML, distributed transactions, and nested table or varray type columns. 

How much redo logging is saved? 

In order to quantify the saving in redo logging I constructed a test. I created a number of heap and GT tables with identical structures, but with different numbers of indexes and columns in the indexes. I populated them with a simple PL/SQL loop, and measured the difference in the 'redo size' statistic in v$mystat for different numbers of rows inserted into the tables. I used v$mystat rather than v$sysstat to remove the background noise of other database activity.
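The 'redo size' statistic can be read before and after each test with a query along these lines (v$mystat keys statistics by number, so it is joined to v$statname):

SELECT sn.name, ms.value
FROM   v$mystat ms, v$statname sn
WHERE  ms.statistic# = sn.statistic#
AND    sn.name = 'redo size';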

The proportionate saving in redo is fairly constant for larger numbers of rows, but there is a saving for as few as 20 rows. I have tried this test on different platforms and with different sizes of tables, numbers of indexes, and indexed columns. The results are always similar, but the savings when only working with a small number of rows depend upon the volume of data.

How did we use GT tables? 

Detailed testing and measurement had shown that introducing Global Temporary tables would significantly improve payroll performance.  Their introduction became the major driver for the upgrade from Oracle 8.0 to 8.1.6. 

We changed the intermediate result tables to be global temporary tables. The payroll calculation commits between the sixty or so payroll 'rules'. The temporary calculation results are written to 'temporary calculation result' tables that are truncated between rules. These became DELETE ON COMMIT tables. The retained results are written to 'permanent result' tables that are copied to the final balance table at the end of the calculation for each month. The permanent result tables are truncated between calculations for each month. These became PRESERVE ON COMMIT tables.

No High Watermark Problems

Batch processes sometimes temporarily store data in tables. Such tables are sometimes keyed by a process instance number, so that many instances of the process can run in parallel. Such tables often drive the processing, and so the queries on them tend to access them by full table scans, which scan the table up to the High Water Mark (HWM). An unusually large volume of data to be processed can raise the high water mark, and so increase the process time. Even an insert into a table that is rolled back, rather than committed, can raise the HWM. This sort of table is a good candidate for a GT table. Physically, a copy of the table is created afresh for each instance of the process, so the HWM is always at zero at the start of the process.

In the payroll system we observed that the performance of a payroll process, run in isolation, improved when the intermediate result tables were truncated at the start of the process for each month. 

Truncate Performance Problems

All DDL that performs space management (CREATE TABLE, DROP TABLE etc.) takes out a lock on the Space Transaction enqueue. Thus, only one session can perform space management at a time, and so DDL that performs space management will serialise. This includes the TRUNCATE command. In Oracle 8.0.5 the truncate command repeatedly calls an internal function, kcbcxx(), and generates significant quantities of redo log (bug 650614, fixed in Oracle 8.1.4). This degrades its performance, and aggravates the serialisation.

The payroll process is broken into 14 streams. Each stream processes a different set of employees, and the 14 calculation processes are run in parallel. In order to reset high water marks, and improve the performance of certain scans, the intermediate result tables were truncated at points within the calculation. This created contention between the streams. In Oracle 8.0, we had found it was faster to delete from the tables than to truncate them, even though this further increased redo logging, and we had to reduce the number of concurrent streams to 7.

After introducing GT tables we could have removed some of the truncate or delete statements altogether. However, truncating a GT table is much faster than truncating a heap table, because the space management occurs inside the temporary segment and the space is not truly freed for other segments. These truncates were no longer a problem, so we reverted to truncate. Thus, we retained the ability to run the same code on heap tables for debugging purposes in the development and test environments. Overall performance increased further when we went back to 14 streams.

Quirks 

As I have already said, GT tables are new in Oracle 8.1. I have encountered one genuine bug, and discovered some slightly surprising behaviour.

If you truncate a PRESERVE ON COMMIT table with the REUSE STORAGE option, then nothing happens. The table is not truncated, any data remains in the table, and no error is raised (Oracle bug 1396741). This bug does not apply to DELETE ON COMMIT tables, because the truncate implies a commit. I encountered this because our payroll process truncated the intermediate result tables with the REUSE STORAGE option, and I had simply replaced heap tables with GT tables. The process then crashed with a unique constraint error.

With a PRESERVE ON COMMIT table, you cannot drop the table if DML has been performed on it in any current session, even if all transactions are committed. You get 'ORA-14452: attempt to create, alter or drop an index on temporary table already in use', and all the sessions must be terminated before the object can be dropped. The message itself is misleading, because I was trying to drop the table and there was no index involved. This behaviour does not occur with DELETE ON COMMIT tables, because the DDL implies a commit. When converting a process to use GT tables, beware of performing DDL (other than truncate) on GT tables. Some processes create and drop working storage tables on the fly so that the table names are distinct. This approach is no longer necessary with GT tables, and all processes can share the same table name.

If your default tablespace is a temporary tablespace (not that it ever should be) you will be unable to create a permanent object unless you specify a permanent tablespace. You will also be unable to create an index on a global temporary table, because you will still get 'ORA-02195: Attempt to create PERMANENT object in a TEMPORARY tablespace', and you cannot specify a tablespace because you get 'ORA-14451: unsupported feature with temporary table'. While this is not a problem, it does indicate how this new feature has been bolted onto the existing CREATE TABLE command.

Benefits 

Minimal Code Change: No code change was made to the payroll calculation in order to implement GT tables.  Although another process which issues the DDL to build the sets of intermediate result tables was changed. 

Reduction in Redo Logging: Overall, redo logging generated by the payroll process reduced by 40% due to GT tables alone, and 60% in combination with the truncate bug fix on upgrading to 8.1.6.  Execution time reduced by 67%. 

High Water Marks: GT tables naturally start with a reset HWM.  This improves full scans on them and their indexes. 

Truncate Table: Where it is still necessary to truncate a GT table, it will be even faster than on a heap table. 

Optimiser: Use of GT tables does not force use of the cost based optimiser.

Drawbacks 

You cannot pass data between processes via a GT table, so they are not suitable for use in multi-threaded processes, such as an application server.

During development or debugging it is frequently useful to see data written by a process. If this is written to a GT table, it will be lost when the process terminates. With the payroll process it was possible to switch between GT and heap tables in specific streams.

It is not possible to capture cost based optimiser statistics for a global temporary table with the ANALYZE command. It may be necessary to hint queries that reference GT tables, or to set statistics explicitly with the DBMS_STATS package, as sketched below.
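
A minimal sketch of setting optimiser statistics by hand with DBMS_STATS.SET_TABLE_STATS; the table name and figures are illustrative:

BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => USER,
    tabname => 'GTT_RESULTS',   -- hypothetical GT table
    numrows => 10000,           -- expected row count at run time
    numblks => 200);            -- expected blocks below the HWM
END;
/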

Oracle new features & enhancements, release by release

New Features of Oracle10g release 2:

  • Web server load balancing – The web cache component includes Apache extension to load-balance transactions to the least-highly-loaded Oracle HTTP server (OHS).
  • RAC instance load balancing – Starting in Oracle 10g release 2, Oracle JDBC and ODP.NET provide connection pool load balancing facilities through integration with the new “load balancing advisory” tool.  This replaces the more cumbersome listener-based load balancing technique.
  • Automated Storage Load Balancing – Oracle’s Automatic Storage Management (ASM) now enables a single storage pool to be shared by multiple databases for optimal load balancing.  Shared disk storage resources can alternatively be assigned to individual databases and easily moved from one database to another as processing requirements change.
  • Data Guard Load Balancing – Oracle Data Guard allows for load balancing between standby databases.
  • Listener Load Balancing – If advanced features such as load balancing and automatic failover are desired, there are optional sections of the listener.ora file that must be present.
  • Automatic Storage Management (ASM) includes multiple disk operations and a non-ASM database migration utility
  • Enhancements to data provisioning and Oracle Streams designed to make it easier to archive, move, and copy large data sets
  • New Fast-Start Failover for automatic fail over to standby databases
  • Integrated data encryption and key management in the database
  • Automated statistics collection directly from memory designed to eliminate the need to execute SQL queries
  • New administrative reports include automatic database workload repository comparison
  • Extended use of Standard Chunk sizes – In 10gR2, the server has been enhanced to further leverage standard chunk allocation sizes. This additional improvement reduces the number of problems arising from memory fragmentation.
  • Mutexes – To improve cursor execution and also hard parsing, a new memory serialization mechanism has been created in 10gR2. For certain shared-cursor related operations, mutexes are used as a replacement for library cache latches and library cache pins. Using mutexes is faster, uses less CPU and also allows significantly improved concurrency over the existing latch mechanism. The use of mutexes for cursor pins can be enabled by setting the init.ora parameter _kks_use_mutex_pin to TRUE.
  • V$SGASTAT – V$SGASTAT has been enhanced to display a finer granularity of memory to component allocation within the shared pool. This allows faster diagnosis of memory usage (in prior releases many smaller allocations were grouped under the ‘miscellaneous’ heading).
  • V$SQLSTAT – A new view, V$SQLSTAT, has been introduced which contains SQL-related statistics (such as CPU time, elapsed time, sharable memory). This view is very cheap to query even on high-concurrency systems, as it does not require library cache latches. It contains the most frequently used SQL statistics in the V$SQL family of views (see the query sketch after this list).
  • V$OPEN_CURSOR – This implementation of this view has also been enhanced to be latchless, making it inexpensive to query.
  • V$SQLAREA – The V$SQLAREA view has been improved in 10gR2; the view optimizes the aggregation of the SQL statements while generating the view data.
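
A minimal sketch of the kind of latch-free query V$SQLSTAT allows; the column selection is illustrative:

SELECT sql_id, executions, cpu_time, elapsed_time, sharable_mem
FROM v$sqlstat
ORDER BY cpu_time DESC;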

New Features in Oracle10g release 10.1.0

  • Oracle10g Grid – RAC enhanced for Oracle10g dynamic scalability with server blades
  • Completely reworked 10g Enterprise Manager (OEM)
  • AWR and ASH tables incorporated into OEM Performance Pack and Diagnostic Pack options
  • Active Session History (ASH) materializes the Oracle Wait Interface over time
  • Data Pump replaces the exp/imp utilities with expdp/impdp
  • Automatic Database Diagnostic Monitor (ADDM)
  • Automatic Storage Management (ASM) introduced Stripe And Mirror Everywhere (SAME) standard
  • Automatic Workload Repository (AWR) replaces STATSPACK
  • SQL Tuning Advisor
  • SQL Access Advisor
  • Rolling database upgrades (using Oracle10g RAC)
  • dbms_scheduler package replaces dbms_job for scheduling (see the sketch after this list)
  • OEM Partition Manager introduced
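
A minimal sketch of a dbms_scheduler job; the job name and action are illustrative:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'NIGHTLY_STATS',  -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN DBMS_STATS.GATHER_SCHEMA_STATS(USER); END;',
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',
    enabled         => TRUE);
END;
/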

Miscellaneous Oracle10g enhancements:

  • Set Database Default Tablespace syntax
  • Rename Tablespace command
  • Introduced the RECYCLEBIN, which stores objects after they are dropped; the new PURGE command is required for maintenance.
  • sqlplus / as sysdba accessibility without quote marks
  • SYSAUX tablespace
  • Multiple Temporary Tablespaces supported to reduce stress on sorting in TEMP
  • RMAN introduces compression for backups
  • New drop database syntax
  • New alter database begin backup syntax
  • Oracle10g Data Guard Broker introduced
  • Oracle10g RAC supports secure Redo Log transport
  • Flashback enhancements for flashback database and flashback table syntax (see the sketch after this list)
  • SQL Apply feature
  • VPD (FGAC, RLS) supports both row-level and column-level VPD
  • Cross Platform Transportable Tablespaces
  • External Table unload utility
  • SQL regular expression support (regexp_like, regexp_replace, regexp_instr, regexp_substr)
  • New ROW TIMESTAMP column
  • Improvement to SSL handshake speed
  • Automatic Database Tuning of Checkpoints, Undo Segments and shared memory
  • Automated invoking of dbms_stats for CBO statistics collection
  • RAC introduces Integrated Clusterware
  • Oracle Application Builder supports HTML DB
  • Browser Based Data Workshop and SQL Workshop
  • PL/SQL Compiler enhanced for compile-time Warnings in utl_mail and utl_compress
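
A minimal sketch of the 10g flashback table syntax; the table name is illustrative, and flashing back to a timestamp requires row movement to be enabled:

ALTER TABLE emp ENABLE ROW MOVEMENT;

FLASHBACK TABLE emp TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);

-- A dropped table can be restored from the recycle bin
FLASHBACK TABLE emp TO BEFORE DROP;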

Feature Enhancements in Oracle9i 9.2:

  • Oracle Streams
  • RAC Cluster Guard II introduced
  • XML DB enhanced for XML repository
  • Oracle JVM made compliant with JDK 1.3
  • SQL Apply added to Oracle Data Guard
  • Locally Managed Tablespaces (LMT) for the SYSTEM tablespace
  • Default Install Account locking
  • VPD adds support for table and column synonyms

Major new utilities in Oracle9i – (9.0.1):

  • Oracle Data Guard introduced
  • Oracle9i Real Application Clusters introduced
  • Oracle9i Flashback Utility (dbms_flashback)
  • Online table reorganization with dbms_redefinition
  • Database Configuration Assistant (DBCA) – OEM Wizard
  • Multiple blocksize support
  • New blocksize parameter for create tablespace syntax
  • Easy extraction of DDL with dbms_metadata package
  • System-level triggers – startup, shutdown and DDL triggers
  • Oracle Internet File System (IFS) becomes functional
  • Oracle upsert (MERGE) statement introduced (see the sketch after this list)
  • SQL case statement replaces decode syntax
  • Oracle external tables
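
A minimal sketch of the 9i MERGE (“upsert”) statement, with illustrative table names; in 9i both the matched and not-matched branches are required:

MERGE INTO emp_salary t
USING new_salaries s
ON (t.emp_id = s.emp_id)
WHEN MATCHED THEN
  UPDATE SET t.salary = s.salary
WHEN NOT MATCHED THEN
  INSERT (emp_id, salary) VALUES (s.emp_id, s.salary);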

Feature Enhancements on Oracle9i 9.0.1:

  • New v$sql_plan view allows execution plan analysis (see the query sketch after this list)
  • New set autotrace traceonly explain syntax for SQL*Plus replaces explain plan syntax
  • System-managed rollback segments
  • Updateable Scrollable Cursors
  • analyze command and dbms_utility.analyze_schema replaced by dbms_stats package
  • connect / as sysdba replaces connect internal syntax in SQL*Plus
  • Crummy svrmgrl utility disappears
  • List partitioning and multi-level list-hash partitioning
  • RMAN allows re-startable backups
  • RMAN supports block-level recovery
  • Oracle Ultra Search
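
A minimal sketch of reading a plan from v$sql_plan; in 9i the view is keyed by address/hash_value, and the bind value is illustrative:

SELECT LPAD(' ', depth) || operation || ' ' || options AS plan_step,
       object_name, cost
FROM v$sql_plan
WHERE hash_value = :hv
ORDER BY id;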

Oracle Enhancements 8i (8.1.7)

  • Oracle HTTP server gets Apache extensions
  • Oracle Integration Server (OIS) introduced
  • PL/SQL Gateway introduced for deploying PL/SQL based solutions on the Web
  • Oracle JVM Accelerator
  • Java Server Pages (JSP) engine
  • New memstat utility for analyzing Java Memory footprints
  • OEM auto-discover for new services
  • New UNDO tablespace
  • Oracle Character Set Migration utility

New features in Oracle 8i (8.1.6)

  • PL/SQL Server Pages (PSPs)
  • Oracle DBA Studio Introduced
  • New SQL analytic Functions (rank, moving average)
  • Alter table xxx storage (freelists) command supported
  • Java XML parser
  • PL/SQL dbms_crypto_toolkit encryption package

New Features in Oracle8i 8.1.5:

  • Oracle Log Miner
  • Oracle iFS
  • JAVA stored procedures introduced (Oracle Java VM)
  • Virtual private database (VPD, FGAC) using dbms_rls package (see the sketch after this list)
  • Locally managed tablespace (LMT) reduces dictionary contention
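
A minimal sketch of attaching a VPD policy with dbms_rls; the schema, table, and policy function names are illustrative:

BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'HR',
    object_name     => 'EMP',
    policy_name     => 'EMP_DEPT_POLICY',
    function_schema => 'SEC',
    policy_function => 'EMP_PREDICATE',  -- returns a predicate such as 'deptno = 10'
    statement_types => 'SELECT,UPDATE,DELETE');
END;
/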

New Feature Enhancements in Oracle8i 8.1.5:

  • Function-based indexes (FBI)
  • ESTAT/BSTAT is made obsolete by STATSPACK
  • MTTR Fast Start recovery – Checkpoint rate auto-adjusted to match roll forward criteria
  • Online index rebuilding
  • Support for alter table xxx drop column syntax
  • Oracle Parallel Server (OPS) adds Cache Fusion
  • Advanced Queuing improved
  • New execute immediate syntax (see the sketch after this list)
  • New dbms_debug package
  • Oracle users and roles can be accessed across dblinks for multiple databases
  • Resource Management introduces priorities and resource classes
  • Partitioned tables enhanced for Hash and Composite partitioning
  • SQL*Loader introduces a direct load API
  • dbms_stats enhanced to allow saving and migration of CBO statistics, and to analyze tables in parallel
  • Net8 support for SSL, HTTP, IIOP protocols
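
A minimal sketch of 8i native dynamic SQL with execute immediate; the table and bind values are illustrative:

DECLARE
  v_count PLS_INTEGER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT COUNT(*) FROM emp WHERE deptno = :d'
    INTO v_count USING 10;
END;
/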

Major Enhancements in Oracle8 – Release 8.0

  • Nested tables (see the sketch after this list)
  • OID addressing
  • Pointers (REFs) allowed in table columns
  • VARRAY support within tables (non-first-normal-form tables)
  • New create type syntax
  • Table Partitioning and Index partitioning enhancements
  • Oracle Universal Server introduced
  • Oracle Data Cartridges introduced for Oracle Spatial
  • Oracle ConText cartridge introduced
  • RMAN introduced to replace Enterprise Backup Utility (EBU)
  • RMAN has incremental backups and parallel syntax support
  • Oracle security server introduced (precursor to Oracle SSO)
  • Oracle WebServer enhanced
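
A minimal sketch of the Oracle8 object-relational syntax listed above; the type and table names are illustrative:

-- An object type, a collection type over it, and a nested table column
CREATE TYPE phone_t AS OBJECT (area VARCHAR2(3), num VARCHAR2(7));
/
CREATE TYPE phone_list_t AS TABLE OF phone_t;
/
CREATE TABLE customers (
  id     NUMBER,
  phones phone_list_t)
  NESTED TABLE phones STORE AS customers_phones;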

Other Oracle8 8.0 enhancements:

  • OPS introduces gv$ views
  • OPS allows transparent failover to a new node
  • OPS has Oracle-created Integrated Distributed Lock Manager (IDLM)
  • Ability to call external procedures
  • exp enhanced to export specific table partitions
  • online/offline, backup/recover individual partitions
  • reverse key index supported
  • updatable views
  • syntax to merge and balance partitions
  • Advanced Queuing allows message handling
  • parallel syntax supported for insert, update and delete DML (see the sketch after this list)
  • MTS connection pooling
  • Improved “STAR” query joins in CBO
  • Tablespace point in time recovery (TSPITR)
  • Oracle password expiration feature
  • Database links are privileged (no embedded passwords)
  • Oracle Replication manager introduced into OEM
  • Deferred Oracle constraints aids data loading
  • SQL*Net v 2 is renamed to Net8
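
A minimal sketch of Oracle8 parallel DML; the table names and degree are illustrative:

-- Parallel DML must be enabled at session level first
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ PARALLEL(sales_hist, 4) */ INTO sales_hist
SELECT * FROM sales;

COMMIT;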

Major enhancements in Oracle 7.3:

  • Bitmap Indexes (see the sketch after this list)
  • Partitioned Views
  • Oracle Standby Database packages
  • Full table scans support prefetch (asynchronous read ahead)
  • alter index xx rebuild syntax
  • Oracle db_verify package for corruption detection
  • Oracle block editor (BBED) created
  • Oracle Spatial
  • Oracle ConText
  • Oracle Trace
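
A minimal sketch of the 7.3 bitmap index and index rebuild syntax; the names are illustrative:

-- Bitmap indexes suit low-cardinality columns
CREATE BITMAP INDEX emp_gender_bix ON emp (gender);

-- 7.3 also added the ability to rebuild an existing index in place
ALTER INDEX emp_gender_bix REBUILD;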

Other New Features in Oracle 7.3:

  • Updatable Join Views
  • SQL*DBA obsoleted
  • Alter tablespace coalesce syntax
  • temporary parameter added to create tablespace syntax for TEMP tablespace
  • extents unlimited syntax added to table syntax
  • CBO introduced histograms for skewed columns and n-way joins
  • CBO introduces hash joins and hash_area_size parameter
  • CBO supports anti-joins for NOT IN subqueries
  • New dba_histograms view
  • New utl_file package allows interface to flat files (see the sketch after this list)
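
A minimal sketch of writing a flat file with utl_file; on these releases the directory must be listed in the utl_file_dir init.ora parameter, and the path and file name are illustrative:

DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  f := UTL_FILE.FOPEN('/tmp', 'report.txt', 'w');
  UTL_FILE.PUT_LINE(f, 'Hello from PL/SQL');
  UTL_FILE.FCLOSE(f);
END;
/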

New features in Oracle 7.2

  • Manual Shrink Rollback Segment syntax
  • Resizing of datafiles introduced with resize syntax (see the sketch after this list)
  • New dbms_job package
  • Autoextend option added for data files
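
A minimal sketch of the 7.2 datafile resize and autoextend syntax; the file name and sizes are illustrative:

ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf' RESIZE 500M;

ALTER DATABASE DATAFILE '/u01/oradata/users01.dbf'
  AUTOEXTEND ON NEXT 10M MAXSIZE 2000M;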