Sunday, January 23, 2011

Authentication

Oracle Syntax
Users and authentication rules are defined in the database using the CREATE USER and ALTER USER SQL statements. There is some difference in the available syntax for these two commands, but we will
attempt to cover all of the syntax options. Following is a demonstration of some of the basic capabilities
for defining users to the database:

create user joe identified by newpass1;
create user cam identified by origpass password expire;
create user ops$mary identified externally;
create user chris identified globally as 'CN=chris,O=xyzcorp';
create user app_proxy_user identified by app_serv_pw;

alter user joe password expire;
alter user cam identified by newpass3 replace origpass;
alter user ops$mary account lock;
alter user chris grant connect through app_proxy_user
authentication required;

In this series of commands, you see the creation of five new database user accounts. The first, “joe,” is created
with a password that will be used for authentication. The second, “cam,” is given a password also, but it is set to expire so that it will have to be reset by the administrator or “cam” before the account can
be used to connect to the database. The “ops$mary” account will be authenticated using an external
mechanism (the operating system) rather than by supplying a password; the O/S user “mary” will be
able to connect directly to the “ops$mary” database account. User “chris” will be authenticated through
the use of a Lightweight Directory Access Protocol (LDAP) directory, the topic of another section later in
this chapter. The final account being created is going to be a proxy account used by an application server
to be able to maintain a connection and share it among various application users. This is different than
simply having the application server use a single account for multiple end user transactions in that the
database will be able to track the individual users who are proxied. This is useful for maintaining different
authorizations for individual users, for applying virtual private database policies, or for proxy auditing,
discussed later in this chapter. The fundamental purpose of using proxy users is the ability to preserve the
actual user identity even though the actual connection is made through a “generic” user connection.
(The ops$ prefix is a default used for external authentication through the operating system, but it can be
modified by setting the OS_AUTHENT_PREFIX initialization parameter.)
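As a quick, hedged sketch of that aside (the parameter mechanics vary by release; this assumes an spfile and an O/S user named mary), the prefix and an externally identified account might be set up like this:

-- OS_AUTHENT_PREFIX is a static parameter; the change takes effect only
-- after the instance is restarted
alter system set os_authent_prefix = '' scope=spfile;

-- with an empty prefix, the O/S user "mary" maps to the database user MARY
create user mary identified externally;
grant create session to mary;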
The code example continues by showing five examples of altering existing user accounts to change
authentication rules. First the password for the “joe” account is explicitly expired by the DBA (although
you’ll see later how to force periodic password expiration automatically.) Next, the “cam” password is
explicitly changed. This can be done by the DBA or by cam directly if cam has the alter user privilege.
(The extra clause shown here of REPLACE current_password is only needed when a user is changing
his or her own password and a password complexity routine has been specified by the administrator.)
The third change shown locks the “ops$mary” account so no logins will be allowed.
The final example alters the “chris” account to allow connections for this account to be made using the
shared “app_proxy_user” account that was created for this purpose. (This example shows the syntax
from Oracle 10g; slightly different syntax for proxy authentication was available in release 9i.) You will
see how such proxy accounts are used later in the chapter.
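(If you want to see which proxy authorizations are already in place before we get there, the data dictionary exposes them; the following query is only a sketch, assuming the PROXY_USERS dictionary view and its PROXY, CLIENT, and AUTHENTICATION columns as found in Oracle 10g.)

-- list the accounts allowed to connect through the shared proxy account
select proxy, client, authentication
from   proxy_users
where  proxy = 'APP_PROXY_USER';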

We mentioned that password complexity routines can be defined by the DBA to ensure that users specify
passwords that meet certain minimum standards for complexity to reduce the chance of someone guessing
their password or discovering it through repeated trial and error. A password complexity routine is
assigned as part of a profile that is assigned to a user. Typically, an organization will define one or more
standard profiles, and every user account will be assigned one of these when the user is created. Profile
assignments can also be changed:
alter user joe profile corp_security_profile;
Profiles must first be created by the DBA using the CREATE PROFILE command. Profiles are used for two
purposes: as a means of controlling resources used by an account and as a way of enforcing password and other
authentication rules. Here is an example of password and login control:
CREATE PROFILE corp_security_profile
LIMIT
failed_login_attempts       5
password_lock_time          1
password_life_time         90
password_grace_time         3
password_reuse_max          5
password_reuse_time       365
password_verify_function  f_corp_pwd_vfy;
Most organizations will not necessarily implement all of the available password rules as shown in this
exhaustive example. This command says that any user who has been assigned the corp_security_profile will have his or her account locked after five successive failed attempts to log
in and that the account will remain locked for 24 hours (one day) unless explicitly unlocked by the DBA.
Any password can be used for no more than 90 days, but the user will receive advance warnings for
three days before the actual password expiration. A user’s password cannot be reused until five other
password changes have been made, and a password can’t be reused within 365 days.
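Before moving on, note that when an account is locked by the failed_login_attempts rule, the DBA can release it explicitly, and the password limits actually in force for a profile can always be checked in the data dictionary. A minimal sketch:

-- explicitly unlock an account that exceeded failed_login_attempts
alter user joe account unlock;

-- review the password limits defined for the profile
select resource_name, limit
from   dba_profiles
where  profile = 'CORP_SECURITY_PROFILE'
and    resource_type = 'PASSWORD';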
Finally, the profile specifies that all passwords will be validated using a verification function, named
f_corp_pwd_vfy. This function can perform any tests on the actual proposed password string and will
either accept or reject the password. If rejected, the user will have to select another password. Before this
profile can be created this function will have to be written using PL/SQL using techniques discussed
later in Chapter 13, “Functions.” Since many readers don’t have extensive PL/SQL experience yet, we’ll
show an overly simple example of how this function might be coded just to ensure that a user doesn’t
use his or her username as a password. (This function must be created by the privileged user SYS in
order for it to be used in a profile.)

CREATE OR REPLACE FUNCTION f_corp_pwd_vfy
   (in_username       IN VARCHAR,
    in_new_password   IN VARCHAR,
    in_old_password   IN VARCHAR)
RETURN BOOLEAN
AS
   pwd_okay BOOLEAN;
BEGIN
   IF in_new_password = in_username THEN
      raise_application_error(-20001, 'Password may not be username');
   END IF;
   RETURN TRUE;
END f_corp_pwd_vfy;
Remember, this simple example only makes one test of password validity. Oracle provides a script
(utlpwdmg.sql, found within the rdbms/admin subdirectory under the ORACLE_HOME directory) to
create a more useful password verification routine or to use as a template for your own routine. Figure
5.1 shows what happens when a user who has been assigned this profile attempts to change his
password in violation of the rule.
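For instance, once the profile has been assigned, an attempt to set the password equal to the username is rejected. (The exact error stack, and whether letter case matters, varies by release; ORA-28003 is the generic error Oracle raises when a verification function fails, wrapping the ORA-20001 message raised by f_corp_pwd_vfy.)

alter user joe identified by JOE;
-- fails with ORA-28003 (password verification for the specified password failed)
-- and ORA-20001: Password may not be username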

Third Normal Form

Once an entity is in 2NF, the final test of purity of design looks for transitive dependencies
among the entity’s attributes. In this third normal form (abbreviated as 3NF), you verify that each attribute is
really dependent upon the key as opposed to being dependent upon some other attribute that is dependent
upon the key. Did that make sense? Let’s look at a violation of 3NF as a way to better understand it.
Look back at Figure 4-3. There’s another questionable attribute in the first-pass POLICY entity attribute list.
Does ANNUAL_MAX_OUT_OF_POCKET_EXP really belong here? Is this dependent upon the key, POLICY_ID?
Perhaps. But it may be true that this value changes each year as the policy is renewed. If so, the value of
ANNUAL_MAX_OUT_OF_POCKET_EXP is dependent upon the POLICY_ID and the year. Since you may need
to process CLAIMs from one year during the next year, it might be necessary to keep both the current year
and some previous years’ values in the database. This attribute, and perhaps some others, really should be
put into a new entity, POLICY_YEAR. Further discussion of requirements will be needed to know for sure.
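Purely as an illustration of where this refinement leads (the table and column names here are hypothetical, and a POLICY table keyed by POLICY_ID is assumed), the split might eventually be rendered physically as:

CREATE TABLE policy_year
 (policy_id                     NUMBER       NOT NULL,
  plan_year                     NUMBER(4)    NOT NULL,
  annual_max_out_of_pocket_exp  NUMBER(9,2),
  CONSTRAINT policy_year_pk PRIMARY KEY (policy_id, plan_year),
  CONSTRAINT policy_year_policy_fk FOREIGN KEY (policy_id)
      REFERENCES policy (policy_id));

Each year’s value now depends on the full key of POLICY_YEAR rather than hiding inside POLICY.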
Let’s continue our examination of Figure 4-3’s proposed list of attributes just a little longer. What about
AGENT_NAME? It can be argued that the agent for any given policy should be known once you identify
the specific policy, so there isn’t a clear violation of first normal form. However, as you continue to flesh
out the design, you are likely to find that you need an AGENT entity to which the POLICY entity will be
related. When that happens you will want to carry the key (such as AGENT_ID) within the POLICY entity
and use AGENT_NAME as a non-key attribute within the AGENT entity. This would be a better 3NF design
if you are going to track any other information about agents besides their name.

Second Normal Form

Sometimes a table requires a multiple part key. If so, then every attribute (column) must be dependent
upon the entire multiple part key and not just part of the key. There is one example in the insurance data
model (created by resolving the many-to-many relationship). Each CLAIM must have at least one
CLAIM_TREATMENT.
To build the key to this new entity, you will have to include the CLAIM_ID (key to the CLAIM entity) and
another key that will relate each row to a particular TREATMENT entity key. Deciding which attributes
belong with CLAIM and which belong with CLAIM_TREATMENT is done by assessing whether the attribute
can be uniquely identified by just the CLAIM_ID or whether you will also have to know which specific TREATMENT
of that CLAIM is involved. Therefore, in Figure 4-4, you see that TREATMENT_ID has been included in this entity.
CLAIM_DATE has been left in the CLAIM entity because it applies to the entire claim. Further analysis may
lead to additional attributes that are dependent upon this entity’s composite key, but frequently intersection
entities like this carry just the key values that relate them to the two primary entities in order to resolve the
many-to-many relationship.
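A hypothetical physical rendering of that intersection entity (the names are illustrative, and CLAIM and TREATMENT tables with the obvious primary keys are assumed) makes the composite key explicit:

CREATE TABLE claim_treatment
 (claim_id      NUMBER  NOT NULL,
  treatment_id  NUMBER  NOT NULL,
  CONSTRAINT claim_treatment_pk PRIMARY KEY (claim_id, treatment_id),
  CONSTRAINT claim_treatment_claim_fk FOREIGN KEY (claim_id)
      REFERENCES claim (claim_id),
  CONSTRAINT claim_treatment_trtmt_fk FOREIGN KEY (treatment_id)
      REFERENCES treatment (treatment_id));

Any attribute later added to this table must depend on both CLAIM_ID and TREATMENT_ID together; anything dependent on CLAIM_ID alone (such as CLAIM_DATE) belongs back in CLAIM.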

So, the rule of second normal form (2NF) is that, for any entity that has a multiple-column key, all data
attributes in that entity must be dependent upon the full key, not just part of that key. If the value of a
column could be determined strictly through knowing part of the key, then that column doesn’t belong
in this entity (or else you have not correctly defined the key structure).
When you find that you need multiple columns to identify an instance of an entity, our rule not allowing
keys to be null now applies to each of the columns in the multipart key.

First Normal Form

Because the relational database (at least logically) links data only through data values, there must be
something in the data that lets you find a specific row when you want it. This identifying key may be one
column (perhaps ORDER_ID in an ORDERS table), or it may require multiple columns (such as the combination
of ORDER_ID and ORDER_LINE_NUMBER within an ORDER_LINES table.)
We’ll be using this concept of keys throughout the discussion of normalization in the following sections.
There are several forms of keys (primary keys, candidate keys, foreign keys) that we’ll address more precisely
when we get to the physical design stage. For now, we’ll be using the general term “key” to
loosely refer to the primary key of a physical table.

So, for first normal form, each column must be related to the key of the table, but more than just related—
each should be dependent upon the key for its identity. Another way of stating this is that for a table in
first normal form, if you know the value of a row’s key, then you can determine, without ambiguity, the
value for each column in the table. Once you find a row in that table using a key value you have also
found the single value for each of the other columns of that row.
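Using the earlier ORDERS example, a minimal sketch (hypothetical names and datatypes) shows both a single-column key and a multiple-column key:

CREATE TABLE orders
 (order_id     NUMBER  NOT NULL,
  customer_id  NUMBER,
  order_date   DATE,
  CONSTRAINT orders_pk PRIMARY KEY (order_id));

CREATE TABLE order_lines
 (order_id           NUMBER  NOT NULL,
  order_line_number  NUMBER  NOT NULL,
  item_id            NUMBER,
  quantity           NUMBER,
  CONSTRAINT order_lines_pk PRIMARY KEY (order_id, order_line_number));

Knowing an ORDER_ID identifies exactly one row of ORDERS; knowing both ORDER_ID and ORDER_LINE_NUMBER identifies exactly one row of ORDER_LINES, and therefore one value for each of its other columns.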

Entity-Relationship Modeling

Entity-Relationship Diagrams (ERDs) attempt to accomplish two things early in the design of a database. First, they document the entities of interest to the enterprise. Entities may represent things (such as a customer, an
employee, or a product) or an event (like a registration, a shipment, or an order). Entities have identifying
characteristics and additional descriptive characteristics. For instance, an employee has an employee number,
a name, a home address, a work location, a supervisor, a hire date, a pay rate, and so on. One of the challenges
during design will be sorting through all of the available data “thingies” to figure out which are entities, which
are identifiers of entity instances, and which are descriptive of particular instances of an entity.
In addition to documenting entities, an ERD documents the relationship between entities. From the list
of example entities in the previous paragraph, you might conclude that there is a relationship between a
customer and a product. It might be direct (a CUSTOMER owns a PRODUCT) or it might be a more complex
set of relationships:
❑ each CUSTOMER may place one or more ORDERs
❑ each ORDER must include one or more PRODUCTs
❑ each SHIPMENT must deliver at least one PRODUCT to one and only one CUSTOMER
❑ each ORDER may be fulfilled with one or more SHIPMENTs
❑ a SHIPMENT may include parts of one or more ORDERs for one and only one CUSTOMER

This list is an example of the relationship rules that might exist for a company. Another company with
the same entities may have very different relationships and rules. For instance, an alternative relationship
might be that “a SHIPMENT fulfills one and only one ORDER”. Sorting through these relationships and the
business rules that define them is the essence of designing a database that will meet the requirements of
a specific business. Getting one rule wrong during logical design can lead to a database that will not be
able to support necessary business processes.
Setting aside the order entry discussion, see Figure 4-1 for a quick illustration of the ERD modeling
technique for documenting a logical database design that might be part of a (highly simplified) insurance
billing system. Each box designates an entity . . . an object of importance to the enterprise. The ERD
doesn’t attempt to show all of the low-level data elements (attributes) associated with each entity. At
most, identifying elements (keys) are shown. A total of seven entities have been shown in this example.

While we might have shown INSURED and POLICY_OWNER as completely distinct entities, this model
assumes that there are many characteristics of the two that overlap, so they have been shown as two
subtypes of the PERSON entity.

As we said, the second aspect of an ERD is to document the relationships between entities, as shown by
the connecting lines. One line is drawn for each relationship; every entity should be expected to have at
least one relationship to some other entity, but not all entities are necessarily related to every other entity.
It is possible for two entities to have more than one relationship.

Further, it should be recognized that every relationship exists in two directions. If entity A has a relationship
to entity B, then entity B has a relationship to entity A. A single line defines this bidirectional relationship,
but it is useful to define the relationship in both directions. Many methodologies and tools are not strict
in this regard, but it helps when reading an ERD to be able to traverse the relationships without having
to translate on the fly. In Figure 4-1, the relationship between POLICY and CLAIM entities can be stated in
two ways: (1) A POLICY may have one or more CLAIMs submitted and (2) A CLAIM must be submitted
against exactly one POLICY.
The nature of each relationship may be optional or mandatory in one direction or both. In this statement
of the bi-directional relationship between the POLICY and CLAIM entities, the term “MUST” indicates
that the relationship is mandatory in that direction (a CLAIM cannot exist without an associated POLICY.)
A solid line in the ERD indicates that the relationship, in that direction, is mandatory.
An optional relationship exists in the other direction—the term “MAY” informs us that a POLICY is
allowed to exist without any associated CLAIMs. In the ERD, a broken line at that end of the relationship
indicates that the relationship is optional in that direction.
The degree of each relationship, additionally, may be a one-to-one correspondence between the entities
or, more commonly, a one-to-many relationship. Where many occurrences of an entity may exist for a
relationship, a “crow’s foot” is used to designate the many end of the one-to-many relationship, such as
the fact that multiple CLAIMs are allowed for a POLICY. The written description of the relationship uses the
“one or more” phrase to designate this.
Where only one occurrence of an entity may participate in the particular relationship, no crow’s foot is
used and we’ll describe the relationship using “one and only one” or “exactly one” terminology.
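When such a relationship is eventually implemented physically, the mandatory end on the “many” side typically becomes a NOT NULL foreign key column. The following is only a sketch (column names are hypothetical, and a POLICY table keyed by POLICY_ID is assumed):

CREATE TABLE claim
 (claim_id    NUMBER  NOT NULL,
  policy_id   NUMBER  NOT NULL,  -- mandatory: a CLAIM cannot exist without its POLICY
  claim_date  DATE,
  CONSTRAINT claim_pk PRIMARY KEY (claim_id),
  CONSTRAINT claim_policy_fk FOREIGN KEY (policy_id)
      REFERENCES policy (policy_id));

Nothing in the POLICY table refers to CLAIM, which is how the optional direction (a POLICY may have zero CLAIMs) is preserved.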

Many-to-many relationships do occur in the real world, and during this logical design phase, they may
appear in our ERD. Figure 4-1 showed such a relationship between CLAIM and TREATMENT. Many-to-many
relationships, however, present difficulties in creating a physical database. Remember that a relational
database makes its connection between tables using data values themselves rather than pointers. We will
eventually, during physical design, implement a one-to-many relationship by carrying the key value of
the “one” side as a data value stored within each row on the “many” side table. Many-to-many relationships
can’t be stored that way, so data modelers will resolve the many-to-many relationship into two one-to-
many relationships by creating a new connecting entity to help define the relationship. This issue
was ignored in the preliminary example, but Figure 4-2 shows that specific portion of our total ERD with
this relationship resolved.

Another common construct included within an ERD is the “Type of” subentity, exemplified by INSURED
and POLICY_OWNER, which have been shown as subtypes of the PERSON entity. At a later revision of this
ERD, some differences in types of PROVIDERs might have been discovered that would lead to a similar
subtyping for doctors, hospitals, physical therapists, and pharmacies.
One other ERD diagramming convention shows “one of” choices among multiple relationships. This option
is shown by an arc crossing the relationship lines that connect the CLAIM entity to DENIAL and PAYMENT.
According to our simple ERD, either (but not both) of these may exist for any given CLAIM.
The primary advantage of ERD diagramming is that ERDs are easily explained to nontechnical users and
sponsors of a proposed system. Most business people can easily validate the important entities, their identifying
attributes, and the rules that define the relationships between the entities in their real world. A second
advantage is that ERDs are very easily translated from their logical representation to a preliminary physical design of
a relational database. It is, after all, a relational technique. The third advantage of ERD modeling is that ERDs
can scale to show the total information model of an entire enterprise or a restricted view can present just the
subset of entities of interest to a particular system. (Pulling a small part of the entire ERD, as in Figure 4-2, is an
example of this capability.) This allows enterprise data architects a wonderful way to “zoom” in and out to
examine global relationships or detailed implications of a proposed change in one particular program.
The disadvantage of ERD modeling is that it is primarily a data-focused methodology and doesn’t
attempt to model the processing needs of the application. Processing will have to be designed to support
each relationship, but the ERD doesn’t suggest how that processing will be handled. ERDs, as valuable
as they are, require a complementary design and documentation technique to handle processing needs
and module decomposition.

Database Design Phases

Logical Database Design
From a list of requirements (as they are discovered and documented), the team initially identifies a long
list of data elements that they know must be included in the database. They additionally sort the data
elements into groupings, informally at first and then with more rigor, around the entities of importance
to the organization.
Logical design is performed without concern to the specific database management system that will
eventually host the application’s data. The logical database design for a system will look identical
whether the eventual system will be hosted on DB2 on the mainframe or on SQL Server on a Windows
workstation. That isn’t to say that the eventual systems will be identical—it’s just to say that those
differences don’t appear during logical design.

During the logical design phase the development team determines what data needs to be acquired, stored,
and used by a particular application. Simultaneously, the team identifies what functional processes need
to be performed by the application. As each model is refined, it is compared to the other to ensure that
needed data for each process is included in the data model and that each element of data included in the
data model has some identified processes that will create, use, and (usually) modify or purge the data.

For all of the data elements collected in this phase, it is necessary to begin documenting some basic
characteristics of the data. You need to know where the data comes from (if it isn’t being created by the
new application), and you will want to gather some preliminary estimates of the type of data:
❑ Is it strictly numeric? Will it always be integers or allow decimal values?
❑ Is the data item a date? If so, what granularity is required . . . to the day? to the minute? to the
millisecond?
❑ Will the data be alphanumeric or will it need to include other characters?
❑ What if a needed element isn’t character data but an image or some other unstructured object? If
so, how large do you expect each instance to be?
❑ Is there a defined domain (range or list of allowable values) associated with the data? Does the
domain relate to other data expected to be in your system? Does the domain have an external
definition rule (such as spatial coordinates limited to valid latitude and longitude pairs)?
❑ What is the expected lifetime of this particular data element? Is it transitory or does it continue
to exist for a predictable period of time?
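These answers eventually drive physical datatype and constraint choices. Purely as an illustration (the column names are invented for this example), the decisions above might later surface as:

CREATE TABLE sample_attributes
 (whole_units    NUMBER(10),       -- strictly integer values
  unit_price     NUMBER(10,2),     -- numeric, allowing two decimal places
  event_day      DATE,             -- Oracle DATE stores down to the second
  event_instant  TIMESTAMP(3),     -- fractional seconds to the millisecond
  short_text     VARCHAR2(100),    -- alphanumeric and other characters
  scanned_image  BLOB,             -- image or other unstructured object
  status_code    VARCHAR2(10)
      CONSTRAINT status_code_domain
      CHECK (status_code IN ('NEW','OPEN','CLOSED')));  -- a defined domain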
Gathering an initial shopping list of data elements is a necessary but relatively casual process during the
early parts of logical system design. Some discipline, however, is soon needed. The primary means of
organizing this rapidly growing list of data elements is normalization and is traditionally documented
using Entity-Relationship Diagram (ERD) techniques. Both ERDs and the process of normalization are
covered in more detail in a later section of this chapter.

Performance Implications

All this use of rollback buffers sounds like it might take up a lot of resources, which would impact
performance. Granted, the Oracle database is a complex piece of software, but rollback buffers have been built into the core of the Oracle database for over 15 years. Oracle uses a variety of techniques to reduce the overhead of using this unique architecture. Although we hesitate to make performance claims for any software, since much of performance depends on good design and implementation, industry standard benchmarks have
shown the Oracle database to certainly be one of the fastest and most scalable databases in the world.
There is another side performance benefit of the way that the Oracle database implements locking that
is not directly related to MVRC but does have to do with locks. Most other databases manage locks
in memory. Since these databases typically use more locks than Oracle, they keep track of these locks in
memory for faster performance. Memory is typically one of the more constrained resources, so the over-
head of managing locks can take away from the memory used by the database. In addition, database
administrators have to make sure that there is sufficient memory to manage all the locks in the database,
since a problem with the lock manager can impact the entire database.
Oracle does not store locks in memory. Instead, the Oracle database keeps track of row locks in the header
of the physical page that contains the row. This header also stores the location of a previous version of its
rows in the rollback segments. If a row is going to be accessed for an SQL operation, the page it resides on
has to be retrieved, so the lock information will be present in the page header. Because of this, there is no
need for a separate lock manager. In addition, there is no increased overhead for an increased number of
rows with locks. So Oracle not only eliminates the number of locks used to handle large numbers of users
but also eliminates one of the ways that larger numbers of users demand greater resource usage.
Of course, the performance benefits delivered by reduced contention can be extremely significant. It
doesn’t matter how fast your database can go if you are waiting on locked rows. Add to this the fact that
you, as a developer, don’t have to spend any time writing code to achieve this level of concurrency and
data integrity, and you have a significant advantage overall.

Multiversion Read Consistency

The somewhat cumbersome name multiversion read consistency, henceforth referred to as MVRC, does
describe how the Oracle database works to provide its unique consistency. Whenever any transaction
changes data in the database, Oracle creates a new version of the changed row. Oracle uses its rollback
buffer to store earlier versions of rows. Simple enough, but the value it provides is impressive.
Each row version is marked with the system change number, or SCN, of the transaction that changed the
values. Every transaction gets a global SCN number assigned in the order that the transaction was
started. The SCN provides an absolute record of the serial order of the transaction.
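If you are curious, you can observe SCNs directly; the two queries below are a small sketch (they assume query access to V$DATABASE and execute privilege on DBMS_FLASHBACK, respectively):

-- the current SCN of the database
SELECT current_scn FROM v$database;

-- the SCN as seen by the current session
SELECT dbms_flashback.get_system_change_number FROM dual;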
When a transaction is retrieving rows to satisfy a query, it checks the SCN of the latest version of the
row. If the SCN is smaller than the SCN associated with the transaction, the row version is retrieved. If
the SCN of the row is larger than the transaction SCN, Oracle automatically knows that it cannot use
that version of the row. Oracle reaches back into the rollback buffers to retrieve an earlier version of the
row with an appropriate SCN. This process is shown in Figure 3-4.
If, for some reason, Oracle cannot find an appropriate version of the row in the rollback buffers, the
database will return an ORA-01555 error, which reads as “Snapshot too old.” You should size your
rollback buffers to be large enough to avoid this type of error. Starting with Oracle9i, you can choose to
allow the database to automatically handle UNDO, which should eliminate these errors.
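A minimal sketch of switching to automatic undo management follows (parameter names as introduced in Oracle9i; UNDOTBS1 is simply an assumed tablespace name, and the UNDO_MANAGEMENT change takes effect only after an instance restart):

ALTER SYSTEM SET undo_management = 'AUTO' SCOPE = SPFILE;
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1';
ALTER SYSTEM SET undo_retention = 900;   -- seconds of undo history to try to retain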

MVRC gives each user a specific view of their data, at the point in time that their transaction began.
Because of this, there is no need for Oracle to use read locks in its locking system. No read locks
means less contention in two ways.
First, write operations do not block read operations. If another user changes data during the course of a
transaction, there is no need to protect the data, since the version of the data that has been read will be
stored as a version in the database.
Even more importantly, read operations do not block write operations. Since no read locks are used, a
transaction can grab an exclusive write lock whenever it needs to, regardless of whether anyone is reading
the data at the time or not.
For both of these reasons, you typically have less contention with an Oracle database, which not only
means better overall performance at run time but less time spent developing applications that run
against Oracle, since you don’t have to add logic to deal with either the possibility of contention or the
repercussions of that contention.
The Oracle database keeps track of the SCN of the oldest active transaction. When it needs to reclaim space
in the rollback buffer, it cleans out versions of data that are no longer required by any active transactions—
in other words, data rows that have an SCN number smaller than the SCN of the oldest active transaction.

Locks

A lock in a database performs a similar function to a lock on a door—it prevents access, but to data
instead of a room. Locks are used to isolate activity during a transaction and are automatically released
when a transaction is concluded through a COMMIT or a ROLLBACK.
There are two basic types of interactions with data, which call for two different types of locks. When you read a piece of data, you don’t care if other people also read it. This type of data sharing is the core of a multiuser database. This requirement calls for a shared lock, sometimes known as a read lock. Many different people can read the same data, and many shared locks can be placed on an individual row. Shared locks prevent the problems of non-repeatable reads.

But since there is only one physical copy of a row in a database, you cannot have more than one user writing to a row at a time. To enforce this limitation, and to avoid problems like lost updates, the database uses exclusive locks, sometimes known as write locks. An exclusive lock prevents more than one user from accessing a piece of data.
You may have noticed our strict use of the phrase “piece of data” rather than “row.” A lock is a physical
entity placed on a physical structure: a row, or the page that the row resides in. As you
will quickly realize in the discussion of contention that follows, page locks can cause a lot more performance
degradation than a row lock. Oracle was one of the first databases to support row locking, but other
databases have gradually implemented this level of lock granularity.
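Oracle takes these row locks automatically when you issue DML, but you can also request them explicitly. The statements below are only a sketch against a hypothetical USER_ACCT table:

-- take exclusive row locks on the selected rows before updating them
SELECT balance
FROM   user_acct
WHERE  acct_id = :nSourceAcct
FOR UPDATE;

-- or, far more coarsely, lock the entire table (rarely appropriate)
LOCK TABLE user_acct IN EXCLUSIVE MODE;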

Since an exclusive lock only allows a single user for a piece of data, the database cannot issue an exclusive
lock for a piece of data that already has a shared lock on it. This limitation makes sense from the point of
view of the user who is writing data, as well as the user who is reading the data. If the database allowed another
user to change the data in the middle of your transaction, the result could be either a lost update, a
nonrepeatable read, or some phantom data.

Concurrent User Integrity Problems



Transactions

The transaction is the key tool used to deal with both the issue of data integrity and to address the problems
created by multiple simultaneous requests for the same data. A transaction is defined as a unit of work. Within
a transaction, all of the SQL activity aimed at a database is guaranteed to be completed as a whole, with all of the changes taking effect at the same time.
You can begin a transaction either explicitly, with the BEGIN TRANSACTION syntax, or implicitly, by issuing an
SQL statement. You end a transaction by either writing the changes to the database and making them visible
to other users, with the keyword COMMIT, or by erasing all the work in the transaction, with the keyword
ROLLBACK. The code for a simple transaction is shown in the following example, with the optional keywords BEGIN TRANSACTION included:
BEGIN TRANSACTION;
UPDATE USER_ACCT SET BALANCE = BALANCE + :nAmount WHERE ACCT_ID = :nDestAcct;
UPDATE USER_ACCT SET BALANCE = BALANCE - :nAmount WHERE ACCT_ID = :nSourceAcct;

COMMIT;

This code illustrates the classic use of a transaction. The transaction is taking money from one account and
moving it to another account. You would not want one part of this transaction to succeed and the other
part to fail. Although the SQL that adds an amount to the destination account actually comes before the
SQL that subtracts the same amount from a source account, the order of statements in the transaction
doesn’t matter. In a transaction, either the results of all the statements are applied permanently to the
database or all the changes are rolled back with the syntax ROLLBACK.
Within transactions, a ROLLBACK rolls back all the changes made by the transaction. There may be times
when you do not want to roll back all the changes, just some of them. To handle these situations, you can
use the SAVEPOINT syntax. A ROLLBACK either rolls back all the work for the entire transaction or just to
the point of a named savepoint. In the following example, the first INSERT creates an order header and the
second INSERT adds a line to that order. The application allows a user to delete just that line of the order
without destroying the entire order. The SAVEPOINT statement marks the point to which the rollback returns in this scenario.
.
.
INSERT INTO ORDER_HEADERS (CUSTOMER_NAME, ORDER_HEADER_ID) VALUES (:sCustomerName,
:sOrderHeaderID);
SAVEPOINT first_insert;
INSERT INTO ORDER_LINES (ORDER_HEADER_ID, ITEM_ID, QUANTITY) VALUES
(:sOrderHeaderID, :sItemID, :nQuantity);
IF ORDER_LINE_DELETED
ROLLBACK TO first_insert;
ENDIF;
.
.
The transaction provides not only the mechanism to ensure logical data integrity but also the container
for implementing user isolation. Until a transaction is committed, the changes made by the transaction
are not visible to any other users in the database. But multiple users, accessing the same sets of data, can
create problems of data integrity.