Dynamic SQL in a Dynamic World
Michael Rosenblum, Dulcian, Inc.
Introduction
For the past five years, I have attended a number of Oracle conferences all over the world, and the same hard lessons come up again and again:
· It is difficult to build a system that actually meets the stated requirements.
· It is very difficult to build a system that does not require massive changes within a short period of time.
· It is impossible to build a system that will not be obsolete sooner or later.
When building systems in the real world, we information technology professionals must understand our own limitations – namely, that none of the following ever exists:
· 100% perfect analysis
· 100% complete set of requirements
· 100% adequate hardware
· 100% competent users
· 100% knowledgeable tech support
It is not possible to discuss the limitations of users and available tech support here, but this paper will briefly address the remaining limitations listed above:
· Although it is impossible to do perfect analysis, it is possible to identify the direction that the system you are trying to build may take to keep it “on track”
· You may not have complete system specifications, but you can define the most volatile areas of a system.
· The hardware will never be exactly what you want, but you should worry about the overall performance of the system early on by making extra tuning options accessible in the post-deployment period.
In general, keep the following “developer’s credo” in mind: The focus of the whole development process should be shifted from what we know to what we don’t know.
There is usually a lot that we don't know:
1. What elements are involved – For example, the system requires a monthly reporting mechanism but there are no monthly summary tables.
2. What to do with the elements you have – For example, how often does the finance department switch between asking for weekly/quarterly/yearly data? Originally, they may have only requested bi-weekly data.
3. How you should proceed – the DBA's nightmare: How can you be sure that a nested loop join hint will still be the best option a few years later with significantly more data in the system?
4. Whether or not you can proceed at all – Usually for each restriction, you have at least one workaround or "back door." But what if the location of that "back door" changes in the next release or version update?
This paper discusses how dynamic SQL can help solve some of the problems mentioned above and avoid some of the major pitfalls that contribute to system failure and obsolescence.
The Hero – Dynamic SQL
Usually, if you hit a “brick wall” (and that is often the current state of affairs in IT), there are two ways out: change the methodology, or change the toolset. Although the first approach is more powerful, unless you are the CIO, it is often hard to change an organization's development methodology on any large scale. Fortunately, over the years the Oracle DBMS has provided us with tools rich enough to solve many problems. Dynamic SQL is one of the most useful of these tools.
By definition, Dynamic SQL allows you to build SQL or PL/SQL statements dynamically as text strings, using some additional command syntax, and to process those statements at runtime.
At this point, I could simply cut and paste about 30 pages of my own chapter on Dynamic SQL syntax from PL/SQL for Dummies (Wiley Publishing, 2006), but for this paper that would definitely be overkill. Instead, this paper summarizes the most critical aspects of dynamic SQL and how to use it effectively.
Technically, there are three variations of dynamic SQL (some authors group the first two into a single category called “native dynamic SQL”):
· EXECUTE IMMEDIATE
· Dynamic cursors
· DBMS_SQL package
Each will be discussed in the following sections.
EXECUTE IMMEDIATE
About 95% of all cases implemented using dynamic SQL are covered by a single command:
declare
v_variable_tx varchar2(32000);
begin
v_variable_tx:='whatever_you_want';
EXECUTE IMMEDIATE v_variable_tx [additional options];
end;
OR
begin
EXECUTE IMMEDIATE 'whatever_you_want' [additional options];
end;
The syntax is as simple as it gets: you build a SQL statement or an anonymous PL/SQL block as a text string (up to 32KB) and run it (plus some extra twists).
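As a minimal concrete sketch (the table name here is an assumption standing in for a value known only at runtime), counting the rows of an arbitrary table looks like this:
declare
    v_table_tx varchar2(30):='emp'; -- assumption: name arrives at runtime
    v_count_nr number;
begin
    execute immediate 'select count(*) from '||v_table_tx into v_count_nr;
    dbms_output.put_line(v_table_tx||' contains '||v_count_nr||' rows');
end;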
In this real-life example, it should be possible to present every row in the database as a combination of its ID and a display value (defined in the repository). Technically, you could hard-code all tables in the system, but since the ID information and what is displayed are already stored, why not make the solution generic, as shown here:
Create type lov_oty is object (id_nr NUMBER, display_tx VARCHAR2(256));
Create function f_getdisplay_oty
    (i_table_tx varchar2, i_id_tx varchar2, i_display_tx varchar2,
     i_pkColumn_tx varchar2, i_pkValue_nr number)
return lov_oty is
v_out_oty lov_oty;
begin
execute immediate
'select lov_oty('||i_id_tx||','||i_display_tx||
') '||' from '||i_table_tx||
' where '||i_pkColumn_tx||'=:1'
into v_out_oty
using i_pkValue_nr;
return v_out_oty;
end;
In this example, you can see the following major additions to the basic syntax, reading from top to bottom (a sample call follows the list):
· Use of all SQL datatypes, including user-defined
· Definition of bind variables inside of the executable strings and passing values at the last step (rather than hard coding the whole string)
· If the statement produces results (SELECT, UPDATE…RETURNING, returning variables from a PL/SQL block), you can retrieve them into local variables of the appropriate types.
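For example, a hypothetical call against the familiar EMP demo table (EMPNO serves as both the ID and the primary key, ENAME as the display value) might look like this:
declare
    v_lov_oty lov_oty;
begin
    -- hypothetical call: resolve employee 7839 into an ID/display pair
    v_lov_oty := f_getdisplay_oty('emp','empno','ename','empno',7839);
    dbms_output.put_line(v_lov_oty.id_nr||' - '||v_lov_oty.display_tx);
end;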
Taking this example one step further, you can create an LOV generator by creating a collection type over the original object type and an appropriate function, and returning the collection as shown here:
Create type lov_nt as table of lov_oty;
Create function f_getlov_nt
    (i_table_tx varchar2, i_id_tx varchar2,
     i_display_tx varchar2, i_order_tx varchar2)
return lov_nt is
v_out_nt lov_nt := lov_nt();
begin
execute immediate
'select lov_oty('
||i_id_tx||','||i_display_tx||')'||
' from '||i_table_tx||
' order by '||i_order_tx
bulk collect into v_out_nt;
return v_out_nt;
end;
This demonstrates that you can return not only scalar types from dynamic SQL, but whole collections. In addition, one of the nice things about nested tables (a.k.a. object collections) is that they can be converted to table-like representations. This means that if you plug the following code into an application, you can present any possible value list at any point in time, as shown here:
select id_nr, display_tx
from table(
cast(f_getLov_nt
('emp',
'empno',
'ename||''-''||job',
'ename')
as
lov_nt)
)
If necessary, you can create a view on top of that function call as shown here. (However, there are no parameters for views – only global variables):
create view v_generic_lov as
select id_nr, display_tx
from table( cast(f_getLov_nt
(GV_pkg.f_getCurTable,
GV_pkg.f_getPK(GV_pkg.f_getCurTable),
GV_pkg.f_getDSP(GV_pkg.f_getCurTable),
GV_pkg.f_getSORT(GV_pkg.f_getCurTable))
as lov_nt)
)
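The GV_pkg globals package itself is not shown in this paper; a minimal sketch could look like the following (the getter names mirror the calls in the view, while the setter and all implementation details are assumptions):
create or replace package GV_pkg is
    -- session-level "parameter" of the view: the current table name
    procedure p_setCurTable (i_table_tx varchar2);
    function f_getCurTable return varchar2;
    -- repository lookups for a given table (bodies not shown)
    function f_getPK (i_table_tx varchar2) return varchar2;
    function f_getDSP (i_table_tx varchar2) return varchar2;
    function f_getSORT (i_table_tx varchar2) return varchar2;
end;
A session would first call GV_pkg.p_setCurTable('EMP') and then simply query V_GENERIC_LOV.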
A very critical point about EXECUTE IMMEDIATE is that it removes many of the usual restrictions on what you can do in PL/SQL. An example is shown in the following code:
create procedure p_dropFBIndex (i_index_tx varchar2) is
begin
execute immediate
'ALTER SESSION SET EVENTS ''10624 trace name context forever, level 12''';
execute immediate 'drop index '||i_index_tx;
execute immediate
'ALTER SESSION SET EVENTS ''10624 trace name context off''';
end;
This is an actual routine that was needed in a recent project to handle the following behavior (absolutely inappropriate for batch processing): when you drop a function-based index from a table, all PL/SQL programs referencing that table become invalid (which is not true for regular indexes). Using this approach, you can now fire off commands on the fly, even including DDL and ALTER SESSION commands.
Dynamic cursors
Another variation of Native Dynamic SQL is the possibility of opening a cursor against a text string as shown here:
declare
v_cur SYS_REFCURSOR;
v_sql_tx varchar2(32000):=...
v_rec ...%rowtype; -- or record type
begin
open v_cur for v_sql_tx;
fetch v_cur into v_rec;
close v_cur;
end;
The most popular use of this feature usually comes from Oracle Forms development teams with a long history of active REF CURSOR usage, because now they can request from the database anything they want:
Create function f_getRefCursor_REF
(i_table_tx varchar2,i_where_tx varchar2,i_order_tx varchar2)
return SYS_REFCURSOR
is
v_out_ref SYS_REFCURSOR;
v_sql_tx varchar2(32000);
begin
v_sql_tx:='select * from '||i_table_tx||
' where '||i_where_tx||
' order by '||i_order_tx;
open v_out_ref for v_sql_tx;
return v_out_ref;
end;
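The caller then fetches from the returned cursor exactly as from any other REF CURSOR. A hypothetical call against the EMP demo table might be:
declare
    v_cur SYS_REFCURSOR;
    v_rec emp%rowtype; -- works because the query is 'select * from emp ...'
begin
    v_cur := f_getRefCursor_REF('emp','deptno=10','ename');
    loop
        fetch v_cur into v_rec;
        exit when v_cur%notfound;
        dbms_output.put_line(v_rec.ename);
    end loop;
    close v_cur;
end;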
Another justification for dynamic cursors is that with EXECUTE IMMEDIATE you simply cannot fetch a defined number of rows at a time – and if you need to process, say, a couple of million rows, that can be a real problem. So an alternative is the following:
create procedure p_getLOV
(i_table_tx varchar2,i_id_tx varchar2,i_display_tx varchar2,i_order_tx varchar2,
i_limit_nr number, i_lovTable_tx varchar2)
is
v_buffer_nt lov_nt := lov_nt();
v_cur SYS_REFCURSOR;
begin
open v_cur for
'select lov_oty('||i_id_tx||','||i_display_tx||')'||
' from '||i_table_tx||' order by '||i_order_tx;
loop
fetch v_cur bulk collect into v_buffer_nt limit i_limit_nr;
exit when v_buffer_nt.count=0;
execute immediate 'insert into '||i_lovTable_tx||
' select * from table (cast (:1 as lov_nt))' using v_buffer_nt;
end loop;
close v_cur;
end;
The major difference between this procedure and the function F_GETLOV_NT is that, instead of retrieving all of the ID/DISPLAY pairs from the requested table and stuffing them into a nested table, you get only a limited number of rows at a time and insert them into a special table. Since only a limited amount of memory is used at any point in time, this operation is completely safe.
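Assuming a staging table whose columns match LOV_OTY (a hypothetical T_LOV here), loading it 1,000 rows at a time becomes a single call:
begin
    -- t_lov(id_nr number, display_tx varchar2(256)) is a hypothetical staging table
    p_getLOV('emp','empno','ename','ename',1000,'t_lov');
end;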
DBMS_SQL
The DBMS_SQL built-in package is the original incarnation of the dynamic SQL idea. DBMS_SQL provides you with the lowest possible level of control: you can parse, execute, fetch, define output, and process bind variables as independent commands at will. Of course, the cost of this granularity is performance (currently about 5 times slower) and complexity. The usual rule of thumb is: unless you know that you cannot avoid DBMS_SQL, don't use it.
However, there are some cases when you do not have another option. The most common situations are when:
· you need to go over the 32K restriction of EXECUTE IMMEDIATE
· you have an unknown number or type of input/output variables
The first case is more or less straightforward; however, the second is much more interesting. As an illustration, the following task was presented to me a few months ago. The existing (ancient) user interface was generating a query on the fly, and there was no way to intercept that generation or at least limit the number of possible cases. It was hard-coded into the front-end that only the first 500 rows were returned, but the client needed the complete output set as a comma-delimited file. In general, it is possible to take any query and generate a CLOB (the easiest way of representing unstructured textual data). However, between the query and the CLOB there should be some kind of transportation mechanism (normally a record), and that part cannot be defined until you actually get the query. This looks like an opportunity for DBMS_SQL to prove its worth. And indeed, it appeared to be the only possible solution, as shown here:
create or replace function f_getSqlResults_cl
    (i_sql varchar2, i_fetch_limit_nr number:=65000)
return clob
is
v_out_cl CLOB;
dynamicProcedure varchar2(32000);
v_header varchar2(4000);
v_column varchar2(32000);
l_colval varchar2(4000);
cur INTEGER := DBMS_SQL.open_cursor;
ncols NUMBER := 0;
cols DBMS_SQL.desc_tab;
pragma autonomous_transaction;
BEGIN
DBMS_SQL.parse (cur, i_sql, DBMS_SQL.native);
DBMS_SQL.describe_columns (cur, ncols, cols);
FOR i IN 1 .. ncols LOOP
if i = 1 then
v_column:='v_row_tx :=''"''||Dynamic_tt(i).'||cols(i).col_name||'||''"'' ';
v_header := '"'||NVL (cols (i).col_name, ' ')||'"';
else
v_column:=v_column||'||chr(9)||''"''||Dynamic_tt(i).'||
cols (i).col_name||'||''"'' ';
v_header := v_header||chr(9)||'"'||NVL (cols(i).col_name, ' ')||'"';
end if;
END LOOP;
v_header := v_header||chr(10);
v_column:=v_column||'||chr(10);'||chr(10);
dynamicProcedure :=
'DECLARE '||
' v_out_cl CLOB; '||
' v_buffer_tx VARCHAR2(32000); '||
' v_buffer_length_nr number:=0; '||
' v_row_tx VARCHAR2(32000); '||
' cursor c1 is '||
i_sql||';'||
' TYPE DynamicTable IS TABLE OF c1%rowtype INDEX BY BINARY_INTEGER; '||
' Dynamic_tt DynamicTable; '||
' PROCEDURE p_addline (pi_tx VARCHAR2) IS '||
' v_add_nr number:=NVL (LENGTH (pi_tx), 0);'||
' BEGIN '||
' if v_buffer_length_nr+v_add_nr > 32000 then '||
' DBMS_LOB.writeappend (v_out_cl, v_buffer_length_nr, v_buffer_tx);'||
' v_buffer_length_nr:=v_add_nr; '||
' v_buffer_tx:=pi_tx; '||
' else '||
' v_buffer_length_nr:=v_buffer_length_nr+v_add_nr; '||
' v_buffer_tx:=v_buffer_tx||pi_tx; '||
' end if; '||
' END; '||
'BEGIN '||
' DBMS_LOB.createtemporary (v_out_cl, TRUE, DBMS_LOB.CALL);'||
' p_addline(:1); '||
' open c1;'||
' fetch c1 bulk collect into Dynamic_tt limit :2;'||
' close c1;'||
' FOR i IN 1 .. Dynamic_tt.COUNT LOOP '||
v_column ||
' p_addline(v_row_tx); '||
' END LOOP; '||
' dbms_output.put_line(v_buffer_length_nr);'||
' DBMS_LOB.writeappend (v_out_cl, v_buffer_length_nr, v_buffer_tx);'||
' :3 := v_out_cl;'||
'END;';
EXECUTE IMMEDIATE dynamicProcedure
USING IN v_header, IN i_fetch_limit_nr, OUT v_out_cl;
-- release the parse cursor on the success path as well
DBMS_SQL.close_cursor (cur);
return v_out_cl;
EXCEPTION
WHEN OTHERS
THEN
DBMS_SQL.close_cursor (cur);
return '"<Invalid Query>"';
END;
The idea was very simple: parse whatever SQL statement needs to be processed and use DBMS_SQL.DESCRIBE_COLUMNS to get the complete list of its columns. With that list in hand, you can easily generate the whole anonymous block in which the query is processed one row at a time (take a row, concatenate all of its columns, write the result to the CLOB).
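Calling the function is trivial. Here a sample three-column query (an assumption for illustration) is turned into a delimited CLOB, and the beginning of the result is printed:
declare
    v_result_cl CLOB;
begin
    v_result_cl := f_getSqlResults_cl('select empno, ename, job from emp');
    -- print the first 4000 characters of the generated output
    dbms_output.put_line(dbms_lob.substr(v_result_cl,4000,1));
end;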
Our “war stories”
After discussing some of the ways that dynamic SQL can be used in theory, this section describes some real-life instances where its use was helpful.
Universal wrapper
The first story comes from a large system that had been in production for a number of years. We were asked to help implement a very advanced business rule-based system. The major bottleneck was not the rules engine itself (even though it was very complicated and included hundreds of millions of records), but the whole deployment cycle. The smallest change to the front-end required significant effort from multiple resources. Another point of concern was that end users were not exactly sure what modules, in addition to the rules engine, were needed. The requirement was something close to one extra small screen every few hours.
The decision was made to "fight the swamp rather than kill mosquitoes." In four man-days, we built a somewhat limited but generic module with the following requirements:
1. User requests are represented (wrapped if needed) as functions with:
· A generic name (header of the pop-up screen)
· Up to 5 parameters, each with
· a header
· an indicator of whether it is mandatory or not
· a datatype (NUMBER/DATE/TEXT/LOV)
· an optional conversion expression (for example, default date format in the UI – since everything on the web is text-based)
· a value list name (for LOV datatypes)
· A return CLOB
2. The response is already pre-formatted as HTML (so the front-end just needs to present the output on the screen)
For a function that would return a list of current employees, filtered by job and hire date, something like the following needs to be inserted into the repository:
insert into t_extra_ui (
id_nr,
displayName_tx,
function_tx,
v1_label_tx, v1_type_tx, v1_required_yn, v1_lov_tx, v1_convert_tx,
v2_label_tx, v2_type_tx, v2_required_yn, v2_lov_tx, v2_convert_tx
)
values (id_seq.nextval,
'Filter Employees',
'f_getEmp_cl',
'Job','TEXT','Y',null,null,
'Hire Date','DATE','N',null,'to_date(:2,''YYYYMMDD'')');
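The row above points to a function F_GETEMP_CL, which is not part of the framework itself. A minimal sketch of such a function (assuming it renders an HTML list from the EMP demo table) might be:
create or replace function f_getEmp_cl
    (i_job_tx varchar2, i_hiredate_dt date:=null)
return clob is
    v_out_cl clob;
    v_line_tx varchar2(4000);
begin
    dbms_lob.createtemporary(v_out_cl, true, dbms_lob.call);
    dbms_lob.writeappend(v_out_cl, 4, '<ul>');
    for r in (select ename from emp
               where job = i_job_tx
                 and (i_hiredate_dt is null or hiredate >= i_hiredate_dt))
    loop
        v_line_tx := '<li>'||r.ename||'</li>';
        dbms_lob.writeappend(v_out_cl, length(v_line_tx), v_line_tx);
    end loop;
    dbms_lob.writeappend(v_out_cl, 5, '</ul>');
    return v_out_cl;
end;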
Now we have a repository with everything needed to prepare the screen. What is required is a utility to process a request that comes in as the ID of a function plus up to 5 parameters passed as text. The code for the utility is shown here:
function f_wrapper_cl (i_id_nr number, v1_tx varchar2:=null,...,v5_tx varchar2:=null)
return CLOB is
v_out_cl CLOB;
v_sql_tx varchar2(2000);
v_rec t_extra_ui%rowtype;
begin
select * into v_rec from t_extra_ui where id_nr=i_id_nr;
if v_rec.v1_label_tx is not null then
v_sql_tx:=nvl(v_rec.v1_convert_tx,':1');
end if;
...
if v_rec.v5_label_tx is not null then
v_sql_tx:=v_sql_tx||','||nvl(v_rec.v5_convert_tx,':5');
end if;
v_sql_tx:='begin :out:='||v_rec.function_tx||'('||v_sql_tx||'); end;';
if v5_tx is not null then
execute immediate v_sql_tx
using out v_out_cl, v1_tx,…,v5_tx;
...
elsif v1_tx is not null then
execute immediate v_sql_tx
using out v_out_cl, v1_tx;
else
execute immediate v_sql_tx
using out v_out_cl;
end if;
return v_out_cl;
end;
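For the repository row above (assuming it was stored with ID 101), the front-end call reduces to:
declare
    v_result_cl CLOB;
begin
    -- both parameters arrive as text; the DATE one is converted by the
    -- repository-defined expression to_date(:2,'YYYYMMDD')
    v_result_cl := f_wrapper_cl(101,'CLERK','20070101');
end;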
The final order of actions is as follows:
1. The end user sees the list of possible options (display names from the repository tables) and selects one.
2. The front-end utility reads the repository and builds a pop-up screen on the fly with appropriate input fields and mandatory indicators. If the datatype of an input field is LOV, the utility requests from the generic LOV mechanism the appropriate list of ID/display pairs.
3. The user enters whatever is needed and presses “SUBMIT.” The front-end calls a special wrapper procedure.
4. The wrapper procedure builds a real function call, passes the required parameters, and returns whatever result is generated.
5. The front-end displays the result.
Now everybody is happy since it only takes about five minutes from the moment any new procedure is ready to the moment when it is accessible from the front-end.
Universal cloner
The next story involved pushing dynamic SQL to the extreme. From the very beginning the task looked like a classic nightmare:
· There were about 100 tables in a hierarchical structure describing a customer application: Customer A has phone B, confirmed by person C, that has an address D …
· The previous application could be a reasonable starting point; however, since it cannot simply be reused, the requirement was to create a full copy of the previous application (with all dependencies) inside the same database.
· Environment: All tables have a single-column synthetic primary key generated from the shared sequence; all tables are linked by foreign keys
· Restrictions: the data model changes frequently, so hard-coding is not allowed. Requests must be processed in real time so there is no way to disable constraints or play any other data transformation tricks.
What is involved in the cloning process in this case? Let’s start from the second level of the chain Customer → Phone → Reference → Address.
This situation definitely requires some kind of storage (an associative array in this case) to keep information about old/new pairs (find the new ID using the old one), as shown here:
type pair_tt is table of number index by binary_integer;
v_Pair_t pair_tt;
A nested table data type is needed to keep a list of primary keys to be passed to the next level. What if we need to go from employees to their addresses? In addition, we need a database-level object because package-level ones cannot be used in SQL. The code to accomplish this is shown here:
create type id_nr is object (id number);
create or replace type id_tt is table of id_nr;
At this point, the logical steps are as follows:
1. Take a collection of old phone IDs.
2. Retrieve all references corresponding to the detected phones.
3. Collect a list of the involved references to be passed further down.
4. For each row, retrieve a new ID from the sequence (and record the old/new pair in the global package variable).
5. Substitute the reference ID and phone ID with the new values.
6. Write the new references to the database.
The code to accomplish these steps is shown here:
declare
type rows_t is table of REF%rowtype;
v_rows rows_t;
v_new_id number;
v_parent_tt id_tt:=id_tt();
begin
select * bulk collect into v_rows
from REF t where PHONE_ID in
(select id from table (cast (v_oldPhone_tt as id_tt)));
for i in v_rows.first..v_rows.last loop
v_parent_tt.extend;
v_parent_tt(v_parent_tt.last):=id_nr(v_rows(i).REF_ID);
select object_Seq.nextval into v_new_id from dual;
clone_pkg.v_Pair_t(v_rows(i).REF_ID):=v_new_id;
v_rows(i).REF_ID :=v_new_id;
v_rows(i).PHONE_ID:=clone_pkg.v_Pair_t(v_rows(i).PHONE_ID);
end loop;
forall i in v_rows.first..v_rows.last
insert into REF values v_rows(i);
-- pass v_parent_tt into the next module to clone addresses
end;
Going to the next level, from references to addresses, the only changes are the table and column names:
declare
type rows_t is table of ADDRESS%rowtype;
v_rows rows_t;
v_new_id number;
v_parent_tt id_tt:=id_tt();
begin
select * bulk collect into v_rows
from ADDRESS t where REF_ID in
(select id from table (cast (v_oldRef_tt as id_tt)));
for i in v_rows.first..v_rows.last loop
v_parent_tt.extend;
v_parent_tt(v_parent_tt.last):=id_nr(v_rows(i).ADDR_ID);
select object_Seq.nextval into v_new_id from dual;
clone_pkg.v_Pair_t(v_rows(i).ADDR_ID):=v_new_id;
v_rows(i).ADDR_ID :=v_new_id;
v_rows(i).REF_ID:=clone_pkg.v_Pair_t(v_rows(i).REF_ID);
end loop;
forall i in v_rows.first..v_rows.last
insert into ADDRESS values v_rows(i);
end;
Interestingly enough, the only changes needed were the combination of table/primary key/foreign key names and a variable to hold the parent IDs (phone IDs in the first case and reference IDs in the second). But this means that we can easily build any of these levels on the fly using dynamic SQL, since the first piece of data is accessible from the Oracle data dictionary and a list of IDs can be passed as a parameter (user-defined object types are supported!).
First, we need a data type and a data dictionary viewer that will take a table name and return a list of its child tables. The code to do this is shown here:
type list_rec is record
(table_tx varchar2(50), fk_tx varchar2(50), pk_tx varchar2(50));
type list_rec_t is table of list_rec;
function f_getChildrenRec (in_tablename_tx varchar2) return list_rec_t is
v_out_t list_rec_t;
begin
select fk_tab.table_name, fk_tab.column_name fk_tx, pk_tab.column_name pk_tx
bulk collect into v_Out_t
from
(select ucc.column_name, uc.table_name
from user_cons_columns ucc,
user_constraints uc
where ucc.constraint_name = uc.constraint_name
and constraint_type = 'P') pk_tab,
(select ucc.column_name, uc.table_name
from user_cons_columns ucc,
(select constraint_name, table_name
from user_constraints
where r_constraint_name = (select constraint_name
from user_constraints
where table_name = in_tablename_tx
and constraint_type = 'P'
)
) uc
where ucc.constraint_name = uc.constraint_name ) fk_tab
where pk_tab.table_name = fk_tab.table_name;
return v_out_t;
end;
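Assuming these declarations live in the CLONE_PKG package (as the calls below expect), a quick sanity check of the dictionary viewer might look like this:
declare
    v_children_t clone_pkg.list_rec_t;
begin
    -- hypothetical check: list all tables referencing CUSTOMER
    v_children_t := clone_pkg.f_getChildrenRec('CUSTOMER');
    for i in 1..v_children_t.count loop
        dbms_output.put_line(v_children_t(i).table_tx||
            ' (FK '||v_children_t(i).fk_tx||', PK '||v_children_t(i).pk_tx||')');
    end loop;
end;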
Second, we can build generic processing logic based on the already discovered template that would call itself recursively until it reaches the end of the parent-child chains as shown here:
procedure p_process (in_list_rec list_rec, in_parent_list id_tt) is
v_execute_tx varchar2(2000);
begin
v_execute_tx:=
'declare '||
' type rows_t is table of '||in_list_rec.table_tx||'%rowtype;'||
' v_rows rows_t;'||
' v_new_id number;'||
' v_list clone_pkg.list_rec_t;'||
' v_parent_list id_tt:=id_tt();'||
'begin '||
' select * bulk collect into v_rows '||
' from '||in_list_rec.table_tx||' t where '||in_list_rec.fk_tx||
' in (select id from table (cast (:1 as id_tt)));'||
' if v_rows.count()=0 then return; end if;'||
' for i in v_rows.first..v_rows.last loop '||
' select object_Seq.nextval into v_new_id from dual;'||
' v_parent_list.extend;'||
' v_parent_list(v_parent_list.last):=id_nr(v_rows(i).'
||in_list_rec.pk_tx||');'||
' clone_pkg.v_Pair_t(v_rows(i).'||in_list_rec.pk_tx||'):=v_new_id;'||
' v_rows(i).'||in_list_rec.pk_tx||':=v_new_id;'||
' v_rows(i).'||in_list_rec.fk_tx||':=clone_pkg.v_Pair_t(v_rows(i).'
||in_list_rec.fk_tx||');'||
' end loop;'||
' forall i in v_rows.first..v_rows.last '||
' insert into '||in_list_rec.table_tx||' values v_rows(i);'||
' v_list:=clone_pkg.f_getchildrenRec('''||in_list_rec.table_tx||''');'||
' if v_list.count()=0 then return; end if;'||
' for l in v_list.first..v_list.last loop '||
' clone_pkg.p_process(v_list(l),v_parent_list);'||
' end loop;'||
'end;';
execute immediate
v_execute_tx using in_parent_list;
end;
The last piece needed is an entry point that will take a root object and clone it. To start the whole process you need to know the starting table (CUSTOMER), its primary key column (CUST_ID) and the primary key of the root element to be cloned:
procedure p_clone (in_table_tx varchar2, in_pk_tx varchar2, in_id number) is
v_new_id number;
procedure p_processRoot is
v_sql_tx varchar2(32000);
begin
v_sql_tx:=
'declare '||
' v_row '||in_table_tx||'%rowtype; '||
' v_listDirectChildren_t clone_pkg.list_rec_t; '||
' v_parent_list id_tt:=id_tt(); '||
' v_old_id number:=:1; '||
' v_new_id number:=:2; '||
'begin '||
' select * into v_row from '||in_table_tx||
' where '||in_pk_tx||'=v_old_id;'||
' v_row.'||in_pk_tx||':=v_new_id;'||
' clone_pkg.v_Pair_t(v_old_id):=v_new_id;'||
' v_parent_list.extend;'||
' v_parent_list(v_parent_list.last):=id_nr(v_old_id);'||
' insert into '||in_table_tx||' values v_row;'||
' v_listDirectChildren_t:=clone_pkg.f_getChildrenRec(:3);'||
-- guard added: FIRST..LAST on an empty collection would raise an error
' if v_listDirectChildren_t.count()=0 then return; end if;'||
' for i in v_listDirectChildren_t.first..v_listDirectChildren_t.last'||
' loop'||
' clone_pkg.p_process(v_listDirectChildren_t(i),v_parent_list); '||
' end loop;'||
'end; ';
execute immediate v_sql_tx using in_id,v_new_id,upper(in_table_tx);
end;
begin
clone_pkg.v_Pair_t.delete;
select object_seq.nextval into v_new_id from dual;
p_processRoot;
end;
The only step left is to assemble all of these pieces into a single package to create a completely generic cloning module. The whole package can be downloaded from the Dulcian website. (See Papers and Presentations/papers by conference/IOUG 2007)
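Once the package is in place, cloning a root object (a hypothetical customer with ID 1001 here) becomes a one-liner:
begin
    -- clones customer 1001 and, recursively, all of its dependent rows
    clone_pkg.p_clone('CUSTOMER','CUST_ID',1001);
end;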
Security issues
Many developers fear using dynamic SQL because of notorious SQL injection attacks. If you build code on the fly, you run the risk of something unexpected happening, but it would take a lot of simultaneous mistakes to cause a serious problem.
Powerful administrative tools that are simple to create for your own use (like drop_any_index or copy_any_table) should only be accessible to those with administrative privileges. If you need to make some dynamic SQL modules visible to end users, applying just two rules can keep your system safe:
· If the end users' input will influence the structure of the code, the available options should be based only on the repository elements without being able to alter the repository. That is the sole prerogative of administrators or power users. End users should only select from a previously populated list of functions.
· If the end user's input is simply data (e.g. values of columns), these values must be passed to the dynamic SQL using bind variables. Since these variables cannot affect the structure of the query, whatever users type will not cause any problems (see the generic wrapper example and the sketch below).
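To make the second rule concrete, here is a sketch contrasting the unsafe concatenation with the safe bind-variable form (t_emp and i_userInput_tx are hypothetical names):
-- UNSAFE: user input becomes part of the statement text,
-- so input like X' or '1'='1 would change the query structure
execute immediate
    'delete from t_emp where ename='''||i_userInput_tx||'''';
-- SAFE: the same value is passed as a bind variable and
-- can never alter the structure of the statement
execute immediate
    'delete from t_emp where ename=:1' using i_userInput_tx;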
And that is everything you need to worry about – it is technically impossible to have code injection if (a) privileges are properly managed and (b) code is written in a sane way.
Summary
Dynamic SQL does:
· significantly extend the list of available options for resolving many problems.
· provide extra maneuvering room in production environments.
Dynamic SQL should NOT:
· be considered a substitute for good analysis.
· be used where “regular” solutions are valid.
Keep in mind that dynamic SQL is just a feature. It is not in and of itself good or bad. It is simply very powerful and easy to misuse.
About the Author
Michael Rosenblum is a Development DBA at Dulcian, Inc. He is
responsible for system tuning and application architecture. He supports Dulcian
developers by writing complex PL/SQL routines and researching new features. Mr.
Rosenblum is the co-author of PL/SQL for Dummies (Wiley Publishing, 2006). Michael is a frequent presenter at various regional and national Oracle user group conferences.