Thursday, April 2, 2009

Designing the mapping/workflow efficiently - part II

In a few of my other blog articles, I've mentioned push-down optimization. In simple terms: push as much processing as possible into the source or target database, based on the necessity. I will explain a few instances where we can do so.

Using analytics

Instead of using an Aggregator transformation, or a combination of Sorter/Expression/Filter transformations when the occasion demands them, we can make use of the analytic functions of the database. For example, while reading your source data from a table, you may need to identify the last record from the source. For this requirement, the first option that comes to mind is something like this:

One source pipeline picks up the data records while a second pipeline counts the number of source records; you then compare that count with the current record number. When they are equal, the current record is the last record from the source.
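Expressed in rough SQL terms (the table name is a placeholder, and in the mapping the running record number would really come from a Sequence Generator or a variable port), the second pipeline is doing nothing more than:

select count(*) as total_recs
from   table_name
/

The two pipelines are then joined, and an Expression flags the record whose running record number equals total_recs as the last record.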

As I said earlier, people resort to a count(*) with GROUP BY in SQL, or an Aggregator/Sorter/Expression/Filter combination, for the same purpose. Instead, the analytic functions give you greater flexibility. The same task can be achieved with the SQL below.

select col1,
       col2,
       lead (col1) over (order by input_sort_criteria) as next_record
from   table_name
/

For the last record according to the given "input_sort_criteria", next_record will be NULL.
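Building on the same SQL (table and column names are still placeholders), the NULL can be turned into an explicit flag right in the Source Qualifier's SQL override, so a downstream Filter or Router only has to test a single port. This is just a sketch, not the only way to write it:

select col1,
       col2,
       case
         when lead (col1) over (order by input_sort_criteria) is null then 'Y'
         else 'N'
       end as last_record_flag
from   table_name
/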


Avoid Stored Procedures

I have seen many mappings with unnecessary usage of stored procedures. Stored procedures "are" performance killers. If run for every record in the source, stored procedures consume much of the workflow run-time: a database round trip has to be made for each record before the execution takes place in the database. As you can see, this behaviour leads to very poorly performing interfaces.

A general guideline is that when you can do it in PowerCenter using transformations, do not use stored procedures.
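As an illustration (the tables and columns below are made up, not from any real mapping), a stored procedure that is called once per record merely to fetch a description for a code can usually be replaced by a join in the Source Qualifier's SQL override, so the database is visited once for the whole pipeline:

select o.order_id,
       o.status_code,
       r.status_desc               -- was: a per-record stored procedure call
from   orders o
       left outer join status_ref r
         on r.status_code = o.status_code
/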


Conclusion:

The two sections in this article may seem to conflict with each other: in the first, I've urged you to push as much processing as possible to the database, and in the second, I'm suggesting you avoid the database.

The difference is that when you connect to the database using a Lookup, a Source Qualifier or pre/post-session SQL, you are really connecting to the database only ONCE for all the source records. When you connect to the database using a Stored Procedure transformation that is called for each record, you are connecting to the database as many times as there are records in the source.

When you have a vehicle (a truck for loading and unloading) at your disposal, would you carry 100 bricks from one place to the other in a single trip, or one brick per trip?

3 comments:

Anonymous said...

Hi,

Just wanted to make a quick comment. One of the main reasons for using an ETL tool is to make solutions independent of database vendor.

Using analytic functions makes your mappings database-vendor dependent.

Regards

Lars

Radhakrishna Sarma said...

Hi Lars,

Thank you for your comment. I am not sure that database independence is always a goal of ETL usage; by that standard, most mappings that use SQL overrides and Lookup overrides would fail. On top of that, it is a trade-off between performance and flexibility that the organization has to weigh. My posts in this blog are relevant in certain contexts.
I thank you very much for your comment. Please keep giving feedback.

Cheers
Sarma.

Anonymous said...

I have a query, RK...
I need to read only, say, the first 1 or 2 lines of a HUGE flat file (say 20 gigs).

How can I do that fast, without loading the whole file into cache, i.e. without reading all the millions of records?

One answer I know: use a Unix command such as "head" or "sed" to cut the first two lines into a new, small file and then read that file.

Any other way? Can we use the Rank transformation?

 