One of our customers is currently going through a major modernization project, involving the introduction of several new custom applications, built on new development frameworks. This has prompted a lot of discussion about the best way to do things in terms of application design and the use of our overall technology stack. Even though I am not directly involved in most application development (I’m responsible for production operations), I have seen the way the wind is blowing as I assist with the deployment of new development environments and have taken the opportunity to sing the praises of the SmartDB approach, as advocated by Bryn Llewellyn and Toon Koppelaars.
In a nutshell, SmartDB emphasizes using the database as a processing engine that exploits the full power of set-based SQL, and wrapping application data in a PL/SQL API shell that contains as much of the organization’s business logic as possible. Front-end developers then use this API to power whatever the framework of the day is, without having to re-invent the wheel every time that framework changes with new technology or security concerns.
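The shape of that API shell can be sketched in miniature. The real thing would be a PL/SQL package, but this Python/SQLite analogue (the `connect` and `place_order` functions, the `customer` and `orders` tables, and the credit-limit rule are all hypothetical) illustrates the pattern: callers go through a narrow API that owns the business rule, rather than touching tables directly.

```python
import sqlite3

def connect():
    """Stand-in for the database schema the API shell sits around."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, credit_limit REAL)")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT,"
                 " cust_id INTEGER, amount REAL)")
    return conn

def place_order(conn, cust_id, amount):
    """The only sanctioned way to create an order. The business rule lives
    here, next to the data, not in whichever front end is fashionable
    this year."""
    row = conn.execute("SELECT credit_limit FROM customer WHERE id = ?",
                       (cust_id,)).fetchone()
    if row is None:
        raise ValueError("no such customer")
    spent = conn.execute("SELECT COALESCE(SUM(amount), 0) FROM orders"
                         " WHERE cust_id = ?", (cust_id,)).fetchone()[0]
    if spent + amount > row[0]:
        raise ValueError("credit limit exceeded")
    conn.execute("INSERT INTO orders (cust_id, amount) VALUES (?, ?)",
                 (cust_id, amount))
```

When the front-end framework changes, `place_order` and its rule stay put; only the thin calling layer is rewritten.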
One thing that going over this material again and again over the last few weeks has driven home for me, especially as our customer considers how best to implement their various choices of development frameworks, is just how much of a fairy tale the promises of middle-tier application servers turned out to be when it comes to processing business logic. I won’t repeat everything that Bryn and Toon have already published on the performance gains to be had by moving business logic processing out of the middle tier and back into the database (see the link above), but I will say this: in the world of Cloud computing, where compute time literally is money, it makes far more sense to leverage your entire technology stack – whatever it is – for all of its available efficiency and performance features than to ignore them completely.
For data processing, that means using your database – whether Oracle or any other – as a processing engine and using the full power of set-based SQL instead of the row-by-row processing forced on us by the object-relational mapping (ORM) frameworks used by most mid-tier programming languages. Any real database engine using set-based SQL will outperform an ORM-generated, row-by-row mid-tier application, every time.
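The difference can be sketched with a toy example (Python with the standard sqlite3 module; the `emp` table and the 10% raise rule are hypothetical). The row-by-row version drags every row into the client and issues one UPDATE per row – the pattern a typical ORM generates – while the set-based version hands the whole job to the engine in a single statement:

```python
import sqlite3

def setup():
    """Hypothetical emp table: odd ids in SALES, even ids in HR."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE emp (id INTEGER PRIMARY KEY, dept TEXT, salary REAL)")
    conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                     [(i, "SALES" if i % 2 else "HR", 1000.0)
                      for i in range(1, 101)])
    return conn

def raise_row_by_row(conn, dept, pct):
    """ORM-style: fetch each row into the client, compute there, write back.
    Every row costs a separate statement execution (and, over a network,
    a round trip)."""
    rows = conn.execute("SELECT id, salary FROM emp WHERE dept = ?",
                        (dept,)).fetchall()
    for emp_id, salary in rows:
        conn.execute("UPDATE emp SET salary = ? WHERE id = ?",
                     (salary * (1 + pct), emp_id))

def raise_set_based(conn, dept, pct):
    """Set-based: one statement; the engine applies the change in a
    single pass over the qualifying rows."""
    conn.execute("UPDATE emp SET salary = salary * (1 + ?) WHERE dept = ?",
                 (pct, dept))
```

Both produce the same data; the set-based version simply lets the engine do what it is built for, which is where the performance gap comes from at scale.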
This exposes one of the biggest lies of the multi-tiered computing fad of the last 15 years: that by moving all of our business logic out of the database (where we correctly put it in the ’90s) and treating our data as objects obscured by an ORM, we could make our applications “database independent” and improve their performance at the same time. In theory this would have allowed us to take advantage of any new database technology that came along without having to rebuild our applications. The problem is that our ORMs and application development frameworks change far more often than our database technologies do, and that ORMs don’t really take advantage of the processing power of a database at all. Instead, they introduce a load of unnecessary computational overhead and leave the set-based processing features of the database layer largely inaccessible.
I get that it makes sense for a commercial software developer to want a certain amount of database independence in their application design so that they can sell their product to the widest possible audience, but for those of us supporting custom, in-house applications this generally isn’t a concern. Our customer, for example, like many corporations:
- Has invested consistently in a specific database technology like Oracle, SQL Server, or DB2 for decades; they have no interest in changing course – giving up on that investment and starting over – without a very, very strong reason
- Has most of their key business management databases (and often many, many others) already built on that specific database technology
- Needs many of those databases to be inter-operable, exchanging data regularly, frequently and efficiently
Given that hot new development frameworks seem to come and go about every 5-6 years (lasting maybe 10 or so at most), while database investments tend to last for multiple decades (going on 40 years in our case), it makes even more sense for most of us to embrace SmartDB. Following its dictates to their logical conclusion, we should be looking not only to move business logic out of the middle tier, but also to leverage every possible feature of our chosen database to make data processing as efficient and scalable as possible.
Our reality is not – and never has been – a quest for database independence in our technology stack, because the truth is that as corporate consumers the database is the least likely part of that stack to change: it simply costs too much to re-engineer that significant a portion of our infrastructure. Rather, our goal should be application framework independence. In the long run that is what will net the biggest cost savings for our companies or customers in the world of virtualization and Cloud computing, as they put the right types of workloads in the right places on the stack while reducing the frequency and complexity of maintaining back-end business logic.