08
Sep
16

The Frustrated DBA

While perusing recent postings on the Ask Tom web site, I chanced across this one, entitled “Frustrated DBA”, in which John asks the following question:

“Is it just me or does doing the “typical DBA” tasks like installing, patching, backing up, checking backups, monitoring databases, creating users, checking space and other mundane tasks get boring after doing it for a few years? In the first few years, it was cool since everything was new to me.

Now, 10 years later, only about 4-8 weeks per year are devoted to cool stuff like resolving performance problems and restoring data. I stay up to date with the latest Oracle technology, but I only get to use a small percentage of it and I get to use setup only a few times.

Tom, please advise on how one can get away from doing the typical DBA work to doing other types of DBA work that aren’t seen by the industry as DBA tasks like PL/SQL coding, data modeling, creating solutions with APEX, taking on web server tuning, etc.?”

A great deal of debate follows over what it means to be a DBA, a Developer, a DBA/Developer, and a Developer/DBA, but eventually one reviewer submits the following matrix, which describes a sliding scale of skills from pure DBA (0) to pure Developer (10):

    DBA       DEVELOPER
    SKILLS       SKILLS

 0: X X X X X X X X X X
 1: X X X X X X X X X O
 2: X X X X X X X X O O
 3: X X X X X X X O O O
 4: X X X X X X O O O O
 5: X X X X X O O O O O
 6: X X X X O O O O O O
 7: X X X O O O O O O O
 8: X X O O O O O O O O
 9: X O O O O O O O O O
10: O O O O O O O O O O

Given this matrix, I thought I would attempt to answer this question in my own way.

John speaks of becoming bored with the more mundane aspects of being a DBA and craves more interesting and creative things to do with his time. This is – in my humble opinion – a potential problem with any job. Even a pure developer can fall prey to it if all they are doing is porting old code to new operating systems or fixing bugs in someone else’s programs. You become trapped at one extreme of the scale or the other, stuck in a niche that can feel impossible to escape. In my experience, the best defense is to always be on the lookout for ways to make things better than they used to be: look for opportunities to expand your skills into the middle of the scale, and find ways to improve your own little corner of the world.

I have been blessed to work mostly in small shops over the years, usually as a contractor working for someone else. I will be the first to admit that this has afforded me more flexibility in my job responsibilities than many people get, but I believe the lesson applies just as well to more narrowly defined jobs: in order to do the specific job I was hired to do well, I had to know a fair amount about the job next door as well.

For example: I started my career as a pure developer (the database was definitely a black box for me), but quickly found that in order to do that development work really well I had to know a fair amount about how the database worked too. After a year on the development team I had found a way to leverage the database that reduced the total lines of application code by over 95%, decreased maintenance time for changes from days or weeks to minutes, and improved system performance dramatically. Next thing I know, I’m a DBA.

Turns out that what I learned as a developer about writing code helped me be a better DBA because I could create better automation while anticipating both the developer’s needs and their stupid mistakes. Before long I was able to tune disk I/O and memory configurations to increase performance by orders of magnitude. I introduced auditing that improved security and identified previously undetected issues with developer behaviors and software bugs in their tools that were causing unplanned system outages. Then I found out that to be a really good DBA I also needed to understand something of the operating system and storage arrays the database runs on, and the network. Pretty soon I was a system/network administrator.

What I knew about being a DBA helped me be a better sysadmin because I could scope and scale hardware more accurately and configure it properly for database use. This allowed me to completely rebuild a corporate development and testing data center from scratch. The progression of jobs continued through a couple more layers of the IT industry, but now I’m back to being a production support DBA and I find that all of those other things continue to make me better at that as well: I can converse intelligently about system and network requirements when working through a tech refresh of our infrastructure; build a server, database, and application infrastructure that is highly available and secure on a shoestring budget; monitor and tune my databases and manage storage with ASM; run backups; install patches; create users; and advise developers competently on security and performance considerations because I have all of those other experiences to draw on. More recently I have used those experiences to help a customer rework much of their Oracle and corresponding *NIX infrastructure design into something much more flexible, reliable, and secure than they had before.

While opportunities to expand your job role may be harder to come by in a large shop with more separation of duties, my advice is to always be on the lookout for ways to improve the way you perform your particular job responsibilities – through new automation, enhanced monitoring or shell scripting, introduction of new hardware or technologies like SSD storage, or whatever else you can think of. Aim for somewhere in the 4-6 range of the DBA/Developer skills scale and really get to know more about other closely related jobs – even if for no other reason than to understand their requirements or frustrations – and it is hard to go wrong.

Never accept that you’ve learned all there is to learn or get stuck in that “this is the way we’ve always done it” mentality and you’ll never get bored.

25
Apr
16

How to Load Large WebSheets into APEX

A few years ago I started keeping most of my system documentation in APEX websheets. It’s great for keeping things organized and readily available no matter how I’m accessing my work network (on-site, VPN, etc.). A lot of my installation documents have embedded screenshots, and the png files are attached to the websheet as annotations. Everything worked great until we set up a new APEX server and database and I had to migrate my documentation to the new system.

I used the APEX user interface to export the websheet, including all of its annotations; the total size was well over 100MB. When I tried to import that file into the new system, I discovered that APEX has an internal limitation on the size of any import file. I could upload my file into the APEX repository with no problem, but it would not install. I don’t know if it is a bug or an undocumented feature, but I was unable to install any websheet file larger than about 10-15MB.

While attempting the installation, I observed the following:

  • Disk I/O takes place, primarily in the TEMP tablespace.
  • The database session performing the installation hangs forever showing a lot of “direct path read temp” waits.
  • The web session in the APEX user interface eventually times out.
  • No error appears in any log file: web server, Tomcat, or database.

As a work-around I tried running the installation directly, executing the websheet SQL file from SQL*Plus. This too failed, showing the same symptoms. At this point I turned to Oracle Support, and learned that the APEX UI import process adds many layers of processing to the installation, and that running the import from SQL*Plus requires some additional, undocumented steps for a file as large as the one I was trying to load.

The APEX import process loads the websheet export file from the client browser into the database as a Binary Large Object, or BLOB. In this form the file cannot be run as a SQL script, so it must be converted into Character Large Object (CLOB) data. Once that step is complete, the character data must be parsed to extract the individual SQL statements, and those SQL statements are finally run using calls to DBMS_SYS_SQL. Once the import file reaches a certain size, or contains a certain number of SQL statements, the parsing process breaks down, generating so many waits that it never finishes. While testing I let the import process run for as long as three days without completing.

All of that extra overhead goes away if you load the export file using SQL*Plus, but this requires a little undocumented pre-configuration to get a good result. First, the target websheet schema account in the new workspace must be granted the APEX_ADMINISTRATOR_ROLE.
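
The grant itself is a one-liner, run as SYS or another suitably privileged account on the new database (DOCS_SCHEMA below is a hypothetical schema name – substitute your own):

-- Grant the APEX administrator role to the target websheet schema
grant APEX_ADMINISTRATOR_ROLE to DOCS_SCHEMA;

Second, the following SQL code should be wrapped around the import file to make sure that the target schema is properly recognized by the export script. If the session isn’t properly configured, the import will fail with an “ORA-02291: integrity constraint (APEX_050000.WWV_FLOW_WS_APPLICATIONS_FK) violated – parent key not found” error. Substitute the name of the target workspace where shown.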

-- Configure the session
declare
   l_workspace_id number;
begin
   select workspace_id
     into l_workspace_id
     from apex_workspaces
    where workspace = '[TARGET WORKSPACE NAME]';

   apex_application_install.set_workspace_id( l_workspace_id );
   apex_application_install.generate_application_id;
   apex_application_install.generate_offset;
end;
/

-- Call the websheet script
@wsNNN.sql

Using this approach, websheet files of almost any size can be loaded into the database. I was able to get my documentation – including all binary annotations – loaded into the target database in a matter of minutes.

18
Apr
16

APEX Mystery Solved

I have used Oracle APEX in various capacities since version 3. Since Oracle introduced their packaged applications in version 4 I have played around with most of them from time to time and currently use several of them in my daily work. P-Track, ORAchk, WebSheets and others make my job a lot easier, and the new design interface in APEX 5 is fantastic. I am currently using APEX 5.0.2, ORDS 3.0.2, Tomcat 8, and Oracle Database 11.2.0.4.

One of the apps that has always seemed to have a lot of potential – but never quite lived up to it, in my opinion – is the Survey Builder. The user interface is not perfect, but the app offers functionality generally seen only in commercial products like Survey Monkey: complicated surveys can be generated, distributed, and tracked with ease. It has one really major flaw, however, that always prevented me from using it in any real sense: the page that displays the survey to the user randomly displays a blank white screen.

When I looked at the survey page’s source HTML code using Internet Explorer (Firefox showed no source code at all), I saw a truly blank document, with nothing between the <BODY> tags:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML><HEAD>
<META content="text/html; charset=windows-1252" http-equiv=Content-Type></HEAD>
<BODY></BODY></HTML>

The problem was maddeningly inconsistent. Sometimes the survey would display with no problems, other times it would display correctly in Internet Explorer but not Firefox, or vice versa. Sometimes it wouldn’t work for any browser. The only thing that seemed to be completely predictable was that once a particular browser failed, the survey would not load correctly again until Tomcat was restarted.

People have noticed this and commented about it for years, posting in various online forums with threads like this one, offering advice on tuning Tomcat and such to minimize (but not eliminate) the problem. I tried their recommendations but never had any real success; adjusting Tomcat parameters didn’t really seem to make much difference other than to delay (by only a few minutes) the inevitable, and I couldn’t deploy surveys to my client base without being confident that they would be able to actually see them. Recently however a customer had a real need for this application’s capabilities, and I took another look at what was really going on in the application code in an effort to finally get it working the way it should.

This time I started with the code because I had already spent a lot of time troubleshooting Tomcat and various browser configurations to no avail. Since no other part of APEX was having issues – only the delivery of this single, essentially static page – I had no reason to think that Tomcat was the culprit other than the various forum posts I had already found. If it had been, I would have expected to see much more widespread problems with APEX, and that just wasn’t happening.

What I did find after unlocking the application code and taking a look under the hood is that the Survey Builder application does things a little differently than other APEX apps in an attempt to improve performance and the end-user experience. The process is described in detail in this blog post from one of the original Survey Builder developers, but the gist of it is that when the survey is published by the author its HTML is pre-rendered and stored in a database column rather than being built dynamically at run time like all other APEX pages. This decreases the amount of processing time required to deliver the page to the user. When the user connects to the survey, the pre-rendered, effectively static content of the survey page is then delivered to the browser through a PL/SQL call.

After examining the content of the pre-rendered survey page in the EBA_SB_PAGES table, it appeared that the survey generation was working correctly. The HTML code stored in the database was complete and correct in every way. The fact that the survey was actually delivered correctly on some occasions supported this.
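
If you want to run the same check on your own system, a quick query (a sketch of my own, based on the column names used in the page process shown below) will confirm that the rendered HTML is present and when it was generated:

-- Check that the pre-rendered survey HTML exists and is current
select p.key, dbms_lob.getlength(p.page_content) as html_length, p.last_updated
  from eba_sb_pages p
 order by p.last_updated desc;

How then, if the survey was being rendered correctly, was it not being delivered consistently?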

The answer turned out to be in the application code itself. The same APEX page – page 100 of the Survey Builder application – is used to deliver the survey whether in testing mode or actual deployment. There is almost no code on the page, except a “static content” process which begins with a simple query to retrieve the static survey content and an exception to return a “not found” page if the query returns no results:

  begin
    select apex_escape.html(q.lang) lang,
      from_tz(to_timestamp(to_char(p.last_updated,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS'),
        to_char(systimestamp, 'TZR')) at time zone 'GMT',
      p.page_content
    into l_lang, l_last_updated_ts, l_page_content
    from eba_sb_pages p, eba_sb_questionnaires q
    where p.questionnaire_id = q.id and p.key = l_key;
  --
  exception when NO_DATA_FOUND then
    -- The web listener doesn't have the ability to return app specific 404 messages
    -- so return 200 with a message saying the survey is not found
    sys.owa_util.status_line(
      nstatus => 200,
      creason => 'Not Found',
      bclose_header => false);
    sys.owa_util.mime_header('text/html', false);
    sys.owa_util.http_header_close;
    --
    sys.htp.p('<html><head><title>Not Found</title></head><body>Survey not found</body></html>');
    wwv_flow.g_page_text_generated := true;
    return;
  end;

I wasn’t seeing “not found”, and I could confirm that the survey code was correctly stored in the database, so it seemed logical to assume we were making it past this part of the code without error. The next part of the procedure checks the HTTP_IF_MODIFIED_SINCE header to see if the survey had been previously downloaded, and if so, returns an HTTP 304 code instructing the browser to load the page from its own cache – further improving response time. If the header value indicates that the page had not previously been downloaded, the code should return an HTTP 200 code and download the page normally.

  --
  -- Get Last Modified from the HTTP header and compare. If equal, emit an HTTP 304 to the browser
  -- and return, indicating that the file should be used from the browser cache
  --
  l_modified_since := sys.owa_util.get_cgi_env('HTTP_IF_MODIFIED_SINCE');
  --
  if l_modified_since is not null then
    --
    -- Account for the rare case when HTTP_IF_MODIFIED_SINCE contains a proprietary Netscape extension, a semicolon separated value
    if instr(l_modified_since,';') > 0 then
      l_modified_since := substr(l_modified_since,1,instr(l_modified_since,';')-1);
    end if;
    --
    l_modified_since_ts := to_timestamp_tz(l_modified_since, 'Dy, DD Mon YYYY HH24:MI:SS TZR', 'NLS_DATE_LANGUAGE = AMERICAN');
    if l_last_updated_ts = l_modified_since_ts then
      sys.owa_util.status_line(
        nstatus => 304,
        creason => 'Not Modified',
        bclose_header => true);
      wwv_flow.g_page_text_generated := true;
      return;
    end if;
  end if;

The end of the procedure transmits a number of HTTP headers as well as the survey content itself:

  --
  l_content_length := sys.dbms_lob.getlength(l_page_content);
  --
  sys.owa_util.mime_header('text/html', false, 'utf-8');
  sys.htp.p('Content-Length: ' || l_content_length);
  sys.htp.p('Content-Language: ' || l_lang);
  sys.htp.p('Last-Modified: ' || to_char(l_last_updated_ts, 'Dy, DD Mon YYYY HH24:MI:SS', 'NLS_DATE_LANGUAGE = AMERICAN') || ' GMT' );
  sys.htp.p('Cache-Control: must-revalidate, max-age=0');
  sys.htp.p('Expires: 0');
  --
  sys.owa_util.http_header_close;
  --
  sys.wpg_docload.download_file(l_page_content);
  --
  -- set page generated to true in order to stop rendering after completion of this PL/SQL block
  wwv_flow.g_page_text_generated := true;
end;

None of these headers were showing up in my “blank” page, so it also seemed reasonable to believe that we were never reaching this part of the procedure. It appeared that the application code was incorrectly identifying the survey as being in the browser cache, when it was actually not. The browsers then responded by showing a blank page.

I presented my findings to Oracle Support and they were able to duplicate my test results and essentially confirmed what I was seeing:

In house assessment is basically correct except the code never returned an HTTP 304 status. It never got that far.

Was able to duplicate the issue you described below. Added debug to the opening of the questionnaire page (100) and found that when the page rendered blank, the call to sys.owa_util.get_cgi_env(‘HTTP_IF_MODIFIED_SINCE’) (to populate the l_modified_since variable) was returning a very strange string not at all resembling a timestamp.

Since the l_modified_since variable was not null, the rest of the PL/SQL was trying to convert it into a timestamp (and failing)…

Also added a call to apex_debug.message() to show the value of the l_modified_since variable and found that whether the variable was null or populated correctly, the page rendered as expected. It only rendered blank when the timestamp conversion failed.
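
For anyone wanting to reproduce that instrumentation, a minimal sketch (my own, assuming the page is run with debug enabled so the messages are captured) is to log the raw header value near the top of the page 100 process, before any conversion is attempted:

-- Log the raw HTTP_IF_MODIFIED_SINCE value before trying to parse it
apex_debug.message('HTTP_IF_MODIFIED_SINCE = %s',
  sys.owa_util.get_cgi_env('HTTP_IF_MODIFIED_SINCE'));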

The final solution turned out to be elegantly simple: comment out the one line of code that reads the HTTP_IF_MODIFIED_SINCE header, forcing the l_modified_since variable to always be null:

--l_modified_since := sys.owa_util.get_cgi_env('HTTP_IF_MODIFIED_SINCE'); 
l_modified_since := null;

I made the change – which forces the survey page to download every time, never using the browser cache – and the page immediately loaded correctly. I have now been testing it for a couple of months and it continues to load correctly every time.
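
If you would rather preserve the caching optimization, a more defensive variant (my own sketch, not code from Oracle or the packaged application) would be to tolerate a malformed header instead of failing on it:

-- Treat an unparseable HTTP_IF_MODIFIED_SINCE as if the header were absent,
-- so the worst case is an unnecessary full page download
begin
  l_modified_since_ts := to_timestamp_tz(l_modified_since,
    'Dy, DD Mon YYYY HH24:MI:SS TZR', 'NLS_DATE_LANGUAGE = AMERICAN');
exception when others then
  l_modified_since_ts := null;
end;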

26
Oct
15

The Database, Your Way

What needs to be done?

At this point it is important to stop and consider that if you have read my previous posts in this series and followed all the links that I recommended then you will have read a lot of other people’s documentation and best practices. With a little luck you may have put some of them into practice already, or at least be able to see some opportunities or possibilities. Likely you will have ignored others or perhaps modified them to your particular circumstances. Having gotten this far, it is critical that you take a deep breath and pause long enough to begin to pull together your own set of system documentation.

A complete set of system documents should include a wide variety of topics, but most importantly it should define and describe how you have met a particular set of design requirements using a particular set of vendor recommendations, best practices, and customizations. Think of it as “the story” of your system. It should include everything anyone would need to understand how and why your system does what it does. This includes how to “deliver” it, “feed” it, “exercise” it, protect it, watch it, fix it, and clean up after it when it makes a mess (and sooner or later it will make a mess).

Some examples of individual documents that you might want to consider including are:

  • Design Description – A high level description of your system’s purpose and how you plan to achieve it. It should include enough technical language to convey your design concepts accurately without overwhelming the reader with too many of the gritty details. Save those details for the installation and operations guides.
  • Project Plan – An overview of how you plan to implement your system, including all of the major phases like planning, purchasing, installation, configuration, testing, and final deployment and acceptance.
  • Installation Guide – A step-by-step detailed set of instructions on how to install and configure the system hardware and software in your particular environment. This one is especially important: don’t just rely on vendor documentation, as it can be hard to read, may include options that don’t apply to your situation, and won’t include important details that do. Your installation instructions should be as short and to the point as possible, including only those steps and considerations that are important for you.
  • Test Plan – A detailed description of whatever acceptance and performance baseline testing you plan to do, including tools and methods you plan to use, the purpose and expected result of each test, and acceptable results.
  • Operations Guide – A simple explanation of all major administrative and maintenance operations associated with the system, and how to perform them. This guide should also include instructions for how to monitor the system for unexpected behavior.
  • Security Policy – A detailed description of the roles and responsibilities of all people associated with the system, from users to administrators. This should also include policy and procedures for security-related activities like processing new and retired users, allowing network connections to and from the system, changing access controls, password requirements, and the like.
  • Backup and Recovery Guide – Related to both security and operations, this is a topic that is often big enough to require its own manual, especially where databases are concerned. Describe your backup policies, the architecture of your backup infrastructure, backup scripts or commands, and examples of how to recover your system from various types of failure scenarios. These scenarios should be individually tested to ensure accuracy as part of your Test Plan.
  • Development Standards – Set down in writing the standards for developing your application, whether for security, architecture, naming of objects and procedures, or making decisions in various design scenarios. This may also include a list of recommended best practices or references to other reading related to your system.
  • Interface Agreements – If your system interfaces with other systems you should document the details of the connections. A description of any related subsystems, schedules, network configurations, expected data transfer protocols or system loads should be included, as well as processes for coordinating changes to the interface and points of contact.
  • Service Level Agreements – It is important to manage the expectations of your users in terms of system performance, responsiveness to problems or outages, and maintenance activity.

How can I do that?

There’s no great secret to any of these documents. Examples of all of them can be found with quick Google or Yahoo searches. The hardest thing to do is just to start writing. I usually start with an outline of what I want to include, but everyone has their own style and you may have to work a bit to figure out yours. I definitely recommend doing a little research and learning by example. Take a look at what other people have done (or what is required by your organization) and then decide for yourself what best fits your needs. If your system is small, you might be able to create what you need relatively quickly; you may not even need all of the documents I described. If your system is a bit larger, you may need all of this and more. Hopefully in that case you’re not doing all of the writing yourself, though.

Most technical people that I know (myself often included) really chafe at the idea of taking the time to create detailed documentation. We’d rather move on to the next technical challenge or work on fine-tuning our system than brain dump everything we know for someone else to not read at a later date. That is something that we all need to overcome.

How does that help?

A complete set of documentation like the one I described above can be important for all sorts of reasons. First, it can be important for us personally. Most systems – and especially Oracle databases – can be really, really complex, with a lot of “moving parts”. Think about things like network listeners, datafile management, backup and recovery, user management, performance monitoring and analysis, code reviews and enforcement of design standards, requirements analysis, security policy and implementation – and the list goes on, and that is just for the database. It is difficult, if not impossible, for any one person to keep all of the details of design, policy, installation, configuration, and operations in their head. Sooner or later we’re going to forget something. When that happens it is critical, for the sake of consistency in system operations, to have a resource that tells us what to do and the reasons for doing it in that particular way.

Then there’s the classic “what would happen if I were hit by a bus today” scenario. What would happen if someone completely unfamiliar with your system had to come in and learn how to work with it from scratch, without the benefit of your thoughtful insights and explanations as to why things have been done (or were not done) in a particular way? What would be best for your organization? A complete and concise set of documentation can make a daunting task like that a lot easier for whoever gets tasked with covering for you. And assuming that you haven’t actually been hit by a bus and are eventually able to return to your job, it would be good for things to have been maintained as close to “your way” as possible in your absence.

There are a lot of other ways that system documentation can be described as “good” and “good for you” but I’ll limit myself to one last one. The act of writing things out in the first place forces you to slow down and consider what you are creating. It is a good opportunity to reflect on your decisions and look for opportunities for future improvement. Once you start to see those, the “story” of your system will be in good hands.

09
Jun
15

License to Drive

What needs to be done?

In my last two posts I discussed the rules of the road in relation to designing and implementing an Oracle database information system, and how to evaluate compliance and overall security posture. In this post I will discuss the basic Oracle license, which has been known to cause a lot of confusion.

A proper understanding of Oracle product licensing is essential to planning and controlling the costs associated with system development and deployment. The cost and number of licenses you need may vary depending on the operating system you choose, the number of physical CPU sockets in your hardware, the number of users your system will support, and whether or not your servers are virtualized – it doesn’t take much to get completely lost. If you don’t plan and implement carefully, an audit of database usage by Oracle (yes, they do that!) could wind up costing you hundreds of thousands of dollars or more in additional fees.

The best explanation of Oracle licensing I have seen so far is the Oracle Software Investment Guide. It contains a thorough explanation of Oracle’s licensing model and includes several specific examples of how to calculate the cost of your system.
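
As a simplified illustration of the arithmetic involved (the figures here are my own assumptions – always verify the core factor and prices against the guide and your actual contract): licensing Enterprise Edition by processor on a two-socket x86 server with 8 cores per socket and a 0.5 core factor works out to 2 x 8 x 0.5 = 8 processor licenses. Change the chip architecture or the core count and the answer changes with it. Assuming that you plan carefully and purchase the correct number and type of licenses, the issue then becomes one of implementation.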

One of the most problematic and confusing aspects of installing and using Oracle database (and especially the Enterprise Edition) is that by default all of the extra bells and whistles are installed along with the core functionality. Assuming for now that you have done your homework and accurately planned out which version of Oracle and which options you will need, how can you be sure that you are correctly configured and that there won’t be feature creep in your application as developers “discover” things that you thought were locked down or disabled? Right out of the gate it is important to know how to avoid installing any more than necessary, and to know how to detect and disable the features you don’t need or don’t have the right to use. Sounds simple, right?

How can I do that?

Prior to Oracle 12c it was possible to select which extra options were enabled during the software installation process, so undesired options could be disabled right from the start. Beginning with 12c all binary options are enabled by default, and you must disable them manually after the installation is complete but before any instances are created. It is therefore important to initially perform a software-only installation. Once the Oracle software has been installed and patched appropriately for your environment, use the “chopt” utility on Windows or Linux to remove from the Oracle kernel the extra enterprise features for which you are not licensed. For most people this will likely mean disabling all of them.

12.1.0.2 usage:

chopt <enable|disable> <option>

options:
          dm = Oracle Data Mining RDBMS Files
        olap = Oracle OLAP
partitioning = Oracle Partitioning
         rat = Oracle Real Application Testing

e.g. chopt disable rat

11.2.0.3 usage:

chopt <enable|disable> <option>

options:
          dm = Oracle Data Mining RDBMS Files
          dv = Oracle Database Vault option
        lbac = Oracle Label Security
        olap = Oracle OLAP
partitioning = Oracle Partitioning
         rat = Oracle Real Application Testing

e.g. chopt disable rat

Also note that beginning with 12c some binary options can no longer be disabled in this manner: the Oracle Label Security and Database Vault options cannot be removed from the kernel.
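
One way to verify the result of your relinking (a quick sanity check, not an official licensing report) is to query V$OPTION from an instance running out of that Oracle home:

-- Options still linked into the kernel show VALUE = TRUE
select parameter, value
  from v$option
 order by parameter;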

Once all the required kernel options have been disabled, you can create your database instance using the Database Configuration Assistant, or DBCA. When your database instance has been created, check all initialization parameters related to feature usage to ensure they are set correctly as well.

  • INMEMORY_QUERY=DISABLE
    New to Oracle 12c, this setting will disable the In-Memory Column Store feature, which is separately licensed and enabled by default, and prevent the kind of accidental activation and usage described by Kevin Closson in this post.
  • CONTROL_MANAGEMENT_PACK_ACCESS=NONE
    In both 11g and 12c this setting will disable the Performance and Tuning Packs for DB Console and Oracle Enterprise Manager, which are also separately licensed and enabled by default.
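
Both parameters can be set dynamically; a minimal sketch, assuming the instance uses an spfile:

-- Disable In-Memory queries and management pack access instance-wide
alter system set inmemory_query = 'DISABLE' scope=both;
alter system set control_management_pack_access = 'NONE' scope=both;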

Check the DBA_FEATURE_USAGE_STATISTICS view on a regular basis to monitor feature usage and make sure nothing unexpected is or becomes active.

select u1.name, u1.detected_usages
from dba_feature_usage_statistics u1
where u1.version = 
  (select max(u2.version) from dba_feature_usage_statistics u2
   where u2.name = u1.name)
order by u1.name;

How does that help?

Proper enabling or disabling of database features is important on two levels. From a financial perspective it is important not to accidentally use database features for which you have not purchased a license. Oracle makes it very easy to make use of things that we should not, and eventually they will send a bill. From a security perspective it is important to disable all of the features that you are not using because it reduces the number of potential attack vectors that an intruder might use against you. If, for example, a vulnerability is found to exist in the OLAP Option that allows escalation of privileges, unlinking that feature from the Oracle kernel will neutralize the weakness. For these reasons it is critical for every database administrator to be precise in how they install and configure every database.

18
Jan
15

Passing the Test

What needs to be done?

In my last post I talked about the legal rules of the road in regard to designing and implementing a database information system, starting from the top level and drilling down to the nitty-gritty step-by-step hardening instructions, including the STIG. Once the applicable rules are understood, it is important to harden your system according to them.

I’ve posted several times on the DISA Database Security Technical Implementation Guide (STIG) and how it is the standard for Oracle database hardening within the Department of Defense, but it isn’t the only standard out there. In this post I’m going to highlight some of the other hardening checklists available, as well as some tools (free, of course) that can be used to help you conduct audits and evaluations of your system.

How can I do that?

So what resources are out there to help you harden an Oracle database? Lots. In fact, I would be tempted to say too many. A quick Google search for “Oracle Security Checklist” will produce dozens of possibilities. Here are a few of the most prominent:

  • The Oracle Database Security Guide – Often the best place to start is with the official documentation. While the documentation is certainly complete, it is also big, which makes clear just how daunting a proposition securing an Oracle database can be.
  • The Center for Internet Security’s Oracle Database Server 11g Benchmark v1.1.0 – This is a very comprehensive checklist for Oracle 11, worked up in cooperation with the National Security Agency.
  • The SANS Institute has a couple of different Security Consensus Operational Readiness Evaluation (SCORE) checklists. They are very similar to the CIS benchmarks.
  • The DISA STIGs (my favorite) are the most up-to-date checklists. DISA posts updates once or twice per year.

Even a cursory review of these checklists reveals hundreds and hundreds of control items. Some are technical (“set this parameter”, “enable or disable that option” sorts of things) and are easily checked. Many others are procedural or policy controls that require a more thorough investigation and defy any sort of automation. Just working through any checklist from beginning to end requires a significant amount of time and effort.
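
As one example of a scriptable technical control (my own illustration, not an item quoted from any particular checklist): most of these lists include a check for accounts still using Oracle-supplied default passwords, which since 11g can be answered with a single query:

-- Open accounts still using an Oracle-supplied default password
select d.username
  from dba_users_with_defpwd d
  join dba_users u on u.username = d.username
 where u.account_status = 'OPEN';

Fortunately there are also some tools available to lighten the load a bit.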

  • The Java-based DISA STIG Viewer can generate an XML form of any STIG checklist and allows you to save results that can be exported as XML or an Excel spreadsheet. Its only shortcoming is that it works with flat files and doesn’t support a multi-user format. If you’re the only one working on a checklist that’s fine, but if you’ve got multiple people trying to participate in the review then working from a single file can be a bit of a pain.
  • The Scuba Database Vulnerability Scanner by Imperva automates many of the controls that are defined in the STIGs, CIS Benchmark, and Oracle Database Security Guide. It’s a good first step in evaluating your database. The downside is that the vulnerability profile hasn’t been updated since 2009, so it may not identify some vulnerabilities correctly if your version of Oracle is more recent.
  • The Oracle Enterprise Manager can conduct security evaluations based on recommendations of the Oracle Database Security Guide. OEM carries a fair amount of infrastructure and administrative overhead, so this isn’t for everyone, but if your organization is already using it then this may be a feature you could take advantage of.

Once your security review is complete and your system has been hardened, a variety of other tools are also available to help you test the effectiveness of your configuration.

  • tnscmd is a Perl script that can be used to send commands to a remote Oracle listener.
  • Metasploit can be used to test a wide variety of infrastructure components.
  • Oracle Database Attacking Tool (ODAT) can exploit a variety of internal and external vulnerabilities.

How does that help?

At this stage of the process it’s a little hard to quantify exactly how proper hardening and security testing can help the bottom line. I look at it as proper investment in the infrastructure. It’s not enough just to know the rules. If you are really going to enforce them, you need to get down and dirty with the details and harden your system (and your application design!) as best you can. You’re going to have to get all your policies and documentation in order. You’re going to have to test your configuration to be sure it is resistant to known exploits, both internal and external. Some time and effort spent up front to do things correctly will save you a lot of time and money down the road. It could also mean the difference between your data – or your customer’s data – falling into the hands of someone it shouldn’t. The process can be time consuming, even with the tools I’ve listed here, but in the long run this investment will save you in more ways than one.

17
Jan
15

The Rules of the Road

What needs to be done?

Before a driver heads out on the road, it is important for him or her to know the rules. Not just common-sense stuff like “don’t drive on the wrong side of the road”, but the actual rules and laws that govern what you’re about to do, like obeying a speed limit. Just as a new driver getting behind the wheel needs to be well versed in those rules, there is no way for you as a DBA to responsibly and securely design or administer your Oracle database system if you don’t know what is expected of you. You need to know which software you are allowed to use and which you aren’t, which rules are applicable to you, where to find them, and how to interpret them. Some important questions to consider are:

  • Will you be bound in your design decisions by laws or requirements particular to your location, the owner of the data, or the type of data you will be storing?
  • Will your management understand that the rules may drive certain design or architecture decisions?
  • Will your application developers understand that certain types of code libraries may not work in your regulatory framework, or that particular application features may be mandatory (or forbidden!) and not just nice to have?

So why is it so important for the DBA to have the answers to all of these? In my experience, the DBA is often the only person in the room with a “big-picture” technical view of what is going on. Developers are often focused on their code and on how pretty the user interface will be. System administrators are happy if the server is up and the disk partitions aren’t too full. Network administrators are happy if bandwidth is available. Management is happy when everyone else looks happy. I can say this because at one point or another in my career I have been each of these people. The DBA is often the only one concerned about how an application is coming together on a holistic level – security, performance, usability, availability, etc. – because the database is often the only place where all of those other concerns intersect. As such, the DBA is often the only one paying much attention to the rules governing overall system security. It is critical therefore to be ready to raise concerns when necessary – with management – and be prepared to back them up with support from the appropriate legal sources.

How can I do that?

The short answer is to start reading. I can’t use this post to go into depth on every standard and guideline out there, but in the United States these are just a few of the most common ones that come up when designing data systems:

  • HIPAA, for systems handling health information.
  • Sarbanes-Oxley (SOX) and Gramm-Leach-Bliley (GLBA), for financial and banking data.
  • FISMA and the DoD issuances discussed below, for federal and military systems.

In general if you are administering a system that contains financial, health, or personal data of any kind, or if you are supporting a military system, one or more of these laws or regulations will apply directly to you.

 “When you break the big laws, you do not get freedom; you do not even get anarchy. You get the small laws.” – G. K. Chesterton

There is often a trickle-down effect of these top-level laws as well. For example, most of my security-related experience is related to the military environment, which has a multitude of additional directives, instructions, and guidelines. The high level laws don’t lay down the nitty gritty details of how I should secure my database; rather, they designate people in federal agencies to be responsible for coming up with an information security plan with all the details that I need to know. So from that we get the following rules:

  • The DODD 5144.02 directive establishes a DoD Chief Information Officer (CIO), responsible for cybersecurity.
  • The DODI 8500.01 instruction from the CIO establishes a DoD cybersecurity program to protect and defend DoD information and information technology.
  • The DODI 8510.01 instruction from the CIO establishes a risk management framework for DoD information and information technology as a part of the overall cybersecurity program. This rule specifically charges the Defense Information Systems Agency with creating guidelines called Security Technical Implementation Guides (STIGs) for specific products to ensure a secure configuration.
  • The DODI 5200.40 instruction defines the process by which DoD information systems are accredited and receive permission to operate on DoD networks, which includes a risk management review.
  • The NIST SP 800-53 standard identifies the specific risk management controls for each information system that must be documented as part of the risk management review.
  • The Oracle 11.2g Database STIG – Version 1, Release 1 lays out the specific do’s and don’ts (almost 300 of them!) for the actual Oracle database implementation to satisfy the NIST controls.

Within the STIG, the higher level rules and regulations are called out specifically:

“1.2 Authority
DoD Instruction (DoDI) 8500.01 requires that “all IT that receives, processes, stores, displays, or transmits DoD information will be […] configured […] consistent with applicable DoD cybersecurity policies, standards, and architectures” and tasks that Defense Information Systems Agency (DISA) “develops and maintains control correlation identifiers (CCIs), security requirements guides (SRGs), security technical implementation guides (STIGs), and mobile code risk categories and usage guides that implement and are consistent with DoD cybersecurity policies, standards, architectures, security controls, and validation procedures, with the support of the NSA/CSS, using input from stakeholders, and using automation whenever possible.” This document is provided under the authority of DoDI 8500.01.”

Additionally, the DISA web site FAQ states the following, in regards to a STIG’s applicability to specific DoD systems:

“The most direct path to your answer appears in the DoD Directive 8500.1 as follows:

  • The directive applies to all applications: Section 2 (Applicability and Scope) Paragraph 2.1.2: All DoD-owned, or controlled information systems that receive, process, store, display, or transmit DoD information, regardless of mission assurance category, classification or sensitivity …
  • The directive also states as policy: Section 4 (Policy) paragraph 4.1: IA requirements will be identified and included in the design, acquisition, installation, operation, upgrade, or replacement of all DOD information systems …

Section 4 (Policy) paragraph 4.13: All DoD information systems shall be certified and accredited in accordance with DoD instruction 5200.40 (reference (U)).

Section 4 (Policy) then goes on to address IA-enabled entities as a separate item.

Enclosure 2 (definitions) contains definitions for Application, DoD Information System, and other terms used in this document.

To summarize, the FSO and DISA consensus has always been that the 8500.1 directive applies to all DoD compute assets, unless specifically exempted (such as weapons systems for war fighters)…”

In other words, the STIG is the law of the land within DoD. The “implementation guides” are an integral part of system accreditation, and any deviation from the standard must be documented, justified, and approved by higher authorities if an information system is to receive its authority to operate on the network. In addition to the Database STIG, there are implementation guides for operating systems, web servers, application servers, network devices, desktop software, and even for the entire application development process. It all adds up to hundreds or even thousands of individual controls or requirements that have to be met for every information system in DoD.

How does that help?

So what does all of this have to do with cost-effective implementation? Designing, implementing, and maintaining an information system is not cheap. Sure, hardware and software cost money, but more than that, time is money. A little time spent getting to know the rules up front can save you a lot of time and trouble later on. How much time would be wasted if the people involved in the design and implementation of an information system were ignorant of the rules by which they and the system are required to operate? What if an unfortunate design decision costs the team weeks or months of work because it turns out their plan won’t meet regulatory requirements? What happens if the system fails its accreditation process because it wasn’t properly put together? The entire program could be scrapped! Worse, what happens if a system is implemented without proper accreditation and something happens to the data for which you are responsible? Think those things never happen? Just look back to the HealthCare.gov rollout.

Depending on the environment you are working in, failure to comply with the appropriate laws and regulations could result in a data breach, compromised consumer or government data, the system being shut down, loss of jobs, massive fines, or even jail time! At the very least, thousands of dollars per man hour may be at stake for as long as it takes to correct the problem. So whatever industry you are supporting, get to know the rules for your road as soon as possible. Be aware of changes when they occur and how they might affect your system, and be ready to tell management the truth.

Unfortunately, my personal experience has been that because of my unique perspective I am often forced to play the devil’s advocate and remind people (especially management!) to do or not to do certain things. I have been called obstructionist, negative, and a giant pain in the a$$ more than once over the years. If you stick to your guns on the rules you probably will be too. But if you as the DBA do your job correctly and raise legitimate concerns, then management will have the best information available to make their decisions, and they won’t be able to claim ignorance down the road if something goes sideways. Management can always (and often does) make bad decisions regarding regulatory compliance, but that’s what they are there for – they get “paid the big bucks” to make the decisions and take the heat. If you don’t do your job correctly, and don’t pay attention to the rules, then it might not be management that takes the heat.