A few years ago I started keeping most of my system documentation in APEX websheets. It’s great for keeping things organized and readily available no matter how I’m accessing my work network (on-site, VPN, etc.). A lot of my installation documents have embedded screenshots, and the png files are attached to the websheet as annotations. Everything worked great until we set up a new APEX server and database and I had to migrate my documentation to the new system.
I used the APEX user interface to export the websheet, including all of its annotations; the total size was well over 100MB. What I discovered when I tried to import that file into the new system was that APEX has an internal limitation on the size of an import file. I could upload my file into the APEX repository with no problem, but it would not install. I don’t know whether this is a bug or an undocumented feature, but I was unable to install any websheet file larger than about 10-15MB.
While attempting the installation, I observed the following:
- Disk I/O takes place, primarily in the TEMP tablespace.
- The database session performing the installation hangs indefinitely, showing a steady stream of “direct path read temp” waits.
- The web session in the APEX user interface eventually times out.
- No error appears in any log file: web server, Tomcat, or database.
As a work-around I tried running the installation directly, executing the websheet SQL file from SQL*Plus. This too failed, with the same symptoms. At that point I turned to Oracle Support and learned that the APEX UI import process adds many layers of processing to the installation, and that running the import from SQL*Plus requires some additional, undocumented steps for a file as large as the one I was trying to load.
The APEX import process loads the websheet export file from the client browser into the database as a Binary Large Object, or BLOB. In this form the file cannot be run as a SQL script, so it must be converted into Character Large Object (CLOB) data. Once that step is complete, the character data must be parsed to extract the individual SQL statements, and those SQL statements are finally run using calls to DBMS_SYS_SQL. Once the import file reaches a certain size, or contains a certain number of SQL statements, the parsing process breaks down, generating so many waits that it never finishes. While testing I let the import process run for as long as three days without completing.
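To make the conversion step concrete, here is a rough PL/SQL sketch of what a BLOB-to-CLOB conversion looks like using the standard DBMS_LOB package. This is an illustration of the general technique only, not APEX’s actual internal code; the source of the BLOB is assumed.

```sql
-- Illustration only: the general shape of a BLOB-to-CLOB conversion
-- in PL/SQL. APEX's internal import code is considerably more involved.
declare
  l_blob         blob;   -- assume this holds the uploaded export file
  l_clob         clob;
  l_dest_offset  integer := 1;
  l_src_offset   integer := 1;
  l_lang_context integer := dbms_lob.default_lang_ctx;
  l_warning      integer;
begin
  -- (in reality, l_blob would be selected from the repository table)
  dbms_lob.createtemporary( l_clob, true );
  dbms_lob.converttoclob(
      dest_lob     => l_clob,
      src_blob     => l_blob,
      amount       => dbms_lob.lobmaxsize,
      dest_offset  => l_dest_offset,
      src_offset   => l_src_offset,
      blob_csid    => dbms_lob.default_csid,
      lang_context => l_lang_context,
      warning      => l_warning );
  -- APEX then parses the character data into individual SQL
  -- statements and executes them via DBMS_SYS_SQL.
end;
/
```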
All of that extra overhead goes away if you load the export file using SQL*Plus, but this requires a little undocumented pre-configuration to get a good result. First, the target websheet schema account in the new workspace must be granted the APEX_ADMINISTRATOR_ROLE. Second, the following SQL code should be wrapped around the import file to make sure that the target schema is properly recognized by the export script. If the session isn’t properly configured, the import will fail with ORA-02291: integrity constraint (APEX_050000.WWV_FLOW_WS_APPLICATIONS_FK) violated – parent key not found. Substitute the name of the target workspace where shown.
-- Configure the session
declare
  l_workspace_id number;
begin
  select workspace_id
    into l_workspace_id
    from apex_workspaces
   where workspace = '[TARGET WORKSPACE NAME]';

  apex_application_install.set_workspace_id( l_workspace_id );
  apex_application_install.generate_application_id;
  apex_application_install.generate_offset;
end;
/

-- Call the websheet script
@wsNNN.sql
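For reference, the role grant mentioned above is a single statement run as a DBA; the schema name here is a placeholder for your own target schema.

```sql
-- Run as a DBA user; MY_DOC_SCHEMA is a placeholder for the
-- target websheet schema in the new workspace.
grant APEX_ADMINISTRATOR_ROLE to MY_DOC_SCHEMA;
```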
Using this approach, websheet files of almost any size can be loaded into the database. I was able to get my documentation – including all binary annotations – loaded into the target database in a matter of minutes.