At Amdocs, I worked on the CRM platform powering JCOM — Japan's largest cable telecom operator with over 10 million subscribers. When critical subscriber lookups started taking 12+ seconds, I was tasked with diagnosing and fixing the performance issues across 20+ high-traffic queries in a live production environment.

Here's the full story — from diagnosis to implementation to measurable results.

The Challenge

JCOM's CRM platform handled everything from billing to service activation for 10M+ subscribers. The system processed 50,000+ transactions daily through SOAP-based integrations with external BSS/OSS systems. As subscriber data volumes grew beyond initial capacity planning, query performance degraded significantly. Critical subscriber lookup operations that once completed in under 2 seconds were now taking 12+ seconds, directly impacting customer service response times and agent productivity.

The database team had already tried basic optimizations — adding single-column indexes and increasing allocated memory — but these changes had minimal impact. The root causes were deeper than surface-level tuning could address.

The Goal

My objective was clear but constrained: optimize the 20+ most-trafficked Oracle SQL queries and PL/SQL stored procedures to bring execution times back to acceptable levels, all without disrupting the live production system that thousands of customer service agents relied on daily. Every change needed to be thoroughly tested in staging before deployment, and rollback procedures had to be in place for each optimization.

The Approach

Step 1: Diagnosing with AWR Reports

Rather than guessing, I started with Oracle's Automatic Workload Repository (AWR) reports. AWR captures database performance statistics at regular intervals, providing a comprehensive view of resource consumption. The data revealed three distinct patterns causing the slowdowns.
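For readers unfamiliar with AWR, a report between two snapshots can be pulled directly with the `DBMS_WORKLOAD_REPOSITORY` package. The DBID, instance number, and snapshot IDs below are placeholders; in practice they come from `V$DATABASE` and `DBA_HIST_SNAPSHOT`:

```sql
-- Illustrative: generate a text-format AWR report between two snapshots.
-- All four values are placeholders for this example.
SELECT output
FROM   TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(
         l_dbid     => 1234567890,  -- from V$DATABASE.DBID
         l_inst_num => 1,
         l_bid      => 5000,        -- begin snapshot ID
         l_eid      => 5001));      -- end snapshot ID
```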

First, full table scans were happening on subscriber tables containing 10M+ rows — queries weren't leveraging available indexes effectively. Second, the optimizer was choosing nested loop joins in situations where hash joins would be significantly more efficient for the data volumes involved. Third, several PL/SQL stored procedures were processing data row-by-row instead of using set-based operations.
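Each of these patterns shows up clearly in an execution plan. As a sketch (the table and column names here are illustrative, not the actual JCOM schema), a full table scan on a subscriber lookup can be confirmed like this:

```sql
-- Capture the optimizer's plan for a representative lookup
EXPLAIN PLAN FOR
  SELECT subscriber_id, subscription_status
  FROM   subscribers
  WHERE  region_id = :region
  AND    subscription_status = 'ACTIVE';

-- Display it; a "TABLE ACCESS FULL" line on SUBSCRIBERS
-- confirms the index isn't being used
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```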

Step 2: Composite Index Strategy

The biggest performance wins came from rethinking the indexing strategy. Single-column indexes existed on most frequently queried columns, but the actual query patterns filtered on multiple columns simultaneously. For example, a common query filtering on both region_id and subscription_status required Oracle to scan two separate indexes and merge the results.

I created composite indexes aligned with the actual query filter combinations, paying careful attention to column order: columns used in equality predicates lead, and among those, the most selective column (highest cardinality) goes first. In this case, region_id had far more distinct values than subscription_status:

CREATE INDEX idx_subscriber_region_status
ON subscribers(region_id, subscription_status)
TABLESPACE idx_ts
ONLINE;  -- build without blocking DML on the live table
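The cardinality comparison behind that column ordering is easy to verify directly. A quick sanity query (table and column names follow the example above):

```sql
-- Compare distinct-value counts to justify the index column order
SELECT COUNT(DISTINCT region_id)           AS region_cardinality,
       COUNT(DISTINCT subscription_status) AS status_cardinality
FROM   subscribers;
```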

Step 3: PL/SQL Refactoring

Several stored procedures used cursor-based row-by-row processing — what Oracle developers call the "slow by slow" anti-pattern. One procedure updating subscriber statuses was looping through millions of rows individually when a single UPDATE with a WHERE clause could accomplish the same operation in seconds.
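The set-based rewrite is conceptually simple. As a sketch with illustrative column names and predicates (not the actual production logic), the millions-of-rows loop collapses to a single statement:

```sql
-- One set-based UPDATE replaces a cursor loop over millions of rows.
-- Columns and business rule here are illustrative.
UPDATE subscribers
SET    subscription_status = 'SUSPENDED',
       status_changed_at   = SYSDATE
WHERE  subscription_status = 'PAST_DUE'
AND    last_payment_date   < ADD_MONTHS(SYSDATE, -2);
```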

Where pure set-based operations weren't feasible (due to complex business logic per row), I refactored to use FORALL and BULK COLLECT — Oracle's bulk processing features that drastically reduce context switches between the PL/SQL and SQL engines.
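A minimal sketch of that bulk pattern, assuming an illustrative schema (the real procedures carried more per-row logic than shown here):

```sql
DECLARE
  TYPE t_ids IS TABLE OF subscribers.subscriber_id%TYPE;
  l_ids t_ids;
  CURSOR c IS
    SELECT subscriber_id
    FROM   subscribers
    WHERE  subscription_status = 'PENDING_REVIEW';
BEGIN
  OPEN c;
  LOOP
    -- Fetch in batches instead of one row at a time
    FETCH c BULK COLLECT INTO l_ids LIMIT 1000;
    EXIT WHEN l_ids.COUNT = 0;

    -- (complex per-row business logic would run here)

    -- One context switch per batch instead of one per row
    FORALL i IN 1 .. l_ids.COUNT
      UPDATE subscribers
      SET    subscription_status = 'ACTIVE'
      WHERE  subscriber_id = l_ids(i);
  END LOOP;
  CLOSE c;
  COMMIT;
END;
/
```

The LIMIT clause caps memory per batch; 1000 is a conventional starting point, not a tuned value.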

Step 4: Result Caching

Some queries executed identically thousands of times per day — subscriber plan lookups being the prime example. For these, I implemented Oracle's Result Cache, which stores query results in shared memory:

SELECT /*+ RESULT_CACHE */ plan_name, monthly_rate, data_limit
FROM subscription_plans
WHERE plan_status = 'ACTIVE';

For data that changed more frequently, I set up materialized views with fast refresh to pre-compute expensive join operations, eliminating the need to recalculate them on every query.
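A sketch of such a materialized view, with illustrative table names. Fast refresh on a join requires materialized view logs on each base table and the base-table ROWIDs in the select list:

```sql
-- Logs capture changes so the MV can refresh incrementally
CREATE MATERIALIZED VIEW LOG ON subscribers WITH ROWID;
CREATE MATERIALIZED VIEW LOG ON subscription_plans WITH ROWID;

-- Pre-compute the expensive subscriber/plan join
CREATE MATERIALIZED VIEW mv_subscriber_plan
REFRESH FAST ON COMMIT
AS
SELECT s.ROWID AS s_rid, p.ROWID AS p_rid,  -- required for fast refresh
       s.subscriber_id, s.region_id,
       p.plan_name, p.monthly_rate
FROM   subscribers s, subscription_plans p
WHERE  s.plan_id = p.plan_id;
```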

The Impact

After implementing these optimizations across all 20+ targeted queries, the measurable outcomes were significant. Average query execution times dropped from 12 seconds to 4.8 seconds — a 60% improvement. The work was done incrementally over several weeks, with each change tested in staging environments before production deployment. Throughout the entire process, 99.7% uptime was maintained with zero service disruptions.

Beyond the immediate performance gains, the optimizations reduced load on the database server, which had downstream benefits for other applications sharing the same Oracle instance. The composite indexing strategy I developed became a template that the team adopted for future query optimization work.

Key Takeaways

- Measure before you optimize. AWR reports, execution plans, and wait event analysis tell you exactly where time is being spent; without them, you're guessing.
- Index for your actual query patterns, not for individual columns.
- Think in sets, not loops. Row-by-row processing in PL/SQL is almost always the wrong approach at scale.
- Cache what doesn't change. Result Cache and materialized views eliminate redundant computation entirely.

Database optimization isn't glamorous work, but when your queries serve 10 million subscribers, every millisecond saved translates directly to better user experience and lower infrastructure costs.

Questions about database optimization? Reach me at [email protected]