10. Tuning node-oracledb

Some general tuning tips are given in the following sections.

10.1. Tuning Fetch Performance

To tune queries, you can adjust node-oracledb’s internal buffer sizes to improve the speed of fetching rows across the network from the database, and to optimize memory usage. This can reduce round-trips, which helps performance and scalability. Tune “array fetching” with fetchArraySize and tune “row prefetching” with prefetchRows in each connection.execute() or connection.queryStream() call. In node-oracledb Thick mode, the internal buffers allocated for prefetchRows and fetchArraySize are separate, so increasing both settings will require more Node.js process memory. Queries that return LOBs and similar types will never prefetch rows, so the prefetchRows value is ignored in those cases. Note that when using getRows(numRows) with numRows greater than 0, tuning of “array fetching” is based on the numRows value instead of fetchArraySize.

The internal buffer sizes do not affect how or when rows are returned to your application. They do not affect the minimum or maximum number of rows returned by a query.

The difference between row prefetching and array fetching is when the internal buffering occurs. Internally node-oracledb performs separate “execute SQL statement” and “fetch data” steps. Prefetching allows query results to be returned to the application when the acknowledgment of successful statement execution is returned from the database. This means that the subsequent internal “fetch data” operation does not always need to make a round-trip to the database because rows are already buffered in node-oracledb or in the Oracle Client libraries. An overhead of prefetching when using the node-oracledb Thick mode is the need for additional data copies from Oracle Client’s prefetch buffer when fetching the first batch of rows. This cost may outweigh the benefits of using prefetching in some cases.
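Both values can be supplied per query. As an illustrative sketch (the table and column names are hypothetical, and an open connection is assumed), a queryStream() call that streams a large result might raise both buffer sizes:

```javascript
// Sketch only: larger buffers mean fewer round-trips while streaming
// many rows, at the cost of more process memory.
const stream = connection.queryStream(
  `SELECT data FROM large_table`,           // hypothetical table
  [],                                       // no bind values
  { fetchArraySize: 1000, prefetchRows: 1000 }
);
stream.on('data', (row) => console.log(row));
stream.on('error', (err) => console.error(err));
stream.on('end', () => stream.destroy());   // free the stream's resources
```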

10.1.1. Choosing values for fetchArraySize and prefetchRows

The best fetchArraySize and prefetchRows values can be found by experimenting with your application under the expected load of normal application use. The reduction of round-trips may help performance and overall system scalability. The Database Round-trips section below shows how to measure round-trips.

Here are some suggestions for tuning:

  • To tune queries that return an unknown number of rows, estimate the number of rows returned and increase the value of fetchArraySize for best performance, memory and round-trip usage. The default is 100. For example:

    const sql = `SELECT *
                 FROM very_big_table`;
    const binds = [];
    const options = { fetchArraySize: 1000, resultSet: true };
    const result = await connection.execute(sql, binds, options);

In general for this scenario, leave prefetchRows at its default value. If you do change it, then set fetchArraySize to be at least as big. Do not make the sizes unnecessarily large: for example, if your query always returns under 500 rows, then avoid setting fetchArraySize to 10000. Very large values are unlikely to improve performance.

  • If you are fetching a fixed number of rows, set fetchArraySize to the number of expected rows, and set prefetchRows to one greater than this value. Adding one removes the need for a round-trip to check for end-of-fetch. For example, if you are querying 20 rows, perhaps to display a page of data, then set prefetchRows to 21 and fetchArraySize to 20:

    const myoffset = 0;       // do not skip any rows (start at row 1)
    const mymaxnumrows = 20;  // get 20 rows
    const sql = `SELECT last_name
                 FROM employees
                 ORDER BY last_name
                 OFFSET :offset ROWS FETCH NEXT :maxnumrows ROWS ONLY`;
    const binds = { offset: myoffset, maxnumrows: mymaxnumrows };
    const options = { prefetchRows: mymaxnumrows + 1, fetchArraySize: mymaxnumrows };
    const result = await connection.execute(sql, binds, options);

    This will return all rows for the query in one round-trip.

  • If you know that a query returns just one row then set fetchArraySize to 1 to minimize memory usage. The default prefetch value of 2 allows minimal round-trips for single-row queries:

    const sql = `SELECT last_name
                 FROM employees
                 WHERE employee_id = :bv`;
    const binds = [100];
    const options = { fetchArraySize: 1 };
    const result = await connection.execute(sql, binds, options);

There are two cases that will benefit from disabling row prefetching by setting prefetchRows to 0:

  • When a query returns a ResultSet which is then passed into PL/SQL. Set prefetchRows to 0 during the initial query so the first rows from the cursor are not prematurely (and silently) fetched by node-oracledb. This lets all rows be available to the later, receiving PL/SQL code. See REF CURSOR Bind Parameters.

  • When querying a PL/SQL function that uses PIPE ROW to emit rows at intermittent intervals. By default, several rows need to be emitted by the function before node-oracledb can return them to the application. Setting prefetchRows to 0 helps give a consistent flow of data to the application.
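For the first case, a minimal sketch (assuming an open connection; the ships table from the round-trip example is reused here for illustration) is:

```javascript
// Open a ResultSet without prefetching, so no rows are silently
// buffered in node-oracledb before the cursor is handed to PL/SQL.
const result = await connection.execute(
  `SELECT id, name FROM ships`,
  [],
  { resultSet: true, prefetchRows: 0 }
);
// result.resultSet can later be bound as a CURSOR parameter so that
// receiving PL/SQL code sees every row of the query.
```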

Prefetching can also be enabled in an external oraaccess.xml file, which may be useful for tuning an application when modifying its code is not feasible. Setting the size in oraaccess.xml or with the global oracledb.prefetchRows attribute will affect the whole application, so it should not be the first tuning choice.

10.2. Database Round-trips

A round-trip is defined as the travel of a message from node-oracledb to the database and back. Calling each node-oracledb function, or accessing each attribute, will require zero or more round-trips. For example, inserting a simple row involves sending data to the database and getting a success response back. This is a round-trip. Along with tuning an application’s architecture and tuning its SQL statements, a general performance and scalability goal is to minimize round-trips because they impact application performance and overall system scalability.

General tips for reducing round-trips include tuning fetchArraySize and prefetchRows for queries, and batching repeated operations with executeMany(), as shown in the following sections.

10.2.1. Finding the Number of Round-Trips

Oracle’s Automatic Workload Repository (AWR) reports show ‘SQL*Net roundtrips to/from client’ and are useful for finding the overall behavior of a system.

Sometimes you may wish to find the number of round-trips used for a specific application. Snapshots of the V$SESSTAT view taken before and after doing some work can be used for this:

SELECT ss.value, sn.display_name
FROM v$sesstat ss, v$statname sn
WHERE ss.sid = :sid
AND ss.statistic# = sn.statistic#
AND sn.name LIKE '%roundtrip%client%';

Example of finding the number of round-trips

First, find the session id of the current connection:

const r = await connection.execute(`SELECT sys_context('userenv','sid') FROM dual`);
const sid = r.rows[0][0];  // session id

This can be used with V$SESSTAT to find the current number of round-trips. A second connection is used to avoid affecting the count. If your user does not have access to the V$ views, then use a SYSTEM connection:

async function getRT(sid) {
  let systemconnection;
  try {
    systemconnection = await oracledb.getConnection(
       'system', process.env.SYSTEMPASSWORD, 'localhost/orclpdb1');
    const result = await systemconnection.execute(
      `SELECT ss.value
      FROM v$sesstat ss, v$statname sn
      WHERE ss.sid = :sid
      AND ss.statistic# = sn.statistic#
      AND sn.name LIKE '%roundtrip%client%'`,
      [sid]
    );
    return result.rows[0][0];  // the current round-trip count
  } catch (err) {
    console.error(err);
  } finally {
    if (systemconnection) {
      try {
        await systemconnection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}
The main part of the application performs the “work” and calls getRT() to calculate the number of round-trips the work takes:

let before, after;

// Multiple execute() calls with explicit commit()

before = await getRT(sid);

const bindArray = [
  [1, 'Victory'],
  [2, 'Beagle'],
];
for (const binds of bindArray) {
  await connection.execute(
    `INSERT INTO ships (id, name) values (:i, :n)`,
    binds
  );
}
await connection.commit();

after = await getRT(sid);
console.log('Round-trips required: ' + (after - before));   // 3 round-trips

// executeMany() with autoCommit

before = await getRT(sid);

const options = {
  autoCommit: true,
  bindDefs: [
    { type: oracledb.NUMBER },
    { type: oracledb.STRING, maxSize: 20 }
  ]
};

await connection.executeMany(
  `INSERT INTO ships (id, name) values (:1, :2)`,
  [
    [1, 'Victory'],
    [2, 'Beagle'],
  ],
  options
);

after = await getRT(sid);
console.log('Round-trips required: ' + (after - before));   // 1 round-trip

10.3. Statement Caching

Node-oracledb’s execute(), executeMany(), getStatementInfo(), and queryStream() methods use statement caching to make re-execution of statements efficient. Statement caching lets cursors be used without re-parsing the statement. Each cached statement will retain its cursor. Statement caching also reduces metadata transfer costs between node-oracledb and the database. Performance and scalability are improved.

The node-oracledb Thick mode uses Oracle Call Interface statement cache, whereas the Thin mode uses natively implemented statement caching.

Each non-pooled connection and each session in the connection pool has its own cache of statements with a default size of 30. The cache key is the statement string. This means a single cache entry can be reused when a statement is re-executed with different bind variable values.
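As an illustration (reusing the ships table from the round-trip example, and assuming an open connection), these two executions differ only in bind values, so they share one cache entry:

```javascript
// Identical SQL text: parsed on first execution, then reused from the
// connection's statement cache for the second execution.
await connection.execute(`SELECT name FROM ships WHERE id = :id`, [1]);
await connection.execute(`SELECT name FROM ships WHERE id = :id`, [2]);
```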

The statement cache removes the need for the separate ‘prepare’ or ‘parse’ methods which are sometimes seen in other Oracle APIs: there is no separate method in node-oracledb.

10.3.1. Setting the Statement Cache

The statement cache size can be set globally with oracledb.stmtCacheSize:

oracledb.stmtCacheSize = 40;

The value can be overridden in an oracledb.getConnection() call, or when creating a pool with oracledb.createPool(). For example:

await oracledb.createPool({
  user              : "hr",
  password          : mypw,               // mypw contains the hr schema password
  connectString     : "localhost/FREEPDB1",
  stmtCacheSize     : 50
});

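Similarly, a sketch of overriding the cache size for a standalone connection (credentials as in the pool example above) is:

```javascript
const connection = await oracledb.getConnection({
  user          : "hr",
  password      : mypw,               // mypw contains the hr schema password
  connectString : "localhost/FREEPDB1",
  stmtCacheSize : 40                  // overrides oracledb.stmtCacheSize
});
```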
When node-oracledb Thick mode uses Oracle Client 21 (or later), changing the cache size with pool.reconfigure() does not immediately affect connections previously acquired and currently in use. When those connections are subsequently released to the pool and re-acquired, they will then use the new value. When the Thick mode uses Oracle Client prior to version 21, changing the pool’s statement cache size has no effect on connections that already exist in the pool but will affect new connections that are subsequently created, for example when the pool grows.
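A cache size change of the kind described above might be made as follows (a sketch, assuming pool is an existing pool object):

```javascript
// Change the per-connection statement cache size for the pool. With
// Oracle Client 21+, connections currently in use only pick up the new
// value after being released to the pool and re-acquired.
await pool.reconfigure({ stmtCacheSize: 50 });
```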

10.3.2. Tuning the Statement Cache

In general, set the statement cache to the size of the working set of statements being executed by the application. SODA internally makes SQL calls, so tuning the cache is also beneficial for SODA applications.

In node-oracledb Thick mode with Oracle Client libraries 12c, or later, the statement cache size can be automatically tuned with the Oracle Client Configuration oraaccess.xml file.

For manual tuning use views like V$SYSSTAT:

SELECT value FROM V$SYSSTAT WHERE name = 'parse count (total)';

Find the value before and after running application load to give the number of statement parses during the load test. Alter the statement cache size and repeat the test until you find a minimal number of parses.

If you have Automatic Workload Repository (AWR) reports you can monitor general application load and the “bytes sent via SQL*Net to client” values. The latter statistic should benefit from not shipping statement metadata to node-oracledb. Adjust the statement cache size and re-run the test to find the best cache size.

10.3.3. Disabling the Statement Cache

Individual statements can be excluded from the statement cache by setting the attribute keepInStmtCache to false. This will prevent a rarely executed statement from flushing a potentially more frequently executed one from a full cache. For example, if a statement will only ever be executed once:

result = await connection.execute(
  `SELECT v FROM t WHERE k = 123`,
  { keepInStmtCache: false }
);

Statement caching can be disabled completely by setting the cache size to 0:

oracledb.stmtCacheSize = 0;

Disabling the cache may be beneficial when the quantity or order of statements causes cache entries to be flushed before they get a chance to be reused. For example, if there are more distinct statements than cache slots and the order of statement execution causes older statements to be flushed from the cache before they are re-executed.

Disabling the statement cache may also be helpful in test and development environments. The statement cache can become invalid if connections remain open and database schema objects are recreated. This can also happen when a connection uses identical query text with different fetchAsString or fetchInfo data types. Applications can receive errors such as ORA-3106. After a statement execution error is returned once to the application, node-oracledb automatically drops that statement from the cache. This lets subsequent re-executions of the statement on that connection succeed.

10.4. Client Result Caching (CRC)

Node-oracledb applications can use Oracle Database’s Client Result Cache (CRC). This enables client-side caching of SQL query (SELECT statement) results in client memory for immediate use when the same query is re-executed. This is useful for reducing the cost of queries for small, mostly static, lookup tables, such as for postal codes. CRC reduces network round-trips and also reduces database server CPU usage.


In this release, Client Result Caching is only supported in the node-oracledb Thick mode. See Enabling node-oracledb Thick Mode.

The cache is at the application process level. Access and invalidation are managed by the Oracle Client libraries. This removes the need for extra application logic, or external utilities, to implement a cache. Pooled connections can use CRC. Repeated statement execution on a standalone connection will also use it, but sequences of calls using standalone connections like oracledb.getConnection({user: ...}) / execute() / connection.close() will not. CRC requires statement caching to be enabled, which is true by default.

10.4.1. Configuring CRC

Client Result Caching can be enabled by setting the database parameters CLIENT_RESULT_CACHE_SIZE and CLIENT_RESULT_CACHE_LAG, for example:

SQL> ALTER SYSTEM SET CLIENT_RESULT_CACHE_LAG = 60000 SCOPE=SPFILE;
SQL> ALTER SYSTEM SET CLIENT_RESULT_CACHE_SIZE = 64K SCOPE=SPFILE;

Then restart the database:

SQL> STARTUP FORCE

or restart the pluggable database, for example:

SQL> ALTER PLUGGABLE DATABASE ORCLPDB1 CLOSE;
SQL> ALTER PLUGGABLE DATABASE ORCLPDB1 OPEN;

Once CRC has been enabled in the database, the values used by the cache can optionally be tuned in an oraaccess.xml file, see Client Configuration Parameters. Also see Tuning the Result Cache, which discusses CRC and also the Server Result Cache.

10.4.2. Using CRC

Tables can be created, or altered, so queries use CRC. This allows applications to use CRC without needing modification. For example:

SQL> ALTER TABLE locations RESULT_CACHE (MODE FORCE);

Alternatively, hints can be used in SQL statements. For example:

SELECT /*+ result_cache */ postal_code FROM locations

10.4.3. Verifying CRC

To verify that CRC is working, you can check the number of executions of your query in V$SQLAREA. When CRC is enabled in the database, the number of statement executions is reduced because the statement is not sent to the database unnecessarily.

// Run some load
const q = `SELECT postal_code FROM locations`;
const qc = `SELECT /*+ RESULT_CACHE */ postal_code FROM locations`;

for (let i = 0; i < 100; i++) {
  connection = await oracledb.getConnection();  // a pooled connection, so CRC can be used
  result = await connection.execute(q);
  await connection.close();                     // release back to the pool
}

for (let i = 0; i < 100; i++) {
  connection = await oracledb.getConnection();
  result = await connection.execute(qc);
  await connection.close();
}

// Compare behaviors (using a connection as SYSTEM)
const m = `SELECT executions FROM v$sqlarea WHERE sql_text = :q1`;

result = await systemconn.execute(m, [q]);
console.log('No hint:', result.rows[0][0], 'executions');

result = await systemconn.execute(m, [qc]);
console.log('CRC hint:', result.rows[0][0], 'executions');

When CRC is enabled, output will be like:

No hint: 100 executions
CRC hint: 1 executions

If CRC is not enabled, output will be like:

No hint: 100 executions
CRC hint: 100 executions